Physical Layer · Audit 2026-04-13T23:19:05+10:00

Infrastructure

9-node Talos Linux dual-cluster (SOC: 3 ctrl + 3 worker · SEC: 3 ctrl), running K8s v1.35.2 · Talos v1.12.6 · containerd 2.1.6. Cilium 1.19.2 CNI, MetalLB 0.15.3 load balancer, Longhorn 1.11.1 storage.

Live Cluster State

Kubernetes Nodes — SOC Cluster · soc1–soc6

All nodes Ready. SOC: 3 control-plane (etcd · apiserver · scheduler) + 3 worker nodes (Longhorn · workloads). SEC: 3 control-plane-only nodes (all roles). Total: 130 + 92 = 222 pods across both clusters.

soc1
Control Plane
IP: 10.0.x.x
OS: Talos v1.12.6
K8s: v1.35.2
Kernel: 6.18.18-talos
Runtime: containerd 2.1.6
Ready · 9h uptime
soc2
Control Plane
IP: 10.0.x.x
OS: Talos v1.12.6
K8s: v1.35.2
Kernel: 6.18.18-talos
Runtime: containerd 2.1.6
Ready · 9h uptime
soc3
Control Plane
IP: 10.0.x.x
OS: Talos v1.12.6
K8s: v1.35.2
Kernel: 6.18.18-talos
Runtime: containerd 2.1.6
Ready · 9h uptime
soc4
Worker
IP: 10.0.x.x
OS: Talos v1.12.6
K8s: v1.35.2
Kernel: 6.18.18-talos
Workloads: ArgoCD · Longhorn
Ready · 9h uptime
soc5
Worker
IP: 10.0.x.x
OS: Talos v1.12.6
K8s: v1.35.2
Kernel: 6.18.18-talos
Workloads: ArgoCD · Ingress · Longhorn
Ready · 9h uptime
soc6
Worker
IP: 10.0.x.x
OS: Talos v1.12.6
K8s: v1.35.2
Kernel: 6.18.18-talos
Workloads: ArgoCD · Longhorn · cert-manager
Ready · 9h uptime
Live Cluster State

Kubernetes Nodes — SEC Cluster · sec1–sec3

All nodes Ready. 3 control-plane-only nodes — no dedicated workers. Hosts OpenCTI 6.x + full CTI stack (Elasticsearch · RabbitMQ · Redis · MinIO). ClusterMesh peered with SOC. 92 pods · 90 Running · 1 Pending.

sec1
Control Plane
IP: 172.16.x.x
OS: Talos v1.12.6
K8s: v1.35.2
RAM: 32 GiB total · 9.2 GiB used
Workloads: OpenCTI · Connectors
Ready · VIP 172.16.x.x
sec2
Control Plane
IP: 172.16.x.x
OS: Talos v1.12.6
K8s: v1.35.2
RAM: 32 GiB total · 14 GiB used
Workloads: Elasticsearch · RabbitMQ
Ready · highest memory node
sec3
Control Plane
IP: 172.16.x.x
OS: Talos v1.12.6
K8s: v1.35.2
RAM: 32 GiB total · 11.7 GiB used
Workloads: Redis · MinIO · Wazuh
Ready
// SEC CLUSTER — PODS PER NAMESPACE
longhorn-system · 27
kube-system · 25
opencti · 16
monitoring · 8
metallb-system · 4
wazuh · otel · cert-manager · 3 each
ingress-nginx · 2
default · 1
⚠ PENDING
kube-system/clustermesh-apiserver-generate-certs — CronJob stuck pending (0 restarts). ClusterMesh apiserver itself is Running (40 restarts).
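As a sanity check, the per-namespace counts above should reconcile with the reported 92-pod SEC total. A small shell sketch over the audited figures (static data copied from this report, not live kubectl output):

```shell
# Sum the SEC per-namespace pod counts from this audit and compare
# against the reported total of 92 (91 Running + 1 Pending).
total=$(awk '{ sum += $2 } END { print sum }' <<'EOF'
longhorn-system 27
kube-system 25
opencti 16
monitoring 8
metallb-system 4
wazuh 3
otel 3
cert-manager 3
ingress-nginx 2
default 1
EOF
)
echo "SEC pod total: $total"   # prints: SEC pod total: 92
```

The same one-liner works against live data by replacing the heredoc with `kubectl get pods -A --no-headers | awk '{print $1}' | sort | uniq -c`-style output.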
Ingress Resources · kubectl get ingress -A

Active Ingress Endpoints — SOC Cluster

SOC ingress is exposed via MetalLB VIP 172.16.x.x. The SEC cluster uses its own VIP (172.16.x.x), routing cti.onelabs.work (OpenCTI).

Namespace · Ingress Name · Host / URL · Class · TLS Secret · LB IP · Ports · Age
argocd argocd-server argo.onelabs.work nginx wildcard-onelabs-tls 172.16.x.x 80, 443 36m
kube-system hubble-ui-ingress hub.onelabs.work nginx wildcard-onelabs-tls 172.16.x.x 80, 443 3h48m
longhorn-system longhorn-ui stog.onelabs.work nginx wildcard-onelabs-tls 172.16.x.x 80, 443 6h19m
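The entries above all follow one pattern: nginx class, the shared wildcard-onelabs-tls secret, and the MetalLB VIP. A representative manifest sketch for the argocd-server entry (backend service name and port are assumptions, not confirmed by this audit):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - argo.onelabs.work
      secretName: wildcard-onelabs-tls   # shared wildcard cert, per the table above
  rules:
    - host: argo.onelabs.work
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server     # assumed backend Service name
                port:
                  number: 443           # assumed; verify with kubectl -n argocd get svc
```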
helm list -A

Helm Releases

// Installed Charts
Name · Namespace · Chart · App Version · Revision · Status · Updated
argocd · argocd · argo-cd-9.5.0 · v3.3.6 · 1 · deployed · 2026-04-13 22:42:28
cert-manager · cert-manager · cert-manager-v1.20.1 · v1.20.1 · 2 · deployed · 2026-04-13 16:54:12
cilium · kube-system · cilium-1.19.2 · 1.19.2 · 5 · deployed · 2026-04-13 19:46:23
ingress-nginx · ingress-nginx · ingress-nginx-4.15.1 · 1.15.1 · 1 · deployed · 2026-04-13 17:10:31
longhorn · longhorn-system · longhorn-1.11.1 · v1.11.1 · 4 · deployed · 2026-04-13 17:49:25
metallb · metallb-system · metallb-0.15.3 · v0.15.3 · 1 · deployed · 2026-04-13 17:02:22
kubectl get ns

Namespaces — SOC Cluster · 130 pods

argocd
Active
15 pods
cert-manager
Active
Age: 6h50m
cilium-secrets
Active
Age: 9h
default
Active
Age: 9h
ingress-nginx
Active
1 pod
kube-system
Active
37 pods
monitoring
Active
28 pods
wazuh
Active
6 pods
otel
Active
6 pods
minio
Active
1 pod
longhorn-system
Active
27 pods
metallb-system
Active
6 pods
⚠ Audit Finding

Workloads NOT Using Internal Registry

These workloads pull from external registries (quay.io, ecr-public, registry.k8s.io) instead of regis.onelabs.work. Consider mirroring to internal registry for air-gap compliance.

Kind · Namespace · Workload · External Image · Risk
Deployment · argocd · argocd-redis-ha-haproxy · ecr-public.aws.com/…haproxy:3.0.8-alpine · ⚠ EXTERNAL
StatefulSet · argocd · argocd-redis-ha-server · ecr-public.aws.com/…redis:8.2.3-alpine · ⚠ EXTERNAL
DaemonSet · kube-system · cilium · quay.io/cilium/cilium:v1.19.2 · CNI CORE
Deployment · kube-system · cilium-operator · quay.io/cilium/operator-generic:v1.19.2 · CNI CORE
DaemonSet · kube-system · cilium-envoy · quay.io/cilium/cilium-envoy:v1.35.9… · CNI CORE
Deployment · kube-system · coredns · registry.k8s.io/coredns/coredns:v1.13.2 · K8S CORE
Deployment · kube-system · hubble-relay · quay.io/cilium/hubble-relay:v1.19.2 · ⚠ EXTERNAL
Deployment · kube-system · hubble-ui · quay.io/cilium/hubble-ui:v0.13.3 · ⚠ EXTERNAL
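One mitigation sketch for the finding above: on Talos, registry redirection is done per node via machine.registries.mirrors, so images keep their upstream names in manifests while pulls go through the internal registry. A minimal machine-config patch, assuming regis.onelabs.work serves a pull-through mirror keyed by upstream hostname (the /v2/<upstream> path layout is an assumption about that registry's setup):

```yaml
# Talos machine-config patch (apply with talosctl patch machineconfig);
# mirror endpoints and path layout are assumptions to verify against
# how regis.onelabs.work is actually configured.
machine:
  registries:
    mirrors:
      quay.io:
        endpoints:
          - https://regis.onelabs.work/v2/quay.io
        overridePath: true
      registry.k8s.io:
        endpoints:
          - https://regis.onelabs.work/v2/registry.k8s.io
        overridePath: true
```

With this in place the workloads above need no manifest changes; the audit check then becomes verifying node-level pull logs rather than image references.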
Network Path

Traffic Ingress Chain

Internet (public)
→ HTTPS → Cloudflare (DDoS · WAF)
→ WG tunnel → WireGuard VPS (154.26.x.x)
→ proxy → OPNsense HAProxy (L7 · SSL)
→ MetalLB LB (172.16.x.x)
→ NodePort → ingress-nginx (v1.15.1)
→ ClusterIP → Service Pod (Cilium eBPF)
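The OPNsense HAProxy hop above terminates TLS at L7 before handing traffic to the MetalLB VIP. A raw-config sketch of that one hop (OPNsense actually renders haproxy.cfg from its GUI; frontend/backend names and the certificate path here are hypothetical, and the VIP stays masked as in this report):

```
frontend wan_https
    bind :443 ssl crt /usr/local/etc/haproxy/wildcard-onelabs.pem  # hypothetical cert path
    mode http
    default_backend soc_metallb

backend soc_metallb
    mode http
    # MetalLB VIP, masked as elsewhere in this audit
    server soc-ingress 172.16.x.x:80 check
```

Because TLS is offloaded here, ingress-nginx downstream sees plain HTTP unless re-encryption is configured on the backend line (ssl verify options omitted from this sketch).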
Standalone VMs

Services Outside Kubernetes

adone.onelabs.work
172.16.x.x · Windows Server
Active Directory · Root CA (saza-AD-CA) · DNS
vault + sso.onelabs.work
172.16.x.x · Linux
Vault HA (Raft) · Authentik SSO · PKI Intermediate CA
c2c.onelabs.work
External VM · Internet-facing
Caldera C2 · Red Team · Custom Docker + CA
base.saza.com.au
Bastion · Management
talosctl · kubectl · onelabs-ops.sh v2.0