Containers live in Linux network namespaces, each with its own interfaces, routes, and socket tables, logically isolated from siblings on the same host unless you deliberately wire them together.
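A minimal way to see that isolation concretely, assuming a Linux host and stdlib Python: compare the network-namespace links under /proc for two processes. The PID you pass in is whatever container process you pick (looked up separately, e.g. with ps on the host); matching strings mean a shared network stack, differing ones mean separate interfaces, routes, and socket tables.

```python
import os

def net_namespace_of(pid="self"):
    """Return the network-namespace identity (e.g. 'net:[4026531992]') for a PID.

    Two processes share a network stack if and only if these strings match.
    """
    return os.readlink(f"/proc/{pid}/ns/net")

if __name__ == "__main__":
    # Run on the host (likely as root). Compare this process with pid 1, or
    # with a container process whose host-side PID you looked up separately.
    print("this process:", net_namespace_of())
    print("pid 1       :", net_namespace_of(1))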
On a laptop, docker compose usually drops services on a user-defined bridge. Each container gets a private IP on that bridge’s subnet; Compose also wires in an embedded DNS resolver so containers can resolve service names (api, postgres) without you hard-coding IPs.
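To illustrate why name resolution matters more than the addresses themselves, here is a hedged sketch you could run inside a container on a Compose network. The service name postgres and its port are assumptions standing in for a hypothetical Compose file; the point is that the container's embedded resolver (typically at 127.0.0.11 on user-defined networks) answers for service names, so the client never pins an IP.

```python
import socket

# Hypothetical Compose service name and port; adjust to your compose file.
SERVICE, PORT = "postgres", 5432

# Resolve via the container's configured resolver (Docker's embedded DNS on
# user-defined networks), exactly as a client library would, so the bridge IP
# can change across restarts without breaking anything.
family, socktype, proto, _, sockaddr = socket.getaddrinfo(
    SERVICE, PORT, type=socket.SOCK_STREAM
)[0]
print(f"{SERVICE} currently resolves to {sockaddr[0]}")

# Plain TCP reachability check, not a protocol handshake.
with socket.socket(family, socktype, proto) as s:
    s.settimeout(2)
    s.connect(sockaddr)
    print("TCP connect to", sockaddr, "succeeded")
```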
Publishing with -p 8080:80 does not “open a port abstractly”; it installs iptables or nftables NAT rules so traffic arriving on the host’s 8080/tcp lands on container 80/tcp. Blind 0.0.0.0 binds are how weekend demos quietly become perimeter surprises.
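A quick way to catch the accidental wide-open case, sketched under the assumption that a demo is published on host port 8080: probe the port on loopback and on a non-loopback host address. The address-guessing line is crude (gethostbyname of the hostname is not always a real LAN address), but the comparison is the point.

```python
import socket

def tcp_open(host, port, timeout=1.0):
    """True if a TCP connect to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

PORT = 8080  # hypothetical published port: -p 8080:80 binds 0.0.0.0 by default,
             # while -p 127.0.0.1:8080:80 keeps it loopback-only
lan_ip = socket.gethostbyname(socket.gethostname())  # rough guess at a non-loopback address

print("loopback :", tcp_open("127.0.0.1", PORT))
print(f"{lan_ip:<9}:", tcp_open(lan_ip, PORT))
# True on both lines means anyone on the network segment can reach the demo.
```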
Overlays vs single-host bridges
Kubernetes cluster networking overlays multiple nodes: each Pod gets a cluster-routable IP, Services get virtual cluster IPs, and kube-proxy (or a newer dataplane) steers traffic for those VIPs toward healthy endpoints. Symptoms that look “random” (Connection refused, half-working internal HTTP) usually decode to stale endpoints, mis-specified NetworkPolicy egress rules, DNS search-path quirks, or CNI upgrades, not mystical container entropy.
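One of those failure classes, DNS search paths, is cheap to check from inside the pod itself. A minimal sketch, assuming it runs in a pod and that the Service and namespace names (api, default) are placeholders for your own: compare how the short name and the FQDN resolve, alongside the search domains written into resolv.conf.

```python
import socket

def resolve(name):
    """Return the set of addresses a name resolves to, or the failure reason."""
    try:
        return sorted({ai[4][0] for ai in socket.getaddrinfo(name, None)})
    except socket.gaierror as exc:
        return f"lookup failed: {exc}"

# Hypothetical Service and namespace; substitute your own.
short = "api"
fqdn = "api.default.svc.cluster.local"

# The search domains below are what expand "api" into the FQDN; if the two
# lookups disagree, suspect search paths or namespaces before the application.
print(open("/etc/resolv.conf").read())
print(short, "->", resolve(short))
print(fqdn, "->", resolve(fqdn))
```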
MTU surprises
VXLAN/Geneve encapsulation eats header bytes (roughly 50 for VXLAN over IPv4), and cross-cloud tunnels sometimes surface that as silent TLS flakiness or fragmentation weirdness until you reconcile path MTU expectations end to end. Teams that win operationally keep a checklist: verify interface MTUs and clamp MSS thoughtfully before blaming applications.
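A sketch of one checklist item, under the assumptions that you are on Linux, can reach some peer address across the tunnel, and only care about the kernel's current view of the path MTU (interface MTU plus anything learned from ICMP), not an active end-to-end probe: set the DF bit on a UDP socket and binary-search the largest payload the kernel will emit unfragmented.

```python
import errno
import socket

# Fall back to the raw Linux constants in case the socket module doesn't expose them.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)

def max_df_payload(host, port=33434, lo=1000, hi=1472):
    """Largest UDP payload (bytes) the kernel will send toward host with DF set.

    Payload plus 28 bytes of IPv4/UDP headers approximates the path MTU the
    kernel currently believes in; ~1472 dropping to ~1422 is the classic
    VXLAN tax on a 1500-byte link.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)  # set DF
    s.connect((host, port))
    best = 0
    while lo <= hi:                      # binary search over payload sizes
        mid = (lo + hi) // 2
        try:
            s.send(b"\x00" * mid)        # EMSGSIZE means "won't fit unfragmented"
            best, lo = mid, mid + 1
        except OSError as exc:
            if exc.errno != errno.EMSGSIZE:
                raise
            hi = mid - 1
    s.close()
    return best

if __name__ == "__main__":
    payload = max_df_payload("10.0.0.12")  # hypothetical peer across the tunnel
    print("max unfragmented payload:", payload, "=> path MTU ~", payload + 28)
```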
Debug like a principal engineer
- Verify DNS inside the client container versus on the host; nsswitch configuration and stub resolvers diverge surprisingly.
- tcpdump, or an ephemeral netshoot pod, beats restarting containers ritually and hoping for enlightenment.
- Log SYN/RST timelines separately from application logs (a sketch follows this list).
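Here is the sketch referenced above, a minimal and assumption-laden one (hypothetical targets, stdlib Python): it records one timestamped line per connect attempt so you can line up connection-level outcomes (refused usually means a RST came back, timeout usually means the SYN was dropped) against what the application logs claim.

```python
import errno
import socket
import time

def connect_probe(host, port, timeout=3.0):
    """Print one timestamped line: connected / refused (RST) / timeout (drop)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            outcome = "connected"
    except ConnectionRefusedError:
        outcome = "refused (RST came back: wrong port, dead backend, or DNAT to nowhere)"
    except socket.timeout:
        outcome = "timeout (SYN likely dropped: policy, routing, or firewall)"
    except OSError as exc:
        outcome = f"error ({errno.errorcode.get(exc.errno, exc)})"
    elapsed_ms = (time.monotonic() - start) * 1000
    print(f"{time.strftime('%H:%M:%S')} {host}:{port} {outcome} [{elapsed_ms:.0f} ms]")

if __name__ == "__main__":
    # Hypothetical targets; run the same probes from the client container, the
    # node, and an outside host, then compare the timelines side by side.
    for target in [("api", 8080), ("10.96.0.10", 53)]:
        connect_probe(*target)
```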
If you can narrate bridging, NAT, DNS, and policy layers calmly in an outage, you have already signaled seniority beyond résumé keyword density.