IPv6 in Docker and Kubernetes: Container Networking Guide
Configure IPv6 for containerized applications. Covers Docker daemon settings, Kubernetes dual-stack, CNI plugins, and service exposure.
Container networking defaults to IPv4. That's a problem if you're running production workloads in 2024.
TL;DR - Quick Summary
Key Points:
- Docker and Kubernetes default to IPv4-only; IPv6 requires explicit configuration
- Docker needs daemon.json changes and custom networks with the --ipv6 flag
- Kubernetes 1.23+ supports stable dual-stack with pod and service IPv6
- CNI plugins (Calico, Cilium, Flannel) handle dual-stack differently
Skip to: Docker Configuration | Kubernetes Dual-Stack | CNI Plugins | Testing
Why IPv6 in Containers
Your containers might be IPv4-only while the rest of the Internet moves to IPv6. Mobile networks, ISPs, and cloud providers are IPv6-first. If your containerized services don't support IPv6, you're adding latency through NAT64 gateways or worse—missing customers entirely.
Container orchestration platforms (Docker, Kubernetes, ECS, Nomad) were built during the IPv4 era. IPv6 support came later as an afterthought. The defaults still assume IPv4-only networking.
Enabling IPv6 isn't hard, but it requires explicit configuration at multiple layers: daemon, network, container, and service. Miss one layer and you'll have partial connectivity that breaks mysteriously.
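Before digging into each layer, a quick per-layer sanity sweep helps narrow down where IPv6 is missing. A minimal sketch for Docker, assuming a running container named web (the name is a placeholder):

# Host layer: is there a global IPv6 address and a default route?
ip -6 addr show scope global
ip -6 route show default
# Daemon layer: is IPv6 configured?
grep -E '"ipv6"|fixed-cidr-v6' /etc/docker/daemon.json
# Network layer: does the bridge network report IPv6 enabled?
docker network inspect bridge --format '{{.EnableIPv6}}'
# Container layer: did the container actually get an IPv6 address?
docker inspect web --format '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{end}}'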
Docker IPv6 Configuration
The Docker daemon doesn't enable IPv6 by default. You need to configure it in /etc/docker/daemon.json.
Daemon Configuration
Create or modify /etc/docker/daemon.json:
{
"ipv6": true,
"fixed-cidr-v6": "fd00::/80",
"experimental": false,
"ip6tables": true
}

Breaking this down:
"ipv6": trueenables IPv6 support"fixed-cidr-v6"sets the subnet for containers (use ULA or GUA prefix)"ip6tables": trueenables IPv6 firewall rules (Docker 20.10+)
For production with globally routable addresses, use your provider's assigned prefix:
{
"ipv6": true,
"fixed-cidr-v6": "2001:db8:1234::/64"
}

Restart Docker to apply changes:
sudo systemctl restart docker

Verify IPv6 is enabled:
docker network inspect bridge | grep IPv6

You should see "EnableIPv6": true.
Default Bridge Network
The daemon settings above apply to the default bridge network only. User-defined networks don't inherit IPv6 from daemon.json; create them with the --ipv6 flag:
docker network create --ipv6 \
--subnet=172.20.0.0/16 \
--subnet=fd00:dead:beef::/48 \
mynetwork

This creates a dual-stack network with both IPv4 and IPv6 subnets.
Run containers on this network:
docker run -d --network mynetwork nginx

Containers now receive both IPv4 and IPv6 addresses.
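To confirm the assignments without any in-container tooling, you can dump the network's container map; a quick sketch using the mynetwork example above:

# List containers attached to mynetwork with their IPv4/IPv6 addresses
docker network inspect mynetwork --format '{{json .Containers}}'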
User-Defined Networks
User-defined bridge networks support dual-stack:
docker network create --ipv6 \
--subnet=10.1.0.0/24 \
--gateway=10.1.0.1 \
--subnet=fd00:cafe::/64 \
--gateway=fd00:cafe::1 \
appnetwork

Containers on this network can communicate via IPv6:
# Terminal 1
docker run -it --rm --network appnetwork --name container1 alpine sh

In a second terminal, test connectivity:
# Terminal 2
docker run -it --rm --network appnetwork alpine sh
ping6 container1

Docker's embedded DNS resolver returns AAAA records for container names when IPv6 is enabled.
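You can query the embedded resolver directly to see the AAAA record. A small sketch, run from the second container, assuming you install bind-tools (127.0.0.11 is Docker's embedded DNS address on user-defined networks):

# Ask Docker's embedded DNS for container1's IPv6 address
apk add bind-tools
dig +short AAAA container1 @127.0.0.11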
IPv6 NAT and Routing
By default, Docker uses NAT for IPv4 but may not NAT IPv6; behavior depends on your fixed-cidr-v6 prefix and the ip6tables setting. With "ip6tables": true (as configured above), Docker 20.10+ manages IPv6 NAT rules itself; apply the rules below manually only if you've left it disabled.
With ULA prefixes (fd00::/8), you need NAT for Internet access:
# Enable IPv6 forwarding
sudo sysctl -w net.ipv6.conf.all.forwarding=1

Add a masquerade rule for outbound traffic:
# Add masquerade rule
sudo ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE

With GUA prefixes (globally routable), route directly without NAT:
# On the upstream router: route the container subnet via the Docker host
sudo ip -6 route add 2001:db8:1234::/64 via <docker-host-ipv6>

Configure the upstream router to send your container subnet to the Docker host.
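If you can't touch the upstream router's routing table, proxying NDP for individual container addresses is a common workaround. A sketch assuming the host's uplink interface is eth0 and a container address from the example prefix (both are assumptions):

# Answer neighbor solicitations for the container's address on the uplink
sudo sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
sudo ip -6 neigh add proxy 2001:db8:1234::10 dev eth0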
Docker Compose IPv6
Docker Compose requires explicit IPv6 configuration in the network definition.
Example docker-compose.yml:
version: '3.8'
services:
web:
image: nginx
networks:
- frontend
ports:
- "80:80"
- "[::]:8080:80" # Bind IPv6 explicitly
app:
image: myapp:latest
networks:
- frontend
- backend
db:
image: postgres:14
networks:
- backend
environment:
POSTGRES_HOST_AUTH_METHOD: trust
networks:
frontend:
enable_ipv6: true
ipam:
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
- subnet: fd00:1::/64
gateway: fd00:1::1
backend:
enable_ipv6: true
ipam:
config:
- subnet: 172.21.0.0/16
      - subnet: fd00:2::/64

The enable_ipv6: true flag is required per network. The IPAM (IP Address Management) configuration assigns both IPv4 and IPv6 subnets.
Port binding syntax for IPv6:
ports:
- "80:80" # IPv4 and IPv6
- "0.0.0.0:8080:80" # IPv4 only
- "[::]:8081:80" # IPv6 only
- "127.0.0.1:8082:80" # IPv4 localhost
- "[::1]:8083:80" # IPv6 localhostStart services:
docker-compose up -d

Verify containers have IPv6:
docker-compose exec web ip -6 addr show
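Cross-service IPv6 name resolution is worth a check too. A quick sketch using getent, which ships in the Debian-based nginx image (the service names match the compose file above):

# From the web container, resolve the app service's IPv6 address
docker-compose exec web getent ahostsv6 app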
Kubernetes Dual-Stack

Kubernetes supports dual-stack networking starting with version 1.21 (beta) and 1.23 (stable).
Prerequisites
- Kubernetes 1.23 or later
- CNI plugin that supports dual-stack (Calico, Cilium, Flannel)
- kube-proxy with dual-stack mode
- Cloud provider support (for LoadBalancer services)
Enabling Dual-Stack
For new clusters, enable dual-stack during initialization. With kubeadm:
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
podSubnet: "10.244.0.0/16,fd00:10:244::/56"
  serviceSubnet: "10.96.0.0/16,fd00:10:96::/112"

Initialize the cluster:
kubeadm init --config kubeadm-config.yaml

For managed Kubernetes (EKS, GKE, AKS), enable IPv6 during cluster creation:
# EKS: clusters use a single IP family (IPv4 or IPv6, no dual-stack option);
# select IPv6 in an eksctl config file via:
#   kubernetesNetworkConfig:
#     ipFamily: IPv6
eksctl create cluster -f cluster.yaml

An IPv6-family EKS cluster assigns IPv6 to pods and services while nodes remain dual-stack.
# GKE
gcloud container clusters create mycluster \
--enable-ip-alias \
  --stack-type=ipv4-ipv6

For Azure:
# AKS
az aks create \
--resource-group myResourceGroup \
--name mycluster \
--network-plugin azure \
  --ip-families IPv4,IPv6

Verify dual-stack is enabled:
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDRs}'

You should see both IPv4 and IPv6 CIDRs.
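Node addresses are worth a look as well; this one-liner (standard kubectl jsonpath, nothing assumed beyond the cluster itself) prints each node's reported addresses so you can confirm both families are present:

# Print each node's name and addresses
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[*].address}{"\n"}{end}'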
Pod Networking
Pods receive addresses from both families automatically. No special configuration needed in most cases.
Example pod:
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: nginx
    image: nginx

Deploy and check addresses:
kubectl apply -f pod.yaml
kubectl get pod test-pod -o jsonpath='{.status.podIPs}'

Output shows both IPv4 and IPv6:
[{"ip":"10.244.1.5"},{"ip":"fd00:10:244:1::5"}]Applications inside pods should bind to :: (all addresses) or 0.0.0.0 specifically:
# Python example - bind to both IPv4 and IPv6
import socket
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", 8080))
sock.listen(5)

Or bind to 0.0.0.0 for IPv4 and :: for IPv6 separately.
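A quick way to verify what a process actually bound to, from inside the pod or container, assuming the image ships iproute2 (the port matches the example above):

# LISTEN sockets on port 8080: "[::]:8080" or "*:8080" covers IPv6,
# "0.0.0.0:8080" means IPv4 only
ss -ltn | grep 8080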
Service Configuration
Services can be IPv4-only, IPv6-only, or dual-stack. Control this with ipFamilyPolicy and ipFamilies fields.
Dual-stack service (note: new Services default to SingleStack even in dual-stack clusters, so request dual-stack explicitly):
apiVersion: v1
kind: Service
metadata:
name: myservice
spec:
ipFamilyPolicy: RequireDualStack
ipFamilies:
- IPv4
- IPv6
selector:
app: myapp
ports:
- port: 80
    targetPort: 8080

IPv4-only service:
apiVersion: v1
kind: Service
metadata:
name: myservice-v4
spec:
ipFamilyPolicy: SingleStack
ipFamilies:
- IPv4
selector:
app: myapp
ports:
  - port: 80

IPv6-only service:
apiVersion: v1
kind: Service
metadata:
name: myservice-v6
spec:
ipFamilyPolicy: SingleStack
ipFamilies:
- IPv6
selector:
app: myapp
ports:
  - port: 80

Check service IPs:
kubectl get svc myservice -o jsonpath='{.spec.clusterIPs}'

Output:
["10.96.100.5","fd00:10:96::a5"]LoadBalancer Services#
LoadBalancer Services

LoadBalancer services provision cloud load balancers with dual-stack frontends (if the cloud provider supports it).
apiVersion: v1
kind: Service
metadata:
name: web-lb
spec:
type: LoadBalancer
ipFamilyPolicy: RequireDualStack
ipFamilies:
- IPv4
- IPv6
selector:
app: web
ports:
- port: 80
    targetPort: 8080

Check external IPs:
kubectl get svc web-lb

Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
web-lb   LoadBalancer   10.96.100.10   203.0.113.10,2001:db8:1234::10   80:30123/TCP

Cloud Provider Compatibility
Not all cloud providers support dual-stack load balancers yet. Verify support for your platform before deploying.
Ingress Controllers
Ingress support for IPv6 depends on the controller implementation.
Popular controllers with IPv6 support:
- nginx-ingress: Supports dual-stack, listens on both IPv4 and IPv6
- Traefik: Full dual-stack support
- HAProxy Ingress: Supports IPv6
- Contour: Dual-stack capable
Configure nginx-ingress for dual-stack:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
ipFamilyPolicy: RequireDualStack
ipFamilies:
- IPv4
- IPv6
selector:
app.kubernetes.io/name: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
      targetPort: https

Ingress resources don't need special IPv6 configuration; they work automatically if the controller supports it.
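Once the controller's Service is up, you can confirm it received addresses from both families (standard kubectl fields; the namespace and name match the manifest above):

# Cluster IPs and load balancer ingress addresses for the controller
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.clusterIPs}{"\n"}{.status.loadBalancer.ingress}{"\n"}'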
Network Policies
NetworkPolicy resources support IPv6 CIDR blocks:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-ipv6
spec:
podSelector:
matchLabels:
app: myapp
ingress:
- from:
- ipBlock:
cidr: 2001:db8::/32
- ipBlock:
cidr: fd00::/8
egress:
- to:
- ipBlock:
cidr: ::/0
except:
      - fc00::/7 # Block ULA

Both IPv4 and IPv6 rules can coexist in the same policy.
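A rough smoke test for the egress rule, run from a pod selected by the policy (the pod name and target address are placeholders; expect timeouts if the ULA block is enforced):

# Traffic to a ULA destination should be dropped by the egress rule
kubectl exec <myapp-pod> -- ping6 -c 2 -W 2 fd00::1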
CNI Plugin Considerations
Different CNI plugins handle dual-stack differently.
Calico
Calico supports dual-stack with IP pools:
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: ipv4-pool
spec:
cidr: 10.244.0.0/16
ipipMode: Never
natOutgoing: true
nodeSelector: all()
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: ipv6-pool
spec:
cidr: fd00:10:244::/56
natOutgoing: false
  nodeSelector: all()

Enable IPv6 in Calico:
kubectl set env daemonset/calico-node -n kube-system IP6=autodetect
kubectl set env daemonset/calico-node -n kube-system FELIX_IPV6SUPPORT=true
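To check that both pools are active, calicoctl can list them (this assumes you have calicoctl installed alongside kubectl):

# List Calico IP pools with their CIDRs
calicoctl get ippool -o wide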
Cilium

Cilium has native dual-stack support. Enable during installation:
helm install cilium cilium/cilium \
--namespace kube-system \
--set ipv4.enabled=true \
--set ipv6.enabled=true \
--set tunnel=disabled \
  --set autoDirectNodeRoutes=true
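To confirm the agent came up with IPv6 enabled, you can run its status command via kubectl; a sketch assuming the default DaemonSet name from the Helm chart (newer releases name the in-pod binary cilium-dbg rather than cilium):

# Look for the IPv6 lines in the agent status output
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i ipv6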
Flannel

Flannel requires dual-stack settings in its net-conf.json ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-flannel-cfg
namespace: kube-system
data:
net-conf.json: |
{
"Network": "10.244.0.0/16",
"IPv6Network": "fd00:10:244::/56",
"Backend": {
"Type": "vxlan"
}
    }
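Flannel only reads net-conf.json at startup, so restart its pods after editing the ConfigMap (the DaemonSet name and namespace below match the older upstream manifest; newer releases deploy into the kube-flannel namespace):

# Restart flannel pods so they pick up the new network config
kubectl -n kube-system rollout restart daemonset kube-flannel-ds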
Weave

Weave Net does not support IPv6 for pod networking, so it isn't an option for dual-stack clusters. Use Calico, Cilium, or Flannel instead.

Common Pitfalls
Application Binding
Applications must explicitly listen on IPv6 addresses. Binding to 0.0.0.0 only listens on IPv4.
Wrong:
server.bind(("0.0.0.0", 8080)) # IPv4 onlyRight:
server.bind(("::", 8080)) # IPv6 (and IPv4 if IPV6_V6ONLY=0)Many languages default to IPv4-only. Check your framework documentation.
DNS Resolution
DNS queries in dual-stack environments return both A and AAAA records. Applications should try both, preferring IPv6.
Some older libraries only query A records. Update dependencies or configure DNS resolution explicitly.
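To see what the resolver actually returns inside a container, getent is a convenient check (most glibc-based images ship it; the service name is an example):

# AAAA and A results, in the order the resolver prefers them
getent ahosts myservice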
Firewall Rules
Container firewalls (iptables/ip6tables) need rules for both families. Docker and Kubernetes handle this automatically if configured correctly, but custom rules may block IPv6.
Verify IPv6 firewall rules:
sudo ip6tables -L -n -v
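With "ip6tables": true, Docker maintains its own IPv6 chains; these two commands (standard ip6tables usage) narrow the listing to the NAT and filter rules it installed:

# Docker's IPv6 masquerade rules
sudo ip6tables -t nat -L POSTROUTING -n -v
# Docker's per-network filter rules
sudo ip6tables -L DOCKER -n -v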
External Connectivity

Containers with IPv6 need proper routing to external networks. If your host doesn't have IPv6 connectivity, containers won't either.
Test host IPv6 first:
ping6 google.com

If that fails, fix host networking before troubleshooting containers.
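A few host-side checks that narrow down why IPv6 is failing (plain iproute2 and sysctl, nothing assumed):

ip -6 addr show scope global           # is there a global address?
ip -6 route show default               # is there a default route?
sysctl net.ipv6.conf.all.disable_ipv6  # 1 means IPv6 is disabled host-wide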
StatefulSet Headless Services
StatefulSets with headless services return both IPv4 and IPv6 addresses for pod DNS names:
nslookup web-0.myservice.default.svc.cluster.local

Applications connecting to StatefulSet pods must handle multiple addresses gracefully.
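If you only want the IPv6 address for a specific pod, query the AAAA record explicitly (dig comes from bind-tools/dnsutils; the hostname matches the example above):

# Just the AAAA record for one StatefulSet pod
dig +short AAAA web-0.myservice.default.svc.cluster.local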
Testing Dual-Stack Deployments
Deploy a test application:
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app
spec:
replicas: 2
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: test-svc
spec:
type: LoadBalancer
ipFamilyPolicy: RequireDualStack
ipFamilies:
- IPv4
- IPv6
selector:
app: test
ports:
  - port: 80

Apply and test:
kubectl apply -f test-app.yaml

Get the service external IPs:
# Get service IPs
kubectl get svc test-svc

Test both address families:
# Test IPv4
curl http://<ipv4-external-ip>
# Test IPv6
curl -6 http://[<ipv6-external-ip>]

From inside a pod, test dual-stack connectivity:
kubectl run -it --rm debug --image=alpine --restart=Never -- sh

Inside the debug pod:
# Inside pod
apk add curl bind-tools
nslookup test-svc
curl -4 http://test-svc # IPv4
curl -6 http://test-svc  # IPv6

Production Considerations
- Monitor both address families in observability tools
- Test failover behavior when one family is unavailable
- Configure health checks for both IPv4 and IPv6 (see the sketch after this list)
- Document network topology including IPv6 prefixes
- Plan IP address allocation to avoid conflicts
- Enable IPv6 in CI/CD pipelines for testing
- Train team on dual-stack troubleshooting
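For the health-check bullet above, a minimal sketch of a dual-stack probe you could run from cron or CI (the endpoint URL is hypothetical):

# Probe the same endpoint over each family; either failure is a signal
curl -4 -fsS --max-time 5 https://example.com/healthz || echo "IPv4 check failed"
curl -6 -fsS --max-time 5 https://example.com/healthz || echo "IPv6 check failed"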
Related Articles
- IPv6 for Developers - Application-level IPv6 implementation
- IPv6 in AWS, Azure, and GCP - Cloud platform IPv6 configuration
- Enable IPv6 on Your Network - Infrastructure-wide IPv6 deployment
Verify Container Connectivity
Use our Ping Tool and IPv6 Validator to test that your containerized services are reachable over IPv6.