ping6.net

IPv6 in Docker and Kubernetes: Container Networking Guide

Configure IPv6 for containerized applications. Covers Docker daemon settings, Kubernetes dual-stack, CNI plugins, and service exposure.

December 14, 2024 · 10 min read
IPv6 · Docker · Kubernetes · containers · networking · DevOps

Container networking defaults to IPv4. That's a problem if you're running production workloads in 2024.

TL;DR - Quick Summary

Key Points:

  • Docker and Kubernetes default to IPv4-only; IPv6 requires explicit configuration
  • Docker needs daemon.json changes and custom networks with --ipv6 flag
  • Kubernetes 1.23+ supports stable dual-stack with pod and service IPv6
  • CNI plugins (Calico, Cilium, Flannel) handle dual-stack differently

Skip to: Docker Configuration | Kubernetes Dual-Stack | CNI Plugins | Testing

Why IPv6 in Containers#

Your containers might be IPv4-only while the rest of the Internet moves to IPv6. Mobile networks, ISPs, and cloud providers are IPv6-first. If your containerized services don't support IPv6, you're adding latency through NAT64 gateways or worse—missing customers entirely.

Container orchestration platforms (Docker, Kubernetes, ECS, Nomad) were built during the IPv4 era. IPv6 support came later as an afterthought. The defaults still assume IPv4-only networking.

Enabling IPv6 isn't hard, but it requires explicit configuration at multiple layers: daemon, network, container, and service. Miss one layer and you'll have partial connectivity that breaks mysteriously.


Docker IPv6 Configuration#

The Docker daemon doesn't enable IPv6 by default. You need to configure it in /etc/docker/daemon.json.

Daemon Configuration#

Create or modify /etc/docker/daemon.json:

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "experimental": false,
  "ip6tables": true
}

Breaking this down:

  • "ipv6": true enables IPv6 support
  • "fixed-cidr-v6" sets the subnet for containers (use ULA or GUA prefix)
  • "ip6tables": true enables IPv6 firewall rules (Docker 20.10+)

For production with globally routable addresses, use your provider's assigned prefix:

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1234::/64"
}

Restart Docker to apply changes:

sudo systemctl restart docker

Verify IPv6 is enabled:

docker network inspect bridge | grep IPv6

You should see "EnableIPv6": true.

Default Bridge Network#

The default bridge network doesn't automatically get IPv6 even after daemon configuration. Create a custom network:

docker network create --ipv6 \
  --subnet=172.20.0.0/16 \
  --subnet=fd00:dead:beef::/48 \
  mynetwork

This creates a dual-stack network with both IPv4 and IPv6 subnets.

Run containers on this network:

docker run -d --network mynetwork nginx

Containers now receive both IPv4 and IPv6 addresses.

User-Defined Networks#

User-defined bridge networks support dual-stack:

docker network create --ipv6 \
  --subnet=10.1.0.0/24 \
  --gateway=10.1.0.1 \
  --subnet=fd00:cafe::/64 \
  --gateway=fd00:cafe::1 \
  appnetwork

Containers on this network can communicate via IPv6:

# Terminal 1
docker run -it --rm --network appnetwork --name container1 alpine sh

In a second terminal, test connectivity:

# Terminal 2
docker run -it --rm --network appnetwork alpine sh
ping6 container1

Docker's embedded DNS resolver returns AAAA records for container names when IPv6 is enabled.
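You can confirm this from application code as well. A minimal sketch using Python's resolver, asking specifically for IPv6 results (inside a container you would pass a peer container's name, e.g. "container1"):

```python
# Sketch: list the IPv6 addresses a name resolves to.
import socket

def ipv6_addresses(host: str) -> list[str]:
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET6)
    return sorted({info[4][0] for info in infos})

# An address literal resolves locally, no DNS needed:
print(ipv6_addresses("::1"))  # ['::1']
```

An empty result (or `socket.gaierror`) for a container name usually means the network was created without --ipv6.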

IPv6 NAT and Routing#

By default, Docker uses NAT for IPv4 but may not NAT IPv6. This depends on your fixed-cidr-v6 configuration.

With ULA prefixes (fd00::/8), you need NAT for Internet access:

# Enable IPv6 forwarding
sudo sysctl -w net.ipv6.conf.all.forwarding=1

Add masquerade rule for outbound traffic:

# Add masquerade rule
sudo ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE

With GUA prefixes (globally routable), route directly without NAT:

# On the upstream router: route the container subnet via the Docker host
sudo ip -6 route add 2001:db8:1234::/64 via <docker-host-ipv6>

Configure upstream router to route your container subnet to the Docker host.

Docker Compose IPv6#

Docker Compose requires explicit IPv6 configuration in the network definition.

Example docker-compose.yml:

version: '3.8'
 
services:
  web:
    image: nginx
    networks:
      - frontend
    ports:
      - "80:80"
      - "[::]:8080:80"  # Bind IPv6 explicitly
 
  app:
    image: myapp:latest
    networks:
      - frontend
      - backend
 
  db:
    image: postgres:14
    networks:
      - backend
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
 
networks:
  frontend:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
        - subnet: fd00:1::/64
          gateway: fd00:1::1
 
  backend:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.21.0.0/16
        - subnet: fd00:2::/64

The enable_ipv6: true flag is required per network. IPAM (IP Address Management) configuration assigns both IPv4 and IPv6 subnets.

Port binding syntax for IPv6:

ports:
  - "80:80"              # IPv4 and IPv6
  - "0.0.0.0:8080:80"    # IPv4 only
  - "[::]:8081:80"       # IPv6 only
  - "127.0.0.1:8082:80"  # IPv4 localhost
  - "[::1]:8083:80"      # IPv6 localhost

Start services:

docker-compose up -d

Verify containers have IPv6:

docker-compose exec web ip -6 addr show

Kubernetes Dual-Stack#

Kubernetes supports dual-stack networking starting with version 1.21 (beta) and 1.23 (stable).

Prerequisites#

  1. Kubernetes 1.23 or later
  2. CNI plugin that supports dual-stack (Calico, Cilium, Flannel, Weave)
  3. kube-proxy with dual-stack mode
  4. Cloud provider support (for LoadBalancer services)

Enabling Dual-Stack#

For new clusters, enable dual-stack during initialization. With kubeadm:

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16,fd00:10:244::/56"
  serviceSubnet: "10.96.0.0/16,fd00:10:96::/112"

Initialize the cluster:

kubeadm init --config kubeadm-config.yaml

For managed Kubernetes (EKS, GKE, AKS), enable dual-stack during cluster creation:

# EKS
eksctl create cluster \
  --name mycluster \
  --ip-family ipv4,ipv6

This creates an EKS cluster with dual-stack enabled.

# GKE
gcloud container clusters create mycluster \
  --enable-ip-alias \
  --stack-type=IPV4_IPV6

For Azure:

# AKS
az aks create \
  --resource-group myResourceGroup \
  --name mycluster \
  --network-plugin azure \
  --ip-families IPv4,IPv6

Verify dual-stack is enabled:

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDRs}'

You should see both IPv4 and IPv6 CIDRs.
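To assert this in a script rather than by eye, a small sketch that checks a list of CIDRs (such as the jsonpath output above) covers both families:

```python
# Sketch: verify a CIDR list contains both an IPv4 and an IPv6 range.
import ipaddress

def is_dual_stack(cidrs: list[str]) -> bool:
    versions = {ipaddress.ip_network(c).version for c in cidrs}
    return versions == {4, 6}

print(is_dual_stack(["10.244.1.0/24", "fd00:10:244:1::/64"]))  # True
print(is_dual_stack(["10.244.1.0/24"]))                        # False
```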

Pod Networking#

Pods receive addresses from both families automatically. No special configuration needed in most cases.

Example pod:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx

Deploy and check addresses:

kubectl apply -f pod.yaml
kubectl get pod test-pod -o jsonpath='{.status.podIPs}'

Output shows both IPv4 and IPv6:

[{"ip":"10.244.1.5"},{"ip":"fd00:10:244:1::5"}]

Applications inside pods should bind to :: (all IPv6 addresses, which also accepts IPv4 when IPV6_V6ONLY is disabled) so a single socket serves both families:

# Python example - bind to both IPv4 and IPv6
import socket
 
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", 8080))
sock.listen(5)

Or bind to 0.0.0.0 for IPv4 and :: for IPv6 separately.
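The split approach, one socket per family, can be sketched as follows (port 0 here lets the OS pick a free port; in practice you would use your service port):

```python
# Sketch: serve IPv4 and IPv6 on separate sockets, one per family.
import socket

def make_listener(family: int, address: str, port: int) -> socket.socket:
    sock = socket.socket(family, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    if family == socket.AF_INET6:
        # Keep this socket IPv6-only so the IPv4 socket can own 0.0.0.0.
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
    sock.bind((address, port))
    sock.listen(5)
    return sock

v4 = make_listener(socket.AF_INET, "0.0.0.0", 0)
v6 = make_listener(socket.AF_INET6, "::", 0)
```

Two sockets mean two accept loops, but they isolate failures per family and sidestep platform differences in the default value of IPV6_V6ONLY.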

Service Configuration#

Services can be IPv4-only, IPv6-only, or dual-stack. Control this with ipFamilyPolicy and ipFamilies fields.

Dual-stack service (the default ipFamilyPolicy is SingleStack even in dual-stack clusters, so request dual-stack explicitly):

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080

IPv4-only service:

apiVersion: v1
kind: Service
metadata:
  name: myservice-v4
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  selector:
    app: myapp
  ports:
    - port: 80

IPv6-only service:

apiVersion: v1
kind: Service
metadata:
  name: myservice-v6
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
  selector:
    app: myapp
  ports:
    - port: 80

Check service IPs:

kubectl get svc myservice -o jsonpath='{.spec.clusterIPs}'

Output:

["10.96.100.5","fd00:10:96::a5"]

LoadBalancer Services#

LoadBalancer services provision cloud load balancers with dual-stack frontends (if cloud provider supports it).

apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080

Check external IPs:

kubectl get svc web-lb

Output:

NAME     TYPE           CLUSTER-IP      EXTERNAL-IP                        PORT(S)
web-lb   LoadBalancer   10.96.100.10    203.0.113.10,2001:db8:1234::10    80:30123/TCP

Cloud Provider Compatibility

Not all cloud providers support dual-stack load balancers yet. Verify support for your platform before deploying.

Ingress Controllers#

Ingress support for IPv6 depends on the controller implementation.

Popular controllers with IPv6 support:

  • nginx-ingress: Supports dual-stack, listens on both IPv4 and IPv6
  • Traefik: Full dual-stack support
  • HAProxy Ingress: Supports IPv6
  • Contour: Dual-stack capable

Configure nginx-ingress for dual-stack:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

Ingress resources don't need special IPv6 configuration—they work automatically if the controller supports it.

Network Policies#

NetworkPolicy resources support IPv6 CIDR blocks:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ipv6
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
    - from:
      - ipBlock:
          cidr: 2001:db8::/32
      - ipBlock:
          cidr: fd00::/8
  egress:
    - to:
      - ipBlock:
          cidr: ::/0
          except:
            - fc00::/7  # Block ULA

Both IPv4 and IPv6 rules can coexist in the same policy.
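When writing CIDR rules like these, it helps to double-check which block an address actually falls into; Python's ipaddress module makes that trivial:

```python
# Sketch: confirm which policy block an address matches.
import ipaddress

ula = ipaddress.ip_network("fc00::/7")        # unique local addresses
docs = ipaddress.ip_network("2001:db8::/32")  # documentation prefix

print(ipaddress.ip_address("fd00::1") in ula)       # True  (hit by the egress except rule)
print(ipaddress.ip_address("2001:db8::5") in docs)  # True  (allowed by the ingress rule)
print(ipaddress.ip_address("2600::1") in ula)       # False
```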


CNI Plugin Considerations#

Different CNI plugins handle dual-stack differently.

Calico#

Calico supports dual-stack with IP pools:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ipv4-pool
spec:
  cidr: 10.244.0.0/16
  ipipMode: Never
  natOutgoing: true
  nodeSelector: all()
 
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ipv6-pool
spec:
  cidr: fd00:10:244::/56
  natOutgoing: false
  nodeSelector: all()

Enable IPv6 in Calico:

kubectl set env daemonset/calico-node -n kube-system IP6=autodetect
kubectl set env daemonset/calico-node -n kube-system FELIX_IPV6SUPPORT=true

Cilium#

Cilium has native dual-stack support. Enable during installation:

helm install cilium cilium/cilium \
  --namespace kube-system \
  --set ipv4.enabled=true \
  --set ipv6.enabled=true \
  --set tunnel=disabled \
  --set autoDirectNodeRoutes=true

Flannel#

Flannel requires dual-stack mode in the DaemonSet configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "IPv6Network": "fd00:10:244::/56",
      "Backend": {
        "Type": "vxlan"
      }
    }

Weave#

Weave supports dual-stack with both IPAM modes:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.244.0.0/16&env.IPALLOC_RANGE=fd00:10:244::/56"

Common Pitfalls#

Application Binding#

Applications must explicitly listen on IPv6 addresses. Binding to 0.0.0.0 only listens on IPv4.

Wrong:

server.bind(("0.0.0.0", 8080))  # IPv4 only

Right:

server.bind(("::", 8080))  # IPv6 (and IPv4 if IPV6_V6ONLY=0)

Many languages default to IPv4-only. Check your framework documentation.

DNS Resolution#

DNS queries in dual-stack environments return both A and AAAA records. Applications should try both, preferring IPv6.

Some older libraries only query A records. Update dependencies or configure DNS resolution explicitly.
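A sketch of "try both, preferring IPv6" using the resolver's full result set. The sequential connect loop is the classic fallback pattern, not a full Happy Eyeballs implementation (which races both families concurrently):

```python
# Sketch: resolve both A and AAAA records, order IPv6 first,
# connect to the first address that works.
import socket

def connect_prefer_ipv6(host: str, port: int, timeout: float = 3.0) -> socket.socket:
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort AF_INET6 results ahead of AF_INET (False sorts before True).
    infos.sort(key=lambda info: info[0] != socket.AF_INET6)
    last_error = None
    for family, socktype, proto, _, sockaddr in infos:
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as exc:
            last_error = exc
    raise last_error or OSError(f"no addresses for {host}")
```

Most modern HTTP clients already do this internally; the sketch matters when you work with raw sockets or an older library that only queries A records.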

Firewall Rules#

Container firewalls (iptables/ip6tables) need rules for both families. Docker and Kubernetes handle this automatically if configured correctly, but custom rules may block IPv6.

Verify IPv6 firewall rules:

sudo ip6tables -L -n -v

External Connectivity#

Containers with IPv6 need proper routing to external networks. If your host doesn't have IPv6 connectivity, containers won't either.

Test host IPv6 first:

ping6 google.com

If that fails, fix host networking before troubleshooting containers.

StatefulSet Headless Services#

StatefulSets with headless services return both IPv4 and IPv6 addresses for pod DNS names:

nslookup web-0.myservice.default.svc.cluster.local

Applications connecting to StatefulSet pods must handle multiple addresses gracefully.


Testing Dual-Stack Deployments#

Deploy a test application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
 
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: test
  ports:
    - port: 80

Apply and test:

kubectl apply -f test-app.yaml

Get the service external IPs:

# Get service IPs
kubectl get svc test-svc

Test both address families:

# Test IPv4
curl http://<ipv4-external-ip>
 
# Test IPv6
curl -6 http://[<ipv6-external-ip>]

From inside a pod, test dual-stack connectivity:

kubectl run -it --rm debug --image=alpine --restart=Never -- sh

Inside the debug pod:

# Inside pod
apk add curl bind-tools
nslookup test-svc
curl -4 http://test-svc  # IPv4
curl -6 http://test-svc  # IPv6

Production Considerations#

  1. Monitor both address families in observability tools
  2. Test failover behavior when one family is unavailable
  3. Configure health checks for both IPv4 and IPv6
  4. Document network topology including IPv6 prefixes
  5. Plan IP address allocation to avoid conflicts
  6. Enable IPv6 in CI/CD pipelines for testing
  7. Train team on dual-stack troubleshooting

Verify Container Connectivity

Use our Ping Tool and IPv6 Validator to test your containerized services are reachable over IPv6.

Additional Resources#