Troubleshooting a K3s Node That Can't Reach the External Network

I’m running K3s directly on my server (a Dell Latitude E7450 laptop) in my homelab setup. It had been working fine for a while, but yesterday I noticed that the cluster couldn’t pull images anymore. It turned out that pods had no access to the external network at all.

Here’s how I tracked it down and fixed it.

Root Cause

My network interface ended up with two IP addresses bound to it, one valid and one stale. K3s was trying to send traffic using the bad IP, so anything that needed internet access (like pulling container images) failed.

Network Path

K3s runs directly on the host, so there’s no VM layer in between. Pod traffic enters the host’s network stack through the CNI interfaces (cni0, plus flannel.1 for cross-node traffic) and reaches the outside world through the host’s main interface.

Run ip route on the host:

ip route

You’ll see something like:

default via 192.168.1.1 dev enp3s0 proto dhcp src 192.168.1.100 metric 100
10.42.0.0/16 dev cni0 proto kernel scope link src 10.42.0.1

This means:

  • Pod traffic goes through cni0
  • Anything else goes through the default route via enp3s0 (your main network card)
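Pod traffic heading for the internet is also source-NATed to the host’s IP on the way out. If you want to see this, flannel’s masquerade rules live in the NAT table (a quick check; the exact chain and rule names vary between flannel versions):

sudo iptables -t nat -S | grep -i masquerade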

Checking the Problem

Check the IPs on your main interface:

ip addr show enp3s0

In my case, it showed:

3: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.100/24 brd 192.168.1.255 scope global dynamic enp3s0
    inet 192.168.1.101/24 brd 192.168.1.255 scope global secondary enp3s0

Notice there are two IPs. Normally there should be only one, the address assigned by DHCP. That secondary address is the problem: the kernel can pick it as the source address for outgoing traffic, including the masqueraded pod traffic.
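You can also ask the kernel directly which source address it would use for an external destination; if it names the stale address, that confirms the problem:

ip route get 8.8.8.8

This prints the chosen route along with a src field, e.g. src 192.168.1.101 in the broken state.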

Verifying with tcpdump

You can confirm which IP is being used by capturing packets:

sudo tcpdump -i enp3s0 -nn host 8.8.8.8

Then from inside a pod:

kubectl exec -it <pod> -- ping -c 3 8.8.8.8
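If the pod’s image doesn’t ship a ping binary, a throwaway busybox pod works just as well (pingtest is just an arbitrary name):

kubectl run pingtest --rm -it --image=busybox --restart=Never -- ping -c 3 8.8.8.8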

You’ll likely see packets going out with the wrong source IP.
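The capture will look something like this (illustrative output; your IDs and timestamps will differ), with requests leaving from the stale 192.168.1.101 and no replies coming back:

13:37:01.412345 IP 192.168.1.101 > 8.8.8.8: ICMP echo request, id 7, seq 1, length 64
13:37:02.414567 IP 192.168.1.101 > 8.8.8.8: ICMP echo request, id 7, seq 2, length 64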

Fix

Delete the wrong IP:

sudo ip addr del 192.168.1.101/24 dev enp3s0

After that, check again to make sure only one IP remains:

ip addr show enp3s0

Restart K3s:

sudo systemctl restart k3s

Now everything should work fine. Pods can reach the external network again, and images can be pulled normally.
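To double-check the end-to-end path, you can pull a small public image through K3s’s bundled crictl (the busybox image here is just an example):

sudo k3s crictl pull docker.io/library/busybox:latest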

Notes

I’m not sure what caused the duplicate IP; possibly a DHCP quirk or a leftover lease from switching networks. Either way, cleaning up the extra address fixed the issue.
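If it happens again, the DHCP client’s lease history and logs would be the place to look. The exact commands depend on your distro and network stack; on a NetworkManager-based system, something like this (the connection name is whatever nmcli connection show lists for your wired interface):

journalctl -u NetworkManager | grep -i dhcp
nmcli -f IP4 connection show "Wired connection 1"

On systems using plain dhclient, the lease file (commonly /var/lib/dhcp/dhclient.leases) records every address the client has held.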