/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
link/none
inet 10.8.8.10/24 brd 10.8.8.255 scope global tun0
valid_lft forever preferred_lft forever
102: eth0@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ip r
0.0.0.0/1 via 10.8.8.1 dev tun0
default via 172.19.0.1 dev eth0
10.8.8.0/24 dev tun0 proto kernel scope link src 10.8.8.10
128.0.0.0/1 via 10.8.8.1 dev tun0
172.19.0.0/16 dev eth0 proto kernel scope link src 172.19.0.2
212.102.42.84 via 172.19.0.1 dev eth0
Drawbacks
However, this approach is not perfect; I ran into a few fairly annoying problems.
port binding
If you try to expose a port on the test container, you will hit this error:
ERROR: for vpn_test_1 Cannot create container for service test: conflicting options: port publishing and the container type network mode
network_mode: service cannot be combined with port publishing, so if you want to expose a port it has to be configured on the vpn container instead, which makes the whole config look rather odd.
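To make this concrete, here is a minimal sketch of what the config ends up looking like; the image names and port 8080 are placeholders rather than my actual setup. The point is only that ports has to live on the vpn service, even though the process actually listening on that port runs in test:

services:
  vpn:
    image: dperson/openvpn-client   # placeholder VPN client image
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    ports:
      - "8080:8080"                 # published here, not on test
  test:
    image: alpine
    network_mode: "service:vpn"     # share the vpn container's network namespace
    depends_on:
      - vpn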
ERROR: for vpn_test_1 Cannot restart container 9345feb04c10564c0a9443891bd4e6dd67d01a7d822f855b5b901d7a618fba56: No such container: cce530d075a16267da10213949622244cdbbcff40070b38c1a12ff1f9c40325f
This happens because the vpn container gets a new container id after it is recreated, so the container that test originally depended on no longer exists. You can force all containers with dependency relationships to be recreated with docker-compose up --always-recreate-deps -d, but that is not very convenient.
bug
I am not sure why, but restarting sometimes fails with this error, and I have to run the command again before it succeeds:
ERROR: for vpn_test_1 Cannot start service vpn_test_1: Container bb98419511b1aaa72a897bfcb8b61d01ff8508dbd7aa9af9136569b075d3c073 is restarting, wait until the container is running
By definition, all containers in the same Podman pod share the same network namespace. Therefore, the containers will share the IP Address, MAC Addresses and port mappings. You can always communicate between containers in the same pod, using localhost.
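The same holds for Kubernetes pods. As a minimal illustration (the names and images below are placeholders, not the actual vpn-test pod), two containers in one pod can reach each other over localhost because they share the pod's network namespace:

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo          # placeholder name
spec:
  containers:
    - name: web
      image: nginx:alpine          # listens on port 80
    - name: terminal
      image: alpine
      command: ["sleep", "3600"]
      # from this container, `wget -qO- http://localhost` reaches nginx,
      # since both containers share the same network namespace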
After entering the service container, you can see that an extra vxlan0 interface has appeared, connecting to the pod-gateway, and the routes are already set up:
> kubectl exec --stdin --tty -n vpn vpn-test -c terminal -- /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if145: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 8e:d0:ed:fc:02:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.42.0.74/24 brd 10.42.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::8cd0:edff:fefc:2c5/64 scope link
valid_lft forever preferred_lft forever
4: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 1a:ef:44:af:f4:ad brd ff:ff:ff:ff:ff:ff
inet 172.16.0.237/24 brd 172.16.0.255 scope global vxlan0
valid_lft forever preferred_lft forever
inet6 fe80::18ef:44ff:feaf:f4ad/64 scope link
/ # ip r
default via 172.16.0.1 dev vxlan0
10.42.0.0/24 dev eth0 proto kernel scope link src 10.42.0.74
10.42.0.0/16 via 10.42.0.1 dev eth0
10.43.0.10 via 10.42.0.1 dev eth0
172.16.0.0/24 dev vxlan0 proto kernel scope link src 172.16.0.237