Problem background
I set up a Docker Swarm cluster on two Alibaba Cloud ECS servers: one manager node and one worker node, plus an overlay network named overlay-demo. I created two services (one container each), and when I try to ping the container on one server from the container on the other server by name, I get "bad address".
# Initialize the cluster on server A
docker swarm init --advertise-addr xxx --data-path-port 5789
# Join server B to the cluster
docker swarm join --token xxx
# Create an attachable overlay network
docker network create --driver overlay --attachable overlay-demo
# Create the two services (one replica each)
docker service create --name whoami --network overlay-demo -p 12028:8000 -d jwilder/whoami
docker service create --name client -d --network overlay-demo busybox sh -c "while true; do sleep 3600;done"
-bash-4.2# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
m7c1ausrqbz1 client replicated 1/1 busybox:latest
zkepy77jlbiq whoami replicated 1/1 jwilder/whoami:latest *:12028->8000/tcp
Output and full error message
At this point whoami is running on server A and client is running on server B.
# Run on server B
-bash-4.2# docker network ls
NETWORK ID NAME DRIVER SCOPE
4eaafe02c0c4 bridge bridge local
f5ca4e293cee docker_gwbridge bridge local
99fdd314534a host host local
97by9ml6ew7a ingress overlay swarm
fde5a3c2b796 none null local
0obzphh4kf4j overlay-demo overlay swarm
-bash-4.2# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9ef11e0851e busybox:latest "sh -c 'while true; …" About a minute ago Up About a minute client.1.8pyn4xpskru16hjgu4odbwsrr
-bash-4.2# docker exec -it d9ef ping client
PING client (10.0.1.11): 56 data bytes
64 bytes from 10.0.1.11: seq=0 ttl=64 time=0.102 ms
64 bytes from 10.0.1.11: seq=1 ttl=64 time=0.098 ms
64 bytes from 10.0.1.11: seq=2 ttl=64 time=0.102 ms
^C
--- client ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.098/0.100/0.102 ms
-bash-4.2# docker exec -it d9ef ping whoami
ping: bad address 'whoami'
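As far as I can tell, busybox's ping reports "bad address" when the name lookup itself fails, i.e. Docker's embedded DNS on node B has no record for whoami; service-discovery records travel between nodes over the gossip ports (TCP/UDP 7946), so a blocked gossip port would show up as exactly this symptom. A hedged way to confirm is to query the resolver directly instead of going through ping (the container ID d9ef is taken from the listing above; the `command -v docker` guard is only there so the snippet degrades gracefully on a machine without Docker):

```shell
# Query the embedded DNS directly. If gossip between the nodes works,
# "whoami" should resolve to the service VIP and "tasks.whoami" to the
# individual task IPs; a failure here confirms a service-discovery problem
# rather than an ICMP/firewall problem.
if command -v docker >/dev/null 2>&1; then
    docker exec d9ef nslookup whoami
    docker exec d9ef nslookup tasks.whoami
else
    echo "skipped: docker not installed"
fi
```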
# Check docker info on server A
-bash-4.2# docker info
Client: Docker Engine - Community
Version: 26.1.4
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.14.1
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.27.1
Path: /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 4
Running: 3
Paused: 0
Stopped: 1
Images: 15
Server Version: 26.1.4
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: active
NodeID: 2xxkioglkc2f2reyewtrcsmba
Is Manager: true
ClusterID: jswgrj5pb1gq7ig9vuw5tz0fn
Managers: 1
Nodes: 2
Data Path Port: 5789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: xxx
Manager Addresses:
xxx:2377
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d2d58213f83a351ca8f528a95fbd145f5654e957
runc version: v1.1.12-0-g51d5e94
init version: de40ad0
Security Options:
seccomp
Profile: builtin
Kernel Version: 3.10.0-957.21.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.561GiB
Name: aliyun-yr
ID: 9def1e07-e0f7-4c89-b374-c53459c06207
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://docker.foreverlink.love/
Live Restore Enabled: false
# Inspect the overlay-demo network
-bash-4.2# docker network inspect overlay-demo
[
{
"Name": "overlay-demo",
"Id": "0obzphh4kf4j8q3kjdanszo44",
"Created": "2024-10-08T17:33:17.73269471+08:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.1.0/24",
"Gateway": "10.0.1.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"5fb7dcf272f67031e0f99924153df4f2b5d1651235af0636676eb514cdecd499": {
"Name": "whoami.1.lg1qykgudfnsy545ukara3zwy",
"EndpointID": "67b49b58e72f72318ad4e230b1c8cd9e3ecd975488f21993c11e4f4bb4c9d1cc",
"MacAddress": "02:42:0a:00:01:09",
"IPv4Address": "10.0.1.9/24",
"IPv6Address": ""
},
"lb-overlay-demo": {
"Name": "overlay-demo-endpoint",
"EndpointID": "88ee3c49e51bc884c014a9307b00003162429c320fe2ec4406d69d5e95eb543a",
"MacAddress": "02:42:0a:00:01:0a",
"IPv4Address": "10.0.1.10/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4097"
},
"Labels": {},
"Peers": [
{
"Name": "cddbb9b2879f",
"IP": "xxx"
}
]
}
]
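One detail in the output above may matter: Peers lists only a single host. My understanding (which may be wrong) is that once tasks on both nodes share the overlay and the control plane is healthy, each node's inspect output should list both peers, so a single entry would suggest the two daemons never meshed. A hypothetical one-liner to compare peer lists, run on both servers:

```shell
# Print only the Peers section of the overlay network on this node.
# Run on server A and server B and compare: each should list the other
# node's address if the overlay control plane is working.
if command -v docker >/dev/null 2>&1; then
    docker network inspect overlay-demo \
        --format '{{range .Peers}}{{.Name}} {{.IP}}{{printf "\n"}}{{end}}'
else
    echo "skipped: docker not installed"
fi
```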
My approach and what I have tried
1. Many online sources say Docker Swarm needs TCP 2377, UDP 4789, and TCP/UDP 7946 open between nodes, so I tried opening those ports.
2. I then found that Alibaba Cloud does not allow UDP 4789, so I re-initialized the cluster with port 5789 as the data path port; it still does not work.
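Since the suspicion is blocked ports, it may help to probe them explicitly from each server toward the other. Below is a minimal sketch, assuming the standard Swarm port set plus the custom --data-path-port 5789, using bash's /dev/tcp to test TCP reachability. Note its limits: UDP 7946 (gossip) and the UDP VXLAN data-path port cannot be verified with a connect test, so they must additionally be allowed in the Alibaba Cloud security group on both instances.

```shell
# check_port HOST PORT - report whether a TCP port on HOST accepts connections.
check_port() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "tcp/$2 open"
    else
        echo "tcp/$2 closed"
    fi
}

# PEER is a placeholder (TEST-NET address); substitute the other node's IP.
PEER="${PEER:-192.0.2.10}"
# 2377: cluster management, 7946: gossip (also needs UDP), 5789: custom data path
for p in 2377 7946 5789; do
    check_port "$PEER" "$p"
done
```

Running this from A toward B and from B toward A would show immediately whether the security-group rules actually took effect in both directions.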
What I want to achieve
Containers deployed by Docker Swarm on different nodes should be able to reach each other across nodes by container/service name.