Docker swarm join node as worker

@Ayman When I set up my environment I added 2 VMs, one running as a manager and the second as a worker. When I ran this on the worker:

```
docker swarm join --token SWMTKN-1-0wyjx6pp0go18oz9c62cda7d3v5fvrwwb444o33x56kxhzjda8-9uxcepj9pbhggtecds324a06u 192.168.65.3:2377
```

I got this error:

```
Error response from daemon: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 192.168.65.3:2377: connect: connection refused"
```


I had already run `firewall-cmd --add-port=2377/tcp --permanent` and `firewall-cmd --reload` on the manager side and was still getting the same error. I then tried `telnet <master ip> 2377` from the worker node, and after that I rebooted the manager.
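For what it's worth, swarm needs more than just 2377/tcp open. A hedged firewalld sketch covering the full set of swarm ports (run as root on every node; adjust the zone if you don't use the default):

```shell
# Open the ports Docker Swarm uses (firewalld):
# 2377/tcp      - cluster management (needed on managers)
# 7946/tcp+udp  - node-to-node gossip
# 4789/udp      - overlay network (VXLAN) data traffic
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
```

This is only a sketch of the firewall side; a "connection refused" can also mean nothing is listening on 2377 at all, which the checks below cover.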

Did you try disabling the firewall on all nodes, including the manager, and then trying again?

@Ayman No. When I searched on Google, people say the port listens on tcp6, so could that be the issue?

It looks like your swarm manager leader is not listening on port 2377. You can check by running this command on the swarm manager leader VM; if it is working fine, you will get output similar to this:

```
[root@host1]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
tilzootjbg7n92n4mnof0orf0 *   host1      Ready    Active         Leader
```

Furthermore, you can check the listening ports on the leader swarm manager node. It should have TCP port 2377 (cluster management communications) and TCP/UDP port 7946 (communication among the nodes) open:

```
[root@host1]# netstat -ntulp | grep dockerd
tcp6       0      0 :::2377                 :::*                    LISTEN      2286/dockerd
tcp6       0      0 :::7946                 :::*                    LISTEN      2286/dockerd
udp6       0      0 :::7946                 :::*                                2286/dockerd
```
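Regarding the tcp6 worry: on Linux those `:::2377` entries are dual-stack sockets, and by default they accept IPv4 connections too (unless `net.ipv6.bindv6only` is set), so the tcp6 listing itself is not the problem. To verify reachability from the worker without installing telnet, a small bash sketch (192.168.65.3 is the manager address from the question; swap in your own):

```shell
#!/usr/bin/env bash
# check_port HOST PORT -- exit 0 if a TCP connection succeeds,
# the same thing "telnet <host> <port>" verifies interactively.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are needed.
check_port() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if check_port 192.168.65.3 2377; then
  echo "port 2377 reachable"
else
  echo "port 2377 unreachable"
fi
```

If this prints "unreachable" from the worker while `netstat` on the manager shows dockerd listening on 2377, the block is on the network path (firewall, security group, or wrong IP), not in dockerd itself.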