Setting up the firewall on OpenStack nodes
I tried to configure the firewall for the OpenStack Controller and Compute nodes; here are the rules I added (firewalld zone output):
myZone (active)
target: default
icmp-block-inversion: no
interfaces: enp2s0 enp7s4
sources:
services: ssh dhcpv6-client
ports: 80/tcp 6080/tcp 11211/tcp 9696/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="192.168.0.32" accept
rule family="ipv4" source address="192.168.0.31" accept
The addresses of the Controller and Compute nodes are 192.168.0.31 and 192.168.0.32, respectively. With these rules I can use Horizon in the browser, and the Compute node services can connect to the Controller node's ports.
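For reference, a minimal sketch of the firewall-cmd commands that would produce a zone like the one above (the zone name myZone is taken from the output; adjust it to your active zone — the ssh and dhcpv6-client services shown are already enabled by default in the public zone):
firewall-cmd --permanent --zone=myZone --add-port=80/tcp
firewall-cmd --permanent --zone=myZone --add-port=6080/tcp
firewall-cmd --permanent --zone=myZone --add-port=11211/tcp
firewall-cmd --permanent --zone=myZone --add-port=9696/tcp
# Rich rules accepting all traffic from the two node addresses
firewall-cmd --permanent --zone=myZone --add-rich-rule='rule family="ipv4" source address="192.168.0.31" accept'
firewall-cmd --permanent --zone=myZone --add-rich-rule='rule family="ipv4" source address="192.168.0.32" accept'
firewall-cmd --reload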
The problem is that when the firewall is enabled on the Controller node, instances running on the Controller node (I configured the Controller node as a Compute node, too) can only be pinged; all other VMs and nodes (including the Controller node itself) cannot connect to them over SSH or any other connection to a specific port. There is no firewall running on the instances. I configured an external network to connect the VMs to each other, and CentOS 7 is running on all nodes. Here are the ports listening on the Controller node:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8775 0.0.0.0:* LISTEN 4478/python2
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 4461/python2
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 10189/httpd
tcp 0 0 0.0.0.0:8776 0.0.0.0:* LISTEN 4487/python2
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 4466/beam.smp
tcp 0 0 0.0.0.0:8778 0.0.0.0:* LISTEN 10189/httpd
tcp 0 0 192.168.0.31:3306 0.0.0.0:* LISTEN 4860/mysqld
tcp 0 0 192.168.0.31:2379 0.0.0.0:* LISTEN 4464/etcd
tcp 0 0 192.168.0.31:11211 0.0.0.0:* LISTEN 4457/memcached
tcp 0 0 127.0.0.1:11211 0.0.0.0:* LISTEN 4457/memcached
tcp 0 0 192.168.0.31:5900 0.0.0.0:* LISTEN 16844/qemu-kvm
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 4500/python2
tcp 0 0 192.168.0.31:2380 0.0.0.0:* LISTEN 4464/etcd
tcp 0 0 192.168.0.31:5901 0.0.0.0:* LISTEN 16982/qemu-kvm
tcp 0 0 192.168.0.31:5902 0.0.0.0:* LISTEN 17339/qemu-kvm
tcp 0 0 192.168.0.31:5903 0.0.0.0:* LISTEN 17621/qemu-kvm
tcp 0 0 192.168.0.31:5904 0.0.0.0:* LISTEN 17840/qemu-kvm
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 10189/httpd
tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 1 ...
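If you prefer opening the OpenStack service ports explicitly instead of accepting everything from the node addresses, here is a sketch derived from the listing above (whether each of these ports should be reachable from other hosts is a policy decision for your deployment):
for p in 5000 8775 8776 8778 9191 9292 3306 2379 2380 4369 25672; do
    firewall-cmd --permanent --zone=myZone --add-port=${p}/tcp
done
firewall-cmd --permanent --zone=myZone --add-port=5900-5999/tcp   # qemu-kvm VNC consoles
firewall-cmd --reload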
What do you mean by "cannot connect to it"? When you try to connect, is there any error message? Are you sure there is a security group rule for ssh?
To understand where the SSH packets are being swallowed, I would do some tracing with tcpdump on the router interfaces, bridges, etc., as appropriate.
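For example, a minimal tracing sketch (the interface and namespace names are assumptions; list the router namespaces with ip netns and substitute your own):
# On the controller host: watch for SSH packets on the physical interface
tcpdump -n -i enp2s0 tcp port 22
# Inside the Neutron router namespace (qrouter-<uuid> is the usual naming)
ip netns
ip netns exec qrouter-<router-uuid> tcpdump -n -i any tcp port 22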
I mean that when the firewall of the Controller node is off, I can connect to the VMs remotely using SSH (so the security groups work fine), but when I turn the firewall on, I can only ping the VMs and SSH fails with "No route to host". I guess it's because the VMs use a bridged connection.
The VM's floating IP is translated to its static IP in the router namespace, which is normally connected to some external bridge (e.g. OVS or Linuxbridge) via the gateway interface.
I can't say what effect the firewall has on that bridge, but that is the place to start searching, i.e. tracing.
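One thing worth checking (an assumption on my part, not a confirmed diagnosis): when bridge netfilter is enabled, bridged frames also traverse the host's iptables rules, so the host firewall can drop VM traffic crossing the bridge:
# 1 means bridged traffic is passed through iptables (requires the br_netfilter module)
sysctl net.bridge.bridge-nf-call-iptables
# While reproducing the SSH failure, watch for growing REJECT/DROP packet counters
iptables -L -v -n | grep -i -e REJECT -e DROP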
See the Linuxbridge connection diagrams from the networking guide.
I created ports for the instances and ran them using static IPs.
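For reference, a sketch of how such a port and instance might be created (the network, subnet, image, and flavor names here are assumptions):
# Create a port with a static IP on the external network
openstack port create --network ext-net \
    --fixed-ip subnet=ext-subnet,ip-address=192.168.0.50 vm1-port
# Boot an instance attached to that port (use the port's UUID)
openstack server create --image centos7 --flavor m1.small \
    --nic port-id=<port-uuid> vm1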