Pike: Cinder Volume services are down on Controller Node without any apparent error in logs

Greetings,

I am experiencing issues with Cinder. Here are the relevant pieces of configuration:

From the storage node (/etc/cinder/cinder.conf):

[DEFAULT]
osapi_volume_listen = 192.168.10.60
api_paste_config = /etc/cinder/api-paste.ini
glance_host = 192.168.10.60
auth_strategy = keystone
debug = False
use_syslog = False
my_ip = 192.168.10.59
transport_url = rabbit://openstack:0e932761f8ddf9d9f175@192.168.10.60:5672//openstack
log_dir = /var/log/cinder
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes/
rootwrap_config = /etc/cinder/rootwrap.conf
default_volume_type = lvm-192.168.10.59
glance_api_servers = http://192.168.10.60:9292
enabled_backends = lvm-192.168.10.59,nfs-192.168.10.59
volume_clear = zero
volume_clear_size = 50
volume_clear_ionice = -c3
nas_secure_file_operations = false
nas_secure_file_permissions = false
nova_catalog_info = compute:nova:internalURL
nova_catalog_admin_info = compute:nova:adminURL
os_region_name = Bagneux
notification_driver = messagingv2
control_exchange = cinder
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinderdbuser:680b5b9b4b557830bfdd@192.168.10.60:3306/cinderdb
retry_interval = 10
idle_timeout = 3600
min_pool_size = 1
max_pool_size = 10
max_retries = 100
pool_timeout = 10
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://192.168.10.60:5000
auth_url = http://192.168.10.60:35357
auth_type = password
memcached_servers = 192.168.10.60:11211
project_domain_name = default
user_domain_name = default
project_name = services
username = cinder
password = *snipped*
region_name = Bagneux
project_domain_id = default
user_domain_id = default
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/oslock/cinder
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[lvm-192.168.10.59]
volume_group = centos
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi
iscsi_helper = tgtadm
iscsi_ip_address = 192.168.10.59
volume_backend_name = LVM_iSCSI-192.168.10.59
volume_clear = zero
volume_clear_size = 50
volume_clear_ionice = -c3
lvm_type = default
[nfs-192.168.10.59]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/nfs
nsf_disk_util = df
nfs_sparsed_volumes = True
nfs_mount_options = rw,hard,intr,timeo=90,bg,vers=3,proto=tcp,rsize=32768,wsize=32768
volume_backend_name = NFS-192.168.10.59
nfs_qcow2_volumes = True
nfs_snapshot_support = True
nas_secure_file_operations = false
nas_secure_file_permissions = false
volume_clear = zero
volume_clear_size = 50
volume_clear_ionice = -c3
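
One thing worth double-checking in this config is the transport_url. If I read oslo.messaging's transport URL parsing correctly, everything after the first slash following the port is taken as the RabbitMQ virtual host, so the double slash in "5672//openstack" encodes the vhost "/openstack" (leading slash included), not "openstack". A small standard-library sketch of what the URL actually encodes (the URL is copied from the config above):

```python
from urllib.parse import urlsplit, unquote

url = "rabbit://openstack:0e932761f8ddf9d9f175@192.168.10.60:5672//openstack"
parts = urlsplit(url)

# Only the separator slash is stripped, so the remaining "/openstack"
# (with its leading slash) is what gets used as the virtual host name.
vhost = unquote(parts.path[1:])

print(parts.hostname, parts.port, vhost)  # 192.168.10.60 5672 /openstack
```

If `rabbitmqctl list_vhosts` on the controller shows `openstack` rather than `/openstack`, that double slash would be a mismatch worth fixing, though since cinder-scheduler connects fine with the same URL it may well be benign here.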

From the controller (/etc/cinder/cinder.conf):

[DEFAULT]
osapi_volume_listen = 192.168.10.60
api_paste_config = /etc/cinder/api-paste.ini
glance_host = 192.168.10.60
auth_strategy = keystone
debug = False
use_syslog = False
my_ip = 192.168.10.60
transport_url = rabbit://openstack:0e932761f8ddf9d9f175@192.168.10.60:5672//openstack
log_dir = /var/log/cinder
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes/
rootwrap_config = /etc/cinder/rootwrap.conf
default_volume_type = lvm-192.168.10.60
glance_api_servers = http://192.168.10.60:9292
enabled_backends =
nova_catalog_info = compute:nova:internalURL
nova_catalog_admin_info = compute:nova:adminURL
os_region_name = Bagneux
notification_driver = messagingv2
control_exchange = cinder
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinderdbuser:680b5b9b4b557830bfdd@192.168.10.60:3306/cinderdb
retry_interval = 10
idle_timeout = 3600
min_pool_size = 1
max_pool_size = 10
max_retries = 100
pool_timeout = 10
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://192.168.10.60:5000
auth_url = http://192.168.10.60:35357
auth_type = password
memcached_servers = 192.168.10.60:11211
project_domain_name = default
user_domain_name = default
project_name = services
username = cinder
password = *snipped*
region_name = Bagneux
project_domain_id = default
user_domain_id = default
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/oslock/cinder
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]

Here is the output from the controller:

$ openstack volume service list
+------------------+-----------------------------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                                                | Zone | Status  | State | Updated At                 |
+------------------+-----------------------------------------------------+------+---------+-------+----------------------------+
| cinder-volume    | srv-heb-stack101.geoconcept.local@lvm-192.168.10.59 | nova | enabled | down  | 2018-08-31T12:59:18.000000 |
| cinder-volume    | srv-heb-stack101.geoconcept.local@nfs-192.168.10.59 | nova | enabled | down  | 2018-08-31T12:59:19.000000 |
| cinder-scheduler | srv-heb-stack102.geoconcept.local                   | nova | enabled | up    | 2018-08-31T12:54:05.000000 |
+------------------+-----------------------------------------------------+------+---------+-------+----------------------------+
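
As I understand it, the State column in this listing is derived from the Updated At heartbeat: a service reads as down once its last heartbeat is older than service_down_time (60 seconds by default, with report_interval defaulting to 10 seconds — both assumptions worth checking against your cinder.conf). A toy version of that logic, using the timestamp from the first cinder-volume row (the 90/30-second offsets are made up for illustration):

```python
from datetime import datetime, timedelta

# "Updated At" for the first cinder-volume row above
last_heartbeat = datetime.strptime("2018-08-31T12:59:18.000000",
                                   "%Y-%m-%dT%H:%M:%S.%f")

# Assumed default; override with service_down_time from cinder.conf if set
SERVICE_DOWN_TIME = timedelta(seconds=60)

def is_down(now, heartbeat, threshold=SERVICE_DOWN_TIME):
    """A service reads as 'down' once its heartbeat is older than the threshold."""
    return (now - heartbeat) > threshold

print(is_down(last_heartbeat + timedelta(seconds=90), last_heartbeat))  # True
print(is_down(last_heartbeat + timedelta(seconds=30), last_heartbeat))  # False
```

Since cinder-volume writes its heartbeat rows straight into the database and they are compared against the controller's clock, a stale Updated At on a healthy process usually points at the DB write not landing or the two clocks disagreeing.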

No errors appear in the logs on either node.

I am a little lost here; I have checked the config files, firewall rules, and transport_url as suggested in other threads, but everything seems fine and yet it does not work.

Do you have any fresh ideas I could try to solve this?

Best Regards,


EDIT: Based on Bernd's advice, here is the output of the systemctl status commands on the storage and controller nodes:

[root@srv-heb-stack101 ~]# systemctl status -l openstack-cinder-volume.service target.service
● openstack-cinder-volume.service - OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-03 10:44:56 CEST; 27s ago
 Main PID: 350301 (cinder-volume)
    Tasks: 3
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─350301 /usr/bin/python2 /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           ├─350326 /usr/bin/python2 /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           └─350328 /usr/bin/python2 /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log

Sep 03 10:45:01 srv-heb-stack101 sudo[350363]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix centos
Sep 03 10:45:01 srv-heb-stack101 sudo[350369]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude snapshot /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662
Sep 03 10:45:01 srv-heb-stack101 sudo[350370]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix centos
Sep 03 10:45:01 srv-heb-stack101 sudo[350374]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du --bytes /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662
Sep 03 10:45:02 srv-heb-stack101 sudo[350377]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix centos
Sep 03 10:45:02 srv-heb-stack101 sudo[350384]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix centos
Sep 03 10:45:02 srv-heb-stack101 sudo[350385]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf stat -f -c %S %b %a /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662
Sep 03 10:45:03 srv-heb-stack101 sudo[350391]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude snapshot /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662
Sep 03 10:45:03 srv-heb-stack101 sudo[350392]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix centos
Sep 03 10:45:03 srv-heb-stack101 sudo[350398]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du --bytes /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662

● target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
   Active: active (exited) since Mon 2018-09-03 10:44:56 CEST; 27s ago
  Process: 350302 ExecStop=/usr/bin/targetctl clear (code=exited, status=0/SUCCESS)
  Process: 350310 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 350310 (code=exited, status=0/SUCCESS)

Sep 03 10:44:56 srv-heb-stack101 systemd[1]: Starting Restore LIO kernel target configuration...
Sep 03 10:44:56 srv-heb-stack101 target[350310]: No saved config file at /etc/target/saveconfig.json, ok, exiting
Sep 03 10:44:56 srv-heb-stack101 systemd[1]: Started Restore LIO kernel target configuration.

[root@srv-heb-stack102 ~]# systemctl status -l openstack-nova-api.service openstack-cinder-api.service openstack-cinder-scheduler.service
● openstack-nova-api.service - OpenStack Nova API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-03 10:41:46 CEST; 19s ago
 Main PID: 21892 (nova-api)
    Tasks: 7
   CGroup: /system.slice/openstack-nova-api.service
           ├─21892 /usr/bin/python2 /usr/bin/nova-api
           ├─21929 /usr/bin/python2 /usr/bin/nova-api
           ├─21930 /usr/bin/python2 /usr/bin/nova-api
           ├─21933 /usr/bin/python2 /usr/bin/nova-api
           ├─21934 /usr/bin/python2 /usr/bin/nova-api
           ├─21935 /usr/bin/python2 /usr/bin/nova-api
           └─21936 /usr/bin/python2 /usr/bin/nova-api

Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Starting OpenStack Nova API Server...
Sep 03 10:41:41 srv-heb-stack102 nova-api[21892]: Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".
Sep 03 10:41:46 srv-heb-stack102 systemd[1]: Started OpenStack Nova API Server.
Sep 03 10:41:46 srv-heb-stack102 nova-api[21892]: /usr/lib64/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:161: SAWarning: The IN-predicate on "instance_mappings.instance_uuid" was invoked with an empty sequence. This results in a contradiction, which nonetheless can be expensive to evaluate. Consider alternative strategies for improved performance.
Sep 03 10:41:46 srv-heb-stack102 nova-api[21892]: 'strategies for improved performance.' % expr)

● openstack-cinder-api.service - OpenStack Cinder API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-03 10:41:38 CEST; 27s ago
 Main PID: 21890 (cinder-api)
    Tasks: 5
   CGroup: /system.slice/openstack-cinder-api.service
           ├─21890 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
           ├─21922 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
           ├─21923 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
           ├─21924 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
           └─21925 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log

Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Started OpenStack Cinder API Server.
Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Starting OpenStack Cinder API Server...

● openstack-cinder-scheduler.service - OpenStack Cinder Scheduler Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-03 10:41:38 CEST; 27s ago
 Main PID: 21891 (cinder-schedule)
    Tasks: 1
   CGroup: /system.slice/openstack-cinder-scheduler.service
           └─21891 /usr/bin/python2 /usr/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/scheduler.log

Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Started OpenStack Cinder Scheduler Server.
Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Starting OpenStack Cinder Scheduler Server...
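
Given that cinder-volume is clearly alive here and running its periodic LVM/NFS tasks while the controller still reports it down, one mundane cause worth ruling out is clock skew between the two hosts, since the heartbeat timestamp is compared against the controller's idea of "now". A sketch of that comparison, with hypothetical `date -u +%Y-%m-%dT%H:%M:%S` readings plugged in (substitute the real outputs from srv-heb-stack101 and srv-heb-stack102):

```python
from datetime import datetime

# Hypothetical clock readings taken at the same moment on each host
storage_now = datetime.strptime("2018-09-03T10:45:03", "%Y-%m-%dT%H:%M:%S")
controller_now = datetime.strptime("2018-09-03T10:46:10", "%Y-%m-%dT%H:%M:%S")

skew = abs((controller_now - storage_now).total_seconds())

# Skew approaching service_down_time (60 s by default) makes perfectly
# fresh heartbeats look stale, flipping a healthy service to "down".
print(skew, skew > 60)  # 67.0 True
```

If the skew turns out to be near zero, the next place I would look is whether the heartbeat rows in the cinder database (the services table) are actually being updated while the service runs.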

Pike: Cinder Volume services are down on Controller Node without any apparent error in logs

Greetings,

I am experiencing issues regarding Cinder. Here are the configuration elements:

From storage node:

[DEFAULT]
osapi_volume_listen = 192.168.10.60
api_paste_config = /etc/cinder/api-paste.ini
glance_host = 192.168.10.60
auth_strategy = keystone
debug = False
use_syslog = False
my_ip = 192.168.10.59
transport_url = rabbit://openstack:0e932761f8ddf9d9f175@192.168.10.60:5672//openstack
log_dir = /var/log/cinder
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes/
rootwrap_config = /etc/cinder/rootwrap.conf
default_volume_type = lvm-192.168.10.59
glance_api_servers = http://192.168.10.60:9292
enabled_backends =lvm-192.168.10.59,nfs-192.168.10.59
volume_clear = zero
volume_clear_size = 50
volume_clear_ionice = -c3
nas_secure_file_operations = false
nas_secure_file_permissions = false
nova_catalog_info = compute:nova:internalURL
nova_catalog_admin_info = compute:nova:adminURL
os_region_name = Bagneux
notification_driver = messagingv2
control_exchange = cinder
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinderdbuser:680b5b9b4b557830bfdd@192.168.10.60:3306/cinderdb
retry_interval = 10
idle_timeout = 3600
min_pool_size = 1
max_pool_size = 10
max_retries = 100
pool_timeout = 10
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://192.168.10.60:5000
auth_url = http://192.168.10.60:35357
auth_type = password
memcached_servers = 192.168.10.60:11211
project_domain_name = default
user_domain_name = default
project_name = services
username = cinder
password = *snipped*
region_name = Bagneux
project_domain_id = default
user_domain_id = default
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/oslock/cinder
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[lvm-192.168.10.59]
volume_group = centos
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi
iscsi_helper = tgtadm
iscsi_ip_address = 192.168.10.59
volume_backend_name = LVM_iSCSI-192.168.10.59
volume_clear = zero
volume_clear_size = 50
volume_clear_ionice = -c3
lvm_type = default
[nfs-192.168.10.59]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/nfs
nsf_disk_util = df
nfs_sparsed_volumes = True
nfs_mount_options = rw,hard,intr,timeo=90,bg,vers=3,proto=tcp,rsize=32768,wsize=32768
volume_backend_name = NFS-192.168.10.59
nfs_qcow2_volumes = True
nfs_snapshot_support = True
nas_secure_file_operations = false
nas_secure_file_permissions = false
volume_clear = zero
volume_clear_size = 50
volume_clear_ionice = -c3

from the controller:

[DEFAULT]
osapi_volume_listen = 192.168.10.60
api_paste_config = /etc/cinder/api-paste.ini
glance_host = 192.168.10.60
auth_strategy = keystone
debug = False
use_syslog = False
my_ip = 192.168.10.60
transport_url = rabbit://openstack:0e932761f8ddf9d9f175@192.168.10.60:5672//openstack
log_dir = /var/log/cinder
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes/
rootwrap_config = /etc/cinder/rootwrap.conf
default_volume_type = lvm-192.168.10.60
glance_api_servers = http://192.168.10.60:9292
enabled_backends =
nova_catalog_info = compute:nova:internalURL
nova_catalog_admin_info = compute:nova:adminURL
os_region_name = Bagneux
notification_driver = messagingv2
control_exchange = cinder
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinderdbuser:680b5b9b4b557830bfdd@192.168.10.60:3306/cinderdb
retry_interval = 10
idle_timeout = 3600
min_pool_size = 1
max_pool_size = 10
max_retries = 100
pool_timeout = 10
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://192.168.10.60:5000
auth_url = http://192.168.10.60:35357
auth_type = password
memcached_servers = 192.168.10.60:11211
project_domain_name = default
user_domain_name = default
project_name = services
username = cinder
password = *snipped*
region_name = Bagneux
project_domain_id = default
user_domain_id = default
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/oslock/cinder
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]

here is the output from controller:

$ openstack volume service list
+------------------+-----------------------------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                                                | Zone | Status  | State | Updated At                 |
+------------------+-----------------------------------------------------+------+---------+-------+----------------------------+
| cinder-volume    | srv-heb-stack101@lvm-192.168.10.59 | nova | enabled | down  | 2018-08-31T12:59:18.000000 |
| cinder-volume    | srv-heb-stack101@nfs-192.168.10.59 | nova | enabled | down  | 2018-08-31T12:59:19.000000 |
| cinder-scheduler | srv-heb-stack102                   | nova | enabled | up    | 2018-08-31T12:54:05.000000 |
+------------------+-----------------------------------------------------+------+---------+-------+----------------------------+

And no error is shown on logs from both nodes.

I am a little lost here; I haved checked conf files, firewall rules, transport_url as stated on some other threads but everything seems fine yet it does not work.

Would you have some fresh ideas I might try out to solve this?

Best Regards,

EDIT: Based on Bernd's advice, here is the output of the systemctl commands on the storage and controller nodes:

[root@srv-heb-stack101 ~]# systemctl status -l openstack-cinder-volume.service target.service
● openstack-cinder-volume.service - OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-03 10:44:56 CEST; 27s ago
 Main PID: 350301 (cinder-volume)
    Tasks: 3
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─350301 /usr/bin/python2 /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           ├─350326 /usr/bin/python2 /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           └─350328 /usr/bin/python2 /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log

Sep 03 10:45:01 srv-heb-stack101 sudo[350363]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix centos
Sep 03 10:45:01 srv-heb-stack101 sudo[350369]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude *snapshot* /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662
Sep 03 10:45:01 srv-heb-stack101 sudo[350370]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix centos
Sep 03 10:45:01 srv-heb-stack101 sudo[350374]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du --bytes /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662
Sep 03 10:45:02 srv-heb-stack101 sudo[350377]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix centos
Sep 03 10:45:02 srv-heb-stack101 sudo[350384]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix centos
Sep 03 10:45:02 srv-heb-stack101 sudo[350385]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf stat -f -c %S %b %a /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662
Sep 03 10:45:03 srv-heb-stack101 sudo[350391]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du -sb --apparent-size --exclude *snapshot* /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662
Sep 03 10:45:03 srv-heb-stack101 sudo[350392]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix centos
Sep 03 10:45:03 srv-heb-stack101 sudo[350398]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du --bytes /var/lib/cinder/nfs/c24894a33cc4dbd201310976acce9662

● target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
   Active: active (exited) since Mon 2018-09-03 10:44:56 CEST; 27s ago
  Process: 350302 ExecStop=/usr/bin/targetctl clear (code=exited, status=0/SUCCESS)
  Process: 350310 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 350310 (code=exited, status=0/SUCCESS)

Sep 03 10:44:56 srv-heb-stack101 systemd[1]: Starting Restore LIO kernel target configuration...
Sep 03 10:44:56 srv-heb-stack101 target[350310]: No saved config file at /etc/target/saveconfig.json, ok, exiting
Sep 03 10:44:56 srv-heb-stack101 systemd[1]: Started Restore LIO kernel target configuration.

[root@srv-heb-stack102 ~]# systemctl status -l openstack-nova-api.service openstack-cinder-api.service openstack-cinder-scheduler.service
● openstack-nova-api.service - OpenStack Nova API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-03 10:41:46 CEST; 19s ago
 Main PID: 21892 (nova-api)
    Tasks: 7
   CGroup: /system.slice/openstack-nova-api.service
           ├─21892 /usr/bin/python2 /usr/bin/nova-api
           ├─21929 /usr/bin/python2 /usr/bin/nova-api
           ├─21930 /usr/bin/python2 /usr/bin/nova-api
           ├─21933 /usr/bin/python2 /usr/bin/nova-api
           ├─21934 /usr/bin/python2 /usr/bin/nova-api
           ├─21935 /usr/bin/python2 /usr/bin/nova-api
           └─21936 /usr/bin/python2 /usr/bin/nova-api

Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Starting OpenStack Nova API Server...
Sep 03 10:41:41 srv-heb-stack102 nova-api[21892]: Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".
Sep 03 10:41:46 srv-heb-stack102 systemd[1]: Started OpenStack Nova API Server.
Sep 03 10:41:46 srv-heb-stack102 nova-api[21892]: /usr/lib64/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:161: SAWarning: The IN-predicate on "instance_mappings.instance_uuid" was invoked with an empty sequence. This results in a contradiction, which nonetheless can be expensive to evaluate. Consider alternative strategies for improved performance.
Sep 03 10:41:46 srv-heb-stack102 nova-api[21892]: 'strategies for improved performance.' % expr)

● openstack-cinder-api.service - OpenStack Cinder API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-03 10:41:38 CEST; 27s ago
 Main PID: 21890 (cinder-api)
    Tasks: 5
   CGroup: /system.slice/openstack-cinder-api.service
           ├─21890 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
           ├─21922 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
           ├─21923 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
           ├─21924 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
           └─21925 /usr/bin/python2 /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log

Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Started OpenStack Cinder API Server.
Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Starting OpenStack Cinder API Server...

● openstack-cinder-scheduler.service - OpenStack Cinder Scheduler Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-03 10:41:38 CEST; 27s ago
 Main PID: 21891 (cinder-schedule)
    Tasks: 1
   CGroup: /system.slice/openstack-cinder-scheduler.service
           └─21891 /usr/bin/python2 /usr/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/scheduler.log

Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Started OpenStack Cinder Scheduler Server.
Sep 03 10:41:38 srv-heb-stack102 systemd[1]: Starting OpenStack Cinder Scheduler Server...
