
woltjert's profile - activity

2019-02-06 09:06:41 -0500 received badge  Necromancer (source)
2019-02-06 09:06:41 -0500 received badge  Teacher (source)
2018-09-05 05:49:03 -0500 received badge  Famous Question (source)
2018-07-10 16:32:12 -0500 commented question network is unreachable in instance

What is the output of ip netns on the controllers?
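
For reference, on a controller running the L3 and DHCP agents I would expect to see namespaces along these lines (the UUIDs here are hypothetical):

$ ip netns
qrouter-9a1b2c3d-4e5f-6789-abcd-ef0123456789
qdhcp-1f2e3d4c-5b6a-7980-dcba-fe0987654321

If no qrouter-* namespace shows up, the L3 agent never created the router, which could explain the unreachable network.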

2018-07-10 16:32:12 -0500 received badge  Commentator
2018-07-09 12:37:56 -0500 commented question Problem with no traffic using VXLAN provider network type

Was this ever solved? I am having the same problem with VXLANs, but with Linux bridge instead of Open vSwitch.

2018-06-13 03:44:29 -0500 received badge  Famous Question (source)
2018-05-20 12:32:54 -0500 received badge  Famous Question (source)
2018-05-17 08:07:47 -0500 commented answer openstack-nova-api is dead & not able to access dashboard after reboot

netstat -le | grep 8774 returns nothing even though the nova-api service is running, and openstack server list returns an error. So nova-api is not listening on port 8774, despite the service running and osapi_compute_listen_port = 8774 being set.
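
For anyone else debugging this, a minimal way to check what (if anything) is bound to 8774, assuming ss is available as a netstat alternative:

$ ss -tlnp | grep 8774
$ grep osapi_compute_listen /etc/nova/nova.conf

An empty result from the first command while the second shows the expected port confirms the mismatch between the configuration and what the service actually binds.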

2018-05-16 13:45:34 -0500 answered a question openstack-nova-api is dead & not able to access dashboard after reboot

Did you ever figure this out? I am having the same problem.

2018-05-15 11:07:25 -0500 received badge  Notable Question (source)
2018-04-27 07:36:52 -0500 received badge  Popular Question (source)
2018-04-25 08:36:38 -0500 commented question Nova VNC works, but not in Dashboard

So it seems the problem more likely lies with HAProxy.

2018-04-25 08:27:53 -0500 commented answer nova-lxd with ceph storage possible?

That is actually a correspondence I had with James in March. It seems there are still some limits to nova-lxd: it is not ready for production with Ceph, and live migration is currently only experimental. While it doesn't look like LXD is ready for us, it's worth keeping an eye on.

2018-04-25 08:20:32 -0500 received badge  Notable Question (source)
2018-04-24 10:22:41 -0500 asked a question Nova VNC works, but not in Dashboard

I have the VNC console working, but in the OpenStack dashboard it does not load and shows error code 1006 instead. If I click to show only the console, it loads.

Nova logs on the controller when getting code 1006 in the dashboard:

==> /var/log/nova/nova-consoleauth.log <==
2018-04-24 10:23:58.894 3109 INFO nova.consoleauth.manager [req-41c322e4-76b7-47a3-aa10-27de67b47568 8bf003cdf58a4233b7e279aee1eec593 ee5d821a39174d8a91e3a741a6ecd97e - default default] Received Token: a634ec8f-c7db-45e2-8fc2-851340bb8ad3, {'instance_uuid': u'5f6785ff-f592-4f68-88a4-3ad512a959f2', 'access_url': u'http://controller:6080/vnc_auto.html?token=a634ec8f-c7db-45e2-8fc2-851340bb8ad3', 'token': u'a634ec8f-c7db-45e2-8fc2-851340bb8ad3', 'last_activity_at': 1524579838.89435, 'internal_access_path': None, 'console_type': u'novnc', 'host': u'192.168.116.23', 'port': u'5900'}

==> /var/log/nova/nova-novncproxy.log <==
2018-04-24 10:23:59.647 9358 INFO nova.console.websocketproxy [-] 192.168.116.22 - - [24/Apr/2018 10:23:59] 192.168.116.22: Plain non-SSL (ws://) WebSocket connection
2018-04-24 10:23:59.648 9358 INFO nova.console.websocketproxy [-] 192.168.116.22 - - [24/Apr/2018 10:23:59] 192.168.116.22: Version hybi-13, base64: 'False'
2018-04-24 10:23:59.649 9358 INFO nova.console.websocketproxy [-] 192.168.116.22 - - [24/Apr/2018 10:23:59] 192.168.116.22: Path: '/websockify'
2018-04-24 10:23:59.769 9358 INFO nova.console.websocketproxy [req-261346a6-84eb-4829-bdc6-b85f53939786 - - - - -] handler exception: The token 'a634ec8f-c7db-45e2-8fc2-851340bb8ad3' is invalid or has expired

The same logs when successfully showing only the console:

==> /var/log/nova/nova-consoleauth.log <==
2018-04-24 10:25:04.291 3109 INFO nova.consoleauth.manager [req-7926032b-fcd7-490e-8f0e-99b5bccef8da - - - - -] Checking Token: a634ec8f-c7db-45e2-8fc2-851340bb8ad3, True

==> /var/log/nova/nova-novncproxy.log <==
2018-04-24 10:25:04.589 9513 INFO nova.console.websocketproxy [req-7926032b-fcd7-490e-8f0e-99b5bccef8da - - - - -]   8: connect info: {u'instance_uuid': u'5f6785ff-f592-4f68-88a4-3ad512a959f2', u'internal_access_path': None, u'last_activity_at': 1524579838.89435, u'console_type': u'novnc', u'host': u'192.168.116.23', u'token': u'a634ec8f-c7db-45e2-8fc2-851340bb8ad3', u'access_url': u'http://controller:6080/vnc_auto.html?token=a634ec8f-c7db-45e2-8fc2-851340bb8ad3', u'port': u'5900'}
2018-04-24 10:25:04.590 9513 INFO nova.console.websocketproxy [req-7926032b-fcd7-490e-8f0e-99b5bccef8da - - - - -]   8: connecting to: 192.168.116.23:5900

The [vnc] section of /etc/nova/nova.conf on the controller

my_ip = 192.168.116.21

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_host = 192.168.116.21

The [vnc] section of /etc/nova/nova.conf on the compute node

my_ip = 192.168.116.23

[vnc]
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

The connection to the controller also goes through HAProxy. I wouldn't expect this to be an issue, since the console does partially work, but I am including the detail in case it matters.
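
One theory I am pursuing: with two controllers behind HAProxy, the dashboard request and the websocket connection can land on different nova-consoleauth instances, so the token is only known to one of them. A possible fix is to back consoleauth with a shared cache; this is only a sketch, assuming memcached runs on both controllers (the hostnames are mine):

[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = controller1:11211,controller2:11211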

2018-04-02 14:19:17 -0500 received badge  Famous Question (source)
2018-03-13 22:55:44 -0500 received badge  Popular Question (source)
2018-03-12 16:48:36 -0500 asked a question nova-lxd with ceph storage possible?

Hello,

I am looking to deploy an OpenStack cluster using LXD for compute and Ceph for storage, and I have doubts about whether this is possible and whether nova-lxd is mature enough for production. If anyone is running nova-lxd in production, or knows anything about it, please let me know. I've had a hard time finding good resources on the topic, specifically LXD + Ceph, which is supposedly possible, but I haven't heard whether it works in OpenStack. If you know of another resource that could be helpful, I would appreciate hearing about it.

Thanks.

2018-03-08 09:03:40 -0500 received badge  Notable Question (source)
2018-03-08 07:21:54 -0500 answered a question [Neutron] Can't ping router's external gateway.

The problem came down to Neutron running inside a VM; it disappeared after moving Neutron to bare metal.

2018-03-08 07:20:35 -0500 received badge  Notable Question (source)
2018-03-08 07:16:11 -0500 received badge  Scholar (source)
2018-03-08 07:16:07 -0500 received badge  Supporter (source)
2018-03-08 07:15:49 -0500 commented answer Volume migration fails "NotImplementedError: Swap only supports host devices"

That conclusion reflects what I have seen. I must say it is disappointing, considering Ceph is one of the big-ticket storage solutions for libvirt and OpenStack. Hopefully this issue is on their radar. Thanks for all of your help.

2018-03-07 17:11:46 -0500 received badge  Popular Question (source)
2018-03-07 07:17:34 -0500 commented question neutron: Error, "AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=compute1.example.com could not be found", caused by 'l2population'

It seems my cluster has the same issue. I'm curious whether, in the few months since, you have discovered another solution, or have filed a bug report that I can track.

2018-03-06 07:57:13 -0500 commented answer Volume migration fails "NotImplementedError: Swap only supports host devices"

My understanding of that Launchpad blueprint was that Cinder DOES support migration on RBD stores, but it is not driver-assisted and thus is ineffective. I am able to migrate volumes as long as they are not attached to an instance, and RBD does give a source path to libvirt, prefixed with rbd://
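
In the meantime I work around it by detaching first. A sketch of the sequence; the server, volume, and host@backend#pool names are placeholders for my setup:

$ openstack server remove volume myserver myvolume
$ cinder migrate myvolume controller@rbd-2#rbd-2
$ openstack server add volume myserver myvolume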

2018-03-05 12:58:41 -0500 received badge  Editor (source)
2018-03-05 12:56:50 -0500 asked a question Volume migration fails "NotImplementedError: Swap only supports host devices"

Cinder is set up to use two separate Ceph storage pools. When I try to migrate an attached volume, the new volume is created on the other pool, an error occurs, the migration fails, and the new volume is eventually deleted.

The only real error output I can find is in /var/log/nova/nova-compute.log

2018-03-05 08:30:57.404 32084 INFO nova.compute.manager [req-6c1860ec-1a71-4b65-bc14-54a3fd7c7069 8bf003cdf58a4233b7e279aee1eec593 ee5d821a39174d8a91e3a741a6ecd97e - default default] [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc] Swapping volume fc8e2f25-60de-44b5-bddf-98a7d35a9c21 for 7f7bf267-2b5a-40b4-9ab6-8b509c5b45f6
2018-03-05 08:30:58.501 32084 ERROR nova.compute.manager [req-6c1860ec-1a71-4b65-bc14-54a3fd7c7069 8bf003cdf58a4233b7e279aee1eec593 ee5d821a39174d8a91e3a741a6ecd97e - default default] [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc] Failed to swap volume fc8e2f25-60de-44b5-bddf-98a7d35a9c21 for 7f7bf267-2b5a-40b4-9ab6-8b509c5b45f6: NotImplementedError: Swap only supports host devices
2018-03-05 08:30:58.501 32084 ERROR nova.compute.manager [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc] Traceback (most recent call last):
2018-03-05 08:30:58.501 32084 ERROR nova.compute.manager [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5058, in _swap_volume
2018-03-05 08:30:58.501 32084 ERROR nova.compute.manager [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc]     resize_to)
2018-03-05 08:30:58.501 32084 ERROR nova.compute.manager [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1349, in swap_volume
2018-03-05 08:30:58.501 32084 ERROR nova.compute.manager [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc]     raise NotImplementedError(_("Swap only supports host devices"))
2018-03-05 08:30:58.501 32084 ERROR nova.compute.manager [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc] NotImplementedError: Swap only supports host devices
2018-03-05 08:30:58.501 32084 ERROR nova.compute.manager [instance: 7249d430-743e-4463-8d28-d13cdb8cfddc] 
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server [req-6c1860ec-1a71-4b65-bc14-54a3fd7c7069 8bf003cdf58a4233b7e279aee1eec593 ee5d821a39174d8a91e3a741a6ecd97e - default default] Exception during message handling: NotImplementedError: Swap only supports host devices
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 76, in wrapped
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server     function_name, call_dict, binary)
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server     self.force_reraise()
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-03-05 08:31:02.027 32084 ERROR oslo_messaging ...
(more)
2018-02-07 07:15:29 -0500 received badge  Popular Question (source)
2018-02-01 09:47:30 -0500 asked a question [Neutron] Can't ping router's external gateway.

I have created the router and networks as outlined in https://docs.openstack.org/ocata/install-guide-ubuntu/launch-instance.html#create-virtual-networks (the guide) for both the provider and selfservice networks. All interfaces on the Neutron router show status ACTIVE. However, pinging the router's external gateway only succeeds from the controller node where the Neutron services are running; pings fail from every other node, including the other controller. If I shut down the controller where pings succeed, the other controller starts to get replies, but all other nodes still cannot ping.

I have 2 controllers running in different VMs on different servers under QEMU/KVM. The API services on the controllers are set up for HA using the HA guide for reference (everything seems to work in that respect). The host servers running the controller VMs have a 10G bond carrying 2 VLANs, the management and provider network VLANs. All of the traffic between nodes is switched.
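
For the record, this is how I am testing from the node running the Neutron services; the router UUID and interface are placeholders for mine:

$ ip netns | grep qrouter
qrouter-<router-uuid>
$ sudo ip netns exec qrouter-<router-uuid> ip addr

The gateway IP is present on the qg- interface inside the namespace, which is why I believe the router side itself is healthy.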

2017-12-13 14:08:30 -0500 received badge  Organizer (source)
2017-12-06 16:05:00 -0500 commented question Migration attached volume between cinder pools

No one has answered this, and I am having the same problem. Have you solved it in the year since asking?

2017-12-01 08:59:50 -0500 received badge  Enthusiast
2017-11-28 14:22:44 -0500 received badge  Famous Question (source)
2017-11-27 07:47:29 -0500 received badge  Notable Question (source)
2017-11-25 14:29:02 -0500 received badge  Popular Question (source)
2017-11-24 10:05:27 -0500 asked a question Migration of attached volumes fails

I have two storage pools on shared storage. When I try to migrate a volume with Cinder from one pool to another, everything works as expected unless the volume is attached. If I try to migrate an attached volume, it creates the new volume on the second pool, spits out a large error in the logs, and then deletes the volume from the second pool. This is the error I am seeing:

2017-12-13 09:44:29.526 2171 INFO cinder.volume.flows.manager.create_volume [req-e53bd700-d327-4787-a7af-cce4b8f955f4 0f2400d4b85341dea4481382e49eab41 0bc21ce6de3f4870b0f4f68148956053 - default default] Volume ed416399-f356-4d65-9b6b-c4bf2c4efe6d: being created as raw with specification: {'status': u'creating', 'volume_size': 10, 'volume_name': 'volume-ed416399-f356-4d65-9b6b-c4bf2c4efe6d'}
2017-12-13 09:44:29.917 2171 INFO cinder.volume.flows.manager.create_volume [req-e53bd700-d327-4787-a7af-cce4b8f955f4 0f2400d4b85341dea4481382e49eab41 0bc21ce6de3f4870b0f4f68148956053 - default default] Volume volume-ed416399-f356-4d65-9b6b-c4bf2c4efe6d (ed416399-f356-4d65-9b6b-c4bf2c4efe6d): created successfully
2017-12-13 09:44:29.925 2171 INFO cinder.volume.manager [req-e53bd700-d327-4787-a7af-cce4b8f955f4 0f2400d4b85341dea4481382e49eab41 0bc21ce6de3f4870b0f4f68148956053 - default default] Created volume successfully.
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager [req-e53bd700-d327-4787-a7af-cce4b8f955f4 0f2400d4b85341dea4481382e49eab41 0bc21ce6de3f4870b0f4f68148956053 - default default] Failed to copy volume dc06f100-af7f-4124-b4f6-8f1a7e6a95a4 to ed416399-f356-4d65-9b6b-c4bf2c4efe6d
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager Traceback (most recent call last):
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 1788, in _migrate_volume_generic
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager     new_volume.id)
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/compute/nova.py", line 181, in update_server_volume
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager     new_volume_id)
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/novaclient/v2/volumes.py", line 68, in update_server_volume
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager     body, "volumeAttachment")
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 370, in _update
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager     resp, body = self.api.client.put(url, body=body)
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 196, in put
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager     return self.request(url, 'PUT', **kwargs)
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 107, in request
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager     **kwargs)
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 344, in request
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager     resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 112, in request
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager     return self.session.request(url, method, **kwargs)
2017-12-13 09:44:30.678 2256 ERROR cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/positional ...
(more)
2017-11-24 09:38:43 -0500 received badge  Popular Question (source)
2017-10-23 09:33:36 -0500 commented question Cannot fully boot Linux install CD image.

It seems that when I use a flavor that has ephemeral storage, the installer boots correctly. Is this expected behavior, or does it indicate a problem with my configuration of Cinder with Ceph?
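
For reference, the flavor that boots correctly looks roughly like this (a sketch; the name and sizes are just what I happened to use):

$ openstack flavor create --vcpus 2 --ram 2048 --disk 10 --ephemeral 5 m1.ephemeral

The equivalent flavor with --ephemeral 0 is the one that fails during installer boot.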

2017-10-21 22:43:25 -0500 asked a question Cannot fully boot Linux install CD image.

I am trying to create a Linux VM inside of OpenStack. I have added the installer ISO as an image and intend to install to a volume; Ceph is used to store both images and volumes. When I create the instance booting from the ISO image, it starts up fine, and I can see the ISO's boot selection from the console. However, booting the installer greets me with "Initramfs unpacking failed: write error". I don't quite know what would cause this, as I am very new to the OpenStack platform. I followed the base install guide for OpenStack on Ubuntu and added Ceph storage to that with Cinder. If there is any other information that would be useful in solving the problem, I can try to provide it.
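
For completeness, this is roughly how I am launching the installer; the image, flavor, and volume names here are mine:

$ openstack volume create --size 20 install-target
$ openstack server create --image linux-installer-iso --flavor m1.small installer-vm
$ openstack server add volume installer-vm install-target

The instance boots the ISO, and the installer should then see the attached Ceph-backed volume as its install target.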