xdaxdb's profile - activity

2018-04-04 15:39:00 -0500 received badge  Notable Question (source)
2018-04-04 15:39:00 -0500 received badge  Popular Question (source)
2017-12-12 15:48:33 -0500 received badge  Famous Question (source)
2017-12-12 15:48:33 -0500 received badge  Notable Question (source)
2017-09-16 05:51:00 -0500 answered a question DNSDomainNotFound while integrating neutron with Designate

The docs example does use the neutron user, but both users should have the admin role, so I think either would work.

I think the reason you aren't seeing anything when you query from the CLI is that you are specifying the service tenant name. That means you are looking for DNS records in the service and admin tenants themselves, which is probably not the tenant that holds the records. The os-tenant-name is not part of the user's identity in the same way the os-domain is; users can exist in multiple tenants. In your CLI query, specify the tenant where the floating IP, instance, and DNS zone were created.
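
As a rough sketch (the project and zone names here are just placeholders), something like this should list the records in the right place:

openstack --os-project-name <tenant> zone list
openstack --os-project-name <tenant> recordset list example.com.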

2017-09-16 04:58:27 -0500 commented question DNSDomainNotFound while integrating neutron with Designate

The thing I find odd is that your log errors reference the dnscase2 record, but you are showing it as the output for the dnscase3 record. Also, the first error in the output is a rabbit communication error. What does your designate config look like? What errors are you seeing in the designate logs?

2017-09-08 10:42:27 -0500 received badge  Famous Question (source)
2017-07-24 13:07:36 -0500 received badge  Notable Question (source)
2017-07-24 13:07:36 -0500 received badge  Famous Question (source)
2017-04-20 10:46:36 -0500 received badge  Nice Answer (source)
2017-04-20 10:46:34 -0500 received badge  Nice Question (source)
2017-04-20 03:05:57 -0500 received badge  Notable Question (source)
2017-04-19 23:36:12 -0500 received badge  Self-Learner (source)
2017-04-19 23:36:12 -0500 received badge  Teacher (source)
2017-04-19 23:36:11 -0500 received badge  Student (source)
2017-04-19 23:34:42 -0500 received badge  Popular Question (source)
2017-04-19 20:26:57 -0500 answered a question metadata for instances that boot from volumes

Image metadata still works when using a volume as the boot source because when the volume is created it has a source property. This provides the relationship (as far as I can tell) between the volume and the metadata of the image from which it was created.

In order to customize the instance properties, metadata needs to be configured in both the image and the flavor. The image metadata lets the tenant customize the instance to their preference. The flavor metadata lets the administrator control which properties the image is actually allowed to set.

Both image and flavor metadata can be configured via Horizon by selecting update metadata from their respective drop-down lists.
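
As a rough CLI equivalent of the Horizon steps (the RNG-related property names are the ones I believe apply here, and the image/flavor names are placeholders):

openstack image set --property hw_rng_model=virtio <image>
openstack flavor set --property hw_rng:allowed=True <flavor>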

Reference:

https://docs.openstack.org/admin-guid...

https://docs.openstack.org/image-guid...

2017-04-19 16:14:32 -0500 asked a question metadata for instances that boot from volumes

In my environment all instances boot from Cinder volumes: when I launch an instance, I first create a volume using an image as the source, then launch the instance with the volume as the boot source.
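
Roughly, the commands I use look like this (names and sizes are placeholders):

openstack volume create --image <image> --size 20 boot-volume
openstack server create --volume boot-volume --flavor <flavor> --nic net-id=<network-id> <instance-name>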

I am trying to modify the instance's metadata to add an RNG device using

glance image-update --property hw_rng=true <UUID>

But my concern is that the metadata does not survive my instance-launch process, because the instance is launched with a volume as the source rather than a Glance image.

I would like to know if my assumptions about why this is not working are correct, and if there is a way I can overcome this.

2017-03-23 06:08:33 -0500 received badge  Popular Question (source)
2017-03-20 22:18:46 -0500 received badge  Popular Question (source)
2017-03-13 04:12:28 -0500 answered a question "neutron bgp-speaker-create" command is failing

The problem was actually pretty simple: I was missing the

[DEFAULT]
service_plugins = neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin,

in /etc/neutron/neutron.conf on the controller.
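
After adding that (on my RDO/systemd setup, so the unit name is an assumption), neutron-server needs a restart for the plugin to load, and the bgp extension should then show up:

systemctl restart neutron-server
neutron ext-list | grep -i bgp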

2017-03-12 09:30:58 -0500 answered a question can't set allocation pools for subnets created from subnet pools

I was able to overcome this by translating the neutron CLI command to an openstack CLI command and specifying a different combination of options.

openstack subnet create --subnet-pool provider --network provider \
--prefix-length 27 --subnet-range 192.168.84.64/27 \
--allocation-pool start=192.168.84.67,end=192.168.84.94 \
provider-64

I think the most important part of the change is including the --subnet-range option which enables the use of the --allocation-pool.
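
To confirm the pool was applied, the subnet details should show the custom allocation range:

openstack subnet show provider-64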

2017-03-10 21:33:53 -0500 asked a question cinder-volume storwize driver "host" creation on SAN

Recently (Ocata) I am seeing cinder-volume only creating a single "host" on the SAN (IBM Storwize). All the compute nodes seem to be sharing this single host/iqn definition.

In previous experience with the Storwize driver, cinder-volume would create a separate host/iqn definition for each compute node. Does anyone know if a (default) setting is causing this, or have any other explanation or insight into how the driver decides when to create a host?
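
One guess on my part: if the compute nodes share the same iSCSI initiator IQN (for example because they were cloned), the driver would only ever see one initiator and map everything to a single host. The IQN on each node can be checked with:

cat /etc/iscsi/initiatorname.iscsi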

2017-03-09 16:58:39 -0500 asked a question "neutron bgp-speaker-create" command is failing

When running neutron bgp-speaker-create or neutron bgp-speaker-list, the command fails with a "resource not found" error.

In the neutron-server.log I see a 404 related to the request

"POST /v2.0/bgp-speakers.json HTTP/1.1" 404 328 0.276875

Prior to trying to create a BGP speaker in neutron, there were other errors in neutron-bgp-dragent.log

ERROR neutron_dynamic_routing.services.bgp.agent.bgp_dragent [req-6b80d9cb-e730-4623-97f8-d649aef8791b - - - - -] Unable to sync BGP speaker state

I assumed this was in the logs because there were no speakers defined in neutron yet. But now I am wondering if this is a symptom of the same problem.
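
A 404 on the bgp-speakers endpoint makes me suspect the API extension isn't loaded at all; as a quick check, the bgp extension should appear in:

openstack extension list --network | grep -i bgp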

2017-03-09 14:42:18 -0500 commented question can't set allocation pools for subnets created from subnet pools

I'm considering just disabling DHCP for the subnet. I don't think the protocol itself is needed, but I'm concerned that doing so will also stop neutron from being aware of IP configuration it needs to function properly.
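
If I do go that route, I believe disabling DHCP on the existing subnet is just (subnet name is a placeholder):

openstack subnet set --no-dhcp <subnet>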

2017-03-09 12:55:33 -0500 asked a question can't set allocation pools for subnets created from subnet pools

I'm trying to configure a Neutron BGP speaker.

Part of the process involves creating a subnet from a subnet pool, from an address scope. This is the exact command from the docs:

$ neutron subnet-create --name provider --subnetpool provider \
  --prefixlen 24 --allocation-pool start=203.0.113.11,end=203.0.113.254 \
  --gateway 203.0.113.1 provider

The problem is that it returns an error about the --allocation-pool option not being available when creating a subnet from a subnet pool.

Bad subnets request: allocation_pools allowed only for specific subnet requests..

I would love to simply leave the option out. However, this is an external/physical network, and VRRP is in use on the routers.

Without the --allocation-pool option, the resulting subnet would assign the .2 address to some virtual entity, while both the .1 and .2 addresses are already used by the physical gateways on that external network.

2017-02-14 23:27:31 -0500 received badge  Famous Question (source)
2017-01-17 18:54:15 -0500 asked a question OVS HA DVR: SNAT only works from one network node

I have an HA DVR set up with 2 network nodes, one of which (node1) is also the network controller. There are 2 compute nodes.

SNAT only seems to work when the centralized SNAT interface is physically located on the network node that is also configured as the controller (node1). When I reboot node1 and the centralized SNAT port moves over to the other network node (node2), SNAT stops working and only instances with floating IPs can reach the outside.

After node1 comes back, if I reboot node2 to force failover of the centralized SNAT port, SNAT starts working again as expected.
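
For reference, the way I check where the centralized SNAT port currently lives (router ID is a placeholder) is the standard namespace/agent check on the network nodes:

ip netns | grep snat
neutron l3-agent-list-hosting-router <router-id>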

2017-01-14 05:13:06 -0500 received badge  Famous Question (source)
2017-01-13 16:28:05 -0500 answered a question Live migration "volume backed" no error, but no change

The cause of this had nothing to do with Cinder or Volume Backed Instances.

The cause was the default libvirtd settings: on RH/CentOS, libvirtd does not listen for external connections by default, and for live migration to work the hypervisors need to talk to each other directly.

Uncomment LIBVIRTD_ARGS="--listen" in /etc/sysconfig/libvirtd on the compute nodes.
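
For completeness, on my setup the listener also had to be enabled in /etc/libvirt/libvirtd.conf, followed by a libvirtd restart. These values are the usual non-TLS ones as far as I know, and auth_tcp = "none" is insecure outside a trusted management network, so treat this as a sketch:

listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

systemctl restart libvirtd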

I'm not sure if Nova is supposed to manage this or not (it should). I've noticed that the TLS settings in libvirtd are ignored by Nova, and there are settings in nova.conf that control TLS (which I am still struggling with; I don't think they are working properly).

2017-01-03 22:10:06 -0500 received badge  Editor (source)
2017-01-03 22:02:16 -0500 asked a question BGP speaker options using RDO newton

I was following the network guide for "BGP dynamic routing" which describes using the "bgp" service_plugin and the "RyuBgpDriver" bgp_speaker_driver.

I'm using RDO newton, and it doesn't look like there is a package available in the RDO repositories for the ryu driver. I also came across a note in the Red Hat OpenStack config reference saying that the ryu driver was discontinued in the Mitaka release, with a recommendation to use ml2 plugins instead.

If ryu is being deprecated, what is replacing it? Or perhaps I should be asking, what other methods are there for configuring a BGP speaker in RDO?
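
For reference, the guide's configuration amounts to something like this in bgp_dragent.ini (the driver path is the one I believe neutron-dynamic-routing ships, and the router ID is a placeholder):

[BGP]
bgp_speaker_driver = neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver.RyuBgpDriver
bgp_router_id = 192.0.2.1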

2017-01-01 19:11:05 -0500 received badge  Famous Question (source)
2016-12-31 22:05:22 -0500 commented question Live migration "volume backed" no error, but no change

Any thoughts anyone might have about what makes a volume backed migration different than a typical image backed migration would be helpful.

2016-12-29 04:37:09 -0500 marked best answer Can LDAP and SQL authentication be used together

I'd like to have users be able to authenticate using their LDAP credentials, but have admin authentication remain in SQL only. I do not want to use LDAP groups for authorization.

Is this a common configuration? So far I get the impression it is not.
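
The pattern I keep seeing described is to leave the service/admin accounts in the SQL-backed default domain and put LDAP users in their own domain via domain-specific driver configuration, roughly like this (paths and the domain name are placeholders/assumptions on my part). In keystone.conf:

[identity]
domain_specific_drivers_enabled = true
domain_config_dir = /etc/keystone/domains

and in /etc/keystone/domains/keystone.<LdapDomain>.conf:

[identity]
driver = ldap

[ldap]
url = ldap://<ldap-server>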

2016-12-29 03:24:49 -0500 received badge  Notable Question (source)
2016-12-29 01:39:09 -0500 commented answer Live migration "volume backed" no error, but no change

There's nothing very interesting in there, basically:

Volume info retrieved successfully.
Initialize volume connection completed successfully.
Volume info retrieved successfully.
Terminate volume connection completed successfully.
Unreserve volume completed successfully.
2016-12-29 01:21:18 -0500 received badge  Enthusiast
2016-12-28 23:40:19 -0500 received badge  Popular Question (source)
2016-12-28 20:47:20 -0500 asked a question Live migration "volume backed" no error, but no change

I am trying to live migrate an instance which is running from a cinder volume. The command completes without any errors, and I can't find any errors in the logs on the controller, source host, or destination host. The Dashboard and the CLI (nova and openstack commands) give the same result.
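
For reference, this is the sort of command I'm running (instance and destination host are placeholders; both forms give the same result for me):

nova live-migration <instance-uuid> <destination-host>
openstack server migrate --live <destination-host> <instance-uuid>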

The only thing I can find that is remotely suspicious is in nova-compute.log on the destination host:

WARNING os_brick.initiator.connectors.iscsi ... ISCSI volume not yet found at: [u'...']. Will rescan & retry.  Try number: 0.
WARNING nova.compute.manager ... Detaching volume from unknown instance
WARNING nova.virt.libvirt.driver ... During detach_volume, instance disappeared.

There are some messages in syslog that correspond with this, but I'm not certain if there is a problem or if this is normal behavior when the host attaches a new volume and re-scans the SCSI bus:

kernel: sd 11:0:0:0: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatical
kernel: sd 11:0:0:3: [sdi] Unit Not Ready
kernel: sd 11:0:0:3: [sdi] Sense Key : Illegal Request [current]
kernel: sd 11:0:0:3: [sdi] Add. Sense: Logical unit not supported
kernel: sd 11:0:0:3: [sdi] Read Capacity(16) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
kernel: sd 11:0:0:3: [sdi] Sense Key : Illegal Request [current]
kernel: sd 11:0:0:3: [sdi] Add. Sense: Logical unit not supported
kernel: sd 11:0:0:3: [sdi] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
kernel: sd 11:0:0:3: [sdi] Sense Key : Illegal Request [current]
kernel: sd 11:0:0:3: [sdi] Add. Sense: Logical unit not supported
kernel: scsi 11:0:0:1: Direct-Access     IBM      2145             0000 PQ: 0 ANSI: 6
kernel: scsi 11:0:0:1: alua: supports implicit TPGS
kernel: scsi 11:0:0:1: alua: port group 00 rel port 180
kernel: scsi 11:0:0:1: alua: rtpg failed with 8000002
kernel: scsi 11:0:0:1: alua: port group 00 state A non-preferred supports tolusna
kernel: scsi 11:0:0:1: alua: Attached
kernel: sd 11:0:0:1: Attached scsi generic sg3 type 0
kernel: sd 11:0:0:1: [sdd] 209715200 512-byte logical blocks: (107 GB/100 GiB)
kernel: sd 11:0:0:1: [sdd] Write Protect is off
kernel: sd 11:0:0:1: [sdd] Mode Sense: 97 00 10 08
kernel: sd 11:0:0:1: [sdd] Write cache: disabled, read cache: enabled, supports DPO and FUA
kernel:  sdd: sdd1

My only theory is that the timing of the process is a little off: Nova isn't seeing the volume as available on the destination host and is aborting the migration. But there is no explicit error message.