Ask Your Question

schmaustech's profile - activity

2018-02-24 05:50:50 -0500 received badge  Nice Answer (source)
2017-06-22 23:15:02 -0500 received badge  Nice Answer (source)
2016-09-27 19:24:38 -0500 received badge  Nice Answer (source)
2015-07-03 18:30:44 -0500 received badge  Nice Answer (source)
2015-05-07 04:36:02 -0500 received badge  Nice Answer (source)
2015-05-06 12:09:30 -0500 received badge  Necromancer (source)
2015-02-01 18:59:30 -0500 answered a question Neutron dvr and vrrp config

Here are the DVR configuration steps: http://schmaustech.blogspot.com/2014/...

2015-02-01 18:56:49 -0500 answered a question Some issue on Havana to IceHouse migration document
2015-02-01 18:39:44 -0500 commented answer Installation of RDO single Node

Hi Steve :)

2015-01-30 19:16:50 -0500 commented question Taking more time to display network topology in Horizon

What does the load on the box look like that is running Horizon?

2015-01-30 19:05:10 -0500 received badge  Citizen Patrol (source)
2015-01-30 15:49:21 -0500 received badge  Supporter (source)
2015-01-30 06:45:28 -0500 answered a question Upgrade from Folsom to Juno

I don't see you being able to accomplish this, as it was never the intent of the OpenStack community. If you want to take what you have from Folsom to Juno, you would need to go through the upgrade process of each intermediate release, and even then there may be caveats and pitfalls at each step.

Your first step should be to read the release notes of every release and see what they have listed for upgrades.

Folsom->Grizzly->Havana->Icehouse->Juno

2015-01-30 06:37:26 -0500 answered a question Installation of RDO single Node

Use RDO-Juno with CentOS 7 and bypass all the Red Hat sales jargon of Server/Workstation/Desktop. At the end of the day all three of them are essentially the same and use the same kernel.

With regards to hardware, make sure you have a CPU with multiple cores (i7); the more the better. You will also want at least 8GB of RAM to run it adequately. It does seem to run on 4GB, but rather slowly, especially if you plan on launching compute instances on the same node in an all-in-one configuration. Also ensure you have adequate disk if you plan on doing anything with Cinder LVM and/or Swift.

As far as hardware makes/models go, anything that Red Hat/CentOS supports should work fine, which in most cases is any current server, desktop, or laptop model.

2015-01-30 06:32:27 -0500 answered a question Instance creation errors for tiny flavor whereas rest of the flavors works fine

In my experience, an instance can fail to launch because the flavor backing the image does not have enough resources for it. The Cirros image will work with any of the stock flavors in OpenStack, so use it to test the two flavors that are failing with your current image. If Cirros launches, you know your image can only be used with the larger, successful flavors. If Cirros also fails on those two flavors, then the flavors themselves are likely misconfigured; in that case, remove and recreate them.
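A quick way to run that test from the CLI might look like the following (the image URL, image name, and flavor name are examples for a Juno-era client; adjust to your environment):

```shell
# Upload a Cirros test image (glance v1 client syntax; URL/version is an example)
glance image-create --name cirros-test --disk-format qcow2 \
  --container-format bare --is-public True \
  --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

# Boot it against each flavor that fails with your own image
nova boot --image cirros-test --flavor m1.tiny cirros-tiny-test

# Check whether the instance reaches ACTIVE
nova list
```

If the Cirros instance goes ACTIVE on a flavor that your image fails on, the problem is the image's resource requirements rather than the flavor.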

2015-01-30 06:14:39 -0500 answered a question can't ssh to new created instances any more

I don't have an exact answer as to what the problem is, but I can provide some ideas on how to troubleshoot it.

First, launch an instance with the Cirros image, because with that image you can log in via the console even if your instance's cloud-init does not get any metadata/key injection or network connectivity from outside.

Once you have launched the above instance, log into it and run:

ifconfig -a

This will determine whether your instance is being assigned a DHCP private address. If it is, that at least confirms your Open vSwitch plumbing and the DHCP agent on the neutron node are working, and most likely points to a problem with the L3 agent.

In your reference above you mention that the console is not accessible either.

In this case, I would check:

nova service-list
neutron agent-list

and make sure everything is happy there.

If you are seeing issues in the nova service output on the compute node, restart openvswitch, run ovs-cleanup, and then run openstack-service restart.

If neutron also looks unhappy, restart those services as well, but ensure that all the neutron services have been killed off before you bring them back up. I have seen dnsmasq and the metadata proxy have a tendency to not get stopped. You will also want to run ip netns on the neutron node and ensure that all qdhcp and qrouter namespaces have been removed.

Then start up all your neutron services.
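On an RDO-style install, the sequence described above looks roughly like this (service names and pkill patterns are assumptions; verify them before running this on a production node):

```shell
# On the compute node: reset Open vSwitch, then restart the OpenStack services
systemctl restart openvswitch
neutron-ovs-cleanup
openstack-service restart

# On the neutron node: stop agents, kill leftovers, clean namespaces
openstack-service stop neutron
pkill dnsmasq                        # dnsmasq often survives an agent stop
pkill -f neutron-ns-metadata-proxy   # so does the metadata proxy
for ns in $(ip netns | grep -E 'qdhcp|qrouter'); do
  ip netns delete "$ns"
done
openstack-service start neutron
ip netns   # namespaces should repopulate as the agents resync
```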

2015-01-29 20:32:01 -0500 answered a question is it possible to aggregate physical resources in order to obtain a vm with more resources than each physical server makes available?

There is no mechanism to aggregate two physically separate compute nodes into a single pool of resources that a virtual machine can run on top of. You can, however, overprovision a compute node: in your example of a compute node with 2 processors and 8GB of RAM, you could overprovision to 4 vCPUs and 16GB of RAM. You will need to track resource usage closely, given you are overprovisioned.
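The overcommit ratios are controlled in nova.conf; a sketch of the 2x CPU and 2x RAM overcommit from the example above (Juno-era defaults are 16.0 for CPU and 1.5 for RAM):

```ini
# /etc/nova/nova.conf
[DEFAULT]
# 2 physical cores presented to the scheduler as 4 vCPUs
cpu_allocation_ratio = 2.0
# 8GB physical RAM presented as 16GB
ram_allocation_ratio = 2.0
```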

2015-01-29 06:27:19 -0500 answered a question How to configure monasca on openstack?

This might be a good starting point:

https://github.com/stackforge/monasca...

https://github.com/stackforge/monasca...

Both have READMEs associated with the code that provide a lot of detail on setup.

2015-01-26 19:33:25 -0500 commented question failed to launch instance. ceph issue

Confirm on your compute node that /etc/ceph/ceph.conf exists, contains the correct information about your Ceph cluster, and has the right cephx auth information, so that the compute node (which is a client of Ceph) can connect to the cluster outside of OpenStack/Nova.

2015-01-26 06:54:30 -0500 commented question Can anyone assist with an issue attaching volume to instance when storage is used?

I would recommend manually configuring an iSCSI target, presenting it to the compute node and manually scanning to see if it is visible. This would ensure the plumbing is in place to allow for the actual automation of OpenStack to work. Right now it appears the iSCSI target is never visible.

2015-01-26 06:41:14 -0500 answered a question Switch speed for instances on shared storage

The size of your network connectivity really depends on the scale of your compute environment, your Ceph cluster, and your rate of change against that Ceph cluster, which drives the amount of replication between the Ceph OSD nodes. 10GbE might work for a small cluster/compute environment, but as your environment grows you may find that you are saturating your backend switches. One way to prepare for scale is to put your Ceph replication traffic on a dedicated 10GbE link with a physically separate switch from the 10GbE front-side client traffic to the OpenStack compute nodes. Ensure your switches can support up to 40GbE if you plan on growing the environment very large.

With regards to your question about configuring Ceph as a standby in the event that the local disks on a compute node are unavailable, I have not seen anything that supports this configuration. First, the local disk backing a compute instance would need block-level replication over to the Ceph cluster to keep writes in sync between the local disk and the Ceph disk. Along with that, OpenStack would need some kind of understanding/configuration indicating that when the local compute disk is down, it should request storage from the Ceph cluster. Neither of these exists today.
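For reference, separating replication traffic from client traffic is done in ceph.conf; the subnets below are placeholders for your own front- and back-side networks:

```ini
# /etc/ceph/ceph.conf
[global]
# Front-side network the OpenStack compute nodes (Ceph clients) talk to
public network = 192.0.2.0/24
# Dedicated back-side network for OSD replication traffic
cluster network = 198.51.100.0/24
```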

2015-01-25 14:27:41 -0500 received badge  Autobiographer
2015-01-25 10:22:43 -0500 answered a question More official way to listen to notifications

I think your best bet would be to look into StackTach developed by Rackspace but available to the community:

https://github.com/rackerlabs/stacktach

http://stacktach.readthedocs.org/en/l...

https://www.openstack.org/summit/open...

Even if you are not wanting to use StackTach, it can provide the examples you seek since the code is available and viewable.

2015-01-25 06:51:42 -0500 received badge  Commentator
2015-01-25 06:51:42 -0500 commented question l3 agent does not create qrouter namespace in ha

Do you have a br-ex interface associated with an interface that allows connectivity between your internal network and external network? For example, ovs-vsctl show on your primary node should look the same as on the failover node.

2015-01-25 06:44:45 -0500 answered a question How to let tgtd listen on a specific interface

Example: to bind tgtd to a specific address and port

tgtd --iscsi portal=192.0.2.1:3260

Adjust your startup script as needed to gain those desired results.
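On a systemd-based distribution, one way to make that flag stick is a unit override rather than editing the init script; the override path, binary location, and the foreground -f flag here are assumptions to verify against your distribution:

```ini
# /etc/systemd/system/tgtd.service.d/portal.conf
[Service]
# Clear the packaged ExecStart, then relaunch tgtd bound to one portal
ExecStart=
ExecStart=/usr/sbin/tgtd -f --iscsi portal=192.0.2.1:3260
```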

2015-01-25 06:43:19 -0500 answered a question What is a "publishing policy" for ceilometer

You can get a sense of what a publisher is in the Pipelines section of the following link, first paragraph.

https://openstack.redhat.com/Ceilomet...

Depending on the version of OpenStack you might have been hitting this bug:

https://bugs.launchpad.net/ceilometer...

2015-01-25 06:32:03 -0500 answered a question Is there a Cinder driver for HP P9500 storage array?

There does not appear to be a Cinder driver for the HP P9500. Your only option might be to mount LUNs from it and use LVM as an iSCSI target.

2015-01-25 06:22:16 -0500 commented question Channel Bonding Interfaces with OVS

If your bond0 interface is going to be the network your internal network routes out to your regular network on, don't you need to associate the br-ex interface with the bond0 interface?

2015-01-25 06:12:16 -0500 commented question glance image-create window 2012 R2 with 500 error

I am guessing this image might be rather large; having worked with a Win 2012 image before, it was pretty big. I am wondering if you are running into a timeout; check the glance logs. I am assuming you do not get this error with smaller images?

2015-01-25 06:09:20 -0500 commented question Dnsmasq failure to provide private IP to VM on fresh RDO Juno install on TwoNode Controller&&Network,Compute CentOS 7 Cluster

What kind of connectivity do you have between the compute nodes and management nodes (VLAN, GRE, VXLAN)? Can you confirm you have connectivity between them? Also, if you sniff the interfaces, do you see the DHCP request going out on the compute node, and do you see it coming in on the mgmt node?
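To sniff for the DHCP request/reply, something like this on each side can show where the traffic stops (the interface name eth1 is an assumption; use whichever interface carries tenant traffic):

```shell
# On the compute node: watch for the instance's DHCP discover going out
tcpdump -n -e -i eth1 udp port 67 or udp port 68

# On the controller/network node: watch for it arriving and the offer returning
tcpdump -n -e -i eth1 udp port 67 or udp port 68
```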

2015-01-14 19:24:19 -0500 received badge  Enthusiast
2015-01-11 20:40:53 -0500 received badge  Necromancer (source)
2015-01-10 18:51:33 -0500 commented question Error: Failed to launch instance "test": Please try again later [Error: No valid host was found. There are not enough hosts available.].

What does the output of nova service-list and neutron agent-list look like?

2015-01-10 18:26:26 -0500 answered a question How could adding a compute node cause my VMs disks to go RO?

This scenario really sounds like something that should be traced down in the communication between the compute nodes and the NFS storage. This is not likely an issue with OpenStack but more likely related to the NFS solution you are using. I would recommend confirming on the storage side that there was no major event (I am thinking fail-over events, as with NetApp and/or clustered NFS file servers). If the evidence is still inconclusive, I would set up packet traces when you add the next compute node, capturing the NFS activity on an existing node as well as on the new compute node being added. You might find that there is some initial file locking going on that is tripping up the entire environment.

2015-01-10 14:36:45 -0500 answered a question FC cinder driver

When I have used Cinder with an EMC VNX over iSCSI, the provider location is just a 5-digit number that appears to be unique for every volume created. With an LVM backend it appears to be NULL, and with a Ceph setup it also appears to be NULL.

2015-01-10 12:40:34 -0500 answered a question RabbitMQ: Number of connections

Actually, this looks fairly normal. I have seen upwards of 800+ connections in our environment.

To confirm whether these connections are growing without being closed properly, you could run the following command to get a count, and repeat it over the course of a few days to see if the number keeps increasing or just rises and falls back:

rabbitmqctl list_connections|wc -l

Also, the lack of Keystone connections can be explained by the fact that Keystone only talks to RabbitMQ if you have enabled notifications. Otherwise, unlike the other OpenStack components, Keystone does not rely on messaging.

There is a known issue with RabbitMQ and OpenStack services with regards to keepalives:

https://bugs.launchpad.net/nova/+bug/...

This has caused issues with services, especially when connections are severed by network-level timeouts.
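A simple way to track the count over a few days is to log it on an interval; this is just a sketch, and the log path and interval are arbitrary:

```shell
# Append a timestamped connection count every hour (run on the RabbitMQ host)
while true; do
  echo "$(date '+%F %T') $(rabbitmqctl list_connections | wc -l)" \
    >> /var/log/rabbitmq-conn-count.log
  sleep 3600
done
```

A steadily climbing number suggests leaked connections; a number that rises and falls with load is normal.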

2015-01-10 08:26:31 -0500 commented question Unable to connect Metadata ?

If you added SSL to keystone, did you update your configuration files for all your other services (nova,neutron,cinder,etc) and restart the services?

2015-01-10 08:24:29 -0500 answered a question InvalidBDM: Block Device Mapping is Invalid.
2015-01-10 08:20:23 -0500 answered a question How to replicate Cinder Volumes?

How you replicate your Cinder volumes really depends more on your backend storage device than on OpenStack Cinder. If you are using Ceph as a storage backend, for example, replication could happen in a couple of ways. If your sites have low latency between them, you could stretch your Ceph cluster between sites, and with the proper CRUSH map you could conceivably have your volume pool (which would most likely contain your Cinder volumes) map its placement groups at both locations to ensure multi-site coverage. If you had two Ceph clusters, one at each site, you could still do asynchronous replication through the RADOS gateway, as another example.

Again, your RPO and the storage backend you are using will ultimately determine how you replicate.

2015-01-10 01:09:16 -0500 received badge  Necromancer (source)
2015-01-07 07:02:12 -0500 answered a question Why am I not getting the internet on both "host" and "guest" OS simultaneously ?

Do you have a floating-IP range on the external network you configured? On that external net, how many floating IPs do you have allocated, and does each instance have its own floating IP?

2015-01-05 19:19:49 -0500 answered a question How to create a new agent that communicates with ceilometer
2015-01-05 17:47:09 -0500 commented question most suitable networking model for this example

I would probably go with DVR since you are on Juno and make each of those 6x addresses a bridge interface that would then become your ext_net interface. Then use your 6x addresses to define a floating IP pool. However I have no idea how many 6x addresses you have or if they are in numeric order.

2015-01-05 17:43:18 -0500 commented question Virtual Machine not connecting but logs seem to provide no errors

What image are you using? I would recommend a Cirros image; that way you can use the console to log in, because Cirros has a predefined user/pass. This can be helpful in determining whether your instance got an IP address, can ping out, etc. There seem to be injection errors above.

2015-01-05 17:39:12 -0500 commented question Associate floating ip notifications are not working in juno

Are you using Nova Networks or Neutron in this configuration? If using neutron, did you configure your ext_net and a pool of IP's used for floating IP allocation for that external net? Are you using legacy L3 agent or did you configure DVR?

2015-01-04 20:29:28 -0500 commented answer Can't access to VM (already worked fine for 5 months)

What does openstack-service status show on your nodes?

Also, what is ovs-vsctl show showing?

Did you try restarting openvswitch on each node?

2015-01-04 20:03:22 -0500 answered a question determining ip information for eth0...

Are you using GRE tunnels between the management node and the additional compute node?

ovs-vsctl show does not show any tunnels, so either GRE is not in use or it is not configured correctly.

The fact that the all-in-one node works and the additional compute node does not tells me that the management node (all-in-one) does not seem to know about the new compute node.

What does the following output show you from the management node (all in one):

nova service-list
neutron agent-list

Do you see the additional compute node in the output of the two items above and does it indicate it is up?

2015-01-04 19:57:10 -0500 answered a question instance hostname vs display_name

The display name is always the name specified by the client/tenant user. The hostname is always the sanitized version of the display name, conforming to the RFC 952 and RFC 1123 specs.

Here is the Python code that does the sanitation work:

https://github.com/openstack/nova/blo...
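As a rough illustration of what that sanitization amounts to (this is a sketch of the idea, not Nova's actual code), it boils down to lower-casing, replacing disallowed characters, and trimming to a valid label:

```python
import re

def sanitize_hostname(display_name):
    """Sketch of RFC 952/1123-style hostname sanitization
    (illustrative only; see the Nova source linked above)."""
    name = display_name.lower()
    # Collapse runs of characters outside [a-z0-9-] into a single hyphen
    name = re.sub(r'[^a-z0-9-]+', '-', name)
    # Hostnames may not begin or end with a hyphen
    name = name.strip('-')
    # RFC 1123 limits a label to 63 characters
    return name[:63]

print(sanitize_hostname("My Web Server #1"))  # my-web-server-1
```

So a display name like "My Web Server #1" would become the hostname "my-web-server-1".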

2015-01-04 19:23:35 -0500 answered a question Can't access to VM (already worked fine for 5 months)

What is the status of the following:

nova service-list
neutron agent-list

If everything looks good there, what happens if you reboot the instance?