
tekka's profile - activity

2016-03-31 02:24:05 -0600 answered a question Network config when testing on nested environment

Thanks for your answer. Indeed I noticed that on my laptop I had the default 192.168.122.x network pre-configured by plain libvirt under the name "default", and I wanted to expose that network as the external network. But the L1 hypervisors, being CentOS 7 systems, were also pre-configured by libvirt with that same default network, and that caused problems. So on the L1 hosts, which are the openstack_co1 and openstack_co2 VMs, I ran:

# virsh net-destroy default
# virsh net-undefine default

so that in the end, from a libvirt point of view:

[root@openstack_co1 ~]# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
[root@openstack_co1 ~]# 

And all went well with OpenStack: after assigning a floating IP on the 192.168.122.x network to my instance (the L2 VM), I was able to connect to it from my laptop via ssh with the configured ssh key (tested both with CirrOS and with CentOS 7 cloud images). There was no need to create any particular additional firewall rule on the laptop or anywhere else.
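For example, from the laptop (the floating IP and key file name below are just illustrative placeholders, not my exact values):

$ ssh -i ~/.ssh/oskey cirros@192.168.122.105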

Anyway, in the end I reconfigured everything to use a totally different network, to avoid confusion in case I again forget to consider the default 192.168.122.x network that libvirt creates upon installation of the qemu/libvirt packages.
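For reference, a replacement network can be defined with virsh along these lines (a sketch: the name "osnat" and the 192.168.130.x range are placeholder values, not necessarily the ones I picked):

# cat > osnat.xml <<'EOF'
<network>
  <name>osnat</name>
  <forward mode='nat'/>
  <bridge name='virbr10'/>
  <ip address='192.168.130.1' netmask='255.255.255.0'/>
</network>
EOF
# virsh net-define osnat.xml
# virsh net-start osnat
# virsh net-autostart osnat

Cheers, Gianluca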

2016-03-31 01:54:40 -0600 received badge  Famous Question (source)
2016-03-29 06:53:44 -0600 received badge  Notable Question (source)
2016-03-14 10:14:10 -0600 received badge  Popular Question (source)
2016-03-04 02:06:12 -0600 received badge  Enthusiast
2016-02-29 10:21:02 -0600 asked a question Network config when testing on nested environment

I'm putting together a test environment for OpenStack Icehouse on CentOS 7. It will be nested in the sense that the two OpenStack nodes I'm going to set up are virtual machines.

In detail:

The hypervisor is my laptop with Fedora 23, 16 GB of RAM, an SSD disk, and the standard Qemu/KVM environment included in the distro. The laptop is also where I run the web browser to access the Horizon dashboard.

openstack_co1: VM where I initially run and test packstack on CentOS 7.2, with "copy host CPU" configuration, set up as controller+compute

openstack_co2: VM where I will run an additional compute node, with "copy host CPU" configuration

The two VMs will have two networks:

1) 192.168.122.x/24, configured as NAT in Qemu/KVM on my laptop; it should be the public LAN for the OpenStack env

2) 192.168.124.x/24, configured as "isolated, internal and host routing only" in Qemu/KVM; it should be the private LAN for the OpenStack env

So my laptop has the 192.168.122.1 and 192.168.124.1 IP addresses set up, and chronyd configured to be queried by the OpenStack nodes.
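The chrony side is just the standard allow/server directives; as a minimal sketch, with values matching the addresses above:

# /etc/chrony.conf on the laptop: serve time to the two test networks
allow 192.168.122.0/24
allow 192.168.124.0/24

# /etc/chrony.conf on openstack_co1 and openstack_co2: query the laptop
server 192.168.122.1 iburst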

I was able to run packstack after some debugging (a consolidated sketch of the workarounds follows the list):

  • configuring the repo with baseurl=http://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-7/

  • preinstalling packages, so that packstack only needs to configure them and not download them

  • modifying the file /usr/lib/python2.7/site-packages/packstack/puppet/templates/openstack_client.pp

replacing python-iso8601 with python2-iso8601, as the package is named now

  • modifying the file /etc/mongod.conf

changing bind_ip = 127.0.0.1

to my node IP: bind_ip = 192.168.122.81

because during the puppet phase it is this file that is read (not mongodb.conf), and with the default it listens only on localhost, causing a timeout and failure

Found the reference here: https://ask.openstack.org/en/question/86847/packstack-fails-with-error-unable-to-connect-to-mongodb-server/

  • modifying /usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py

at line 818, replacing

if fips.is_supported

with

if True

otherwise you get an error connecting to Neutron on the Instances page. Found the reference here: https://bugs.launchpad.net/horizon/+bug/1349841

  • downgrading python-websockify from python-websockify-0.6.0-2.el7 to python-websockify-0.5.1-1.el7.noarch.rpm

otherwise the novnc console is unusable. Found the reference here: https://bugzilla.redhat.com/show_bug.cgi?id=1220081

  • running packstack
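Putting the file edits together, the workarounds above can be scripted roughly like this (a sketch, assuming the paths and node IP mentioned above; the horizon edit is left manual because it depends on the exact line):

#!/bin/bash
# Sketch of the pre-packstack workarounds described above.

# packstack puppet template: the package is now named python2-iso8601
sed -i 's/python-iso8601/python2-iso8601/g' /usr/lib/python2.7/site-packages/packstack/puppet/templates/openstack_client.pp

# /etc/mongod.conf is the file actually read during the puppet phase:
# make mongod listen on the node IP instead of localhost only
sed -i 's/^bind_ip = 127.0.0.1/bind_ip = 192.168.122.81/' /etc/mongod.conf

# horizon: edit /usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py
# by hand, replacing "if fips.is_supported" at line 818 with "if True"

# novnc needs the older websockify (rpm downloaded locally beforehand)
yum downgrade -y ./python-websockify-0.5.1-1.el7.noarch.rpm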

After all that, everything seems OK.

I now have some doubts related to my nested VM environment and the network and iptables configuration.

Is the network config I plan to use OK, and during the tests is it OK to assign as floating IPs addresses on the 192.168.122.x network (the Qemu/KVM NATted one)?

Is it expected out of the box that from my laptop (192.168.122.1) I will be able to reach the OpenStack instances on these floating IPs via ssh (provided the related security group configurations are in place), or should I create a particular firewalld rule on my laptop?
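For context, the mapping I have in mind is roughly this (a sketch: the network name "public" and the allocation pool range are just example values):

neutron net-create public --router:external=True
neutron subnet-create public 192.168.122.0/24 --name public_subnet --disable-dhcp --gateway 192.168.122.1 --allocation-pool start=192.168.122.100,end=192.168.122.200
# allow ssh in the default security group, then allocate a floating IP
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
neutron floatingip-create public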

Thanks in advance, Gianluca

2014-04-14 01:18:31 -0600 received badge  Famous Question (source)
2014-03-13 06:41:22 -0600 received badge  Notable Question (source)
2014-03-12 16:01:46 -0600 received badge  Popular Question (source)
2014-02-22 03:49:18 -0600 asked a question Havana instance doesn't get IP

Hello, I have a custom Havana configuration based on CentOS 6.5, with the EPEL and RDO repos enabled (but with RabbitMQ for message queues instead of Qpid).

All seems good except that the instance (which runs on one compute node) doesn't get an IP address from the DHCP agent (which runs on one controller node).

I'm using Neutron with OVS and VLAN.
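For the VLAN setup, the relevant OVS plugin settings are along these lines (a sketch: the physnet name, VLAN range and bridge mapping here are placeholders, not necessarily my exact values):

# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on controller and compute
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:100:199
bridge_mappings = physnet1:br-eth1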

In the messages log of the controller node, where the dnsmasq process has started, I sometimes see nothing and sometimes see only DHCPDISCOVER and DHCPOFFER lines for the MAC of the instance, but nothing more, and on the instance side no IP is obtained.

I tried with CirrOS, CentOS 6.4, Fedora 20 and Ubuntu Precise cloud images, but all show this problem.

At the end of this message you can find a link to a file with all the commands run to debug the problem (I'm not sure that pasting it directly here would keep correct output). If needed, I can replicate the environment with packstack and 1 compute + 1 network node, if the resulting configuration is similar, so that I can compare files and behaviours.

My configuration is like this:

The kernel is the latest CentOS one: 2.6.32-431.el6.x86_64 (it seems to be newer than the latest RDO-provided one, kernel-2.6.32-358.123.2.openstack.el6.x86_64.rpm, so the yum update command picked it).

OpenStack and Open vSwitch packages are from RDO, for example openvswitch-1.11.0_8ce28d-1.el6ost.x86_64 and openstack-neutron-2013.2.1-1.el6.noarch.

controller node with Neutron Server, Neutron OVS plugin, Neutron DHCP Agent

From the point of view of started Linux services:

  • neutron-dhcp-agent
  • neutron-openvswitch-agent
  • neutron-ovs-cleanup
  • neutron-server
  • openstack-cinder-api
  • openstack-cinder-scheduler
  • openstack-glance-api
  • openstack-glance-registry
  • openstack-heat-api
  • openstack-heat-api-cfn
  • openstack-heat-engine
  • openstack-keystone
  • openstack-nova-api
  • openstack-nova-conductor
  • openstack-nova-consoleauth
  • openstack-nova-novncproxy
  • openstack-nova-scheduler
  • openstack-nova-xvpvncproxy
  • openvswitch

Compute node with Nova Compute, Neutron OVS Agent

From the Linux services point of view:

  • neutron-openvswitch-agent
  • neutron-ovs-cleanup
  • openstack-cinder-volume
  • openstack-nova-compute
  • openvswitch

MAC of the instance: fa:16:3e:4a:eb:2b


On the controller node I see dnsmasq active and a generated file such as:

[root@vcontrol03 ~]# cat /var/lib/neutron/dhcp/d5bca3a8-2255-4a13-b0a7-f0d644bc4670/host
fa:16:3e:4a:eb:2b,host-172-16-0-100.openstacklocal,172.16.0.100

but actually there is no message regarding DHCP in the messages log. The servers are actually virtual machines inside a vSphere 5.1 infrastructure, and on their eth1 adapter (used for br-ex) I have configured a trunk port and promiscuous mode.
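These are typical of the commands I ran while debugging (full output in the link below; the UUID is the one from the dnsmasq path above):

# on the controller: check the DHCP namespace and that dnsmasq listens in it
ip netns list
ip netns exec qdhcp-d5bca3a8-2255-4a13-b0a7-f0d644bc4670 ip addr
ip netns exec qdhcp-d5bca3a8-2255-4a13-b0a7-f0d644bc4670 netstat -lnup

# on controller and compute: watch where the DHCP exchange stops
tcpdump -n -e -i eth1 port 67 or port 68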

Below is a link with the output of many commands (openvswitch, neutron, iptables...) on controller and compute, because pasting it directly here scrambles the formatting:

https://drive.google.com/file/d/0BwoP...