Micha's profile - activity

2018-02-20 13:06:19 -0500 received badge  Famous Question (source)
2017-08-23 12:10:39 -0500 received badge  Famous Question (source)
2017-08-10 11:01:23 -0500 commented answer DNS Resolution fails

Unfortunately, I never dug deeper into that issue. I assume the L3 agent is missing the functionality to load the user data from the config, but I'm not sure since I'm quite new to OpenStack.

2017-08-10 11:01:23 -0500 received badge  Commentator
2017-07-13 05:56:50 -0500 received badge  Teacher (source)
2017-07-13 05:56:50 -0500 received badge  Self-Learner (source)
2017-07-13 05:56:47 -0500 received badge  Student (source)
2017-07-13 04:38:59 -0500 received badge  Notable Question (source)
2017-07-08 06:27:25 -0500 received badge  Notable Question (source)
2017-07-07 17:17:45 -0500 answered a question Where is the getting started documentation for dummies?
  1. No, you shouldn't. Do a manual setup with some kind of HAProxy in front. Administrating and updating OpenStack can be a full-time job, and it would be way too much hassle for a project that small. Maybe consider going to a public cloud provider like AWS or an OpenStack provider like Open Telekom Cloud.
  2. I would assume one controller node, at least one compute node and one network node. Storage not taken into consideration; for production, probably something like Ceph, which would need about 5-10 nodes.
  3. With three servers, probably controller and Cinder on one, Nova on another, and the third one for networking. On both sites, obviously.
  4. In terms of storage? Usually with a distributed storage system like Ceph. I hope you have a 20-40 Gbit link between your sites? Otherwise performance will not be production-ready.
  5. OpenStack does not care. Your e-commerce application needs to deal with that.
  6. Yes. OpenStack is quite resilient towards node outages, and so is Ceph.

Seriously, DON'T use private OpenStack in production for that. Please. And if you don't need to scale that much, even consider 'classic' VPS hosting at another provider.

2017-07-07 15:03:15 -0500 answered a question VM traffic isolation in same tenant network

Either tenant segregation or the FWaaS module. Using security groups would be quite a hassle.
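
If you go the FWaaS route, a rough sketch of the v1 CLI flow (rule, policy and firewall names and the IP are made up for illustration; keep in mind FWaaS v1 applies at the router, so it only filters routed traffic):

# deny TCP traffic towards one VM, then wrap the rule in a policy and a firewall
neutron firewall-rule-create --protocol tcp --destination-ip-address 10.0.0.5 --action deny --name deny-to-vm2
neutron firewall-policy-create --firewall-rules deny-to-vm2 isolation-policy
neutron firewall-create isolation-policy --name isolation-fw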

2017-07-06 11:52:43 -0500 answered a question Attach instance to Provider Network

It's working now - awesome, thanks to Bernd Bausch! I created a br-ex, added the needed config parameters to openvswitch-agent.ini/ml2_conf.ini, and created the patch ports towards the integration bridge. Now everything works like a charm - wonderful :)
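
For anyone finding this later, a rough sketch of the OVS side (bridge, NIC and patch port names match my setup; combined with bridge_mappings = vlannet:br-ex in the agent config):

# external bridge with the physical NIC attached
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens9
# patch ports between br-ex and the integration bridge
ovs-vsctl add-port br-ex phy-br-ex -- set interface phy-br-ex type=patch options:peer=int-br-ex
ovs-vsctl add-port br-int int-br-ex -- set interface int-br-ex type=patch options:peer=phy-br-ex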

2017-07-06 07:34:14 -0500 received badge  Popular Question (source)
2017-07-06 04:26:03 -0500 commented question Attach instance to Provider Network

Ohhhhh, networking just got so much clearer to me. Awesome, Bernd. I'd like to add a general thank you for the amount of knowledge and tips you provide here. People like you are the reason I love working with OS projects rather than dark magic like Oracle or SAP <3

2017-07-06 03:01:52 -0500 commented question Attach instance to Provider Network

Thank you for your input. I added some output to the initial post due to the character limitation in the comment section :)

2017-07-05 12:48:45 -0500 asked a question Attach instance to Provider Network

Hi all,

I have a working OpenStack Newton cluster running on Ubuntu 16.04.

I'm using OpenVSwitch for networking in general, with VLAN for provider networks and VXLAN for tenant networks.

/etc/neutron/plugins/ml2/ml2_conf.ini

[DEFAULT]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security,dns
path_mtu = 9000
[ml2_type_flat]
flat_networks = *
[ml2_type_vlan]
network_vlan_ranges = vlannet:2003:2004
[ml2_type_vxlan]
vni_ranges = 65000:69999
[securitygroup]
firewall_driver = iptables_hybrid
enable_security_group = true
enable_ipset = true
--------------------------------------------------------------
/etc/neutron/plugins/ml2/openvswitch.ini

[DEFAULT]
[agent]
tunnel_types = vxlan
l2_population = true
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.2.41
bridge_mappings = vlannet:br-ex
[securitygroup]
firewall_driver = iptables_hybrid
enable_security_group = true
enable_ipset = true

I now created two networks:

neutron net-create Externes-Netz --provider:network_type vlan --provider:physical_network vlannet --router:external true --provider:segmentation_id 2003
neutron subnet-create Externes-Netz --disable-dhcp --ip-version 4 --gateway 192.168.10.1 --allocation-pool start=192.168.10.10,end=192.168.10.20 192.168.10.0/24

neutron net-create VPN-Netz --shared --provider:network_type vlan --provider:physical_network vlannet --router:external true --provider:segmentation_id 2004
neutron subnet-create VPN-Netz --enable-dhcp --ip-version 4 --gateway 10.10.0.1 --allocation-pool start=10.10.0.10,end=10.10.0.20 10.10.0.0/24

While connecting to 'Externes-Netz' should happen via a router and a tenant network (which works absolutely fine, btw, including floating IPs), I want to connect instances directly to 'VPN-Netz', since they will be reachable from a VPN without floating IPs. A router created on that network for testing purposes can be reached via ping from the VPN net, which makes me think that the network side is fine.

However, I can't connect instances directly to the VPN-Netz. While I can use my regular user (not admin) to create a port, attaching a VM to it fails with {u'message': u'No valid host was found. There are not enough hosts available.', u'code': 500, u'created': u'2017-07-05T17:38:14Z'}
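
For reference, a sketch of the failing steps (flavor and image names are placeholders for mine):

neutron port-create VPN-Netz
nova boot --flavor m1.small --image ubuntu-16.04 --nic port-id=<port-uuid> testvm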

Can anyone give me a hint where to find the error? Log files are leaving me clueless too:

/var/log/nova/nova-compute.log

2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager [req-3c336082-4b94-4598-9a8c-84b639921b74 3518016355da4f8e91b1b568f580fa6d d091b5e5309541b38897bd584833df30 - - -] Instance failed network setup after 1 attempt(s)
2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager Traceback (most recent call last):
2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1401, in _allocate_network_async
2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager     bind_host_id=bind_host_id)
2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 881, in allocate_for_instance
2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager     bind_host_id, dhcp_opts, available_macs)
2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1000, in _update_ports_for_instance
2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager     vif.destroy()
2017-07-05 19:38:11.689 1659 ERROR nova.compute.manager   File "/usr/lib/python2 ...
2017-07-05 11:28:36 -0500 answered a question DNS Resolution fails

Just FYI: in Mitaka, the domain name for the metadata is always set via nova.conf. Pinging an instance via hostname is not possible.
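
A minimal sketch of the relevant nova.conf setting (the domain is a placeholder):

[DEFAULT]
dhcp_domain = mydomain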

After upgrading to Newton, this now works :)

2017-07-05 11:27:41 -0500 received badge  Popular Question (source)
2017-06-20 17:09:26 -0500 asked a question DNS Resolution fails

Hi,

on my Mitaka installation on Ubuntu I use Neutron for networking.

I have some issues with DNS resolution at the moment. I set my own domain name by setting

/etc/neutron/neutron.conf

dns_domain = mydomain.

I also set

/etc/neutron/plugins/ml2/ml2_conf.ini

extension_drivers = port_security,dns

for testing purposes.

This results in the following:

  1. hostname -A still shows something like host-192-168-10-3.openstacklocal
  2. hostname -f fails with hostname: Name or service not known
  3. curl 169.254.169.254/latest/meta-data/hostname shows correct_hostname.novalocal

I then set the following

/etc/nova/nova.conf

dhcp_domain=mydomain

Now I get:

  • the correct output when curling the metadata server
  • the correct output when showing the port (talking about dns_assignment and dns_name)

But still:

  • the wrong domain when asking for hostname -A - host-192-168-10-3.openstacklocal
  • a failing hostname -f

Further confusion ensues when taking a look at the dnsmasq config files:

/var/lib/neutron/dhcp/207cc6aa-ede3-465f-bee3-2e85d864fc18/addn_hosts

192.168.10.2    host-192-168-10-2.mydomain. host-192-168-10-2
192.168.10.3    host-192-168-10-3.openstacklocal host-192-168-10-3
192.168.10.1    host-192-168-10-1.mydomain. host-192-168-10-1

but those domains are missing from the leases file:

/var/lib/neutron/dhcp/207cc6aa-ede3-465f-bee3-2e85d864fc18/leases

1498080444 fa:16:3e:e9:3c:43 192.168.10.3 host-192-168-10-3 *
1498080317 fa:16:3e:bc:6f:16 192.168.10.2 host-192-168-10-2 *

I have a hard time understanding what is happening here and how to fix it. All I want is a working hostname -f (which will result in working sudo without touching any files on the instance) and my custom domain.

I also read this spec: https://specs.openstack.org/openstack/neutron-specs/specs/liberty/internal-dns-resolution.html

Does anybody know if this was already implemented in Newton/Ocata?

2017-06-14 08:04:22 -0500 commented answer Nested virtualization with KVM/OpenVSwitch

You were totally right. I was missing the exit point with the correct VLAN tag. Turns out you need no promiscuous mode when using OpenVSwitch.

After adding the VLAN tag, everything now works as intended and I have a multi-node OpenStack lab setup. On a single Hetzner node, for about $60 per month :)

2017-06-13 03:42:30 -0500 received badge  Enthusiast
2017-05-10 11:24:28 -0500 received badge  Famous Question (source)
2017-05-10 05:41:15 -0500 commented answer Sudo commands are very slow in VM

Execute the command with sudo, or allow passwordless sudo for this account.

I do not want to sound rude, but you are not really familiar with Linux, are you? Maybe you should try to get used to this environment before using OpenStack.

2017-05-10 05:17:15 -0500 commented answer Sudo commands are very slow in VM

On the instance you want to access. Change PasswordAuthentication to yes and restart the ssh service via systemctl restart ssh or service ssh restart.

2017-05-10 05:16:03 -0500 commented answer Nested virtualization with KVM/OpenVSwitch

I added the VLANs via ovs-vsctl set port vnet8 trunks=2003,2004

2017-05-10 04:32:51 -0500 answered a question Sudo commands are very slow in VM

This could be because of a missing connection to the OpenStack DNS service. Installing and enabling dnsmasq could help, since the name resolution for your hostname will then happen locally. You may need to add the hostname to /etc/hosts.
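
For example, a minimal /etc/hosts entry (hostname and domain are placeholders for your instance's values):

# /etc/hosts
127.0.1.1   myinstance.mydomain myinstance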

For the SSH password login, you need to set PasswordAuthentication yes in /etc/ssh/sshd_config.
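
A minimal sketch of that change (Ubuntu paths and service name assumed):

# in /etc/ssh/sshd_config:
PasswordAuthentication yes

# then restart the service:
sudo systemctl restart ssh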

2017-05-10 04:07:31 -0500 received badge  Editor (source)
2017-05-10 04:07:06 -0500 commented answer Nested virtualization with KVM/OpenVSwitch

Added info to the initial post since there is no 'reply' function here.

2017-05-10 04:05:12 -0500 answered a question Nested virtualization with KVM/OpenVSwitch

Oh, just remembered I can't be sure the instance got the IP - I was just checking the metadata (which may be incorrect).

The network is not configured in virsh (yet; should I do that?). I attached the instances by editing their config files and adding another interface like this:

<interface type='bridge'>   
  <mac address='52:54:00:43:68:01'/>  
  <source bridge='ostun0'/>  
  <virtualport type='openvswitch'>
    <parameters interfaceid='85dc9976-0f7b-4995-891d-c1ce2f5902a6'/>
  </virtualport>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> 
</interface>

However, I also found out that virsh apparently applies some security filtering:

virsh # nwfilter-list 
 UUID                                  Name                 
------------------------------------------------------------------
 33bbc764-11b7-4ca4-aa1a-18824e34c4e6  allow-arp           
 04184b11-1b34-4b6d-b165-ca4b4e5ba95d  allow-dhcp          
 b1852242-1ad1-4532-a93a-877eaa50c4c6  allow-dhcp-server   
 2cf82839-eba7-42e9-a185-d8769ffa1426  allow-incoming-ipv4 
 21b246cc-af53-4534-bdc9-1ca7d02cba45  allow-ipv4          
 7415b3e5-56d2-4307-87db-05192550a250  clean-traffic       
 0ae14293-0feb-43b4-bfad-02becb9ed97f  no-arp-ip-spoofing  
 29b87460-fdb9-4ad6-877f-6845fa39d036  no-arp-mac-spoofing 
 42c7f827-fa04-4950-a47c-85f87093db54  no-arp-spoofing     
 3a490558-4afb-4905-be38-cd89ea3446a3  no-ip-multicast     
 8036475f-6f81-4fb7-85b7-19672c6fb4bb  no-ip-spoofing      
 1e0f16dd-7f07-4cd8-b6db-2d5d7b43d7de  no-mac-broadcast    
 4842ab50-6826-400c-9d4b-5e26a1fe0a93  no-mac-spoofing     
 9e694f78-e0d6-4d35-84eb-5bda16f471d0  no-other-l2-traffic 
 ce03eadf-4cc5-4f26-8b31-28f2d76fd879  no-other-rarp-traffic
 865e29f9-7088-4519-9dc4-57499ecb92be  qemu-announce-self  
 96416978-64bf-4a63-88e0-7e514f04d1c5  qemu-announce-self-rarp

ovs-vsctl show on the host:

user@host:~$ sudo ovs-vsctl show
73bade2a-a301-49d6-9bdc-306c92a490de
    Bridge "ostun0"
        Port "ostun0"
            Interface "ostun0"
                type: internal
        Port "vnet12"
            Interface "vnet12"
        Port "vnet10"
            Interface "vnet10"
        Port "vnet7"
            Interface "vnet7"
    Bridge "intbr0"
        Port "vnet14"
            Interface "vnet14"
        Port "vnet6"
            Interface "vnet6"
        Port "vnet5"
            Interface "vnet5"
        Port "vnet3"
            Interface "vnet3"
        Port "vnet1"
            Interface "vnet1"
        Port "vnet9"
            Interface "vnet9"
        Port "vnet11"
            Interface "vnet11"
        Port "vnet4"
            Interface "vnet4"
        Port "intbr0"
            Interface "intbr0"
                type: internal
    Bridge "databr0"
        Port "vnet8"
            trunks: [2003, 2004]
            Interface "vnet8"
        Port "vnet0"
            Interface "vnet0"
        Port "databr0"
            Interface "databr0"
                type: internal
        Port "dev1"
            Interface "dev1"
                type: internal
    ovs_version: "2.5.2"

vnet6, vnet7 and vnet8 are the ports of the network instance. vnet10 and vnet12 are the compute nodes.

and on the network node:

user@host:~$ sudo ovs-vsctl show
05cd067e-3831-4721-bd1b-c61561c5388b
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tape567d86e-f1"
            tag: 1
            Interface "tape567d86e-f1"
                type: internal
        Port "qr-1bd55b0a-f1"
            tag: 1
            Interface "qr-1bd55b0a-f1"
                type: internal
        Port "qg-3e0b069a-a3"
            tag: 2
            Interface "qg-3e0b069a-a3"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        fail_mode: secure
        Port "ens9"
            trunks: [2003, 2004]
            Interface "ens9"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a8022b"
            Interface "vxlan-c0a8022b"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.2.41", out_key=flow, remote_ip="192.168.2.43"}
        Port "vxlan-c0a8022a"
            Interface "vxlan-c0a8022a"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.2.41", out_key=flow, remote_ip="192.168.2.42"}
    ovs_version: "2.5.2"
2017-05-07 12:26:55 -0500 received badge  Notable Question (source)
2017-05-05 04:12:26 -0500 commented question Nested virtualization with KVM/OpenVSwitch

Since the router resides (afaik) in the same security group as the node, it should be pingable by the default Mitaka rules.

This would also not explain why I can't ping the external public IPs.

Can somebody tell me if the IP after boot is assigned via DHCP or if there is some voodoo happening?

2017-05-04 12:36:34 -0500 received badge  Popular Question (source)
2017-05-04 01:50:26 -0500 asked a question Nested virtualization with KVM/OpenVSwitch

Hi all,

I have an OpenStack installation running on a bunch of old hardware we put together from old PCs. Now I want to set up OpenStack on a single host, but inside multiple VMs virtualized with KVM. The VMs are connected via OpenVSwitch. The network setup is the following: https://imgur.com/a/HZzYw (OpenStack network setup)

I can successfully ping each instance over the management network, running over intbr0. I can also ping compute1, compute2 and network via their IP addresses on the second NIC, which is connected to tunbr0. A third NIC on the network node is set to 'manual' and attached to br-ext. I then set up Keystone, Glance, Neutron and Nova. Inside the VMs I use OpenVSwitch again, this time managed by OpenStack; for the inter-VM traffic I use VXLAN tunnels. The connection to the outer world is done via VLAN, where I implemented trunking on both OpenVSwitches (br-ext and databr0).

I was able to create multiple instances which appear correctly on the compute nodes and are shown without an error. I also created two networks - one internal Test-Customer network and one external network. I also created the needed subnets and the router and attached it to both networks correctly.

After doing so I was able to:

  • Ping the router IPs from the router's namespace (internal IP and gateway IP)

I was NOT able to:

  • Ping the attached VMs
  • Ping another machine in the same subnet attached to databr0
  • Ping the router gateway from the machine attached to databr0

However, my instances got IPs from the correct subnet after booting up, so I assume some DHCP magic was working (but why is my ping failing then?).

I had a setup like that running on ESXi. I remember I had to set the vSwitch to promiscuous mode. However, I did nothing like that on databr0, tunbr0 or intbr0, which is - as I assume - the reason packets may get dropped there. I found this article: https://darabi.wordpress.com/2014/10/06/howto-set-promiscuous-mode-and-enable-it-in-openvswitch/ - however, I'm not sure how to set that up.
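
For what it's worth, a minimal sketch of enabling promiscuous mode on one of the bridges with iproute2 (whether this is the right fix here is exactly my question):

sudo ip link set databr0 promisc on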

If anyone could help me get this working, I would highly appreciate it. OpenVSwitch and OpenFlow look like some dark art to me right now, and I've been stuck at this point for over a week. If you need more information, please feel free to ask :)

Greetings from Germany, Micha


EDIT: More info: Oh, just remembered I can't be sure the instance got the IP - I was just checking the metadata (which may be incorrect).

The network is not configured in virsh (yet; should I do that?). I attached the instances by editing their config files and adding another interface like this:

<interface type='bridge'>   
  <mac address='52:54:00:43:68:01'/>  
  <source bridge='ostun0'/>  
  <virtualport type='openvswitch'>
    <parameters interfaceid='85dc9976-0f7b-4995-891d-c1ce2f5902a6'/>
  </virtualport>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> 
</interface>

However, I also found out that virsh apparently does ...

2017-05-03 12:31:06 -0500 commented question Launching instance with Ansible

Hi,

please try to use the os_auth token first, for debugging purposes. This way you can ensure it's not a problem with your credentials. Please also try to create the instance with the python-openstackclient.

I suspect that a part of your API is not reachable from the remote host.

2017-05-03 12:31:05 -0500 answered a question Launching instance with Ansible

Hi,

please try to use the os_auth token first, for debugging purposes. This way you can ensure it's not a problem with your credentials. Please also try to create the instance with the python-openstackclient.

I suspect that a part of your API is not reachable from the remote host.
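
For example, a quick sanity check from the remote host (a sketch; flavor, image and network are placeholders):

openstack token issue
openstack server create --flavor m1.small --image cirros --nic net-id=<net-uuid> testvm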

Greetings from Germany, Micha