Marco Marino's profile - activity

2020-09-02 16:47:46 -0500 received badge  Notable Question (source)
2020-05-25 01:20:55 -0500 received badge  Popular Question (source)
2020-05-23 06:10:56 -0500 received badge  Editor (source)
2020-05-23 06:10:35 -0500 asked a question OVN and Openstack - Configuration and interfaces assignment

Hi, I'm trying to understand how OVN works in OpenStack, and I'm focused on a particular kind of configuration: one controller node and N compute nodes with distributed virtual routing. Please excuse me if I'm asking stupid questions, but I still have doubts even after searching the documentation. Basically, I need to know whether some assumptions are true:

1) If I use distributed virtual routing and I have provider networks (flat or VLAN, it doesn't matter), I have to connect all compute nodes to the overlay network AND to the provider network. Is this true? Furthermore, I have to connect the compute nodes to the management network, so in total I need at least 3 network interface cards per compute node. Please give me feedback on this.

2) I studied OVN, and it seems to me that the connection between ovn-controller (on the compute nodes) and the southbound database uses the management network, not the overlay network. This makes sense to me, because this way there is a real separation between the control plane and the data plane. So, can I avoid connecting the OpenStack controller node to the overlay network? If this is true, it seems I need only 1 interface on the OpenStack controller node, just for management. Is this correct? (Yes, I know I can also connect an external network, an API network and so on, but minimal functionality can be obtained with the management network alone in a basic installation.)

3) In a configuration where DVR is active, what is the purpose of the gateway node? (Please refer to this reference architecture: .) It seems to me that I can remove the gateway node completely.
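To make questions 1) and 2) concrete, here is a toy sketch of the per-role NIC count implied by the assumptions above (the network attachments are the question's assumptions, not confirmed facts about OVN):

```python
# Toy model: which networks each node role attaches to, per the
# assumptions in questions 1) and 2) above (not an authoritative answer).
attachments = {
    "compute":    {"management", "overlay", "provider"},  # question 1)
    "controller": {"management"},                         # question 2)
}
# One NIC per attached network, in the simplest case (no bonding/VLAN trunks).
nics_needed = {role: len(nets) for role, nets in attachments.items()}
print(nics_needed)  # {'compute': 3, 'controller': 1}
```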

Thank you,


2016-11-10 00:48:24 -0500 received badge  Notable Question (source)
2016-09-19 04:05:25 -0500 received badge  Famous Question (source)
2016-08-28 20:03:10 -0500 received badge  Popular Question (source)
2016-08-27 04:06:38 -0500 asked a question add disks to swift

Hi, I have a Swift cluster with 3 zones (replica count 3) and 4 identical disks per zone (8 TB each). Each disk has a weight of 8000, for a total weight of 32000 in each zone. Now I have to add one disk per zone, for a total of 3 new identical 8 TB disks in the cluster. I think I need to add one disk at a time, but if I add the first disk in z1 with a weight of 8000, the total weight of z1 becomes 40000. This means that z1 will hold more data than z2 and z3 until I add the disks to the remaining zones. Is this correct? Is this the right procedure? Is there a common way to avoid this behavior? Thank you
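For what it's worth, the imbalance window can be sketched numerically (numbers taken from the question; this only models weight ratios, not Swift's actual placement rules):

```python
# Weight math for the scenario above: 4 x 8000 per zone, then one new
# 8000-weight disk added to z1 first.
z1_before = 4 * 8000             # 32000
z1_after = z1_before + 8000      # 40000
new_disk_share = 8000 / z1_after
print(new_disk_share)            # 0.2 -> the new disk targets 20% of z1
# A common way to soften the resulting data movement (an assumption here;
# check the Swift deployment guide for your release) is to add the device
# with a low weight and raise it in steps, rebalancing after each one:
ramp = [1000, 2000, 4000, 8000]
```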

2016-04-18 03:55:20 -0500 received badge  Popular Question (source)
2016-04-18 03:55:20 -0500 received badge  Famous Question (source)
2016-04-18 03:55:20 -0500 received badge  Notable Question (source)
2016-04-07 05:15:36 -0500 answered a question Add new replica to swift

OK, problem solved. Some (not very detailed) information here:

  1. Create or update the rsync configuration in /etc/rsyncd.conf, specifying the address of the interface used for the replication network.
  2. Modify the configuration of the account, container and object servers to use the "directory based configuration" (e.g. 1.conf, 2.conf, etc.), and use these files if you want to configure a different port and IP address for the replication network.
  3. Update the ring. Use a temporary folder! Do not change the ring directly in /etc/swift!!! In particular: a) add the new zone with all its devices, b) run set_replicas 2 (for example), and c) rebalance the ring. Check that no dispersion is reported and that everything is OK.
  4. Distribute the ring across all storage nodes.

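To illustrate steps 1 and 2, here is a minimal sketch of what the replication-side configuration might look like (the IP addresses, ports and paths are placeholders, not values from the original setup; verify the option names against the Swift deployment guide for your release):

```ini
# /etc/rsyncd.conf (sketch) -- bind rsync to the replication interface
uid = swift
gid = swift
address = 192.168.200.11

[object]
path = /srv/node
read only = false

# /etc/swift/object-server/2.conf (sketch) -- a second, replication-only
# server instance, enabled by the "directory based configuration" layout
[DEFAULT]
bind_ip = 192.168.200.11
bind_port = 6010
replication_server = true
```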
2016-04-06 03:26:42 -0500 received badge  Popular Question (source)
2016-04-06 03:26:42 -0500 received badge  Notable Question (source)
2016-03-31 06:15:45 -0500 asked a question Add new replica to swift

Hi, I have a running Swift cluster with 1024 partitions, 1 replica, 1 region, 1 zone and 4 devices (on 1 storage server). I'd like to add a new server with a new replica; how can I do this? I noticed that swift-ring-builder has a subcommand named "set_replicas", but I can't work out whether I should first add a zone (on the new server), rebalance, and then add the new replica, or add both the zone and the replica before rebalancing. Thank you

2016-03-02 12:02:03 -0500 asked a question neutron mtu problem with centos 7

Hi, I have a production cluster running the OpenStack Icehouse release, and I'm using openstack-neutron with GRE tunnels. Currently I have 12 compute nodes, 2 network nodes and 2 controllers, all on CentOS 6. I'm planning an upgrade, and I'd like to move the OS to CentOS 7 before installing the Juno release (because from Juno onwards there is no support for CentOS 6). So I reformatted one compute node and installed CentOS 7 and Icehouse (yes, I added the EOL repository). Everything seems to work, because I can launch a CirrOS instance without problems. However, when I try to launch a CentOS 7 VM, I cannot ssh into it (ssh hangs at SSH2_MSG_KEXINIT sent). If I set my MTU to 1300, everything works (I can connect via ssh), so I think the problem is related to the MTU.
First question: why doesn't this happen on the CentOS 6 compute nodes? On the CentOS 6 nodes I can connect to the VM via ssh (from a client with an MTU of 1500) and then disable segmentation offloading inside the VM (tx, sg, tso, ufo, gso, gro all off). Furthermore, if I connect to the VM (on the CentOS 7 compute node) from a client with an MTU of 1300 and disable TCP offloading inside the VM, nothing changes. Is there some kernel parameter I should increase on the physical CentOS 7 node?
Second question: I'm running tcpdump on the physical interface of the compute node used for the GRE tunnel, and packets seem to be going out, so I think the problem is on the network node. Could this be related to the fact that after the reinstallation I used the same hostname and the same IP address? Should I restart the Open vSwitch network agent and the Open vSwitch daemon on the network node?
NB: ping works, and the MTU between the network node and the compute nodes is 1500.
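For reference, the arithmetic behind the MTU symptom (the header sizes assume IPv4 transport and a GRE header carrying the key field, which is the usual OVS-tunnel case; treat this as a sketch):

```python
# GRE encapsulation overhead on a 1500-byte physical MTU.
physical_mtu = 1500
outer_ipv4 = 20   # outer IP header added by the tunnel
gre_header = 8    # GRE base header (4) + key field (4); can be 4-16 bytes
inner_eth = 14    # the VM's Ethernet frame travels inside the tunnel
tenant_mtu = physical_mtu - outer_ipv4 - gre_header - inner_eth
print(tenant_mtu)  # 1458 -> guides often suggest 1454 for safety; 1300 is
                   # simply far enough below the limit that nothing fragments
```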
Thank you

2016-02-15 06:48:31 -0500 received badge  Nice Question (source)
2016-01-20 06:13:05 -0500 received badge  Famous Question (source)
2016-01-18 09:43:34 -0500 received badge  Notable Question (source)
2016-01-18 04:28:44 -0500 received badge  Popular Question (source)
2016-01-17 03:44:41 -0500 asked a question cinder high availability

Hi, I'm using the Icehouse release on CentOS 6 servers. I'm currently looking for a high-availability solution for Cinder. Using LVM, Cinder needs a volume group named "cinder-volumes", so my idea is to build a 2-node corosync/pacemaker/DRBD cluster to:

  1. create a physical DRBD device, e.g. /dev/drbd0
  2. create a PV on top of /dev/drbd0
  3. use the PV to create a VG named "cinder-volumes"
  4. create a pacemaker resource for DRBD (master/slave)
  5. create a pacemaker resource for LVM
  6. .... VIP.... fencing...
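Step 1 above could be sketched roughly like this (hostnames, the backing disk and the addresses are placeholders, not values from the setup described here; DRBD 8.x syntax):

```
# /etc/drbd.d/cinder.res (sketch)
resource cinder {
    device    /dev/drbd0;
    disk      /dev/sdb1;      # backing block device (placeholder)
    meta-disk internal;
    on node1 {
        address 10.0.0.1:7788;
    }
    on node2 {
        address 10.0.0.2:7788;
    }
}
```

On top of /dev/drbd0 you would then create the PV and the "cinder-volumes" VG (steps 2-3), with pacemaker promoting DRBD and activating the VG on one node at a time.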

In this way, if I start openstack-cinder-volume only on the cluster, I should have high availability for the "cinder-volumes" VG. My question is: is this the right way? I'm currently using CentOS 7 for that cluster, and I have problems with the openstack-cinder-volume resource agent. Furthermore, it seems that /etc/cinder/volumes/ should be shared between the cluster nodes. Is this mandatory? If anyone is using other solutions, please share them. Thank you

2015-12-03 02:21:26 -0500 received badge  Famous Question (source)
2015-12-03 02:21:26 -0500 received badge  Notable Question (source)
2015-10-07 11:42:12 -0500 received badge  Famous Question (source)
2015-09-28 01:44:57 -0500 received badge  Notable Question (source)
2015-09-27 03:05:47 -0500 received badge  Popular Question (source)
2015-09-26 13:43:17 -0500 asked a question upgrade from icehouse to juno

Dear all, I have to upgrade my OpenStack environment from Icehouse to Juno, and this post contains plenty of questions. Please don't just tell me that an official upgrade guide from Icehouse to Juno exists; the guide is a starting point, but I need more precise answers. My current configuration is:

  1. 2 controller nodes in HA with MySQL, RabbitMQ, Keystone, Glance, Nova and neutron-server (no Cinder, no Swift)
  2. 1 network node with neutron-openvswitch-agent, metadata-agent, dhcp-agent and l3-agent; I'm using Neutron with GRE for the network part
  3. 12 compute nodes with nova-compute and neutron-openvswitch-agent

My OS is CentOS 6 on all servers, and since Juno is not supported in EPEL6, I have to upgrade the OS to CentOS 7.

CONTROLLER UPGRADE: As suggested in the guide, I have to upgrade the controllers first. My idea is to install a completely new ("temp") controller with CentOS 7 and Icehouse, import the database from production, and perform the upgrade to Juno on this new server (following the official guide). At that point I could shut down my old Icehouse controllers and change the IP of the new Juno server. My first question: since I will install the new controller on the same management network as the Icehouse controllers, could having two rabbitmq-servers on the same management network be a problem? I'm thinking of multicast packets. Should I install the new controller on a separate network and change its IP only once the Icehouse controllers are powered off? Also, what problems does an OpenStack environment without a controller have from the VM point of view? I'm thinking of the DHCP lease time inside the VMs: what happens if the lease expires while the controller is down (more precisely, while RabbitMQ and neutron-server are down)? Will the VMs succeed in renewing their IPs?

NETWORK UPGRADE: For the network node I will install a completely new CentOS 7 server with Juno. Then, using the controller and the Neutron CLI, I have to change the agent associated with the l3-router and the agent associated with DHCP. I tested this in a virtualized environment and it works well! The downtime of the VMs is really short (about 2 or 3 seconds if done with a script). I need your opinion here; this could be a useful guide for other people.

COMPUTE UPGRADE: For compute, my idea is to install a new compute node with CentOS 7 and Juno. Then, with a "manual" script, I migrate instances from an Icehouse node to the new Juno node, updating the database. I have tested this script; if anyone is interested, please send me an email. This step involves downtime for the instances, because I have to scp the disk file from one compute node to the other.

Finally, when all compute nodes are migrated, I can reinstall my 2 controllers in HA with pacemaker/corosync, and I will be a happy OpenStack administrator :) Please give me some suggestions. I have been upgrading OpenStack since Grizzly, and this is a complicated task! Thanks

2015-08-06 04:09:11 -0500 received badge  Notable Question (source)
2015-08-06 02:39:07 -0500 received badge  Popular Question (source)
2015-08-06 01:40:52 -0500 commented answer cannot resize. is_image_partitionless

Sorry, I have the same behavior with inject_partition=-1 and inject_partition=-2. Are there other solutions? I have searched everywhere and found none.

2015-08-05 10:40:15 -0500 asked a question cannot resize. is_image_partitionless

Hi, I'm using OpenStack (Icehouse) with virt-manager and 3 VMs (controller, compute, and network with Neutron and GRE). When I try to launch a new cirros 0.3.4 image, I get an error on the compute node:

DEBUG nova.virt.disk.api [req-e98d3647-5bff-4524-8d3f-c3275f10f607 66713c43084e4163ad01a79d66f96276bfd8576b0dae41a9bdd67b58e3c835a4] 
Unable to mount image /var/lib/nova/instances/5e0693fc-891a-43d4-be78-33ec47bc7b07/disk with error 
Error mounting /var/lib/nova/instances/5e0693fc-891a-43d4-be78-33ec47bc7b07/disk with libguestfs (mount_options: /dev/sda on / (options: ''): mount: you must specify the filesystem type). 
Cannot resize. is_image_partitionless /usr/lib/python2.6/site-packages/nova/virt/disk/

On the compute node, I have virt_type = qemu and inject_partition = -2
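For reference, a sketch of the relevant nova.conf settings; the config-drive option is an assumption on my part (verify the exact option name and value for Icehouse) and is one commonly suggested way to sidestep libguestfs file injection entirely:

```ini
# /etc/nova/nova.conf (sketch)
[libvirt]
virt_type = qemu
inject_partition = -2        # -2 disables disk injection

[DEFAULT]
# Assumption: serve metadata via a config drive instead of injecting
# files with libguestfs. Verify this option's name/value for Icehouse.
force_config_drive = always
```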

Can someone help me, please? I cannot find a solution.


2015-07-24 14:18:30 -0500 received badge  Famous Question (source)
2015-07-15 22:39:55 -0500 received badge  Nice Question (source)
2015-05-26 22:47:21 -0500 received badge  Famous Question (source)
2015-05-11 02:23:30 -0500 received badge  Notable Question (source)
2015-05-08 06:42:02 -0500 received badge  Popular Question (source)
2015-05-07 05:22:49 -0500 asked a question corrupted neutron database in icehouse

Hi, after an upgrade from Havana to Icehouse I think my Neutron DB is corrupted. I'm using Neutron + GRE with Open vSwitch and ML2, and I have two networks with the same segmentation_id. In particular:

networks table: (screenshot)

ml2_gre_allocations table: (screenshot)

First, note that the networks were not deleted from the ml2_network_segments table. How is this possible? Furthermore, I see that only gre_id 2 and 3 are allocated, but I have an active network ("test-net") with segmentation_id = 1. Another important point: the external (shared) network "Ext_Net" has the same segmentation_id (= 2) as "cvg_net". Why?

How can I fix this situation? Please help me. I don't want to drop the entire Neutron database, because I don't want to lose all my security groups.... Thank you

2015-05-07 01:34:31 -0500 answered a question icehouse upgrade, missing qbr

I solved it by creating a new private subnet and using the nova interface-detach and nova interface-attach commands. These commands recreated all the interfaces after a reboot of the VMs. Thanks to SGPJ

2015-05-07 01:30:15 -0500 received badge  Notable Question (source)
2015-04-30 10:24:00 -0500 commented answer neutron gre segmentation id unique?

I have the same problem. How did you solve it? Should I delete the networks with the same segmentation ID and then recreate them?

2015-04-28 09:39:19 -0500 received badge  Popular Question (source)
2015-04-28 09:12:04 -0500 commented question icehouse upgrade, missing qbr

When I launch a new VM, its interfaces appear. Furthermore, for new instances I see the value "{"port_filter": true, "ovs_hybrid_plug": true}" in the vif_details column of the ml2_port_bindings table; this field is empty for instances imported from Havana. DHCP still doesn't work...

2015-04-28 04:02:47 -0500 asked a question icehouse upgrade, missing qbr

Hi, I've upgraded from Havana to Icehouse, and now the qbr, qvo and qvb interfaces are missing on the compute node; the tap interface is connected directly to the integration bridge. Apparently the security group rules are still working (and I don't know why). Furthermore, the VMs continuously send DHCP requests even though an IP is assigned (a firewall problem?). I'm using ML2 with GRE. Is there some problem with the Neutron database? Please help me!
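For context: the qbr/qvo/qvb bridges are only wired up when hybrid VIF plugging is in use, which with ML2/OVS depends on the hybrid iptables firewall driver being configured on the compute node. A sketch (section and option names as in the ML2/OVS agent configuration; verify them for Icehouse):

```ini
# ML2/OVS agent configuration on the compute node (sketch)
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```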

Thank you

2015-04-27 11:21:59 -0500 received badge  Taxonomist
2015-03-10 00:39:23 -0500 received badge  Famous Question (source)
2015-01-30 09:47:18 -0500 received badge  Famous Question (source)