mladen.grujic's profile - activity

2018-10-29 04:07:43 -0600 received badge  Famous Question (source)
2018-09-10 08:47:34 -0600 received badge  Notable Question (source)
2018-09-10 01:18:04 -0600 received badge  Popular Question (source)
2018-09-07 05:03:58 -0600 asked a question unable to launch a second instance from a volume on a compute node

Hi,

As the title states, I am able to create a volume from an image and then launch an instance from that volume. But when I create another volume and try to launch an instance from it (on the SAME compute node where the first instance is running), the launch fails.
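For reference, the workflow is roughly the following; the image, flavor, network, volume names and sizes are placeholders, not the exact ones used:

# names and sizes below are placeholders
openstack volume create --image <image> --size <size_gb> vol-01
openstack server create --volume vol-01 --flavor <flavor> --network <net> vm-01
# second volume + instance, ending up on the same compute node as vm-01
openstack volume create --image <image> --size <size_gb> vol-02
openstack server create --volume vol-02 --flavor <flavor> --network <net> vm-02

The second server create is the one that fails; nova-compute on that node logs the following: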

2018-09-07 11:12:58.482 1784942 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpZyBsLx/privsep.sock']
2018-09-07 11:12:58.979 1784942 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] Spawned new privsep daemon via rootwrap
2018-09-07 11:12:58.937 1785220 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep daemon starting
2018-09-07 11:12:58.941 1785220 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep process running with uid/gid: 0/0
2018-09-07 11:12:58.945 1785220 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2018-09-07 11:12:58.946 1785220 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep daemon running as pid 1785220
2018-09-07 11:13:00.682 1784942 INFO nova.virt.libvirt.driver [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Creating image
2018-09-07 11:13:00.692 1784942 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpSuoEAr/privsep.sock']
2018-09-07 11:13:01.176 1784942 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] Spawned new privsep daemon via rootwrap
2018-09-07 11:13:01.135 1785241 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep daemon starting
2018-09-07 11:13:01.139 1785241 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep process running with uid/gid: 0/0
2018-09-07 11:13:01.144 1785241 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
2018-09-07 11:13:01.144 1785241 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep daemon running as pid 1785241
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [req-577b44bc-01d7-4aeb-8c84-f5e3cb7c0a9d fe87f17ed9a24e83a35c871726108067 5adead39cd364593ac89e8010e553ecb - default default] [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Instance failed to spawn: ProcessExecutionError: Unexpected error while running command.
Command: mount -t nfs 10.38.250.51:/data/volumes /var/lib/nova/mnt/07ab66f11b778324939eddc9c1015ac6
Exit code: 32
Stdout: u''
Stderr: u'mount.nfs: /var/lib/nova/mnt/07ab66f11b778324939eddc9c1015ac6 is busy or already mounted\n'
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Traceback (most recent call last):
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2251, in _build_resources
2018-09-07 11:13:01.306 1784942 ...
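
The error itself says the NFS share is "busy or already mounted" at that path, presumably left over from the first volume attachment. For reference, whether something is still mounted there can be checked on the compute node with something like the following (path taken from the error above):

mount | grep 07ab66f11b778324939eddc9c1015ac6
findmnt /var/lib/nova/mnt/07ab66f11b778324939eddc9c1015ac6
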
2017-12-13 12:43:23 -0600 received badge  Enthusiast
2017-12-12 18:46:21 -0600 asked a question Segmentation_id doesn't match port tag

Hi, I am using the Ocata version of OpenStack with Open vSwitch. Upon creating networks and ports I noticed that the segmentation_id (Neutron) doesn't match the port tag (OVS):

root@controller:/home/user# ovs-vsctl list port  tap11cdd2de-78
_uuid               : 831fc61e-1efe-47f6-a373-9a4784d69940
bond_downdelay      : 0
bond_fake_iface     : false
fake_bridge         : false
interfaces          : [6f3b6896-78f1-4cb3-a7d1-b494f13b95a6]
name                : "tap11cdd2de-78"
other_config        : {net_uuid="7342af7b-4e52-4675-9bbe-45e44941a66b", network_type=vlan, physical_network="physnet1", segmentation_id="1101", tag="2"}
tag                 : 2
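
For comparison, the segmentation_id that Neutron assigned can be read straight from the network referenced by net_uuid above; something along these lines should show it (assumes the openstack client and admin credentials):

openstack network show 7342af7b-4e52-4675-9bbe-45e44941a66b | grep segmentation_id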

ovs-vsctl show

root@controller:/home/user#  ovs-vsctl show
06e454f8-accb-45c0-9db2-f743a6b22e13
Manager "ptcp:6640:127.0.0.1"
    is_connected: true
Bridge br-int
    Controller "tcp:127.0.0.1:6633"
        is_connected: true
    fail_mode: secure
    Port "tap11cdd2de-78"
        tag: 2
        Interface "tap11cdd2de-78"
            type: internal

ovs-ofctl dump-flows br-ex

root@controller:/home/user # ovs-ofctl dump-flows br-ex
NXST_FLOW reply (xid=0x4):
cookie=0xa6e56c73e3430aaf, duration=344414.654s, table=0, n_packets=2120, n_bytes=206576, idle_age=8361, hard_age=65534, priority=4,in_port=6,dl_vlan=1 actions=mod_vlan_vid:1102,NORMAL
cookie=0xa6e56c73e3430aaf, duration=247637.833s, table=0, n_packets=24, n_bytes=2132, idle_age=65534, hard_age=65534, priority=4,in_port=6,dl_vlan=2 actions=mod_vlan_vid:1101,NORMAL
cookie=0xa6e56c73e3430aaf, duration=344415.807s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=6 actions=drop
cookie=0xa6e56c73e3430aaf, duration=344415.810s, table=0, n_packets=30413791, n_bytes=9635454555, idle_age=0, hard_age=65534, priority=0 actions=NORMAL

After creating instances on this network I am able to ping and access them from a different VLAN, but my question is: is it the default behavior for the segmentation_id and the tag to have different values? And is it possible to make OVS use the same VLAN tag as the segmentation_id when the port is created from the CLI?
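
As a quick way to compare the two values side by side for every port, something like the command below lists the OVS tag together with the segmentation_id recorded in other_config (note that the dl_vlan=2 actions=mod_vlan_vid:1101 flow on br-ex above is the rewrite between those two values):

ovs-vsctl --columns=name,tag,other_config list port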
