
MonsieurWilson's profile - activity

2018-08-14 21:10:46 -0500 received badge  Famous Question (source)
2017-11-14 09:56:56 -0500 received badge  Student (source)
2017-11-14 09:56:52 -0500 received badge  Self-Learner (source)
2017-08-21 04:51:39 -0500 received badge  Necromancer (source)
2017-08-21 04:51:39 -0500 received badge  Teacher (source)
2017-05-05 01:22:28 -0500 received badge  Notable Question (source)
2017-04-06 19:48:14 -0500 received badge  Famous Question (source)
2017-03-23 02:52:42 -0500 received badge  Popular Question (source)
2017-03-23 02:48:56 -0500 answered a question nova-compute state is down

Maybe you should check the value of state_path in nova.conf. Here are the relevant lines from the nova configuration reference:

#
# The top-level directory for maintaining Nova's state.
#
# This directory is used to store Nova's internal state. It is used by a
# variety of other config options which derive from this. In some scenarios
# (for example migrations) it makes sense to use a storage location which is
# shared between multiple compute hosts (for example via NFS). Unless the
# option ``instances_path`` gets overwritten, this directory can grow very
# large.
#
# Possible values:
#
# * The full path to a directory. Defaults to value provided in ``pybasedir``.
#  (string value)
state_path=/var/lib/nova

It seems that if the state_path value isn't provided, its default value falls back to pybasedir, which explains the '/usr/lib/python2.7/dist-packages/instances' path in your error log.
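
If that is the cause, explicitly setting state_path and restarting the compute service should be enough. A minimal sketch, assuming an RDO/CentOS install where the service is named openstack-nova-compute:

/* /etc/nova/nova.conf on the compute node */
[DEFAULT]
state_path = /var/lib/nova

$systemctl restart openstack-nova-compute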

Hope that helps.

P.S. I'm on Ocata, and the default value of state_path is /var/lib/nova in Newton. Maybe it's a new bug?

2017-03-15 05:05:15 -0500 answered a question Router status is down when integrated with OpenDayLight

Good news. Network finally works fine with version distribution-karaf-0.4.4-Boron-SR2.

2017-03-03 01:35:16 -0500 received badge  Famous Question (source)
2017-03-02 23:50:05 -0500 answered a question Unable to login to horizon

First, try commenting out the line SESSION_ENGINE = 'django.contrib.sessions.backends.cache'.

If that doesn't help, then try to use CONTROLLER_NODE_IP instead of controller here:

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
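
For example, if the controller's management IP were 10.0.0.11 (a placeholder, substitute your own), the block would become:

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '10.0.0.11:11211',
    }
}

Also make sure memcached on the controller is actually listening on that address and not only on 127.0.0.1.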
2017-03-02 23:45:08 -0500 commented answer Router status is down when integrated with OpenDayLight

It's pretty strange that the VM can't SSH to outside nodes after associating a floating IP. Life sucks...

2017-03-02 20:50:55 -0500 commented answer Router status is down when integrated with OpenDayLight

Maybe there are still some problems with ODL DNAT? Any other ideas?

2017-03-02 20:50:06 -0500 commented answer Router status is down when integrated with OpenDayLight

I'm afraid we ran into different cases? I can SSH into the controller/compute/network node from the VM, but can't SSH into the VM from any node. Still, thanks a lot. The ODL Summit 2016 slides show that the VMs can be pinged from outside, so I just believed that and had never tried SSH from the VM.

2017-03-02 17:07:13 -0500 received badge  Notable Question (source)
2017-03-01 02:37:25 -0500 received badge  Popular Question (source)
2017-02-28 21:42:31 -0500 asked a question Router status is down when integrated with OpenDayLight

Hi there,

I am trying to integrate OpenDaylight Boron with OpenStack Newton, but have encountered some problems. I followed http://docs.opendaylight.org/en/stable-boron/submodules/netvirt/docs/openstack-guide/openstack-with-netvirt.html (OpenStack with NetVirt docs) on three CentOS 7.1 servers.

At first, I deployed OpenStack with 1 controller, 1 network, and 1 compute node, and everything worked fine. (Not devstack.)

Then I installed the OpenDaylight odl-router on my OpenStack cloud and made every configuration change according to the link above.

Here is some information that may be important for helping me :)

OpenStack Deployment:

  • Controller Node: (neutron-server)
    • neutron.conf is modified correctly
    • Opendaylight controller is started and ${CONTROLLER}:8181/index.html looks fine.
  • Network Node:
    • neutron-openvswitch-agent is shut down
    • The Open vSwitch configuration is empty and the manager is set correctly (see the command sketch after this list)
  • Network Node: (neutron-dhcp-agent neutron-metadata-agent)
    • neutron-l3-agent is shut down (Is that correct? I'm not sure)
    • neutron-dhcp-agent & neutron-metadata-agent configuration is modified correctly
  • Each node has two interfaces, em1 and em2: em1 for the tunnel network and em2 for the provider network.
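
The Open vSwitch reset and manager setup on the nodes was along these lines (a sketch; ODL_IP stands for the OpenDaylight controller address, 10.24.84.144 in my case, and the service names assume CentOS):

$systemctl stop neutron-openvswitch-agent
$systemctl stop openvswitch
$rm -rf /var/log/openvswitch/* /etc/openvswitch/conf.db
$systemctl start openvswitch
$ovs-vsctl set-manager tcp:ODL_IP:6640
$ovs-vsctl show    # the Manager line should report is_connected: true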

Opendaylight Boron version:

  • distribution-karaf-0.5.2-Boron-SR2

Open vSwitch version: (I have tested following two versions, but encountered the same problem)

  • 2.6.1
  • 2.5.0

The kernel modules related to openvswitch look like this:

$lsmod | grep openvswitch
openvswitch           257678  4 vport_vxlan
nf_nat_ipv6            13279  1 openvswitch
nf_nat_ipv4            13263  2 openvswitch,iptable_nat
nf_defrag_ipv6         34651  2 openvswitch,nf_conntrack_ipv6
nf_defrag_ipv4         12729  2 openvswitch,nf_conntrack_ipv4
nf_nat                 21798  4 openvswitch,nf_nat_ipv4,nf_nat_ipv6,iptable_nat
nf_conntrack          101024  7 openvswitch,nf_nat,nf_nat_ipv4,nf_nat_ipv6,iptable_nat,nf_conntrack_ipv4,nf_conntrack_ipv6
gre                    13808  1 openvswitch
libcrc32c              12644  2 xfs,openvswitch

After configuring external network on Network Node, ovs-vsctl looks like:

$ovs-vsctl set Open_vSwitch . other_config:provider_mappings=provider:em2
$ovs-vsctl show
81d7f08b-80cb-45d7-a7c6-adcd58c99fe7
    Manager "tcp:10.24.84.144:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:10.24.84.144:6653"
            is_connected: true
        fail_mode: secure
        Port "em2"
            Interface "em2"
        Port br-int
            Interface br-int
                type: internal
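
As a sanity check, the provider mapping can be read back from the Open vSwitch database:

$ovs-vsctl get Open_vSwitch . other_config

which should include provider_mappings="provider:em2".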

But I got some errors in karaf.log:

2017-02-28 15:23:07,284 | ERROR | ool-46-thread-15 | InterfacemgrProvider             | 335 - org.opendaylight.genius.interfacemanager-impl - 0.1.2.Boron-SR2 | Interface 229203650765326:em2:flat is not present
...
2017-02-28 15:23:07,287 | ERROR | ool-46-thread-15 | VpnInterfaceManager              | 344 - org.opendaylight.netvirt.vpnmanager-impl - 0.3.2.Boron-SR2 | Handling addition of VPN interface 229203650765326:em2:flat skipped as interfaceState is not available 
...
2017-02-28 15:23:07,614 | ERROR | ool-46-thread-15 | NatInterfaceStateChangeListener  | 353 - org.opendaylight.netvirt.natservice-impl - 0.3.2.Boron-SR2 | Unable to process add for interface 229203650765326:em2 ,since Interface ConfigDS entry absent for the same
...
2017-02-28 15:23:07,614 | ERROR | ool-46-thread-15 | InterfaceStateChangeListener     | 344 - org.opendaylight.netvirt.vpnmanager-impl - 0.3.2.Boron-SR2 | Unable to process add for interface 229203650765326:em2 ,since Interface ConfigDS entry absent for the same
...
2017-02-28 15:23:07,617 | ERROR | ool-46-thread-15 | SubnetOpDpnManager               | 344 - org.opendaylight.netvirt.vpnmanager-impl - 0.3.2.Boron-SR2 | Cannot get, portOp for port 229203650765326:em2:flat is not available in datastore
...
2017-02-28 15:23:07,782 | ERROR | entLoopGroup-9-6 | DeviceFlowRegistryImpl           | 295 - org.opendaylight.openflowplugin.impl - 0.3.2.Boron-SR2 | Flow with flowId L2switch-5 already exists in table 0

Then I created a provider network and subnet ... (more)

2017-02-19 20:07:54 -0500 received badge  Notable Question (source)
2017-02-17 21:25:28 -0500 received badge  Popular Question (source)
2017-02-17 04:50:43 -0500 received badge  Enthusiast
2017-02-16 23:46:57 -0500 asked a question What's the neutron traffic flow between VMs?

I have recently been looking into the neutron traffic flow between VMs, and got confused by this part of the instructions in the OpenStack docs: https://docs.openstack.org/newton/networking-guide/deploy-ovs-selfservice.html (East-west scenario 1: Instances on the same network)

I used my local environment to do following tests:

env

  1. CentOS7
  2. OpenStack Newton

Deployment

  1. 1 Controller 1 Compute 1 Network
  2. Two VMs that belong to the same self-service subnet, both on the compute node, each with a floating IP

Action

SSH into one VM and send ICMP requests to the other one

Result

Interestingly, packets are forwarded through the overlay interface of the network node and also through the tap interface of the l3-agent router.

But according to the previous OpenStack Docs, VMs on the same network should communicate directly.
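
For reference, this kind of capture shows the behaviour (the overlay interface name and the router UUID are placeholders for my environment; VXLAN uses UDP port 4789 by default):

# on the network node: VXLAN-encapsulated traffic on the overlay interface
$tcpdump -n -i OVERLAY_IF 'udp port 4789'

# inside the router namespace: ICMP on the router ports
$ip netns
$ip netns exec qrouter-ROUTER_UUID tcpdump -n -i any icmp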

Questions:

  1. When the docs say VMs on the same network communicate directly, does that apply to floating IPs or only to fixed IPs? I think there is a difference.

  2. What could cause the VMs on the same network in my deployment to communicate via the network node? Are there some mistakes in my neutron configuration?


Configurations that may be relevant:

/etc/neutron/neutron.conf

[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router

/etc/neutron/plugin.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_vxlan]
vni_ranges = 1:1000

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true

/etc/neutron/plugins/ml2/openvswitch.ini

[agent]
tunnel_types = vxlan
l2_population = true

[ovs]
local_ip = **
bridge_mappings = provider:br-provider

[securitygroup]
firewall_driver = iptables_hybrid
2017-02-16 22:05:41 -0500 received badge  Famous Question (source)
2017-02-16 22:03:14 -0500 asked a question neutron traffic flow

While studying the neutron traffic flow recently, I ran into the following question:

In my own environment, traffic between VMs on the same (self-service) subnet is not forwarded directly without going through the router, as described in East-west scenario 1: Instances on the same network (https://docs.openstack.org/newton/networking-guide/deploy-ovs-selfservice.html).

My test method was:

  1. Assign floating IPs to two VMs on the same subnet, then ping one VM's floating IP from inside the other VM
  2. Capture packets on the overlay interface and on the router of the network node

The test shows that the traffic still goes through the router: ICMP packets can be seen on both the overlay interface of the network node and the tap port of the router.

Questions:

  1. When the docs say traffic between instances on the same network does not go through the router, does that refer to floating IPs or fixed IPs?

  2. If it also applies to floating IPs, is there some neutron configuration that I have missed?

Any guidance is appreciated!

Attached are the ovs, vxlan, and l2_population related options from my neutron configuration; they are almost identical to the Newton install-guide-rdo.

ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population

openvswitch-agent.ini

[agent]
tunnel_types = vxlan
l2_population = true
2017-02-15 21:54:49 -0500 received badge  Supporter (source)
2017-01-22 09:01:42 -0500 received badge  Notable Question (source)
2017-01-21 22:20:56 -0500 received badge  Popular Question (source)
2017-01-21 21:35:31 -0500 commented question PortNotFoundClient: Port * could not be found.

I haven't changed /etc/nova/policy.json, so it's empty. Does it matter?

2017-01-21 21:33:24 -0500 answered a question PortNotFoundClient: Port * could not be found.

Well, after I changed neutron.conf, the instance finally started. So ... what was the reason?

/* neutron.conf */
[DEFAULT]
auth_strategy = noauth

Update:

Oh my, I finally tracked it down. After reviewing my nova configuration carefully several times, I found that something important was missing...
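
For anyone hitting the same error: the missing piece in my case was somewhere in the nova configuration, and one section worth double-checking there is the [neutron] block that the install guide requires on both nodes. A sketch of what it looks like in the Newton guide (the controller hostname and NEUTRON_PASS are placeholders):

/* /etc/nova/nova.conf */
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS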

2017-01-13 10:49:33 -0500 asked a question PortNotFoundClient: Port * could not be found.

These days I have been focused on deploying a cluster with 1 controller node and 1 compute node. When following the steps in the Launch an instance part of the OpenStack install-guide-rdo, I can't launch the cirros instance. Here are some important logs from my controller node and compute node.

/* nova-compute.log on compute node*/
DEBUG nova.policy [req-707d1cf4-efca-4606-a082-7f87a08da2d9 96a180f421144cfe8f8d5bf582e7edf2 47465838620a4ccc9b9e30b1bbbb951d - - -] Policy check for network:attach_external_network failed with credentials {'domain': None, 'project_name': u'demo', 'project_domain': None, 'timestamp': '2017-01-13T10:47:57.475661', 'remote_address': u'10.24.67.190', 'quota_class': None, 'resource_uuid': None, 'is_admin': False, 'user': u'96a180f421144cfe8f8d5bf582e7edf2', 'service_catalog': [], 'tenant': u'47465838620a4ccc9b9e30b1bbbb951d', 'read_only': False, 'project_id': u'47465838620a4ccc9b9e30b1bbbb951d', 'user_id': u'96a180f421144cfe8f8d5bf582e7edf2', 'show_deleted': False, 'roles': [u'user'], 'user_identity': u'96a180f421144cfe8f8d5bf582e7edf247465838620a4ccc9b9e30b1bbbb951d - - -', 'is_admin_project': True, 'read_deleted': u'no', 'request_id': u'req-707d1cf4-efca-4606-a082-7f87a08da2d9', 'instance_lock_checked': False, 'user_domain': None, 'user_name': u'demo'} authorize /usr/lib/python2.7/site-packages/nova/policy.py:169
...
Instance failed network setup after 1 attempt(s)
...
ERROR nova.compute.manager PortNotFoundClient: Port bd814f00-befc-42ca-9983-2da3a335ffe3 could not be found.

/* nova-conductor.log on controller node*/
...
NoValidHost: No valid host was found. There are not enough hosts available.
...

I am pretty confused by this error, and even Google can't give me an answer. Could someone please tell me how to get this working?

P.S. All the configurations are the same as in http://docs.openstack.org/newton/install-guide-rdo/index.html, and I used two CentOS 7 virtual machines to deploy the controller node and the compute node.