gkeller2's profile - activity

2017-01-27 03:01:16 -0500 received badge  Necromancer (source)
2014-07-31 14:01:55 -0500 received badge  Famous Question (source)
2014-07-22 15:11:07 -0500 commented question Cannot ping virtual router

@dbaxps: trying to bring the interface UP with the following command doesn't seem to have any effect either:

ip netns exec qrouter-0207cda2-e97e-47ae-b088-7cc7b38349f6 ip link set dev qr-a3bb5c35-62 up
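After the `ip link set ... up` attempt, the administrative state can be read back with `ip -o link show` inside the same namespace. A minimal sketch of checking the state field; the sample output line below is an assumption (only the device name comes from this thread):

```shell
# Sample one-line output, roughly as
# 'ip netns exec qrouter-... ip -o link show dev qr-a3bb5c35-62'
# might print it (this captured line is an assumption, not from the thread).
line='14: qr-a3bb5c35-62: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN'

# Pull out the administrative state field to confirm whether the link came up.
state=$(echo "$line" | sed -n 's/.*state \([A-Z]*\).*/\1/p')
echo "$state"
```

If the state still reads DOWN after the `up` command, the problem likely lies below the link layer (e.g., the agent managing the port), not in the command itself.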
2014-07-22 15:09:58 -0500 commented question Cannot ping virtual router

@dbaxps: Sorry for the delay. Cannot access the VM through the dashboard. However, I discovered that though the router is active, the interface qr-a3bb5c35-62 (192.168.1.1) is DOWN and there's no info about the gateway. See: https://dl.dropboxusercontent.com/u/4...

2014-07-21 13:33:36 -0500 received badge  Notable Question (source)
2014-07-21 09:15:10 -0500 commented answer ovs_neutron_agent out of sync with plugin, agent tunnel out of sync with plugin

@dwyaneini, glad I could help with the little I know. =)

2014-07-18 08:40:36 -0500 answered a question ovs_neutron_agent out of sync with plugin, agent tunnel out of sync with plugin

Hi, dwyaneini. It's my understanding -- I read about it in the last two weeks or so -- that when the agents first boot up, they are out of sync, and that's normal behaviour. They then synchronize, but no message is written back to the logs, AFAIK. Regrettably, I don't remember whether I read it here on Ask OpenStack, on a mailing list, or elsewhere.

2014-07-17 15:13:53 -0500 received badge  Popular Question (source)
2014-07-17 13:32:52 -0500 commented question Cannot ping virtual router

I hadn't gotten as far as installing Horizon yet -- I encountered this problem while setting up Neutron -- so I'll do that now and get back to you ASAP!

2014-07-17 13:03:50 -0500 commented question Cannot ping virtual router

I have a test VM running CirrOS 0.3.1, attached to the private network with IP 192.168.1.3. How should I go about executing a command inside the VM if I cannot ssh into it, neither from the network node nor from the compute node?

2014-07-17 12:45:31 -0500 commented question Cannot ping virtual router

Posted results under the header < UPDATE 2 >.

2014-07-17 11:15:29 -0500 asked a question Cannot ping virtual router

Hello, community.

I'm working on a 3-node deployment (i.e., controller, network and compute) of OpenStack Icehouse on CentOS 6.4, following the official installation guide for CentOS found at openstack.org.

I have installed and configured every service on its respective node, created networks and subnets and a virtual router, and connected all the required ports. However, when I try to verify connectivity by pinging the virtual router's external IP address, the ping fails:

[root@ecco-vmhost-id19 ~]# ping -c 4 199.241.160.101
PING 199.241.160.101 (199.241.160.101) 56(84) bytes of data.
From 199.241.160.40 icmp_seq=2 Destination Host Unreachable
From 199.241.160.40 icmp_seq=3 Destination Host Unreachable
From 199.241.160.40 icmp_seq=4 Destination Host Unreachable

--- 199.241.160.101 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 12999ms
pipe 3

Open vSwitch configuration:

** NETWORK NODE **
[root@ecco-vmhost-id19 ~]# ovs-vsctl show
72c4a8f4-63c3-4d5a-8129-e5e32ade5f93
    Bridge br-int
        Port "tap557eb48f-92"
            tag: 4095
            Interface "tap557eb48f-92"
                type: internal
        Port "qr-a3bb5c35-62"
            tag: 4095
            Interface "qr-a3bb5c35-62"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-0a12171f"
            Interface "gre-0a12171f"
                type: gre
                options: {in_key=flow, local_ip="10.18.23.21", out_key=flow, remote_ip="10.18.23.31"}
    Bridge br-ex
        Port "qg-203cfbd3-86"
            Interface "qg-203cfbd3-86"
                type: internal
        Port "eth0.2"
            Interface "eth0.2"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.11.0"

** COMPUTE NODE **
[root@ecco-vmhost-id21 ~]# ovs-vsctl show
70d35d08-6f92-4a37-8e36-f987f04532bf
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvodde8197f-9a"
            tag: 1
            Interface "qvodde8197f-9a"
        Port "qvo03253756-a7"
            tag: 1
            Interface "qvo03253756-a7"
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port "gre-0a121715"
            Interface "gre-0a121715"
                type: gre
                options: {in_key=flow, local_ip="10.18.23.31", out_key=flow, remote_ip="10.18.23.21"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "1.11.0"
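
One quick sanity check on the tunnel mesh: each side's local_ip must be the other side's remote_ip, or the GRE tunnel will never carry traffic. A small sketch using the values copied from the two outputs above (here they do line up, so the tunnel endpoints themselves are probably not the culprit):

```shell
# GRE tunnel endpoints copied from the two 'ovs-vsctl show' outputs above.
net_local="10.18.23.21";  net_remote="10.18.23.31"   # network node, gre-0a12171f
cmp_local="10.18.23.31";  cmp_remote="10.18.23.21"   # compute node, gre-0a121715

# Each side's local_ip must equal the other side's remote_ip.
if [ "$net_local" = "$cmp_remote" ] && [ "$net_remote" = "$cmp_local" ]; then
    result="endpoints mirror each other"
else
    result="endpoint mismatch"
fi
echo "$result"
```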

One issue that may be relevant: the machines I'm working with have only one NIC available. Since the guide indicates that the different nodes require one to three NICs, I've created VLAN subinterfaces on the one NIC as follows (e.g., network node):

** NETWORK NODE **
[root@ecco-vmhost-id19 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
DELAY=0

[root@ecco-vmhost-id19 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0.1
DEVICE=eth0.1
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
VLAN=yes
IPADDR=10.18.23.21
NETMASK=255.255.255.0

[root@ecco-vmhost-id19 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0.2
DEVICE=eth0.2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
VLAN=yes
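
Since eth0.2 is deliberately left without an address (it is enslaved to br-ex), the external address would normally live on the bridge itself. A hedged sketch of what an ifcfg-br-ex could look like in this layout; the option names follow the RHEL/CentOS openvswitch ifcfg conventions, and the placeholder address is an assumption, not a value from this deployment:

```
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=<external address assigned to the network node>
NETMASK=255.255.255.0
```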

Is there any reason why this setup would not work?

Any assistance would be deeply appreciated. Thanks for taking the time.

< UPDATE >

[root@ecco-vmhost-id19 ~]# ip netns list
qrouter-0207cda2-e97e-47ae-b088-7cc7b38349f6
qdhcp-a5444285-1e09-4b15-9db6-3c932ea09f06

[root@ecco-vmhost-id19 ~]# ip netns exec qdhcp-a5444285-1e09-4b15-9db6-3c932ea09f06 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1 ...
2014-07-15 17:21:42 -0500 received badge  Commentator
2014-07-15 17:21:42 -0500 commented answer How to start Nova service from source code?

Run the following command to find out the proper name of the services you are looking for:

service --status-all | grep openstack

In my case, the services are called -- most of them -- openstack-something. E.g., openstack-nova-scheduler.
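
The filtering step above can be sketched end to end against simulated output (the sample lines below are assumptions; only the openstack-* naming convention comes from this comment):

```shell
# Simulated 'service --status-all' output; the exact lines are invented
# for illustration, following the openstack-* naming convention.
status=$(printf '%s\n' \
    'openstack-nova-scheduler (pid 1234) is running...' \
    'sshd (pid 987) is running...' \
    'openstack-nova-conductor (pid 1235) is running...')

# The filter from the comment above, extracting just the service names.
names=$(echo "$status" | grep openstack | awk '{print $1}')
echo "$names"
```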

2014-07-15 11:11:53 -0500 commented answer How to start Nova service from source code?

@anhdenday: you probably don't need all those services. For example, my deployment is only running cert, consoleauth, scheduler, and conductor in the controller node. Start each service with the following command: service nova-cert restart.

2014-07-14 17:13:43 -0500 commented answer failed to bind port neutron

@caraboga, @Roberto Cano Fernandez, @Chris: check out FuzzyDog's solution in this thread [Error: Unexpected vif_type=binding_failed]. I don't know if you are facing the same issue, but that solution worked for me.

2014-07-14 16:59:43 -0500 commented answer [Error: Unexpected vif_type=binding_failed]

@FuzzyDog, thank you so much for coming back to the forum and posting your solution. I'm not sure how that happened, but re-running that command and restarting all networking services did the trick for me. I was getting the error Failed to bind port... when trying to set up a network. Now it works.

2014-06-11 14:00:20 -0500 commented answer failed to bind port neutron

Please, do explain what mistake you had, so that others may learn from it.

2014-06-11 09:55:42 -0500 answered a question rdo - neutron - multinode - single nic ?

Somebody recently wrote an article about this issue. Might be worth checking out: http://fosskb.wordpress.com/2014/06/1...

2014-06-06 12:49:11 -0500 received badge  Self-Learner (source)
2014-06-06 12:49:11 -0500 received badge  Teacher (source)
2014-06-06 12:17:38 -0500 answered a question nova-compute fails to connect to nova-conductor

Following the suggestion from dbaxps, I replaced qpid with RabbitMQ. This allowed the nova-compute service running on the compute node to connect to the nova-conductor service on the controller node.

I'm still unclear on why qpid was failing, and I probably won't ever find out. I'll likely stick with RabbitMQ in future deployments.
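
For anyone making the same switch, the Icehouse install guide configures the RabbitMQ backend in nova.conf roughly like this (a sketch; RABBIT_PASS is a placeholder, and the exact option set for your release should be checked against the guide):

```
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
```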

2014-06-05 19:44:00 -0500 received badge  Famous Question (source)
2014-06-04 06:32:01 -0500 received badge  Enthusiast
2014-05-31 14:15:31 -0500 received badge  Notable Question (source)
2014-05-30 17:00:24 -0500 received badge  Student (source)
2014-05-30 15:06:06 -0500 received badge  Popular Question (source)
2014-05-30 14:21:59 -0500 commented question nova-compute fails to connect to nova-conductor

Good. I'll check it out next. Would you mind telling me what settings you are using in nova.conf to use rabbitmq instead of qpid, please?

2014-05-30 13:56:00 -0500 commented question nova-compute fails to connect to nova-conductor

According to the installation guide, Qpid was the recommended broker for Red Hat, CentOS and Fedora. Do you suspect Qpid might be causing the problem? In any case, I'll give it a shot -- I have nothing to lose at this stage. Thanks.

2014-05-30 13:26:09 -0500 commented question nova-compute fails to connect to nova-conductor

OpenStack Icehouse.

2014-05-30 13:10:07 -0500 commented question nova-compute fails to connect to nova-conductor

CentOS 6.4. Sorry for not clarifying that before. I'll edit the question.

2014-05-30 11:23:23 -0500 commented question nova-compute fails to connect to nova-conductor

Tried that several times to no effect. I've just tried again, and nova-compute still fails to connect to nova-conductor. Thanks for the suggestion, though.

2014-05-30 11:05:18 -0500 received badge  Editor (source)
2014-05-30 11:02:37 -0500 asked a question nova-compute fails to connect to nova-conductor

Hi. I've been following the OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora ( http://docs.openstack.org/trunk/install-guide/install/yum/content/ ) to make a 3-node deployment (i.e., controller, network and compute) of OpenStack Icehouse on CentOS 6.4.

All services are installed on their respective nodes and appear to be functioning. However, the nova-compute service on the compute node fails to connect to the nova-conductor service on the controller node (though both services are connected to the message broker, qpid, running on the controller node).

The following is a snippet of /var/log/nova/compute.log after restarting the service nova-compute in the compute node:

2014-05-30 11:55:01.746 20946 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2014-05-30 11:55:01.856 20946 INFO nova.virt.driver [-] Loading compute driver 'libvirt.LibvirtDriver'
2014-05-30 11:55:01.906 20946 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on controller:5672
2014-05-30 11:55:01.932 20946 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on controller:5672
2014-05-30 11:55:11.953 20946 WARNING nova.conductor.api [req-7acb5bd3-f2ae-4f0f-996a-0da5e51279b3 None None] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor?

Any assistance (or suggestion) would be appreciated. I can provide any additional information that would help troubleshoot the issue.

Thanks.

UPDATE:

  • I'm running CentOS 6.4 in all three nodes.
  • I'm trying to deploy OpenStack Icehouse.
2014-05-20 10:21:04 -0500 received badge  Supporter (source)
2014-05-01 16:59:34 -0500 commented answer Why do we need two network interfaces for openstack nodes?

@Saba, did you figure out how to deploy OpenStack using only one NIC on the nodes? I'm trying to do the same and haven't yet succeeded. Any help would be appreciated.