
Following the guide to configure neutron, but cannot ping the gateway of demo-net

asked 2014-08-27 02:00:20 -0500

hugesu

Hello, I ran into a problem while configuring neutron following http://docs.openstack.org/icehouse/in...

I use three machines, named controller, compute1 and network, as the guide advises.

My demo-net, the tenant network, is 192.168.1.0/24.
My ext-net, the external network, is 203.0.113.0/24.

When I start an instance, it has no IP right after boot. I assign one manually with "sudo ifconfig eth0 192.168.1.4", but I still cannot ping 192.168.1.1, which is the gateway of demo-net.

Two instances can ping each other once I have configured IPs on them this way.

From a fourth machine with IP 203.0.113.12, which is connected to the same physical switch as the three machines above, I cannot ping 203.0.113.101, which should be the gateway of ext-net.

The following error appears about every 5 seconds:

[root@network stx]# tail -f /var/log/neutron/dhcp-agent.log | grep ERROR
2014-08-27 14:14:12.842 25378 ERROR neutron.agent.dhcp_agent [-] Unable to enable dhcp for cb6c1995-834f-43d4-a658-03a95de10585.

Could you please help me find out where I made a mistake?

Is any extra configuration needed on br-ex, br-int or br-tun?
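For reference, a quick sanity check of the OVS and DHCP-agent side on the network node can be sketched as below. This is a hedged sketch, not from the guide: the UUID is demo-net's ID from the `neutron net-list` output further down, and the commands assume the standard Icehouse three-node OVS layout.

```shell
# List the OVS wiring: br-int, br-tun and br-ex should all exist, and br-ex
# should carry the physical external-facing port.
ovs-vsctl show

# The DHCP agent runs one dnsmasq per network inside a namespace named
# qdhcp-<network-uuid>; for demo-net that namespace would be:
NET_ID=cb6c1995-834f-43d4-a658-03a95de10585
ip netns | grep "qdhcp-${NET_ID}"

# If the namespace is missing, the "Unable to enable dhcp" error above is
# consistent with instances booting without an address: dnsmasq never
# started for demo-net.
```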

My configuration is as below:

[root@controller stx]# keystone user-list
+----------------------------------+------------+---------+-------+
|                id                |    name    | enabled | email |
+----------------------------------+------------+---------+-------+
| 0e50b432440e413393ccb28558671677 |   admin    |   True  |       |
| 1590e07bf0594fb1a03fad70400893e5 | ceilometer |   True  |       |
| 5144a28f6d874d9b8ffcf4dcb8efc6d2 |    demo    |   True  |       |
| 286b1aa6f25842e0a5cf081b3c07bd28 |   glance   |   True  |       |
| 64e18f4fd5c849abb25c5b3c06ba4b0e |    heat    |   True  |       |
| 07a317be32224c59881273054a2f9e4d |  neutron   |   True  |       |
| 68068c72379749cc83ec3cdc4137a93b |    nova    |   True  |       |
| 5e2718c6e4324ce7a4127a4633c64144 |   swift    |   True  |       |
+----------------------------------+------------+---------+-------+

[root@controller stx]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| abe21dfa863a4c399721eed59ad1845b |  admin  |   True  |
| 28e76f4806fa455bb7b87f0e51232533 |   demo  |   True  |
| 8c8d2aab5f1c4e4182ad2982582612b2 | service |   True  |
+----------------------------------+---------+---------+

[root@controller stx]# keystone service-list
+----------------------------------+------------+----------------+------------------------------+
|                id                |    name    |      type      |         description          |
+----------------------------------+------------+----------------+------------------------------+
| 18c10b125f6543fe9926c62dbea9d1c1 | ceilometer |    metering    |          Telemetry           |
| 5389a02daee1433ca6afe40e23ec3a83 |   glance   |     image      |   OpenStack Image Service    |
| adeb3cd4e86a46bd87948e5938dcb923 |    heat    | orchestration  |        Orchestration         |
| 57c87b9a0f1146cfa27356578ae0e835 |  heat-cfn  | cloudformation | Orchestration CloudFormation |
| 0f14e7b4fed14a54a2af22fa74513dc7 |  keystone  |    identity    |      OpenStack Identity      |
| 1bf816e263c64373a921a4e600169f04 |  neutron   |    network     |     OpenStack Networking     |
| c0d92ebda3ad41989bb16b1b78c36fdf |    nova    |    compute     |      OpenStack Compute       |
| a562e30a6b5b44d7888629b6a9aefb84 |   swift    |  object-store  |   OpenStack Object Storage   |
+----------------------------------+------------+----------------+------------------------------+

[root@controller stx]# keystone endpoint-list
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+-----------------------------------------+----------------------------------+
|                id                |   region  |                  publicurl                   |                 internalurl                  |                 adminurl                |            service_id            |
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+-----------------------------------------+----------------------------------+
| 1b61b6bb817146c19ca6eb55c4066f02 | regionOne |            http://controller:9292            |            http://controller:9292            |          http://controller:9292         | 5389a02daee1433ca6afe40e23ec3a83 |
| 2b9e8187bc0f415e99ed7263a9713317 | regionOne |          http://controller:8000/v1           |          http://controller:8000/v1           |        http://controller:8000/v1        | 57c87b9a0f1146cfa27356578ae0e835 |
| 2c58721201134b7b9ff12e3ca5299d25 | regionOne |            http://controller:9696            |            http://controller:9696            |          http://controller:9696         | 1bf816e263c64373a921a4e600169f04 |
| 726fde8671424be2a391799c7bbdf7c7 | regionOne |         http://controller:5000/v2.0          |         http://controller:5000/v2.0          |       http://controller:35357/v2.0      | 0f14e7b4fed14a54a2af22fa74513dc7 |
| 7678152dc08444169897596facd752b6 | regionOne |   http://controller:8004/v1/%(tenant_id)s    |   http://controller:8004/v1/%(tenant_id)s    | http://controller:8004/v1/%(tenant_id)s | adeb3cd4e86a46bd87948e5938dcb923 |
| 9c9300f8577d448080cea33e1c395b5c | regionOne |            http://controller:8777            |            http://controller:8777            |          http://controller:8777         | 18c10b125f6543fe9926c62dbea9d1c1 |
| c09e18a471574dbd9f22fe1a80aa0390 | regionOne |   http://controller:8774/v2/%(tenant_id)s    |   http://controller:8774/v2/%(tenant_id)s    | http://controller:8774/v2/%(tenant_id)s | c0d92ebda3ad41989bb16b1b78c36fdf |
| f57bb26460ff4938a3b9cf24cf7f5b0d | regionOne | http://controller:8080/v1/AUTH_%(tenant_id)s | http://controller:8080/v1/AUTH_%(tenant_id)s |          http://controller:8080         | a562e30a6b5b44d7888629b6a9aefb84 |
+----------------------------------+-----------+----------------------------------------------+----------------------------------------------+-----------------------------------------+----------------------------------+

[root@controller stx]# neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| cb6c1995-834f-43d4-a658-03a95de10585 | demo-net | 6540ebe2-7be8-402c-8a2d-07e02314d812 192.168.1.0/24 |
| ee1a8f22-d68f-4e9b-9d03-0376c25bce8c | ext-net  | a8f783a3-2708-4b0d-a6a0-ed33f72b3728 203.0.113.0/24 |
+--------------------------------------+----------+-----------------------------------------------------+

[root@controller stx]# neutron subnet-list
+--------------------------------------+-------------+----------------+----------------------------------------------------+
| id                                   | name        | cidr           | allocation_pools                                   |
+--------------------------------------+-------------+----------------+----------------------------------------------------+
| 6540ebe2-7be8-402c-8a2d-07e02314d812 | demo-subnet | 192.168.1.0/24 | {"start": "192.168.1.2", "end": "192.168.1.254"}   |
| a8f783a3-2708-4b0d-a6a0-ed33f72b3728 ...
(more)

Comments

What's the status of the DHCP agent? Go to System Info -> Network Agents in your Horizon admin tab and check the status of all agents.

Khayam ( 2014-08-27 03:53:27 -0500 )

Thanks, and sorry for the late reply. I have five agents, with their hosts as below:

open vswitch agent(compute1)
metadata agent(network)
open vswitch agent(network)
L3 agent(network)
DHCP agent(network)

All of them show "Status" of "Enabled", "State" of "Up", and "Updated At" of 0 minutes ago.

hugesu ( 2014-08-27 19:50:41 -0500 )

2 answers


answered 2014-08-27 03:20:55 -0500

foexle

Hi,

so your VM doesn't configure its NIC correctly on boot, right? The most common cause is that the VM cannot reach the DHCP server.

Before booting a VM, try:

  • killall dnsmasq && service neutron-dhcp-agent restart

If that doesn't help, try:

  • service neutron-plugin-openvswitch-agent restart (on compute node & network/controller node)
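The restarts above can be sketched as one sequence with a verification step at the end. Note the service names are the ones from this answer; they vary by distribution (on RHEL/RDO the OVS agent service is named differently), so adjust for your system.

```shell
# On the network node: kill stale dnsmasq processes and restart the DHCP agent.
killall dnsmasq
service neutron-dhcp-agent restart

# On BOTH the compute and network/controller nodes: restart the OVS agent.
service neutron-plugin-openvswitch-agent restart

# Verify the DHCP agent respawned dnsmasq for the tenant network.
# The [d]nsmasq bracket trick stops grep from matching its own process line.
ps -ef | grep '[d]nsmasq'
```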

If that still doesn't help, could you paste which kernel you're using, your syslog (compute & network/controller node), and the instance boot log? :)

Cheers Heiko


Comments

Please post `ls -l` of /etc/neutron and also these files (use the update feature in the Answer field for formatting):
ml2_conf.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
plugin.ini -> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
dhcp_agent.ini
Make sure the dnsmasq processes are up and running.

dbaxps ( 2014-08-27 03:38:01 -0500 )

Also see http://bderzhavets.blogspot.com/2014/... regarding the OVS & ML2 configuration files.
What OS are you running?

dbaxps ( 2014-08-27 03:41:03 -0500 )

Dear dbaxps, I have the same problem as hugesu, but from the controller I can SSH into the instance. Following the link you posted, I do not see the 169.254.0.0 route in my instance's routing table. Please help me. The instance reports:
checking http://169.254.169.254/2009-04-04/instance-id failed
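A hedged sketch of how the metadata path can be checked when a neutron router is in use: the instance does not need a 169.254.0.0/16 route in that case, because the L3 agent proxies metadata requests inside the router's namespace. The router UUID below is a placeholder, not a value from this thread.

```shell
# From inside the instance: query the metadata endpoint directly.
curl -s http://169.254.169.254/2009-04-04/instance-id

# On the network node: the metadata proxy should be listening on port 9697
# inside the router namespace (replace <router-uuid> with your router's id
# from `neutron router-list`).
ip netns exec qrouter-<router-uuid> netstat -antp | grep 9697
```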

leethaoqn ( 2014-08-27 04:22:27 -0500 )

netstat -antp | grep 8775

dbaxps ( 2014-08-27 04:28:18 -0500 )

netstat -antp | grep 8775
on the compute node gives:

tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      1850/python

On the controller node it is the same; on the network node there is nothing.

leethaoqn ( 2014-08-27 21:16:25 -0500 )

answered 2014-08-27 22:05:30 -0500

hugesu

updated 2014-08-27 22:15:03 -0500

I rebooted all three of my systems, but that made things worse.

Now, newly started instances always remain in task state "scheduling".

The 8775 listener is running only on the controller. On the network node, "ps -ef | grep dnsmasq" returns nothing, even though I have added dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq to /etc/neutron/dhcp_agent.ini. I started the dnsmasq service manually, but that does not help.
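For comparison, the Icehouse install guide's /etc/neutron/dhcp_agent.ini for an OVS deployment boils down to the lines below (a sketch; verify against your own file). Also note that dnsmasq only needs to be installed as a package: the DHCP agent spawns one dnsmasq process per network itself, so starting the system dnsmasq service will not help and can even conflict with the agent's own instances.

```ini
[DEFAULT]
# Plug DHCP ports into Open vSwitch.
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Use dnsmasq as the DHCP backend (the agent spawns it per network).
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Run each network's dnsmasq in its own namespace.
use_namespaces = True
```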

I have posted the logs and config files for more help. Thank you!

The logs and configs are here: http://pan.baidu.com/s/1gdveCyn


Comments

My OS is RHEL 6.5 and the kernel is the default 2.6.32-431.el6.x86_64.

hugesu ( 2014-08-27 22:10:24 -0500 )

Did you install dnsmasq on the network node?

leethaoqn ( 2014-08-28 02:00:25 -0500 )
