LainesStack's profile - activity

2016-12-12 15:24:41 -0500 received badge  Famous Question (source)
2015-06-16 10:45:39 -0500 received badge  Notable Question (source)
2014-12-16 01:30:59 -0500 answered a question How to customize libvirt.xml for an instance ?

Is there any update on this? I'm facing the same issue and don't want to edit the code when it could be so easy to just edit the XML file...
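
A possible stopgap I am aware of, as a sketch only (the domain name instance-00000042 is hypothetical): the generated domain XML can be edited directly with virsh, keeping in mind that Nova typically regenerates it on hard reboot, rebuild, or migration, so such edits do not persist.

    # list the libvirt domains to find the one backing the Nova instance
    virsh list --all
    # edit the generated XML in place; changes are lost once Nova rebuilds the domain
    virsh edit instance-00000042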

2014-12-12 04:37:55 -0500 received badge  Famous Question (source)
2014-11-10 06:36:23 -0500 received badge  Nice Question (source)
2014-09-29 19:22:34 -0500 received badge  Popular Question (source)
2014-08-27 04:32:16 -0500 answered a question DHCP port binding fails on flat provider network

Hello, I'm facing exactly the same issue; unfortunately, my ml2_conf.ini looks fine.
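
One check that might help, as a sketch (the agent ID is a placeholder): in my understanding the DHCP port binding often fails when the L2 agent on the hosting node does not report a mapping for the provider network's physical_network, which can be verified from the agent's reported configuration rather than from the config file alone.

    # list the agents, then inspect the OVS agent on the node that should host the DHCP port
    neutron agent-list
    # look at bridge_mappings in the "configurations" field of the output
    neutron agent-show <ovs-agent-id>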

2014-08-18 21:37:44 -0500 received badge  Editor (source)
2014-08-18 21:37:26 -0500 asked a question Routed management network

Hello,

Is it possible to have a routed management network, especially with regard to the Neutron DHCP and metadata agents?

Thanks!

2014-08-17 22:24:43 -0500 received badge  Commentator
2014-08-17 22:24:43 -0500 commented answer Multiple metadata agents

Hello and thanks for your answer!

I want one metadata agent per subnet (see the sketch after this list), like this:

  • 10.1.200.0/24 (1x metadata agent)
  • 10.1.177.0/24 (1x metadata agent)
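
From what I have read so far there is no per-subnet metadata agent as such, but the DHCP agent can apparently spawn a metadata proxy inside each network's qdhcp namespace, which would come close to what I want. A sketch of what I mean, assuming the default config path (the service name may differ by distribution):

    cat /etc/neutron/dhcp_agent.ini | grep enable_isolated_metadata
    enable_isolated_metadata = True
    service neutron-dhcp-agent restart
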
2014-08-11 12:56:20 -0500 received badge  Notable Question (source)
2014-08-10 16:13:18 -0500 received badge  Popular Question (source)
2014-08-09 07:28:01 -0500 asked a question Multiple metadata agents

Hello,

Is it possible to have multiple metadata agents?

Thanks in advance!

2014-08-07 02:47:15 -0500 commented question Is there a way to create instances without having them automatically startup?

Interresting question!

2014-08-06 02:27:12 -0500 received badge  Notable Question (source)
2014-06-27 06:48:43 -0500 received badge  Famous Question (source)
2014-06-23 23:39:53 -0500 answered a question How to delete entries from the horizon "System Info"

In the end I had to hack the database. Have a look at: https://bugs.launchpad.net/horizon/+b...
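
For reference, a rough sketch of the kind of manual cleanup I mean, assuming an Icehouse-era schema and a Nova database named nova (the host name oldcompute is a placeholder, and the database should be backed up first):

    # remove the stale compute node from the Nova database (placeholder host name)
    mysql -u root -p nova
    mysql> DELETE FROM compute_nodes WHERE hypervisor_hostname = 'oldcompute';
    mysql> DELETE FROM services WHERE host = 'oldcompute' AND `binary` = 'nova-compute';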

2014-06-23 23:38:35 -0500 received badge  Popular Question (source)
2014-06-23 00:47:10 -0500 asked a question How to delete entries from the horizon "System Info"

Hello,

I have a Compute Node in the System Info list in Horizon which I don't use anymore. It is still listed as "down". How can I delete entries from this list?

2014-06-18 22:56:42 -0500 commented answer metadata service not reachable from instance in neutron single flat provider network

I set it up on CentOS; the metadata service runs on the Network Node and the Controller Node runs the proxy for it.

2014-06-18 22:35:11 -0500 received badge  Famous Question (source)
2014-06-18 22:31:42 -0500 commented answer metadata service not reachable from instance in neutron single flat provider network

The instance tries to connect to the metadata service. iptables is my guess as well, especially since I see no errors in the logs of the services.

2014-06-18 21:57:55 -0500 commented answer metadata service not reachable from instance in neutron single flat provider network

Unfortunately not, and I'm really at a loss with it. Before, I had the Controller Node on a VM; now I have installed everything from scratch on physical machines, with the same result...

2014-06-17 03:16:46 -0500 commented question metadata service not reachable from instance in neutron single flat provider network

On the Network Node:

ip netns exec qdhcp-5a227a11-11ea-495e-9a1b-e55b5b6731ef ip -4 a
7: tap1894025f-59: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    inet 10.117.200.50/24 brd 10.117.200.255 scope global tap1894025f-59
    inet 169.254.169.254/16 brd 169.254.255.255 scope global tap1894025f-59
8: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    inet 127.0.0.1/8 scope host lo
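
A check that might narrow it down further, as a sketch reusing the namespace ID from above: whether a metadata proxy is actually listening on port 80 inside that namespace, and whether it answers locally.

    # is a metadata proxy listening inside the DHCP namespace?
    ip netns exec qdhcp-5a227a11-11ea-495e-9a1b-e55b5b6731ef netstat -lntp | grep ':80'
    # does the metadata IP answer from inside the namespace?
    ip netns exec qdhcp-5a227a11-11ea-495e-9a1b-e55b5b6731ef curl http://169.254.169.254
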
2014-06-17 03:05:17 -0500 commented question metadata service not reachable from instance in neutron single flat provider network

I found the logs and had a look; there is nothing about any connection. It seems like the request never gets to the metadata proxy or the network node.

Here is a list of the iptables rules:

Controller Node:

iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 90458 packets, 3790K bytes)
 pkts bytes target     prot opt in     out     source               destination
90458 3790K nova-api-PREROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT 5699 packets, 368K bytes)
 pkts bytes target     prot opt in     out     source               destination
 5699  368K nova-api-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0
 5699  368K nova-postrouting-bottom  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 5699 packets, 368K bytes)
 pkts bytes target     prot opt in     out     source               destination
 5699  368K nova-api-OUTPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0 ...
(more)
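
Another check that seems relevant, as a sketch run from the Network Node: the metadata agent forwards requests to the Nova metadata API on the controller, so that endpoint should be reachable from the Network Node and return the same version list it returns locally on the controller.

    # from the Network Node: confirm the Nova metadata API on the controller is reachable
    curl http://10.117.200.7:8775
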
2014-06-16 13:40:40 -0500 received badge  Notable Question (source)
2014-06-16 02:57:33 -0500 commented question metadata service not reachable from instance in neutron single flat provider network

Could it be a problem that the Controller Node is running as a VM on Hyper-V? On Hyper-V it's not possible to enable promiscuous mode.

2014-06-15 21:38:25 -0500 received badge  Popular Question (source)
2014-06-15 13:22:47 -0500 commented question metadata service not reachable from instance in neutron single flat provider network

Now a different error:

2014-06-15 13:59:57,645 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request timed out. (timeout=50.0)]
2014-06-15 03:31:15 -0500 asked a question metadata service not reachable from instance in neutron single flat provider network

Hello

I have the following Icehouse setup, using Neutron with ML2 and the OVS plugin in a single flat provider network:

  • Controller Node: 10.117.200.7
  • Network Node: 10.117.200.10
  • Compute Node: 10.117.200.12

The network is working fine. The instances get an IP from DHCP with the correct gateway and DNS settings, but the metadata service is not reachable from the instance.

Here is the log from the CirrOS image:

adding dns 10.117.192.145
adding dns 10.117.192.160
adding net 169.254.169.254/32 with router 10.117.200.50
cirros-ds 'net' up at 0.72
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 0.73. request failed
failed 2/20: up 12.75. request failed
failed 3/20: up 24.76. request failed

Inside the CirrOS Instance:

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.117.200.254  0.0.0.0         UG    0      0        0 eth0
10.117.200.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.169.254 10.117.200.50   255.255.255.255 UGH   0      0        0 eth0
$ ping 169.254.169.254
PING 169.254.169.254 (169.254.169.254): 56 data bytes
64 bytes from 169.254.169.254: seq=0 ttl=64 time=1.199 ms
64 bytes from 169.254.169.254: seq=1 ttl=64 time=0.235 ms
64 bytes from 169.254.169.254: seq=2 ttl=64 time=0.252 ms
^C
--- 169.254.169.254 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.235/0.562/1.199 ms
$ curl http://169.254.169.254
curl: (7) couldn't connect to host

From the Controller Node I can reach the metadata service on its own IP, and the service is listening:

curl http://10.117.200.7:8775
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04

curl http://169.254.169.254
<html>
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
  Remote metadata server experienced an internal server error.<br /><br />
 </body>
</html>

Network Node:

cat /etc/neutron/neutron.conf | grep "^[^#;\[]"
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = controller
core_plugin = ml2
service_plugins = router
verbose = True
debug = False
allow_overlapping_ips = True
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = xxx

cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep "^[^#;\[]"
type_drivers = local,flat
mechanism_drivers = openvswitch,l2population
flat_networks = *
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_tunneling = False
local_ip = 10.117.200.11
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth1

cat /etc/neutron/metadata_agent.ini | grep "^[^#;\[]"
debug = True
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = xxx
nova_metadata_ip = 10.117.200.7
metadata_proxy_shared_secret = xxx

cat /etc/neutron/dhcp_agent.ini | grep "^[^#;\[]"
debug = True
interface_driver = neutron.agent.linux ...
(more)
2014-06-15 02:13:08 -0500 received badge  Scholar (source)
2014-06-15 02:04:51 -0500 received badge  Enthusiast
2014-06-12 22:57:10 -0500 received badge  Supporter (source)
2014-06-12 06:58:26 -0500 commented question nova instance resize bug in icehouse

Same here using KVM, in a single Compute Node setup.

2014-06-12 06:56:27 -0500 answered a question nova instance resize bug in icehouse

Same here using KVM!

2014-05-14 06:12:44 -0500 received badge  Notable Question (source)
2014-05-01 13:52:21 -0500 received badge  Popular Question (source)
2014-04-25 08:21:11 -0500 received badge  Student (source)
2014-04-23 22:01:10 -0500 asked a question Instances alternative gateway / VLAN tags corresponding datacenter VLANs

Hello all, I have two questions regarding Neutron:

1) In the documentation I always see diagrams where the network traffic from all the instances is routed through the network node, for example here: http://docs.openstack.org/icehouse/install-guide/install/yum/content/figures/3/figures/installguide_neutron-initial-networks.png

Is it possible to configure a different gateway for the instances, so that the gateway is the real physical router and traffic skips the detour through the network node? (See the first sketch below.)

2) If there are VLANs created on the physical switches, is it possible to create these VLANs in the OpenStack/Neutron environment as well, so that they match the datacenter VLANs? (See the second sketch below.)
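
For question 1, a sketch of what I mean, assuming a flat provider network named flatnet (the names are examples) and the physical router at 10.117.200.254: the subnet's default gateway can point at the physical router so that instance traffic never passes through the Network Node.

    # sketch: subnet whose default gateway is the physical datacenter router
    neutron subnet-create flatnet 10.117.200.0/24 --name flatsubnet --gateway 10.117.200.254

For question 2, a sketch of a provider VLAN network that reuses an existing datacenter VLAN ID (the names and VLAN 200 are examples; ml2_conf.ini would also need vlan in type_drivers and a matching network_vlan_ranges entry):

    # sketch: map an existing datacenter VLAN (200) onto a Neutron provider network
    neutron net-create vlan200 --shared --provider:network_type vlan \
        --provider:physical_network physnet1 --provider:segmentation_id 200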

Thank you!