BuddyBoy13's profile - activity

2014-07-01 05:31:06 -0600 received badge  Famous Question (source)
2014-06-06 13:22:11 -0600 received badge  Self-Learner (source)
2014-06-06 13:22:11 -0600 received badge  Teacher (source)
2014-06-02 09:35:17 -0600 commented question Icehouse networks disappear after reboot

Ok. My CentOS installation doesn't have that command available (sorry for not mentioning CentOS in my original post). I'll look online for what package provides that script.
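
In case it helps anyone else: I believe yum can report which package owns a file, so something along these lines should find it (using ovs-vsctl as a stand-in example, since the actual script isn't named above):

yum provides "*/ovs-vsctl"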

2014-05-30 11:29:00 -0600 received badge  Notable Question (source)
2014-05-30 10:37:55 -0600 received badge  Popular Question (source)
2014-05-30 10:33:10 -0600 commented question Icehouse networks disappear after reboot

In this setup, it's GRE with OVS. I'm just following the installation guide, and I can't find anything related to creating an ovs_neutron database like I had to do with the other services. Is that something in the guide, or did I completely miss something?

2014-05-30 10:19:57 -0600 commented question Icehouse networks disappear after reboot

I got an access denied message when trying to use ovs_neutron. I then logged in as root and ran show databases:

information_schema glance keystone mysql neutron nova

It's a problem that ovs_neutron isn't there, right?
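
For reference, I also checked which database neutron is actually configured to use; assuming the stock config layout under /etc/neutron, something like this should show the connection strings:

grep -r "^connection" /etc/neutron/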

2014-05-30 09:55:36 -0600 commented question Icehouse networks disappear after reboot

They all look ok. Any services that are inactive or dead are disabled on boot. The two networks that are supposed to be there are not listed. If I manually add them and then run the status command again, they appear.
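
To double-check what actually starts on boot, I ran something like this (CentOS 6 style, assuming chkconfig is in use):

chkconfig --list | grep -E "neutron|openvswitch"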

2014-05-30 09:26:57 -0600 commented question Icehouse networks disappear after reboot

The agent-list command returns alive and true for L3, Metadata, and DHCP. ip netns returns nothing.

2014-05-30 08:21:25 -0600 asked a question Icehouse networks disappear after reboot

Yeah, this one is a bit odd. I'm following the installation guide for Icehouse found here:

http://docs.openstack.org/icehouse/install-guide/install/yum/content/

I'm using the three-node model, installing on top of VMware with promiscuous mode enabled on the vSwitch.

I can create networks, subnets, and routers but as soon as the controller is rebooted and comes back up, the networks are gone:

[root@openstack-cloud ~]# source demo-openrc.sh
[root@openstack-cloud ~]# neutron net-list
+--------------------------------------+----------+------------------------------------------------------+
| id                                   | name     | subnets                                              |
+--------------------------------------+----------+------------------------------------------------------+
| 80b275a2-63a5-491f-abaf-36d8768febac | ext-net  | 55a8a6c2-b2ca-401b-a2c4-4e77065506b5 192.168.31.0/24 |
| 5cfa0651-5071-4e6a-b6c4-fc0dded014ba | demo-net | 558d702c-705e-4559-859e-e03d381db265 10.231.0.0/24   |
+--------------------------------------+----------+------------------------------------------------------+
[root@openstack-cloud ~]# reboot

Broadcast message from root@openstack-cloud
        (/dev/pts/1) at 13:10 ...

The system is going down for reboot NOW!
[root@openstack-cloud ~]#
login as: root
root@192.168.30.161's password:
Last login: Fri May 30 13:10:07 2014 from 192.168.9.50
[root@openstack-cloud ~]# source demo-openrc.sh
[root@openstack-cloud ~]# neutron net-list

[root@openstack-cloud ~]#

There's nothing unusual in /var/log/neutron/server.log.

Any ideas on where else to look? It's like the info is no longer in the database.
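
To check that directly, something along these lines should show whether the rows survive the reboot (assuming MySQL and the neutron database/user from the install guide, and that networks is the right table):

mysql -u neutron -p neutron -e "SELECT id, name FROM networks;"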

Thanks for any assistance!

2014-04-14 01:25:10 -0600 received badge  Famous Question (source)
2014-04-01 03:53:05 -0600 received badge  Notable Question (source)
2014-03-31 00:44:36 -0600 received badge  Famous Question (source)
2014-03-07 03:36:28 -0600 received badge  Notable Question (source)
2014-03-04 03:42:13 -0600 received badge  Popular Question (source)
2014-02-28 11:08:54 -0600 received badge  Supporter (source)
2014-02-28 11:03:39 -0600 answered a question Metadata server error

Thanks foexle, your comment pointed me in the right direction. I needed to install python-neutronclient. Once I did that, the issue was resolved.
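
For anyone else hitting this, the fix boiled down to roughly the following (service name per the RDO packaging):

yum install -y python-neutronclient
service neutron-metadata-agent restart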

2014-02-28 08:59:51 -0600 commented question Metadata server error

Found this: https://ask.openstack.org/en/question/12439/metadata-agent-throwing-attributeerror-httpclient-object-has-no-attribute-auth_tenant_id-with-latest-release/

2014-02-28 08:49:29 -0600 commented question Metadata server error

When I attempt to connect using curl from the instance to http://169.254.169.254, the metadata-agent.log shows this error: 2014-02-28 09:47:39.609 3524 ERROR neutron.agent.metadata.agent [-] Unexpected error. AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id'

2014-02-28 08:29:27 -0600 commented question Metadata server error

How can I determine if I'm using the proxy? Log files under /var/log/nova include: api.log, cert.log, compute.log, conductor.log, consoleauth.log, console.log, metadata-api.log, scheduler.log, and xvpvncproxy.log. The Nova API logs are not showing any errors.
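
In the meantime, I checked for a running proxy process, which I assume would show up in the process list if it were in use:

ps aux | grep -i metadata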

2014-02-28 06:07:03 -0600 asked a question Metadata server error

I'm running an all-in-one setup created from packstack. Everything is working as it should except for the metadata server. This is a problem for a couple of reasons, the first being that I need it to work so that I can pass variables to the instances at startup. Even if that were not the case, some images, Ubuntu for example, won't start sshd if metadata retrieval fails (at least I think that's how it works). Here's the error I'm receiving when I curl it manually from a CentOS instance:

[root@host-10-230-0-2 ~]# curl http://169.254.169.254/latest
<html>
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
  Remote metadata server experienced an internal server error.<br /><br />
 </body>
</html>

Oddly enough, if I reboot the controller (where the metadata server is running) and then bring up the CentOS instance, it still fails to retrieve the metadata, but if I curl it manually, I can get a response... once. After that it gives me the 500 again.

[root@host-10-230-0-2 ~]# curl http://169.254.169.254/latest
meta-data/[root@host-10-230-0-2 ~]#

And then....

[root@host-10-230-0-2 ~]# curl http://169.254.169.254/latest
<html>
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
  Remote metadata server experienced an internal server error.<br /><br />
 </body>
</html>[root@host-10-230-0-2 ~]#

No and then!

I've been tailing all of the pertinent log files but not seeing anything that I interpret as a problem. I'm looking for any advice as to what the cause might be.
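
For completeness, these are the files I've been watching (paths as they appear on this packstack install):

tail -f /var/log/neutron/metadata-agent.log /var/log/nova/metadata-api.log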

Thanks!

2014-02-28 00:50:07 -0600 received badge  Popular Question (source)
2014-02-18 10:14:16 -0600 answered a question PackStack installation networking issue

After digging around a bit, I found the issue. The neutron user needs the ability to sudo. Once I added the account to the appropriate group, the tap interface was created.

https://bugs.launchpad.net/tripleo/+bug/1200409
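
A quick way to confirm the account now has the access it needs (plain sudo, nothing OpenStack-specific):

sudo -l -U neutron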

2014-02-18 07:06:08 -0600 asked a question PackStack installation networking issue

Hello all,

I'm installing/configuring Havana using the instructions at this link:

http://openstack.redhat.com/PackStack_All-in-One_DIY_Configuration

Commands run up to this point were all from this guide:

packstack --allinone --provision-demo=n --provision-all-in-one-ovs-bridge=n
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT ovs_use_veth True
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT ovs_use_veth True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
neutron net-create extnet --router:external=True
neutron subnet-create extnet --allocation-pool start=192.168.30.32,end=192.168.30.62  --gateway 192.168.30.1 --enable_dhcp=False  192.168.30.0/24
neutron router-create rdorouter
neutron router-gateway-set ea3ce1c1-6bd4-49ea-9c1d-9d0cfcc0f1db 59300b19-106c-4e2a-8f1d-a2cbd6b63c96

Almost everything has gone well up to the point of setting the router's gateway. The command itself succeeds, but I am not seeing the expected output shown in the document: there should be a tap interface created under br-ex, which is what connects the router to the external network. The guide says I should see something similar to this:

And now is a good time to do some "looking around". Let's start with the changes to Open vSwitch.
ovs-vsctl show
74613231-71bb-4bc9-81ab-22f2bc04d53a
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port "tap0f7a05c3-8c"
            Interface "tap0f7a05c3-8c"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.10.0"

Unfortunately, after running the gateway-set command, my ovs-vsctl show output looks the same as it did at the initial configuration. Here's my output:

[root@intbc1bl11 ~(keystone_admin)]# ovs-vsctl show
19b2b1e8-c659-4aa9-90a2-38fab8f6687b
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.11.0"
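
Since the l3-agent is what should plug the router's gateway port into br-ex (as far as I understand it), its log seems like the next place to look; the path below is per the RDO layout:

tail -n 50 /var/log/neutron/l3-agent.log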

ifconfig also shows no additional interfaces:

[root@intbc1bl11 openvswitch(keystone_admin)]# ifconfig
br-ex     Link encap:Ethernet  HWaddr 22:E2:0F:C4:AF:4F
          inet6 addr: fe80::20e2:fff:fec4:af4f/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:468 (468.0 b)

br-int    Link encap:Ethernet  HWaddr C6:50:FA:10:2E:4D
          inet6 addr: fe80::c450:faff:fe10:2e4d/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:468 (468.0 b)

eth0      Link encap:Ethernet  HWaddr 00:10:18:B9:DD:10
          inet addr:192.168.30.161  Bcast:192.168.30.255  Mask:255.255.255.0
          inet6 addr: fe80::210:18ff:feb9:dd10/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:38314 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2289 errors:0 dropped:0 overruns:0 ...