Ask Your Question

atze.devries's profile - activity

2015-11-09 07:23:03 -0500 answered a question unable to establish connection to localhost

I'm having the exact same issue here. Even if I just run

curl url.of.openstack:5000/v2.0

it returns

{"version": {"status": "stable", "updated": "2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}, {"base": "application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links": [{"href": "http://localhost:5000/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/api/openstack-identity-service/2.0/content/", "type": "text/html", "rel": "describedby"}, {"href": "http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf", "type": "application/pdf", "rel": "describedby"}]}}

SOLUTION for ME

edit /etc/keystone/keystone.conf

change

public_endpoint = http://localhost:%(public_port)s/

to

public_endpoint = http://url.to.openstack:5000/
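
The %(public_port)s placeholder in the default value is plain Python mapping interpolation: only the port gets substituted, while the "localhost" hostname is taken literally from the template, which is why clients are sent an unreachable address. A minimal sketch of the substitution (build_endpoint is a hypothetical helper, not a keystone function):

```python
# Sketch of %-style mapping interpolation as used for public_endpoint.
# build_endpoint is a hypothetical helper for illustration only.
def build_endpoint(template, public_port):
    # Substitute %(public_port)s; any literal hostname stays as-is.
    return template % {"public_port": public_port}

# Default template: hostname stays "localhost" for every client.
print(build_endpoint("http://localhost:%(public_port)s/", 5000))
# -> http://localhost:5000/

# Fixed template (no placeholder): advertised URL is externally reachable.
print(build_endpoint("http://url.to.openstack:5000/", 5000))
# -> http://url.to.openstack:5000/
```
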
2015-01-05 03:49:07 -0500 received badge  Famous Question (source)
2014-09-09 15:18:26 -0500 received badge  Favorite Question (source)
2014-07-29 06:05:28 -0500 received badge  Famous Question (source)
2014-06-27 12:22:20 -0500 received badge  Notable Question (source)
2014-06-24 12:47:26 -0500 received badge  Notable Question (source)
2014-06-23 10:07:20 -0500 answered a question migrate_to_ml2.py error during upgrade

I found the issue!

My MySQL tables were in MyISAM, but MyISAM doesn't support foreign keys. After converting the tables to InnoDB it works fine.
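
A sketch of the conversion, assuming MySQL; the table names below are illustrative, in practice you would list the MyISAM tables from information_schema first:

```python
# Hedged sketch: build the ALTER statements to convert MyISAM tables to
# InnoDB. The table names are examples; query information_schema.TABLES
# (WHERE ENGINE = 'MyISAM' AND TABLE_SCHEMA = 'neutron') for the real list.
def innodb_conversion_sql(tables):
    return ["ALTER TABLE `%s` ENGINE=InnoDB;" % t for t in tables]

for stmt in innodb_conversion_sql(["networks", "ports", "subnets"]):
    print(stmt)
# -> ALTER TABLE `networks` ENGINE=InnoDB;  (etc.)
```
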

2014-06-23 04:02:33 -0500 received badge  Nice Question (source)
2014-06-23 02:51:11 -0500 received badge  Popular Question (source)
2014-06-20 15:54:44 -0500 received badge  Student (source)
2014-06-20 14:25:38 -0500 commented question Dual public network

Would it be possible to add more network cards to your OpenStack server? The best solution, I think, would be to have two floating IP pools (one in 192 and one in 86). I think this should be possible. You can separate your network traffic using GRE. Here is info on multiple floating IPs: http://docs.openstack.org/grizzly/openstack-network/admin/content/adv_cfg_l3_agent_multi_extnet.html

2014-06-20 12:42:47 -0500 commented question Dual public network

Could you supply us with a little more info on your installation? Do you use a flat network, or GRE, or VLAN?

2014-06-20 12:37:05 -0500 answered a question External network default gateway different with subnet CIDR

For the 40.105.252.216/30 you have to have some form of routing. There should be a subnet containing your CIDR; your provider may supply you with this, otherwise you should set one up yourself. This subnet should have a gateway.

In OpenStack you can create an external subnet. The CIDR of the subnet should contain your gateway. You can configure an allocation pool; this pool should match 40.105.252.216/30 (don't forget to disable DHCP in this subnet).
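
For illustration, the Havana-era neutron CLI calls might look like this. Only the 40.105.252.216/30 pool comes from the question; the network name, parent CIDR, and gateway address are placeholders I made up for the example:

```shell
# Hypothetical values: ext-net, the /24 parent CIDR and the gateway address
# are placeholders; use whatever your provider actually routes.
neutron net-create ext-net --router:external=True

# The subnet CIDR contains the gateway; the allocation pool matches the /30
# (.217 and .218 are its usable addresses); DHCP is disabled.
neutron subnet-create ext-net 40.105.252.0/24 \
    --gateway 40.105.252.1 \
    --allocation-pool start=40.105.252.217,end=40.105.252.218 \
    --disable-dhcp
```
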

2014-06-20 07:08:51 -0500 asked a question migrate_to_ml2.py error during upgrade

Hi All,

I'm trying to upgrade our stack (1 control, 3 computes) from Havana to Icehouse. We are using the ops guide http://docs.openstack.org/trunk/openstack-ops/openstack-ops-manual-trunk.pdf (PDF page 275, document page 247).

when running

python -m neutron.db.migration.migrate_to_ml2 openvswitch mysql://root:test@127.0.0.1/neutron

the following error is returned:

Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 439, in <module>
    main()
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 435, in main
    args.vxlan_udp_port)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/migrate_to_ml2.py", line 115, in __call__
    metadata.create_all(engine)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 2848, in create_all
    tables=tables)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1479, in _run_visitor
    conn._run_visitor(visitorcallable, element, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1122, in _run_visitor
    **kwargs).traverse_single(element)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 122, in traverse_single
    return meth(obj, **kw)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py", line 56, in visit_metadata
    collection = [t for t in sql_util.sort_tables(tables)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/util.py", line 39, in sort_tables
    {'foreign_key': visit_foreign_key})
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 258, in traverse
    return traverse_using(iterate(obj, opts), obj, visitors)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 249, in traverse_using
    meth(target)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/util.py", line 30, in visit_foreign_key
    parent_table = fkey.column.table
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 612, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 1474, in column
    tname)
sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column 'ml2_network_segments.network_id' could not find table 'networks' with which to generate a foreign key to target column 'id'

Does anybody have any idea how to solve this one?

Greetinx Atze

Some extra info:

This is the part of the code in migrate_to_ml2.py which creates the ml2_network_segments table; this is where it fails.

sa.Table(
        'ml2_network_segments', metadata,
        sa.Column('id', sa.String(length=36), nullable=False),
        sa.Column('network_id', sa.String(length=36), nullable=False),
        sa.Column('network_type', sa.String(length=32), nullable=False),
        sa.Column('physical_network', sa.String(length=64), nullable=True),
        sa.Column('segmentation_id', sa.Integer(), nullable=True),
        sa.ForeignKeyConstraint(['network_id'], ['networks.id'],
                                ondelete='CASCADE'),
        sa.PrimaryKeyConstraint('id'))

I've added the table by hand with this:

CREATE TABLE ml2_network_segments ( id VARCHAR(36) NOT NULL, network_id VARCHAR(36) NOT NULL, network_type VARCHAR(32) NOT NULL , physical_network VARCHAR(64), segmentation_id ...
2014-06-19 02:13:13 -0500 received badge  Popular Question (source)
2014-06-16 07:40:12 -0500 commented question install lbaas, fwaas, vpnaas on production neutron

Yes, I've followed all the guidelines but still no solution. The neutron-server service fails when the lbaas plugin is enabled (it cannot find the neutron.pools MySQL table). If I leave out lbaas, neutron-server starts, but still some MySQL tables are not created.

2014-06-16 03:21:55 -0500 asked a question install lbaas, fwaas, vpnaas on production neutron

Hello,

I'm trying to add neutron services (lbaas, fwaas and vpnaas) to our production neutron service (of course, first in a testing environment). When I try to install these services on an empty stack install, all works fine. We are running Havana. But if I add these services after there is already some data in neutron, things do not work: neutron-db-manage does not seem to add the necessary tables to the neutron db.

Is this behaviour supposed to be like this? Or are we doing something wrong?
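
For reference, this is the kind of neutron-db-manage invocation we run to sync the schema; the plugin ini path is from our setup, adapt it to yours:

```shell
# Havana-era schema sync; the plugin config path is an example from our
# openvswitch-based setup.
neutron-db-manage \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
    upgrade head
```
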

2014-06-05 06:38:58 -0500 received badge  Famous Question (source)
2014-05-07 14:15:22 -0500 received badge  Notable Question (source)
2014-04-27 15:18:49 -0500 received badge  Famous Question (source)
2014-04-15 03:33:50 -0500 received badge  Enthusiast
2014-04-14 02:17:12 -0500 received badge  Scholar (source)
2014-04-14 02:17:05 -0500 received badge  Popular Question (source)
2014-04-11 06:44:09 -0500 commented answer Compute instance root disk thin provision and lvm

Thnx. This is good info. I found a nice article on the life of an OpenStack image, which gives good info on how the internals work: http://www.pixelbeat.org/docs/openstack_libvirt_images/

2014-04-09 09:58:07 -0500 asked a question Compute instance root disk thin provision and lvm

Hi there,

Currently we have an OpenStack setup with one control and two compute nodes. The compute nodes have the following nova configuration in the libvirt part.

libvirt_use_virtio_for_bridges=True
connection_type=libvirt
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
compute_driver=libvirt.LibvirtDriver
libvirt_type=kvm
libvirt_images_type=lvm
libvirt_images_volume_group=instance-volumes

The base images (coming from glance) are saved in /var/lib/nova/instances/_base as a file on the compute node. Once an instance is started, the root disk is an LVM volume at 100% of its size.

We would like to have thin-provisioned root disks. Is there a way to configure this with LVM? Or should we ditch the LVM construction?

What exactly are the pros and cons of using LVM in this situation? The documentation isn't very clear about this.
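
One candidate we found (not verified on our setup, and the option name is Havana-era) is nova's sparse-LV flag for the LVM image backend, which is supposed to allocate blocks lazily instead of at full size up front:

```ini
# Candidate nova.conf settings; libvirt_sparse_logical_volumes is an
# assumption on our part and unverified here.
libvirt_images_type=lvm
libvirt_images_volume_group=instance-volumes
libvirt_sparse_logical_volumes=True
```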

2014-03-30 23:57:45 -0500 received badge  Notable Question (source)
2014-03-30 23:45:16 -0500 received badge  Popular Question (source)
2014-03-25 07:25:45 -0500 commented question lbaas tables not created in database

The error you describe in the Red Hat link is related to a Red Hat package which results in a double service_plugin entry. This is not applicable to our problem.

2014-03-24 04:01:48 -0500 commented question lbaas tables not created in database

Added some more information to the original question.

2014-03-24 03:54:50 -0500 answered a question lbaas tables not created in database

Hi

Thnx for your answer. We do have a standard setup of OpenStack with a controller and separate compute nodes. The setup is installed via the stackforge puppet modules. We edited neutron.conf and restarted the service. It then shows the error messages shown in my previous message.

This is our neutron.conf:

[DEFAULT]
debug = False
verbose = False
state_path = /var/lib/neutron
lock_path = $state_path/lock
use_syslog = False
log_dir = /var/log/neutron
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
auth_strategy = keystone
base_mac = fa:16:3e:00:00:00
mac_generation_retries = 16
dhcp_lease_duration = 120
allow_bulk = True
allow_pagination = False
allow_sorting = False
allow_overlapping_ips = True
rpc_backend = neutron.openstack.common.rpc.impl_kombu
control_exchange = neutron

rabbit_host = 127.0.0.1
rabbit_password = masked_by_me
rabbit_port = 5672
rabbit_hosts = 127.0.0.1:5672
rabbit_userid = openstack
rabbit_virtual_host = /
rabbit_ha_queues = False

notification_driver = neutron.openstack.common.notifier.rpc_notifier
agent_down_time = 9
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
dhcp_agents_per_network = 1
api_workers = 0
report_interval=4

service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

[quotas]

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = neutron
admin_password = masked_by_me
signing_dir = $state_path/keystone-signing
auth_uri = http://127.0.0.1:5000/

[database]
connection = mysql://neutron:masked_by_me@127.0.0.1/neutron?charset=latin1
max_retries = 10
retry_interval = 10
idle_timeout = 3600

[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

[AGENT]
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf

The lbaas service is running, but it has some errors. Could this be the reason for not creating the database tables? This is the lbaas logging:

2014-03-24 08:42:10.884 19878 ERROR neutron.services.loadbalancer.drivers.haproxy.agent_manager [-] Unable to retrieve ready devices
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager Traceback (most recent call last):
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager   File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/drivers/haproxy/agent_manager.py", line 212, in sync_state
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager     ready_logical_devices = set(self.plugin_rpc.get_ready_devices())
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager   File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/drivers/haproxy/agent_api.py", line 36, in get_ready_devices
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager     topic=self.topic
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/proxy.py", line 130, in call
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager     exc.info, real_topic, msg.get('method'))
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager Timeout: Timeout while waiting on RPC response - topic: "q-lbaas-process-on-host", RPC method: "get_ready_devices" info: "<unknown>"
2014-03-24 08:42:10.884 19878 TRACE neutron.services.loadbalancer.drivers.haproxy.agent_manager 
2014-03-24 08:42:10.886 19878 WARNING neutron.openstack.common.loopingcall [-] task run outlasted interval by 50.007883 sec
2014-03-20 05:47:26 -0500 asked a question lbaas tables not created in database

Hi There,

We did a clean installation of OpenStack Havana (1 controller with all the services and a set of compute nodes, all on Ubuntu 12.04). We use neutron and l3. The database backend is MySQL. We are trying to enable the lbaas feature, but on enabling the service (configuring it in neutron.conf per the lbaas docs, http://docs.openstack.org/admin-guide-cloud/content/install_neutron-lbaas-agent.html) the neutron-server stops working. The neutron server log returns:

ProgrammingError: (ProgrammingError) (1146, "Table 'neutron.pools' doesn't exist") 'SELECT pools.tenant_id AS pools_tenant_id, pools.id AS pools_id, pools.status AS pools_status, pools.status_description AS pools_status_description, pools.vip_id AS pools_vip_id, pools.name AS pools_name, pools.description AS pools_description, pools.subnet_id AS pools_subnet_id, pools.protocol AS pools_protocol, pools.lb_method AS pools_lb_method, pools.admin_state_up AS pools_admin_state_up, providerresourceassociations_1.provider_name AS providerresourceassociations_1_provider_name, providerresourceassociations_1.resource_id AS providerresourceassociations_1_resource_id \nFROM pools LEFT OUTER JOIN providerresourceassociations AS providerresourceassociations_1 ON pools.id = providerresourceassociations_1.resource_id' ()
2014-03-19 16:10:18.585 10703 TRACE neutron.service
2014-03-19 16:10:18.594 10703 CRITICAL neutron [-] (ProgrammingError) (1146, "Table 'neutron.pools' doesn't exist") 'SELECT pools.tenant_id AS pools_tenant_id, pools.id AS pools_id, pools.status AS pools_status, pools.status_description AS pools_status_description, pools.vip_id AS pools_vip_id, pools.name AS pools_name, pools.description AS pools_description, pools.subnet_id AS pools_subnet_id, pools.protocol AS pools_protocol, pools.lb_method AS pools_lb_method, pools.admin_state_up AS pools_admin_state_up, providerresourceassociations_1.provider_name AS providerresourceassociations_1_provider_name, providerresourceassociations_1.resource_id AS providerresourceassociations_1_resource_id \nFROM pools LEFT OUTER JOIN providerresourceassociations AS providerresourceassociations_1 ON pools.id = providerresourceassociations_1.resource_id' ()

The table pools does not exist in the neutron database. None of the lbaas tables are created. These are the tables in the neutron database:

+------------------------------+
| Tables_in_neutron            |
+------------------------------+
| agents                       |
| alembic_version              |
| allowedaddresspairs          |
| dnsnameservers               |
| externalnetworks             |
| extradhcpopts                |
| floatingips                  |
| healthmonitors               |
| ipallocationpools            |
| ipallocations                |
| ipavailabilityranges         |
| networkdhcpagentbindings     |
| networks                     |
| ovs_network_bindings         |
| ovs_tunnel_allocations       |
| ovs_tunnel_endpoints         |
| ovs_vlan_allocations         |
| portbindingports             |
| ports                        |
| providerresourceassociations |
| quotas                       |
| routerl3agentbindings        |
| routerroutes                 |
| routers                      |
| securitygroupportbindings    |
| securitygrouprules           |
| securitygroups               |
| servicedefinitions           |
| servicetypes                 |
| subnetroutes                 |
| subnets                      |
+------------------------------+

Does anybody have any idea why the lbaas tables are not created? And, more importantly, how to create the lbaas tables?

Here is some more information (after mpetason's answer):

Thnx for your answer. We do have a standard setup of OpenStack with a controller and separate compute nodes. The setup is installed via the stackforge puppet modules. We edited neutron.conf and restarted the service. It then shows the error messages shown in my previous message.

This is our neutron.conf:

[DEFAULT]
debug = False
verbose = False
state_path = /var/lib/neutron
lock_path = $state_path/lock
use_syslog = False
log_dir = /var/log/neutron
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
auth_strategy = keystone
base_mac = fa:16:3e:00:00:00
mac_generation_retries = 16
dhcp_lease_duration = 120
allow_bulk = True
allow_pagination = False
allow_sorting = False
allow_overlapping_ips = True
rpc_backend = neutron.openstack.common.rpc.impl_kombu
control_exchange = neutron

rabbit_host = 127.0.0.1
rabbit_password = masked_by_me
rabbit_port = 5672
rabbit_hosts = 127.0.0.1:5672
rabbit_userid = openstack
rabbit_virtual_host = /
rabbit_ha_queues = False

notification_driver = neutron.openstack.common.notifier.rpc_notifier
agent_down_time = 9
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
dhcp_agents_per_network = 1
api_workers = 0
report_interval=4

service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

[quotas]

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_host ...