Daniel Ruiz's profile - activity

2019-01-08 01:47:00 -0500 received badge  Famous Question (source)
2017-09-09 06:52:21 -0500 received badge  Famous Question (source)
2016-06-29 08:19:30 -0500 received badge  Notable Question (source)
2016-06-27 07:53:26 -0500 received badge  Popular Question (source)
2016-06-27 06:28:43 -0500 received badge  Editor (source)
2016-06-27 06:28:18 -0500 answered a question Connectivity between instances running in different compute nodes

I have reconfigured the whole scenario and reinstalled everything with the Mitaka version. The problem persists... so it is not caused by a bug; it is a configuration problem. How can I configure Mitaka with 2 NICs, where the first NIC ALWAYS has a public IP and is also used for OpenStack management, and the second NIC is for VM data (creating, deleting, etc.)?
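
For what it's worth, this is a minimal sketch of what I mean for Mitaka, assuming the Open vSwitch agent with VXLAN tunnels (the IPs are placeholders for the addresses on each NIC):

    # /etc/nova/nova.conf (controller and computes): management traffic on the first NIC
    [DEFAULT]
    my_ip = MANAGEMENT_NIC_IP

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (computes): tunnels on the second NIC
    [ovs]
    local_ip = DATA_NIC_IP

    [agent]
    tunnel_types = vxlan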

Bufff

2016-06-22 06:49:08 -0500 commented question Connectivity between instances running in different compute nodes

I try to ping a VM running on a different compute node from inside a VM, not from a compute node, so I am pinging a private address leased by the DHCP agent running on my private network (created from the dashboard or the CLI).

2016-06-22 04:39:03 -0500 asked a question Connectivity between instances running in different compute nodes

Hello,

I'm managing a Juno OpenStack cloud... Yes, I know: I should upgrade to Mitaka, but because all the compute nodes double as student computers, I can't "touch" anything... So, for the moment, Juno is my version...

My scenario is:

  1. One server that acts as controller and network node, with 3 NICs: one for management, one for VM data and one for "external" (with no IP address configured). In my lab all machines have public IPs, so the management NIC is configured with a public IP.
  2. 20 machines that act as compute nodes, with 2 NICs: one for management (like on the controller, with a public IP) and one for VM data.

The configuration files (on the server and on the computes) always use the public IPs (eth1), and only in ml2_conf.ini (in the neutron plugins directory) have I configured "local_ip=PRIVATE_IP_VALUE" for Open vSwitch.

With this configuration, when I launch "n" instances and they end up running on more than one compute node, there is no connectivity between instances running on different compute nodes. However, instances running on the same compute node can ping each other... All instances get a valid IP from my configured private network (I can see them in the dashboard), but when I open a VNC console I can confirm the connectivity problem.

What can I do? How can I debug this problem?
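
In case it helps, these are the kinds of checks I can run (standard OVS/Neutron tools; eth1 is my data NIC and the IDs are placeholders):

    # on each compute: does br-tun have a tunnel port towards the other computes?
    ovs-vsctl show

    # on the controller: are all L2 agents alive?
    neutron agent-list

    # on the data NIC: is tunnelled traffic actually leaving/arriving?
    tcpdump -ni eth1 ip proto 47        # GRE
    tcpdump -ni eth1 udp port 4789      # VXLAN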

Thanks, thanks a lot!!! I need this help :( :( :(

2016-02-09 06:23:22 -0500 received badge  Famous Question (source)
2016-02-01 02:46:06 -0500 received badge  Notable Question (source)
2016-01-30 03:10:00 -0500 received badge  Famous Question (source)
2016-01-29 17:31:03 -0500 received badge  Popular Question (source)
2016-01-29 11:55:08 -0500 received badge  Notable Question (source)
2016-01-29 06:13:30 -0500 answered a question NovaException: Unexpected vif_type=binding_failed

/var/log/nova/nova-conductor.log (in controller+network node):

[...]
2016-01-29 12:56:59.453 1732 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on CONTROLLER_IP:5672
2016-01-29 12:57:30.583 1733 INFO oslo.messaging._drivers.impl_rabbit [req-033ec793-e8c6-4783-bbde-920552217e40 ] Connecting to AMQP server on CONTROLLER_IP:5672
2016-01-29 12:57:30.606 1733 INFO oslo.messaging._drivers.impl_rabbit [req-033ec793-e8c6-4783-bbde-920552217e40 ] Connected to AMQP server on CONTROLLER_IP:5672
2016-01-29 12:57:48.801 1733 ERROR nova.scheduler.utils [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] Error from last host: compute2 (node compute2): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2033, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2164, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 96113058-afdf-4d14-a746-b71ddfb45802 was re-scheduled: Unexpected vif_type=binding_failed\n']
2016-01-29 12:57:48.868 1733 WARNING nova.scheduler.driver [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] NoValidHost exception with message: 'No valid host was found.'
2016-01-29 12:57:48.868 1733 WARNING nova.scheduler.driver [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] Setting instance to ERROR state.
2016-01-29 12:59:18.895 1732 INFO oslo.messaging._drivers.impl_rabbit [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 ] Connecting to AMQP server on CONTROLLER_IP:5672
2016-01-29 12:59:18.921 1732 INFO oslo.messaging._drivers.impl_rabbit [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 ] Connected to AMQP server on CONTROLLER_IP:5672
2016-01-29 12:59:26.897 1733 ERROR nova.scheduler.utils [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Error from last host: compute2 (node compute2): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2033, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2164, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6 was re-scheduled: Unexpected vif_type=binding_failed\n']
2016-01-29 12:59:26.962 1733 WARNING nova.scheduler.driver [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] NoValidHost exception with message: 'No valid host was found.'
2016-01-29 12:59:26.964 1733 WARNING nova.scheduler.driver [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Setting instance to ERROR state.

/var/log/nova/nova-compute.log (in compute node):

[...]
2016-01-29 12:59:25.911 982 WARNING nova.virt.disk.vfs.guestfs [-] Failed to close augeas aug_close: do_aug_close: you must call 'aug-init' first to initialize Augeas
2016-01-29 12:59:26.043 982 ERROR nova.compute.manager [-] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Instance failed to spawn
2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Traceback (most recent call last):
2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2246, in _build_resources
2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     yield resources
2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance
2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     block_device_info=block_device_info)
2016-01-29 12 ...
(more)
2016-01-29 04:18:14 -0500 asked a question NovaException: Unexpected vif_type=binding_failed

Hello,

I'm getting the error "NovaException: Unexpected vif_type=binding_failed" when I launch an instance with just the external network, or with both the external and the internal (tenant) network (if the instance has only the internal (tenant) network, it runs OK).

How can I find my problem?
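
In case it helps with the diagnosis, these are the checks I can run on the controller and on the failing compute node (standard Neutron commands; the log path is from my RDO install):

    # is the Open vSwitch agent on compute2 alive?
    neutron agent-list

    # are the ML2 drivers and the bridge mapping for the external (flat) network in place?
    grep -E 'type_drivers|mechanism_drivers|bridge_mappings' /etc/neutron/plugins/ml2/ml2_conf.ini

    # what does the agent say while the port binding fails?
    tail -f /var/log/neutron/openvswitch-agent.log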

Thanks.

/var/log/nova/nova-conductor.log (in controller+network node):

2016-01-29 12:56:59.453 1732 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:57:30.583 1733 INFO oslo.messaging._drivers.impl_rabbit [req-033ec793-e8c6-4783-bbde-920552217e40 ] Connecting to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:57:30.606 1733 INFO oslo.messaging._drivers.impl_rabbit [req-033ec793-e8c6-4783-bbde-920552217e40 ] Connected to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:57:48.801 1733 ERROR nova.scheduler.utils [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] Error from last host: compute2 (node compute2): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2033, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2164, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 96113058-afdf-4d14-a746-b71ddfb45802 was re-scheduled: Unexpected vif_type=binding_failed\n']
    2016-01-29 12:57:48.868 1733 WARNING nova.scheduler.driver [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] NoValidHost exception with message: 'No valid host was found.'
    2016-01-29 12:57:48.868 1733 WARNING nova.scheduler.driver [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] Setting instance to ERROR state.
    2016-01-29 12:59:18.895 1732 INFO oslo.messaging._drivers.impl_rabbit [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 ] Connecting to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:59:18.921 1732 INFO oslo.messaging._drivers.impl_rabbit [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 ] Connected to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:59:26.897 1733 ERROR nova.scheduler.utils [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Error from last host: compute2 (node compute2): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2033, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2164, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6 was re-scheduled: Unexpected vif_type=binding_failed\n']
    2016-01-29 12:59:26.962 1733 WARNING nova.scheduler.driver [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] NoValidHost exception with message: 'No valid host was found.'
    2016-01-29 12:59:26.964 1733 WARNING nova.scheduler.driver [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Setting instance to ERROR state.

/var/log/nova/nova-compute.log (in compute node):

2016-01-29 12:59:25.911 982 WARNING nova.virt.disk.vfs.guestfs [-] Failed to close augeas aug_close: do_aug_close: you must call 'aug-init' first to initialize Augeas
2016-01-29 12:59:26.043 982 ERROR nova.compute.manager [-] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Instance failed to spawn
2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Traceback (most recent call last):
2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2246, in _build_resources
2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     yield resources
...
(more)
2016-01-29 02:47:19 -0500 answered a question Floating IPs

Yeahh!!! Thanks!!! With those iptables rules I can connect to the floating IP. I have checked that, whether I apply the rule to eth0 or to eth2, in both cases I can SSH.

Thanks!!!

2016-01-29 01:31:56 -0500 received badge  Popular Question (source)
2016-01-28 20:25:47 -0500 received badge  Famous Question (source)
2016-01-28 08:56:49 -0500 asked a question Floating IPs

Hello,

I'm having problems with my little OpenStack cloud. My scenario is this:

  1. one server (controller + network node on the same machine) with 3 NICs (management, data and external network). In this case, the management and external NICs share the same physical network, but in the configuration files the management interface has an IP and the external one has no IP address.
  2. two compute nodes with 2 NICs (management and data network).

After creating a private (tenant) network, instances running on both compute nodes can ping each other with no problem.

neutron net-create demo-net
neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24

However, after creating a public network and assigning a floating IP, the IP is assigned and associated correctly, but I can't ping or SSH to the floating IP. I have added SSH and ICMP to my security group...

neutron net-create ext-net --shared --router:external True --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net --name ext-subnet --no-gateway --allocation-pool start=STARTING_POOL_FLOATING,end=ENDING_POOL_FLOATING --disable-dhcp PUBLIC_NETWORK
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net

As I commented before, PUBLIC_NETWORK is the same for eth0 and eth2 (both NICs are wired to the same switch). The only difference is in the "ifcfg-ethX" configuration file: for "0" there are IPADDR and NETWORK entries, and for "2" there are no parameters (only PROMISC="yes", because I have read that the external interface needs to be in promiscuous mode).
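
For reference, this is how I understood the external interface should be wired up, following the install guide (I may be missing part of this; names like br-ex are the guide's defaults):

    # /etc/sysconfig/network-scripts/ifcfg-eth2: no IP, just up and promiscuous
    DEVICE=eth2
    ONBOOT=yes
    BOOTPROTO=none
    PROMISC=yes

    # attach eth2 to the external OVS bridge and map it to the "external" physical network
    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex eth2

    # /etc/neutron/plugins/ml2/ml2_conf.ini, [ovs] section
    bridge_mappings = external:br-ex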

If I run ping through netns, it works: "ip netns exec qrouter-XXX ping FLOATING_IP". Also, I can run a shell, "ip netns exec qrouter-XXX /bin/bash", and I end up "somewhere" where I can see the private and the floating IP on 2 interfaces (a qr-xxxx-yy and a qg-xxxx-yy). The first one has the private IP and the second one has the public (floating) IP.

What am I doing wrong in the configuration? I need to get access through the floating IPs. How can I get more debugging information?
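
These are the kinds of checks I can run to get more information (standard tools; the router ID and interface names are placeholders):

    # does the router namespace reach the external gateway at all?
    ip netns exec qrouter-ROUTER_ID ping -c 3 EXTERNAL_GATEWAY_IP

    # do packets for the floating IP arrive on the qg- interface?
    ip netns exec qrouter-ROUTER_ID tcpdump -ni qg-xxxx-yy icmp

    # is the DNAT rule for the floating IP present inside the namespace?
    ip netns exec qrouter-ROUTER_ID iptables -t nat -L -n | grep FLOATING_IP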

Thanks a lot!!!

2016-01-28 08:55:27 -0500 asked a question Floating IPs not reachable

2015-08-05 10:35:40 -0500 received badge  Popular Question (source)
2015-08-05 10:35:40 -0500 received badge  Notable Question (source)
2015-07-15 02:16:29 -0500 received badge  Notable Question (source)
2015-07-15 02:16:29 -0500 received badge  Popular Question (source)
2015-07-10 08:17:33 -0500 received badge  Famous Question (source)
2015-07-09 10:56:11 -0500 received badge  Notable Question (source)
2015-07-09 06:13:39 -0500 commented answer External network and floating IPs

I have checked that my data network for instances is going through eth0 and not through eth1, despite the parameters "CONFIG_NOVA_NETWORK_PUBIF=eth0" and "CONFIG_NOVA_NETWORK_PRIVIF=eth1"... Or are those parameters ONLY for nova-network??? In that case, how can I define that network topology in neutron?

2015-07-09 04:00:50 -0500 received badge  Popular Question (source)
2015-07-09 04:00:25 -0500 commented answer External network and floating IPs

I have already configured it with "gre":

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=gre
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=gre

2015-07-09 03:44:57 -0500 answered a question External network and floating IPs

Hi,

Thanks for your answer, but I don't really know whether it can help me, because I need to use "gre" and not "vlan" or "vxlan". Also, I have already configured keystone with httpd.

My question is focused on the network concepts and configuration when the eth0 (management) and eth2 (external) networks and NICs NEED to share the same physical network.

I will use this answer to ask an "elementary" question about neutron, but for me the answer is not trivial. The management network applied in the configuration files must be the physical management network. The data network can be any "private" network where all my compute nodes and the network server can communicate with each other, but a user can also create any private network from the dashboard, right? And the external network (for floating IPs) should be configured in the files as a subnet of the physical network. What I don't understand is the difference between launching an instance with ONLY a private network and then assigning a floating IP, and launching an instance with both a private and a public network. In the second case, the instance can get a public IP address from my own DHCP agent.
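
To make the two cases concrete, this is roughly what I mean, using the CLI of that era (the image, flavor and net IDs are placeholders):

    # case 1: instance on the private network only, then a floating IP on top
    nova boot --flavor m1.small --image IMAGE --nic net-id=DEMO_NET_ID vm1
    neutron floatingip-create ext-net
    nova floating-ip-associate vm1 FLOATING_IP

    # case 2: instance plugged directly into both networks
    nova boot --flavor m1.small --image IMAGE --nic net-id=DEMO_NET_ID --nic net-id=EXT_NET_ID vm2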

Bufff, what a mess!!

Help, please!!

2015-07-09 01:38:46 -0500 received badge  Enthusiast
2015-07-08 06:32:38 -0500 asked a question External network and floating IPs

Hello,

I'm going to install an OpenStack cloud using "packstack" in the following scenario:

  1. a server that will act as controller node and network node simultaneously, with 3 NICs. The first NIC has (and needs to have) a public IP and will be connected to the management network (yes, a management network with public IPs); the second NIC will be for VM data; the third NIC will be for the external network (floating IPs), but in this case that external network needs to be (must be) a subnet of my management network because, as I explained before, my management network is also a public network.
  2. two compute nodes with 2 NICs. The first NIC is connected to the management network (also with a public IP); the second NIC is for VM data.

With this answer file:

> CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
> CONFIG_DEFAULT_PASSWORD=MYPASSWORD
> CONFIG_MARIADB_INSTALL=y
> CONFIG_GLANCE_INSTALL=y
> CONFIG_CINDER_INSTALL=n
> CONFIG_MANILA_INSTALL=n
> CONFIG_NOVA_INSTALL=y
> CONFIG_NEUTRON_INSTALL=y
> CONFIG_HORIZON_INSTALL=y
> CONFIG_SWIFT_INSTALL=n
> CONFIG_CEILOMETER_INSTALL=n
> CONFIG_HEAT_INSTALL=n
> CONFIG_SAHARA_INSTALL=n
> CONFIG_TROVE_INSTALL=n
> CONFIG_IRONIC_INSTALL=n
> CONFIG_CLIENT_INSTALL=y
> CONFIG_NTP_SERVERS=ntp-server
> CONFIG_NAGIOS_INSTALL=n
> EXCLUDE_SERVERS=
> CONFIG_DEBUG_MODE=n
> CONFIG_CONTROLLER_HOST=CONTROLLER
> CONFIG_COMPUTE_HOSTS=COMPUTE-1,COMPUTE-2
> CONFIG_NETWORK_HOSTS=CONTROLLER
> CONFIG_VMWARE_BACKEND=n
> CONFIG_UNSUPPORTED=n
> CONFIG_VCENTER_HOST=
> CONFIG_VCENTER_USER=
> CONFIG_VCENTER_PASSWORD=
> CONFIG_VCENTER_CLUSTER_NAME=
> CONFIG_STORAGE_HOST=CONTROLLER
> CONFIG_SAHARA_HOST=CONTROLLER
> CONFIG_USE_EPEL=n
> CONFIG_REPO=
> CONFIG_RH_USER=
> CONFIG_SATELLITE_URL=
> CONFIG_RH_PW=
> CONFIG_RH_OPTIONAL=y
> CONFIG_RH_PROXY=
> CONFIG_RH_PROXY_PORT=
> CONFIG_RH_PROXY_USER=
> CONFIG_RH_PROXY_PW=
> CONFIG_SATELLITE_USER=
> CONFIG_SATELLITE_PW=
> CONFIG_SATELLITE_AKEY=
> CONFIG_SATELLITE_CACERT=
> CONFIG_SATELLITE_PROFILE=
> CONFIG_SATELLITE_FLAGS=
> CONFIG_SATELLITE_PROXY=
> CONFIG_SATELLITE_PROXY_USER=
> CONFIG_SATELLITE_PROXY_PW=
> CONFIG_AMQP_BACKEND=rabbitmq
> CONFIG_AMQP_HOST=CONTROLLER
> CONFIG_AMQP_ENABLE_SSL=n
> CONFIG_AMQP_ENABLE_AUTH=n
> CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
> CONFIG_AMQP_SSL_PORT=5671
> CONFIG_AMQP_SSL_CACERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
> CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
> CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
> CONFIG_AMQP_SSL_SELF_SIGNED=y
> CONFIG_AMQP_AUTH_USER=amqp_user
> CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
> CONFIG_MARIADB_HOST=CONTROLLER
> CONFIG_MARIADB_USER=root
> CONFIG_MARIADB_PW=MYPASSWORD
> CONFIG_KEYSTONE_DB_PW=MYPASSWORD
> CONFIG_KEYSTONE_REGION=RegionOne
> CONFIG_KEYSTONE_ADMIN_TOKEN=3f87ef703c8443e0bffea9f14c49a615
> CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
> CONFIG_KEYSTONE_ADMIN_USERNAME=admin
> CONFIG_KEYSTONE_ADMIN_PW=password
> CONFIG_KEYSTONE_DEMO_PW=demo
> CONFIG_KEYSTONE_API_VERSION=v2.0
> CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
> CONFIG_KEYSTONE_SERVICE_NAME=httpd
> CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
> CONFIG_KEYSTONE_LDAP_URL=ldap://CONTROLLER
> CONFIG_KEYSTONE_LDAP_USER_DN=
> CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
> CONFIG_KEYSTONE_LDAP_SUFFIX=
> CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
> CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
> CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
> CONFIG_KEYSTONE_LDAP_USER_FILTER=
> CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
> CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
> CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
> CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
> CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
> CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
> CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
> CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
> CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
> CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
> CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
> CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
> CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
> CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
> CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
> CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
> CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
> CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
> CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
> CONFIG_KEYSTONE_LDAP_USE_TLS=n
> CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
> CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
> CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
> CONFIG_GLANCE_DB_PW=MYPASSWORD
> CONFIG_GLANCE_KS_PW=MYPASSWORD
> CONFIG_GLANCE_BACKEND=file
> CONFIG_CINDER_DB_PW=PW_PLACEHOLDER
> CONFIG_CINDER_KS_PW=PW_PLACEHOLDER
> CONFIG_CINDER_BACKEND=lvm
> CONFIG_CINDER_VOLUMES_CREATE=y
> CONFIG_CINDER_VOLUMES_SIZE=20G
> CONFIG_CINDER_GLUSTER_MOUNTS=
> CONFIG_CINDER_NFS_MOUNTS=
> CONFIG_CINDER_NETAPP_LOGIN=
> CONFIG_CINDER_NETAPP_PASSWORD=
> CONFIG_CINDER_NETAPP_HOSTNAME=
> CONFIG_CINDER_NETAPP_SERVER_PORT=80
> CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
> CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
> CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
> CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
> CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
> CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
> CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
> CONFIG_CINDER_NETAPP_NFS_SHARES=
> CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
> CONFIG_CINDER_NETAPP_VOLUME_LIST=
> CONFIG_CINDER_NETAPP_VFILER=
> CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
> CONFIG_CINDER_NETAPP_VSERVER=
> CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
> CONFIG_CINDER_NETAPP_SA_PASSWORD=
> CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
> CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
> CONFIG_CINDER_NETAPP_STORAGE_POOLS=
> CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
> CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
> CONFIG_MANILA_BACKEND=generic
> CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
> CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
> CONFIG_MANILA_NETAPP_LOGIN=admin
> CONFIG_MANILA_NETAPP_PASSWORD=
> CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
> CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
> CONFIG_MANILA_NETAPP_SERVER_PORT=443
> CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
> CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
> CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
> CONFIG_MANILA_NETAPP_VSERVER=
> CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
> CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
> CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
> CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
> CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
> CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
> CONFIG_MANILA_NETWORK_TYPE=neutron
> CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
> CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
> CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
> CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
> CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
> CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
> CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
> CONFIG_NOVA_DB_PW=MYPASSWORD
> CONFIG_NOVA_KS_PW=MYPASSWORD
> CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
> CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
> CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
> CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
> CONFIG_NOVA_COMPUTE_PRIVIF=eth1
> CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
> CONFIG_NOVA_NETWORK_PUBIF=eth0
> CONFIG_NOVA_NETWORK_PRIVIF=eth1
> CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
# really, I don't know if I need to change that range because I will use Neutron and not nova-network !?!?!?!
> CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4 ...
(more)
2015-05-24 10:02:01 -0500 asked a question Outbound connectivity from the VMs (instances) to the Internet

Hello,

I'm managing an OpenStack cloud (Havana... yeah, too old...) with nova-network and ONLY one interface per node (and one interface on the controller and network node, which are the same server). All traffic flows over eth0 (and, of course, br100).

When I launch an instance, I assign it a floating IP address from the public pool, so the instance can be reached from the Internet with SSH. But, once I have logged into the instance, I can't browse the Internet: it seems there is a NAT problem with some iptables rule on the controller. I have run this easy test: from inside the instance (with a private IP assigned by OpenStack and a public floating IP), I ping another server (on the same public network as the floating IPs), and that server then receives ICMP traffic from the private IP of the instance, as if NAT were failing on the controller.
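
For reference, these are the checks I run to see whether the floating-IP SNAT rule is actually matching (the chain name is from the listing below; eth0 is my only interface):

    # packet/byte counters on the floating-IP SNAT rules
    iptables -t nat -L nova-network-float-snat -n -v

    # which source address actually leaves the controller towards the other server?
    tcpdump -ni eth0 icmp and host OTHER_SERVER_IP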

However, the iptables rules on the controller seem to be correct (or at least they seem correct to me :( ):

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
nova-network-PREROUTING  all  --  anywhere             anywhere
nova-api-PREROUTING  all  --  anywhere             anywhere

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
nova-network-POSTROUTING  all  --  anywhere             anywhere
nova-api-POSTROUTING  all  --  anywhere             anywhere
nova-postrouting-bottom  all  --  anywhere             anywhere
MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535
MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535
MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
nova-network-OUTPUT  all  --  anywhere             anywhere
nova-api-OUTPUT  all  --  anywhere             anywhere

Chain nova-api-OUTPUT (1 references)
target     prot opt source               destination

Chain nova-api-POSTROUTING (1 references)
target     prot opt source               destination

Chain nova-api-PREROUTING (1 references)
target     prot opt source               destination

Chain nova-api-float-snat (1 references)
target     prot opt source               destination

Chain nova-api-snat (1 references)
target     prot opt source               destination
nova-api-float-snat  all  --  anywhere             anywhere

Chain nova-network-OUTPUT (1 references)
target     prot opt source               destination
DNAT       all  --  anywhere             10.3.4.200          to:192.168.32.13
DNAT       all  --  anywhere             FLOATING_PUBLIC_IP  to:192.168.32.13

Chain nova-network-POSTROUTING (1 references)
target     prot opt source               destination
ACCEPT     all  --  192.168.32.0/22      CONTROLLER_PUBLIC_IP
ACCEPT     all  --  192.168.32.0/22      192.168.32.0/22     ! ctstate DNAT
SNAT       all  --  192.168.32.13        anywhere            ctstate DNAT to:10.3.4.200
SNAT       all  --  192.168.32.13        anywhere            ctstate DNAT to:FLOATING_PUBLIC_IP

Chain nova-network-PREROUTING (1 references)
target     prot opt source               destination
DNAT       tcp  --  anywhere             169.254.169.254     tcp dpt:http to:CONTROLLER_PUBLIC_IP:8775
DNAT       all  --  anywhere             10.3.4.200          to:192.168.32.13
DNAT       all  --  anywhere             FLOATING_PUBLIC_IP  to:192.168.32.13

Chain nova-network-float-snat (1 references)
target     prot opt source               destination
SNAT       all  --  192.168.32.13        192.168.32.13       to:10.3.4.200
SNAT       all  --  192.168.32.13        anywhere            to:10.3.4.200
SNAT       all  --  192.168.32.13        192.168.32.13       to:FLOATING_PUBLIC_IP
SNAT       all  --  192.168.32.13        anywhere            to:FLOATING_PUBLIC_IP

Chain nova-network-snat (1 ...
(more)
2015-05-24 10:02:00 -0500 asked a question Outbound connectivity from the VMs (instances) to the Internet

2014-05-23 10:15:42 -0500 received badge  Famous Question (source)
2014-05-12 07:57:40 -0500 received badge  Notable Question (source)
2014-05-09 11:16:57 -0500 received badge  Popular Question (source)
2014-05-09 03:39:24 -0500 commented question Pings don't work from the controller

Yes, of course:

[root@IH-Server ~(keystone_admin)]# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

2014-05-08 08:36:45 -0500 received badge  Famous Question (source)
2014-05-08 08:36:04 -0500 received badge  Student (source)
2014-05-08 08:02:20 -0500 asked a question Pings don't work from the controller

Hi,

After installing OpenStack IceHouse in all-in-one mode using packstack, I cannot ping the VMs from the host (controller and compute at the same time).

We can ping using "ip netns exec qrouter-blablabla ping VM_IP"

My routing table after launching instances is:

[root@IH-Server ~(keystone_admin)]# route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
MY_NETWORK      0.0.0.0         255.255.240.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
0.0.0.0         MY_GW           0.0.0.0         UG    0      0        0 eth0

ovs-vsctl show: (output attached as a screenshot)
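
In case it is useful, these are the host-side checks I can paste (standard ip/ovs commands; the bridge and router names depend on the packstack defaults):

    ip addr show br-ex                          # does the host itself have an address on the external bridge?
    ip route                                    # is there a route towards the floating-IP subnet?
    ip netns list
    ip netns exec qrouter-ROUTER_ID ip addr     # the qr-/qg- interfaces and their IPs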

Any help will be highly appreciated.

Thanks.

2014-05-02 03:41:15 -0500 received badge  Notable Question (source)
2014-04-29 13:08:05 -0500 received badge  Popular Question (source)
2014-04-29 08:40:28 -0500 asked a question Upgrading OpenStack from Grizzly to IceHouse

Hi, I'm trying to upgrade from Grizzly to IceHouse by running "packstack --answerfile=myfile.txt". I get the following error:

[...]
MY-IP_keystone.pp:                               [ ERROR ]
Applying Puppet manifests                        [ ERROR ]

ERROR : Error appeared during Puppet run: MY-IP_keystone.pp Error: /Stage[main]/Nova::Keystone::Auth/Keystone_user[nova]: Could not evaluate: Execution of '/usr/bin/keystone --os-auth-url http://127.0.0.1:35357/v2.0/ token-get' returned 1: The request you have made requires authentication. (HTTP 401)

You will find full trace in log /var/tmp/packstack/20140429-152631-QoY_jJ/manifests/MY-IP_keystone.pp.log Please check log file /var/tmp/packstack/20140429-152631-QoY_jJ/openstack-setup.log for more information
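
For what it's worth, this is how I plan to sanity-check the admin credentials by hand, using the keystonerc_admin file that packstack generated on the previous run (assuming it is still at the default path):

    source /root/keystonerc_admin
    keystone token-get
    keystone user-list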

Could someone help me?

Thanks!