
jbheren's profile - activity

2017-06-22 01:35:28 -0500 received badge  Famous Question (source)
2017-04-05 08:51:19 -0500 answered a question No valid host was found. There are not enough hosts available.

When this happens, I simply connect to one compute node and restart the nova-scheduler service.

juju ssh nova-compute/0
sudo /etc/init.d/nova-scheduler restart
2016-05-23 16:31:09 -0500 received badge  Notable Question (source)
2016-05-23 16:31:09 -0500 received badge  Popular Question (source)
2016-05-12 04:51:56 -0500 received badge  Enthusiast
2016-05-11 11:18:54 -0500 answered a question Liberty 4 nodes juju deployment can ping floating ip but not internal

Finally, I got the answer from @FlorianHaas

Yes, you're not supposed to be able to ping private IPs that live within your VXLAN-tunneled private networks, except from other VMs in that same network.

With that cleared up, my OpenStack & Neutron setup works as expected.

2016-05-11 05:31:38 -0500 asked a question Liberty 4 nodes juju deployment can ping floating ip but not internal

I have a 4-node physical deployment based on hastexo's post: https://www.hastexo.com/resources/hints-and-kinks/ubuntu-openstack-juju-4-nodes/

From any OpenStack node, I can ping the external router interface and access my instances via their floating IPs, but I cannot ping the internal router interface or internal IPs from any node. Is this the normal behaviour?

On the VMs themselves, after SSHing in via a floating IP, I can ping both internal and external addresses, and Internet access works.

I did no manual post-configuration except the network creation described in the Liberty for Ubuntu documentation: http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html#verify-service-operation

Here is how I created the networks:

#create external network and subnet
neutron net-create external --shared --router:external=True \
  --provider:physical_network external --provider:network_type flat
neutron subnet-create external --name ext-subnet \
  --allocation-pool start=192.168.1.230,end=192.168.1.250 \
  --disable-dhcp \
  --gateway 192.168.1.1 192.168.1.0/24

#create tenant network and subnet
neutron net-create internal --provider:network_type vxlan
neutron subnet-create internal --name internal-subnet \
  --gateway 10.0.1.1 10.0.1.0/24 \
  --dns-nameservers list=true 8.8.4.4 8.8.8.8

# add router
neutron router-create ext-router
neutron router-interface-add ext-router internal-subnet
neutron router-gateway-set ext-router external

# create floating ip
neutron floatingip-create external

Here is the ovs-vsctl show output:

sudo: unable to resolve host compute-i5
bb4fe11a-022e-41f3-b8ad-07cd338c1dad
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qvo716db988-23"
            tag: 3
            Interface "qvo716db988-23"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-data
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
        Port br-data
            Interface br-data
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a00000b"
            Interface "vxlan-0a00000b"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.12", out_key=flow, remote_ip="10.0.0.11"}
    ovs_version: "2.4.0"

Here are the routes I see on the compute node (strange that no OVS bridge appears as an interface?):

ubuntu@compute-i5:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 juju-br0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 juju-br0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

And the juju config I used for the install:

juju-gui:
#  openstack-origin: 'cloud:trusty-liberty'
keystone:
  openstack-origin: 'cloud:trusty-liberty'
  admin-password: 'my very secret password'
nova-cloud-controller:
  ha-bindinterface: eth1
  openstack-origin: 'cloud:trusty-liberty'
  network-manager: Neutron
neutron-gateway:
  openstack-origin: 'cloud:trusty-liberty'
  ha-bindinterface: eth1
  ext-port: eth2
  bridge-mappings: 'external:br-ex'
  os-data-network: 192.168.133.0/24
  instance-mtu: 1400
neutron-api:
  openstack-origin: 'cloud:trusty-liberty'
  ha-bindinterface: eth1
  network-device-mtu: 1400
  # Always make sure you enable security groups
  neutron-security-groups: true
  overlay-network-type: vxlan
rabbitmq-server:
  ha-bindinterface: eth1
# Cinder is deployed in two parts: one for the API and scheduler
# (which ...
(more)
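A note on the instance-mtu: 1400 setting in the config above: VXLAN encapsulation adds roughly 50 bytes of overhead per packet, so on a standard 1500-byte physical network the tenant-side MTU must be at most 1450; 1400 simply leaves extra headroom. The arithmetic:

```shell
# VXLAN over IPv4 adds ~50 bytes on top of the inner IP packet:
# 20 (outer IP) + 8 (UDP) + 8 (VXLAN header) + 14 (inner Ethernet).
# Maximum safe tenant MTU on a 1500-byte underlay:
physical_mtu=1500
vxlan_overhead=50
echo $((physical_mtu - vxlan_overhead))
```

This prints 1450; any value at or below it (such as the 1400 used here) avoids fragmentation of the encapsulated traffic.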
2016-05-08 14:43:19 -0500 received badge  Famous Question (source)
2016-04-18 07:03:45 -0500 received badge  Notable Question (source)
2016-04-15 09:07:29 -0500 received badge  Popular Question (source)
2016-04-15 05:08:38 -0500 received badge  Editor (source)
2016-04-14 14:57:22 -0500 asked a question Three nodes setup with neutron, cannot ping VMs either internal or External IP

Hi, I installed OpenStack Liberty on three Ubuntu 14.04 nodes using MAAS and juju.

The MAAS controller machine has two interfaces: eth0 with IP 10.0.0.1, and eth1 (192.168.29) acting as the gateway to the external network.

The OpenStack nodes have a single physical interface, eth0, connected to the MAAS-managed 10.0.0.1 network; the neutron-gateway node additionally has an eth1 interface connected to the 192.168.0 external network.

Here is my juju deployment config:

keystone:
  admin-password: openstack
  debug: 'true'
  log-level: DEBUG
nova-cloud-controller:
  network-manager: Neutron
nova-compute:
  flat-interface: 'eth0'
  enable-live-migration: 'True'
  migration-auth-type: "none"
  virt-type: kvm
  enable-resize: 'True'
neutron-gateway:
  ext-port: 'eth1'
  bridge-mappings: 'external:br-ex'
  instance-mtu: 1400
glance:
cinder:
openstack-dashboard:
  webroot: "/"

The deployment script:

#!/bin/bash
juju deploy --to 0 juju-gui
juju deploy --to lxc:0 mysql
juju deploy --config config.yaml --to lxc:0 keystone
juju deploy --config config.yaml --to lxc:0 nova-cloud-controller
juju deploy --config config.yaml --to lxc:0 glance
juju deploy --to lxc:0 rabbitmq-server
juju deploy --config config.yaml --to lxc:0 openstack-dashboard
juju deploy --config config.yaml --to lxc:0 cinder
juju deploy --config config.yaml nova-compute --constraints "tags=compute"
juju deploy --config config.yaml neutron-gateway --constraints "tags=neutron"
juju add-relation mysql keystone
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server:amqp
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance cinder
juju add-relation mysql cinder
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation cinder keystone
juju add-relation openstack-dashboard keystone
juju add-relation neutron-gateway mysql
juju add-relation neutron-gateway:amqp rabbitmq-server:amqp
juju add-relation neutron-gateway nova-cloud-controller
juju set keystone admin-password="openstack"
# display status
juju stat --format=tabular
# Download image
wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
glance add name="Trusty x86_64" is_public=true container_format=ovf disk_format=qcow2 < trusty-server-cloudimg-amd64-disk1.img
# set default security group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# add keypair
nova keypair-add --pub-key ~/.ssh/id_rsa.pub id_rsa
nova image-list

After the initial setup, I followed the documentation at http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron_initial-networks-verify.html up to "Verify connectivity" and set up the neutron networks as follows:

#create external network and subnet
neutron net-create ext-net --shared --router:external=True
neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=192.168.1.230,end=192.168.1.250 \
  --disable-dhcp --gateway 192.168.1.1 192.168.1.0/24

#create tenant network and subnet
neutron net-create demo-net
neutron subnet-create demo-net --name demo-subnet \
  --gateway 10.0.1.1 10.0.1.0/24

# add router
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net

The neutron-gateway node has two physical network interfaces:

  • eth0 connected to the juju 10.0.0.0/24 network
  • eth1 connected to the external 192.168.1.0/24 network

When created, the router should get the first address ... (more)

2016-04-01 05:15:29 -0500 received badge  Supporter (source)