
Anonymouslemming's profile - activity

2017-11-14 09:55:09 -0500 received badge  Great Question (source)
2015-08-05 11:49:30 -0500 received badge  Good Question (source)
2015-03-23 16:09:42 -0500 received badge  Nice Question (source)
2015-03-23 16:09:36 -0500 marked best answer No Ports Available when trying to associate a floating IP

I've deployed OpenStack using Red Hat's Packstack all-in-one option on CentOS 7. I then followed the instructions at to use my existing external network.

Next, I created a private network for the admin project with a range of

I then started up a guest in the admin project, and it was allocated an IP from this range ( However, when I come to associate a floating IP with this guest so that I can access it internally, I am receiving a message saying "No Ports Available".

I am able to select an IP address from the pool that I set up when following the document above.


Can anyone advise what I have to do in order to complete this setup and map floating IPs to my compute instances on the private network, please?
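In case it helps later readers: "No Ports Available" usually means the instance's private network is not attached to a router whose gateway sits on the external network, so no port qualifies for a floating IP. A minimal sketch of the Juno-era neutron commands, where demo-router, ext-net and private-subnet are placeholder names rather than values from this deployment:

```shell
# Attach the private subnet to a router whose gateway is on the
# external network; only then do the instance's ports become
# eligible for a floating IP association.
neutron router-create demo-router
neutron router-gateway-set demo-router ext-net
neutron router-interface-add demo-router private-subnet
```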

2015-02-06 15:16:34 -0500 received badge  Famous Question (source)
2015-02-02 13:31:34 -0500 received badge  Notable Question (source)
2015-02-02 13:31:34 -0500 received badge  Famous Question (source)
2015-01-25 22:04:20 -0500 received badge  Popular Question (source)
2015-01-23 16:31:43 -0500 commented answer Is packstack with Juno supported on CentOS 6.6 ?

Thanks ... I guess it's back to fighting with iSCSI on CentOS 7 for me then :(

2015-01-23 14:18:20 -0500 asked a question Is packstack with Juno supported on CentOS 6.6 ?


I'm trying to use packstack with CentOS 6 and I'm getting the following error when trying to install openstack-packstack:

The first step (yum install -y ) succeeds:

  Installing : rdo-release-juno-1.noarch                                    1/1

  Verifying  : rdo-release-juno-1.noarch                                    1/1

  rdo-release.noarch 0:juno-1

Installing openstack-packstack then fails:

 yum install openstack-packstack  
Loaded plugins: fastestmirror, security  
Setting up Install Process    
Loading mirror speeds from cached hostfile  
 * base:  
 * extras:  
 * updates: [Errno 14] PYCURL  ERROR 22 - "The requested URL returned error: 404 Not Found"  
Trying other mirror.  
Error: Cannot retrieve repository metadata (repomd.xml) for repository: openstack-juno. Please verify its path and try again

Looking at , I don't see epel-6, only epel-7 and some Fedora versions.

Is there any way to use packstack to install Juno on CentOS 6.6?
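One quick way to confirm what yum is actually asking for (the .repo filename below is the one the rdo-release RPM normally installs; yours may differ):

```shell
# Print the repo URL yum resolved; if the path it points at only
# exists for epel-7, there is simply no EL6 build of the Juno repo
# to install from.
grep -i baseurl /etc/yum.repos.d/rdo-release.repo
```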


2015-01-03 16:58:40 -0500 commented answer Unable to start instances after host reboot

The existing config works fine for new machines and keeps working - right up until I reboot. The fact that it works and then stops seems to indicate that something is not right to me.

2015-01-03 16:58:08 -0500 commented answer Unable to start instances after host reboot

I've now spent the better part of the past week working on this, and I can now reproduce it 100% on both a virtual machine and on physical hardware. What seems to be happening is that the iscsi config is not persisted after the reboot. Creating a new machine creates a new config.

2015-01-02 16:40:19 -0500 marked best answer Convert simple KVM network to Openstack

I'm trying to convert my simple KVM based home lab to Openstack, and I'm struggling to find instructions for how to setup the network.

My current setup has 2 hypervisor hosts running Ubuntu 14.04 LTS and KVM. Each host has 2 NICs, but I'm currently only using one. Each NIC is set up as a bridge (br0) with a static IP address. Guests running on each host have IP addresses from the same range. The default gateway for all hosts on my network is a PFSense firewall with an internal address of The external address is allocated by the ISP and I NAT outgoing connections from my private network to that.

Existing Setup Diagram - A diagram can be seen at


I want to move to Openstack for learning purposes, and I'm a bit lost as to what components I need to use and how to configure them. The icehouse neutron documentation at is more complex than what I'm looking for.

I can find roughly what I think I'd want in the Havana docs at but I can't work out how to translate that into a working example.

Proposed Setup Diagram - What I'd like to get to is diagrammed at

I want to use a single host for the network and controller node. The controller will run as a guest on the network node. The network node and compute nodes have 2 NICs. Each NIC is connected to a different switch (I don't have VLAN enabled switches yet). There is a management network on and the compute node, the network node and the controller node can all reach each other on this.

I've done the controller setup steps up to and including the compute service setup ( ). Where do I go from here?
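For reference, the piece that usually bridges a setup like this is a flat provider network mapped onto the second NIC. A sketch of the ML2 settings involved, assuming the Open vSwitch mechanism driver; physnet1 and br-ex are placeholder names, and the file path is the stock one, so adjust to the actual deployment:

```shell
# Allow flat networks, name a physical network, and map it to the
# bridge that owns the second NIC. openstack-config comes from the
# openstack-utils package.
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks physnet1
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings physnet1:br-ex
```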

Thanks in advance,

2014-12-27 14:49:04 -0500 received badge  Famous Question (source)
2014-12-26 08:43:13 -0500 commented question Unable to start instances after host reboot
2014-12-26 08:39:09 -0500 commented question Unable to start instances after host reboot

iscsid status after boot but before new instance created -

iscsid status after new instance created (existing instances still not starting) -

saveconfig.json config file contents -

2014-12-25 09:01:36 -0500 received badge  Notable Question (source)
2014-12-25 04:18:01 -0500 received badge  Notable Question (source)
2014-12-24 21:58:09 -0500 received badge  Popular Question (source)
2014-12-24 15:37:59 -0500 commented question Unable to start instances after host reboot

targetcli ls is at - that's from the storage node

iscsid status is at As I'm just running the controller / storage / compute node right now, that was run from the controller node. But it's also a compute node.

2014-12-24 15:37:59 -0500 received badge  Commentator
2014-12-24 13:42:52 -0500 commented question Unable to start instances after host reboot

Sorry - that exceeds comment limit again ... Pastebin is at

I'll also amend the question to include that.

2014-12-24 07:52:09 -0500 asked a question Unable to start instances after host reboot

I have a 2-node environment where I'm using one host as the controller, network node and a compute node, with a second host acting purely as a compute node. I'm using CentOS 7 for both hosts.

At the moment, I have excluded the second node from the setup to simplify troubleshooting.

I am able to create and start instances. I use "Boot from image (creates new volume)" when creating these.

After a reboot, none of the existing instances that were created before the reboot start anymore. When I try to start them in the dashboard, I get an exception in the nova-api log and the status remains 'Shutoff'. The full output is at but what I believe to be the key parts are:

2014-12-24 13:38:37.778 3058 ERROR oslo.messaging.rpc.dispatcher [req-695fe6a8-245b-443e-9d4c-28563c4e31ba ] Exception during message handling: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T -p --rescan
Exit code: 21
Stdout: u''
Stderr: u'iscsiadm: No session found.\n'

If I create a new instance with a new volume, that starts.

After doing that, the error I get when trying to start any of the instances that were created before the reboot changes to

2014-12-24 13:36:00.113 2871 ERROR oslo.messaging.rpc.dispatcher [req-506b75df-4c7a-4fbd-8c94-b0178d0a109b ] Exception during message handling: iSCSI device not found at /dev/disk/by-path/

Both volumes are visible under Admin -> Volumes.

I don't see anything in the cinder api log or cinder server log while attempting to restart the instances.
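A couple of standard open-iscsi checks that may narrow this down (run as root; the commands are stock iscsiadm, nothing specific to this install):

```shell
# Do any iSCSI sessions exist after the reboot? No sessions would
# match the "No session found" error above.
iscsiadm -m session
# Are the target records still on disk, i.e. could nova log back in?
iscsiadm -m node
```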

target status is as follows:

service target status -l

Redirecting to /bin/systemctl status  -l target.service
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Wed 2014-12-24 13:37:36 GMT; 6h ago
  Process: 1028 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1028 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Dec 24 13:37:35 systemd[1]: Starting Restore LIO kernel target configuration...
Dec 24 13:37:36 target[1028]: No saved config file at /etc/target/saveconfig.json, ok, exiting
Dec 24 13:37:36 systemd[1]: Started Restore LIO kernel target configuration.
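Given the "No saved config file at /etc/target/saveconfig.json" line above, one possible explanation (an assumption, not verified on this setup) is that the LIO configuration cinder builds at runtime is never written to disk, so the target service has nothing to restore at boot. The persistence step would be:

```shell
# Persist the currently running LIO target configuration so that
# "targetctl restore" has something to load after the next reboot.
targetcli saveconfig
ls -l /etc/target/saveconfig.json   # confirm the file now exists
```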
2014-12-24 05:13:10 -0500 commented answer Make existing public network available

Thanks - can you point me to any documentation that explains how I set that ?

2014-12-24 04:33:15 -0500 received badge  Popular Question (source)
2014-12-24 04:12:28 -0500 commented answer Make existing public network available
2014-12-24 03:44:55 -0500 commented answer Make existing public network available

I can assign a floating IP but not ping it (even directly from the controller node), and the port in the router remains DOWN. And I have set up the security group to allow all ICMP and all TCP, exactly as I did (and which worked) on an all-in-one deployment.

2014-12-24 02:52:04 -0500 asked a question Make existing public network available

Hi all,

I now have a 2-node OpenStack installation working, created with packstack, and I can create instances on private networks.

I'm now trying to connect my existing public network to this so that other nodes on the network can reach OpenStack instances, but I'm struggling. What is the right way to make an existing network available for floating IP addresses?

My OpenStack dashboard is on my public network ( This is the network I wish to add to my OpenStack deployment.

If I try to add a new network of type 'External Network' with provider network type 'Local', it fails and I get an error in the neutron server log saying

2014-12-24 08:44:49.677 3890 INFO neutron.api.v2.resource [req-aed42425-a3ae-45ae-b7ce-0a806182b3ab None] create failed (client error): Invalid input for operation: network_type value 'local' not supported.

If I create the network as a VXLAN network, it succeeds. But I'm not sure VXLAN is the right solution for adding the existing local network?


After creating a working VXLAN external network, any router created on this network always has the gateway port in a DOWN state:

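Since 'local' is rejected and VXLAN feels wrong for an existing LAN, the usual choice here is a flat provider network. A sketch with placeholder values (extnet must match a flat_networks/bridge_mappings entry in the ML2 configuration; the CIDR, gateway and pool below are examples, not this network's real addresses):

```shell
# Create the external network as a flat provider network, then add a
# subnet whose allocation pool becomes the floating IP range. DHCP is
# disabled because the existing LAN already handles addressing.
neutron net-create public --router:external=True \
    --provider:network_type flat --provider:physical_network extnet
neutron subnet-create public 192.168.0.0/24 --name public-subnet \
    --enable_dhcp=False --gateway 192.168.0.1 \
    --allocation-pool start=192.168.0.200,end=192.168.0.220
```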

2014-12-23 16:56:10 -0500 received badge  Famous Question (source)
2014-12-17 13:05:58 -0500 received badge  Notable Question (source)