
yaroni's profile - activity

2019-03-02 09:56:10 -0500 received badge  Nice Question (source)
2015-04-28 03:42:12 -0500 received badge  Famous Question (source)
2015-04-28 03:42:12 -0500 received badge  Notable Question (source)
2015-03-08 14:05:40 -0500 received badge  Popular Question (source)
2015-03-08 10:02:12 -0500 received badge  Famous Question (source)
2015-03-08 07:19:32 -0500 commented question no ping to instance on compute node

@dbaxps I posted new output. For some reason I see that on the controller node I don't have a qrouter namespace in the ip netns output, as I had in the all-in-one setup!

2015-03-08 05:02:20 -0500 commented question no ping to instance on compute node

@dbaxps I did this part manually, but I edited my answer with the output of the neutron router definition.

2015-03-08 04:19:19 -0500 commented question no ping to instance on compute node

@Pavel Kutishchev I do have net.ipv4.ip_forward=1 on both the controller and compute node. How do you set it with GATEWAY=<router node IP>? I did set --gateway 172.16.1.1. I edited my post with more details.

2015-03-05 12:04:05 -0500 received badge  Notable Question (source)
2015-03-04 22:31:06 -0500 received badge  Popular Question (source)
2015-03-04 08:53:04 -0500 asked a question no ping to instance on compute node

I installed RDO Juno with packstack on two hosts (ip1, ip2):

packstack --install-hosts=ip1,ip2

Then on ip1 I created a bridge br-ex and added the physical interface to it in OVS.
I created a network:

neutron net-create extnet --router:external=True

And a subnet

neutron subnet-create extnet --allocation-pool start=172.16.7.150,end=172.16.7.170  --gateway 172.16.1.1 --enable_dhcp=False  172.16.0.0/16
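As a quick sanity check on those arguments (the gateway and the whole allocation pool must fall inside the subnet CIDR), a small Python sketch using the values from the command above:

```python
import ipaddress

# Values from the subnet-create command above
cidr = ipaddress.ip_network("172.16.0.0/16")
gateway = ipaddress.ip_address("172.16.1.1")
pool_start = ipaddress.ip_address("172.16.7.150")
pool_end = ipaddress.ip_address("172.16.7.170")

# Gateway and both pool boundaries sit inside the CIDR, and the pool is ordered
assert gateway in cidr
assert pool_start in cidr and pool_end in cidr
assert pool_start < pool_end
print("subnet arguments are consistent")
```

This only validates the addressing, not whether Neutron can actually route that range.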

Then I manually created a local network and a router, and connected the external and internal networks to the router.
The instance is launched on ip2.
I cannot ping the floating IP of the instance, but I can connect to its internal IP from the controller node:

ip netns exec qdhcp-de862bfd-dcc6-496c-9a34-272191a8f32b ssh -i /sriov.pem cirros@10.67.78.2

There are no errors in the nova-compute log or in the OVS agent log.

Edit:
I do have net.ipv4.ip_forward=1 on both the controller and compute node.
How do you set it with GATEWAY=<router node IP>?
I did set --gateway 172.16.1.1

On the controller node:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.1      0.0.0.0         UG    0      0        0 br-ex
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens192
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 br-ex
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-ex

On the compute node:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.1      0.0.0.0         UG    0      0        0 eno1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eno1
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eno1

      neutron net-create extnet --router:external=True
    Created a new network:
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | e3d90cbb-1547-4a97-af81-56655db7cc80 |
    | name                      | extnet                               |
    | provider:network_type     | vxlan                                |
    | provider:physical_network |                                      |
    | provider:segmentation_id  | 12                                   |
    | router:external           | True                                 |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | d8d051738a4c48cc9f1baa0f160e0f3a     |
    +---------------------------+--------------------------------------+
    [root@localhost ~(keystone_admin)]# neutron subnet-create extnet --allocation-pool start=172.16.7.150,end=172.16.7.170  --gateway 172.16.1.1 --enable_dhcp=False  172.16.0.0/16
    Created a new subnet:
    +-------------------+--------------------------------------------------+
    | Field             | Value                                            |
    +-------------------+--------------------------------------------------+
    | allocation_pools  | {"start": "172.16.7.150", "end": "172.16.7.170"} |
    | cidr              | 172.16.0.0/16                                    |
    | dns_nameservers   |                                                  |
    | enable_dhcp       | False                                            |
    | gateway_ip        | 172.16.1.1                                       |
    | host_routes       |                                                  |
    | id                | 2342dbe6-f5ba-43dd-a20a-ed91d14d2a5f             |
    | ip_version        | 4                                                |
    | ipv6_address_mode |                                                  |
    | ipv6_ra_mode      |                                                  |
    | name              |                                                  |
    | network_id        | e3d90cbb-1547-4a97-af81-56655db7cc80             |
    | tenant_id         | d8d051738a4c48cc9f1baa0f160e0f3a                 |
    +-------------------+--------------------------------------------------+
    [root@localhost ~(keystone_admin)]# neutron net-create InternalNet1  --gateway 172.16.1.1
    Bad Request (HTTP 400) (Request-ID: req-1bf8f9c3-b28e-43a1-ba44-cd931ee619e7)
    [root@localhost ~(keystone_admin)]# neutron net-create InternalNet1
    Created a new network:
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | 56239cb3-2018-4407-bd87-c58ee32e9ba6 |
    | name                      | InternalNet1                         |
    | provider:network_type     | vxlan                                |
    | provider:physical_network |                                      |
    | provider:segmentation_id  | 13                                   |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | d8d051738a4c48cc9f1baa0f160e0f3a     |
    +---------------------------+--------------------------------------+


    [root@localhost ~(keystone_admin)]# neutron net-list
    +--------------------------------------+--------------+------------------------------------------------------+
    | id                                   | name         | subnets                                              |
    +--------------------------------------+--------------+------------------------------------------------------+
    | cc339bfc-a634-4d49-86e2-1f081bc4ffdf | public       | 63710304-4d54-4e7c-b2ca-0a99c3c69f86 172.24 ...
2015-03-03 10:59:39 -0500 received badge  Famous Question (source)
2015-02-25 23:27:01 -0500 received badge  Nice Question (source)
2015-02-15 08:57:12 -0500 commented question sriov binding failure

Still not working

I tried it and I am still getting an error in neutron's server.log: 2015-02-15 08:54:00.366 5247 INFO neutron.quota [req-c6d690c5-14f6-453b-85b4-6ddb6d41afea None] Loaded quota_driver: <neutron.db.quota_db.DbQuotaDriver object at 0x4edced0>. 2015-02-15 08:54:00.476 5247 WARNING neutron.plu

2015-02-14 21:02:12 -0500 received badge  Notable Question (source)
2015-02-13 10:17:06 -0500 received badge  Popular Question (source)
2015-02-12 06:55:42 -0500 asked a question sriov binding failure

I am trying to launch a VM with SR-IOV but it fails.

I am getting the following error in nova-conductor.log 2015-02-10 12:14:33.337 4570 ERROR nova.scheduler.utils [req-63f0d61e-1e2e-40ba-8509-7c6c68238199 None] [instance: b57c7237-7f0b-4c67-8e97-dfa04ec56e68] Error from last host: localhost.localdomain (node localhost.localdomain): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _do_build_and_run_instance\n filter_properties)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2161, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance b57c7237-7f0b-4c67-8e97-dfa04ec56e68 was re-scheduled: Unexpected vif_type=binding_failed\n']

The lspci output looks like

09:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
09:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
09:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
09:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
09:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
...

In nova.conf

pci_passthrough_whitelist = {"address":"*:09:10.*","physical_network":"physnet1"}
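This is not nova's actual matching code, but the whitelist glob can be sanity-checked against the VF addresses from the lspci output with a shell-style pattern match (the 0000 domain prefix is assumed):

```python
from fnmatch import fnmatch

# Full PCI addresses of the VFs seen in lspci (0000 domain assumed)
vf_addresses = ["0000:09:10.0", "0000:09:10.1", "0000:09:10.2"]
whitelist_pattern = "*:09:10.*"

# Every VF should match the whitelist glob; the PFs (09:00.x) should not
assert all(fnmatch(addr, whitelist_pattern) for addr in vf_addresses)
assert not fnmatch("0000:09:00.0", whitelist_pattern)
```

If the glob were wrong, no PCI devices would be reported to the scheduler, which is one common cause of `vif_type=binding_failed`.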

In ml2_conf.ini

tenant_network_types = vxlan,vlan
mechanism_drivers =openvswitch,sriovnicswitch
network_vlan_ranges = physnet1:101:150

In ml2_sriov.ini

supported_pci_vendor_devs = 8086:10fb 
physical_device_mappings = physnet1:ens2f0

I tried to launch the VM both through the dashboard and from the shell, and got the same error:

neutron net-create --provider:physical_network=physnet1 --provider:network_type=vlan --provider:segmentation_id=111 sriov-net
neutron subnet-create sriov-net 192.168.2.0/24 --name sriov-subnet
neutron port-create 64926985-3c04-4e50-8502-b50c0f38af7c  --binding:vnic-type direct
nova boot --flavor m1.large --image 7648d005-4df8-43ab-abca-7f6908e32218   --nic port-id=b89a362d-1909-4e6f-8b64-223a66d66f41 sriov
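One thing worth double-checking (an assumption on my part, since only fragments of the config are shown): the `--provider:segmentation_id` passed to net-create must fall inside `network_vlan_ranges` from ml2_conf.ini, otherwise port binding fails:

```python
# network_vlan_ranges = physnet1:101:150  (from ml2_conf.ini above)
vlan_ranges = {"physnet1": (101, 150)}

# --provider:segmentation_id=111 from the net-create command above
physnet, seg_id = "physnet1", 111

low, high = vlan_ranges[physnet]
assert low <= seg_id <= high, "segmentation_id outside network_vlan_ranges"
print(f"VLAN {seg_id} is valid for {physnet}")
```

Here 111 is inside 101-150, so the VLAN range itself is not the problem in this case.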

The neutron log is as follows:

2015-02-10 12:14:04.828 5282 INFO neutron.wsgi [req-6be014f2-64bb-4fc8-80df-173d9b2f95c6 None] 172.16.4.250 - - [10/Feb/2015 12:14:04] "GET //v2.0/routers.json?tenant_id=5af604dbf4c2431f87e157c526158bd0 HTTP/1.1" 200 1046 0.027448

2015-02-10 12:14:06.429 5290 WARNING neutron.plugins.ml2.rpc [req-60be7756-4e9e-44b1-bc10-2b103c74bdc2 None] Device 66fd86cf-4fe8-4d07-8170-a158a156dc2d requested by agent ovs-agent-localhost.localdomain on network 4de555b2-5153-4271-b881-4e3b4a351a40 not bound, vif_type: binding_failed

2015-02-10 12:14:15.001 5280 INFO neutron.wsgi [-] (5280) accepted ('172.16.4.250', 37888)

2015-02-10 12:14:15.041 5280 INFO neutron.wsgi [req-e26ecbbb-a210-4271-913e-7f94699ef7b7 None] 172.16.4.250 - - [10/Feb/2015 12:14:15] "GET /v2.0/ports.json?device_id=b57c7237-7f0b-4c67-8e97-dfa04ec56e68&device_id=e9a531a4-b957-4dc9-8a1f-055e8866d102&device_id=a3e86ff8-6abb-4027-87cf-15fe903916f4&device_id=df44c009-4ccf-4dfe-8173-7a8613514a63&device_id=3f5d95ac-5c9c-49de-909b-74f0b2f722e2&device_id=7d57d794-20d6-4368-8c1a-7ccd41d3217a&device_id=10301f0b-c7f0-4291-bcc4-36936bfaf7ec&device_id=768da694-6771-4761-9520-e8103232dbf7 HTTP/1.1" 200 9223 0.036710

2015-02-10 12:14:15.044 5280 INFO neutron.wsgi [-] (5280) accepted ('172.16.4.250', 37889)

2015-02-10 12:14:15.073 5280 INFO neutron.wsgi [req-a6dda121-3dd5-4b2a-bd37-d20f8dbc79b7 None] 172.16.4.250 - - [10/Feb/2015 12:14:15] "GET /v2.0/security-groups.json?id=66d8cdc0-ce42-4ad5-b986-e9060522f293&id=9a534a9b-ac6e-43fc-a1d2-0dd4e8ffd555&id=94533a81-0e5b-4325-9096-0f919fe1fa97&id=ecb94f49-0fdd-4f6f-baa6-d602d83228bd&id=d66f4b45-cab4-4c65-b578-d9a2359f0ddd&id=d525c71f-ba6c-44b3-bae8-54307aeb3822 HTTP/1.1" 200 10154 0.028447

2015-02-10 12:14:15.113 5280 INFO neutron.wsgi [-] (5280) accepted ('172.16.4.250', 37890)

2015-02-10 12:14:15.140 5280 INFO neutron.wsgi [req-6aa90549-a253-412c-9264-0da80d0b075f None] 172.16.4.250 - - [10/Feb/2015 12:14:15] "GET //v2 ...
2015-02-11 09:49:00 -0500 commented question traffic mirroring through qvo

I also tried to use allowed address pair, but it seems to be blocked before the iptables rules

2015-02-03 04:15:30 -0500 commented answer issue regarding ovs br-int

Did you succeed in doing it? How?

2015-02-03 03:06:18 -0500 asked a question traffic mirroring through qvo

I want to mirror traffic between VMs via OVS. OpenStack created the qvo, qvb, qbr, and tap devices. In the ovs-vsctl show output I can see only the qvo interfaces, so I created the mirror on the qvo ports as follows, but packets get stuck before the qbr level. I can see packets reach qvo's TX, then qvb's RX, but no packets reach qbr! What is the right way to do it?

ovs-vsctl -- set Bridge br-int mirrors=@m -- --id=@qvo25d9875c-a7 get Port qvo25d9875c-a7 -- --id=@qvo1d3862bd-3a get Port qvo1d3862bd-3a -- --id=@m create Mirror name=mymirror select-dst-port=@qvo25d9875c-a7 select-src-port=@qvo25d9875c-a7 output-port=@qvo1d3862bd-3a
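The multi-command ovs-vsctl syntax is easy to mangle: each bare `--` starts a new sub-command, and `--id=@name` records a result for later reference within the same invocation. A small Python sketch that assembles the same command (port names taken from above; the `@src`/`@dst` labels are arbitrary identifiers I chose for readability):

```python
src = "qvo25d9875c-a7"  # port whose traffic is mirrored
dst = "qvo1d3862bd-3a"  # port that receives the mirrored traffic

argv = ["ovs-vsctl",
        "--", "set", "Bridge", "br-int", "mirrors=@m",
        "--", "--id=@src", "get", "Port", src,
        "--", "--id=@dst", "get", "Port", dst,
        "--", "--id=@m", "create", "Mirror", "name=mymirror",
        "select-dst-port=@src", "select-src-port=@src",
        "output-port=@dst"]

# Joining with spaces reproduces the one-line shell command
print(" ".join(argv))
```

Running it (e.g. via `subprocess.run(argv)`) requires a live OVS, so this is shown only as string assembly to make the sub-command boundaries explicit.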

2015-02-02 12:10:31 -0500 received badge  Famous Question (source)
2015-02-02 10:13:49 -0500 received badge  Famous Question (source)
2015-01-20 02:28:05 -0500 received badge  Notable Question (source)
2015-01-19 03:09:09 -0500 received badge  Notable Question (source)
2015-01-18 14:21:35 -0500 received badge  Popular Question (source)
2015-01-18 05:08:22 -0500 asked a question mirroring traffic between vm

Hi

I run Juno RDO packstack all-in-one. I created a separate network on 1.2.3.0/24 that is not connected to the external net. I want to mirror traffic from one VM to another on this network. I want the solution to also work on a production setup (not all-in-one).

In the ifconfig output I can see the qvo, qvb, qbr, and tap devices for my two VMs:
tap0b448724-0d is for rad2
tapaea6c53c-29 is for rad1

I want to mirror from rad1 to rad2, so after searching the web I did the following:

(my-cloudy)[root@localhost inputs]# ovs-vsctl add-port br-int tap0b448724-0d
(my-cloudy)[root@localhost inputs]# ovs-vsctl add-port br-int tapaea6c53c-29
(my-cloudy)[root@localhost inputs]# ovs-vsctl -- set Bridge br-int mirrors=@m  -- --id=@tap0b448724-0d get Port  tap0b448724-0d -- --id=@tapaea6c53c-29 get Port tapaea6c53c-29 -- --id=@m create Mirror name=mymirror select-dst-port=@tapaea6c53c-29 select-src-port=@tapaea6c53c-29 output-port=@tap0b448724-0d
a54d6966-b33e-4b64-a948-eefa15c850d5
(my-cloudy)[root@localhost inputs]# ovs-vsctl list Bridge br-int
_uuid               : 4d828dda-bf82-47eb-937c-49b3a00fc0b1
controller          : []
datapath_id         : "0000da8d824deb47"
datapath_type       : ""
external_ids        : {}
fail_mode           : secure
flood_vlans         : []
flow_tables         : {}
ipfix               : []
mirrors             : [a54d6966-b33e-4b64-a948-eefa15c850d5]
name                : br-int
netflow             : []
other_config        : {}
ports               : [01136404-75fa-4353-a8e6-c52be026a5a6, 318ab565-515e-45c5-afe1-9865817feb42, 37b1f634-42eb-4c09-b773-66666abdc351, 3f6c04ca-a3c6-4f32-ad6c-c8bbd2236404, 606df555-b2b6-474e-afd7-1f54b7534dc1, 63878890-0caf-490b-a5e9-e933f61687a4, 90272284-cac7-4e13-89e4-80bd5d452829, 9c69b662-86fc-4782-8cfe-e9faf67adc23, a245d77a-049c-49df-93ce-a4278d48d9be, a802a623-6cdb-449b-87fd-84e08e43b989, aca9ad00-715a-4a98-a179-114b0f92d03e, e5a375fd-60fb-4431-ac93-afa9c22b2aff, fc68b5d8-828b-4eb8-8451-314d51c449a4]
protocols           : []
sflow               : []
status              : {}
stp_enable          : false

I sent a trace on tapaea6c53c-29 (tcpreplay -M 10 -i tapaea6c53c-29 ~/mytrace.pcap) and saw that it arrived at rad1, but it didn't arrive at rad2. Why?

This is the ovs-vsctl output

ovs-vsctl show
c204e021-ff03-4372-a0c8-e7fdf643ae55
    Bridge br-int
        fail_mode: secure
        Port "qvo5f4806a7-fc"
            tag: 9
            Interface "qvo5f4806a7-fc"
        Port "tapaea6c53c-29"
            Interface "tapaea6c53c-29"
        Port "tap8ba80ac8-93"
            tag: 27
            Interface "tap8ba80ac8-93"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-3f83a835-10"
            tag: 9
            Interface "qr-3f83a835-10"
                type: internal
        Port "tap0b448724-0d"
            Interface "tap0b448724-0d"
        Port "qvo7fa5eb55-e9"
            tag: 9
            Interface "qvo7fa5eb55-e9"
        Port "qvoaea6c53c-29"
            tag: 27
            Interface "qvoaea6c53c-29"
        Port "qvo0b448724-0d"
            tag: 27
            Interface "qvo0b448724-0d"
        Port "tap1caf66fd-ca"
            tag: 9
            Interface "tap1caf66fd-ca"
                type: internal
        Port "qvo6809345f-8f"
            tag: 9
            Interface "qvo6809345f-8f"
        Port "qvo5971b9b0-c3"
            tag: 9
            Interface "qvo5971b9b0-c3"
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp3s0f1"
            Interface "enp3s0f1"
        Port "qg-260a866b-a4"
            Interface "qg-260a866b-a4"
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.1.3"

This is the ifconfig output

br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.4.250  netmask 255.255.0.0  broadcast 172.16.255.255
        inet6 fe80::6ab5:99ff:fecd:e626  prefixlen 64  scopeid 0x20<link>
        ether 68:b5:99:cd:e6:26  txqueuelen 0  (Ethernet)
        RX packets 49460873  bytes 34634417183 (32.2 GiB)
        RX errors 0  dropped 95878  overruns 0  frame 0
        TX packets 12907733  bytes 2895664946 (2.6 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp3s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 68:b5:99:cd:e6:26  txqueuelen 1000  (Ethernet)
        RX packets 54020070  bytes 40669135629 (37.8 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16869072  bytes 3664815495 (3.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73 ...
2015-01-15 17:36:12 -0500 received badge  Popular Question (source)
2015-01-15 07:43:03 -0500 answered a question cloud-init failure with metadata and multiple network

If I add a router between the external network and the new "my_network_open_stack_name" network, and add the DNS name server to the subnet, it solves the error.

Why do I need to do that?
2015-01-15 06:19:43 -0500 received badge  Student (source)
2015-01-15 03:35:29 -0500 asked a question cloud-init failure with metadata and multiple network

I am using RDO packstack Juno on CentOS 7. With only one private network plus the external network everything is OK, but if I try to add another network I get the following error in cloud-init-output.log.

The network layout in the dashboard is shown in the attached image.

The problematic message in cloud-init:

2015-01-14 16:11:53,787 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: unexpected error ['ConnectionError' object has no attribute 'response']
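The repeated 169.254.169.254 failures mean the guest cannot reach the metadata service over the second network. A minimal, generic reachability probe (a hypothetical helper of my own, not part of cloud-init) that can be run inside the guest:

```python
import socket

def metadata_reachable(host="169.254.169.254", port=80, timeout=2.0):
    """Return True if a TCP connection to the metadata service succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In a guest showing the error above, metadata_reachable() would return False.
```

This only tests TCP connectivity; a False result points at routing/namespace wiring rather than at cloud-init itself.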

Here is the full log

Press any key to continue.
Press any key to continue.
Press any key to continue.
Press any key to continue.
Press any key to continue.
Press any key to continue.
Press any key to continue.
Press any key to continue.
Press any key to continue.
Press any key to continue.
Welcome to CentOS
Starting udev: [  OK  ]
Setting hostname localhost.localdomain:  [  OK  ]
Setting up Logical Volume Management:   2 logical volume(s) in volume group "vg_comp" now active
[  OK  ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/mapper/vg_comp-lv_root 
/dev/mapper/vg_comp-lv_root: clean, 126497/1148304 files, 1714934/4589568 blocks
[/sbin/fsck.ext4 (1) -- /boot] fsck.ext4 -a /dev/vda1 
/dev/vda1: clean, 45/128016 files, 79732/512000 blocks
[  OK  ]
Remounting root filesystem in read-write mode:  [  OK  ]
Mounting local filesystems:  [  OK  ]
Enabling local filesystem quotas:  [  OK  ]
Enabling /etc/fstab swaps:  [  OK  ]
Entering non-interactive startup
Calling the system activity data collector (sadc)... 
Starting monitoring for VG vg_comp:   2 logical volume(s) in volume group "vg_comp" monitored
[  OK  ]
ip6tables: Applying firewall rules: [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  
Determining IP information for eth0... done.
[  OK  ]
Bringing up interface eth1:  
Determining IP information for eth1... done.
[  OK  ]
Bringing up interface eth2:  Device eth2 does not seem to be present, delaying initialization.
[FAILED]
Starting auditd: [  OK  ]
Starting portreserve: [  OK  ]
Starting system logger: [  OK  ]
Starting irqbalance: [  OK  ]
Starting rpcbind: [  OK  ]
Starting system message bus: [  OK  ]
Starting NFS statd: [  OK  ]
Starting cups: [  OK  ]
Mounting filesystems:  [  OK  ]
Starting acpi daemon: [  OK  ]
Starting HAL daemon: [  OK  ]
Retrigger failed udev events[  OK  ]
Adding udev persistent rules[  OK  ]
Loading autofs4: [  OK  ]
Starting automount: [  OK  ]
Enabling Bluetooth devices:
Starting cloud-init: Cloud-init v. 0.7.5 running 'init-local' at Wed, 14 Jan 2015 16:11:37 +0000. Up 20.38 seconds.
Starting cloud-init: Cloud-init v. 0.7.5 running 'init' at Wed, 14 Jan 2015 16:11:38 +0000. Up 21.06 seconds.
ci-info: ++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++
ci-info: +--------+------+-------------+---------------+-------------------+
ci-info: | Device |  Up  |   Address   |      Mask     |     Hw-Address    |
ci-info: +--------+------+-------------+---------------+-------------------+
ci-info: |   lo   | True |  127.0.0.1  |   255.0.0.0   |         .         |
ci-info: |  eth1  | True |   1.2.3.2   | 255.255.255.0 | fa:16:3e:aa:86:5d |
ci-info: |  eth0  | True | 10.67.79.31 | 255.255.255.0 | fa:16:3e:8e:58:18 |
ci-info: +--------+------+-------------+---------------+-------------------+
ci-info: +++++++++++++++++++++++++++++Route info++++++++++++++++++++++++++++++
ci-info: +-------+-------------+---------+---------------+-----------+-------+
ci-info: | Route | Destination | Gateway |    Genmask    | Interface | Flags |
ci-info: +-------+-------------+---------+---------------+-----------+-------+
ci-info: |   0   |   1.2.3.0   | 0.0.0.0 | 255.255.255.0 |    eth1   |   U   |
ci-info: |   1   |  10.67.79.0 | 0.0.0 ...
2015-01-15 03:27:11 -0500 commented question Error: Failed to launch instance "ff": Please try again later [Error: No valid host was found. ].

I'm not sure, but I think the title is the message I saw in the dashboard popup, and the log is from the log file. And yes, I reinstalled packstack all-in-one, so I removed it and then reinstalled.

2015-01-14 01:26:32 -0500 marked best answer Error: Failed to launch instance "ff": Please try again later [Error: No valid host was found. ].

I reinstalled packstack all-in-one: I ran the uninstall script, searched the internet, and found that I need to run:

yum remove mysql mysql-server
rm -rf /var/lib/mysql
nova-manage db sync

I am still getting the same error in /var/log/nova/nova-compute.log:

2015-01-07 17:25:11.142 22438 TRACE nova.compute.manager [instance: 7c5483ff-ad55-4d99-993d-58587b357778]
2015-01-07 17:25:19.988 22438 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2015-01-07 17:25:20.336 22438 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 64256, total allocated virtual ram (MB): 1024
2015-01-07 17:25:20.337 22438 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 13
2015-01-07 17:25:20.337 22438 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 16, total allocated vcpus: 0
2015-01-07 17:25:20.337 22438 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2015-01-07 17:25:20.337 22438 INFO nova.compute.resource_tracker [-] Compute_service record updated for localhost.localdomain:localhost.localdomain
2015-01-07 17:25:21.765 22438 AUDIT nova.compute.manager [req-90c8e054-3f72-439a-a274-a940bab1dba5 None] [instance: 7c5483ff-ad55-4d99-993d-58587b357778] Terminating instance
2015-01-07 17:25:21.775 22438 WARNING nova.virt.libvirt.driver [-] [instance: 7c5483ff-ad55-4d99-993d-58587b357778] During wait destroy, instance disappeared.
2015-01-07 17:25:22.025 22438 INFO nova.virt.libvirt.driver [req-90c8e054-3f72-439a-a274-a940bab1dba5 None] [instance: 7c5483ff-ad55-4d99-993d-58587b357778] Deletion of /var/lib/nova/instances/7c5483ff-ad55-4d99-993d-58587b357778_del complete
2015-01-07 17:25:22.234 22438 INFO nova.scheduler.client.report [req-90c8e054-3f72-439a-a274-a940bab1dba5 None] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain')
2015-01-07 17:25:44.601 22438 AUDIT nova.compute.manager [req-07f3dfb4-f087-4653-b5da-2a4a2a0f160d None] [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] Starting instance...
2015-01-07 17:25:44.680 22438 AUDIT nova.compute.claims [-] [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] Attempting claim: memory 2048 MB, disk 20 GB
2015-01-07 17:25:44.680 22438 AUDIT nova.compute.claims [-] [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] Total memory: 64256 MB, used: 512.00 MB
2015-01-07 17:25:44.680 22438 AUDIT nova.compute.claims [-] [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] memory limit: 96384.00 MB, free: 95872.00 MB
2015-01-07 17:25:44.681 22438 AUDIT nova.compute.claims [-] [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] Total disk: 14 GB, used: 0.00 GB
2015-01-07 17:25:44.681 22438 AUDIT nova.compute.claims [-] [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] disk limit not specified, defaulting to unlimited
2015-01-07 17:25:44.692 22438 AUDIT nova.compute.claims [-] [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] Claim successful
2015-01-07 17:25:44.789 22438 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain')
2015-01-07 17:25:44.888 22438 INFO nova.scheduler.client.report [-] Compute_service record updated for ('localhost.localdomain', 'localhost.localdomain')
2015-01-07 17:25:44.998 22438 ERROR nova.network.neutronv2.api [-] [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] Neutron error creating port on network 6bcaec62-e9d3-42f4-a68c-0d4fdf680458
2015-01-07 17:25:44.998 22438 TRACE nova.network.neutronv2.api [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c] Traceback (most recent call last):
2015-01-07 17:25:44.998 22438 TRACE nova.network.neutronv2.api [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 214, in _create_port
2015-01-07 17:25:44.998 22438 TRACE nova.network.neutronv2.api [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c]     port_id = port_client.create_port(port_req_body)['port']['id']
2015-01-07 17:25:44.998 22438 TRACE nova.network.neutronv2.api [instance: 4bf37282-0e4e-4506-969d-a610c4c9654c]   File "/usr/lib/python2.7 ...
2015-01-14 01:26:32 -0500 received badge  Self-Learner (source)
2015-01-14 01:26:32 -0500 received badge  Teacher (source)