
Yong's profile - activity

2015-05-13 01:58:19 -0600 received badge  Necromancer (source)
2015-05-13 01:58:19 -0600 received badge  Teacher (source)
2015-02-05 13:29:08 -0600 received badge  Nice Question (source)
2015-02-04 22:29:25 -0600 answered a question vxlan tunnels messed up after L2population enabled - RDO Juno

Hello,

I met the same problem, but the difference is that I use a GRE tunnel.

Once l2_population is enabled, all of the nodes lose the tunnel connection information in the "br-tun" OVS bridge; disabling l2_population brings all of the tunnel info back. What did I miss?

I have added "arp_responder = True", but it still doesn't work.

[agent]
arp_responder = True
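
For reference, these are the quick checks I use to see whether the GRE ports and flows come back after toggling the setting (the log path assumes RDO's default layout):

ovs-vsctl list-ports br-tun                        # expect one gre-<hex> port per remote peer
ovs-ofctl dump-flows br-tun | wc -l                # the flow count drops sharply when tunnels are gone
tail -n 50 /var/log/neutron/openvswitch-agent.log  # look for tunnel_sync or RPC errors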

[root@compute1 ~]# ovs-vsctl show

4e2b9403-3251-481a-86eb-2a9272c9cf09
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
        Port "fg-a67ad606-43"
            Interface "fg-a67ad606-43"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-85097111-fe"
            tag: 1
            Interface "qr-85097111-fe"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvofb7c2370-ee"
            tag: 1
            Interface "qvofb7c2370-ee"
    ovs_version: "2.3.1-git3282e51"

[root@network0 ~]# ovs-vsctl show

d21e3790-d015-44e4-ab97-8ee91beda7b2
    Bridge br-ex
        Port "qg-4cf25f12-ed"
            Interface "qg-4cf25f12-ed"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-85097111-fe"
            tag: 1
            Interface "qr-85097111-fe"
                type: internal
        Port "tap6dac23c1-16"
            tag: 1
            Interface "tap6dac23c1-16"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "sg-faa9cf1a-54"
            tag: 1
            Interface "sg-faa9cf1a-54"
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.3.1-git3282e51"

[root@compute1 ~]# cat /etc/neutron/neutron.conf |grep -v ^$|grep -v ^#

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller0
rabbit_password = openstack
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
dvr_base_mac = fa:16:3f:00:00:00
router_distributed = True
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller0:5000/v2.0
identity_uri = http://controller0:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
[database]
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

[root@compute1 ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep -v ^$|grep -v ^#

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.1.31
enable_tunneling = True
[agent]
tunnel_types = gre
l2_population = True
enable_distributed_routing = True
arp_responder = True

[root@compute1 ~]# cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini |grep -v ^$|grep -v ^#

[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.1.31
[agent]
enable_distributed_routing = True
arp_responder = True
l2_population = True
[securitygroup]
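
For completeness, after editing these files I restart the agent; the unit name below assumes RDO Juno packaging on CentOS 7:

systemctl restart neutron-openvswitch-agent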

[root@compute1 ~]# cat /etc/neutron/l3_agent.ini |grep -v ^$|grep -v ^#

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
verbose = True
debug = True
agent_mode = dvr

[root@network0 ~]# cat /etc/neutron/neutron.conf |grep -v ^$|grep -v ^#

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller0
rabbit_password = openstack
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
dvr_base_mac = fa:16:3f:00:00:00
router_distributed = True
debug = True
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller0:5000/v2 ...
(more)
2015-02-04 21:22:00 -0600 answered a question Juno dvr associate floating ip reported "No ports available"

I met the same problem and fixed it following kejunyang's suggestion.

2015-02-04 21:17:32 -0600 received badge  Scholar (source)
2015-02-04 21:17:18 -0600 answered a question How can I get a kvm rpm package which supports ceph rbd for CentOS 7 or RHEL 7?

Thanks all! Here is the summary for this issue.

Solution 1: rebuild the qemu-kvm RPM

First, install the development and yum-utils packages:

yum groupinstall -y "Development Tools"
yum install -y yum-utils rpm-build

Download the qemu source RPM locally and prepare the build tree:

yumdownloader --source qemu-kvm
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
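
A quick sanity check that the macro took effect (rpm evaluates ~/.rpmmacros here):

rpm --eval '%_topdir'    # should print $HOME/rpmbuild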

Extract the source RPM:

rpm -ivh qemu-kvm-*
cd ~/rpmbuild/SPECS

vi qemu-kvm.spec

Modify the spec: find the '--block-drv-rw-whitelist' parameter and append 'rbd' to the end, then rebuild from source.
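
For the spec edit itself, a sed one-liner like the sketch below can do it; the exact whitelist contents differ between qemu-kvm releases, so back up the spec and verify the result:

cp qemu-kvm.spec qemu-kvm.spec.bak
sed -i "s/\(--block-drv-rw-whitelist=[^ '\"]*\)/\1,rbd/" qemu-kvm.spec
grep -- --block-drv-rw-whitelist qemu-kvm.spec    # confirm ",rbd" was appended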

rpmbuild -bb qemu-kvm.spec

If you hit package dependency errors, install the missing build dependencies with yum and build again. After a successful build, the packages are in ~/rpmbuild/RPMS/x86_64/.
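
A shortcut worth knowing: yum-utils also ships yum-builddep, which resolves a spec's BuildRequires in one step (assuming your version accepts a spec file; otherwise install the reported packages with plain yum):

yum-builddep ~/rpmbuild/SPECS/qemu-kvm.spec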

Then delete the old packages (built without rbd support) like this:

rpm -e --nodeps qemu-img
rpm -e --nodeps qemu-kvm
rpm -e --nodeps qemu-kvm-tools
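
Then install the rebuilt packages and check that rbd shows up as a supported format (paths assume the rpmbuild tree from above):

rpm -ivh ~/rpmbuild/RPMS/x86_64/qemu-img-*.rpm ~/rpmbuild/RPMS/x86_64/qemu-kvm-*.rpm
qemu-img --help | grep rbd    # the "Supported formats" line should now include rbd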

Solution 2: use the Fedora RPMs directly.

The qemu-kvm RPMs on Fedora 20 already support the rbd function, so I think you can use them as well:

http://fedora.xfree.com.ar/linux/upda...

2015-01-28 07:12:09 -0600 received badge  Famous Question (source)
2015-01-28 00:30:31 -0600 received badge  Notable Question (source)
2015-01-27 23:31:02 -0600 commented answer How can I get a kvm rpm package which supports ceph rbd for CentOS 7 or RHEL 7?

Hi Omar Munoz,

Thanks for your reply! I see that this still does not include the qemu-img/qemu-kvm/kvm-tools packages with rbd support enabled.

Currently I have to rebuild the packages from http://vault.centos.org/7.0.1406/upda...

Do you have another good channel to get the official RDO-supported rpms?

2015-01-27 23:30:27 -0600 answered a question How can I get a kvm rpm package which supports ceph rbd for CentOS 7 or RHEL 7?

Hi Omar Munoz,

Thanks for your reply! I see that this still does not include the qemu-img/qemu-kvm/kvm-tools packages with rbd support enabled.

Currently I have to rebuild the packages from http://vault.centos.org/7.0.1406/upda...

Do you have another good channel to get the official RDO-supported rpms?

BRs/Yongluo

2015-01-27 20:59:31 -0600 received badge  Popular Question (source)
2015-01-26 23:32:14 -0600 asked a question How can I get a kvm rpm package which supports ceph rbd for CentOS 7 or RHEL 7?

Hello,

The kvm packages in the default RDO yum repository for Juno do not support RBD. Previously we could get the packages from the ceph-extras repository: http://ceph.com/packages/ceph-extras/... but there are no CentOS 7 or RHEL 7 packages in that repository, so where can I get them?

Do I have to compile them manually?

BRs/Yong

2014-08-26 03:42:37 -0600 received badge  Famous Question (source)
2014-08-05 20:52:31 -0600 commented question no db table created for neutron

Hi dbaxps, thanks for your post!!

I followed the configuration from your post. Unfortunately, I still got the same error. My Neutron configuration is attached; can you help me take a look at what I missed?

My deployment:

Three nodes (controller0, network0, compute0); neutron-server is installed on the controller0 node, while neutron-dhcp-agent, neutron-openvswitch-agent, neutron-l3-agent, and neutron-metadata-agent are installed on the network0 node.

Neutron configuration files:

vi /etc/neutron/neutron.conf 

[DEFAULT]
verbose = True
debug = True 
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = controller0
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller0:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 756f65748ca34f0689dfea1d03d79da0
nova_admin_password = nova
nova_admin_auth_url = http://controller0:35357/v2.0
core_plugin = ml2 
service_plugins = router
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller0:5000
auth_host = controller0
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
[database]
connection = mysql://neutron:openstack@controller0/neutron
[service_providers]
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

vi /etc/neutron/plugin.ini

[ml2]
type_drivers = gre ...
(more)
2014-08-05 20:41:33 -0600 answered a question How to remove a physical compute node from a production OpenStack cluster safely

Thanks, I will try.

2014-08-05 09:51:11 -0600 commented question Issue While Creating Neutron Database Schema

Hi,

I followed the installation guide (http://docs.openstack.org/icehouse/in...). I have three nodes (controller0, network0, compute0) and I hit the same issue. How did you fix it?

vi /etc/neutron/neutron.conf

[DEFAULT]
verbose = True
debug = True 
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = controller0
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller0:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 756f65748ca34f0689dfea1d03d79da0
nova_admin_password = nova
nova_admin_auth_url = http://controller0:35357/v2.0
core_plugin = ml2 
service_plugins = router
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller0:5000
auth_host = controller0
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
[database]
connection = mysql://neutron:openstack@controller0/neutron
[service_providers]
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

vi /etc/neutron/plugin.ini

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
2014-08-05 07:51:34 -0600 received badge  Notable Question (source)
2014-07-25 08:56:18 -0600 received badge  Enthusiast
2014-07-24 07:43:28 -0600 received badge  Famous Question (source)
2014-07-24 07:43:28 -0600 received badge  Notable Question (source)
2014-07-23 23:46:34 -0600 received badge  Popular Question (source)
2014-07-23 10:59:26 -0600 asked a question How to remove a physical compute node from a production OpenStack cluster safely

Hi,

I have set up an OpenStack (Havana) Dev/QA cluster in my company, but some of the hardware must be handed over to another organization, so the VMs running on those servers must be migrated to other compute nodes. I think it should take the steps below:

1. Disable the compute service on the compute node that will be removed.

2. Migrate all of the VMs off this server.

3. Remove the host record from the nova database.

I am not sure this is the right approach; I want to know what you usually do. Can you give me your steps?
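
As a rough sketch, these are the commands I have in mind (assuming the Havana nova CLI; I don't see a service-delete command in this release, so step 3 looks like a manual database edit):

# 1. stop the scheduler from placing new VMs on the node
nova service-disable compute3 nova-compute

# 2. live-migrate each VM off the node (requires shared storage)
nova live-migration <instance-id> <target-host>

# 3. after stopping nova-compute on the node, remove its rows from the
#    nova database (services and compute_nodes tables)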

BRs/yongluo

2014-07-23 09:53:00 -0600 received badge  Autobiographer
2014-07-21 00:54:25 -0600 received badge  Popular Question (source)
2014-07-20 05:38:22 -0600 asked a question no db table created for neutron

Hi all,

I installed Icehouse following the installation guideline, but the Neutron part failed.

OS: CentOS 6.5; RDO: Icehouse; Neutron: ML2 + Open vSwitch + GRE

When I run the command below:

[root@controller0 ~]# neutron net-create ext-net --shared --router:external=True
Request Failed: internal server error while processing your request.

See /var/log/neutron/server.log:

2014-07-20 17:54:46.683 13021 INFO neutron.api.extensions [-] Loading extension file: vpnaas.pyc
2014-07-20 17:54:46.685 13021 INFO neutron.api.extensions [-] Loading extension file: vpnaas.pyo
2014-07-20 17:54:46.687 13021 INFO neutron.api.extensions [-] Extension 'agent' provides no backward compatibility map for extended attributes
2014-07-20 17:54:46.688 13021 INFO neutron.api.extensions [-] Extension 'Allowed Address Pairs' provides no backward compatibility map for extended attributes
2014-07-20 17:54:46.691 13021 INFO neutron.api.extensions [-] Extension 'Neutron Extra Route' provides no backward compatibility map for extended attributes
2014-07-20 17:54:46.748 13021 INFO keystoneclient.middleware.auth_token [-] Starting keystone auth_token middleware
2014-07-20 17:54:46.748 13021 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.
2014-07-20 17:54:46.758 13021 INFO keystoneclient.middleware.auth_token [-] Using /tmp/keystone-signing-5vpf7S as cache directory for signing certificate
2014-07-20 17:54:46.788 13021 INFO neutron.service [-] Neutron service started, listening on 0.0.0.0:9696
2014-07-20 17:54:46.794 13021 INFO neutron.wsgi [-] (13021) wsgi starting up on http://0.0.0.0:9696/

2014-07-20 17:54:46.813 13021 INFO neutron.openstack.common.rpc.impl_qpid [-] Connected to AMQP server on controller0:5672
2014-07-20 17:55:52.339 13021 INFO neutron.wsgi [-] (13021) accepted ('172.16.0.10', 45226)

2014-07-20 17:55:52.361 13021 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): controller0
2014-07-20 17:55:52.563 13021 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): controller0
2014-07-20 17:55:52.655 13021 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): controller0
2014-07-20 17:55:52.665 13021 INFO keystoneclient.middleware.auth_token [-] Auth Token confirmed use of v2.0 apis
2014-07-20 17:55:52.666 13021 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): controller0
2014-07-20 17:55:52.679 13021 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): controller0
2014-07-20 17:55:52.965 13021 INFO neutron.openstack.common.rpc.impl_qpid [req-07bec7d5-1920-4edc-b706-46d5c3cc2b66 None] Connected to AMQP server on controller0:5672
2014-07-20 17:55:52.990 13021 ERROR neutron.api.v2.resource [req-07bec7d5-1920-4edc-b706-46d5c3cc2b66 None] create failed
2014-07-20 17:55:52.990 13021 TRACE neutron.api.v2.resource Traceback (most recent call last):
2014-07-20 17:55:52.990 13021 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 87, in resource
2014-07-20 17:55:52.990 13021 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2014-07-20 17:55:52.990 13021 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 376, in create
2014-07-20 17:55:52.990 13021 ...
(more)
2014-03-07 13:14:14 -0600 received badge  Famous Question (source)
2014-02-14 08:50:36 -0600 received badge  Notable Question (source)
2014-02-14 08:50:36 -0600 received badge  Popular Question (source)
2014-02-11 12:58:38 -0600 received badge  Famous Question (source)
2014-01-17 02:48:13 -0600 asked a question openvswitch hang

Hi guys,

I deployed multi-node OpenStack with Neutron (Open vSwitch + GRE). It worked fine, but then I hit an issue where the network cannot connect because Open vSwitch hangs. Only one of the compute servers has this problem; the others work fine.

I did the following checks: all of the Neutron agents look fine per "neutron agent-list"; all of the compute nodes look fine per "nova-manage service list"; and the Open vSwitch service status also looks fine per "/etc/init.d/openvswitch status". So I tried to restart Open vSwitch, but it failed:

[root@compute3 ~]# /etc/init.d/openvswitch status
ovsdb-server is running with pid 9481
ovs-vswitchd is running with pid 9494
[root@compute3 ~]# /etc/init.d/openvswitch restart
Killing ovs-vswitchd (9494)                                [  OK  ]
Killing ovs-vswitchd (9494) with SIGKILL                   [  OK  ]
Killing ovs-vswitchd (9494) failed ... failed!
Killing ovsdb-server (9481)                                [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
ovs-vswitchd is already running.
Enabling remote OVSDB managers                             [  OK  ]

I tried to kill the process, but it still failed:

[root@compute3 log]# ps -ef|grep vswitchd
root      9494     1 11  2013 ?        2-21:45:29 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
root      9495  9494  0  2013 ?        00:00:01 [ovs-vswitchd] <defunct>
root     17044 11344  0 16:22 pts/13   00:00:00 gdb /usr/sbin/ovs-vswitchd 9494
root     18292 11344  0 16:45 pts/13   00:00:00 grep vswitchd
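
One check I know for this situation: when even SIGKILL cannot remove a process, it is usually stuck in uninterruptible sleep ("D" state), often waiting inside the kernel; the state and kernel-side stack can be inspected like this:

ps -o pid,stat,wchan:30,cmd -p 9494    # "D" in STAT means uninterruptible sleep
cat /proc/9494/stack                   # kernel call trace of the stuck task (root only)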

OpenStack version and OS information:

OpenStack package: RDO release, rdo-release-havana-6
OS: Red Hat 6.4
Kernel: Linux compute3 2.6.32-431.1.2.0.1.el6.x86_64 #1 SMP Fri Dec 13 13:06:13 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Open vSwitch: ovs-vsctl (Open vSwitch) 1.11.0
2014-01-17 02:00:44 -0600 received badge  Notable Question (source)
2013-12-13 01:26:15 -0600 received badge  Student (source)
2013-12-13 01:24:41 -0600 received badge  Popular Question (source)
2013-12-07 21:51:18 -0600 received badge  Editor (source)
2013-12-07 21:49:58 -0600 asked a question compute node instance can't access other instance

I have installed Havana with Neutron as the network service (Open vSwitch + GRE), with two compute nodes and a controller node. One compute node works fine, but the other does not.

I can create an instance on that compute node, but the instance can't reach the outside. Strangely, instances hosted on the other compute node can reach it. For example: instance a (IP 192.168.1.6) is hosted on compute node A, and instance b (IP 192.168.1.24) is hosted on compute node B; instance a can ping b, but not the other way around:

[root@host-192-168-1-6 ~]# ping 192.168.1.24
PING 192.168.1.24 (192.168.1.24) 56(84) bytes of data.
64 bytes from 192.168.1.24: icmp_seq=1 ttl=64 time=1.29 ms
64 bytes from 192.168.1.24: icmp_seq=2 ttl=64 time=0.619 ms
64 bytes from 192.168.1.24: icmp_seq=3 ttl=64 time=0.747 ms

[root@host-192-168-1-24 ~]# ping 192.168.1.6
From 192.168.1.24 icmp_seq=2 Destination Host Unreachable
From 192.168.1.24 icmp_seq=3 Destination Host Unreachable
From 192.168.1.24 icmp_seq=4 Destination Host Unreachable
From 192.168.1.24 icmp_seq=6 Destination Host Unreachable

I have some clues about this: it should be related to an iptables rule or the firewall, so I manually added the iptables rule below on node B, and it works fine now.

#iptables -t filter -I FORWARD -i qbr+ -o qbr+ -j ACCEPT

But once the firewall is refreshed after an instance is created or deleted, the network stops working again. What is happening? It seems the iptables rules are cleared and rolled back after the refresh.
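
A simple way to confirm this theory is to snapshot the FORWARD chain before and after creating an instance and diff the output:

iptables -L FORWARD -v --line-numbers > /tmp/fwd.before
# ... create or delete an instance, wait for the agent refresh ...
iptables -L FORWARD -v --line-numbers > /tmp/fwd.after
diff /tmp/fwd.before /tmp/fwd.after    # the manual qbr ACCEPT rule disappears here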

Here is my deployment environment:

vi /etc/nova/nova.conf

...
security_group_api=neutron
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
...

vi /etc/neutron/plugin.ini

...
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = 192.168.0.10

[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
...