OpenDaylight integration with OpenStack Queens release
Hi, I was trying to integrate ODL with OpenStack (Queens) using kolla-ansible. I did the following configuration in globals.yml:
neutron_plugin_agent: "opendaylight"
enable_opendaylight: "yes"
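For reference, a minimal sketch of how such a change is typically rolled out with kolla-ansible (the inventory file name multinode is a placeholder for my inventory):
kolla-ansible -i ./multinode deploy         # initial deployment
kolla-ansible -i ./multinode reconfigure    # after editing /etc/kolla/globals.yml on an existing deployment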
Also, I had to apply the following change to the kolla-ansible (stable/queens) code to bring my setup up: https://review.openstack.org/#/c/571441/
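In case it helps to reproduce, a sketch of applying that review to a local checkout, assuming the usual Gerrit change ref layout (N is a placeholder for the patchset number):
cd kolla-ansible    # existing stable/queens checkout
git fetch https://review.openstack.org/openstack/kolla-ansible refs/changes/41/571441/N
git cherry-pick FETCH_HEAD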
After successful deployment, ml2_conf.ini looks like this:
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = opendaylight_v2
extension_drivers = port_security
[ml2_odl]
url = http://172.16.141.167:8088/controller/nb/v2/neutron
username = admin
password = gOI3BfzrDTNM2y7WFOkQavc6Z8EUe6BLhSBirgJN
port_binding_controller = pseudo-agentdb-binding
[ml2_type_vlan]
network_vlan_ranges =
[ml2_type_flat]
flat_networks = physnet1
[ml2_type_vxlan]
vni_ranges = 1:1000
vxlan_group = 239.1.1.1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true
[ovs]
bridge_mappings = physnet1:br-ex
datapath_type = system
ovsdb_connection = tcp:127.0.0.1:6640
local_ip = 172.16.141.39
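Since port_binding_controller is set to pseudo-agentdb-binding, port binding relies on host configuration entries stored in ODL. A hedged way to check that the compute and network hosts have registered their hostconfig, reusing the NB URL and credentials from the [ml2_odl] section above:
curl -u admin:gOI3BfzrDTNM2y7WFOkQavc6Z8EUe6BLhSBirgJN http://172.16.141.167:8088/controller/nb/v2/neutron/hostconfigs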
Output of docker exec openvswitch_vswitchd ovs-vsctl show on the network node:
(openvswitch-vswitchd)[root@CONTROLLER397662 /]# ovs-vsctl show
4369a98a-4108-415d-afb6-137a1d729056
    Manager "tcp:172.16.141.39:6641"
        is_connected: true
    Bridge br-int
        Controller "tcp:172.16.141.39:6653"
            is_connected: true
        fail_mode: secure
        Port "tap482b5942-2e"
            tag: 4095
            Interface "tap482b5942-2e"
                type: internal
        Port br-ex-patch
            Interface br-ex-patch
                type: patch
                options: {peer=br-ex-int-patch}
        Port "tapa79c00cd-de"
            tag: 4095
            Interface "tapa79c00cd-de"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tun4475255a1c1"
            Interface "tun4475255a1c1"
                type: vxlan
                options: {key=flow, local_ip="172.16.141.39", remote_ip="172.16.141.36"}
    Bridge br-ex
        Port "eno2"
            Interface "eno2"
        Port br-ex
            Interface br-ex
                type: internal
        Port br-ex-int-patch
            Interface br-ex-int-patch
                type: patch
                options: {peer=br-ex-patch}
Output of docker exec openvswitch_vswitchd ovs-vsctl show on the compute node:
(openvswitch-vswitchd)[root@COMPUTE362635 /]# ovs-vsctl show
9903fa01-adc8-4c8f-88b1-c5c779de46ae
    Manager "tcp:172.16.141.39:6641"
        is_connected: true
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eno2"
            Interface "eno2"
        Port br-ex-int-patch
            Interface br-ex-int-patch
                type: patch
                options: {peer=br-ex-patch}
    Bridge br-int
        Controller "tcp:172.16.141.39:6653"
            is_connected: true
        fail_mode: secure
        Port "tun2e2c38b74f9"
            Interface "tun2e2c38b74f9"
                type: vxlan
                options: {key=flow, local_ip="172.16.141.36", remote_ip="172.16.141.39"}
        Port br-ex-patch
            Interface br-ex-patch
                type: patch
                options: {peer=br-ex-int-patch}
        Port br-int
            Interface br-int
                type: internal
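In case it is relevant: since public1 is a flat network on physnet1, the physnet1 mapping exposed to ODL can also be inspected on each node. With pseudo-agentdb-binding this usually comes from the Open_vSwitch table rather than from the [ovs] bridge_mappings option, and the exact key (for example provider_mappings=physnet1:br-ex) depends on the ODL/NetVirt version, so treat this only as a sketch:
docker exec openvswitch_vswitchd ovs-vsctl get Open_vSwitch . external_ids
docker exec openvswitch_vswitchd ovs-vsctl get Open_vSwitch . other_config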
Also, I tested the ODL integration with OpenStack using the following command:
curl -u admin:gOI3BfzrDTNM2y7WFOkQavc6Z8EUe6BLhSBirgJN http://172.16.141.167:8088/controller/nb/v2/neutron/networks
Output:
{
  "networks" : [ {
    "id" : "bd8db3a8-2b30-4083-a8b3-b3fd46401142",
    "tenant_id" : "bd8db3a82b304083a8b3b3fd46401142",
    "project_id" : "bd8db3a8-2b30-4083-a8b3-b3fd46401142",
    "name" : "Sync Canary Network",
    "admin_state_up" : false,
    "status" : "ACTIVE",
    "shared" : false,
    "router:external" : false,
    "provider:network_type" : "flat",
    "segments" : [ ]
  }, {
    "id" : "a280dcec-3402-46f6-9d95-39868bd36eda",
    "tenant_id" : "3a53dc2483564d8c9d993ead57da1362",
    "project_id" : "3a53dc2483564d8c9d993ead57da1362",
    "revision_number" : 2,
    "name" : "demo-net",
    "admin_state_up" : true,
    "status" : "ACTIVE",
    "shared" : false,
    "router:external" : false,
    "provider:network_type" : "vxlan",
    "provider:segmentation_id" : "25",
    "segments" : [ ]
  }, {
    "id" : "af3f87f8-ba87-4972-834b-81eb69c77bd0",
    "tenant_id" : "3a53dc2483564d8c9d993ead57da1362",
    "project_id" : "3a53dc2483564d8c9d993ead57da1362",
    "revision_number" : 5,
    "name" : "public1",
    "admin_state_up" : true,
    "status" : "ACTIVE",
    "shared" : false,
    "router:external" : true,
    "provider:network_type" : "flat",
    "provider:physical_network" : "physnet1",
    "segments" : [ ]
  } ]
}
But when I launch CirrOS instances on the public network, my VMs do not get an IP from DHCP, although VMs on the private network are working fine.
Please help regarding this issue.
Regards, Mansi
What's the output of docker exec openvswitch_vswitchd ovs-vsctl show on the compute and network nodes?
Hi, thanks for the response. I have updated my question with the output of docker exec openvswitch_vswitchd ovs-vsctl show on the compute and network nodes.
Just to make sure: the public network may have been created with DHCP disabled, so ensure it is enabled.
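For example, something like the following (the subnet name public1-subnet is a placeholder; assumes admin credentials are sourced):
openstack subnet show public1-subnet -c enable_dhcp
openstack subnet set --dhcp public1-subnet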
DHCP is enabled on the public network. Is my configuration correct? The ODL version is 0.4.3.Carbon. Private-network VMs work fine with this configuration; only public-network VMs are not getting an IP.
Also, I am not able to ping the external gateway from the DHCP namespace. After debugging, I found that inside the DHCP namespace the MAC address of the external gateway is different from its actual MAC address.
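For reference, a sketch of how this can be checked (hedged; the network UUID and gateway IP are placeholders, and on a kolla deployment the namespace lives inside the neutron_dhcp_agent container):
docker exec neutron_dhcp_agent ip netns list
docker exec neutron_dhcp_agent ip netns exec qdhcp-<public-network-uuid> ip neigh show
docker exec neutron_dhcp_agent ip netns exec qdhcp-<public-network-uuid> ping -c 3 <external-gateway-ip>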