DeepakRK's profile - activity

2017-09-14 21:30:49 -0500 received badge  Famous Question (source)
2017-06-22 04:05:31 -0500 received badge  Notable Question (source)
2017-06-21 07:19:11 -0500 received badge  Popular Question (source)
2017-06-08 10:54:06 -0500 received badge  Notable Question (source)
2017-05-15 08:05:30 -0500 received badge  Popular Question (source)
2017-05-05 04:37:47 -0500 asked a question Issues while configuring Devstack (Ocata) to use nova-docker

Installed Docker on the machine.

sudo apt-get install -y lxc-docker

source /etc/bash_completion.d/docker

sudo bash -c "echo DOCKER_OPTS=\'-G ubuntu\' >> /etc/default/docker"

sudo restart docker

docker version

docker run -i -t ubuntu /bin/bash (launched successfully)

Used the following local.conf for devstack

[[local|localrc]]

ADMIN_PASSWORD=ADMIN

HOST_IP=192.168.160.123

MYSQL_PASSWORD=SQL

MYSQL_HOST=$HOST_IP

RABBIT_PASSWORD=RABBIT

SERVICE_PASSWORD=$ADMIN_PASSWORD

RABBIT_HOST=$HOST_IP

SERVICE_HOST=$HOST_IP

MULTI_HOST=True

LOGDAYS=2

VIRT_DRIVER=novadocker.virt.docker.DockerDriver

# Enabled Services

enable_service q-svc

enable_service q-agt

enable_service q-dhcp

enable_service q-l3

enable_service q-meta

enable_service neutron

enable_service placement-api

enable_service placement-client

enable_plugin heat https://git.openstack.org/openstack/heat stable/ocata

TACKERHORIZON_BRANCH=stable/ocata

enable_plugin tacker https://git.openstack.org/openstack/tacker stable/ocata

enable_plugin networking-sfc https://git.openstack.org/openstack/networking-sfc stable/ocata

enable_plugin nova-docker https://git.openstack.org/openstack/nova-docker stable/ocata

# Network

NEUTRON_CREATE_INITIAL_NETWORKS=False

# Nova

disable_service n-net

compute_driver=novadocker.virt.docker.DockerDriver

# Neutron - OVS, VxLAN, and ML2

Q_PLUGIN=ml2

Q_AGENT=openvswitch

Q_USE_DEBUG_COMMAND=True

Q_L3_ENABLED=True

Q_L3_ROUTER_PER_TENANT=True

ENABLE_TENANT_TUNNELS=True

TENANT_TUNNEL_RANGE=1:1000

Q_ML2_TENANT_NETWORK_TYPE=vxlan

Q_USE_PROVIDERNET_FOR_PUBLIC=True

OVS_BRIDGE_MAPPINGS=public:br-ex

OVS_ENABLE_TUNNELING=True

PUBLIC_BRIDGE=br-ex

Q_DVR_MODE=dvr_snat

# Logging

SCREEN_LOGDIR=/opt/stack/logs

# Glance

container_formats=ami,ari,aki,bare,ovf,ova,docker

# Tempest

disable_service tempest

==================================================================================================

But stack.sh fails with the following error:

+functions-common:service_check:1663 for service in '$failures'
++functions-common:service_check:1664 basename /opt/stack/status/stack/n-cpu.failure
+functions-common:service_check:1664 service=n-cpu.failure
+functions-common:service_check:1665 service=n-cpu
+functions-common:service_check:1666 echo 'Error: Service n-cpu is not running'
Error: Service n-cpu is not running
+functions-common:service_check:1669 '[' -n /opt/stack/status/stack/n-cpu.failure ']'
+functions-common:service_check:1670 die 1670 'More details about the above errors can be found with screen'
+functions-common:die:186 local exitcode=0
+functions-common:die:187 set +o xtrace

n-cpu log:

2017-05-05 14:22:32.950 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge
2017-05-05 14:22:32.951 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2017-05-05 14:22:33.058 WARNING oslo_config.cfg [-] Option "use_neutron" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
2017-05-05 14:22:33.075 DEBUG oslo_policy.policy [req-704d622e-4b24-4ebd-be0f-79735000a25e None None] The policy file policy.json could not be found. from (pid=122120) load_rules /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:520
2017-05-05 14:22:33.088 ERROR nova.virt.driver [req-704d622e-4b24-4ebd-be0f-79735000a25e None None] Compute driver option required, but not specified
n-cpu failed to start
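The last ERROR line points at the immediate cause: nova-compute started without a compute_driver set. With VIRT_DRIVER=novadocker.virt.docker.DockerDriver, the nova-docker plugin is expected to write this option into Nova's configuration; as a manual check or workaround, the option would look like the following (the file path is the conventional one, an assumption on my part, since DevStack may generate a separate nova-cpu config):

```ini
# /etc/nova/nova.conf (or the compute config DevStack generates)
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver
```

If the option is missing there, the plugin's post-config phase likely did not run, which is worth checking in the stack.sh output.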

=======================================================================================

Can anyone guide me on how to integrate Docker into the Devstack (Ocata) environment? Could anyone also share a working local.conf that uses Docker with Ocata?

(I have used the following as references, but those are for earlier releases like Kilo ... (more)

2017-03-09 06:57:42 -0500 asked a question HTTP requests to public sites not going through SFC

Following are the details of the source VM, service VMs, and destination VM that I used for service function chaining. (I followed the steps in http://blog.cafarelli.fr/2016/11/service-function-chaining-demo-with-devstack/ )

Source VM 10.2.0.9

Service VM1 ingress-> 10.2.0.13 egress-> 10.2.0.7

Service VM2 ingress-> 10.2.0.12 egress-> 10.2.0.10

Destination VM 10.2.0.6 + a public interface

(MASQUERADE rule added in destination VM )

• Created flow classifiers for TCP & UDP traffic to pass through the chain

neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix 10.2.0.9/16 --destination-ip-prefix 10.2.0.6/16 --protocol tcp --logical-source-port source_vm_port SFC_tcp

neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix 10.2.0.9/16 --destination-ip-prefix 10.2.0.6/16 --protocol udp --logical-source-port source_vm_port SFC_udp

• Created a port chain using the above flow-classifier.

neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2 --flow-classifier SFC_tcp --flow-classifier SFC_udp PC1

• Added static routing in service VMs

sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

sudo ip route add 10.2.0.9 dev eth0

sudo ip route add 10.2.0.6 dev eth1

• Ran a simple http server in the destination VM (10.2.0.6)

while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to destination server" | sudo nc -l -p 80 ; done&

• Tried to curl the http server from Source VM(10.2.0.9)

curl http://10.2.0.6

I could see the HTTP request going through the service VMs (verified with tcpdump on the service VM ports) and finally reaching the destination VM. This confirms that SFC is working as expected.

However, when trying to curl http://bing.com from the source VM, the URL is resolved by the DNS server (10.2.0.2), but the HTTP request does not go through the service VMs, i.e., the SFC:

user@VNF:~$ curl http://www.bing.com -v

  • Rebuilt URL to: http://www.bing.com/

  • Hostname was NOT found in DNS cache

  • Trying 204.79.197.200...

  • connect to 204.79.197.200 port 80 failed: Connection refused

  • Trying 13.107.21.200...

  • connect to 13.107.21.200 port 80 failed: Connection refused

  • Failed to connect to www.bing.com port 80: Connection refused

  • Closing connection 0

curl: (7) Failed to connect to www.bing.com port 80: Connection refused

user@VNF:~$ nslookup bing.com

Server: 10.2.0.2

Address: 10.2.0.2#53

Non-authoritative answer:

Name: bing.com

Address: 204.79.197.200

Name: bing.com

Address: 13.107.21.200
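One possible explanation (my reading, not verified against the networking-sfc code): the classifiers above use --destination-ip-prefix 10.2.0.6/16, which is normalized to the network 10.2.0.0/16, so only destinations inside that subnet ever match, and bing.com's public addresses bypass the chain. A quick stand-alone check of that prefix logic (the in_prefix helper is hypothetical, using Python's stdlib ipaddress module):

```shell
# Hypothetical helper: test whether an address falls inside the classifier's
# normalized destination prefix 10.2.0.0/16.
in_prefix() {
  python3 -c "import ipaddress, sys; print(ipaddress.ip_address(sys.argv[1]) in ipaddress.ip_network('10.2.0.0/16'))" "$1"
}

in_prefix 10.2.0.6        # prints True  -> matched, traffic enters the chain
in_prefix 204.79.197.200  # prints False -> not matched, traffic bypasses the chain
```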

Doubts:-

How should the flow classifiers be modified, or what changes are needed in the static routes inside the service VMs, so that HTTP requests such as curl http://www.bing.com from the source VM (10.2.0.9) go through the service VMs, reach the destination VM, and are finally sent to bing.com through the destination VM's public interface?
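One direction worth trying (an untested sketch; the classifier name SFC_tcp_any is mine): create a classifier without a --destination-ip-prefix, so all TCP traffic leaving the source port enters the chain regardless of destination:

```shell
# Hypothetical classifier: match all TCP from the source VM, any destination
neutron flow-classifier-create --ethertype IPv4 \
  --source-ip-prefix 10.2.0.9/32 \
  --protocol tcp \
  --logical-source-port source_vm_port SFC_tcp_any
```

The service VMs would likewise need a route for public destinations (for example, a default route out the egress interface), since the static routes above only cover 10.2.0.9 and 10.2.0.6.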