
vaidyanath-m's profile - activity

2019-01-21 05:26:56 -0500 received badge  Famous Question (source)
2019-01-21 05:26:56 -0500 received badge  Notable Question (source)
2018-03-22 08:39:16 -0500 received badge  Famous Question (source)
2016-10-22 11:02:25 -0500 received badge  Popular Question (source)
2016-07-28 03:39:06 -0500 asked a question vm unable to get ip neutron with vmware nsx plugin

Hi all, here are my setup details:

1. Controller node
   Services: keystone, glance, neutron, nova
   Neutron plugin: vmware-nsx - https://github.com/openstack/vmware-nsx/
   Neutron agents: openvswitch agent
2. Compute node
   Services: nova-compute

I have all the services up and running, but when I provision a VM, it does not pick up the IP address offered by the DHCP server.

Here are the config details:

root@controller:~# neutron net-show test
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| created_at                | 2016-07-28T13:35:22                  |
| description               |                                      |
| id                        | be2178a3-a268-47f4-809e-8e0024c6f054 |
| name                      | test                                 |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | dvs                                  |
| provider:segmentation_id  | 110                                  |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 5009ec57-4ca7-4e2b-962e-549e6bbee408 |
| tags                      |                                      |
| tenant_id                 | ce581005def94bb1947eac9ac15f15ea     |
| updated_at                | 2016-07-28T13:35:22                  |
+---------------------------+--------------------------------------+

root@controller:~# neutron subnet-show testsubnet
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.18.246", "end": "192.168.18.248"} |
| cidr              | 192.168.18.0/24                                      |
| created_at        | 2016-07-28T14:56:54                                  |
| description       |                                                      |
| dns_nameservers   | 192.168.13.12                                        |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.18.1                                         |
| host_routes       |                                                      |
| id                | 5009ec57-4ca7-4e2b-962e-549e6bbee408                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | testsubnet                                           |
| network_id        | be2178a3-a268-47f4-809e-8e0024c6f054                 |
| subnetpool_id     |                                                      |
| tenant_id         | ce581005def94bb1947eac9ac15f15ea                     |
| updated_at        | 2016-07-28T14:56:54                                  |
+-------------------+------------------------------------------------------+
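One thing worth noticing in this subnet: the allocation pool only spans three addresses, and (as the namespace output further down shows) the DHCP port itself takes the first one, 192.168.18.246. A quick sanity check of that layout can be sketched with Python's standard `ipaddress` module, using the values copied from the table above:

```python
import ipaddress

# Values taken from the neutron subnet-show output above
cidr = ipaddress.ip_network("192.168.18.0/24")
gateway = ipaddress.ip_address("192.168.18.1")
pool_start = ipaddress.ip_address("192.168.18.246")
pool_end = ipaddress.ip_address("192.168.18.248")

# The allocation pool must sit inside the CIDR and exclude the gateway
assert pool_start in cidr and pool_end in cidr
assert not (pool_start <= gateway <= pool_end)

# A 3-address pool: the DHCP port consumes the first address,
# leaving only two for VMs
pool_size = int(pool_end) - int(pool_start) + 1
print(pool_size)  # → 3
```

This is only a consistency check on the configuration, not a diagnosis; it just makes explicit how little room the pool leaves once the DHCP port has claimed .246.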

root@controller:~# ovs-vsctl show
d516b5b1-db3f-4acd-856c-10d530c58c23
    Bridge br-dvs
        Port br-dvs
            Interface br-dvs
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tap91d8accd-6d"
            Interface "tap91d8accd-6d"
                type: internal
    ovs_version: "2.5.0"

root@controller:~# ip netns
qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054

root@controller:~# ip netns exec qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054 ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

tap91d8accd-6d Link encap:Ethernet  HWaddr fa:16:3e:7f:5e:03
          inet addr:192.168.18.246  Bcast:192.168.18.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe7f:5e03/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

root@controller:~# ping 192.168.18.246
PING 192.168.18.246 (192.168.18.246) 56(84) bytes of data.
^C
--- 192.168.18.246 ping statistics ---
20 packets transmitted, 0 received, 100% packet loss, time 18999ms

I don't have any agents running, because vmware-nsx should be taking care of the communication with Open vSwitch.

Commandline: apt install openvswitch-switch
Install: openvswitch-switch:amd64 (2.5.0-0ubuntu1~cloud0), openvswitch-common:amd64 (2.5.0-0ubuntu1~cloud0, automatic)

2016-07-11 12:13:28 -0500 asked a question nova boot vmware doesn't create the cached image in ipaddress_base folder

nova boot fails because the image created through oslo_vmware/api.py is placed outside the base folder. Although the x.x.x.x_base folder is present, the cached image is not created inside it. My configuration is as follows:

  1. The controller and compute node are in one vCenter server.
  2. The compute node manages a different vCenter server.
  3. All the hosts in the cluster share the same datastores, as well as many others.
  4. I have many clusters apart from the one I am currently managing; those will be managed later.

Any help here would be appreciated.

Regards, Vaidyanath

2016-07-11 12:05:53 -0500 received badge  Notable Question (source)
2016-07-08 04:01:06 -0500 commented answer How to create network configuration for Openstack Mitaka

Hi Partha, I am using a provider network. I have the DHCP and metadata services installed on the controller, and I have also installed Open vSwitch on the controller; I don't have anything installed on the compute node.

http://paste.openstack.org/show/528559/
I am using https://github.com/openstack/vmware-nsx/ - DVS

2016-06-29 08:17:09 -0500 received badge  Popular Question (source)
2016-06-27 22:39:57 -0500 commented answer neutron-dhcp offer doesn't reach VM

http://paste.openstack.org/show/523709/

2016-06-27 22:39:17 -0500 answered a question neutron-dhcp offer doesn't reach VM

0b8220bf-6e38-46ed-8abd-e96939485ff5
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "eth1"
            Interface "eth1"
        Port "tapd4efa0d6-d5"
            Interface "tapd4efa0d6-d5"
                type: internal
    Bridge br-dvs
        Port br-dvs
            Interface br-dvs
                type: internal
    ovs_version: "2.5.0"

I don't have a router created; it's just a flat network. vmware-nsx with DVS doesn't support L3.

Regards, Vaidyanath

2016-06-27 03:35:13 -0500 asked a question neutron-dhcp offer doesn't reach VM

Hi all, I have a compute node and a controller node configured in my lab, and I am using VMware as my hypervisor.

The controller node has nova, neutron, the DHCP agent, and the metadata agent; the compute node has just nova-compute.

0b8220bf-6e38-46ed-8abd-e96939485ff5
    Bridge br-int
        fail_mode: secure
        Port "tapb8a9ad98-30"
            Interface "tapb8a9ad98-30"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge br-dvs
        Port br-dvs
            Interface br-dvs
                type: internal
    ovs_version: "2.5.0"

root@controller:~# ip netns
qdhcp-8eb9fc31-0f12-4df5-b41b-31be0b9f95c6

root@controller:~# ip netns exec qdhcp-8eb9fc31-0f12-4df5-b41b-31be0b9f95c6 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: tapb8a9ad98-30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:c4:23:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.14.40/24 brd 192.168.14.255 scope global tapb8a9ad98-30
       valid_lft forever preferred_lft forever
    inet 169.254.169.254/16 brd 169.254.255.255 scope global tapb8a9ad98-30
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fec4:2319/64 scope link
       valid_lft forever preferred_lft forever
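Note that the DHCP port above also carries 169.254.169.254, the well-known metadata address, which suggests isolated metadata is enabled so the DHCP namespace answers metadata requests directly. A trivial check with Python's `ipaddress` module confirms why this address never needs a router: it falls in the IPv4 link-local range.

```python
import ipaddress

# The second address on the tap interface in the namespace above
metadata_ip = ipaddress.ip_address("169.254.169.254")
link_local = ipaddress.ip_network("169.254.0.0/16")

# Link-local traffic stays on the local segment, so no L3 router is needed
assert metadata_ip in link_local
assert metadata_ip.is_link_local
```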

Jun 27 13:33:27 dnsmasq-dhcp[17906]: 361721128 available DHCP subnet: 192.168.14.0/255.255.255.0
Jun 27 13:33:27 dnsmasq-dhcp[17906]: 361721128 vendor class: MSFT 5.0
Jun 27 13:33:27 dnsmasq-dhcp[17906]: 361721128 client provides name: blb44cehvrt463
Jun 27 13:33:27 dnsmasq-dhcp[17906]: 361721128 DHCPDISCOVER(tapb8a9ad98-30) 00:50:56:b8:6c:e6 ignored
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 available DHCP subnet: 192.168.14.0/255.255.255.0
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 client provides name: devstack
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 DHCPDISCOVER(tapb8a9ad98-30) fa:16:3e:4e:3e:b2
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 tags: tag0, known, tapb8a9ad98-30
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 DHCPOFFER(tapb8a9ad98-30) 192.168.14.41 fa:16:3e:4e:3e:b2
Jun 27 13:33:29 dnsmasq-dhcp[17906]: Ignoring duplicate dhcp-option 26
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 requested options: 1:netmask, 28:broadcast, 2:time-offset, 3:router,
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 requested options: 15:domain-name, 6:dns-server, 119:domain-search,
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 requested options: 12:hostname, 44:netbios-ns, 47:netbios-scope,
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 requested options: 26:mtu, 121:classless-static-route, 42:ntp-server
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 next server: 192.168.14.40
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size: 1 option: 53 message-type 2
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size: 4 option: 54 server-identifier 192.168.14.40
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size: 4 option: 51 lease-time 1d
Jun 27 13:33:29 dnsmasq-dhcp[17906]: 4249997134 sent size: 4 option: 58 ... (more)
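The log above actually contains two separate DHCP conversations: the 00:50:56 (VMware) MAC is "ignored" because neutron's dnsmasq only answers MACs it has a port record for, while the fa:16:3e (neutron) MAC does get a DHCPOFFER, so the failure is on the path back to the VM, not in dnsmasq. A small sketch of how to pair these messages by dnsmasq's transaction ID (a hypothetical helper, not part of any OpenStack tooling; the sample lines are copied from the log):

```python
import re
from collections import defaultdict

# A few lines copied from the dnsmasq-dhcp log above
log = """\
dnsmasq-dhcp[17906]: 361721128 DHCPDISCOVER(tapb8a9ad98-30) 00:50:56:b8:6c:e6 ignored
dnsmasq-dhcp[17906]: 4249997134 DHCPDISCOVER(tapb8a9ad98-30) fa:16:3e:4e:3e:b2
dnsmasq-dhcp[17906]: 4249997134 DHCPOFFER(tapb8a9ad98-30) 192.168.14.41 fa:16:3e:4e:3e:b2
"""

# Group DHCP messages by dnsmasq's per-conversation transaction ID
pattern = re.compile(r"dnsmasq-dhcp\[\d+\]: (\d+) (DHCP\w+)\(\S+\)(.*)")
transactions = defaultdict(list)
for line in log.splitlines():
    m = pattern.search(line)
    if m:
        xid, msg, rest = m.groups()
        transactions[xid].append((msg, rest.strip()))

# Summarize each conversation: did an offer go out, was the client ignored?
for xid, msgs in transactions.items():
    offered = any(msg == "DHCPOFFER" for msg, _ in msgs)
    ignored = any(rest.endswith("ignored") for _, rest in msgs)
    print(xid, "offer sent" if offered else "no offer", "(ignored)" if ignored else "")
```

Running this prints one line per transaction ID, which makes it obvious at a glance that 4249997134 got an offer while 361721128 was ignored.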

2016-06-26 06:49:34 -0500 received badge  Enthusiast
2016-06-25 00:47:06 -0500 commented answer How to create network configuration for Openstack Mitaka

Hi Partha, I have a similar environment, but I am unable to get the IP to the VM, even though the DHCP server seems to be offering the IP address to it.

2016-05-19 20:59:32 -0500 commented question Openstack with VMware Virtual Distributed Switch (vDS) VLAN tagging

I am trying to set up neutron with VMware vDS on a flat network. Is there any guide you would suggest?

2014-08-01 10:33:54 -0500 received badge  Famous Question (source)
2014-05-22 16:36:56 -0500 received badge  Notable Question (source)
2014-04-02 01:43:54 -0500 commented answer Horizon failure- openstack-dashboard: 500 on openstack-status show

I am also hitting the same issue. Did you find a solution for this one?

2014-04-02 01:43:00 -0500 received badge  Supporter (source)
2014-04-01 01:18:18 -0500 commented answer Inspite of successfull installation unable to open horizon

Showing the logs from Horizon: cat /var/log/horizon/horizon.log

2014-04-01 06:06:29,576 2615 ERROR django.request Internal Server Error: /dashboard/
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 92, in get_response
    response = middleware_method(request)
  File "/usr/lib/python2.7/site-packages/django/middleware/locale.py", line 21, in process_request
    check_path = self.is_language_prefix_patterns_used()
  File "/usr/lib/python2.7/site-packages/django/middleware/locale.py", line 56, in is_language_prefix_patterns_used
    for url_pattern in get_resolver(None).url_patterns:
  File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py", line 347, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py", line 342, in urlconf_module
    self._urlconf_module = import_module(self.urlconf_name)
  File "/usr/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
    __import__(name)
  File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/urls.py", line ... (more)

2014-04-01 00:05:22 -0500 commented answer Inspite of successfull installation unable to open horizon

Anand, thanks for the reply and sorry about the delayed response. I tried the steps, but they don't seem to help: the Horizon service still returns a 500 and doesn't start.

2014-04-01 00:02:20 -0500 received badge  Popular Question (source)
2014-03-28 00:14:35 -0500 asked a question Inspite of successfull installation unable to open horizon

Has anyone faced this issue?

[root@test-fedora ~]# packstack --allinone
Welcome to Installer setup utility

Installing:
Clean Up...                                            [ DONE ]
Setting up ssh keys...root@10.20.26.108's password:
                                 [ DONE ]
Discovering hosts' details...                          [ DONE ]
Adding pre install manifest entries...                 [ DONE ]
Adding MySQL manifest entries...                       [ DONE ]
Adding QPID manifest entries...                        [ DONE ]
Adding Keystone manifest entries...                    [ DONE ]
Adding Glance Keystone manifest entries...             [ DONE ]
Adding Glance manifest entries...                      [ DONE ]
Installing dependencies for Cinder...                  [ DONE ]
Adding Cinder Keystone manifest entries...             [ DONE ]
Adding Cinder manifest entries...                      [ DONE ]
Checking if the Cinder server has a cinder-volumes vg...[ DONE ]
Adding Nova API manifest entries...                    [ DONE ]
Adding Nova Keystone manifest entries...               [ DONE ]
Adding Nova Cert manifest entries...                   [ DONE ]
Adding Nova Conductor manifest entries...              [ DONE ]
Adding Nova Compute manifest entries...                [ DONE ]
Adding Nova Scheduler manifest entries...              [ DONE ]
Adding Nova VNC Proxy manifest entries...              [ DONE ]
Adding Nova Common manifest entries...                 [ DONE ]
Adding Openstack Network-related Nova manifest entries...[ DONE ]
Adding Neutron API manifest entries...                 [ DONE ]
Adding Neutron Keystone manifest entries...            [ DONE ]
Adding Neutron L3 manifest entries...                  [ DONE ]
Adding Neutron L2 Agent manifest entries...            [ DONE ]
Adding Neutron DHCP Agent manifest entries...          [ DONE ]
Adding Neutron LBaaS Agent manifest entries...         [ DONE ]
Adding Neutron Metadata Agent manifest entries...      [ DONE ]
Adding OpenStack Client manifest entries...            [ DONE ]
Adding Horizon manifest entries...                     [ DONE ]
Adding Swift Keystone manifest entries...              [ DONE ]
Adding Swift builder manifest entries...               [ DONE ]
Adding Swift proxy manifest entries...                 [ DONE ]
Adding Swift storage manifest entries...               [ DONE ]
Adding Swift common manifest entries...                [ DONE ]
Adding Provisioning manifest entries...                [ DONE ]
Adding Ceilometer manifest entries...                  [ DONE ]
Adding Ceilometer Keystone manifest entries...         [ DONE ]
Adding Nagios server manifest entries...               [ DONE ]
Adding Nagios host manifest entries...                 [ DONE ]
Adding post install manifest entries...                [ DONE ]
Preparing servers...                                   [ DONE ]
Installing Dependencies...                             [ DONE ]
Copying Puppet modules and manifests...                [ DONE ]
Applying Puppet manifests...
Applying 10.20.26.108_prescript.pp
10.20.26.108_prescript.pp :                 [ DONE ]
Applying 10.20.26.108_mysql.pp
Applying 10.20.26.108_qpid.pp
10.20.26.108_mysql.pp :                         [ DONE ]
10.20.26.108_qpid.pp :                           [ DONE ]
Applying 10.20.26.108_keystone.pp
Applying 10.20.26.108_glance.pp
Applying 10.20.26.108_cinder.pp
10.20.26.108_keystone.pp :                   [ DONE ]
10.20.26.108_glance.pp :                       [ DONE ]
10.20.26.108_cinder.pp :                       [ DONE ]
Applying 10.20.26.108_api_nova.pp
10.20.26.108_api_nova.pp :                   [ DONE ]
Applying 10.20.26.108_nova.pp
10.20.26.108_nova.pp :                           [ DONE ]
Applying 10.20.26.108_neutron.pp
10.20.26.108_neutron.pp :                     [ DONE ]
Applying 10.20.26.108_osclient.pp
Applying 10.20.26.108_horizon.pp
10.20.26.108_osclient.pp :                   [ DONE ]
10.20.26.108_horizon.pp :                     [ DONE ]
Applying 10.20.26.108_ring_swift.pp
10.20.26.108_ring_swift.pp :               [ DONE ]
Applying 10.20.26.108_swift.pp
Applying 10.20.26.108_provision.pp
Applying 10.20.26.108_ceilometer.pp
Applying 10.20.26.108_nagios.pp
Applying 10.20.26.108_nagios_nrpe.pp
10.20.26.108_swift.pp :                         [ DONE ]
10.20.26.108_provision.pp :                 [ DONE ]
10.20.26.108_ceilometer.pp :               [ DONE ]
10.20.26.108_nagios.pp :                       [ DONE ]
10.20.26.108_nagios_nrpe.pp :             [ DONE ]
Applying 10.20.26.108_postscript.pp
10.20.26.108_postscript.pp :               [ DONE ...
(more)