bobyakov's profile - activity

2018-05-22 12:24:04 -0500 received badge  Famous Question (source)
2018-03-22 05:10:16 -0500 received badge  Notable Question (source)
2018-03-22 05:10:16 -0500 received badge  Popular Question (source)
2017-05-18 10:42:39 -0500 asked a question VM not accepting IP

Hi Guys,

I keep having this intermittent issue and need to figure it out, please help. Every once in a while an internal network will assign an IP and the VM no longer accepts it. Everything was working and then it just stops. If I create a new network in the same project, new instances work fine; new instances on the old network no longer work.

The VM boot log shows the following, and no IP is assigned to eth1:

    url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error

The Neutron syslog shows:

    dnsmasq-dhcp[8432]: message repeated 15 times: [ DHCPDISCOVER(tap66eeab4d-18) fa:16:3e:a5:9e:05 no address available]

I checked /var/lib/neutron/dhcp/network_id and all the info is correct (MAC to IP). I also double-checked that all the services are running (metadata, dhcp-agent, etc.).

I have no idea where to go from here.

The system is running Ubuntu 14.04 with OpenStack Icehouse.

UPDATE

Running tcpdump on the DHCP interface for a working instance shows requests coming in:

    sudo ip netns exec qdhcp-ff31bc90-dc60-4764-b660-8861170c1840 tcpdump -ln -v -i tap2dce5482-fa

Monitoring the dump on a non-working instance shows no requests coming in.
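For anyone hitting the same symptom: since DHCPDISCOVER never shows up on the broken network's tap, the request is being lost somewhere between the compute node and the DHCP namespace. A rough diagnostic sketch, reusing the tap name and path from this question (otherwise illustrative):

    # confirm dnsmasq still has a MAC-to-IP entry for the instance
    sudo cat /var/lib/neutron/dhcp/network_id/host

    # confirm the DHCP tap is attached to the integration bridge
    sudo ovs-vsctl show | grep -A 2 tap66eeab4d-18

    # restart the DHCP agent to force the port and dnsmasq config to be rebuilt
    sudo service neutron-dhcp-agent restart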

2017-03-15 06:33:32 -0500 received badge  Notable Question (source)
2017-02-23 02:19:47 -0500 received badge  Notable Question (source)
2017-02-23 02:19:47 -0500 received badge  Famous Question (source)
2017-02-23 02:19:47 -0500 received badge  Popular Question (source)
2017-02-14 12:12:35 -0500 received badge  Notable Question (source)
2017-02-14 08:53:38 -0500 asked a question Icehouse documentation

Hi Guys,

Whom can I contact in order to get installation documentation for Icehouse? It seems the website was recently updated and all Icehouse documentation was removed. I have an Icehouse deployment running and want to access the old documentation in preparation for upgrading it to Kilo.

2017-02-12 11:05:48 -0500 received badge  Popular Question (source)
2017-02-10 17:51:08 -0500 asked a question unable to create snapshot

Hey Guys,

I have an Icehouse cluster running, with a Swift cluster for images. A weird thing started happening: when I create a snapshot of an instance with a small root disk (under 20GB), it works fine. When I try to snapshot an instance with a larger root disk (over 20GB), it hangs in the queued state. No errors in nova-compute, no errors in the glance logs. It's been sitting for hours and I've tried multiple times. Please help. The compute host has plenty of root disk, over 1TB.
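A hedged note for anyone debugging the same hang: while an image is queued, Glance has only the record and no data yet, so the interesting activity is on the compute host, where the snapshot is staged and converted locally before upload. Some things worth checking (the image ID is a placeholder):

    # watch the image status from the controller
    glance image-show <image-id>

    # on the compute host, see whether the snapshot is being staged locally
    # (snapshots_directory defaults to $instances_path/snapshots)
    ls -lh /var/lib/nova/instances/snapshots/

    # large root disks go through a long local qemu-img conversion before upload
    ps aux | grep qemu-img

If a qemu-img process is running, the snapshot may simply be slow rather than stuck; if nothing is staged at all, the hang likely happened before conversion started.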

2016-09-19 03:58:23 -0500 received badge  Famous Question (source)
2016-08-24 09:26:10 -0500 received badge  Popular Question (source)
2016-08-22 11:41:37 -0500 asked a question Icehouse upgrade instructions

Hi All,

I am currently running Ubuntu 14.04 and the Icehouse release. I am looking to upgrade to a newer version and have a couple of questions.

First, does the upgrade have to be sequential, or can I go straight to Mitaka?

Second, are there any really good guides for upgrading?

Lastly, I broke the database out of the controller and am running a separate MariaDB cluster (Distrib 5.5.50-MariaDB). Will this work fine with the latest OpenStack release, or do I need to upgrade MariaDB to the 10.x series?

SECOND option: after looking into this a little, it seems the best approach would be to build a new controller side by side. If I have multiple instances of each service (e.g. nova, keystone), would they all work fine if they are different versions?

Ex: I am currently running two controllers for load balancing, each with Icehouse (keystone, nova, scheduler, glance, and swift), and would add a third controller running the same services in the Mitaka version.

Would this work?

2016-05-10 01:57:52 -0500 received badge  Famous Question (source)
2016-04-13 12:47:20 -0500 commented answer Performance Monitoring counters

This did allow me to get performance counters. I modified nova.conf on the compute node under the [libvirt] section. You have to create a new instance after nova is modified. The CPU model now matches the physical host.
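For reference, the setting being described here is presumably libvirt's CPU mode. A minimal sketch of the compute node's nova.conf, assuming host-passthrough is acceptable (host-model is a softer alternative that still exposes a matching model):

    [libvirt]
    # expose the physical CPU model (including its PMU features) to guests;
    # only instances created after the change pick this up
    cpu_mode = host-passthrough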

2016-04-13 09:16:20 -0500 commented answer Performance Monitoring counters

I updated the one compute node's nova.conf; the virtual host's CPU still does not match the physical host.

2016-04-13 08:52:41 -0500 commented answer Performance Monitoring counters

The nova.conf file you are referring to is on the compute node, correct? Also, which section of the nova.conf file, [DEFAULT]?

2016-04-12 14:02:06 -0500 received badge  Notable Question (source)
2016-04-12 08:19:00 -0500 received badge  Popular Question (source)
2016-04-11 15:47:37 -0500 asked a question Performance Monitoring counters

I am looking for information on whether OpenStack supports performance monitoring counters. For example:

If I run below command on physical host:

perf stat ls
1,575,930 cycles                        #    1.585 GHz
933,369 stalled-cycles-frontend   #   59.23% frontend cycles idle
622,292 stalled-cycles-backend    #   39.49% backend  cycles idle

If I run the same command on an instance:

perf stat ls
<not supported> cycles
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend

UPDATE

After spending some time researching, I still don't have an answer. What I did find, and I think it's related, is that the CPU model on the virtual host differs from the physical host. I suspect some of the modules don't recognize the model.

Physical model= Intel(R) Xeon(R) CPU X5650 @ 2.67GHz

Virtual model = Westmere E56xx/L56xx/X56xx (Nehalem-C)

2016-03-24 03:21:02 -0500 received badge  Notable Question (source)
2016-02-19 07:42:58 -0500 received badge  Famous Question (source)
2016-02-18 14:38:45 -0500 commented question glance flavor-keystone not working Need Help!!!

I found something where the registry_host looks for metadata, and since I knew where the metadata is stored, I tried pointing it directly there rather than at the VIP on the HAProxy.

2016-01-12 10:17:05 -0500 commented question cloud-init not assigning IP

I deleted the interface on the compute host and restarted the instance. It created a new interface and started working. I never found the root cause.

2015-12-23 01:18:39 -0500 received badge  Notable Question (source)
2015-12-06 01:11:30 -0500 marked best answer nova-compute connecting amqp local

Hi Guys,

I need some help with nova-compute. It seems I can't get nova-compute to register with the controller; everything else is working properly. I checked the logs on the compute node, and the only error I see is in nova-compute. Although the error below indicates the AMQP server as localhost:5672, my config actually points at a clustered RabbitMQ server, and I am able to telnet and connect on that port on the configured hosts. I have no idea why it keeps trying localhost.

Using Ubuntu 14.04 and OpenStack Icehouse.

nova-compute log:

2014-08-12 12:32:26.659 4173 INFO oslo.messaging._drivers.impl_rabbit [-] Reconnecting to AMQP server on localhost:5672
2014-08-12 12:32:26.659 4173 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 1.0 seconds...
2014-08-12 12:32:27.673 4173 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 23 seconds.
2014-08-12 12:32:50.691 4173 INFO oslo.messaging._drivers.impl_rabbit [-] Reconnecting to AMQP server on localhost:5672
2014-08-12 12:32:50.692 4173 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 1.0 seconds...
2014-08-12 12:32:51.705 4173 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 25 seconds.
2014-08-12 12:33:16.725 4173 INFO oslo.messaging._drivers.impl_rabbit [-] Reconnecting to AMQP server on localhost:5672
2014-08-12 12:33:16.726 4173 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 1.0 seconds...
2014-08-12 12:33:17.738 4173 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 27 seconds.

compute node nova.conf:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
auth_strategy = keystone
glance_host = 10.1.0.2

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:password@10.1.0.2/nova

[keystone_authtoken]
auth_uri = http://10.1.0.2:5000
auth_host = 10.1.0.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password

[rabbitMQ]
rpc_backend = rabbit
rabbit_hosts = 10.1.0.5,10.1.0.6
rabbit_password = password

[neutron network]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://10.1.0.2:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = quantum
neutron_admin_password = password
neutron_admin_auth_url = http://10.1.0.2:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

my_ip = 10.1.0.22
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.1.0.22
novncproxy_base_url = http://10.1.0.2:6080/vnc_auto.html
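A note for anyone comparing against this config: Icehouse's oslo.messaging reads the rabbit options from [DEFAULT], and nova has no [rabbitMQ] or [neutron network] sections, so options placed in unrecognized sections are silently ignored. That would be consistent with the localhost:5672 fallback in the log above. A sketch of where the messaging options are expected to live (same values as in the question):

    [DEFAULT]
    rpc_backend = rabbit
    rabbit_hosts = 10.1.0.5,10.1.0.6
    rabbit_password = password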
2015-12-06 01:11:30 -0500 received badge  Self-Learner (source)
2015-11-26 07:28:36 -0500 received badge  Famous Question (source)
2015-11-09 04:58:35 -0500 received badge  Popular Question (source)
2015-11-06 14:55:04 -0500 asked a question unable to migrate instance after it has been resized

Hi all,

I'm running Ubuntu 14.04 with Icehouse.

I'm not sure where to start. I'm able to resize instances, and I'm also able to migrate and live-migrate instances successfully. The issue only happens when I resize an instance first and then try to migrate: no errors, it just sits there migrating forever. I checked libvirt and nova and do not see any errors.

2015-10-22 14:48:29 -0500 received badge  Popular Question (source)
2015-10-22 09:32:53 -0500 answered a question Instance with two NICs to route traffic

I was advised I could do it through Neutron, and this worked:

    neutron port-update nicportid --allowed_address_pairs list=true type=dict ip_address=subnet

The problem is that this requires massive overhead; is there any way to allow a project user to do this themselves?

2015-10-22 08:08:41 -0500 commented answer Instance with two NICs to route traffic

My mistake, I was just entering the info as an example. I updated the information with the actual IPs.

2015-10-22 08:07:57 -0500 edited question Instance with two NICs to route traffic

Hi ,

I need help setting up an Ubuntu server as a router with two NICs to route traffic between two private subnets. I need to ping/connect from client1 to client2 via the router instance.

Ex:

    Ubuntu router instance: nic1 10.2.1.8, nic2 192.168.1.4
    Client1: nic1 10.2.1.6, gateway 10.2.1.8
    Client2: nic1 192.168.1.6, gateway 192.168.1.4

Router Instance info:

    $ sudo sysctl -p
    net.ipv4.ip_forward = 1

ubuntu@router-1:~$ ifconfig

eth0      Link encap:Ethernet  HWaddr fa:16:3e:aa:c4:05
          inet addr:10.2.1.8  Bcast:10.2.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:feaa:c405/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1454  Metric:1
          RX packets:11466 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11410 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1121494 (1.1 MB)  TX bytes:1317588 (1.3 MB)

eth1      Link encap:Ethernet  HWaddr fa:16:3e:49:08:03
          inet addr:192.168.1.4  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe49:803/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4381 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4130 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:426279 (426.2 KB)  TX bytes:381393 (381.3 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1680 (1.6 KB)  TX bytes:1680 (1.6 KB)

ubuntu@router-1:~$ route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.2.1.1        0.0.0.0         UG    0      0        0 eth0
10.2.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1