
AB239's profile - activity

2019-06-10 00:08:27 -0500 received badge  Famous Question (source)
2019-06-10 00:08:27 -0500 received badge  Notable Question (source)
2018-09-05 18:17:42 -0500 received badge  Famous Question (source)
2018-09-05 18:17:42 -0500 received badge  Notable Question (source)
2018-09-05 18:17:42 -0500 received badge  Popular Question (source)
2018-07-27 17:18:04 -0500 received badge  Famous Question (source)
2018-06-14 09:42:47 -0500 received badge  Popular Question (source)
2018-05-15 05:03:52 -0500 asked a question dedicated network between cinder and compute

Hello all,

Setup information:

OpenStack Version: Newton

1 KVM Controller on Server 1 (with one 1Gbps NIC attached at eth0)

1 KVM Cinder on Server 2. Server 2 has a 120G SSD attached as a raw block device (no file system created on the SSD). This VM has a 1Gbps NIC attached at eth0

1 Compute node, Server 3. The compute node has one 1Gbps network card attached to it, which currently carries all types of traffic.

I want to create a dedicated 10Gbps network (attached via a crossover cable) between Server 2 (Cinder) and Server 3 (Compute). I currently have a default 1Gbps connection, but I am not able to get high throughput in R/W operations to the SSD due to the 1Gbps network limit.

Requirement:

1) Please suggest whether I can create a 10Gbps network between the compute and cinder nodes. Where do I have to configure the additional interface on either side?

2) I am using the LVM driver in Cinder. I can see there is an option iscsi_ip_address in cinder.conf in the reference config files available on openstack.org (see the sketch below), but I am unable to find a similar configuration parameter on the compute side. It looks like some configuration is needed on the neutron side, but I am unsure at this point.

3) Is there any documentation where an end-to-end OpenStack deployment with multiple NICs is explained?
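From the reference configs, this is roughly what I believe the LVM backend section would look like with iscsi_ip_address pointed at the dedicated link (a minimal sketch; the 10.10.10.x addressing and the backend name are my assumptions, not my actual config):

    [DEFAULT]
    enabled_backends = lvm

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    iscsi_helper = tgtadm
    # assumed address of the 10Gbps interface on the cinder node;
    # iSCSI initiators on compute would then connect to this portal IP
    iscsi_ip_address = 10.10.10.2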

Looking forward to your replies.

TIA

2018-05-14 06:44:10 -0500 commented answer cinder block slow performance

Thanks a ton! I am closing this question. In case needed, I will create a new question with detailed info about my environment and seek your expert advice on that. :D

2018-05-14 06:03:41 -0500 received badge  Famous Question (source)
2018-05-14 04:26:20 -0500 commented answer cinder block slow performance

The Cinder backend is iSCSI. I have described the environment in my original question.

2018-05-14 02:05:42 -0500 commented answer cinder block slow performance

Thanks. I am planning to move the network between the Cinder and Compute nodes to 10Gbps. Can you please guide me on the configuration changes that will be needed in my environment?

2018-05-08 07:28:53 -0500 received badge  Notable Question (source)
2018-05-08 02:15:21 -0500 commented answer cinder block slow performance

Just updated the question with the correct device name on VM-1.

2018-05-07 08:17:05 -0500 received badge  Popular Question (source)
2018-05-07 04:14:15 -0500 asked a question cinder block slow performance

Hello all,

Setup information:

OpenStack Version: Newton

1 KVM Controller on Server 1

1 KVM Cinder on Server 2. Server 2 has a 120G SSD attached as a raw block device (no file system created on the SSD)

1 Compute node, Server 3

I have a pass-through configuration for the SSD on the Cinder VM, and I can see the drive as /dev/vda. I have created an OpenStack instance on the compute node, let's call it VM-1. A Cinder volume is attached to it and is visible as /dev/vdb on VM-1.

Issue: I am seeing very low disk read/write numbers for the mounted volume on VM-1.

Troubleshooting and hdparm numbers so far:

1) SERVER 2 (host for the Cinder VM):

hdparm -Tt /dev/sda (the SSD device)

Timing cached reads:   14808 MB in  2.00 seconds = 7411.80 MB/sec

Timing buffered disk reads: 1208 MB in  3.00 seconds = 402.03 MB/sec

2) CINDER VM (on Server 2):

hdparm -t --direct /dev/vda

/dev/vda: 
 Timing cached reads:   14136 MB in  1.98 seconds = 7125.59 MB/sec 
 Timing buffered disk reads: 1196 MB in  3.00 seconds = 398.15 MB/sec

3) VM-1 (on Server 3):

hdparm -t --direct /dev/vdb

/dev/vdb:
 Timing O_DIRECT disk reads: 208 MB in  3.01 seconds =  69.10 MB/sec

You can see there is a huge difference in disk reads between #2 and #3. Both servers (Server 2 and 3) are connected by a 1000Mbps link. But I found something strange when I checked the network interfaces for VM-1 on the compute node (Server 3).

ethtool output for eth0 (physical interface on Server 3):

...
            Advertised pause frame use: Symmetric
            Advertised auto-negotiation: Yes
            Speed: 1000Mb/s
            Duplex: Full
            Port: Twisted Pair
...

ethtool output for the tap interface attached to VM-1:

...
            Supports auto-negotiation: No
            Advertised link modes:  Not reported
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: 10Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 0
            Transceiver: internal
            Auto-negotiation: off
  ...

Interface speeds show a huge difference here as well. Please suggest whether there is any way I can change this speed on the tap interface. Is there a configuration file or anything where I can specify this?
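For reference, here is how I plan to sanity-check raw network throughput between the two servers, to confirm whether the 1Gbps path itself is the ceiling (a minimal iperf3 sketch, assuming iperf3 is installed on both ends):

    # on Server 2 (cinder node): run the server side
    iperf3 -s

    # on Server 3 (compute node): run the client; substitute Server 2's address
    iperf3 -c <server2-ip>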

Looking forward to expert advice.

EDIT 1: Corrected VM-1's device name for Cinder Volume.

TIA

2018-05-07 02:32:11 -0500 commented question can not attach cinder volume at running instance

Can you set the log level in cinder.conf to DEBUG and paste the cinder-api logs?
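For reference, the setting I mean (a minimal sketch; oslo.log reads it from the [DEFAULT] section):

    [DEFAULT]
    debug = True

After restarting the cinder services, the debug output lands in the cinder-api log (/var/log/cinder/cinder-api.log on most distro packages).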

2018-04-13 05:16:45 -0500 received badge  Notable Question (source)
2018-03-22 01:59:16 -0500 received badge  Famous Question (source)
2018-03-19 09:04:13 -0500 received badge  Notable Question (source)
2018-03-19 00:12:56 -0500 commented question Unable to SSH on IPv6

@Andreas Merk: Yes, the host is outside the tenant network. I have multiple tenants on the same host. Network topology:

1) IPv6 networks are part of a tenant only; they are NOT shared across tenants.
2) The host machine doesn't have IPv6 enabled/configured on it, as my core network is IPv4 only.

2018-03-18 12:22:20 -0500 received badge  Popular Question (source)
2018-03-17 02:56:39 -0500 received badge  Famous Question (source)
2018-03-16 02:38:13 -0500 received badge  Famous Question (source)
2018-03-16 02:38:13 -0500 received badge  Notable Question (source)
2018-03-16 02:36:53 -0500 received badge  Popular Question (source)
2018-03-16 02:36:13 -0500 marked best answer very slow cloud-config runcmd execution

Hello,

I have a working cloud-config, but it takes a lot of time to execute. Sometimes a simple apt-get install <package> takes 7-8 minutes, even when I am installing it from a local repository server on the LAN. The problem is not with fetching the files; the installation itself takes a long time.

Environment details:

1) OpenStack: Newton with Neutron
2) Both compute hosts have 96 GB RAM
3) The network supports 1Gbps

Sample cloud-config goes like this:

#cloud-config                                                
chpasswd:                                                    
  list: |                                                    
    root:passwd                                              
  expire: False                                              
apt_upgrade: false                                           
runcmd:                                                      
# Post Installation Commands                                 
- export DEBIAN_FRONTEND=noninteractive                      
- apt-get update                                             
- apt-get install -y --force-yes --allow-unauthenticated gcc
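For reference, this is where I have been looking to see which step eats the time (cloud-init's default log locations, assuming an Ubuntu/Debian guest image):

    # stdout/stderr of runcmd commands ends up here
    tail -n 100 /var/log/cloud-init-output.log

    # timestamped module-by-module log, useful for spotting the slow step
    less /var/log/cloud-init.log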

Also, I have checked during execution that both CPU and RAM show nothing terribly wrong: CPU shows only about 40% occupancy and RAM consumption is less than 30%.

Has anyone faced a similar issue? What is the solution for this?

Please let me know if you need more information about this.

2018-03-16 02:36:08 -0500 answered a question very slow cloud-config runcmd execution

It was happening due to high disk IO on my host machine, which slowed down disk operations on all the VMs as well.
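For anyone hitting the same thing, this is a minimal way to spot it on the host (iostat ships in the sysstat package; the 2-second interval is arbitrary):

    # watch per-device utilization; a device pegged near 100% in the
    # %util column means the host disk is the bottleneck
    iostat -x 2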

2018-03-16 02:35:17 -0500 received badge  Notable Question (source)
2018-03-16 02:34:07 -0500 received badge  Famous Question (source)
2018-03-16 02:33:31 -0500 asked a question Unable to SSH on IPv6

Hello all,

I am using the Newton release of OpenStack and am trying to create a setup with IPv6. My backbone/core network doesn't have IPv6, so I have kept one IPv4 interface as well to log into the VMs. Here are a few details about the setup I am trying to create:

I am doing all this from Horizon itself.

1) IPv6 network (ipv6-priv)
2) IPv6 subnet (2001:db8::/64)
3) Selected DHCPv6-Stateful as the DHCP configuration while creating the network.
4) Spawned 2 VMs to check ping, SSH and netcat.
5) VMs have 2 interfaces (eth0, which is IPv4-based, and eth1, which is IPv6).

Following issues can be seen:

ISSUE #1: New VMs that get spawned don't come up with an IPv6 address attached when I check from the CLI. I have to create a network config file eth1.cfg with the following entries:

# The primary network interface
auto eth1
iface eth1 inet6 dhcp

After this, I run ifup eth1, and then I can see the IPv6 address allocated by Neutron on the VM.

ISSUE #2: The VMs are unable to ping each other.

I solved it by creating a route on both the VMs:

ip -6 route add 2001:db8::/64 dev eth1

After setting this route, I am able to ping6.

ISSUE #3: Unable to SSH:

I have set rules in the security policy for all TCP, UDP and ICMP traffic for IPv6 (::/0), so this can't be the problem. After playing with tcpdump on VM1, VM2 and the HOST on which these VMs are spawned, I could see packets flowing like this:

VM1(initiated SSH from here) -----> HOST -----> VM2(SSH server) -----> HOST -----x

Packets are never received back on VM1, hence the SSH session never begins. I could see a few fast retransmits as well. However, if I use netcat, it works absolutely fine.

Do I need to do additional configuration on the HOST machine to allow packets to flow to VM1?
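For reference, this is how I have been checking whether firewall rules on the host drop the return traffic (assuming the host uses ip6tables; a DROP rule whose counters climb while reproducing the problem would point at the culprit):

    # list IPv6 filter rules with packet/byte counters
    ip6tables -L -n -v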

TIA

2018-03-07 04:01:30 -0500 answered a question Unable to ping two VM instances with IPv6 address via DHCPv6

Got it working. I had to add a route for my IPv6 network, something like this:

ip -6 route add 2001:db8::/64 dev eth1

I think it should be inserted by default via DHCPv6. Any clues where to configure that?
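In the meantime, one way to make the route persistent on the VM (assuming a Debian/Ubuntu guest using ifupdown; the post-up hook re-adds the route whenever eth1 comes up):

    auto eth1
    iface eth1 inet6 dhcp
        # workaround until DHCPv6 hands out the route itself
        post-up ip -6 route add 2001:db8::/64 dev eth1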

2018-03-07 01:08:08 -0500 asked a question Unable to ping two VM instances with IPv6 address via DHCPv6

Hello all,

I am using the Newton release of OpenStack and have a requirement to use IPv6 for the complete stack. I have created a tenant network with IPv6 subnet 2001:db8::/64 and a default gateway, and selected 'Stateful DHCPv6' as the option for IP allocation.

I am able to see that IPs are getting allocated in Horizon, but I can't see them inside the VM. I have to run 'dhclient -6 eth1' to attach the already-allocated IPv6 address to the VM.

I can see this on one VM:

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:28:22:a5 brd ff:ff:ff:ff:ff:ff
inet6 2001:db8::c/128 scope global 
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe28:22a5/64 scope link 
   valid_lft forever preferred_lft forever

The IP is there, but the subnet is /128, hence it can't ping any other IP.

Am I missing any configuration on tenant network?

UPDATE 1: I tried to ping these IPs from the DHCP namespace, but it didn't work either.

a@newton-controller:# ip netns exec qdhcp-4f5e070a-3d0e-408b-bbd3-8c7ee46dc509 traceroute6 2001:db8::9
traceroute to 2001:db8::9 (2001:db8::9) from 2001:db8::2, 30 hops max, 24 byte packets

 1  * *

Regards AB

2018-03-07 00:17:46 -0500 commented question DHCP not providing IPv6 address to VM

@Elangovan Anganann: Facing the same issue. I can assign an IP manually and ping, but DHCPv6 assigns IP addresses with a /128 prefix, hence I am unable to ping other VMs over IPv6.

2018-03-06 06:53:42 -0500 answered a question instance only show the IP on dashboard but not inside the VM

I am also facing this issue in the Newton release. Has anyone found a solution to this problem? When I manually add the IP in the interface file, I can see it in 'ip addr show'. Otherwise, dhclient -6 and ifup fail.

2018-01-24 11:03:17 -0500 received badge  Notable Question (source)
2018-01-15 02:59:02 -0500 received badge  Popular Question (source)
2018-01-14 13:30:47 -0500 received badge  Notable Question (source)
2018-01-14 13:30:47 -0500 received badge  Famous Question (source)
2018-01-13 12:03:24 -0500 received badge  Famous Question (source)
2018-01-11 04:46:57 -0500 asked a question Newton: Unable to migrate/live-migrate VMs

Hello team,

I am unable to migrate/live-migrate VMs from one compute node to another. Please note that all commands were executed with root user credentials.

controller:/home/# openstack compute service list
+----+------------------+-------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host              | Zone     | Status  | State | Updated At                 |
+----+------------------+-------------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | newton-controller | internal | enabled | up    | 2018-01-11T10:41:21.000000 |
|  4 | nova-scheduler   | newton-controller | internal | enabled | up    | 2018-01-11T10:41:26.000000 |
|  5 | nova-conductor   | newton-controller | internal | enabled | up    | 2018-01-11T10:41:29.000000 |
|  8 | nova-compute     | ComputeA          | nova     | enabled | up    | 2018-01-11T10:41:29.000000 |
|  9 | nova-compute     | ComputeB          | nova     | enabled | up    | 2018-01-11T10:41:29.000000 |
+----+------------------+-------------------+----------+---------+-------+----------------------------+

Command executed to migrate from ComputeA to ComputeB:

openstack server migrate c408e2ec-f555-4bd6-8d4d-52186877cd6e --live ComputeB

After this point, I don't get any error. I checked the logs and couldn't see any error related to the migration either.

Please let me know what could be wrong here.

Update #1:

I am able to see this in nova-compute.log on ComputeA:

2018-01-11 15:12:22.160 32681 ERROR oslo_messaging.rpc.server InvalidSharedStorage: defnet2 is not on shared storage: Live migration can not be used without shared storage except a booted from volume VM which does not have a local disk

If that is the case, can migration be done without the --live option, and how?
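For reference, this is what I understand the cold-migration counterpart to be (a sketch, not verified in my environment; it is the same command without --live and, as far as I know, needs the resize/migrate SSH setup between the compute nodes):

    # cold-migrate the instance (the scheduler picks the target host)
    openstack server migrate c408e2ec-f555-4bd6-8d4d-52186877cd6e

    # once the instance reaches VERIFY_RESIZE, confirm the move
    openstack server resize --confirm c408e2ec-f555-4bd6-8d4d-52186877cd6e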

2017-12-29 02:37:48 -0500 received badge  Famous Question (source)
2017-11-15 06:23:06 -0500 received badge  Necromancer (source)
2017-08-09 14:07:04 -0500 received badge  Famous Question (source)