sherv's profile - activity

2017-06-20 18:12:34 -0600 answered a question Cinder.conf is incomplete
[database]
connection = mysql+pymysql://cinder:password@dbserver/dbname

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[keystone_authtoken]
auth_uri = http://controller:5000/v3
auth_url = http://controller:35357/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = password
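
For completeness, a [DEFAULT] section along the lines of the install guide usually accompanies the above; treat the transport_url credentials and the my_ip value below as placeholders for your own deployment:

[DEFAULT]
# message queue access; the "openstack" user and its password are placeholders
transport_url = rabbit://openstack:password@controller
auth_strategy = keystone
# management IP of the node running cinder services (placeholder)
my_ip = 10.0.0.11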
2017-06-06 07:49:10 -0600 received badge  Editor (source)
2017-06-06 07:47:43 -0600 answered a question Swift and s3ql problem

I know this is, like, 2 years old, but if someone googles it in search of a solution for a similar problem, like I did: you should try specifying the project name together with the login name, like this: <project name>:<user name>. The reason is that s3ql uses only a username and password for keystone authentication, which leads to unscoped authentication, which doesn't return any catalog endpoints. To get a scoped authentication (which returns the list of endpoints) you have to specify a project (tenant) name. While the official s3ql manual doesn't say it, you can find in the code (/usr/lib/s3ql/s3ql/backends/swiftks.py for Ubuntu 16.04, apt installation) that the project name and user name can be provided together, separated by a colon.
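
For illustration, a ~/.s3ql/authinfo2 entry using the colon syntax might look roughly like this (host, port, region, container, project and user names are made-up placeholders):

[swift]
# swiftks URL format: swiftks://<auth-host>[:<port>]/<region>:<container>
storage-url: swiftks://controller:5000/RegionOne/mycontainer
# project and user separated by a colon gives you a scoped token
backend-login: myproject:myuser
backend-password: mypassword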

2017-03-29 10:26:16 -0600 answered a question Add multiple IPs to instance

Two ways:

add more interfaces to the instance. This way every IP will have its own policer (if you have any), and if you're using neutron port security (iptables filters), you'll have to ensure that traffic is sourced from the interface that the IP belongs to, otherwise it will be blocked by iptables. The latter can be achieved with iproute2 on Linux (see the sketch at the end of this answer); no idea about other OSs;

add more IPs to the port. Using neutron port-update you can add more than one fixed IP to a port, like this:

$ neutron port-update --fixed-ip subnet_id=b63cb5ae-44dd-463a-b27b-686780395bee,ip_address=192.168.1.114 --fixed-ip subnet_id=b63cb5ae-44dd-463a-b27b-686780395bee,ip_address=192.168.1.115 --fixed-ip subnet_id=b63cb5ae-44dd-463a-b27b-686780395bee,ip_address=192.168.1.116 b9f0b04f-a729-46b0-b087-d6a546def45b

$ neutron port-show b9f0b04f-a729-46b0-b087-d6a546def45b
--- output skipped ---
| fixed_ips | {"subnet_id": "b63cb5ae-...", "ip_address": "192.168.1.114"} |
|           | {"subnet_id": "b63cb5ae-...", "ip_address": "192.168.1.115"} |
|           | {"subnet_id": "b63cb5ae-...", "ip_address": "192.168.1.116"} |

This way all IPs will share one policer, and you don't have to worry about source traffic stickiness. DHCP behaviour hasn't been tested, so either test it yourself or use static configuration. This has been tested and works as of the Newton release.
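
For the first way, a rough policy-routing sketch with iproute2 (interface names, addresses and the table number are examples, adjust to your setup):

# second interface eth1 carries 192.168.2.20/24; give it its own routing table
$ ip route add 192.168.2.0/24 dev eth1 src 192.168.2.20 table 100
$ ip route add default via 192.168.2.1 table 100
# traffic sourced from that IP must leave through eth1
$ ip rule add from 192.168.2.20/32 table 100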

2016-12-01 09:28:24 -0600 received badge  Famous Question (source)
2016-10-27 14:52:31 -0600 received badge  Notable Question (source)
2016-08-11 02:56:10 -0600 received badge  Popular Question (source)
2016-08-10 09:04:16 -0600 received badge  Teacher (source)
2016-08-10 09:04:16 -0600 received badge  Self-Learner (source)
2016-08-10 08:59:54 -0600 received badge  Supporter (source)
2016-08-10 08:59:33 -0600 answered a question CEPH backend and compute disk usage

I've done as you suggested and configured Nova to use CEPH as its backend. It worked, in a way - the dashboard now says that each of my two compute nodes has 3.6TB of storage. I can see how this may cause issues in the future, as this number is obviously taken from CEPH. I have 4 OSDs, 1TB each, which, minus some overhead, results in CEPH stating it has 3724GB of storage. I guess this is the number Nova gets from the CEPH monitors as its available storage. But I also have a pool size of 2, which means every 1GB of data takes 2GB plus overhead of storage in CEPH, and Nova is not aware of that. The situation is eased a little by the fact that CEPH thin-provisions block devices, so a 100GB disk only takes as much storage as is really needed. But this can still lead to a situation where there's no storage left in CEPH while Nova doesn't know it. So I'm not sure this is the final solution.
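
"Configured Nova to use CEPH as its backend" here means pointing the libvirt driver at an RBD pool, roughly like this in nova.conf on the compute nodes (pool name, ceph user and secret UUID below are placeholders):

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
# ceph client and the libvirt secret holding its key (placeholders)
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337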

2016-08-10 06:14:08 -0600 asked a question CEPH backend and compute disk usage

I have a CEPH backend for my Mitaka cinder and glance, and everything works perfectly fine. When I create a new instance, I have to select some flavor with, let's say, a 50GB root disk. But earlier I already selected the option to create a new disk and specified a size for it - that disk is created in the CEPH cluster. So now I have a running VM with its disk in the CEPH cluster, but nova thinks I'm using 50GB of storage on the compute node (in fact I'm not). Eventually, when you create enough VMs, the scheduler will decide that storage space on the compute node is exhausted (it's not) and fail to create a new VM.

My question is: what's the best-practice approach to resolving this? Currently I'm creating flavors with a 0GB root disk (example below), but this doesn't seem right to me. Maybe there's an existing solution to let nova know that local storage on a compute node is not used with CEPH?
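
The 0GB root disk workaround looks something like this (flavor name and sizes are arbitrary):

# disk=0 makes nova size the root disk from the image/volume instead of the flavor
$ openstack flavor create --vcpus 2 --ram 4096 --disk 0 m1.ceph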

2016-06-18 05:21:31 -0600 received badge  Famous Question (source)
2016-06-02 06:57:44 -0600 received badge  Enthusiast
2016-05-30 06:20:48 -0600 received badge  Popular Question (source)
2016-05-30 06:20:48 -0600 received badge  Notable Question (source)
2016-05-27 10:01:34 -0600 received badge  Scholar (source)
2016-05-27 10:01:22 -0600 answered a question ESX neutron bridging and ethernet padding issue

VMXNET3 didn't help. I also tried this without any Openstack packages - wiped the guest clean, reinstalled Ubuntu, created a bridge and a namespace, and connected them with a veth pair (sketch below) - and still had the same issue. I couldn't reproduce it on ESXi 6.0 or VMware Workstation 12, though it partially reproduces on another ESXi 5.1 host. It's seen only when promiscuous mode is enabled.
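
The OpenStack-free reproduction was essentially this (names and addresses are arbitrary; run as root on the guest):

# bridge with the uplink attached
$ brctl addbr br0
$ brctl addif br0 eth0
$ ip link set br0 up
# namespace connected to the bridge via a veth pair, standing in for the router
$ ip netns add ns0
$ ip link add veth0 type veth peer name veth1
$ ip link set veth1 netns ns0
$ brctl addif br0 veth0
$ ip link set veth0 up
$ ip netns exec ns0 ip link set veth1 up
$ ip netns exec ns0 ip addr add 203.0.113.10/24 dev veth1
# ARP from inside the namespace triggers the padding issue
$ ip netns exec ns0 ping 203.0.113.1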

Anyway, this is not an Openstack issue, but some weird combination of VMware networking and linux bridging. The issue is not seen on a physical server (without hypervisor software) - a neutron node works perfectly fine there - so I'll stick with that option for now.

2016-05-26 17:34:30 -0600 commented question ESX neutron bridging and ethernet padding issue

I'm not exactly sure I fully understand you (virtual machines are not my area of expertise). By paravirtualisation do you mean trying VMXNET3? If yes, that's planned for tomorrow. By the way, I tried setting up a neutron node on a physical machine, and it works perfectly fine. Looks like it's E1000

2016-05-26 14:42:46 -0600 received badge  Student (source)
2016-05-26 11:53:56 -0600 asked a question ESX neutron bridging and ethernet padding issue

I have a Mitaka lab setup. One of the nodes is a Neutron node running Ubuntu 14.04LTS as a VMware ESXi guest. It has two network interfaces - eth0 for external connectivity and eth1 for internal management and VXLAN termination. Promiscuous mode is enabled for eth0 on the VMware vSwitch. I'm using the classic approach with the linux-bridge agent.

There's a virtual router (r1) residing on this network node, provisioned with the standard neutron utility. It's connected to a project network and an external network, and has a public IP address from the external subnet. Its connectivity is handled by a bridge consisting of two interfaces - eth0 and a tap interface.

When I try to send traffic from r1 to any external IP, it first sends an ARP who-has for the gateway MAC. This ARP packet is switched by the bridge, and the bridge updates its MAC table so the r1 MAC address is associated with the tap interface. But this ARP packet is only 48 bytes long, so it has to be padded to 64 bytes (60 without the CRC) according to the rules of Ethernet. The issue is that the padding happens somewhere in the system, and the new 60-byte packet gets reinserted into the network stack of the host, so the bridge now sees this new 60-byte packet on its eth0 interface. After that the bridge updates its MAC table so the r1 MAC address is now associated with the bridge's eth0 interface. Everything that follows is normal operation: the ARP who-has packet gets sent to the network, the ARP is-at response comes back with the dst MAC of r1, the bridge makes a lookup, sees this MAC address on the incoming interface, and drops the packet according to the split-horizon rule.
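
The flapping is easy to observe by watching the bridge's forwarding table and the frame sizes on eth0 (the brq... bridge name below is a placeholder for whatever neutron created on your node):

# watch the r1 MAC move between the tap port and eth0
$ watch -n 0.2 brctl showmacs brq1a2b3c4d
# -e prints the link-level header; the reinjected copy shows up as 60 bytes on eth0
$ tcpdump -e -n -i eth0 arp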

Any ideas how to fix this behavior? The packet MUST be padded, but it MUST NOT be reinserted into the network stack in such a fashion that the bridge sees it again on eth0. Does the fact that the network node runs as a VMware guest have something to do with it? Maybe it's VMware e1000 driver behavior?