tony.pearce's profile - activity

2020-04-17 08:54:31 -0600 received badge  Famous Question (source)
2020-04-17 08:54:31 -0600 received badge  Notable Question (source)
2020-02-24 08:23:30 -0600 received badge  Notable Question (source)
2020-02-24 08:23:30 -0600 received badge  Popular Question (source)
2020-02-21 06:24:36 -0600 commented question Is it possible to tie nova compute AZ to cinder AZ?

In Horizon, I am just setting the name of the instance, network, flavour, etc. I am not setting any metadata on instance create.

2020-02-21 06:23:21 -0600 commented question Is it possible to tie nova compute AZ to cinder AZ?

...and that Cinder scheduler_default_filters value is already the default

2020-02-21 06:22:57 -0600 commented question Is it possible to tie nova compute AZ to cinder AZ?

Hi - nice to speak with you again :) I am using Horizon, and the error is that no AZ is defined (because I have default_availability_zone = No-storage-zone-specified in cinder.conf). In Horizon I am only selecting the Nova AZ. In Cinder I have "scheduler_default_filters = AvailabilityZoneFilter".

2020-02-21 05:40:32 -0600 asked a question Is it possible to tie nova compute AZ to cinder AZ?

I have 2 x compute nodes that I am testing. Each one has its own Cinder backend. In cinder.conf I have created the two backends and set:

backend 1: backend_availability_zone = test-cluster1
backend 2: backend_availability_zone = test-cluster2

In Nova I have two availability zones, named the same as the Cinder zones above. When I launch an instance I get an error that the backend AZ is not found.

What is missing from the above to achieve this?
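
For reference, here is roughly how the pieces fit together in the config files, plus the Nova option I suspect is meant to tie the two AZs (a sketch; the section names are mine, and cross_az_attach is my guess at the missing link):

# cinder.conf - one backend section per cluster
[backend1]
volume_backend_name = backend1
backend_availability_zone = test-cluster1

[backend2]
volume_backend_name = backend2
backend_availability_zone = test-cluster2

# nova.conf - require the instance AZ and volume AZ to match
[cinder]
cross_az_attach = False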

2020-01-17 12:41:54 -0600 received badge  Popular Question (source)
2020-01-15 20:12:52 -0600 commented question Using openstack in a different location - what options are there?

There is discussion of "cheesecake" replication, but apart from a 2016 proposal I don't see that it has been implemented in any OpenStack release :(

2020-01-15 20:12:13 -0600 answered a question How to force-delete a volume snapshot?
  1. Make the snapshot available:

     cinder snapshot-reset-state 13d713b4-0fda-40fa-8d13-5a44c58a7514

  2. Edit the MySQL DB:

     mysql
     use cinder;
     update snapshots set deleted=1,status='deleted',deleted_at=now(),updated_at=now() where deleted=0 and id='13d713b4-0fda-40fa-8d13-5a44c58a7514';
2020-01-15 01:04:23 -0600 asked a question How to force-delete a volume snapshot?

Previously, I had a situation where a volume was not created on the backend iSCSI array but the entry was created in the DB on OpenStack. So when we tried to delete the volume from OpenStack it would go into "error deleting" status, because (I am assuming) it could not find the volume on the backend to delete. To work around this, I deleted the entry from the Cinder DB like:

mysql> update volumes set deleted=1,status='deleted',deleted_at=now(),updated_at=now() where deleted=0 and id='8022e762-7d9f-435b-ab36-412e903c2756';

Now I have a situation where some snapshots really do exist on the backend, but it looks like a recent OS upgrade of the backend iSCSI array has changed the backend API, and the snapshot delete requests from OpenStack are no longer understood by the backend. I have manually deleted one snapshot on the backend. OpenStack still has this listed under volume snapshots.

Where can I manually delete the snapshot from openstack to make it go away?
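
For reference, the pure-CLI force-delete path looks roughly like this (<snapshot-id> is a placeholder), although in my case the backend no longer understands the delete request anyway:

cinder snapshot-reset-state --state available <snapshot-id>
cinder snapshot-delete --force <snapshot-id>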

2020-01-14 04:51:32 -0600 commented question Using openstack in a different location - what options are there?

How about:

  1. replicate the production volumes to 2nd location
  2. bring up the controller in the 2nd location
  3. update the cinder DB to point all the volumes to the new IQN and IP address

I don't want to edit the DB... is there a better option?
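
For step 3, I assume the edit would target something like the provider_location column of the volumes table, whose contents are driver-specific (purely a sketch; the addresses are examples, and the IQN would need similar treatment):

-- cinder DB: swap the old portal IP for the new one
update volumes set provider_location = replace(provider_location, '192.168.1.50', '192.168.2.50') where deleted = 0;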

2020-01-14 00:05:23 -0600 asked a question Using openstack in a different location - what options are there?

I have a working 3-node openstack environment with storage array:

  • 1 x virtual controller
  • 2 x physical hosts
  • storage backed by Cinder / iSCSI against a Nimble Storage array, which uses the nimble-cinder API.

I want to be able to use openstack if this environment is down, such as power or internet failure.

I have another physical location with another Nimble Storage array and some empty physical hosts. Moving the controller to the other location is easy, because it's a VM and is taken care of by the host it runs on. But I am confused about the other components, such as:

  • physical host
  • data storage volumes

Regarding data storage, all of the instances have volumes which reside on the nimble array and are accessed via iscsi from the controller and the hosts.

-> does OpenStack have any way of copying data from one storage array to another?

I can enable volume replication on the Nimble side. However, the IQN and storage array IP addresses are different at the other site, so I don't see how it would be useful.

I guess maybe I should be asking if there are any examples of running openstack across 2 sites with 2 separate storage arrays, and how it's achievable.
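
From what I have read so far, the closest built-in feature is Cinder volume replication ("cheesecake"): the backend section gets a replication_device entry pointing at the second array, and failover is triggered per host. Roughly like this (all values are illustrative and the exact keys are driver-specific):

# cinder.conf, in the Nimble backend section
replication_device = backend_id:site2-array,san_ip:192.168.2.50,san_login:admin,san_password:secret

# fail all volumes on this backend over to the secondary array
cinder failover-host controller@nimble --backend_id site2-array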

2019-04-18 01:19:48 -0600 received badge  Notable Question (source)
2019-04-18 01:19:48 -0600 received badge  Popular Question (source)
2019-02-14 22:24:02 -0600 commented question How do tenant networks work for vxlan and gre tunnels?

bumpy bump

2019-02-14 03:22:34 -0600 asked a question How do tenant networks work for vxlan and gre tunnels?

I have a 2 node openstack setup with a single controller. All is working right now. The 2 hosts use GRE tunnels on an interface labelled as "tenant" which came from the TripleO installation guide. When I run "ovs-vsctl show" I can see:

Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "gre-c0a80c68"
            Interface "gre-c0a80c68"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.12.105", out_key=flow, remote_ip="192.168.12.104"}
        Port "gre-c0a80c6c"
            Interface "gre-c0a80c6c"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.12.105", out_key=flow, remote_ip="192.168.12.108"}

The "br-tun" interface is what maps to the physical interface which is labelled as the "tenant". As I understand this, by default it connects all the hosts together via vxlan so that the VMs running on the hosts can have access to oher VMs in the same tenant (openstack network) which is separate from the real underlying network of the host. For example, openstack project has VMs running in network 10.0.0.0/24. VM1 runs on host 1 and VM2 runs on host 2. The 2 VMs can communicate on their own network IP addresses between them, say 10.0.0.1 and 10.0.0.2 and the underlying host does not know anything about 10.0.0.0/24 network as it uses (for example) 192.168.12.0/24.

Now what I would like to do is install a 3rd host in another site. I would need the hosts to establish vxlan (or gre) between them.

But here is the problem I foresee: the hosts use this network as a flat network, 192.168.12.0/24. There is no default route on this network interface; it's on another interface. So how would the hosts connect the GRE/VXLAN here?

I haven't tried to install a 3rd host in another location yet. Do I need another overlay network on top of this to stretch the 192.168.12.0/24 layer 2 to all hosts? Or can the OpenStack hosts use their default gateway to route to the remote host? Or does the installation script set up static host routes on the hosts via the br-tun interface so they can set up the GRE/VXLAN tunnel?

I'm a bit confused about how this would work.
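
For reference, each host advertises its tunnel endpoint via local_ip in the OVS agent configuration, and the two endpoints only need IP reachability between them - routed or not (file path assumed from a standard ML2/OVS setup; I have not verified where TripleO puts this):

# /etc/neutron/plugins/ml2/openvswitch_agent.ini on each host
[ovs]
local_ip = 192.168.12.105    # this host's tunnel endpoint IP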

Thanks

2018-09-15 09:55:48 -0600 received badge  Famous Question (source)
2018-09-04 18:06:22 -0600 commented answer openstack queens octavia load balancer

Hi, thanks for your message. Is the Octavia LB only a feature of RDO? We managed to fix the issue somehow. I did run the DB upgrade command again previously and it seemed to go OK. It may have been an amphora issue. The appalling documentation has been the main issue here. Still unable to get Horizon working.

2018-09-04 10:02:05 -0600 received badge  Notable Question (source)
2018-08-26 03:22:47 -0600 received badge  Popular Question (source)
2018-08-24 13:11:47 -0600 received badge  Notable Question (source)
2018-08-24 13:11:47 -0600 received badge  Popular Question (source)
2018-08-21 20:53:11 -0600 commented question openstack queens octavia load balancer

I logged into the DB and ran some checks. My DB skills are a bit weak, but the amphora_id column has a default of NULL. What I could find out from Mrs. Google was that this is normal, but I do not understand why it's throwing an error.
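
For reference, the kind of check I mean (assuming the DB is named octavia):

mysql> use octavia;
mysql> show columns from amphora_health;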

2018-08-21 19:57:03 -0600 asked a question openstack queens octavia load balancer

We have an all-in-one deployment of OpenStack Queens using RDO. We have tried to set up Octavia, but the documentation is a bit weak. The Horizon link to the load balancers page is visible on the dashboard, but when we click on it the page just keeps showing the spinning loading icon and then refreshing. It looks like the link in Horizon is added after installing the "openstack-octavia-ui" package from yum, but I'm not sure, and I have been unable to find out whether specific dashboard configuration needs to be done.

Secondly, we have been unable to activate a deployed load balancer even after configuring a listener, pool and health check.

$ openstack loadbalancer show tplb
WARNING: Failed to import plugin orchestration.
WARNING: Failed to import plugin baremetal.
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2018-08-21T06:56:40                  |
| description         |                                      |
| flavor              |                                      |
| id                  | e13afbb7-5810-4a37-9a21-ef9b7290062e |
| listeners           | 6b31cd53-abec-46e7-ab60-ce2d30a580c6 |
| name                | tplb                                 |
| operating_status    | OFFLINE                              |
| pools               | 344147be-f3cf-4798-ad97-089fdfdf24f1 |
| project_id          | a3b5c4b6770c4d4dabbde06c55c5d0d1     |
| provider            | octavia                              |
| provisioning_status | ACTIVE                               |
| updated_at          | 2018-08-21T07:09:00                  |
| vip_address         | 172.16.1.106                         |
| vip_network_id      | None                                 |
| vip_port_id         | 18554215-345f-49c1-ba31-6bfaf2a9ad59 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 9a3c0f05-4b58-41a8-bddb-11fa72683bc1 |
+---------------------+--------------------------------------+

The operating status always says offline. Checking the logs, we see this in the health-manager.log:

2018-08-21 13:35:06.813 196242 ERROR octavia.controller.healthmanager.update_db [-] update_health encountered an unknown error: DBError: (pymysql.err.InternalError) (1364, u"Field 'amphora_id' doesn't have a default value") [SQL: u'INSERT INTO amphora_health (last_update, busy) VALUES (%(last_update)s, %(busy)s)'] [parameters: {'busy': 0, 'last_update': datetime.datetime(2018, 8, 21, 8, 5, 6, 805390)}] (Background on this error at: http://sqlalche.me/e/2j85)

I didn't set this up, and trying to go through an install guide for Queens has been difficult. I would say it could be a misconfiguration somewhere, but for lack of a proper guide I am a bit stuck.
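
For what it's worth, the DB upgrade command we re-ran while chasing this (assuming it is the right tool for the Octavia schema):

octavia-db-manage upgrade head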

2018-05-04 02:08:52 -0600 received badge  Famous Question (source)
2018-04-25 10:53:39 -0600 received badge  Notable Question (source)
2018-04-25 10:53:39 -0600 received badge  Famous Question (source)
2018-04-25 10:53:39 -0600 received badge  Popular Question (source)
2018-03-16 04:30:09 -0600 received badge  Famous Question (source)
2018-03-16 04:30:09 -0600 received badge  Notable Question (source)
2018-03-05 17:34:49 -0600 received badge  Notable Question (source)
2018-02-14 22:01:55 -0600 asked a question How to debug a node install?

I am running into an issue with deployment. To put it simply, I tag the controller node with the control profile; however, during the deployment the compute role is installed on the controller node and consequently the deployment fails.

openstack overcloud profiles list

+--------------------------------------+--------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name    | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+--------------+-----------------+-----------------+-------------------+
| 29f8a4b9-3f48-4079-9fba-afe868660e7f | Controller-0 | available       | control         |                   |
| 42b6c88f-bab6-4b7d-a011-fa181b33ee4b | Compute-0    | available       | compute         |                   |
| 2e46ce3d-b42f-4f03-8f16-cd4758fe0eae | Compute-1    | available       | compute         |                   |
+--------------------------------------+--------------+-----------------+-----------------+-------------------+

As per the above output, you can see that Controller-0 has the control profile assigned.

When I run the deploy command, I monitor ironic node-list and manually power on the controller node. I see the node status go into "deploying" when the controller node has PXE booted and is being installed. So at this point things look good: OpenStack is showing the controller node being installed, and the only baremetal node powered on is the controller node. But after the install completes and the node is powered off then back on, it boots up with a hostname of "compute".

How can I debug this? I cannot get this deployed for this very reason :(
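
For anyone checking the same thing, the capabilities assigned to a node can be inspected like this (UUID from the profile list above):

openstack baremetal node show 29f8a4b9-3f48-4079-9fba-afe868660e7f -f value -c properties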

2018-02-14 21:56:05 -0600 received badge  Popular Question (source)
2018-02-14 01:13:45 -0600 asked a question Why is node placement not working?

I'm deploying OpenStack. I have 3 nodes: 1 x controller and 2 x compute. Deployment fails because OpenStack tries to install the controller role onto one of the compute nodes. The reason for the failure is the difference in network adapters: I have a controller.yaml with the controller's network defined, which must be used for the controller node install; however, openstack overcloud deploy does not do this.

I have tried "tag" the nodes using the tripleo gui but this had no positive effect. I have also tried running "openstack baremetal node set 06149789-78d1-4eb8-ac41-d908312a912c --property capabilities='node:controller-0,boot_option:local'" but this also had no positive effect.

| properties | {u'memory_mb': u'16384', u'cpu_arch': u'x86_64', u'local_gb': u'69', u'cpus': u'8', u'capabilities': u'node:controller-0,boot_option:local,profile:control'} |

How can I instruct OpenStack to specifically deploy the controller role onto the controller node, and likewise the compute roles onto the compute nodes? This is wasting a lot of time, because I don't know that the deployment will fail until the nodes boot up after installation. It takes a good 45 minutes to get to that point and another 10 minutes to delete the deployment and re-try.

Yours sincerely, Stressed,
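
For reference, the TripleO docs describe predictable node placement via scheduler hints in an environment file, paired with the per-node node:controller-0 capability I set above (a sketch based on the docs; the filename is mine and this is untested here):

# placement.yaml, passed with 'openstack overcloud deploy ... -e placement.yaml'
parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
  NovaComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'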

2018-02-05 01:31:24 -0600 received badge  Popular Question (source)
2018-02-01 23:55:58 -0600 answered a question Tenant gre network broken

I fixed it. The first problem was that I was creating the br-tun interface bridge in my yaml file - I believe that was incorrect. I also had a separate bridge called br-tenant which I had set up for provider VLANs, and I think it was clashing, as I saw some ovs-vsctl show output referencing this bridge. I renamed this bridge to br-vlans and redeployed.

I did finally have to edit the security group to allow ICMP. The floating IP is now working. I can move on to the next item on the list.

Thanks Mr Bausch again for your help :)
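
For anyone following along, the security group change was roughly this (assuming the group is named "default"):

openstack security group rule create --protocol icmp --ingress default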

2018-02-01 17:23:49 -0600 commented question Tenant gre network broken

Hmm, so it deployed successfully and I can ping on the interface. But "br-tun" was created even though I did not include any bridge called "br-tun" in my yaml files. How should this be done? It looks like I have it wrong and am causing problems for myself :)

2018-02-01 08:50:57 -0600 commented question Tenant gre network broken

I'm going to try to deploy with the tenant interface as a normal interface (no OVS bridge), but I have a feeling the deployment will fail, because I believe the bridge is required for Open vSwitch to manage the ports on it. :/

2018-02-01 08:25:55 -0600 commented question Tenant gre network broken

When the vxlan or gre tunnel tries to come up, what happens to the bridge interface? For me it seems to go down when this happens. I can reboot the node and, while it's booting up, log in via ssh and ping out of the br-tun. When the services start, ping stops. It seems something is breaking it?

2018-02-01 08:24:42 -0600 commented question Tenant gre network broken

..in this setup, so I can't test different hosts at the moment :( The security group is the default one and allows any IPv4 inbound. I cannot get an ssh session either.

When you try to ping the instance, the OpenStack router sends an ICMP reply of "destination host unreachable".

Can you tell me - when the vxlan or gre

2018-02-01 08:23:04 -0600 commented question Tenant gre network broken

I just re-deployed using vxlan and the same interface (br-tun); I cannot ping anything on that interface after deployment. For the questions: the instances cannot reach the external network. Like I mentioned, I can ping the router's interfaces, just not the instances. I only have 1 compute host in

2018-02-01 02:34:22 -0600 commented question Tenant gre network broken

I tested without cloning the MAC and there was no change.

2018-02-01 01:54:49 -0600 commented question Tenant gre network broken

For the OVS bridge "br-tun" I am cloning the MAC address from the physical interface. Is this a problem?

2018-02-01 01:54:49 -0600 received badge  Commentator
2018-02-01 00:48:09 -0600 asked a question Tenant gre network broken

I have an OpenStack TripleO deployment with network isolation. I have a bridge called "br-tun" which I have set up for the 'Tenant' network.

I have one main problem: I cannot ping the instance VMs using the floating IP. I think I have traced this back to the tenant GRE network not working, and further back to something amiss with the bridge - but follow me here and you'll see why.

For the instances I have this network layout: real network switch ---- OpenStack router ---- local (GRE network)

I can ping the OpenStack router, which is using 192.168.20.108/24. I have assigned a floating IP to the instance, which is 192.168.20.102.

When I try to ping the floating IP of the instance, I get a "destination host unreachable" response from the floating IP, which says that the OpenStack router can't communicate with the instance. As the instance runs on the compute host, while the floating IP and external network are connected to the controller node, it is the GRE tunnel that provides the connectivity to the instance.

What I found is that I cannot ping the br-tun interface of one node from the other node, i.e. the controller cannot ping the compute node and vice versa. I don't know if this is normal, but it doesn't seem normal, as I can ping all of the other interfaces with IPs. In addition, each node cannot ping the network switch on the tenant network. I have the switch set up as a layer 3 gateway, although the nodes do not have a gateway on this network; I was using the switch IP to test connectivity. Also, running 'arp -an' shows that all IPs on the br-tun interface are <incomplete>, which means broadcasts are not traversing.

I find that if I do 'ifdown ifcfg-br-tun' followed by 'ifdown ifcfg-eth5', and then 'ifup ifcfg-br-tun' and 'ifup ifcfg-eth5', I can then ping the network switch and ARP now populates in the table, but "ovs-vsctl show" shows that there is no gre tunnel.

So I decided to reboot the nodes, thinking there was no real issue and all would be fixed with a mass service restart. But once the nodes were up again I found I was back to square one, with no working tenant network.
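
One check that may help others here: watch for GRE packets (IP protocol 47) leaving the underlying NIC - eth5 in my case, per the ifup/ifdown commands above:

tcpdump -ni eth5 ip proto 47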

Before I down and up the interfaces:

Controller ping

[root@overcloud-controller-0 heat-admin]# ping 192.168.12.1
PING 192.168.12.1 (192.168.12.1) 56(84) bytes of data.
^C
--- 192.168.12.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

And the arp:

[root@overcloud-controller-0 heat-admin]# arp -an | grep 192.168.12
? (192.168.12.107) at <incomplete> on br-tun
? (192.168.12.1) at <incomplete> on br-tun

ovs-vsctl show for the br-tun

Bridge br-tun
            Controller "tcp:127.0.0.1:6633"
                is_connected: true
            fail_mode: secure
            Port "gre-c0a80c6b"
                Interface "gre-c0a80c6b"
                    type: gre
                    options: {df_default="true", in_key ...
(more)
2018-01-31 13:49:20 -0600 received badge  Famous Question (source)