alp317's profile - activity

2018-08-09 06:14:33 -0500 answered a question OpenStack-Ansible and OVS

You are mapping the vlan provider to bridge br-vlan (network_mappings: "vlan:br-vlan,vxlan:br-vxlan"). That means there must be an OVS bridge called "br-vlan" on the compute and controller nodes; it is responsible for traffic into and out of the nodes for OpenStack networks that use the "vlan" provider.

Note: there is a difference between a VLAN-type network and a network provider. You are also using "vlan" as the provider name, which can lead to confusion.
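For context, the mapping above usually corresponds to a provider_networks entry like the following in openstack_user_config.yml (a sketch only; the VLAN range and group binds are assumptions, adjust them to your deployment):

    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        type: "vlan"
        range: "100:200"        # assumed VLAN ID range
        net_name: "vlan"        # the provider name referenced in network_mappings
        group_binds:
          - neutron_openvswitch_agent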

2018-05-31 08:24:28 -0500 received badge  Popular Question (source)
2018-03-26 06:39:09 -0500 commented question openstack with opencontrail installation

Did you find any solution?

2018-03-26 06:39:09 -0500 received badge  Commentator
2018-01-26 05:35:54 -0500 commented question Heat AutoScalingGroup not returning outputs

From Horizon, when I view all the resources, I can see "asg". When I open that up, it contains another resource which points to the nested template. If I open that nested resource, I can see that it generated the outputs; however, they are not passed up to "asg".

2018-01-26 05:30:13 -0500 commented question Heat AutoScalingGroup not returning outputs

If the autoscaling group resource name is "asg", then I do get_attr: [asg]. This shows that attributes like outputs and outputs_list are empty, along with current_size. I am using nested templates; the nested template contains a load-balanced server, and there are outputs declared in that template.

2018-01-25 07:23:43 -0500 received badge  Famous Question (source)
2018-01-23 10:58:32 -0500 received badge  Notable Question (source)
2018-01-22 07:56:25 -0500 received badge  Popular Question (source)
2018-01-19 22:04:13 -0500 commented question Heat AutoScalingGroup not returning outputs

Yes, and the scaling part is working. But I am more concerned about the outputs, as I am planning to pass them to other SoftwareConfigs. I am using OpenStack Newton.

2018-01-19 13:18:30 -0500 asked a question Heat AutoScalingGroup not returning outputs

I am using AutoScalingGroup to auto-scale a Galera cluster, which also includes load balancing. I found an autoscaling example that I am able to run successfully; however, the challenge I am facing is that the AutoScalingGroup is not getting any outputs from the nested "lb_server.yaml" template. The outputs are defined, but they are not delivered to the AutoScalingGroup.

A similar question has been asked before, but it's pretty old and there is no answer on it.
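For reference, a Heat AutoScalingGroup normally surfaces the nested template's outputs through its outputs_list attribute, keyed by output name. A minimal sketch (assuming lb_server.yaml declares an output called "server_ip"; all names here are illustrative, not taken from the questioner's templates):

    resources:
      asg:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 3
          resource:
            type: lb_server.yaml

    outputs:
      member_ips:
        description: server_ip output of every group member
        value: {get_attr: [asg, outputs_list, server_ip]}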

2018-01-19 08:21:06 -0500 commented question Heat autoSacalingGroup attributes current_size, outputs, and outputs_list are always blank

I am facing a similar issue; did you ever get a fix?

2018-01-15 01:26:44 -0500 commented question Can I deploy cinder backends during overcloud deploy? (cinder.conf)

What installer are you using?

2017-12-18 08:34:52 -0500 commented question OVS DPDK poor performance

RedHat Director RHOSP 10.

2017-12-18 08:32:19 -0500 received badge  Famous Question (source)
2017-12-18 08:28:00 -0500 asked a question NodeUserData continues even if one SoftwareConfig fails

I am using first boot scripts to generate and configure parameters for my environment.

I have a first-boot.yaml file, that contains a MultipartMime.

    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: generate}
        - config: {get_resource: configure}

The generate resource is a Python script that generates some values and places them in a file. Similarly, the configure resource is a bash script that reads the values and configures them.

    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |

Now, this works fine in most scenarios, but sometimes the Python script fails because it cannot find the desired file.

At this point I want my stack creation (overcloud deployment) to fail. But it keeps going, and then the configure script also fails. Can anyone please tell me how to add checks here that ensure the Python script exited with status code 0?
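One common first step (a sketch only, not the questioner's actual script; the input path is a hypothetical placeholder) is to make the generate script exit non-zero explicitly when its input is missing, so the failure is at least visible in the part's exit status. Note that multipart cloud-init parts do not necessarily abort the whole sequence on a non-zero exit, so the later parts may still need to check for the generated file themselves:

```python
import os
import sys

# Hypothetical input file the generate step depends on.
VALUES_FILE = "/var/lib/heat-config/input-values"

def generate(path):
    """Generate values from `path`; return 0 on success, 1 if it is missing."""
    if not os.path.exists(path):
        sys.stderr.write("generate: input file missing: %s\n" % path)
        return 1
    # ... read `path` and write the generated values here ...
    return 0

if __name__ == "__main__":
    sys.exit(generate(VALUES_FILE))
```

The configure script can apply the same pattern: test for the values file at the top and exit 1 immediately if it is absent, instead of failing halfway through.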

2017-08-11 04:30:37 -0500 received badge  Enthusiast
2017-08-09 04:25:30 -0500 received badge  Famous Question (source)
2017-08-08 04:15:17 -0500 received badge  Notable Question (source)
2017-08-08 04:15:17 -0500 received badge  Popular Question (source)
2017-08-07 02:25:14 -0500 asked a question OVS DPDK poor performance

I was experimenting with an OVS-DPDK based deployment of OpenStack (using TripleO). The documentation I followed suggested using different datapaths for the control plane networks and the tenant (VM) networks; for the control plane networks, Linux bonds were used. While experimenting, I deployed OpenStack with the internal API network (a control plane network) on OVS-DPDK and, to my surprise, the network was working. But after testing it I found it was giving very poor performance, around 150-250 Mbit/s on a 20 Gbit/s bonded network, while the networks on the Linux bonds were working fine. After reading a lot of questions on forums I couldn't find an answer to this, other than a casual mention that it's a rule of thumb not to use kernel and DPDK datapath ports on the same bridge. Also, using ethtool I found that the tagged VLAN network I created over the DPDK bridge shows a link speed of 10 Mb/s. Can someone please explain what's happening here?

2017-08-07 00:56:17 -0500 commented answer Metadata cpu policy dedicated

I finally figured it out. Nova's default thread policy is "require"; that means it will only use vCPUs whose sibling is also present in the Nova vCPU list. As I was dedicating the first 8 (0..7) to the host, their siblings were not used by Nova, hence the missing 8 vCPUs. The solution was to use sibling sets.

2017-08-07 00:52:19 -0500 received badge  Notable Question (source)
2017-08-07 00:52:19 -0500 received badge  Popular Question (source)
2017-03-17 07:58:10 -0500 commented answer Metadata cpu policy dedicated

Information about NUMA Node 1. "cpuset": [9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39], "pinned_cpus": [33, 35, 37, 39, 9, 11, 13, 15, 17, 19, 29, 31],

2017-03-17 07:39:43 -0500 commented answer Metadata cpu policy dedicated

Yes, I verified that the instances are pinned to a single NUMA node by looking at the cpuset tag inside libvirt.xml, and 4 vCPUs are free on both nodes. I can see that it created 3 instances on each NUMA node.

2017-03-17 04:56:29 -0500 asked a question Metadata cpu policy dedicated

I have a compute node with 40 vCPUs. I assigned 32 to the Nova scheduler and isolated those vCPUs using isolcpus. Now I am creating VMs with the flavor metadata hw:cpu_policy=dedicated and hw:numa_nodes=1, but I can only spawn 6 instances of 4 vCPUs each, i.e. 24 vCPUs in use. In Horizon, under hypervisors, I can see that I still have 8 vCPUs free, but I cannot spawn a 7th instance using the same flavor.