
Frank_SU_HH's profile - activity

2017-12-10 10:52:08 -0500 received badge  Famous Question (source)
2017-10-06 03:52:06 -0500 received badge  Notable Question (source)
2017-09-20 02:15:39 -0500 received badge  Popular Question (source)
2017-09-19 15:43:56 -0500 commented answer query OS::Aodh::GnocchiAggregationByResourcesAlarm wrong project_id

Hi, thanks for your answer. Unfortunately this part of the query is generated by the module itself, so you can't just change it. I'm afraid I have to say the module seems a bit buggy.

2017-09-19 15:43:16 -0500 answered a question query OS::Aodh::GnocchiAggregationByResourcesAlarm wrong project_id

Hi, thanks for your answer. Unfortunately this part of the query is generated by the module itself, so you can't just change it. I'm afraid I have to say the module seems a bit buggy. After changing the query inside the database, the non-working first part of the query, "{"and": [{"=": {"created_by_project_id": "***"}},", gets added yet another time when you suspend and restart the stack. I will take a closer look at the code as soon as I find some extra time. There has to be a better way to solve this.

cheers Frank

2017-09-14 02:26:12 -0500 asked a question query OS::Aodh::GnocchiAggregationByResourcesAlarm wrong project_id

Hi there, I'm facing problems trying to set up a Heat autoscaling group that reacts on an OS::Aodh::GnocchiAggregationByResourcesAlarm.

The resulting alarm uses a query to find the right metrics to inspect. One part of it is the stack_id, aka server_group, which gets attached to the alarm by either the str_replace or list_join intrinsic function. That part is no problem. The actual problem is that parts of the query are hardwired, so the resulting JSON string is something like

{"and": [{"=": {"created_by_project_id": "*********"}}, {"=": {"server_group": "******"}}]}

with the project_id being the ID of the project that builds the Heat stack. Since at least Ocata, resources of type instance are polled by a ceilometer compute agent running on every compute node, which leads to the resources being created by the service project_id. So I end up with alarms that query the wrong created_by_project_id and have to change the ID (or delete this part of the query) at the database level. For example, one query snippet I am using:

str_replace:
  template: '{"=": {"server_group": "stack_id"}}'
  params:
    stack_id: {get_param: "OS::stack_id"}
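
For context, here is a minimal sketch of how such an alarm resource can be wired into the template (resource names, metric and threshold values are illustrative placeholders, not my real template):

scale_up_alarm:
  type: OS::Aodh::GnocchiAggregationByResourcesAlarm
  properties:
    metric: cpu_util
    aggregation_method: mean
    granularity: 300
    evaluation_periods: 1
    threshold: 80
    comparison_operator: gt
    resource_type: instance
    query:
      str_replace:
        template: '{"=": {"server_group": "stack_id"}}'
        params:
          stack_id: {get_param: "OS::stack_id"}
    alarm_actions:
      - {get_attr: [scale_up_policy, alarm_url]}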

Help is highly appreciated. Cheers mates ;-)

2017-07-03 05:19:39 -0500 received badge  Famous Question (source)
2017-07-03 05:19:39 -0500 received badge  Notable Question (source)
2017-06-21 10:40:32 -0500 answered a question nova commands returns "ERROR (ConnectFailure): Unable to establish connection to http://controller:8774/v2.1/"

Well, this can't possibly be a short answer: you have most certainly replaced parts of the Python libraries used by OpenStack while installing gnocchi with "pip install", I presume. You installed the stack first from one of the repositories for your distribution, and now pip has overwritten at least keystone and, I guess, crucial parts of the oslo stack, and so on.

As this is your controller, restarting other services will likely fail as well.

You can try

su - nova -s /bin/bash
nova-api --config-file /etc/nova/nova.conf

or nova-conductor as well; it will print error messages showing what's wrong and/or missing.
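
To get an overview of what pip has pulled in on top of the distribution packages, something along these lines can help (just an illustration; adjust the package names to your setup):

# python packages installed or overwritten by pip
pip list 2>/dev/null | grep -iE 'keystone|oslo|nova|neutron|gnocchi'
# compare with what the distro package manager installed (Ubuntu/Debian)
dpkg -l | grep -E 'python-(keystone|oslo|nova|neutron)'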

2017-06-14 03:40:29 -0500 answered a question Nova can't authenticate to neutron

Any errors inside the nova-api log? To me it looks like you have an error inside the nova.conf on your controller. This is the [neutron] section of mine:

[neutron]
url = http://controller:9696
region_name = RegionOne
auth_url = http://controller:35357
auth_type = password
default_domain_name = default
project_domain_name = default
user_domain_name = default
username = neutron
password = niftypassword
project_name = service
service_metadata_proxy = True
metadata_proxy_shared_secret=niftysecret

I mean, your dashboard sets up networks with the neutronclient directly, so any errors inside nova.conf will not affect that.
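
A quick check for the nova-to-neutron side (assuming the default log location; the path may differ on your distribution):

# look for neutron authentication failures in the nova-api log on the controller
grep -iE 'neutron.*(unauthorized|401|authentication)' /var/log/nova/nova-api.log | tail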

2017-06-14 02:43:00 -0500 commented answer the openstack install guide says that the provider network needs an ip but then uses a special config without an ip

The address range and gateway IP will be needed when you define the subnet for the external network. That's it.

2017-06-14 02:38:21 -0500 commented answer the openstack install guide says that the provider network needs an ip but then uses a special config without an ip

No static or dynamic IP, nothing. The gateway (203.0.113.1 in the example) is located on a router of your uplink provider (company, networking department, whatever). You set

[linux_bridge]
physical_interface_mappings = provider:eth0

inside /etc/neutron/plugins/ml2/linuxbridge_agent.ini

2017-06-14 02:34:19 -0500 commented answer the openstack install guide says that the provider network needs an ip but then uses a special config without an ip

I'll have a try: you want to provide external addresses for your instances, so you need to have an interface connected to the internet; let's say its name is eth0.

auto eth0
iface eth0 inet manual
        up ip link set dev $IFACE up
        down ip link set dev $IFACE down
2017-06-13 07:13:12 -0500 answered a question the openstack install guide says that the provider network needs an ip but then uses a special config without an ip

In this case, the routing for 203.0.113.0/24 is done by an external router (yours or your provider's) via the gateway 203.0.113.1.

So 203.0.113.1 is not inside your OpenStack environment. When you configure the provider net, you tell neutron about the subnet and the gateway address (more parameters are possible, like a VLAN ID). When you set up a router inside your OpenStack environment, you need to add a gateway, which will be located inside the provider net. This router will then receive a public IP from inside this net (e.g. 203.0.113.2). If everything is correct, you will be able to ping that IP. The router's IP exists inside a so-called namespace. You can inspect the namespaces with the command ip netns. You may see something like

qrouter-a2da1577-b986-4af9-bbd6-9718f6294920

With "ip netns exec qrouter-a2da1577-b986-4af9-bbd6-9718f6294920 ip a" you will then be able to see the router's gateway IP.

Physically, the interface only needs to be brought up manually; that's why:

iface INTERFACE_NAME inet manual
ip link set dev $IFACE up
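
To sketch the neutron side of this, creating the provider network and its subnet could look roughly like the following (assuming the physical network is mapped as "provider"; names and ranges are placeholders):

openstack network create --external --provider-network-type flat \
    --provider-physical-network provider provider-net
openstack subnet create --network provider-net --subnet-range 203.0.113.0/24 \
    --gateway 203.0.113.1 --no-dhcp \
    --allocation-pool start=203.0.113.10,end=203.0.113.250 provider-subnet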

2017-06-13 06:48:51 -0500 commented answer Couldn't get network list while using keystone v3 API for authentication

I think one user can only be located inside one domain: UNIQUE KEY ixu_user_id_domain_id (id, domain_id), KEY domain_id (domain_id). Only the admin of the default domain can be superuser of all domains (as stated inside the policy.json for domains). You can handle it differently, but at your own risk!

2017-06-13 06:43:29 -0500 commented answer Couldn't get network list while using keystone v3 API for authentication

Hi, it's "default" for the domain_id and "Default" for the domain_name. And it is, like it says, the default domain if you don't want to use the feature at all. E.g. you can configure your Horizon not to ask for a domain but to imply that all projects are located inside the default domain.

2017-06-12 09:03:08 -0500 answered a question Couldn't get network list while using keystone v3 API for authentication

For keystone v3 you have to set project and user domain_name or domain_id. You can use the domain_id "default".

Or you use v2.

For v3, the default domain is used for everything that does not have another domain defined. And the cloud admin (superuser) resides in the default domain, too.

My example with Keystone v3 (a slightly different approach via the keystoneclient, but almost the same thing):
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

project = "admin"
user = "admin"
password = 'xxxxxxxxxxxxxxx'
auth_url = 'http://aa.bb.cc.dd:35357/v3'

# v3 password auth plugin with user and project domain set to the default domain
auth = v3.Password(auth_url=auth_url, username=user, password=password,
                   project_name=project, user_domain_id="default",
                   project_domain_id="default")
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
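
Since the original question was about the network list, the same session can then be passed to the network client. A sketch, assuming python-neutronclient is installed:

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(session=sess)
# should return the network list if the v3 authentication above worked
print(neutron.list_networks())
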
2017-06-12 07:44:34 -0500 answered a question IP not been configured in VM os, but openstanck assign IP for it

Hi, the errors indicate a Python error, not an OpenStack error: 'ascii' codec can't encode characters in position 31-35: ordinal not in range(128)

As the Python code inside the OpenStack modules usually works, maybe your locales are missing something?
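
A quick way to check and, if needed, fix the locale (assuming an Ubuntu/Debian host; a UTF-8 locale usually avoids this UnicodeEncodeError):

# show the active locale settings; a plain "C"/"POSIX" locale is ascii-only
locale
# generate and activate a UTF-8 locale
sudo locale-gen en_US.UTF-8
sudo update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8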

greetings

Frank

2017-06-08 20:36:07 -0500 received badge  Popular Question (source)
2017-06-07 23:48:17 -0500 received badge  Notable Question (source)
2017-06-07 23:47:09 -0500 commented answer Where to find openstack-dashboard.conf in the latest DevStack release?

Hi, as you have complete paths, not relative ones, for WSGIScriptAlias and Alias, you don't need a DocumentRoot; it will be the server's default then. SSL termination on a proxy means the SSL session terminates on a load balancer in front of the dashboard, which balances the requests as plain HTTP.

2017-06-07 03:54:33 -0500 answered a question Ceilometer - Error: An error occurred. Please try again later

Hi, do you see anything inside the Apache error log?

2017-06-07 03:51:34 -0500 answered a question Where to find openstack-dashboard.conf in the latest DevStack release?

Hi, our SSL terminates on a proxy now, but we had something like:

<VirtualHost *:443>
RewriteEngine on
RewriteRule ^/$ https://%{HTTP_HOST}/horizon [L,R]
ServerName horizon.example.com
SSLEngine On
SSLCertificateFile /etc/ssl/certs/example.com.crt
SSLCACertificateFile /etc/ssl/certs/example.com.ca-bundle
SSLCertificateKeyFile /etc/ssl/private/example.com.key
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown

#HTTP Strict Transport Security (HSTS) enforces that all communications
# with a server go over SSL. This mitigates the threat from attacks such
# as SSL-Strip which replaces links on the wire, stripping away https prefixes
# and potentially allowing an attacker to view confidential information on the
# wire
Header add Strict-Transport-Security "max-age=15768000"
# same as for http here
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=horizon group=horizon processes=4 threads=10
WSGIProcessGroup horizon
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
# For Apache http server 2.2 and earlier:
#Order allow,deny
#Allow from all

# For Apache http server 2.4 and later:
Require all granted
</Directory>

CustomLog /var/log/apache2/openstack_access_log common
</VirtualHost>
2017-06-01 02:43:01 -0500 asked a question gnocchi dispatcher doesn't work for resource-type "instance" (SOLVED)

Hi there! My setup is an Ocata OpenStack cluster with nova, neutron, heat, cinder, glance and ceilometer/gnocchi on Ubuntu Xenial.

I've installed everything from the ubuntu-cloud archive first; now I am trying the stable/ocata branch of ceilometer from git. Everything works fine, except that the "type: instance" metrics are not getting into the gnocchi dispatcher. The ceilometer-collector error is e.g.: ERROR ceilometer.dispatcher.gnocchi [-] Error creating resource 6124c75b-ed7c-4cb3-82e1-98bfd36af874: Invalid input: required key not provided @ data[u'flavor_id'] (HTTP 400)

The part of gnocchi_resources.yaml responsible for the flavor_id key would be:

  - resource_type: instance
    metrics:
      - 'instance'
      - 'memory'
      - 'memory.usage'
      - 'memory.resident'
      - 'vcpus'
      - 'cpu'
      - 'cpu.delta'
      - 'cpu_util'
      - 'disk.root.size'
      - 'disk.ephemeral.size'
      - 'disk.read.requests'
      - 'disk.read.requests.rate'
      - 'disk.write.requests'
      - 'disk.write.requests.rate'
      - 'disk.read.bytes'
      - 'disk.read.bytes.rate'
      - 'disk.write.bytes'
      - 'disk.write.bytes.rate'
      - 'disk.latency'
      - 'disk.iops'
      - 'disk.capacity'
      - 'disk.allocation'
      - 'disk.usage'
    attributes:
      host: resource_metadata.host
      image_ref: resource_metadata.image_ref
      display_name: resource_metadata.display_name
      flavor_id: resource_metadata.(instance_flavor_id|(flavor.id))
[...]

the gnocchi dispatcher parameters inside ceilometer.conf are

[DEFAULT]
meter_dispatchers = gnocchi
event_dispatchers = gnocchi

[dispatcher_gnocchi]
filter_project = service
filter_service_activity = False
archive_policy = low
resources_definition_file = gnocchi_resources.yaml

Only, it's just not happening. I have resources like instance_disk or network, and numerous others, but no resource of type instance. The resource type itself does exist, of course. Any suggestions?

Thanks a lot, Frank

UPDATE:

I'm now able to meter resources of type "instance" as well. It obviously is an issue with the instance_discovery_method for the compute polling agent. The libvirt_metadata method seems not to work, so I've switched back to workload_partitioning.

[compute]
workload_partitioning=true

instance_discovery_method = workload_partitioning

I don't know if both lines are needed, but they won't do any harm.

So, my working setup now is:

removed ceilometer-collector from the controller(s),
changed the publisher inside ceilometer/pipeline.yaml from "notifier://" to "gnocchi://" everywhere where polling agents run (see the sketch below),
and changed the discovery method as mentioned above.
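
A minimal sketch of the relevant sink in pipeline.yaml after that change (only the publishers entry matters here; the rest mirrors the default file):

sinks:
    - name: meter_sink
      transformers:
      publishers:
          - gnocchi://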

It seems to be related to the libvirt version on the Ubuntu Xenial servers. I'll investigate some more and may open a bug against ceilometer.

Cheers

Frank

2017-01-19 10:12:44 -0500 received badge  Popular Question (source)
2017-01-12 10:32:59 -0500 received badge  Popular Question (source)
2017-01-12 09:32:51 -0500 asked a question Liberty upgrade disabled metering for multipart objects

Hi, I've recently upgraded my Swift installation to Liberty. Since that very moment we have had one odd issue:

The ceilometer meter storage.objects.outgoing.bytes is no longer collecting the traffic data when downloading a "big" multipart object. All other objects are still being metered as before.

Only the initial PUT request results in an entry inside MongoDB, and all subsequent activity is ignored.

All hints are highly appreciated.

cheers Frank

Edit:

I've solved the issue. It was a problem with the location of the ceilometer middleware inside the Swift proxy pipeline. It has to be located BEFORE slo and dlo (static large object and dynamic large object support). A million thanks to rledisez on the #openstack-swift IRC channel for helping me out.
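
For reference, a trimmed sketch of the proxy-server.conf pipeline with that ordering (the exact set of middlewares depends on your deployment; the point is only the position of ceilometer relative to slo and dlo):

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache authtoken keystoneauth ceilometer slo dlo proxy-logging proxy-server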

cheers

Frank

2016-11-30 09:34:30 -0500 answered a question Flavor with specific volume_type

Hi, excuse my late answer. Is the AggregateInstanceExtraSpecsFilter enabled? Quote: "To configure the scheduler to support host aggregates, the scheduler_default_filters configuration option must contain the AggregateInstanceExtraSpecsFilter in addition to the other filters used by the scheduler."

http://docs.openstack.org/draft/config-reference/compute/scheduler.html
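
A rough example of how this ties together (aggregate, flavor and key names are placeholders):

# nova.conf on the scheduler host must include the filter, e.g.:
#   scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateInstanceExtraSpecsFilter
nova aggregate-create fast-storage
nova aggregate-add-host fast-storage compute1
nova aggregate-set-metadata fast-storage volume=fast
nova flavor-key m1.fast set aggregate_instance_extra_specs:volume=fast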

cheers Frank

2016-11-30 07:49:03 -0500 asked a question After migration data loss

Hi, a strange thing occurred to me. I wanted to evacuate one compute node which was still hosting 3 instances. 2 instances migrated seamlessly, but with the third something strange happened:

Instead of containing all the data and installed software of the old instance, it was like a fresh provisioning of the image in use. Keys were set anew, and none of my installation or network configuration existed anymore.

Fortunately the old "disk" image still existed on the former host, so I was able to transfer all data manually to the new host.

The only log entry for this instance I don't really understand is: "During sync_power_state the instance has a pending task (resize_finish). Skip." There is no debug or error entry so far, neither on the old nor on the new host.

I'm running kilo.

Any hints would be highly appreciated.

Cheers Frank

2016-07-29 08:11:08 -0500 answered a question 2 Availability Zones on one compute node

Hi, you can actually add one compute node to different aggregates. Your limitations are: one aggregate can only be in one availability zone, while one availability zone can host multiple aggregates. From my perspective, availability zones are more like a meta tag you can use for separation.
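
As an illustration (aggregate and host names are made up), adding one compute node to two aggregates inside the same availability zone could look like:

nova aggregate-create agg-ssd az1
nova aggregate-create agg-highmem az1
nova aggregate-add-host agg-ssd compute1
nova aggregate-add-host agg-highmem compute1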

So the question is, does it make any sense to do what you asked? The aggregates are most helpful for scheduling tasks together with the filters nova-scheduler provides. By adding extra_specs to either images or flavors, and/or adding the multitenancy filter, you can fine-grain the usage of your compute node(s).

So, what are you up for?

cheers

Frank

2015-10-06 02:56:25 -0500 received badge  Autobiographer
2015-10-05 02:27:45 -0500 received badge  Famous Question (source)
2015-09-17 10:37:39 -0500 edited question Ring Builder files lost swift

Hello, we have a productive Swift environment. After moving over the swift-proxies and deleting the old ones, we found that we have lost the builder files. The backups were only local and, I have to admit, I was very stupid. We use Juno and swift 2.4.0, but the initial setup of the environment must have been with <= 1.7.5. The recovery following https://ask.openstack.org/en/question/20952/how-to-recovery-ring-builder-files/ stops with

builder._set_parts_wanted()

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/swift/common/ring/builder.py", line 550, in _set_parts_wanted
    dev['parts'])

Any help is highly appreciated.

Update:

The trouble seems to be that the original files were built with swift <= 1.7.5. The function _set_parts_wanted iterates over the object's devs dictionaries and adds a parts_wanted key:

for dev in self._iter_devs():
    if not dev['weight']:
        dev['parts_wanted'] = -self.parts * self.replicas
    else:
        dev['parts_wanted'] = \
            int(weight_of_one_part * dev['weight']) - dev['parts']

By setting the key 'parts' to 0 first, I am able to restore the builder files.

So I end up with new builder files, and the actual content of e.g. the old account.ring.gz seems to be the same as that of the ring newly generated with "swift-ring-builder account.builder write_ring".

My tests are

from swift.common.ring import RingData, RingBuilder
ring = RingData.load('old_account.ring.gz')
ring2 = RingData.load('new_account.ring.gz')
old = vars(ring)
new = vars(ring2)
if old == new:
     print "okay"
else:
     exit ()
if ring._replica2part2dev_id == ring2._replica2part2dev_id:
    print ring.devs[200]
    print "\n"
    print ring2.devs[200]
else:
    print "Error"

So everything seems to be all right. Still, I feel a bit uncertain. Is there anything else I should observe/test? Best regards, Frank

New update: thanks to the great help of kota_ on the IRC channel #openstack-swift, I feel more certain about the 3 builder files. As I am on holiday right now, I'd like to keep the question open until next month, when we will build and activate a new ring during a planned maintenance. I'll keep you posted here.

Final Update:

New ring files are in place, devices got removed and newly added, the ring got rebalanced and spread out. Everything is working as it should! So here is what worked for me:

from swift.common.ring import RingData, RingBuilder
###Object from old ring
ring = RingData.load('/root/tools/ring/account.ring.gz')
import math

partitions = len(ring._replica2part2dev_id[0])
replicas = len(ring._replica2part2dev_id)
builder = RingBuilder(int(math.log(partitions, 2)), replicas, 1)

builder.devs = ring.devs
builder._replica2part2dev = ring._replica2part2dev_id
builder._last_part_moves_epoch = 0
from array import array

builder._last_part_moves = array('B', (0 for _ in xrange(partitions)))
### Here it is important to define the 'parts' key first !!!

for d in builder._iter_devs():
    d['parts'] = 0
for p2d in builder._replica2part2dev:
    for dev_id in p2d:
        builder.devs[dev_id]['parts'] += 1
builder._set_parts_wanted()

builder.change_min_part_hours(24)
builder.validate()
import pickle
pickle.dump(builder.to_dict(), open('account.builder', 'wb'))