
cbouscal's profile - activity

2016-10-19 07:58:17 -0500 received badge  Famous Question (source)
2016-06-29 15:51:06 -0500 received badge  Notable Question (source)
2016-06-29 09:25:53 -0500 received badge  Popular Question (source)
2016-06-29 01:11:13 -0500 answered a question Ceilometer - cpu meters not collecting data

This solution applies only to a VMware backend. I also added a transformer for memory utilization (%) as a bonus. Here is what is working for us; ymmv.

/etc/ceilometer/ceilometer.conf should contain this line, with your desired interval:

evaluation_interval = 60

/etc/ceilometer/pipeline.yaml:

---
sources:
    - name: meter_source
      interval: 60
      meters:
          - "*"
          - "!volume.create.*"
          - "!volume.delete.*"
          - "!volume.update.*"
          - "!volume.resize.*"
          - "!volume.attach.*"
          - "!volume.detach.*"
          - "!snapshot.create.*"
          - "!snapshot.delete.*"
          - "!identity.authenticate.*"
          - "!storage.api.request"
      sinks:
          - meter_sink
    - name: memory_source
      interval: 60
      meters:
          - "memory.usage"
      sinks:
          - memory_util
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - notifier://
    - name: memory_util
      transformers:
          - name: "arithmetic"
            parameters:
                target:
                    name: "memory_util"
                    unit: "%"
                    type: "gauge"
                    expr: "100.0 * $(memory.usage) / $(memory.usage).resource_metadata.flavor.ram"
      publishers:
          - notifier://
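
Once the new pipeline.yaml is in place and the ceilometer services have been restarted, you can confirm the new meter is flowing. Something along these lines works for us (the -l flag just limits the output; give it a couple of polling cycles first):

    ceilometer sample-list -m memory.usage -l 5
    ceilometer sample-list -m memory_util -l 5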

/etc/nova/nova-compute.conf _and_ /etc/ceilometer/ceilometer.conf on all compute nodes should both resemble the following. Edit host info to suit the environment.

    [DEFAULT]
    compute_driver=vmwareapi.VMwareVCDriver
    log_file=nova-compute-vmware-vcenter.log
    host=vcenter-hostname
    [vmware]
    cache_prefix=$host
    cluster_name=clustername
    host_ip=10.20.0.7
    host_username=administrator@vsphere.local
    host_password=<<password>>
    insecure=True
    api_retry_count=5
    maximum_objects=100
    task_poll_interval=5.0
    datastore_regex=<<regex>>
    use_linked_clone=true
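
After editing those files, we restart the relevant services on the compute node so the changes take effect. The service names below are what our Ubuntu-based Mirantis nodes use; adjust for your distribution:

    service nova-compute restart
    service ceilometer-agent-compute restart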
2016-06-28 23:10:51 -0500 answered a question mirantis-8: how to change fuel master SSH listening IP

I haven't gone hunting for which puppet module this is set in, but to answer your question directly: we've simply added a ListenAddress to our sshd_config, and it's been running fine ever since. If you execute something that triggers that module (such as running 'fuelmenu' and then saving), I imagine it will revert your settings; you might test this to verify. Chances are, you won't be doing anything that overwrites this once your config is set.
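
For reference, the change amounts to one line plus a service restart (the address below is just a hypothetical admin-network IP; the Fuel master is CentOS-based, so 'service' works fine):

    # /etc/ssh/sshd_config on the fuel master
    ListenAddress 10.20.0.2

    service sshd restart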

As of Fuel 9, there is a setup option in fuelmenu to configure SSH access.

2016-06-28 22:51:18 -0500 asked a question Fuel 9 - horizon UI not rendering

I've just installed the 9.0 community release. Everything installs fine (after adding a 'libvirt' group to the compute node to overcome a puppet deployment error); however, the Horizon UI barely renders. The path to the logo, for example, shows as http://10.10.0.3/horizon/static/dashboard/img/logo-splash.png, but there is no 'img' directory under /usr/share/openstack-dashboard/static/dashboard.

Once logged in, it seems more of the static content is missing, as the pages render as text only.
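
For what it's worth, I haven't yet tried regenerating the static assets by hand; that's my next step, roughly the standard Horizon housekeeping (not a confirmed fix for this):

    cd /usr/share/openstack-dashboard
    python manage.py collectstatic --noinput
    python manage.py compress --force
    service apache2 restart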

2016-06-03 00:04:10 -0500 received badge  Famous Question (source)
2016-05-12 00:34:45 -0500 received badge  Notable Question (source)
2016-05-10 07:49:40 -0500 received badge  Popular Question (source)
2016-05-09 10:48:07 -0500 asked a question Ceilometer - cpu meters not collecting data

We're running Mirantis 8 here, in a lab with a single compute node and 3 controllers. The compute node is actually a gateway to a vmware node. (I did not set this up and may not be describing this accurately).

Problem: We wish to use heat templates to create an auto-scaling group based on cpu utilization spikes. We've created the templates and tested them in an RDO setup, with success (that is to say, we're confident the templates are set up correctly). However, on the Mirantis setup we're not seeing the poller pick up the cpu meter (the log says "Skip pollster cpu, no resources found this cycle"). In my functioning RDO setup, I see this line in the log instead: "Polling pollster cpu in the context of cpu_source".

We've been working with our Mirantis support for about a week with little to show for it. We've checked the timing settings (making sure the configured evaluation/alarm periods are >= the polling intervals in pipeline.yaml).
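
To compare the two environments we've mostly been grepping the compute agent log and listing samples, roughly as follows (the log path is the default on our Ubuntu nodes):

    grep -i "pollster cpu" /var/log/ceilometer/ceilometer-agent-compute.log
    ceilometer sample-list -m cpu -l 5
    ceilometer sample-list -m cpu_util -l 5

Both sample-list calls come back empty on the Mirantis lab.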


/etc/ceilometer/pipeline.yaml:

---
sources:
    - name: meter_source
      interval: 60
      meters:
            - "*"
            - "!volume.create.*"
            - "!volume.delete.*"
            - "!volume.update.*"
            - "!volume.resize.*"
            - "!volume.attach.*"
            - "!volume.detach.*"
            - "!snapshot.create.*"
            - "!snapshot.delete.*"
            - "!identity.authenticate.*"
            - "!storage.api.request"
      sinks:
          - meter_sink
    - name: cpu_source
      interval: 60
      meters:
          - "cpu"
      sinks:
          - cpu_sink
          - cpu_delta_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - notifier://
    - name: cpu_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                target:
                    name: "cpu_util"
                    unit: "%"
                    type: "gauge"
                    scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
      publishers:
          - notifier://
    - name: cpu_delta_sink
      transformers:
          - name: "delta"
            parameters:
                target:
                    name: "cpu.delta"
                growth_only: True
      publishers:
          - notifier://
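
Since the agent complains about "no resources found this cycle", we also sanity-checked that the instances are visible at all:

    nova list
    ceilometer resource-list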

autoscale template:

heat_template_version: 2014-10-16  
description: Auto-scaling group - up only
resources:  
  group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 60
      desired_capacity: 2
      max_size: 5
      min_size: 1
      resource:
        type: simple_centos.yaml

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: group }
      cooldown: 60
      scaling_adjustment: 1

  scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: group }
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      alarm_actions:
        - {get_attr: [scaleup_policy, alarm_url]}
      comparison_operator: gt

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 20
      alarm_actions:
        - {get_attr: [scaledown_policy, alarm_url]}
      comparison_operator: lt
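
For completeness, this is roughly how we exercise the template on the RDO box where it works (stack and file names are just examples):

    heat stack-create -f autoscale.yaml asg-test
    ceilometer alarm-list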