
CK Lai's profile - activity

2018-08-15 03:12:44 -0600 received badge  Taxonomist
2018-03-15 14:49:23 -0600 received badge  Famous Question (source)
2018-03-14 06:33:18 -0600 received badge  Notable Question (source)
2018-03-14 04:58:05 -0600 received badge  Popular Question (source)
2018-03-14 04:58:05 -0600 received badge  Famous Question (source)
2018-03-14 04:58:05 -0600 received badge  Notable Question (source)
2018-02-18 11:41:09 -0600 received badge  Teacher (source)
2017-11-28 09:45:13 -0600 received badge  Popular Question (source)
2017-08-09 13:52:50 -0600 received badge  Famous Question (source)
2017-07-03 07:28:58 -0600 answered a question gnocchi port not open

I was having the same issue. My workaround is to update /usr/bin/gnocchi-api:

FROM:

parser.add_argument('--port', '-p', type=int, default=8000,

TO:

parser.add_argument('--port', '-p', type=int, default=8041,

Then restart the Gnocchi service.
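
For reference, the same edit can be scripted. A minimal sketch, assuming the stock Ocata gnocchi-api script and a systemd unit named openstack-gnocchi-api (names may differ per distribution):

# Back up the script, switch the hard-coded default from 8000 to the
# Gnocchi endpoint port 8041, then restart the service.
cp /usr/bin/gnocchi-api /usr/bin/gnocchi-api.bak
sed -i 's/default=8000/default=8041/' /usr/bin/gnocchi-api
systemctl restart openstack-gnocchi-api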

2017-06-20 00:48:41 -0600 received badge  Notable Question (source)
2017-06-06 08:02:57 -0600 received badge  Popular Question (source)
2017-05-30 00:31:55 -0600 answered a question 'ClusterTemplate' object has no attribute 'trustee_domain_id'

This was my mistake during the reinstallation: an old version of policy.json was still being used. The issue was fixed after switching to the correct version of policy.json.
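
For anyone hitting the same trace, here is a sketch of how the live policy file can be checked, assuming an RPM-based install where /etc/magnum/policy.json is owned by a Magnum package (the package name is discovered rather than assumed):

# Find the owning package and check whether the file was modified.
rpm -qf /etc/magnum/policy.json
rpm -V "$(rpm -qf /etc/magnum/policy.json)" | grep policy.json
# Restore the correct policy.json (from the package or a known-good
# copy), then restart the Magnum services.
systemctl restart openstack-magnum-api openstack-magnum-conductor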

2017-05-29 03:27:19 -0600 asked a question 'ClusterTemplate' object has no attribute 'trustee_domain_id'

Hi Magnum experts, I'm currently installing the Magnum service on CentOS 7 by following https://docs.openstack.org/project-install-guide/container-infrastructure-management/ocata/ (this guide). I'm able to create a cluster template but unable to create a cluster from that template. The command returns an error:

[root@controller ~]# magnum cluster-create --name k8s-cluster \
>                       --cluster-template 47c553d6-3eb5-4544-969c-b3696cc14bc1 \
>                       --master-count 1 \
>                       --node-count 1
ERROR: 'ClusterTemplate' object has no attribute 'trustee_domain_id' (HTTP 500) (Request-ID: req-f9adc02b-b458-4ae0-9a2e-e16d642a0487)

The following error is found in magnum-api.log:

2017-05-29 09:18:23.050 14909 ERROR wsme.api [req-65388ab3-f65d-43b8-918c-3fb307dbaf8a 4e1e34768c6f46bd8f7fffd6fed4a210 c0cfa45e50364ccab8783eea86b8bee0 default - -] Server-side error: "'ClusterTemplate' object has no attribute 'trustee_domain_id'". Detail:
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in callfunction
    result = f(self, *args, **kwargs)

  File "/usr/lib/python2.7/site-packages/magnum/api/controllers/v1/cluster_template.py", line 324, in get_one
    action='clustertemplate:get')

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 96, in enforce
    do_raise=do_raise, exc=exc, *args, **kwargs)

  File "/usr/lib/python2.7/site-packages/oslo_policy/policy.py", line 724, in enforce
    result = self.rules[rule](target, creds, self)

  File "/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 204, in __call__
    return enforcer.rules[self.match](target, creds, enforcer)

  File "/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 106, in __call__
    return not self.rule(target, cred, enforcer)

  File "/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 297, in __call__
    match = self.match % target

  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 759, in __getitem__
    return getattr(self, name)

AttributeError: 'ClusterTemplate' object has no attribute 'trustee_domain_id'
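
The last frames show oslo.policy interpolating a rule template (self.match % target) against the ClusterTemplate object, so any leftover policy rule that still references trustee_domain_id will raise exactly this AttributeError. A quick check, assuming the default policy path:

# A stale rule from an older release that interpolates
# %(trustee_domain_id)s gets substituted against the ClusterTemplate
# object, which no longer exposes that field.
grep -n 'trustee_domain_id' /etc/magnum/policy.json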

The same error occurred when I was trying to delete the cluster template. Here's my magnum.conf:

[DEFAULT]
state_path = /var/lib/magnum
debug = true
log_dir = /var/log/magnum
transport_url = rabbit://openstack:mypassword@controller

[api]
port = 9511
host = 0.0.0.0

[barbican_client]
region_name = RegionOne
endpoint_type = publicURL

[certificates]
cert_manager_type = barbican

[cinder_client]
[cluster]
[cluster_heat]
[cluster_template]
[conductor]
[cors]
[cors.subdomain]
[database]
mysql_engine = InnoDB
backend = sqlalchemy
connection = mysql+pymysql://magnum:mypassword@controller/magnum

[docker]
[docker_registry]
[glance_client]
[heat_client]
[keystone_auth]
[keystone_authtoken]
memcached_servers = controller:11211
auth_version = v3
auth_uri = http://controller:5000/v3
project_domain_id = default
project_name = service
user_domain_id = default
password = mypassword
username = magnum
auth_url = http://controller:35357
auth_type = password

[magnum_client]
[matchmaker_redis]
[neutron_client]
[nova_client]
[oslo_concurrency]
lock_path = /var/lib/magnum/tmp

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messaging

[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[profiler]
[quotas]
[trust]
trustee_domain_name = magnum
trustee_domain_admin_name = magnum_domain_admin
trustee_domain_admin_password = mypassword
trustee_keystone_interface = public

[x509]
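
As a side check on the [trust] section above, one can confirm that the trustee domain and its admin user actually exist in Keystone. A sketch using the names from this configuration:

# Verify the trustee domain and domain-admin user named in [trust].
openstack domain show magnum
openstack user show magnum_domain_admin --domain magnum
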
2017-05-26 04:42:53 -0600 answered a question magnum-api DBNonExistentTable (sqlite3.OperationalError) no such table: magnum_service

This issue is fixed. It was caused by environment settings for Magnum left over from a Liberty-release installation. I had wiped Liberty and installed the Ocata release, but missed updating these environment settings for Magnum.
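
A sketch of the checks that catch this kind of leftover configuration, assuming the packaged Ocata install and the magnum-db-manage command from the same guide:

# Make sure the services read the MySQL connection rather than a stale
# default from the old environment, then sync the schema and restart.
grep -n '^connection' /etc/magnum/magnum.conf
su -s /bin/sh -c "magnum-db-manage upgrade" magnum
systemctl restart openstack-magnum-api openstack-magnum-conductor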

2017-05-26 00:28:56 -0600 asked a question magnum-api DBNonExistentTable (sqlite3.OperationalError) no such table: magnum_service

I'm installing Magnum by following https://docs.openstack.org/project-install-guide/container-infrastructure-management/ocata/get_started.html (this guide). The installation was smooth, but the verification steps fail. When I run "magnum service-list", it returns:

ERROR: Remote error: DBNonExistentTable (sqlite3 (HTTP 500) (Request-ID: req-6ee72050-d534-42bd-be0f-12ece7d2c262)

The following error is shown in magnum-api.log:

2017-05-26 02:02:12.028 46907 ERROR wsme.api [req-5cd06f3e-6c02-49fe-b479-438501d6e629 10822534c1a64b838ca654b1805cf468 1a0a4243540c4c3cb5a0964e8adc600c default - -] Server-side error: "Remote error: DBNonExistentTable (sqlite3.OperationalError) no such table: magnum_service [SQL: u'SELECT magnum_service.created_at AS magnum_service_created_at, magnum_service.updated_at AS magnum_service_updated_at, magnum_service.id AS magnum_service_id, magnum_service.host AS magnum_service_host, magnum_service.binary AS magnum_service_binary, magnum_service.disabled AS magnum_service_disabled, magnum_service.disabled_reason AS magnum_service_disabled_reason, magnum_service.last_seen_up AS magnum_service_last_seen_up, magnum_service.forced_down AS magnum_service_forced_down, magnum_service.report_count AS magnum_service_report_count \nFROM magnum_service ORDER BY magnum_service.id ASC']
Traceback (most recent call last):
  File "/var/lib/magnum/env/lib/python2.7/site-packages/magnum/conductor/handlers/indirection_api.py", line 33, in _object_dispatch
    return getattr(target, method)(context, *args, **kwargs)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper
    result = fn(cls, context, *args, **kwargs)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/magnum/objects/magnum_service.py", line 84, in list
    sort_dir=sort_dir)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/magnum/db/sqlalchemy/api.py", line 547, in get_magnum_service_list
    sort_key, sort_dir, query)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/magnum/db/sqlalchemy/api.py", line 114, in _paginate_query
    return query.all()
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2703, in all
    return list(self)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2855, in __iter__
    return self._execute_and_instances(context)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2878, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
    context)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1398, in _handle_dbapi_exception
    util.raise_from_cause(newraise, exc_info)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/var/lib/magnum/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/var ...
(more)
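
One detail worth noting in the traceback above: every frame loads from /var/lib/magnum/env, which looks like a virtualenv left over from the previous installation rather than the system site-packages a packaged install would use. A sketch to confirm which installation the services actually run (unit names assume the RDO packages):

# If ExecStart points into /var/lib/magnum/env, the unit is still
# launching the old virtualenv's binaries instead of the Ocata packages.
systemctl cat openstack-magnum-conductor | grep ExecStart
ps -ef | grep '[m]agnum'
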
2017-05-26 00:15:35 -0600 received badge  Enthusiast
2017-05-23 22:55:26 -0600 asked a question Gnocchi returned unauthorized error when creating resources alarm from Heat

I have installed Ocata OpenStack by following the OpenStack Installation Tutorial for Red Hat Enterprise Linux and CentOS.

I have created a Heat Template for creating an autoscaling stack:

heat_template_version: 2015-10-15
description: Example auto scale group, policy and alarm

parameters:
  pcc_name:
    type: string
    description: VM name
    default: pcc
  pcc_image:
    type: string
    description: Image used for servers
    default: CentOS68A
  pcc_flavor:
    type: string
    description: flavor used by the web servers
    default: m1.medium
  policy_name:
    type: string
    description: VM name
    default: nodeA
  policy_image:
    type: string
    description: Image used for servers
    default: CentOS68B
  policy_flavor:
    type: string
    description: flavor used by the web servers
    default: m1.large
  policy_ip:
    type: string
    description: Network used by the server
    default: 172.16.1.149
  key_name:
    type: string
    description: SSH key to connect to the servers
    default: mykey
  public_network:
    type: string
    description: Network used by the server
    default: public
  private_network:
    type: string
    description: Network used by the server
    default: private
  private_subnet:
    type: string
    description: Subnet used by the server
    default: private

resources:
  server_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: private_network }
      fixed_ips:
        - subnet_id: { get_param: private_subnet }
          ip_address: { get_param: policy_ip }

  policy_instance:
    type: OS::Nova::Server
    properties:
      name: {get_param: policy_name}
      flavor: {get_param: policy_flavor}
      image: {get_param: policy_image}
      key_name: {get_param: key_name}
      networks:
        - port: { get_resource: server_port }
      user_data_format: RAW
      user_data: {get_resource: policy_config}

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {get_param: public_network}

  association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: floating_ip }
      port_id: {get_attr: [policy_instance, addresses, {get_param: private_network}, 0, port]}

  scaleup_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 60
      desired_capacity: 0
      max_size: 3
      min_size: 0
      resource:
        type: http://localhost/templates/pcc_v1.yaml
        properties:
          name: {get_param: pcc_name}
          flavor: {get_param: pcc_flavor}
          image: {get_param: pcc_image}
          key_name: {get_param: key_name}
          network: {get_param: private_network}
          user_data: {get_resource: pcc_config}

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaleup_group }
      cooldown: 60
      scaling_adjustment: 1

  scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaleup_group }
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Ceilometer::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale-up if the last CPU > 20%
      metric: cpu_util
      aggregation_method: last
      granularity: 300
      evaluation_periods: 1
      threshold: 20
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [scaleup_policy, alarm_url]}
      resource_type: instance
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: {get_param: "OS::stack_id"}

  cpu_alarm_low:
    type: OS::Ceilometer::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale-down if the last CPU < 10%
      metric: cpu_util
      aggregation_method: last
      granularity: 300
      evaluation_periods: 1
      threshold: 10
      comparison_operator: lt
      alarm_actions:
        - {get_attr: [scaledown_policy, alarm_url]}
      resource_type: instance
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: {get_param: "OS::stack_id"}

  pcc_config:
    type: "OS::Heat::CloudConfig"
    properties:
      cloud_config:
        users:
        - default
        - name: tango
          gecos: "Tango User"
          primary-group: tango
          lock-passwd: false
          shell: /bin/bash
          sudo: "ALL=(ALL) NOPASSWD:ALL"
          homedir: /tango
        ssh_pwauth: true

        chpasswd:
          list: |
            tango:t3l3com
          expire: False

        write_files:
          - path: /etc/resolv.conf
            permissions: '0644'
            content: |
              domain pcc.com
              nameserver 192.168.1.68

          - path: /etc/sysconfig/network
            permissions: '0644'
            content: |
              NETWORKING=yes
              HOSTNAME=DYNAMIC_HOST_NAME_PLACEHOLDER

  policy_config:
    type: "OS::Heat::CloudConfig"
    properties:
      cloud_config:
        growpart:
          mode: auto
          devices: ['/']
          ignore_growroot_disabled: false

When creating the stack, the following error was shown in heat-engine.log:

2017-05-24 04:20:21.604 28581 INFO heat.engine.resource [req-d59a4460-5c98-46ff-a88f-884bac069441 demo demo - - -] CREATE: AodhGnocchiAggregationByResourcesAlarm "cpu_alarm_low" Stack "Ocata_Test_050" [28f7556a-dc06-4f08-9ec4-3d0c21a86e90]
2017-05-24 04:20:21.604 28581 ERROR heat ...
(more)
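
Since these alarm resources are created through Aodh, which in turn queries Gnocchi, a first check is whether Aodh's service credentials can reach Gnocchi at all. A sketch, assuming the config paths and the "metric" service type from the install guide:

# Inspect the credentials Aodh uses against other services and confirm
# the Gnocchi API answers under an admin session.
grep -A 10 '^\[service_credentials\]' /etc/aodh/aodh.conf
openstack endpoint list --service metric
gnocchi resource list
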
2017-05-18 15:46:31 -0600 answered a question aodh error: [Errno 98] Address already in use

This is fixed by replacing port 8000 with 8042 in /usr/bin/aodh-api and restarting the aodh-api service.

parser.add_argument('--port', '-p', type=int, default=8042,
                    help='TCP port to listen on')
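
After applying the change, it is worth confirming the new binding; a short sketch (the URL assumes the install-guide controller hostname):

# Confirm aodh-api now listens on 8042 and answers over HTTP.
systemctl restart openstack-aodh-api
netstat -tulpn | grep ':8042'
curl -s -o /dev/null -w '%{http_code}\n' http://controller:8042/
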
2017-05-18 15:46:31 -0600 asked a question aodh error: [Errno 98] Address already in use

I'm currently installing the Ocata release on CentOS 7.3 by following https://docs.openstack.org/ocata/install-guide-rdo/ (this guide). The following error keeps showing up in api.log whenever openstack-aodh-api is started.

2017-05-17 07:56:41.220 26831 CRITICAL aodh [-] error: [Errno 98] Address already in use
2017-05-17 07:56:41.220 26831 ERROR aodh Traceback (most recent call last):
2017-05-17 07:56:41.220 26831 ERROR aodh   File "/usr/bin/aodh-api", line 34, in <module>
2017-05-17 07:56:41.220 26831 ERROR aodh     server = wss.make_server('', args.port, build_wsgi_app())
2017-05-17 07:56:41.220 26831 ERROR aodh   File "/usr/lib64/python2.7/wsgiref/simple_server.py", line 144, in make_server
2017-05-17 07:56:41.220 26831 ERROR aodh     server = server_class((host, port), handler_class)
2017-05-17 07:56:41.220 26831 ERROR aodh   File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
2017-05-17 07:56:41.220 26831 ERROR aodh     self.server_bind()
2017-05-17 07:56:41.220 26831 ERROR aodh   File "/usr/lib64/python2.7/wsgiref/simple_server.py", line 48, in server_bind
2017-05-17 07:56:41.220 26831 ERROR aodh     HTTPServer.server_bind(self)
2017-05-17 07:56:41.220 26831 ERROR aodh   File "/usr/lib64/python2.7/BaseHTTPServer.py", line 108, in server_bind
2017-05-17 07:56:41.220 26831 ERROR aodh     SocketServer.TCPServer.server_bind(self)
2017-05-17 07:56:41.220 26831 ERROR aodh   File "/usr/lib64/python2.7/SocketServer.py", line 430, in server_bind
2017-05-17 07:56:41.220 26831 ERROR aodh     self.socket.bind(self.server_address)
2017-05-17 07:56:41.220 26831 ERROR aodh   File "/usr/lib64/python2.7/socket.py", line 224, in meth
2017-05-17 07:56:41.220 26831 ERROR aodh     return getattr(self._sock,name)(*args)
2017-05-17 07:56:41.220 26831 ERROR aodh error: [Errno 98] Address already in use
2017-05-17 07:56:41.220 26831 ERROR aodh

I have checked port 8042, but no process is using it: netstat -tulpn | grep 8042

I have run out of ideas on what could have caused this issue.
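
For what it's worth, the traceback shows the bind happening on args.port, whose stock default in /usr/bin/aodh-api is 8000, so the conflict is on port 8000 rather than 8042 (heat-api-cfn commonly holds 8000 on an install-guide controller). A sketch of the check:

# The failed bind is on the script's default port 8000, not 8042.
netstat -tulpn | grep ':8000'
grep -n 'default=8000' /usr/bin/aodh-api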

2017-05-17 02:28:23 -0600 received badge  Organizer (source)