Questions asked

sami 45215's profile - activity

2020-09-02 19:23:07 -0500 received badge  Notable Question (source)
2020-09-02 19:23:07 -0500 received badge  Popular Question (source)
2020-07-24 19:09:02 -0500 received badge  Famous Question (source)
2020-05-18 00:47:28 -0500 received badge  Famous Question (source)
2020-05-11 05:34:50 -0500 received badge  Fan
2020-05-10 09:46:18 -0500 asked a question Gnocchi measures result showing empty

I got this error in /var/log/ceilometer/ceilometer-agent-notification.log:

2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample [-] Pipeline meter_sink: Continue after error from publisher <ceilometer.publisher.gnocchi.GnocchiPublisher object at 0x7f187b42a090>: NoUniqueMatch: ClientException
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample Traceback (most recent call last):
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample   File "/usr/lib/python2.7/dist-packages/ceilometer/pipeline/sample.py", line 157, in _publish_samples
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample     p.publish_samples(transformed_samples)
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample   File "/usr/lib/python2.7/dist-packages/ceilometer/publisher/gnocchi.py", line 294, in publish_samples
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample     data = [s for s in data if not self._is_gnocchi_activity(s)]
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample   File "/usr/lib/python2.7/dist-packages/ceilometer/publisher/gnocchi.py", line 278, in _is_gnocchi_activity
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample     return (self.filter_project and self.gnocchi_project_id and (
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample   File "/usr/lib/python2.7/dist-packages/ceilometer/publisher/gnocchi.py", line 255, in gnocchi_project_id
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample     name=self.filter_project)
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample   File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 75, in func
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample     return f(*args, **new_kwargs)
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample   File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 447, in find
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample     raise ksc_exceptions.NoUniqueMatch
2020-05-10 14:50:30.748 7688 ERROR ceilometer.pipeline.sample NoUniqueMatch: ClientException
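The NoUniqueMatch at the bottom of the trace comes from keystoneclient's find(): the Gnocchi publisher looks up the Keystone project named by Ceilometer's filter_project option, and the lookup matched more than one project with that name. A minimal sketch of that failure mode (the project names and IDs below are illustrative, not taken from the log):

```python
# Stand-in for keystoneclient.exceptions.NoUniqueMatch.
class NoUniqueMatch(Exception):
    pass


def find_project(projects, name):
    """Mimics the semantics of keystoneclient's find(): exactly one match or error."""
    matches = [p for p in projects if p["name"] == name]
    if not matches:
        raise LookupError("no project named %r" % name)
    if len(matches) > 1:
        # This is the branch the ceilometer traceback ends in.
        raise NoUniqueMatch()
    return matches[0]


# Two projects with the same name (e.g. "service" created twice,
# or once per domain) make the lookup ambiguous:
projects = [{"name": "service", "id": "a1b2"},
            {"name": "service", "id": "c3d4"}]
try:
    find_project(projects, "service")
except NoUniqueMatch:
    print("ambiguous project name")
```

On a real deployment the duplicate can usually be confirmed with `openstack project list`; removing or renaming the extra project so that the name the publisher filters on is unambiguous should clear the error.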

2019-11-29 22:42:14 -0500 received badge  Notable Question (source)
2019-11-16 07:36:33 -0500 received badge  Student (source)
2019-11-16 07:36:24 -0500 received badge  Popular Question (source)
2019-11-15 10:18:02 -0500 commented question heat auto scaling template

Yes, it works, I tried it. I think cpu_util still exists in Stein.

2019-11-14 19:37:29 -0500 commented question heat auto scaling template

http://paste.openstack.org/show/786151/

2019-11-11 16:24:45 -0500 commented question heat auto scaling template

Thank you for the answer, but I found a template that works for me with an aodh alarm and cpu_util.

2019-11-09 15:21:23 -0500 asked a question heat auto scaling template

Can you give me a simple Heat template for auto-scaling an instance based on CPU utilisation? I'm using OpenStack Stein with Heat, Ceilometer and aodh. Thanks.
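For reference, the usual shape of such a template is an OS::Heat::AutoScalingGroup, an OS::Heat::ScalingPolicy, and an aodh alarm whose action signals the policy. The outline below is a minimal, untested sketch assuming a Gnocchi-backed Stein deployment; the image, flavor and network names are placeholders:

```yaml
heat_template_version: 2016-10-14

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: cirros            # placeholder image
          flavor: m1.tiny          # placeholder flavor
          networks: [{network: private}]
          # Tag members so the alarm query below can find them:
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      cooldown: 60
      scaling_adjustment: 1

  cpu_alarm_high:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      metric: cpu_util
      aggregation_method: mean
      granularity: 300
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      resource_type: instance
      alarm_actions:
        - {get_attr: [scaleup_policy, alarm_url]}
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: {get_param: "OS::stack_id"}
```

A matching scale-down policy and low-CPU alarm would mirror the last two resources with scaling_adjustment: -1 and comparison_operator: lt.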

2019-10-02 09:24:27 -0500 received badge  Notable Question (source)
2019-10-01 12:58:55 -0500 received badge  Popular Question (source)
2019-09-30 14:38:58 -0500 asked a question Openstack instances can't ping to internet

I installed OpenStack on a single node using this guide: https://docs.openstack.org/install-guide/ . When I launch an instance from OpenStack, the DHCP gives it a private IP, 172.30.2.107. From that instance, I can ping the OpenStack host, 172.30.2.219. I can't ping any other network outside of 172.30.2.0/24, and I can't ping 8.8.8.8.

The output of the ip addr command:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq master brq49c51ee0-07 state UP group default qlen 1000
    link/ether 0a:ba:5e:71:a6:f2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8ba:5eff:fe71:a6f2/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:bd:c2:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:bd:c2:82 brd ff:ff:ff:ff:ff:ff
5: brq49c51ee0-07: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:ba:5e:71:a6:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.30.2.219/24 brd 172.30.2.255 scope global brq49c51ee0-07
       valid_lft forever preferred_lft forever
6: tapdbcceff1-c4@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brq49c51ee0-07 state UP group default qlen 1000
    link/ether ce:e1:9b:7f:fd:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0
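Since the instance can reach the host but nothing beyond 172.30.2.0/24, a likely cause is that the host is not forwarding and NATing the guests' traffic out of eth0. A sketch of what to check on the host, as root (this is an assumption about the cause, not a confirmed fix; the subnet and interface names are taken from the output above):

```shell
# Is the host forwarding packets between interfaces at all?
sysctl net.ipv4.ip_forward        # expect: net.ipv4.ip_forward = 1
sysctl -w net.ipv4.ip_forward=1   # enable it if not

# There is no upstream router for the guests' 172.30.2.0/24 addresses,
# so masquerade their traffic behind the host's own address on eth0:
iptables -t nat -A POSTROUTING -s 172.30.2.0/24 -o eth0 -j MASQUERADE
```

Note that on EC2 the hypervisor also drops packets from source addresses it does not recognize unless the instance's source/destination check is disabled, so that setting is worth verifying as well.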

2019-09-30 11:28:05 -0500 received badge  Notable Question (source)
2019-09-30 09:15:28 -0500 commented question Openstack instances can't ping to internet

I can give you access to my AWS account so you can see what the problem is.

2019-09-22 18:05:25 -0500 received badge  Popular Question (source)
2019-09-21 12:20:37 -0500 commented question Openstack instances can't ping to internet

I installed OpenStack on a virtual machine inside Amazon EC2; it has a private IP 172.30.2.219 and a public IP 63.32.44.146.

In /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:eth0

2019-09-21 10:48:04 -0500 commented question Openstack instances can't ping to internet

I can ping the internet from the OpenStack host (single node).

2019-09-21 09:35:20 -0500 asked a question Openstack instances can't ping to internet

I installed OpenStack on a single node using this guide: https://docs.openstack.org/install-guide/ . When I launch an instance from OpenStack, the DHCP gives it a private IP, 172.30.2.107. From that instance, I can ping the OpenStack host, 172.30.2.219. I can't ping any other network outside of 172.30.2.0/24, and I can't ping 8.8.8.8.

The output of the ip addr command:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq master brq49c51ee0-07 state UP group default qlen 1000
    link/ether 0a:ba:5e:71:a6:f2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8ba:5eff:fe71:a6f2/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:bd:c2:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:bd:c2:82 brd ff:ff:ff:ff:ff:ff
5: brq49c51ee0-07: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:ba:5e:71:a6:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.30.2.219/24 brd 172.30.2.255 scope global brq49c51ee0-07
       valid_lft forever preferred_lft forever
6: tapdbcceff1-c4@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brq49c51ee0-07 state UP group default qlen 1000
    link/ether ce:e1:9b:7f:fd:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0

2017-12-25 02:04:59 -0500 received badge  Popular Question (source)
2017-12-25 02:04:59 -0500 received badge  Famous Question (source)
2017-12-25 02:04:59 -0500 received badge  Notable Question (source)
2017-08-21 07:43:09 -0500 asked a question memcache : failed to listen on TCP port 11211: Address already in use

memcache : failed to listen on TCP port 11211: Address already in use

After running ps -a, the memcached process is not running.
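The message means some process is already bound to port 11211 even if ps does not show a memcached process (ss -lntp or lsof -i :11211 will show which process owns the port). The failure mode itself is easy to reproduce; a self-contained sketch, unrelated to memcached's own code:

```python
import errno
import socket

# Reproduce "Address already in use": a second socket binding a port
# that another socket is already listening on.
s1 = socket.socket()
s1.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
port = s1.getsockname()[1]
s1.listen()

s2 = socket.socket()
try:
    s2.bind(("127.0.0.1", port))   # fails while s1 holds the port
    in_use = False
except OSError as e:
    in_use = e.errno == errno.EADDRINUSE
finally:
    s2.close()
    s1.close()

print(in_use)  # → True
```

Killing (or stopping the service of) whichever process holds 11211, then starting memcached again, is the usual fix.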