
Aidaho's profile - activity

2018-10-15 02:47:46 -0500 received badge  Famous Question (source)
2018-03-02 11:27:04 -0500 received badge  Famous Question (source)
2017-02-23 18:07:45 -0500 received badge  Famous Question (source)
2017-02-23 18:07:45 -0500 received badge  Notable Question (source)
2016-10-26 18:45:29 -0500 received badge  Famous Question (source)
2016-08-31 08:04:13 -0500 received badge  Popular Question (source)
2016-08-04 08:31:37 -0500 received badge  Notable Question (source)
2016-07-24 23:03:21 -0500 asked a question Several partitions in the vda

Hi all!

I would like to know whether it is possible to split a virtual machine's hard drive into multiple partitions.

As far as I know, in such cases you have to create the VM from scratch and partition the disk during installation, but do I really have to do that on every install?

Is there any tool for this? I tried cloud-init, but as I understand it, it cannot do this.
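
For what it's worth, cloud-init does ship disk_setup / fs_setup cloud-config modules, though as far as I know they target extra or ephemeral disks rather than repartitioning the root disk. A minimal sketch of the kind of user-data I mean (the device /dev/vdb, the 50/50 layout and the label are assumptions, not a tested recipe):

cat > user-data <<'EOF'
#cloud-config
disk_setup:
  /dev/vdb:            # second disk attached to the instance (assumed name)
    table_type: mbr
    layout: [50, 50]   # two partitions, 50% of the disk each
    overwrite: false
fs_setup:
  - label: data
    filesystem: ext4
    device: /dev/vdb
    partition: auto    # let cloud-init pick the first suitable partition
EOF

The file would then be passed at boot time (e.g. nova boot --user-data user-data ...).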

2016-05-12 07:50:59 -0500 received badge  Famous Question (source)
2016-05-12 07:50:59 -0500 received badge  Notable Question (source)
2016-04-12 05:32:57 -0500 received badge  Notable Question (source)
2016-02-25 15:49:12 -0500 received badge  Notable Question (source)
2016-02-23 20:51:30 -0500 received badge  Popular Question (source)
2016-02-09 23:23:04 -0500 received badge  Popular Question (source)
2016-02-05 05:14:24 -0500 answered a question No traffic to br-tun

Can anyone help, please!

2016-02-04 03:17:16 -0500 commented question No traffic to br-tun

And in the log I have this error:

ovs-vsctl: ovs|00001|vsctl|ERR|no row "patch-tun" in table Port
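
What I plan to try next (a sketch: restarting neutron-openvswitch-agent should normally recreate these ports, the manual commands below are just the equivalent and will error if the port already exists):

ovs-vsctl list-ports br-tun      # should include patch-int
ovs-vsctl list-ports br-int      # should include patch-tun
ovs-vsctl add-port br-int patch-tun -- set interface patch-tun type=patch options:peer=patch-int
ovs-vsctl add-port br-tun patch-int -- set interface patch-int type=patch options:peer=patch-tun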
2016-02-04 01:05:38 -0500 asked a question No traffic to br-tun

Hi all! I use RHEL 7 and Liberty. I am facing the following problem: there is no traffic in the tunnels, and the flow rules look strange:

# ovs-ofctl dump-flows br-tun 
NXST_FLOW reply (xid=0x4):
 cookie=0x886283bb69f40f1a, duration=2442.899s, table=0, n_packets=27, n_bytes=2154, idle_age=2430, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x886283bb69f40f1a, duration=2442.378s, table=0, n_packets=0, n_bytes=0, idle_age=2442, priority=1,in_port=2 actions=resubmit(,4)
 cookie=0x886283bb69f40f1a, duration=2442.261s, table=0, n_packets=0, n_bytes=0, idle_age=2442, priority=1,in_port=5 actions=resubmit(,3)
 cookie=0x886283bb69f40f1a, duration=2442.248s, table=0, n_packets=0, n_bytes=0, idle_age=2442, priority=1,in_port=6 actions=resubmit(,3)
 cookie=0x886283bb69f40f1a, duration=2442.234s, table=0, n_packets=0, n_bytes=0, idle_age=2442, priority=1,in_port=7 actions=resubmit(,3)
 cookie=0x886283bb69f40f1a, duration=2417.158s, table=0, n_packets=0, n_bytes=0, idle_age=2417, priority=1,in_port=3 actions=resubmit(,4)
 cookie=0x886283bb69f40f1a, duration=2417.010s, table=0, n_packets=0, n_bytes=0, idle_age=2417, priority=1,in_port=8 actions=resubmit(,3)
 cookie=0x886283bb69f40f1a, duration=2397.312s, table=0, n_packets=0, n_bytes=0, idle_age=2397, priority=1,in_port=4 actions=resubmit(,4)
 cookie=0x886283bb69f40f1a, duration=2397.162s, table=0, n_packets=0, n_bytes=0, idle_age=2397, priority=1,in_port=9 actions=resubmit(,3)
 cookie=0x886283bb69f40f1a, duration=2442.899s, table=0, n_packets=0, n_bytes=0, idle_age=2442, priority=0 actions=drop
 cookie=0x886283bb69f40f1a, duration=2442.898s, table=2, n_packets=0, n_bytes=0, idle_age=2442, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x886283bb69f40f1a, duration=2442.898s, table=2, n_packets=27, n_bytes=2154, idle_age=2430, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0x886283bb69f40f1a, duration=2442.898s, table=3, n_packets=0, n_bytes=0, idle_age=2442, priority=0 actions=drop
 cookie=0x886283bb69f40f1a, duration=2442.898s, table=4, n_packets=0, n_bytes=0, idle_age=2442, priority=0 actions=drop
 cookie=0x886283bb69f40f1a, duration=2442.897s, table=6, n_packets=0, n_bytes=0, idle_age=2442, priority=0 actions=drop
 cookie=0x886283bb69f40f1a, duration=2442.897s, table=10, n_packets=0, n_bytes=0, idle_age=2442, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,cookie=0x886283bb69f40f1a,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x886283bb69f40f1a, duration=2442.897s, table=20, n_packets=0, n_bytes=0, idle_age=2442, priority=0 actions=resubmit(,22)
 cookie=0x886283bb69f40f1a, duration=2442.798s, table=22, n_packets=27, n_bytes=2154, idle_age=2430, priority=0 actions=drop

All tunnels are up:

# ovs-vsctl show
706cd420-35a9-46fc-aa0e-6d02e18e1301
    Bridge br-tun
        fail_mode: secure
        Port "gre-0a000a04"
            Interface "gre-0a000a04"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.0.10.5", out_key=flow, remote_ip="10.0.10.4"}
        Port "gre-0a001403"
            Interface "gre-0a001403"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.0.10.5", out_key=flow, remote_ip="10.0.20.3"}
        Port "vxlan-0a001401"
            Interface "vxlan-0a001401"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.10.5", out_key=flow, remote_ip="10.0.20.1"}
        Port patch-int
            Interface patch-int
                type: patch ...
(more)
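
One way I am trying to narrow this down (a sketch; the in_port and source MAC below are placeholders, not values from my setup) is tracing a frame through the tables and double-checking which ports br-tun actually has:

ovs-appctl ofproto/trace br-tun in_port=1,dl_src=fa:16:3e:00:00:01,dl_dst=ff:ff:ff:ff:ff:ff   # follow a broadcast from patch-int
ovs-ofctl show br-tun | grep -E 'gre|vxlan|patch'                                             # tunnel and patch ports br-tun knows about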
2016-02-02 00:10:14 -0500 received badge  Popular Question (source)
2016-02-01 03:24:55 -0500 asked a question No connection between the physical interface and qg-

Good day! I use RHEL 7.2 and OpenStack Liberty.

I cannot solve a problem with access to the external network from the internal OpenStack network.

The problem is that the qg- interface is for some reason not connected to the physical one. Here is what I mean:

# ip netns exec qrouter-87599ab1-ed12-4550-9c37-623ab7843ed9 ping 172.28.5.249
PING 172.28.5.249 (172.28.5.249) 56(84) bytes of data.
From 172.28.5.23 icmp_seq=1 Destination Host Unreachable
From 172.28.5.23 icmp_seq=2 Destination Host Unreachable
From 172.28.5.23 icmp_seq=3 Destination Host Unreachable
From 172.28.5.23 icmp_seq=4 Destination Host Unreachable

Then on the network node I see:

# tcpdump -i tapf2a91744-53
tcpdump: WARNING: tapf2a91744-53: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapf2a91744-53, link-type EN10MB (Ethernet), capture size 65535 bytes
15:20:44.043450 ARP, Request who-has 172.28.5.249 tell 172.28.5.23, length 28
15:20:44.310269 ARP, Request who-has 172.28.5.249 tell 172.28.5.23, length 28
15:20:45.045918 ARP, Request who-has 172.28.5.249 tell 172.28.5.23, length 28

And:

# ip netns exec qrouter-87599ab1-ed12-4550-9c37-623ab7843ed9 arp -a
? (172.28.5.249) at <incomplete> on qg-f2a91744-53
? (172.28.5.248) at <incomplete> on qg-f2a91744-53
? (10.0.0.6) at fa:16:3e:6e:da:36 [ether] on qr-b2e900d8-5e
? (172.28.5.247) at <incomplete> on qg-f2a91744-53
? (10.0.0.5) at fa:16:3e:45:54:58 [ether] on qr-b2e900d8-5e
? (172.28.5.240) at <incomplete> on qg-f2a91744-53

If I ping from the external network, there is only silence:

# tcpdump -i any -n -v \ 'icmp[icmptype] = icmp-echoreply or icmp[icmptype] =icmp-echo'
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
^C
0 packets captured
1 packet received by filter
0 packets dropped by kernel
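
Since this deployment uses the Linux bridge agent, one thing I am checking (just a sanity check, not a diagnosis; the 'ens' interface-name prefix is an assumption) is whether the qg- side's tap device and the physical NIC actually sit in the same bridge:

brctl show                                   # each neutron network maps to one brqXXXXXXXX-XX bridge
bridge link | grep -E 'tapf2a91744-53|ens'   # the qg- tap and the physical NIC should show the same master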

iptables is enabled:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
neutron-linuxbri-INPUT  all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
neutron-filter-top  all  --  anywhere             anywhere            
neutron-linuxbri-FORWARD  all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
neutron-filter-top  all  --  anywhere             anywhere            
neutron-linuxbri-OUTPUT  all  --  anywhere             anywhere            

Chain neutron-filter-top (2 references)
target     prot opt source               destination         
neutron-linuxbri-local  all  --  anywhere             anywhere            

Chain neutron-linuxbri-FORWARD (1 references)
target     prot opt source               destination         

Chain neutron-linuxbri-INPUT (1 references)
target     prot opt source               destination         

Chain neutron-linuxbri-OUTPUT (1 references)
target     prot opt source               destination         

Chain neutron-linuxbri-local (1 references)
target     prot opt source               destination         

Chain neutron-linuxbri-sg-chain (0 references)
target     prot opt source               destination         

Chain neutron-linuxbri-sg-fallback (0 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* Default drop rule for unmatched traffic. */

# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
neutron-linuxbri-PREROUTING  all  --  anywhere             anywhere            

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
neutron-linuxbri-OUTPUT  all  --  anywhere             anywhere            

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
neutron-linuxbri-POSTROUTING  all  --  anywhere             anywhere            
neutron-postrouting-bottom  all  --  anywhere             anywhere            

Chain neutron-linuxbri-OUTPUT (1 ...
(more)
2016-02-01 03:17:04 -0500 answered a question Instance cannot connect to external network

You forgot:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
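
A restart of neutron-server afterwards is also needed so it picks up plugin.ini (the service name assumes the stock RHEL/RDO packaging):

systemctl restart neutron-server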
2016-01-27 23:45:45 -0500 asked a question No external traffic even on the network interface

Hi!

I have a fresh install of OpenStack Liberty on RHEL 7.2, and I have a problem with the external network: no external traffic, and no errors in the logs. For networking I use LinuxBridge. Inside the cloud, everything works well.

I have one IP address assigned to the external network interface:

# ip netns exec qrouter-c179afdb-32d0-4c95-b786-538550cd8843 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ha-eb5e1ee4-fb@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:2a:8b:34 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-eb5e1ee4-fb
       valid_lft forever preferred_lft forever
    inet 169.254.0.1/24 scope global ha-eb5e1ee4-fb
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe2a:8b34/64 scope link 
       valid_lft forever preferred_lft forever
3: qg-f98f1904-a9@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:1d:7a:15 brd ff:ff:ff:ff:ff:ff
    inet 172.28.5.11/24 scope global qg-f98f1904-a9
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe1d:7a15/64 scope link nodad 
       valid_lft forever preferred_lft forever

This address is on the router:

# neutron  router-list
+--------------------------------------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------+
| id                                   | name | external_gateway_info                                                                                                                                                                   | distributed | ha   |
+--------------------------------------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------+
| c179afdb-32d0-4c95-b786-538550cd8843 | 3    | {"network_id": "a07aabc7-e477-46e9-b885-ef647a70e962", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "40ce7b7a-50fe-4bfd-88f4-419fc45c9398", "ip_address": "172.28.5.11"}]} | False       | True |
+--------------------------------------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+------+

And on the network node there is no traffic:

# tcpdump -i any -n -v \ 'icmp[icmptype] = icmp-echoreply or icmp[icmptype] =icmp-echo'
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

Configs:

# egrep -v ^#\|^$ /etc/neutron/plugins/ml2/ml2_conf.ini
[linux_bridge]
physical_interface_mappings = vlan:ens224,external:ens256
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = public
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 1:1000
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

# egrep -v ^#\|^$ /etc/neutron/plugins/ml2/linuxbridge_agent.ini 
[linux_bridge]
physical_interface_mappings = public:ens224
[vxlan]
enable_vxlan = True
local_ip = 10.0.20.1
l2_population = True
[agent]
prevent_arp_spoofing = True
[securitygroup]

This means that the external network does not know that this IP address lives on the network node's external interface, and accordingly the traffic never arrives.

Please help me understand what the problem is; I have completely run out of ideas.
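
For reference, this is the hedged sanity check I am running on the provider labels, since the flat network name (public), the ml2 mappings (vlan:ens224,external:ens256) and the agent mapping (public:ens224) all have to line up with the external network's provider:physical_network (not claiming this is the cause, just what I am comparing; the network name below is a placeholder):

grep -H physical_interface_mappings /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini   # labels left of ':' must match
grep -H flat_networks /etc/neutron/plugins/ml2/ml2_conf.ini                                                                # and must match the flat network label
neutron net-show <external-net-name> -F provider:physical_network                                                          # hypothetical name; this label must match too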

2016-01-27 23:23:03 -0500 received badge  Popular Question (source)
2016-01-19 10:41:55 -0500 received badge  Popular Question (source)
2016-01-19 10:41:55 -0500 received badge  Notable Question (source)
2015-12-10 04:34:33 -0500 asked a question libvirtError: internal error: process :6789: Unknown protocol

Greetings. I am trying to integrate Ceph into OpenStack. Cinder and Glance operate normally, but there is a problem with nova-compute. When I create an instance, the following error appears:

2015-12-10 16:07:18.171 12878 ERROR nova.compute.manager [req-7eff6362-4afe-41e6-a6eb-2bcac3dea668 6ff45a1be4b646299c7e51b295f760c8 84d5ff4c58e54c6081c047c3b71a7037 - - -] [instance: 66149161-902c-495a-96ef-5a672bdd0bc4] Instance failed to spawn
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4] Traceback (most recent call last):
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2461, in _build_resources
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     yield resources
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2333, in _build_and_run_instance
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     block_device_info=block_device_info)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2385, in spawn
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     block_device_info=block_device_info)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4403, in _create_domain_and_network
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     power_on=power_on)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4334, in _create_domain
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     LOG.error(err)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     six.reraise(self.type_, self.value, self.tb)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4324, in _create_domain
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     domain.createWithFlags(launch_flags)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     rv = execute(f, *args, **kwargs)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]     six.reraise(c, e, tb)
2015-12-10 16:07:18.171 12878 TRACE nova.compute.manager [instance: 66149161-902c-495a-96ef-5a672bdd0bc4]   File "/usr/lib ...
(more)
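
In case it points somewhere: port 6789 is the Ceph monitor port, so the checks I am doing on the compute node (a sketch, not a diagnosis; the pool name "images" is an assumption) are whether the local qemu understands the rbd protocol at all and whether the libvirt secret that nova references is in place:

qemu-img info rbd:images/does-not-exist 2>&1 | head -n1    # an "Unknown protocol 'rbd'" here would point at the qemu build, not nova
virsh secret-list                                          # the secret UUID referenced in nova.conf should be listed
grep -E 'images_type|images_rbd_pool|rbd_user|rbd_secret_uuid' /etc/nova/nova.conf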
2015-12-09 20:41:48 -0500 answered a question ImageCopyFailure: Failed to copy image to volume: 'tuple' object has no attribute 'find'

Guys, has anyone run into the same thing?

2015-12-09 05:17:51 -0500 commented question ImageCopyFailure: Failed to copy image to volume: 'tuple' object has no attribute 'find'

Maybe somebody knows what the problem is? Please...

2015-12-09 02:43:36 -0500 received badge  Enthusiast
2015-11-20 04:02:02 -0500 asked a question ImageCopyFailure: Failed to copy image to volume: 'tuple' object has no attribute 'find'

Hello. I am trying to create a VM on a block device and get this error in the log:

2015-11-19 10:13:39.012 5803 ERROR oslo_messaging.rpc.dispatcher [req-69b451e3-cfa9-40dd-95ad-c141419002a4 6ff45a1be4b646299c7e51b295f760c8 84d5ff4c58e54c6081c047c3b71a7037 - - -] Exception during message handling: Failed to copy image to volume: 'tuple' object has no attribute 'find'
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 468, in create_volume
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     _run_flow()
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 456, in _run_flow
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     flow_engine.run()
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 96, in run
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     for _state in self.run_iter():
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 153, in run_iter
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     failure.Failure.reraise_if_any(failures.values())
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 244, in reraise_if_any
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     failures[0].reraise()
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 251, in reraise
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     six.reraise(*self._exc_info)
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 67, in _execute_task
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher     result = task.execute(**arguments)
2015-11-19 10:13:39.012 5803 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 653, in execute
2015-11-19 ...
(more)
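
The hedged checks I am running around this (the configuration keys are standard cinder options; the image UUID below is a placeholder for one of my images, and none of this is a known fix):

grep -E 'volume_driver|glance_api_version|glance_api_servers' /etc/cinder/cinder.conf
cinder create --image-id <image-uuid> --display-name copytest 1    # reproduce the image-to-volume path on its own
tail -f /var/log/cinder/volume.log                                 # watch the full traceback while the create runs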
2015-11-18 01:17:10 -0500 commented answer Glance does not see images when use api v1

Thanks, from the CLI I now see the images, but in the dashboard they still do not show up.

2015-11-18 01:16:19 -0500 received badge  Popular Question (source)
2015-11-05 20:28:05 -0500 commented answer Glance does not see images when use api v1

The config file has already been corrected, but it did not help:

OPENSTACK_API_VERSIONS = {
#    "data-processing": 1.1,
#    "identity": 3,
#    "volume": 2,
     "image": 2,
}

But it did not help; in the dashboard there are still no images.
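
One step I may have skipped after editing local_settings (assuming the stock RDO/Apache deployment) is restarting the web server so Horizon actually reloads the setting:

systemctl restart httpd memcached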

2015-11-05 05:19:21 -0500 received badge  Editor (source)
2015-11-05 05:18:04 -0500 commented answer Glance does not see images when use api v1

Well, if the endpoints differ for different API versions, why do I only see images via the v2 API and not see them in the dashboard? The output of the commands with --debug is above.

2015-11-05 05:17:48 -0500 answered a question Glance does not see images when use api v1

For glance I have one endpoint:

keystone endpoint-list

/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)
+----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+----------------------------------+
|                id                |   region  |                    publicurl                    |                   internalurl                   |                  adminurl                  |            service_id            |
+----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+----------------------------------+
| 32ef7d81873d449a9aedc141a639a48e | RegionOne |    http://172.28.83.182:8776/v1/$(tenant_id)s   |    http://172.28.83.182:8776/v1/$(tenant_id)s   | http://172.28.83.182:8776/v1/$(tenant_id)s | 8633fe5427e345e0bbca45112a2ac4e9 |
| 57f990a1ec5d43a5be772de33c50a475 | RegionOne |          http://172.28.83.182:5000/v2.0         |          http://172.28.83.182:5000/v2.0         |      http://172.28.83.182:35357/v2.0       | 2028bdc15ddc495eb9b6193d239e5aa1 |
| 6e99e60bda2145718df7756c339cfec3 | RegionOne |    http://172.28.83.182:8776/v2/$(tenant_id)s   |    http://172.28.83.182:8776/v2/$(tenant_id)s   | http://172.28.83.182:8776/v2/$(tenant_id)s | 8fa310440a2f4a8cbb450b0dc47ea938 |
| 81e165ef2fa849d68c6dfd4c7cc5790f | RegionOne |    http://172.28.83.182:8774/v2/$(tenant_id)s   |    http://172.28.83.182:8774/v2/$(tenant_id)s   | http://172.28.83.182:8774/v2/$(tenant_id)s | 6e1b0c849a5a45868dbc24bb6c545342 |
| 84edb48cd8814e87a09a7bc8122e6ff4 | RegionOne |            http://172.28.83.182:9696/           |            http://172.28.83.182:9696/           |         http://172.28.83.182:9696/         | 7c77fc2acdb94365b515bb128cd32a11 |
| a1ecc0899b89441cb470836d661e6836 | RegionOne |            http://172.28.83.182:9292            |            http://172.28.83.182:9292            |         http://172.28.83.182:9292          | 2d2d6e6484784b2fb9d6ba2152085810 |
| b91bd621d610450698b434d8aabd2251 | RegionOne |    http://172.28.83.182:8004/v1/%(tenant_id)s   |    http://172.28.83.182:8004/v1/%(tenant_id)s   | http://172.28.83.182:8004/v1/%(tenant_id)s | dee41ff8d2804c8795b9cf8ad40f0888 |
| b95a9ce32fc54efd86338f3a5260aab5 | RegionOne | http://172.28.83.182:8080/v1/AUTH_%(tenant_id)s | http://172.28.83.182:8080/v1/AUTH_%(tenant_id)s |         http://172.28.83.182:8080          | c949520500f34c02875d4e8469527f72 |
| bab695d57ce04691873dff4cdeda580e | RegionOne |            http://controller:8000/v1            |            http://controller:8000/v1            |         http://controller:8000/v1          | 5370af3847af4508a5ac9d59229cf2d4 |
+----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+--------------------------------------------+----------------------------------+

This is the output from --debug:

glance --debug image-list

curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}bad12623bcb06ab72499160d77a4639ddafeb02a' -H 'Content-Type: application/octet-stream' http://172.28.83.182:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20

HTTP/1.1 200 OK
date: Thu, 05 Nov 2015 11:14:43 GMT
content-length: 14
content-type: application/json; charset=UTF-8
x-openstack-request-id: req-req-762243f5-e130-4ac9-ab5f-24e87b0713de

{"images": []}

+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+

# glance --debug --os-image-api-version 2 image-list
curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}f8d18dccc39d6bdabe430020c39af46f40601e5d' -H 'Content-Type: application/octet-stream' http://172.28.83.182:9292/v2/schemas/image

HTTP/1.1 200 OK
date: Thu, 05 Nov 2015 11:15:16 GMT
content-length: 3867
content-type: application/json; charset=UTF-8
x-openstack-request-id: req-req-7eeb3722-2702-4cb4-9ac7-3b21d013a59a

{"additionalProperties": {"type": "string"}, "name": "image", "links": [{"href": "{self}", "rel": "self"}, {"href": "{file}", "rel": "enclosure"}, {"href": "{schema}", "rel": "describedby"}], "properties": {"status": {"enum": ["queued", "saving", "active", "killed", "deleted", "pending_delete"], "type": "string", "description": "Status of the image (READ-ONLY)"}, "tags": {"items": {"type": "string", "maxLength": 255}, "type": "array", "description": "List of strings related to the image"}, "kernel_id": {"pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string", "description": "ID of image stored in Glance that should be used as the kernel when booting an AMI-style image ...
(more)
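
Since the v1 listing only returns images that are public or owned by the requesting tenant, I am also checking owner and visibility on one of the images the v2 API does show (the ID is taken from the v2 listing in my question; not claiming this is the cause):

glance --os-image-api-version 2 image-show 0d10cf8d-e254-4bb1-8fb5-9ff9e10df1ff | egrep 'owner|visibility'
keystone tenant-list    # compare the owner above with the tenant I am authenticating as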
2015-11-03 13:37:22 -0500 asked a question Glance does not see images when use api v1

Hi! I have a problem with Glance when I'm using API v1. Example:

glance -v --os-image-api-version 1 image-list

+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+

Or just: glance image-list

+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+

But, when i use: glance -v --os-image-api-version 2 image-list or nova image-list I see:

+--------------------------------------+-------------+
| ID                                   | Name        |
+--------------------------------------+-------------+
| 0d10cf8d-e254-4bb1-8fb5-9ff9e10df1ff | 123         |
| f9904cec-98d0-4350-bbc8-3e51d2a1fdb2 | 213         |
+--------------------------------------+-------------+

And in the dashboard, under "Project" I do not see my images, while under the "Admin" panel the images are there.

What might the problem be? Please help me :).
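
As a hedged workaround on the CLI side (the dashboard part is a separate issue), the client can be pinned to the v2 API through the environment instead of passing the flag every time:

export OS_IMAGE_API_VERSION=2   # picked up by python-glanceclient
glance image-list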