locklock's profile - activity

2020-09-02 09:34:50 -0500 received badge Notable Question (source)
2020-09-02 07:43:38 -0500 received badge Popular Question (source)
2020-07-28 08:14:05 -0500 asked a question Swift upgrade - object creation OK but POST metadata returns 404 immediately after - GET on the object still returns 200

Hi there, we are looking for some help with the upgrade of our production Swift clusters.

Current release: Ussuri. Previous: Kilo.

We were in the process of upgrading from Kilo to Ussuri, but immediately after we upgraded our proxy servers, the functional tests run by our monitoring system started failing when creating an object and POSTing metadata on it immediately after the creation. Still, when we do a GET on the object after the 404, we can see the X-Delete-After metadata has been successfully added. We also had errors about timestamps (in the far future) for the X-Delete-After metadata, so we decided to go for Ussuri instead of the Stein version, but got the same 404.

We tried to modify some configuration on the storage nodes regarding the object/container updaters and expirers, but it looks like the error comes from the proxy servers: it showed up only after the proxies were upgraded to the latest version. The proxies that were still on Kilo did not produce the error when the monitoring system ran the test through them.

Our current test does the following (a rough curl equivalent is sketched after the list):

1. Check Keystone and get a token
2. Upload an object
3. GET the object and its content
4. Add metadata X-Delete-After: 30s with a POST
5. Check the file has been successfully deleted after the expiry
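
A rough curl equivalent of steps 2-4 (storage URL and token are hypothetical, not from our setup):

    # Step 1 would be a Keystone auth request; assume we already have the token.
    STORAGE_URL=https://swift.example.com/v1/AUTH_monitoring
    TOKEN="<token from Keystone>"

    # 2 - upload an object
    curl -i -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @testfile \
        "$STORAGE_URL/testcontainer/testobj"

    # 3 - GET the object and its content
    curl -i -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/testcontainer/testobj"

    # 4 - POST X-Delete-After; this is the request that now returns 404
    curl -i -X POST -H "X-Auth-Token: $TOKEN" -H "X-Delete-After: 30" \
        "$STORAGE_URL/testcontainer/testobj"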

What we are seeing: token OK, upload OK, GET object OK; adding the metadata fails with a 404:

    'reason' => 'Not Found',
    'protocol' => 'HTTP/1.1',
    'content' => '<html>
    Not Found
    The resource could not be found.
    </html>',

Checking the metadata afterwards: OK.

We are wondering if we missed something in the configuration during the upgrade, or if this is a new bug that we did not spot among the currently open bug reports.

Update: By checking the logs when we query a proxy on one site, we can see that the POST returns a 202 on the local storage nodes but a 404 on the two others, which are on a different geographic site. That could explain the 404 returned, but why is it behaving like this?
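
To check which object servers on each site are supposed to hold the test object (and therefore where the POST should land), something like swift-get-nodes should help; the account/container/object names below are illustrative:

    # Run on a node that has the object ring
    swift-get-nodes /etc/swift/object.ring.gz AUTH_monitoring testcontainer testobj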

Do you have any idea what could be going wrong?

Thanks a lot.

2018-05-22 03:32:08 -0500 received badge Popular Question (source)
2018-05-22 03:32:08 -0500 received badge Famous Question (source)
2018-05-22 03:32:08 -0500 received badge Notable Question (source)
2017-09-17 12:33:13 -0500 received badge Famous Question (source)
2017-08-15 19:30:58 -0500 received badge Famous Question (source)
2017-07-24 04:51:22 -0500 asked a question Merge a qcow2 image and backing file into a single file

Hello everyone,

I would like to merge a qcow2 image and its backing file into a single file.

Indeed, I have a directory containing three files (disk, disk.info, libvirt.xml) and the corresponding backing file (/var/lib/nova/instances/_base/backing_file).

I would like to merge them into a single image.

Using virt-manager on my host I have no problem, but I would like to make it work on OpenStack.
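
For reference, qemu-img can flatten an overlay and its backing file into one standalone image; a minimal sketch, with illustrative paths (not from the original post):

    # Inside the instance directory
    cd /var/lib/nova/instances/<instance-uuid>

    # Show which backing file in _base the overlay points at
    qemu-img info disk

    # Write a single self-contained qcow2; the backing chain is read through
    qemu-img convert -O qcow2 disk /tmp/merged.qcow2

    # The result should report no backing file
    qemu-img info /tmp/merged.qcow2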

Thanks

2017-07-20 04:46:06 -0500 asked a question Port Mirror [OVS]

Hello,

I'm trying to set up port mirroring between VMs in OpenStack (Ocata on Ubuntu 16.04).

ml2_plugin = GRE

I looked at the Open vSwitch documentation: http://docs.openvswitch.org/en/latest/faq/configuration/

But I do not find the network elements from the documentation on my setup.

So, how could I cleanly create port mirroring between VMs?
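
For reference, the linked OVS FAQ creates a SPAN-style mirror roughly like this; the sketch uses one of the tap ports from my output below as the destination purely for illustration (a dedicated destination port would normally be used):

    # Mirror all br-int traffic to tap84bb5cda-a5 (illustrative choice)
    ovs-vsctl -- --id=@p get Port tap84bb5cda-a5 \
              -- --id=@m create Mirror name=vmmirror select-all=true output-port=@p \
              -- set Bridge br-int mirrors=@m

    # Remove the mirror again
    ovs-vsctl clear Bridge br-int mirrors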

My ovs-vsctl output:

root@network:# ovs-vsctl show
4a9906a6-68b2-4f55-b0a8-5f6c343fc175
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-ac100105"
            Interface "gre-ac100105"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="y.y.y.y", out_key=flow, remote_ip="x.x.x.x"}
        Port "gre-ac100104"
            Interface "gre-ac100104"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="y.y.y.y", out_key=flow, remote_ip="x.x.x.x"}
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "eno2"
            Interface "eno2"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "tap8e58ddd4-9d"
            tag: 14
            Interface "tap8e58ddd4-9d"
                type: internal
        Port "tap84bb5cda-a5"
            tag: 23
            Interface "tap84bb5cda-a5"
                type: internal
        Port "qg-9dd21fcf-23"
            tag: 11
            Interface "qg-9dd21fcf-23"
                type: internal
        Port "tap8a18193e-6a"
            tag: 19
            Interface "tap8a18193e-6a"
                type: internal
        Port "qr-4beb49f2-89"
            tag: 18

Any help?

Thanks

2017-06-27 04:17:23 -0500 commented answer I cannot launch instance : status = Error

Resolved!

2017-06-23 04:01:59 -0500 commented answer I cannot launch instance : status = Error

See message above.

2017-06-23 03:37:31 -0500 received badge Notable Question (source)
2017-06-23 03:15:21 -0500 answered a question I cannot launch instance : status = Error

In the Horizon dashboard, I get this error message:

Error: Failed to perform requested operation on instance "deb", the instance has an error status: Please try again later [Error: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 90c6159e-28dc-4ad1-96a3-a0548be26065. Last exception: Invalid input for field 'identity/password/user/password': None is not of type 'string' (HTTP 400) (Request-ID: req-b].

root@controller:# tail -f /var/log/nova/nova-conductor.log

2017-06-23 10:56:58.955 5609 WARNING oslo_config.cfg [req-4476f711-dd2d-463b-94a4-03aeb8317776 c070e9c342824ef3ba41336373a17b91 05696ec4daa44a1c89a26761b621a507 - - -] Option "scheduler_default_filters" from group "DEFAULT" is deprecated. Use option "enabled_filters" from group "filter_scheduler".
2017-06-23 10:57:01.717 5610 ERROR nova.scheduler.utils [req-4476f711-dd2d-463b-94a4-03aeb8317776 c070e9c342824ef3ba41336373a17b91 05696ec4daa44a1c89a26761b621a507 - - -] [instance: 6956232b-17fb-4344-ba20-331c3fb6bbf8] Error from last host: compute1 (node compute1): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1780, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2016, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance 6956232b-17fb-4344-ba20-331c3fb6bbf8 was re-scheduled: Invalid input for field 'identity/password/user/password': None is not of type 'string' (HTTP 400) (Request-ID: req-c25e7d09-ece3-43ec-ad2b-3c059f8e417d)\n"]
2017-06-23 10:57:01.718 5610 WARNING nova.scheduler.utils [req-4476f711-dd2d-463b-94a4-03aeb8317776 c070e9c342824ef3ba41336373a17b91 05696ec4daa44a1c89a26761b621a507 - - -] Failed to compute_task_build_instances: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 6956232b-17fb-4344-ba20-331c3fb6bbf8. Last exception: Invalid input for field 'identity/password/user/password': None is not of type 'string' (HTTP 400) (Request-ID: req-c25e7d09-ece3-43ec-ad2b-3c059f8e417d)

I don't understand which identity it is talking about.
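
For what it's worth, the field path in the error matches the body of a Keystone v3 token request, so whichever service built this request sent a null password. A sketch of the request Keystone is validating (values illustrative):

    curl -s -X POST http://controller:5000/v3/auth/tokens \
         -H 'Content-Type: application/json' -d '
    {
      "auth": {
        "identity": {
          "methods": ["password"],
          "password": {
            "user": {
              "name": "nova",
              "domain": {"name": "default"},
              "password": "pass"
            }
          }
        }
      }
    }'
    # "identity/password/user/password" is the "password": "pass" field above.
    # A service with no password set in its auth section sends null there,
    # which Keystone rejects with "None is not of type 'string'".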

2017-06-23 03:09:18 -0500 commented answer I cannot launch instance : status = Error

Yes, the compute service is active (running) on the compute and controller nodes.

2017-06-22 23:21:34 -0500 received badge Popular Question (source)
2017-06-22 04:50:00 -0500 received badge Editor (source)
2017-06-22 04:42:44 -0500 asked a question I cannot launch instance : status = Error

Hello everyone,

I am trying to install OpenStack Ocata on Ubuntu 16.04 with the OVS self-service networking option.

The log shows the following:

 root@compute1:# tail -f /var/log/neutron/neutron-openvswitch-agent.log

2017-06-22 10:29:57.154 20587 INFO neutron.agent.securitygroups_rpc [req-4b091856-97e4-40ca-89de-447d6f6bbf6c c070e9c342824ef3ba41336373a17b91 05696ec4daa44a1c89a26761b621a507 - - -] Security group member updated [u'114c0888-c226-4486-92fc-e50270e20209']
2017-06-22 10:29:57.744 20587 INFO neutron.agent.securitygroups_rpc [req-7f0bfd06-5b4f-44a2-a1b8-920c767d2df1 c070e9c342824ef3ba41336373a17b91 05696ec4daa44a1c89a26761b621a507 - - -] Security group member updated [u'114c0888-c226-4486-92fc-e50270e20209']
2017-06-22 10:29:58.866 20587 INFO neutron.agent.common.ovs_lib [req-a3716861-ce7c-4f65-9d3f-b4b3d3d517c0 - - - - -] Port 38b862ba-495b-4d79-bd9c-1dfa19815d7a not present in bridge br-int
2017-06-22 10:29:58.867 20587 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-a3716861-ce7c-4f65-9d3f-b4b3d3d517c0 - - - - -] port_unbound(): net_uuid None not managed by VLAN manager
2017-06-22 10:29:58.867 20587 INFO neutron.agent.securitygroups_rpc [req-a3716861-ce7c-4f65-9d3f-b4b3d3d517c0 - - - - -] Remove device filter for [u'38b862ba-495b-4d79-bd9c-1dfa19815d7a']
2017-06-22 10:30:00.149 20587 INFO neutron.agent.securitygroups_rpc [req-d7f28cbf-ecf4-4370-887a-d06ca0d58bc3 c070e9c342824ef3ba41336373a17b91 05696ec4daa44a1c89a26761b621a507 - - -] Security group member updated [u'114c0888-c226-4486-92fc-e50270e20209']
2017-06-22 10:30:00.673 20587 INFO neutron.agent.securitygroups_rpc [req-f69d2133-cbf1-4149-9ad0-28575cd1fe46 c070e9c342824ef3ba41336373a17b91 05696ec4daa44a1c89a26761b621a507 - - -] Security group member updated [u'114c0888-c226-4486-92fc-e50270e20209']
2017-06-22 10:30:00.867 20587 INFO neutron.agent.common.ovs_lib [req-a3716861-ce7c-4f65-9d3f-b4b3d3d517c0 - - - - -] Port 13cdb898-5724-4201-ab72-368fb03fef20 not present in bridge br-int
2017-06-22 10:30:00.868 20587 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-a3716861-ce7c-4f65-9d3f-b4b3d3d517c0 - - - - -] port_unbound(): net_uuid None not managed by VLAN manager
2017-06-22 10:30:00.869 20587 INFO neutron.agent.securitygroups_rpc [req-a3716861-ce7c-4f65-9d3f-b4b3d3d517c0 - - - - -] Remove device filter for [u'13cdb898-5724-4201-ab72-368fb03fef20']


root@controller:# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  6 | nova-consoleauth | controller | internal | enabled | up    | 2017-06-22T09:24:15.000000 |
|  7 | nova-scheduler   | controller | internal | enabled | up    | 2017-06-22T09:24:17.000000 |
|  8 | nova-conductor   | controller | internal | enabled | up    | 2017-06-22T09:24:17.000000 |
|  9 | nova-compute     | compute1   | nova     | enabled | up    | 2017-06-22T09:24:18.000000 |
| 10 | nova-compute     | compute2   | nova     | enabled | up    | 2017-06-22T09:24:17.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+


root@controller:# openstack network agent list
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host     | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| 02da8564-39c0-41c6-88a8-29569fff8255 | DHCP agent         | network  | nova              | True  | UP    | neutron-dhcp-agent        |
| 2baff957-c08f-44e8-960b-bc3231b63b06 | Open vSwitch agent | compute1 | None              | True  | UP    | neutron-openvswitch-agent |
| 36ded345-fc9b-4ccb-8889-17e7ae9ff9ee | Open vSwitch agent | compute2 | None              | True  | UP    | neutron-openvswitch-agent |
| 9213326f-47f5-4a80-9581-69a2dd7b6702 | Open vSwitch agent | network  | None              | True  | UP    | neutron-openvswitch-agent |
| ad731dfc-d164-4b6b-8d04-da2e1461d358 | L3 agent           | network  | nova              | True  | UP    | neutron-l3-agent          |
| c9e4fbb2-cbcd-4c08-be00-4c71c71f0484 | Metadata agent     | network  | None              | True  | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+




root@controller:# cat /etc/neutron/neutron.conf | egrep -v '(^$|^#)'

    [DEFAULT]
    rpc_backend = rabbit
    verbose = True
    auth_strategy = keystone
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = True
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True
    nova_url = http://controller:8774/v2
    core_plugin = ml2

    [agent]
    root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

    [database]
    connection =mysql+pymysql://neutron:pass@controller/neutron

    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password =pass
    admin_user =admin
    admin_password =pass

    [nova]
    auth_url = http://controller:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = nova
    password =pass

    [oslo_messaging_notifications]
    transport_url = rabbit://openstack:pass@controller

    [oslo_messaging_rabbit]
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password =pass

root@network:# cat /etc/neutron/neutron.conf | egrep -v '(^$|^#)'
[DEFAULT]
rpc_backend = rabbit
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
auth_strategy = keystone
verbose = True

[agent]
root_helper = sudo /usr ...
2017-05-31 03:46:04 -0500 received badge Fan
2017-05-30 02:42:04 -0500 received badge Notable Question (source)
2017-05-29 16:55:53 -0500 received badge Popular Question (source)
2017-05-29 10:30:17 -0500 commented question Network packets are dropped

Up!

2017-04-10 23:10:32 -0500 asked a question Network packets are dropped

I'm trying to capture the traffic between two pfSense interfaces on a third interface, using the pfSense 'Bridge' (SPAN) option. For instance, I want to see the traffic between LAN and WAN from the DMZ VM. This works very well on VMware, but on OpenStack I only see broadcast packets.

I did a tcpdump -i DMZinterface on pfSense: the traffic is present there, but not on the DMZ VM.

[LAN] 192.168.1.2---------192.168.1.1 PFSENSE 192.168.2.1----------- [DMZ] 192.168.2.2 | WAN

I do not know if OpenStack is blocking these packets.

I have already disabled port security and the security groups so that pfSense can route packets; a sketch of the kind of commands used is below.
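
A sketch of the usual way to do this with the Ocata-era CLI (port UUID hypothetical):

    # Security groups must be detached before port security can be disabled
    neutron port-update <port-uuid> --no-security-groups
    neutron port-update <port-uuid> --port-security-enabled=False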

Thank You

2017-04-10 23:10:32 -0500 asked a question Why are network packets dropped?

Hello everybody,

I am trying to capture the traffic from a pfSense interface in a VM on another network. For instance, the traffic between LAN and WAN should be visible in the DMZ VM.

[LAN] 192.168.1.2--------- 192.168.1.1 [pfsense] 192.168.2.1 ---------------- 192.168.2.2 [DMZ] | WAN

I have activated SPAN on pfSense (on the DMZ interface), but I only see broadcast traffic. On VMware it works with no problem, but on OpenStack the packets of a different network are not visible (except the broadcasts). I do not know if this is an OpenStack security mechanism. I have already disabled port-security on the interface so it can route packets.

My neutron network is Linux Bridge.

Thank you