
Planck-C's profile - activity

2019-12-22 22:07:58 -0500 received badge  Self-Learner (source)
2019-12-22 22:07:49 -0500 received badge  Student (source)
2019-09-30 11:59:56 -0500 received badge  Popular Question (source)
2019-09-30 11:59:56 -0500 received badge  Famous Question (source)
2019-09-30 11:59:56 -0500 received badge  Notable Question (source)
2018-04-24 17:42:08 -0500 received badge  Notable Question (source)
2018-04-24 17:42:08 -0500 received badge  Famous Question (source)
2018-02-11 01:33:13 -0500 received badge  Famous Question (source)
2018-02-06 09:39:19 -0500 received badge  Famous Question (source)
2018-01-31 08:28:38 -0500 received badge  Famous Question (source)
2017-11-02 13:21:22 -0500 received badge  Notable Question (source)
2017-10-28 12:53:29 -0500 received badge  Notable Question (source)
2017-10-28 12:52:53 -0500 received badge  Notable Question (source)
2017-10-27 16:40:27 -0500 answered a question neutron-openvswitch-agent out of sync

The root cause of the missing flow was: https://ask.openstack.org/en/question...

2017-10-27 16:11:52 -0500 received badge  Teacher (source)
2017-10-27 16:11:52 -0500 received badge  Self-Learner (source)
2017-10-27 15:15:18 -0500 received badge  Popular Question (source)
2017-10-23 10:56:45 -0500 received badge  Popular Question (source)
2017-10-19 13:11:02 -0500 answered a question magnum cluster create k8s cluster Error: ResourceFailure

Upon further investigation of the Heat stack, the problem stems from the Fedora instance's volume being too small. It turns out the fedora-atomic-latest image requires at least 6G of disk space. After resizing the Cinder volume in Packstack, cluster creation progressed past this error.
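
A hedged way to verify and fix this (the image name is from the question; the answer-file setting and size value are illustrative):

    # check the image's declared minimum disk (min_disk is a standard Glance field)
    openstack image show fedora-atomic-latest -c min_disk -c size

    # in the Packstack answer file, enlarge the loopback-backed cinder-volumes VG
    # (illustrative value; re-run packstack with the same answer file afterwards)
    CONFIG_CINDER_VOLUMES_SIZE=40G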

2017-10-17 18:14:37 -0500 received badge  Editor (source)
2017-10-17 16:30:32 -0500 asked a question magnum cluster create k8s cluster Error: ResourceFailure

Environment: Pike. Commands:

    magnum cluster-template-create k8s-cluster-template --image fedora-atomic-latest --keypair mypackstackkey --external-network public --dns-nameserver 8.8.8.8 --flavor m1.tiny --docker-volume-size 3 --network-driver flannel --coe kubernetes --volume-driver cinder

    magnum cluster-create k8s-cluster --cluster-template k8s-cluster-template --master-count 1 --node-count 2

symptom: CREATE_FAILED, heat-engine error:

    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource Traceback (most recent call last):
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 831, in _action_recorder
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource     yield
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 939, in _do_action
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource     yield self.action_handler_task(action, args=handler_args)
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/scheduler.py", line 334, in wrapper
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource     step = next(subtask)
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 884, in action_handler_task
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource     handler_data = handler(*args)
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resources/template_resource.py", line 286, in handle_create
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource     self.child_params())
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resources/stack_resource.py", line 315, in create_with_template
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource     kwargs['template_id'])
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 329, in __exit__
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource     return self._should_ignore_ex(exc_val)
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resources/stack_resource.py", line 367, in translate_remote_exceptions
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource     raise exception.ResourceFailure(message, self, action=self.action)
    2017-10-17 20:52:12.288 16601 ERROR heat.engine.resource ResourceFailure: resources[0]: Property error: resources.docker_volume.properties.volume_type: Error validating value '': The VolumeType () could not be found.

Attempted to add a default volume type (lvm) in /usr/lib/python2.7/site-packages/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml; it does not seem to help.
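
The error suggests no Cinder volume type is defined, so the template's empty default fails validation. A hedged alternative to patching the template (the type name lvmdriver-1 is illustrative, and the Magnum option is an assumption about the Pike config - verify it exists in your release):

    # create a volume type and make it Cinder's default
    openstack volume type create lvmdriver-1

    # /etc/cinder/cinder.conf (restart the Cinder services afterwards)
    [DEFAULT]
    default_volume_type = lvmdriver-1

    # /etc/magnum/magnum.conf -- assumed option name
    [cinder]
    default_docker_volume_type = lvmdriver-1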

2017-10-16 23:01:13 -0500 answered a question packstack Magnum: Error during MagnumServicePeriodicTasks.update_magnum_service: MultipleResultsFound: Multiple rows were found for one()

Maybe something went wrong in the initial installation. Dropping the Magnum and Heat databases and reinstalling with Packstack (using the same answer file) seems to have solved the problem. Now there is only one magnum-conductor, in the "Up" state.
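
A minimal sketch of that recovery, assuming a local MariaDB root login and the default answer-file path (both illustrative):

    # drop the stale databases, including the duplicate service records
    mysql -u root -e "DROP DATABASE IF EXISTS magnum; DROP DATABASE IF EXISTS heat;"

    # re-run packstack with the same answer file so it recreates both databases
    packstack --answer-file=/root/packstack-answers.txt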

2017-10-16 20:10:14 -0500 received badge  Popular Question (source)
2017-10-16 20:08:08 -0500 commented question packstack Magnum: Error during MagnumServicePeriodicTasks.update_magnum_service: MultipleResultsFound: Multiple rows were found for one()

Indeed, two instances of the conductor were started, and both are in the "down" state. Why two instances? Stopping and restarting the service leaves things in the same state.
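
For reference, a hedged sketch of how to observe this (the systemd unit name matches the RDO packaging):

    # list magnum services and their states as recorded in the database
    magnum service-list

    # check the actual conductor process on the controller
    systemctl status openstack-magnum-conductor.service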

2017-10-15 23:09:26 -0500 received badge  Popular Question (source)
2017-10-14 12:58:05 -0500 asked a question packstack Magnum: Error during MagnumServicePeriodicTasks.update_magnum_service: MultipleResultsFound: Multiple rows were found for one()

Installed Magnum with Packstack. The magnum-conductor service logs the following error, and cluster creation fails.

    2017-10-14 17:52:35.478 28101 WARNING oslo_messaging.server [-] Possible hang: stop is waiting for start to complete
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task [req-068caab6-0c6e-47dc-ad50-caf29cd21c3f - - - - -] Error during MagnumServicePeriodicTasks.update_magnum_service: MultipleResultsFound: Multiple rows were found for one()
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task Traceback (most recent call last):
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 220, in run_periodic_tasks
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task     task(self, context)
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/magnum/service/periodic.py", line 43, in handler
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task     func(self, ctx)
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/magnum/servicegroup/magnum_service_periodic.py", line 46, in update_magnum_service
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task     ctx, self.host, self.binary)
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task     result = fn(cls, context, *args, **kwargs)
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/magnum/objects/magnum_service.py", line 62, in get_by_host_and_binary
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task     host, binary)
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/magnum/db/sqlalchemy/api.py", line 528, in get_magnum_service_by_host_and_binary
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task     return query.one()
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task   File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2817, in one
    2017-10-14 17:52:37.840 28114 ERROR oslo_service.periodic_task     "Multiple rows were found for one()")
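
The traceback shows query.one() over the magnum_service table finding duplicate rows for one host/binary pair. A hedged way to inspect them (table and column names taken from the traceback; BINARY is a reserved word in MySQL, hence the backquotes):

    mysql -u root magnum -e 'SELECT id, host, `binary` FROM magnum_service;'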

2017-10-03 14:19:37 -0500 answered a question ovs connection to local OF controller failed when having two flat provider networks

This turned out to be an issue with the Ryu (native mode) OpenFlow controller. When configured with multiple flat provider networks, the controller seems to reject connections from the two external bridges.

So I switched the of_interface mode back to ovs-ofctl; the bridge connections are steady and the flows are what I would expect.
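
Spelled out, the change is one setting (a hedged sketch; section placement per the Newton OVS agent config, and the agent must be restarted afterwards):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [ovs]
    of_interface = ovs-ofctl

    systemctl restart neutron-openvswitch-agent.service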

2017-10-02 12:59:09 -0500 asked a question ovs connection to local OF controller failed when having two flat provider networks

Setup: Newton. /etc/neutron/plugins/ml2/openvswitch_agent.ini is configured with:

    bridge_mappings = external:br-ex,publicnet:br-ex-pub

so there are two flat provider networks.
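
For context, the two flat provider networks map onto those bridges along these lines (a hedged sketch; the network names are illustrative):

    openstack network create --provider-network-type flat --provider-physical-network external --external ext-net
    openstack network create --provider-network-type flat --provider-physical-network publicnet --external pub-net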

Symptoms:

1) ovs-ofctl dump-flows <br-ex|br-ex-pub> shows flows are missing. Sometimes restarting the agent gives different results.

2) ovs-vswitchd.log shows the two bridges alternately connecting while the other's connection is closed by the peer:

    2017-10-02T17:55:20.319Z|01335|rconn|INFO|br-ex<->tcp:127.0.0.1:6633: connecting...
    2017-10-02T17:55:20.320Z|01336|rconn|INFO|br-ex<->tcp:127.0.0.1:6633: connected
    2017-10-02T17:55:20.321Z|01337|rconn|INFO|br-ex-pub<->tcp:127.0.0.1:6633: connection closed by peer
    2017-10-02T17:55:28.321Z|01338|rconn|INFO|br-ex-pub<->tcp:127.0.0.1:6633: connected
    2017-10-02T17:55:28.322Z|01339|rconn|INFO|br-ex<->tcp:127.0.0.1:6633: connection closed by peer
    2017-10-02T17:55:29.320Z|01340|rconn|INFO|br-ex<->tcp:127.0.0.1:6633: connecting...
    2017-10-02T17:55:29.321Z|01341|rconn|INFO|br-ex<->tcp:127.0.0.1:6633: connected
    2017-10-02T17:55:29.321Z|01342|rconn|INFO|br-ex-pub<->tcp:127.0.0.1:6633: connection closed by peer

3) If the configuration has only one bridge mapping (to either br-ex or br-ex-pub), the connections are steady and the flows are what I would expect.

Some observations: there are two possibly related bugs, https://lists.launchpad.net/yahoo-eng-team/msg60575.html and https://bugzilla.redhat.com/show_bug.cgi?id=1425507.

But no formal resolution yet.

2017-10-02 12:15:14 -0500 received badge  Enthusiast
2017-09-28 21:19:19 -0500 asked a question neutron-openvswitch-agent out of sync

Setup: one controller/network node; two compute nodes; DVR.

Symptom: compute1 cannot ping the provider network through DVR, while compute2 works properly.

Observations: compute2 has the proper br-ex flow entries, particularly:

    cookie=0xb766867925a68f74, duration=2986.041s, table=0, n_packets=21, n_bytes=994, idle_age=592, priority=4,in_port=1,dl_vlan=2 actions=strip_vlan,NORMAL

compute1 is sometimes missing that particular flow; other times, after restarting neutron-openvswitch-agent.service, it is missing all flows. Inspecting the agent log:

    2017-09-28 21:59:20.442 4024 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8d199a48-8ea0-4c77-a785-fcf1cd7e4294 - - - - -] Agent out of sync with plugin!

Attempted restarting the controller and the compute nodes, but compute1 still does not work properly.

2017-09-28 21:19:19 -0500 answered a question (Newton) SNAT drop default traffic

Ahhh - the infamous restarts come to the rescue: stop neutron-openvswitch-agent.service, restart openvswitch.service, start neutron-openvswitch-agent.service.
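
As commands (a hedged sketch; the unit names match the RDO/Packstack packaging):

    systemctl stop neutron-openvswitch-agent.service
    systemctl restart openvswitch.service
    systemctl start neutron-openvswitch-agent.service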

Now br-ex has the flow that strips the VLAN tag and does normal forwarding:

    NXST_FLOW reply (xid=0x4):
     cookie=0xad340cd9b136afd0, duration=1302.571s, table=0, n_packets=70, n_bytes=4060, idle_age=622, priority=4,in_port=1,dl_vlan=1 actions=strip_vlan,NORMAL
     cookie=0xad340cd9b136afd0, duration=1304.226s, table=0, n_packets=9, n_bytes=1050, idle_age=1226, priority=2,in_port=1 actions=resubmit(,1)
     cookie=0xad340cd9b136afd0, duration=1304.574s, table=0, n_packets=0, n_bytes=0, idle_age=1304, priority=0 actions=NORMAL
     cookie=0xad340cd9b136afd0, duration=1304.225s, table=0, n_packets=314, n_bytes=26028, idle_age=7, priority=1 actions=resubmit(,3)

2017-09-28 21:19:19 -0500 asked a question (Newton) SNAT drop default traffic

Setup: one controller/network node running SNAT (two other nodes are disabled for debugging purposes); three compute nodes with DVR.

Symptom: 1) VMs with a floating IP can reach the provider network gateway - OK; 2) VMs without a FIP cannot reach the provider network. Tracing packets reveals that ARP reaches the controller's SNAT, then goes into br-int and then br-ex, but the flow rules show br-ex dropping the packet:

br-ex flow rules:

    ovs-ofctl dump-flows br-ex table=0
    NXST_FLOW reply (xid=0x4):
     cookie=0xa17bdbc1ee2b5660, duration=6717.517s, table=0, n_packets=354, n_bytes=21600, idle_age=568, priority=2,in_port=1 actions=resubmit(,1)
     cookie=0xa17bdbc1ee2b5660, duration=6717.852s, table=0, n_packets=0, n_bytes=0, idle_age=9736, priority=0 actions=NORMAL
     cookie=0xa17bdbc1ee2b5660, duration=6717.516s, table=0, n_packets=4435, n_bytes=318022, idle_age=5, priority=1 actions=resubmit(,3)

    ovs-ofctl dump-flows br-ex table=1
    NXST_FLOW reply (xid=0x4):
     cookie=0xa17bdbc1ee2b5660, duration=6749.800s, table=1, n_packets=354, n_bytes=21600, idle_age=601, priority=0 actions=resubmit(,2)

    ovs-ofctl dump-flows br-ex table=2
    NXST_FLOW reply (xid=0x4):
     cookie=0xa17bdbc1ee2b5660, duration=6790.906s, table=2, n_packets=354, n_bytes=21600, idle_age=642, priority=2,in_port=1 actions=drop
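
To confirm which rule drops the packet, the datapath can be traced (a hedged sketch; the flow fields are illustrative):

    # trace an ARP packet entering br-ex on port 1 (dl_type 0x0806 = ARP)
    ovs-appctl ofproto/trace br-ex in_port=1,dl_type=0x0806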

ovs-vsctl show:

    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
        fail_mode: secure
        Port "qg-14127f17-20"
            tag: 1
            Interface "qg-14127f17-20"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap13d9a8d7-62"
            tag: 4
            Interface "tap13d9a8d7-62"
                type: internal
        Port "qr-cb9d8b4b-33"
            tag: 2
            Interface "qr-cb9d8b4b-33"
                type: internal
        Port "sg-35a9d87d-42"
            tag: 2
            Interface "sg-35a9d87d-42"
                type: internal
        Port int-br-ex-pub
            Interface int-br-ex-pub
                type: patch
                options: {peer=phy-br-ex-pub}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}

    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
        fail_mode: secure
        Port "bond0.363"
            Interface "bond0.363"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}