Ask Your Question

Sideris's profile - activity

2019-01-28 03:30:13 -0500 received badge  Famous Question (source)
2018-10-04 00:03:20 -0500 received badge  Famous Question (source)
2018-09-26 02:13:39 -0500 received badge  Famous Question (source)
2018-09-24 21:28:28 -0500 received badge  Notable Question (source)
2018-09-01 06:37:22 -0500 received badge  Notable Question (source)
2018-08-23 10:54:05 -0500 received badge  Popular Question (source)
2018-07-16 03:29:46 -0500 received badge  Popular Question (source)
2018-07-05 04:17:49 -0500 answered a question Use magnum and etcd offline

Hi all,

This is my solution: get the two Docker images, saved as files for offline use: discovery.etcd.io.docker and etcd.io.docker

Make a script like this:

#!/bin/bash
# set the host IP for etcd; replace with your own host IP before running it.
export HostIP="10.26.9.19"

# Launch an etcd session in a container
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd \
 etcd \
 -name etcd0 \
 -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
 -initial-advertise-peer-urls http://${HostIP}:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://${HostIP}:2380 \
 -initial-cluster-state new

# Launch the discovery service in a container
docker run --name discovery -d --net=host quay.io/coreos/discovery.etcd.io

# Check that everything works
# Sleep to give the services time to start
sleep 5
curl -v -X PUT localhost:8087/new

Then set the discovery URL in magnum.conf, using your controller's name in the URL.

It works for me.
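As a sketch of that last step, the Magnum setting involved could look like this (the option name `etcd_discovery_service_endpoint_format` and the hostname `controller` are assumptions here; verify the option name against your Magnum release):

```ini
# /etc/magnum/magnum.conf -- sketch; option name assumed, check your release's docs
[cluster]
# Point cluster discovery at the local discovery container (port 8087, as in the
# script above) instead of the public https://discovery.etcd.io
etcd_discovery_service_endpoint_format = http://controller:8087/new?size=%(size)d
```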

2018-07-05 02:14:51 -0500 received badge  Editor (source)
2018-07-04 10:00:54 -0500 commented answer Instances doesn't get IP

After a new test: I restarted the linuxbridge agent on the controller while the service was up on the compute node, and now it works. I don't really know why.

2018-07-04 06:59:38 -0500 commented answer Instances doesn't get IP

But nothing happened next ...

2018-07-04 06:59:26 -0500 commented answer Instances doesn't get IP

I ran tcpdump on all interfaces. On the compute node, the interface (tapbe445cff-21) is the one the instance is plugged into. I see the DHCP answer: 12:26:40.380376 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP..... 12:26:45.38118 ARP, Request who-has 172.16.1.10 tell 172.16.1.2,

2018-07-04 05:19:12 -0500 commented question No VXLAN Traffic Across Compute Nodes

Hi,

I hit the same problem today; did you find a solution?

Thx

2018-07-04 02:21:13 -0500 commented answer Instances doesn't get IP

I tried putting the VXLAN network on the second interface and I get the same problem.

2018-07-03 22:26:58 -0500 received badge  Notable Question (source)
2018-07-03 15:31:25 -0500 commented answer Instances doesn't get IP

Yes, I saw this difference, but I tried both solutions and neither works.

The first time I installed my OpenStack it worked, with the bridge and VXLAN configured on one interface.

But I can ping in both directions.

I will try to configure VXLAN on the other interface tomorrow.

2018-07-03 08:45:04 -0500 received badge  Commentator
2018-07-03 08:45:04 -0500 commented answer Instances doesn't get IP

So I installed a compute service on my controller, and it worked.

The problem comes from the compute node; I don't know how to debug this. Can someone help me, please?

2018-07-03 07:48:07 -0500 commented answer Instances doesn't get IP

Is it normal that my DHCP namespace can't ping my compute node, but can ping my controller node, which is on the same network as my compute node?

2018-07-03 06:02:56 -0500 commented answer Instances doesn't get IP

Is the configuration of the bridge on my compute node correct?

I followed the same documentation last month and it worked; today it doesn't. Thanks.

2018-07-03 06:02:01 -0500 commented answer Instances doesn't get IP

I'm not a tcpdump expert. I captured on the controller bridge, the compute bridge, the router namespace interface and the DHCP namespace interface. I see the DHCP request reach the DHCP namespace, and I see the offer sent by the DHCP namespace.

But I don't see that offer come back.

2018-07-03 03:42:57 -0500 commented answer Instances doesn't get IP

OK, I found a way to configure the network interface of the instance: I used virsh to configure it. My private network is 172.16.1.0/24; I have a router with two interfaces, 192.168.10.102 and 172.16.1.1. I configured my CirrOS host with IP 172.16.1.9, but it can't ping the router. Any ideas?

2018-07-03 02:49:18 -0500 commented answer Instances doesn't get IP

The Neutron agents are up and enabled too.

2018-07-03 02:15:31 -0500 commented answer Instances doesn't get IP

Do you have a tip for changing the CirrOS keyboard layout? After that I will try to configure the network on my instance and test communication over the network.

Regards, thanks

2018-07-03 02:14:37 -0500 commented answer Instances doesn't get IP

Hi,

First, thanks for all the suggestions. All Nova agents are up and enabled. My OpenStack infrastructure runs on physical hardware and uses KVM.

For my tests I use the CirrOS 0.4 image, but when I connect to it over VNC, the keyboard layout is neither QWERTY nor AZERTY, so I cannot edit the configuration.

2018-07-03 02:09:25 -0500 received badge  Popular Question (source)
2018-07-02 09:43:41 -0500 commented answer Instances doesn't get IP

Hi,

Yes, DHCP is enabled on the subnet.

2018-07-02 08:35:12 -0500 received badge  Supporter (source)
2018-07-02 08:00:24 -0500 commented question Instances doesn't get IP

I answered you in the answer post below.

2018-07-02 07:59:52 -0500 answered a question Instances doesn't get IP

This is the ML2 configuration for the controller node, ml2_conf.ini:

[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

linuxbridge_agent.ini:

[DEFAULT]
debug = True
[agent]
[linux_bridge]
physical_interface_mappings = provider:enp6s0
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.20
l2_population = true

For the compute node, linuxbridge_agent.ini:

[DEFAULT]
debug = True
[agent]
[linux_bridge]
physical_interface_mappings = provider:enp6s0
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.21
l2_population = true

I traced the DHCP request of the instance: I ran tcpdump in the DHCP namespace, the router namespace, and on the controller and compute nodes. I see the DHCP request reach the DHCP namespace, but no reply comes back. I can send you the Wireshark captures.

Thanks

UPDATE 05 July 2018

I think the problem was a desynchronization between the controller node and the compute node. I started the neutron-linuxbridge-agent on the compute node and then ran this command on the controller node:

/etc/init.d/neutron-linuxbridge-cleanup restart ; /etc/init.d/neutron-linuxbridge-agent restart

And, magic, the problem was solved!

2018-07-02 03:00:24 -0500 asked a question Instances doesn't get IP

Hi all,

I have a problem with my OpenStack infrastructure. It is an offline infrastructure; the controller and compute nodes have no Internet access. I followed this documentation for the deployment (Queens release): https://docs.openstack.org/install-guide/openstack-services.html

I followed all the instructions to install Keystone, Glance, Nova and Neutron. Then, when I tried to deploy an instance, the instance didn't get an IP address. No errors in the logs. In /var/lib/neutron/dhcp I see the IP address reserved for the instance.

I can attach any file you want to help me. First, here is my configuration:

Controller: 2 interfaces:

2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:be brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21a:64ff:fed0:34be/64 scope link
       valid_lft forever preferred_lft forever
3: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:c0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21a:64ff:fed0:34c0/64 scope link
       valid_lft forever preferred_lft forever
4: enp4s0.748@enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:be brd ff:ff:ff:ff:ff:ff
    inet 10.26.9.19/25 brd 10.26.9.127 scope global enp4s0.748
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:64ff:fed0:34be/64 scope link
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:26:3c:f1:3d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
8: tap39c1a9de-db@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq99275f2e-5a state UP group default qlen 1000
    link/ether 56:a8:df:50:99:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: tapd75c2984-2e@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 7a:da:65:b4:1b:d0 brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: tap475e0c42-1d@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq99275f2e-5a state UP group default qlen 1000
    link/ether ca:71:3f:b1:e0:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 2
11: brqa7f0e57c-d0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.20/24 brd 192.168.10.255 scope global brqa7f0e57c-d0
       valid_lft forever preferred_lft forever
    inet6 fe80::189c:87ff:fec0:f839/64 scope link
       valid_lft forever preferred_lft forever
12: tapa9663a5e-cd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 76:96 ...
2018-06-24 19:14:40 -0500 asked a question Murano deployment failed

Hi all,

I've installed Murano on my OpenStack architecture, but when I want to deploy an environment I get this error:

2018-06-18 11:57:18 — Action deploy is scheduled
2018-06-18 11:57:18 — 'NoneType' object has no attribute 'get_user_id'

When I look at the logs, murano-engine.log shows these errors:

2018-06-18 13:57:18.879 21759 ERROR murano.common.engine [-] Exception 'NoneType' object has no attribute 'get_user_id' occurred during invocation of <system>: AttributeError: 'NoneType' object has no attribute 'get_user_id'
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine Traceback (most recent call last):
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/murano/common/engine.py", line 199, in execute
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine     self._create_trust()
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/murano/common/engine.py", line 318, in _create_trust
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine     self._session.token, self._session.project_id)
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/murano/common/auth_utils.py", line 96, in create_trust
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine     trustee_user = admin_client.session.auth.get_user_id(admin_client.session)
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine AttributeError: 'NoneType' object has no attribute 'get_user_id'
2018-06-18 13:57:18.879 21759 ERROR murano.common.engine
2018-06-18 13:57:18.918 21759 INFO murano.common.engine [-] Finished processing task: {"action": {"isException": true, "result": {"message": "'NoneType' object has no attribute 'get_user_id'", "details": "Traceback (most recent call last):\n  File \"/usr/lib/python2.7/dist-packages/murano/common/engine.py\", line 199, in execute\n    self._create_trust()\n  File \"/usr/lib/python2.7/dist-packages/murano/common/engine.py\", line 318, in _create_trust\n    self._session.token, self._session.project_id)\n  File \"/usr/lib/python2.7/dist-packages/murano/common/auth_utils.py\", line 96, in create_trust\n    trustee_user = admin_client.session.auth.get_user_id(admin_client.session)\nAttributeError: 'NoneType' object has no attribute 'get_user_id'\n"}}}
2018-06-18 13:57:18.920 21759 WARNING oslo_messaging.rpc.client [-] Using notification transport for RPC. Please use get_rpc_transport to obtain an RPC transport instance.
2018-06-18 13:57:19.074 21759 ERROR oslo_messaging.rpc.server [-] Exception during message handling: KeyError: u'\'model\'\nTraceback (most recent call last):\n\n  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n\n  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n\n  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch\n    result = func(ctxt, **new_args)\n\n  File "/usr/lib/python2.7/dist-packages/murano/common/server.py", line 47, in process_result\n    model = result[\'model\']\n\nKeyError: \'model\'\n'
2018-06-18 13:57:19.074 21759 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-06-18 13:57:19.074 21759 ERROR ...
2018-06-24 19:14:40 -0500 asked a question Use magnum and etcd offline

Hi all,

Today I have a problem: I must build an OpenStack Queens architecture without Internet access. I installed the basic components (Keystone, Glance, Nova, Neutron, Horizon, Cinder and Heat) and everything works nicely. Now I want to install Magnum, following this doc: https://docs.openstack.org/magnum/queens/install/index.html

The installation went correctly, but at the verification chapter, when I create the swarm-cluster, I get this error:

# openstack coe cluster show swarm-cluster
+---------------------+--------------------------------------------------------------------------+
| Field               | Value                                                                    |
+---------------------+--------------------------------------------------------------------------+
| status              | CREATE_FAILED                                                            |
| cluster_template_id | 6e9946af-40cc-43d4-abd2-616bc14c96cf                                     |
| node_addresses      | []                                                                       |
| uuid                | f86cc9fb-a251-424f-8341-024268ec7ac7                                     |
| stack_id            | None                                                                     |
| status_reason       | Failed to get discovery url from 'https://discovery.etcd.io/new?size=1'. |
| created_at          | 2018-06-11T13:56:58+00:00                                                |
| updated_at          | 2018-06-11T13:57:21+00:00                                                |
| coe_version         | None                                                                     |
| labels              | {}                                                                       |
| faults              | {}                                                                       |
| keypair             | mykey                                                                    |
| api_address         | None                                                                     |
| master_addresses    | []                                                                       |
| create_timeout      | 60                                                                       |
| node_count          | 1                                                                        |
| discovery_url       | None                                                                     |
| master_count        | 1                                                                        |
| container_version   | None                                                                     |
| name                | swarm-cluster                                                            |
| master_flavor_id    | m1.small                                                                 |
| flavor_id           | m1.small                                                                 |
+---------------------+--------------------------------------------------------------------------+

I understand the message: Magnum wants to get something from 'https://discovery.etcd.io/new?size=1', but I must build this without Internet. In the doc I followed the etcd configuration chapter; this is the configuration file:

#cat /etc/etcd/etcd.conf.yml
name: imgesx07
data-dir: /var/lib/etcd
initial-cluster-state: 'new'
initial-cluster-token: 'etcd-cluster-01'
initial-cluster: imgesx07=http://192.168.10.20:2380
initial-advertise-peer-urls: http://192.168.10.20:2380
advertise-client-urls: http://192.168.10.20:2379
listen-peer-urls: http://0.0.0.0:2380
listen-client-urls: http://192.168.10.20:2379

and:

#cat /lib/systemd/system/etcd.service
[Unit]
After=network.target
Description=etcd - highly-available key value store

[Service]
LimitNOFILE=65536
Restart=on-failure
Type=notify
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yml
User=etcd

[Install]
WantedBy=multi-user.target

Do you have an idea how to use Magnum without Internet access, please? Thank you, best regards, Nicolas MOREL

2018-06-24 19:14:40 -0500 asked a question Error on Networking Option 2: Self-service networks documentation

Hi,

I think I found an error at this link: https://docs.openstack.org/neutron/queens/install/controller-install-option2-ubuntu.html In all the "auth_url =" parameters we must add to /etc/neutron/neutron.conf, they use port 35357, but in the Queens release we only use port 5000. Am I right?

Best regards Nicolas MOREL
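For illustration, the corrected parameter in /etc/neutron/neutron.conf would look like this (the hostname `controller` is the install guide's example name; in Queens, Keystone serves the unversioned identity endpoint on port 5000):

```ini
# /etc/neutron/neutron.conf -- Queens: port 5000 instead of the old admin port 35357
[keystone_authtoken]
auth_url = http://controller:5000
```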

2018-06-15 02:12:12 -0500 received badge  Enthusiast