
Sofiane68's profile - activity

2018-12-14 08:54:27 -0500 received badge  Famous Question (source)
2018-12-14 08:54:27 -0500 received badge  Popular Question (source)
2018-12-14 08:54:27 -0500 received badge  Notable Question (source)
2018-05-02 23:44:30 -0500 received badge  Notable Question (source)
2018-05-02 23:44:30 -0500 received badge  Popular Question (source)
2018-02-12 09:26:32 -0500 received badge  Famous Question (source)
2018-02-02 11:42:24 -0500 received badge  Famous Question (source)
2018-01-17 23:20:48 -0500 received badge  Famous Question (source)
2017-10-04 02:55:38 -0500 received badge  Notable Question (source)
2017-10-04 02:55:38 -0500 received badge  Famous Question (source)
2017-10-04 02:55:38 -0500 received badge  Popular Question (source)
2017-09-02 06:02:32 -0500 asked a question How to add dynamic inventory to Ansible hosts file?

Hi,

I use Ansible to create servers on OpenStack and to install MySQL on them. Everything is done in one run, so in my playbook I want to register the public IP of the new server and store it in my Ansible hosts file. But it does not seem to work:

- name: launch an instance
  os_server:
    state: present
    auth:
      auth_url: "{{ item.value.auth_url }}"
      username: "{{ item.value.username }}"
      password: "{{ item.value.password }}"
      project_name: "{{ item.value.project_name }}"
    region_name: "{{ item.value.region_name }}"
    name: "{{ item.value.name }}"
    image: "{{ item.value.image }}"
    network: "{{ item.value.network }}"
    flavor: "{{ item.value.flavor }}"
    key_name: "{{ item.value.key_name }}"
    user_data: "{{ item.value.user_data }}"
  with_dict: "{{ openstack_hosts }}"
  register: mysqlserver



#- set_fact: public_v4="{{ mysqlserver.server.public_v4 }}"
- name: output of the registered variable
  debug:
    var: "{{ item.key }}"
  with_dict: "{{ openstack_hosts }}"

If I debug the "mysqlserver" variable, it shows me a big JSON output where I can see the public IP address, but "mysqlserver.server.public_v4" does not output anything.

I suspect "with_dict" is the cause, because without it this works, which is weird! I need "with_dict" because I want to create many servers.

Any ideas please?
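
For reference, when a task that loops (with_dict, with_items, ...) is registered, Ansible stores one entry per loop item under "mysqlserver.results", so "mysqlserver.server.public_v4" is undefined at the top level; the server data sits inside each entry instead. A minimal sketch, assuming each result keeps the same "server" key as the non-loop case ("mysql_servers" is just a placeholder group name):

- name: show each server's public IP from the registered loop results
  debug:
    msg: "{{ item.item.key }} -> {{ item.server.public_v4 }}"
  with_items: "{{ mysqlserver.results }}"

- name: add the new servers to an in-memory group for later plays
  add_host:
    name: "{{ item.server.public_v4 }}"
    groups: mysql_servers
  with_items: "{{ mysqlserver.results }}"

Note that add_host only changes the in-memory inventory for the current run; to persist the addresses into /etc/ansible/hosts you would still have to write that file yourself, for example with lineinfile or a template.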

2017-07-19 02:17:50 -0500 received badge  Notable Question (source)
2017-07-16 17:42:38 -0500 received badge  Popular Question (source)
2017-07-15 15:47:26 -0500 received badge  Notable Question (source)
2017-07-14 10:10:13 -0500 asked a question External network is not reachable from subnet

Hi,

I created a public network (10.29.14.0/24, gateway 10.29.14.1), a private one (10.0.0.0/24), and a router between them.

The problem is that the router gateway is set to 10.29.14.53 and I cannot associate a floating IP with my instances.

My router has two interfaces: 10.29.14.53, which is down, and 10.0.0.1, which is up.

When I try to associate a floating IP, this is the error:

Error: External network 5635b60a-87d0-4447-a387-e75532009ae5 is not reachable from subnet 3c4b0863-afd1-4a61-ac76-4c81e881b424. Therefore, cannot associate Port 60524d40-4ce9-4400-889d-06d40baca5e1 with a Floating IP. Neutron server returns request_ids: ['req-c965fe7e-01d4-4351-80f0-b2cf57d6f7c2']


/etc/neutron/plugins/ml2/linuxbridge_agent.ini


# Ansible managed

# Linux bridge agent physical interface mappings
[linux_bridge]

#physical_interface_mappings = dbaas-mgmt:eth14

# Linux bridge agent VXLAN networks
[vxlan]

enable_vxlan = True
vxlan_group = 239.1.1.1
# VXLAN local tunnel endpoint
local_ip = 172.29.240.11
l2_population = False


# Agent
[agent]

# Security groups
[securitygroup]
firewall_driver = iptables
enable_security_group = True

/etc/neutron/plugins/ml2/ml2_conf.ini

# Ansible managed

# ML2 general
[ml2]
type_drivers = flat,vlan,vxlan,local
tenant_network_types = vxlan,flat
mechanism_drivers = linuxbridge
extension_drivers = port_security

# ML2 flat networks
[ml2_type_flat]
flat_networks = flat

# ML2 VLAN networks
[ml2_type_vlan]
network_vlan_ranges =

# ML2 VXLAN networks
[ml2_type_vxlan]
vxlan_group = 239.1.1.1
vni_ranges = 1:1000

# Security groups
[securitygroup]
enable_security_group = True
enable_ipset = True

Any ideas please?
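
One thing that stands out (only a guess): ml2_conf.ini declares "flat_networks = flat", but "physical_interface_mappings" is commented out in linuxbridge_agent.ini, so the external network's physical network may not be mapped to any interface on this node. After adding a mapping such as "flat:eth14" the neutron-linuxbridge-agent has to be restarted ("eth14" is only taken from the commented-out line, not confirmed). A sketch of how the external network and router would then be declared with the os_network / os_subnet / os_router modules; "mycloud" and "private-subnet" are placeholder names and the auth details are left out:

- name: external flat network (provider_physical_network must match a physical_interface_mappings entry)
  os_network:
    cloud: mycloud
    name: public
    external: true
    provider_network_type: flat
    provider_physical_network: flat
    state: present

- name: external subnet holding the floating IP range
  os_subnet:
    cloud: mycloud
    network_name: public
    name: public-subnet
    cidr: 10.29.14.0/24
    gateway_ip: 10.29.14.1
    enable_dhcp: false
    state: present

- name: router with its gateway on the external network and an interface in the private subnet
  os_router:
    cloud: mycloud
    name: router1
    network: public
    interfaces:
      - private-subnet
    state: present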

2017-07-11 04:18:28 -0500 asked a question This is not a recognized Fernet token

Hi,

Problem

I installed OpenStack-Ansible and added the Trove component. But when I try to launch a database, it fails. It seems to be a token problem.


Logs

When looking at /var/log/trove/trove-taskmanager.log

2017-07-11 11:09:25.876 25680 ERROR oslo_service.periodic_task   File "/openstack/venvs/trove-master/lib/python2.7/site-packages/keystoneauth1/session.py", line 766, in post
2017-07-11 11:09:25.876 25680 ERROR oslo_service.periodic_task     return self.request(url, 'POST', **kwargs)
2017-07-11 11:09:25.876 25680 ERROR oslo_service.periodic_task   File "/openstack/venvs/trove-master/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
2017-07-11 11:09:25.876 25680 ERROR oslo_service.periodic_task     return wrapped(*args, **kwargs)
2017-07-11 11:09:25.876 25680 ERROR oslo_service.periodic_task   File "/openstack/venvs/trove-master/lib/python2.7/site-packages/keystoneauth1/session.py", line 655, in request
2017-07-11 11:09:25.876 25680 ERROR oslo_service.periodic_task     raise exceptions.from_response(resp, method, url)
2017-07-11 11:09:25.876 25680 ERROR oslo_service.periodic_task NotFound: This is not a recognized Fernet token b4ad1caf0ac9dc0b6be7 (HTTP 404) (Request-ID: req-ce0fb34c-e503-478e-8838-57ffdf2554c0)
2017-07-11 11:09:25.876 25680 ERROR oslo_service.periodic_task


/etc/trove/trove.conf

nova_compute_url = http://172.29.236.10:8774/v2.1
nova_proxy_admin_user = admin_trove_user
nova_proxy_admin_pass = b4ad1caf0ac9dc0b6be7
nova_proxy_admin_tenant_name = trove_for_trove_usage

[keystone_authtoken]
insecure = False
auth_type = password
auth_url = http://172.29.236.10:35357
auth_uri = http://172.29.236.10:5000
project_domain_id = default
user_domain_id = default
project_name = trove_for_trove_usage
username = admin_trove_user
password = b4ad1caf0ac9dc0b6be7
region_name = RegionOne

Any ideas guys?
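
One detail worth noting: the string keystone rejects as a token (b4ad1caf0ac9dc0b6be7) is the same value as nova_proxy_admin_pass / password in trove.conf, which suggests the password is being sent where a token is expected. A quick check that the credentials themselves are fine is to ask keystone for a token directly; the sketch below uses the uri module and assumes it exposes the X-Subject-Token response header as "keystone_auth.x_subject_token":

- name: ask keystone for a token with the trove service credentials
  uri:
    url: http://172.29.236.10:5000/v3/auth/tokens
    method: POST
    body_format: json
    body:
      auth:
        identity:
          methods: ["password"]
          password:
            user:
              name: admin_trove_user
              domain: { id: default }
              password: b4ad1caf0ac9dc0b6be7
        scope:
          project:
            name: trove_for_trove_usage
            domain: { id: default }
    status_code: 201
  register: keystone_auth

- name: show the issued token (a real Fernet token is far longer than a password)
  debug:
    msg: "{{ keystone_auth.x_subject_token }}"

If that succeeds, the credentials are valid and the problem is more likely on the trove side (what it sends as a token) than in keystone's Fernet setup.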

2017-06-28 09:43:19 -0500 received badge  Notable Question (source)
2017-06-28 09:43:19 -0500 received badge  Famous Question (source)
2017-06-08 22:29:51 -0500 received badge  Popular Question (source)
2017-04-11 03:51:46 -0500 asked a question Ansible - Dynamic inventory

Hi,

Each time you create an instance the IP address changes, so the Ansible hosts file (/etc/ansible/hosts) needs to be updated.

Has anyone developed a script to set up a dynamic inventory so we don't need to change this hosts file?
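
A minimal sketch of one approach, pulling the current instances from OpenStack at the start of a play instead of keeping them in /etc/ansible/hosts; it assumes os_server_facts publishes the list as the "openstack_servers" fact and that each entry carries "public_v4" ("mycloud" and "dynamic_openstack" are placeholder names):

- name: discover the instances that exist in the project
  os_server_facts:
    cloud: mycloud

- name: build an in-memory group from their public addresses
  add_host:
    name: "{{ item.public_v4 }}"
    groups: dynamic_openstack
  with_items: "{{ openstack_servers }}"

If I remember correctly, Ansible also ships an OpenStack dynamic inventory script (openstack.py under contrib/inventory) that reads clouds.yaml, so you can point ansible-playbook -i at it and never touch /etc/ansible/hosts at all.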

2017-04-10 05:27:55 -0500 asked a question Instances running very slow

Hi,

I am running Mirantis OpenStack on vSphere. Is it normal that the instances I create are very slow? Is it a network issue or a storage issue?

2017-04-10 05:15:42 -0500 commented question Can't create instance, error shown "No valid host"

Normally it is a lack of space on your hypervisor. I think you should first try to restart the nova-compute service. Why is the host sometimes os10 and other times localhost.localdomain? Weird. How did you install OpenStack?

2017-04-06 04:34:02 -0500 asked a question Who has ever tried to deploy fuel-ccp?

I am trying to deploy fuel-ccp with Vagrant (the Vagrantfile in the official GitHub repository).

When I run ccp build I get this:

vagrant@node1:~$ ccp build
2017-04-06 09:15:01,871 INFO     fuel_ccp.fetch  Cloning repositories into /home/vagrant/ccp-repos/
2017-04-06 09:15:01,879 INFO     fuel_ccp.fetch  ##################################################
2017-04-06 09:15:01,879 INFO     fuel_ccp.fetch  Summary:
2017-04-06 09:15:01,879 INFO     fuel_ccp.fetch  23 repository(s) is(are) already cloned: fuel-ccp-ceph, fuel-ccp-cinder, fuel-ccp-debian-base, fuel-ccp-entrypoint, fuel-ccp-etcd, fuel-ccp-galera, fuel-ccp-glance, fuel-ccp-grafana, fuel-ccp-heat, fuel-ccp-horizon, fuel-ccp-ironic, fuel-ccp-keystone, fuel-ccp-memcached, fuel-ccp-murano, fuel-ccp-neutron, fuel-ccp-nova, fuel-ccp-openstack-base, fuel-ccp-rabbitmq, fuel-ccp-sahara, fuel-ccp-searchlight, fuel-ccp-stacklight, fuel-ccp-elasticsearch, fuel-ccp-nginx
2017-04-06 09:15:01,879 INFO     fuel_ccp.fetch  ##################################################
2017-04-06 09:15:02,219 INFO     fuel_ccp.build  base: Starting image build
2017-04-06 09:25:09,524 ERROR    fuel_ccp.build  base: The command '/bin/sh -c apt-get update && apt-get install -y apt-transport-https' returned a non-zero code: 100
2017-04-06 09:25:09,524 ERROR    fuel_ccp.build  base: Push will be skipped due to build failure
2017-04-06 09:25:09,524 ERROR    fuel_ccp.build  base-tools: Build will be skipped due to parent image (base) build failure
2017-04-06 09:25:09,560 INFO     fuel_ccp.build  ##################################################
2017-04-06 09:25:09,561 INFO     fuel_ccp.build  Summary:
2017-04-06 09:25:09,561 ERROR    fuel_ccp.build  2 image(s) build failed: base-tools, base
2017-04-06 09:25:09,561 ERROR    fuel_ccp.build  2 image(s) push failed: base-tools, base
2017-04-06 09:25:09,561 INFO     fuel_ccp.build  ##################################################

Log: (/tmp/ccp.log)

[2017-04-06 09:15:02,219] INFO     fuel_ccp.build base: Starting image build
[2017-04-06 09:15:02,222] DEBUG    docker.api.build Looking for auth config
[2017-04-06 09:15:02,222] DEBUG    docker.api.build No auth config in memory - loading from filesystem
[2017-04-06 09:15:02,222] DEBUG    docker.auth.auth Trying paths: ['/home/vagrant/.docker/config.json', '/home/vagrant/.dockercfg']
[2017-04-06 09:15:02,222] DEBUG    docker.auth.auth No config file found
[2017-04-06 09:15:02,222] DEBUG    docker.api.build No auth config found
[2017-04-06 09:15:02,272] DEBUG    fuel_ccp.build base: Step 1/10 : FROM debian:jessie
[2017-04-06 09:15:02,272] DEBUG    fuel_ccp.build base:  ---> 8cedef9d7368
[2017-04-06 09:15:02,272] DEBUG    fuel_ccp.build base: Step 2/10 : MAINTAINER MOS Microservices <mos-microservices@mirantis.com>
[2017-04-06 09:15:02,273] DEBUG    fuel_ccp.build base:  ---> Using cache
[2017-04-06 09:15:02,273] DEBUG    fuel_ccp.build base:  ---> d54c422070d1
[2017-04-06 09:15:02,273] DEBUG    fuel_ccp.build base: Step 3/10 : ENV DEBIAN_FRONTEND noninteractive
[2017-04-06 09:15:02,273] DEBUG    fuel_ccp.build base:  ---> Using cache
[2017-04-06 09:15:02,275] DEBUG    fuel_ccp.build base:  ---> 9abac259f18a
[2017-04-06 09:15:02,275] DEBUG    fuel_ccp.build base: Step 4/10 : RUN apt-get update && apt-get install -y apt-transport-https
[2017-04-06 09:15:02,326] DEBUG    fuel_ccp.build base:  ---> Running in b752d7721034
[2017-04-06 09:23:05,493] DEBUG    fuel_ccp.build base: Err http://security.debian.org jessie/updates InRelease
[2017-04-06 09:23:05,494] DEBUG    fuel_ccp.build base:
[2017-04-06 09:23:05,494] DEBUG    fuel_ccp.build base: Err http://security.debian.org jessie/updates Release.gpg
  Cannot ...
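
The failing step is "apt-get update" inside the base image build, and the log shows the container cannot fetch the Debian repositories, which usually points at Docker networking or DNS on the build host rather than at fuel-ccp itself (only a guess). A quick check, sketched as Ansible tasks to run on the Vagrant node:

- name: reproduce the failing build step outside fuel-ccp
  command: docker run --rm debian:jessie apt-get update
  register: apt_check
  ignore_errors: true

- name: show what apt sees from inside a plain container
  debug:
    var: apt_check.stdout_lines
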
2017-04-05 05:35:40 -0500 received badge  Notable Question (source)
2017-04-03 11:15:51 -0500 received badge  Popular Question (source)
2017-04-03 09:32:34 -0500 commented answer Floating IPs not accessible from inside && outside

Problem resolved: promiscuous mode was not enabled on the port group and vSwitch in vSphere.

2017-04-03 09:31:25 -0500 answered a question install Mirantis OpenStack using Ansible

What is the benefit? Please forget the idea :-)

2017-04-03 09:29:32 -0500 answered a question difference between Mirantis OpenStack and community OpenStack

The difference is in the approach, but in the end the "same" OpenStack is installed. Let me explain: Mirantis has packaged the installation of OpenStack into a custom ISO-based image, so you don't need to spend time on a huge configuration.

Mirantis developed Fuel, which is a set of many components: Astute, Nailgun, and so forth (please check the official documentation).

Mirantis lets you install OpenStack in a very easy and automated fashion. Of course, the nova, neutron and cinder packages are the same even if they are embedded in the Mirantis custom repo. I suggest you try installing OpenStack manually for learning purposes.

2017-04-03 09:00:51 -0500 received badge  Famous Question (source)
2017-04-03 07:07:20 -0500 answered a question Floating IPs not accessible from inside && outside

root@node-5:~# ip netns exec qrouter-05e079e8-b0c3-4d59-a5eb-4deb1507cd8f arping -I qg-896349c2-7b 10.29.14.144
ARPING 10.29.14.144 from 10.29.14.144 qg-896349c2-7b
^CSent 16 probes (16 broadcast(s))
Received 0 response(s)

root@node-5:~# arp 10.29.14.144
    Address                  HWtype  HWaddress           Flags Mask            Iface
    10.29.14.144             ether   fa:16:3e:f1:14:7e   C                     br-floating
    10.29.14.144             ether   fa:16:3e:f1:14:7e   C                     br-ex

The thing I do not understand is this:

root@node-5:~# ping 10.29.14.130
PING 10.29.14.130 (10.29.14.130) 56(84) bytes of data.
64 bytes from 10.29.14.130: icmp_seq=4 ttl=64 time=0.443 ms
64 bytes from 10.29.14.130: icmp_seq=5 ttl=64 time=0.614 ms
64 bytes from 10.29.14.130: icmp_seq=6 ttl=64 time=0.227 ms

I can ping the router gateway address (the /24) but not the floating IP (the /32).

Is it a routing problem?

2017-04-03 06:45:11 -0500 received badge  Popular Question (source)
2017-04-03 05:55:12 -0500 asked a question Floating IPs not accessible from inside && outside

Hi,

I set up an OpenStack environment on vSphere

  • 3 controller nodes
  • 2 compute nodes
  • 1 cinder node

I created a CirrOS instance and associated a floating IP with it.

Problem:

This floating IP (10.29.14.144) is not accessible:

[root@fuel ~]# ssh control01

root@node-2:~# ping 10.29.14.144
PING 10.29.14.144 (10.29.14.144) 56(84) bytes of data.
^C
--- 10.29.14.144 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2004ms

root@node-2:~# arp 10.29.14.144
Address                  HWtype  HWaddress           Flags Mask            Iface
10.29.14.144             ether   fa:16:3e:f1:14:7e   C                     br-ex
root@node-5:~# ip netns exec qrouter-05e079e8-b0c3-4d59-a5eb-4deb1507cd8f ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
28: ha-124b032c-e0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether fa:16:3e:51:88:b9 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-124b032c-e0
       valid_lft forever preferred_lft forever
    inet 169.254.0.1/24 scope global ha-124b032c-e0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe51:88b9/64 scope link 
       valid_lft forever preferred_lft forever
29: qg-896349c2-7b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether fa:16:3e:f1:14:7e brd ff:ff:ff:ff:ff:ff
    inet 10.29.14.130/24 scope global qg-896349c2-7b
       valid_lft forever preferred_lft forever
    inet 10.29.14.144/32 scope global qg-896349c2-7b
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef1:147e/64 scope link 
       valid_lft forever preferred_lft forever
30: qr-41f36263-c3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether fa:16:3e:c7:02:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.1/24 scope global qr-41f36263-c3
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fec7:24c/64 scope link 
       valid_lft forever preferred_lft forever

This floating IP is not accessible from inside the network namespace either:

root@node-5:~# ip netns exec qrouter-05e079e8-b0c3-4d59-a5eb-4deb1507cd8f ping 10.29.14.144
PING 10.29.14.144 (10.29.14.144) 56(84) bytes of data.

I checked the neutron L3 agent --> all is up and running.

2017-03-31 00:47:14 -0500 received badge  Famous Question (source)
2017-03-28 09:21:07 -0500 answered a question Does AWS use OpenStack?

Not at all. AWS has its own cloud solution, just like Azure. But some other providers use OpenStack, such as OVH, Citycloud, etc.

2017-03-23 09:11:49 -0500 asked a question Mystery -VIP not accessible Mirantis 9.2

Installation done on vSphere.

Architecture:

  • 5 networks (PXE, Management, Public, Private, Storage)
  • Each network is on a VLAN

  • 7 virtual servers
    • 1 Fuel master
    • 3 controllers
    • 2 computes
    • 1 storage
  • all of them connected to all networks, except the Fuel Master, which is connected only to the PXE and Public networks

    Deployment is successful.

    Problem:

    The VIP for accessing Horizon provided on the Fuel Dashboard is not accessible.

    root@controller03:~# pcs status
    vip__public (ocf::fuel:ns_IPaddr2): Started controller03.domain.tld


    root@controller03:~# ip netns exec haproxy ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever

    20: hapr-ns: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether c6:a0:97:26:2c:99 brd ff:ff:ff:ff:ff:ff
        inet 240.0.0.2/30 scope global hapr-ns
           valid_lft forever preferred_lft forever
        inet6 fe80::c4a0:97ff:fe26:2c99/64 scope link
           valid_lft forever preferred_lft forever

    30: b_public: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether b6:bc:db:c4:af:e3 brd ff:ff:ff:ff:ff:ff
        inet <my VIP>/24 scope global b_public
           valid_lft forever preferred_lft forever
        inet6 fe80::b4bc:dbff:fec4:afe3/64 scope link
           valid_lft forever preferred_lft forever


When I ping the VIP from outside --> not working.
When I ping the VIP from the controller that holds the VIP --> it works.
2017-03-17 13:43:28 -0500 received badge  Famous Question (source)
2017-03-13 19:28:16 -0500 received badge  Famous Question (source)
2017-02-17 08:46:19 -0500 received badge  Famous Question (source)
2017-02-09 08:33:00 -0500 received badge  Famous Question (source)
2017-02-09 04:25:00 -0500 received badge  Famous Question (source)
2017-02-09 04:24:30 -0500 received badge  Famous Question (source)
2017-02-03 04:42:35 -0500 commented question Deployment fails after a failed connectivity check

Yeah, I have already encountered this issue. Can you connect to the slave nodes (login: root, password: r00tme), run ps -ef | grep mcollective, and give us the output?

2017-01-26 08:47:12 -0500 received badge  Notable Question (source)