
Antonio G.'s profile - activity

2017-11-24 16:34:03 -0500 commented question How to start VM with ubuntu Image and install it on a volume

The best thing to do: download an Ubuntu cloud image and upload it to Glance, create a new Cinder volume from that image, then start a new VM booting from that volume.
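
In CLI terms that is roughly the following (an untested sketch; image file, sizes, flavor and network names are placeholders for your environment):

# upload the cloud image to Glance (file name is an example)
openstack image create --disk-format qcow2 --container-format bare \
  --file xenial-server-cloudimg-amd64-disk1.img ubuntu-16.04
# create a bootable Cinder volume from that image (size must fit the image)
openstack volume create --image ubuntu-16.04 --size 10 ubuntu-boot-vol
# boot the VM from the volume
openstack server create --volume ubuntu-boot-vol --flavor m1.small --network private ubuntu-vm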

2017-10-24 06:36:38 -0500 received badge  Nice Answer (source)
2017-09-06 03:09:19 -0500 received badge  Famous Question (source)
2017-09-06 03:09:19 -0500 received badge  Popular Question (source)
2017-09-06 03:09:19 -0500 received badge  Notable Question (source)
2017-09-04 11:00:31 -0500 commented question How to assign static reservation ip address to physical server neutron.

You should modify the dnsmasq (OpenStack DHCP agent) configuration in order to define a static lease.
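
A minimal sketch, assuming you point the DHCP agent at a custom dnsmasq config file (paths, MAC and IP are examples):

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf  (MAC and IP are placeholders)
dhcp-host=52:54:00:12:34:56,10.0.0.50

Restart the DHCP agent after the change.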

2017-09-01 10:44:27 -0500 commented question Devstack fails instance creation other than tiny

What is the size of the volume group 'stack-volumes-lvmdriver-1'?

vgs stack-volumes-lvmdriver-1
2017-08-30 15:20:35 -0500 answered a question mysqlworkbench settings for openstack nova database port

You could connect using:

  • IP address of your MySQL server as hostname
  • default port (3306) if you never changed it
  • your username and password

Please remember that if you are connecting from a remote host, you must have a user allowed to connect from remote hosts, not only from localhost, e.g.:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

2017-08-30 15:06:40 -0500 commented question Devstack fails instance creation other than tiny

You can define per-tenant quotas, but I do not think this is the specific issue. You could check quotas through CLI commands or through the GUI with admin credentials [Identity - Tenant - Modify quotas]
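
For the CLI route, something like (project name and values are placeholders):

openstack quota show demo-project
openstack quota set --instances 20 --cores 40 --ram 51200 demo-project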

2017-08-30 15:04:54 -0500 commented question Devstack fails instance creation other than tiny

'No valid backend was found': I think this error means that Cinder is misconfigured; could you paste the /etc/cinder/cinder.conf content?

2017-08-28 15:56:54 -0500 commented question volumes are smaller then from flavor described

It seems you are trying to 'boot from volume', but you are using a 2 GB image and defining a 1 GB boot volume. When booting from volume, a new Cinder volume is created from the chosen Glance image and then used as the boot volume, so it must be at least as big as the image!
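
For example, with an image of about 2 GB you could do something like (names and size are placeholders):

openstack volume create --image my-image --size 10 boot-vol   # well above the image size
openstack server create --volume boot-vol --flavor m1.small --network private my-vm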

2017-08-28 15:47:23 -0500 commented question Devstack fails instance creation other than tiny

Hi Georgios, please paste your cinder log files.

2017-08-24 16:40:00 -0500 commented question Devstack fails instance creation other than tiny

From the error, it seems to be related to a volume problem (try checking whether you have enough available disk space). Have you found any interesting errors in the Cinder logs?

2017-08-24 16:30:14 -0500 commented answer [neutron]Failed to find some config files: /etc/neutron/plugins_conf.ini

Here is the new version. The old one can be retrieved here.

2017-07-07 15:06:20 -0500 edited question die 413 'Unable to determine DISTRO, can not continue

Hello all, I need your support. I am trying to deploy OpenStack on a Raspberry Pi 3; when I run ./stack I get the error below.

.
.
.
+functions-common:GetOSVersion:353         os_PACKAGE=rpm
+functions-common:GetOSVersion:356         typeset -xr os_VENDOR
+functions-common:GetOSVersion:357         typeset -xr os_RELEASE
+functions-common:GetOSVersion:358         typeset -xr os_PACKAGE
+functions-common:GetOSVersion:359         typeset -xr os_CODENAME
+functions-common:GetDistro:369            [[ Raspbian =~ (Ubuntu) ]]
+functions-common:GetDistro:369            [[ Raspbian =~ (Debian) ]]
+functions-common:GetDistro:369            [[ Raspbian =~ (LinuxMint) ]]
+functions-common:GetDistro:373            [[ Raspbian =~ (Fedora) ]]
+functions-common:GetDistro:376            [[ Raspbian =~ (openSUSE) ]]
+functions-common:GetDistro:378            [[ Raspbian =~ (SUSE LINUX) ]]
+functions-common:GetDistro:385            [[ Raspbian =~ (Red.*Hat) ]]
+functions-common:GetDistro:385            [[ Raspbian =~ (CentOS) ]]
+functions-common:GetDistro:385            [[ Raspbian =~ (Scientific) ]]
+functions-common:GetDistro:385            [[ Raspbian =~ (OracleServer) ]]
+functions-common:GetDistro:385            [[ Raspbian =~ (Virtuozzo) ]]
+functions-common:GetDistro:389            [[ Raspbian =~ (XenServer) ]]
+functions-common:GetDistro:391            [[ Raspbian =~ (kvmibm) ]]
+functions-common:GetDistro:413            die 413 'Unable to determine DISTRO, can not continue.'
+functions-common:die:186                  local exitcode=1
+functions-common:die:187                  set +o xtrace
[Call Trace]
./unstack.sh:124:stop_horizon
/opt/lxc/lxc/devstack/lib/horizon:188:stop_apache_server
/opt/lxc/lxc/devstack/lib/apache:218:exit_distro_not_supported
/opt/lxc/lxc/devstack/functions-common:260:GetDistro
/opt/lxc/lxc/devstack/functions-common:413:die
[ERROR] /opt/lxc/lxc/devstack/functions-common:413 Unable to determine DISTRO, can not continue.
/opt/lxc/lxc/devstack/functions-common: line 232: /opt/stack/logs/error.log: No such file or directory
pi@raspberrypi:/opt/lxc/lxc/devstack $ ./clean.sh
/opt/lxc/lxc/devstack/functions-common: line 1302: rpm: command not found
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
[Call Trace]
./clean.sh:31:GetDistro
/opt/lxc/lxc/devstack/functions-common:413:die
[ERROR] /opt/lxc/lxc/devstack/functions-common:413 Unable to determine DISTRO, can not continue.
/opt/lxc/lxc/devstack/functions-common: line 232: /opt/stack/logs/error.log: No such file or directory

2017-07-07 14:55:47 -0500 edited question i am trying to install openstack

I am trying to install OpenStack on Ubuntu 16.04 but I am facing some errors.

I have tried these commands

echo "manual" > /etc/init/keystone.override apt-get -y install keystone python-openstackclient apache2 libapache2-mod-wsgi python-oauth2client

mysql -u root -p
MariaDB [(none)]> create database keystone;
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'waheed' identified by 'Cisco786';
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'%' identified by 'Cisco786';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit

vi /etc/keystone/keystone.conf

line 13: uncomment and change to any Token

admin_token = admintoken
line 551: change ( MariaDB connection info )

connection = mysql+pymysql://keystone:Cisco786@20.20.20.122/keystone
line 1248: add Memcache server

[memcache] servers = 20.20.20.122:11211
line 1986: add

[token] provider = fernet
line 2012: uncomment

driver = memcache 


root@dlp:~# su -s /bin/bash keystone -c "keystone-manage db_sync"

initialize Fernet key

root@dlp:~# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

root@dlp:~# vi /etc/apache2/apache2.conf
line 70: add own hostname

ServerName waheed.waheed

root@dlp:~# vi /etc/apache2/sites-available/wsgi-keystone.conf

Create a new file as follows:

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

root@dlp:~# a2ensite wsgi-keystone

Enabling site wsgi-keystone. To activate the new configuration, you need to run: service apache2 reload

root@dlp:~# rm -f /var/lib/keystone/keystone.db

root@dlp:~# systemctl restart apache2

Now please focus on the commands below.

For further configuration I have to type these commands:

root@dlp:~# export OS_TOKEN=admintoken

root@dlp:~# export OS_URL=http://10.0.0.30:35357/v3

root@dlp:~# export OS_IDENTITY_API_VERSION=3

root@dlp:~# openstack domain create --description "Default Domain" default

and its result should be like this:

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Default Domain                   |
| enabled     | True                             |
| id          | 780b41cee03242da9ec0c73e1a76dc6e |
| name        | default                          |
+-------------+----------------------------------+

But when I type these commands:

root@waheed:/home/waheed# export OS_TOKEN=admintoken 
root@waheed:/home/waheed# export OS_URL=http://20.20.20.122:35357/v3
root@waheed:/home/waheed# openstack domain create --description "Default Domain" default

I am facing this error:

An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-80fdf199-84fd-478c-b127-377157b664ac)

I am following these links.

Any solution for this? Please help me.

Thank you very much.

2017-07-07 14:49:46 -0500 answered a question VM traffic isolation in same tenant network

Why don't you segregate them on three different tenant networks?

In theory you could achieve that result with your current setup and security groups, even if it is much more complicated.
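
If you go the separate-networks route, a rough sketch (names and CIDRs are placeholders):

openstack network create net-a
openstack subnet create --network net-a --subnet-range 10.0.1.0/24 subnet-a
# repeat for net-b / net-c, then attach each group of VMs to its own network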

2017-07-07 14:41:00 -0500 commented answer Create Instance on Canonical Distribution of OpenStack Autopilot

We need more info in order to help you. Try opening a new question with detailed info, especially the ERRORS in the log files; if you can, pastebin them.

2017-07-07 02:02:40 -0500 commented answer Create Instance on Canonical Distribution of OpenStack Autopilot

You must source your RC file before running the command. You can download your RC file from the GUI (Project->Compute->Access&Security->API-Access->Download-RC-file).
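
For example (the file name depends on your project; this one is a placeholder):

source demo-openrc.sh
openstack server list    # CLI commands now have credentials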

2017-07-06 10:05:34 -0500 received badge  Nice Answer (source)
2017-07-01 07:04:41 -0500 commented question unable to ping fixed ip on provider network

Did you set a real gateway for the provider network?

2017-07-01 07:03:14 -0500 edited question unable to ping fixed ip on provider network

Following the guide here

No errors until I try to ping the ip on the provider network.

steven@controller:~$ ping -c 4 203.0.113.102
PING 203.0.113.102 (203.0.113.102) 56(84) bytes of data.

--- 203.0.113.102 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3024ms

It works if I use "ip netns exec" with either the selfservice or provider namespace.

What am I doing wrong?

2017-06-30 13:24:04 -0500 edited question Unable to create instance

Hi All,

I have a Packstack setup of OpenStack Newton on three nodes: controller, compute and network. When I use a tenant network of type vxlan, it works fine. But when I change the tenant network type in ml2_conf.ini to 'vlan', instance creation fails.

The neutron logs show this:

2017-06-30 07:43:09.964 39076 ERROR neutron.plugins.ml2.managers [req-ea365127-7798-48ad-9b3f-adcde3e075ee 43963a5dfc084e819c2e3cd6b8cffb93 4457cb0b375345cdb8a23e2ed0c0dc83 - - -] Failed to bind port e8fcd9a6-5b03-46cc-a777-0a92496108ac on host compute for vnic_type normal using segments [{'segmentation_id': 70, 'physical_network': u'br-ens38', 'id': u'cb25d3c8-24db-4366-8d0d-1c30832e2846', 'network_type': u'vlan'}]

These are my configurations from compute node:


[root@compute ~]# cat /etc/neutron/plugins/ml2/openvswitch_agent.ini |  grep -v '^$\|#'
[DEFAULT]
[agent]
tunnel_types =vxlan
vxlan_udp_port = 4789
l2_population = False
drop_flows_on_start = False
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.91.130
bridge_mappings = physnet1:br-ens38
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
----------------

[root@compute ~]# ovs-vsctl show
7f3ba819-2fea-436b-806f-5e546e5c31ae
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vxlan-c0a85b83"
            Interface "vxlan-c0a85b83"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.91.130", out_key=flow, remote_ip="192.168.91.131"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "int-br-ens38"
            Interface "int-br-ens38"
                type: patch
                options: {peer="phy-br-ens38"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-ens38"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "br-ens38"
            Interface "br-ens38"
                type: internal
        Port "phy-br-ens38"
            Interface "phy-br-ens38"
                type: patch
                options: {peer="int-br-ens38"}
    ovs_version: "2.6.1"
---------------------

These are the configuration from network node:

[root@network ~]#  cat /etc/neutron/plugins/ml2/openvswitch_agent.ini |  grep -v '^$\|#'
[DEFAULT]
[agent]
tunnel_types =vxlan
vxlan_udp_port = 4789
l2_population = False
drop_flows_on_start = False
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.91.131
bridge_mappings = physnet1:br-ens38
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@network ~]#
[root@network ~]# ovs-vsctl show
b1ba7d87-571f-4ec4-a95c-564410baaef8
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vxlan-c0a85b82"
            Interface "vxlan-c0a85b82"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.91.131", out_key=flow, remote_ip="192.168.91.130"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-ens38"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "phy-br-ens38"
            Interface "phy-br-ens38"
                type: patch
                options: {peer="int-br-ens38"}
        Port "br-ens38"
            Interface "br-ens38"
                type: internal
        Port "ens38"
            Interface "ens38"
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "int-br-ens38"
            Interface "int-br-ens38"
                type: patch
                options: {peer="phy-br-ens38"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.6.1"
[root@network ~]#
---------------------------------

What is causing this error? Please help.

2017-06-30 00:16:39 -0500 commented answer Neutron overlay network

Everything is recreated or started automatically after the nodes reboot. From the VM perspective, there is a parameter in nova.conf that allows you to resume the running state of a VM after a compute node reboot.
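
A minimal sketch of the option I mean (on the compute node; assuming the resume_guests_state_on_host_boot flag):

# /etc/nova/nova.conf (compute node)
[DEFAULT]
# restart guests that were running before the host rebooted
resume_guests_state_on_host_boot = True

Restart nova-compute after the change.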

2017-06-29 15:58:37 -0500 commented answer Neutron overlay network

Admins can select the network type; tenant member users cannot. Have a look at this
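
As admin, the provider attributes are what select the type, e.g. (values are placeholders):

openstack network create --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 100 demo-vlan-net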

2017-06-29 15:38:52 -0500 commented answer Create Instance on Canonical Distribution of OpenStack Autopilot

Try enabling debug for the Glance and Nova services and repeating the test. Interesting logs could be glance-api, nova-scheduler and nova-compute. Are you sure the Glance backend is working correctly? Try downloading the image with glance --debug image-download <imagename>

2017-06-29 15:14:44 -0500 answered a question Neutron overlay network

You are correct, the external network should map to a real network in your environment. The range of IPs not already in use determines how many floating IPs are available to your tenants. The gateway is the real one.

Your commands look good!
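
For reference, a rough sketch of that kind of external network (all names and addresses are placeholders):

openstack network create --external --provider-network-type flat \
  --provider-physical-network provider ext-net
openstack subnet create --network ext-net --no-dhcp \
  --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 \
  --allocation-pool start=203.0.113.100,end=203.0.113.200 ext-subnet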

2017-06-28 15:20:31 -0500 answered a question Create Instance on Canonical Distribution of OpenStack Autopilot

Ciao Riccardo, try uploading the same Ubuntu image with qcow2 disk_format and bare container_format (I noticed you uploaded it as ISO). If it does not work, please paste your controller's glance and nova logs.
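
Something along these lines (file and image names are examples):

openstack image create --disk-format qcow2 --container-format bare \
  --file xenial-server-cloudimg-amd64-disk1.img ubuntu-16.04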

2017-06-28 15:05:33 -0500 answered a question Horizon Problem in Newton

Are you sure you are working with the same user on the same tenant both from the CLI and the GUI? You could check by comparing the tenant configured in your RC file with the tenant you see in Horizon.
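
A quick way to check from the shell (RC file name is a placeholder):

source demo-openrc.sh
env | grep -E 'OS_(PROJECT|TENANT)_NAME'   # compare with the project shown in Horizon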

2017-06-28 14:54:37 -0500 answered a question pacemaker can't start resources with single controller online

I think it is expected behavior; if you want to change this default behavior, I would advise you to take a look at this. Remember that the auto_tie_breaker parameter is not recommended for production environments.
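
For a two-controller lab, that knob lives in the corosync quorum section; a hedged sketch only (check your distribution's defaults before copying):

# /etc/corosync/corosync.conf -- example only, not for production
quorum {
    provider: corosync_votequorum
    two_node: 1
    auto_tie_breaker: 1
    auto_tie_breaker_node: lowest
}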

2017-06-05 11:24:38 -0500 edited question Pike Master Branch : VM launch is failed with Error message No valid host was found. There are not enough hosts available

It is a multinode setup: the control and compute node are configured on one VM, and the Neutron server is configured on another VM.

I am able to create a network and a subnet without any issue, but when I try to launch a VM I get the following error:

Please note that the placement API is already installed and configured, but the compute node is not getting registered in the placement API inventory.

Any pointer to resolve this issue will be of great help.

root@controlNode:~# nova show 8ff61689-11cd-46d6-a88a-fb26a15d7b83
/usr/lib/python2.7/dist-packages/novaclient/client.py:278: UserWarning: The 'tenant_id' argument is deprecated in Ocata and its use may result in errors in future releases. As 'project_id' is provided, the 'tenant_id' argument will be ignored.
  warnings.warn(msg)
+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property                             | Value                                                                                                                                                                                                                   |
+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                                                                                                                                                  |
| OS-EXT-AZ:availability_zone          | nova                                                                                                                                                                                                                    |
| OS-EXT-SRV-ATTR:host                 | -                                                                                                                                                                                                                       |
| OS-EXT-SRV-ATTR:hostname             | vm5                                                                                                                                                                                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                                                                                                                                                                       |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000c                                                                                                                                                                                                       |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                                                                                                                                                                         |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                                                                                                                                                                                       |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                                                                                                                                                                         |
| OS-EXT-SRV-ATTR:reservation_id       | r-iijln3t0                                                                                                                                                                                                              |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                                                                                                                                                                                       |
| OS-EXT-SRV-ATTR:user_data            | -                                                                                                                                                                                                                       |
| OS-EXT-STS:power_state               | 0                                                                                                                                                                                                                       |
| OS-EXT-STS:task_state                | -                                                                                                                                                                                                                       |
| OS-EXT-STS:vm_state                  | error                                                                                                                                                                                                                   |
| OS-SRV-USG:launched_at               | -                                                                                                                                                                                                                       |
| OS-SRV-USG:terminated_at             | -                                                                                                                                                                                                                       |
| accessIPv4                           |                                                                                                                                                                                                                         |
| accessIPv6                           |                                                                                                                                                                                                                         |
| config_drive                         |                                                                                                                                                                                                                         |
| created                              | 2017-05-25T15:42:35Z                                                                                                                                                                                                    |
| description                          | -                                                                                                                                                                                                                       |
| fault                                | {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 915, in schedule_and_build_instances |
|                                      |     request_specs[0].to_legacy_filter_properties_dict())                                                                                                                                                                |
|                                      |   File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 637, in _schedule_instances                                                                                                                 |
|                                      |     hosts = self.scheduler_client.select_destinations(context, spec_obj)                                                                                                                                                |
|                                      |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py\", line 371, in wrapped                                                                                                                               |
|                                      |     return func(*args, **kwargs)                                                                                                                                                                                        |
|                                      |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py\", line 51, in select_destinations                                                                                                          |
|                                      |     return self.queryclient.select_destinations(context, spec_obj)                                                                                                                                                      |
|                                      |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py\", line 37, in __run_method                                                                                                                 |
|                                      |     return getattr(self.instance, __name)(*args, **kwargs)                                                                                                                                                              |
|                                      |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py\", line 32, in select_destinations                                                                                                             |
|                                      |     return self.scheduler_rpcapi.select_destinations(context, spec_obj)                                                                                                                                                 |
|                                      |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py\", line 130, in select_destinations                                                                                                                  |
|                                      |     return cctxt.call(ctxt, 'select_destinations', **msg_args)                                                                                                                                                          |
|                                      |   File \"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py\", line 169, in call                                                                                                                             |
|                                      |     retry=self.retry)                                                                                                                                                                                                   |
|                                      |   File \"/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py\", line 98, in _send                                                                                                                              |
|                                      |     timeout=timeout, retry=retry)                                                                                                                                                                                       |
|                                      |   File \"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py\", line 505, in send                                                                                                                    |
|                                      |     retry=retry)                                                                                                                                                                                                        |
|                                      |   File \"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py\", line 496, in _send                                                                                                                   |
|                                      |     raise result                                                                                                                                                                                                        |
|                                      | ", "created": "2017-05-25T15:42:36Z"}                                                                                                                                                                                   |
| flavor                               | m1.tiny (e5bf6e30-0d88-49d0-be26-2fc0418bb7c9)                                                                                                                                                                          |
| hostId                               |                                                                                                                                                                                                                         |
| host_status                          |                                                                                                                                                                                                                         |
| id                                   | 8ff61689-11cd-46d6-a88a-fb26a15d7b83                                                                                                                                                                                    |
| image                                | cirros (302e6c08-e0b0-43df-a6e8-4abad517d01f)                                                                                                                                                                           |
| key_name                             | -                                                                                                                                                                                                                       |
| locked                               | False                                                                                                                                                                                                                   |
| metadata                             | {}                                                                                                                                                                                                                      |
| name                                 | VM5                                                                                                                                                                                                                     |
| os-extended-volumes:volumes_attached | []                                                                                                                                                                                                                      |
| status                               | ERROR                                                                                                                                                                                                                   |
| tags                                 | []                                                                                                                                                                                                                      |
| tenant_id                            | f85c91e401304cb8a0b78654352d13f6                                                                                                                                                                                        |
| updated                              | 2017-05-25T15:42:36Z                                                                                                                                                                                                    |
| user_id                              | 15d62a6cd8db41f48d9a85d9aaff958f                                                                                                                                                                                        |
+--------------------------------------+---------------

Repository used to install the nodes is:

 sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Here is the nova.conf file configured on my setup:

root@controlNode:/home/test# cat nova.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controlNode
auth_strategy = keystone
my_ip = 192.168.56.101
use_neutron = True
ram_allocation_ratio = 3
debug = false
log_dir = /var/log/nova
[api]
[api_database]
connection = mysql+pymysql://nova:admin123@controlNode/nova_api
[barbican]
[cache]
[cells]
enable = True
[cinder]
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto ...
(more)
2017-06-05 10:31:50 -0500 answered a question Are security groups applied at port or instance level?

If you create the VM through the Horizon GUI (or from the CLI with the --security-group flag), SGs are applied to all VM ports. Associating SGs to ports in a HOT template, or using either the old CLI command proposed by Rajesh or the new CLI command referenced here, provides more granular control.
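
With the current CLI, per-port control looks roughly like this (names are placeholders):

openstack port create --network net1 --security-group web-sg web-port
openstack server create --flavor m1.small --image cirros --nic port-id=web-port vm1
# or change the SGs of an existing port:
openstack port set --security-group web-sg <port-id>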

2017-05-23 10:49:13 -0500 edited question Ocata:EndpointNotFound: Could not find requested endpoint in Service Catalog.

I followed the official document to install OpenStack Ocata on CentOS 7. All verification steps succeeded, but I cannot create an instance via the CLI or WebUI. The server log (nova-compute.log) shows an "EndpointNotFound: Could not find requested endpoint in Service Catalog" error. Could anybody help me? Is this a bug?

Log Below:

4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] Attempting claim: memory 1500 MB, disk 35 GB, vcpus 1 CPU
2017-05-23 12:08:49.393 12353 INFO nova.compute.claims [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] Total memory: 16382 MB, used: 512.00 MB
2017-05-23 12:08:49.394 12353 INFO nova.compute.claims [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] memory limit: 24573.00 MB, free: 24061.00 MB
2017-05-23 12:08:49.394 12353 INFO nova.compute.claims [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] Total disk: 449 GB, used: 0.00 GB
2017-05-23 12:08:49.394 12353 INFO nova.compute.claims [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] disk limit: 449.00 GB, free: 449.00 GB
2017-05-23 12:08:49.395 12353 INFO nova.compute.claims [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] Total vcpu: 4 VCPU, used: 0.00 VCPU
2017-05-23 12:08:49.395 12353 INFO nova.compute.claims [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] vcpu limit not specified, defaulting to unlimited
2017-05-23 12:08:49.396 12353 INFO nova.compute.claims [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] Claim successful
2017-05-23 12:08:49.804 12353 INFO nova.scheduler.client.report [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] Submitted allocation for instance
2017-05-23 12:08:50.635 12353 WARNING nova.virt.osinfo [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] Cannot find OS information - Reason: (No configuration information found for operating system Empty)
2017-05-23 12:08:50.709 12353 WARNING nova.virt.osinfo [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] Cannot find OS information - Reason: (No configuration information found for operating system Empty)
2017-05-23 12:08:51.249 12353 INFO nova.compute.resource_tracker [req-372bdcaf-e95a-4c10-a7e5-758ce73c4993 - - - - -] Final resource view: name=main phys_ram=16382MB used_ram=2012MB phys_disk=449GB used_disk=35GB total_vcpus=4 used_vcpus=1 pci_stats=[]
2017-05-23 12:08:51.605 12353 WARNING nova.virt.osinfo [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] Cannot find OS information - Reason: (No configuration information found for operating system Empty)
2017-05-23 12:08:51.606 12353 INFO nova.virt.libvirt.driver [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] [instance: 89cda4c3-c3c3-49bc-b1fc-5382ea515c80] Creating image
2017-05-23 12:08:53.465 12353 ERROR nova.compute.manager [req-64eef975-4d76-4bba-afc2-7c57a1dca3e1 b85bc5aa14ab4392b099c57340f3f12a 4561eebad5994bb39e1299c36872ca52 - - -] Instance failed network setup after 1 attempt(s)
2017-05-23 12:08:53.465 12353 ERROR nova.compute.manager Traceback (most recent call last):
2017-05-23 12:08:53.465 12353 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1399, in _allocate_network_async
2017-05-23 12:08:53.465 12353 ERROR nova.compute.manager     bind_host_id=bind_host_id)
2017-05-23 12:08:53.465 12353 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 873, in allocate_for_instance
2017-05-23 12:08:53.465 12353 ERROR nova.compute.manager     bind_host_id, dhcp_options, available_macs)
2017-05-23 12:08:53.465 12353 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 992, in ...
(more)
2017-03-13 19:45:32 -0500 received badge  Guru (source)
2017-03-13 19:45:32 -0500 received badge  Great Answer (source)
2017-02-27 10:12:07 -0500 received badge  Famous Question (source)
2017-02-17 02:23:25 -0500 answered a question PCI-PT ports association & pci_alias

It seems to be related to the way you insert the values in the nova.conf file (see the sketch after the steps).

Add the first interface (int1) in nova.conf
restart nova 
create a VM that uses that interface [here VM NIC 1 is mapped to int1]
delete the VM
add up in nova.conf file the second one (int2)
restart nova
create a VM that uses those interfaces [here VM NIC 1 is mapped to int2 and VM NIC 2 is mapped to int1]
delete the VM
add up in nova.conf file the third one (int3)
restart nova
create a VM that uses those interfaces [here VM NIC 1 is mapped to int3, VM NIC 2 is mapped to int2 and VM NIC 3 is mapped to int1]
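
A hedged illustration of step 1, assuming the interfaces are whitelisted one at a time by PCI address (addresses and IDs are examples taken from your question):

# /etc/nova/nova.conf -- first iteration: only int1 whitelisted
[DEFAULT]
pci_passthrough_whitelist = {"address": "0000:03:00.1"}
pci_alias = {"vendor_id": "8086", "product_id": "10fb", "name": "intel_pt"}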
2017-02-02 09:14:08 -0500 received badge  Notable Question (source)
2017-02-02 09:14:08 -0500 received badge  Popular Question (source)
2017-01-25 06:10:52 -0500 answered a question Where should I install endpoint APIs?

You should put the Glance IP address.
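
For example, when creating the image endpoints (region, IP and port are placeholders):

openstack endpoint create --region RegionOne image public http://GLANCE_IP:9292
openstack endpoint create --region RegionOne image internal http://GLANCE_IP:9292
openstack endpoint create --region RegionOne image admin http://GLANCE_IP:9292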

2017-01-25 03:18:00 -0500 asked a question PCI-PT ports association & pci_alias

Hi, I am currently playing with PCI Passthrough VMs.

The compute node has 4 identical PCI-PT ports (2 NICs with 2 10Gb ports, Intel). I configured

pci_alias = {"vendor_id": "8086", "product_id": "10fb", "name":"intel_pt"}

and a corresponding flavor with

nova flavor-key VNF.4PCIPT set "pci_passthrough:alias"="intel_pt:4"

I always create the VM with a Heat template, verifying that all vNICs are always mapped to PCI-PT ports in the same order (lower bus address - higher port number is the first one, followed by lower bus address - higher port number -1).

e.g.:

0000:03:00.1

0000:03:00.0

0000:04:00.1

0000:04:00.0

Is there anyone who could confirm this behaviour, so I can be sure it is deterministic? Where could I look for the logic behind this behaviour?

Another approach to specifically map PCI-PT devices to vNICs could be to define a specific pci_alias that maps to a predefined port. Would it be possible to define a pci_alias referring to the 0000:04:00.1 port only (assuming all NIC ports are identical, from both the vendor and product ID perspective)?

Thanks in advance

2016-12-22 10:05:17 -0500 commented question spawning a VM with SRIOV interfaces is failing

Can you add the flavor vn.nclim8p and port details?

2016-12-22 10:01:11 -0500 edited question spawning a VM with SRIOV interfaces is failing

Hi,

I am running OpenStack Mitaka. I am trying to spawn a VM with 2 interfaces: one interface is Virtio, the second one is SR-IOV (with binding direct). However, the VM fails with an error. I am spawning the VM in the following manner:

nova boot --flavor vn.nclim8p --boot-volume e7ded76a-53d4-4573-90df-218c80ddbee7 --config-drive true \
 --nic port-id=15083f0d-e036-4daf-802d-27859b93595b \
 --nic port-id=257b5deb-1023-4528-af40-92fe2fd5a0e6 \
VM

I see the following error in nova-scheduler log

Compute fails NUMA topology requirements.

If I spawn a VM only with Virtio interfaces it is going through fine.

On Compute: nova.conf

pci_passthrough_whitelist = [ {"vendor_id":"15b3", "product_id":"1004"}, { "devname": "ens5f0", "physical_network": "physnet1"} ]

Here both Mellanox (15B3:1004) and Intel cards (ens5f0) are mentioned. SRIOV is working fine for Mellanox cards.

sriov_agent.ini

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

[sriov_nic]
physical_device_mappings = physnet1:ens5f0
exclude_devices =

ip link  show ens5f0
3: ens5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 14:02:ec:82:f9:f4 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 1 MAC 1a:2a:7f:cc:a2:fe, spoof checking on, link-state auto
    vf 2 MAC 92:bc:69:ab:9f:80, spoof checking on, link-state auto
    vf 3 MAC e6:e9:d0:c2:2a:99, spoof checking on, link-state auto
    vf 4 MAC 62:b5:e2:35:fb:3c, spoof checking on, link-state auto
    vf 5 MAC 62:de:c3:c7:e0:5c, spoof checking on, link-state auto
    vf 6 MAC 6e:95:ab:45:71:0e, spoof checking on, link-state auto

On Controller: nova.conf

scheduler_default_filters = AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, NUMATopologyFilter, PciPassthroughFilter, RetryFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter

pci_alias={"name":"Mellanox_VF", "product_id":"1004", "vendor_id":"15b3", "device_type":"type-VF"}

ml2_conf.ini

[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge,sriovnicswitch
extension_drivers = port_security

[ml2_sriov]
supported_pci_vendor_devs = 8086:10ed
agent_required = False

[sriov_nic]
physical_device_mappings = physnet1:ens5f0

Can you please let me know if I am missing anything here?

Regards Krishna

2016-12-09 05:55:39 -0500 received badge  Nice Answer (source)
2016-10-19 04:50:15 -0500 commented answer Trove is a database as a service solution or a virtual machine image database?

No, I do not think so. You should have a minimal OpenStack setup (Keystone, Nova, Neutron (or at least Nova Network), Glance and Trove). Try reading this.

2016-09-03 01:51:39 -0500 received badge  Nice Answer (source)