
Michel's profile - activity

2018-12-14 04:29:37 -0500 received badge  Famous Question (source)
2017-05-17 13:00:15 -0500 received badge  Famous Question (source)
2017-03-03 09:28:38 -0500 received badge  Famous Question (source)
2017-03-03 09:28:38 -0500 received badge  Notable Question (source)
2017-03-03 09:28:38 -0500 received badge  Popular Question (source)
2017-02-26 23:25:05 -0500 received badge  Famous Question (source)
2016-10-20 08:36:08 -0500 answered a question Mitaka : snapshot failed - ImageNotAuthorized

Hi,

Increasing the token expiration solved the issue.
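For reference, this is roughly the change I made (from memory; 14400 seconds is just an example value, and the restart command assumes keystone runs under httpd as in the install guide):

[root@controller ~]# vi /etc/keystone/keystone.conf
[token]
# default is 3600 seconds; uploading the ~40G snapshot took longer than that,
# so the token used during the image upload expired (ImageNotAuthorized)
expiration = 14400

[root@controller ~]# systemctl restart httpd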

2016-10-18 11:23:21 -0500 asked a question Mitaka : snapshot failed - ImageNotAuthorized

Hi,

Mitaka release. Controller node (VMware VM). Compute node (HP G8).

I would like to create an image from an instance and reuse it later as a reference image.

I am trying to perform a snapshot of my VM via :

[root@controller images]# nova list
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------+
| ID                                   | Name      | Status  | Task State | Power State | Networks                                |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------+
| ced63b7e-e36f-4aed-ad73-162da88d82f6 | OSP53     | ACTIVE  | -          | Running     | selfservice=172.16.1.11                 |
| cb009e5e-5e6c-4c86-9e5c-4f80f64557a3 | OSP53_250 | SHUTOFF | -          | Shutdown    | selfservice=172.16.1.19                 |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------+
[root@controller images]# nova image-create --poll OSP53_250 OSP53_250Snap

The instance size is :

[root@compute1 _base]# qemu-img info c5d33aa3157b23eadef677e95823eaf773a86865
image: c5d33aa3157b23eadef677e95823eaf773a86865
file format: raw
virtual size: 245G (263066746880 bytes)
disk size: 39G

My controller has enough space :

[root@controller images]# df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/centos_controller-root  481G  133G  349G  28% /

I can see the snapshot file growing, but after a while it is deleted:

-rw-r-----. 1 glance glance 3.9G Oct 18 15:30 411f16f5-644e-4198-8168-089f2ead54d6

The snapshot stays in the saving state:

[root@controller images]# openstack image list
+--------------------------------------+---------------+--------+
| ID                                   | Name          | Status |
+--------------------------------------+---------------+--------+
| 411f16f5-644e-4198-8168-089f2ead54d6 | OSP53_250Snap | saving |
| 31139f9f-bf46-4cd8-8414-455c78d0ea3e | OSP53-250     | active |
| 31682baf-b55c-47ef-b773-9c8f462006d1 | cirros        | active |
| 17de666d-fd41-4082-8615-472c9d53142e | OSP53         | active |
+--------------------------------------+---------------+--------+

From the Dashboard, I see the status stuck at Saving, 39.7 GB...

I can't find any error in the logs.

Nova log :

2016-10-18 15:16:49.885 55303 INFO nova.compute.resource_tracker [req-79a01dd0-e6d6-43ef-8bdd-2cd9c1ef0af4 - - - - -] Compute_service record updated for compute1:compute1
2016-10-18 15:17:15.054 55303 INFO nova.compute.manager [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] instance snapshotting
2016-10-18 15:17:15.056 55303 WARNING nova.compute.manager [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] trying to snapshot a non-running instance: (state: 4 expected: 1)
2016-10-18 15:17:15.146 55303 INFO nova.virt.libvirt.driver [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] Beginning cold snapshot process

2016-10-18 15:23:54.866 55303 INFO nova.compute.resource_tracker [req-79a01dd0-e6d6-43ef-8bdd-2cd9c1ef0af4 - - - - -] Compute_service record updated for compute1:compute1
2016-10-18 15:24:20.069 55303 INFO nova.virt.libvirt.driver [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] Snapshot extracted, beginning image upload

2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [req-a95e9d29-2a19-4287-b195-ec3fad2c9248 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] Error while trying to clean up image 411f16f5-644e-4198-8168-089f2ead54d6
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3] Traceback (most recent call last):
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 431, in decorated_function
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3]     self.image_api.delete(context, image_id)
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3]   File "/usr/lib/python2.7/site-packages/nova/image/api.py", line 141, in delete
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3]     return session.delete(context, image_id)
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3]   File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 499, in delete
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager [instance: cb009e5e-5e6c-4c86-9e5c-4f80f64557a3]     self._client.call(context, 1, 'delete', image_id)
2016-10-18 16:25:00.834 55303 ERROR nova.compute.manager ...
(more)
2016-10-17 08:06:26 -0500 answered a question How to create network configuration for Openstack Mitaka

Hi Partha,

Are you able to ping your 10.4.7.XX VMware instances from your 10.0.0.XX KVM instances?

I have been trying to achieve this for a while.

My KVM instances need to communicate with VMware instances...

Thanks,

Michel

2016-10-11 01:45:16 -0500 received badge  Famous Question (source)
2016-10-06 05:06:50 -0500 received badge  Popular Question (source)
2016-10-04 09:02:32 -0500 received badge  Notable Question (source)
2016-10-03 11:40:02 -0500 asked a question Mitaka : install VM via Management network

Hi,

My controller node (VMware VM) and compute node (G8) are installed on my private LAN (10.0.0.0/24).

I am able to create an instance and assign a floating IP address.

GURKES299 linus> 
GURKES299 linus> ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:0A:XX:XX  
          inet addr:172.16.XX.XX...

I am able to access the instance via the floating IP, or from the controller via ip netns exec qrouter-xxxxx ssh XXX@172.16.XX.XX.

As my KVM instances need to communicate with several machines on 10.0.0.0/24, is it possible to create an instance that can communicate with the vSphere VMs installed on 10.0.0.0/24?

[root@controller images]# neutron net-list
+--------------------------------------+-------------+--------------------------------------------------------+
| id                                   | name        | subnets                                                |
+--------------------------------------+-------------+--------------------------------------------------------+
| bf33523d-589c-40d9-b465-b0c5dfd708ae | provider    | 34f4e7ab-490c-4926-a566-6f835e20afb3 212.166.XX.XX/28 |
| 0557444b-2627-4ecc-806d-d9f73814e877 | selfservice | 610537c6-fb58-4b07-993b-c47a9e8444a5 172.16.1.0/24     |
+--------------------------------------+-------------+--------------------------------------------------------+

The management network is linked to eth0 (VMware VLAN), 10.0.0.0/24.

The provider network contains a set of public IPs linked to eth3 (public VLAN), 212.166.XX.XX/28.

The best solution would be a self-service setup that assigns a 10.0.0.XX IP to the instance (so it can ping the other vSphere VMs installed on 10.0.0.0/24) while still using a floating IP to reach the machine from the outside world.

Could you please tell me if this is feasible ?
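For what it's worth, what I have in mind is something like the following (untested sketch; it assumes the Linux bridge agent as in the install guide, and the physmgmt label, interface names and allocation pool are only placeholders):

# /etc/neutron/plugins/ml2/ml2_conf.ini (controller)
[ml2_type_flat]
flat_networks = provider,physmgmt

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (controller and compute)
[linux_bridge]
physical_interface_mappings = provider:eth3,physmgmt:eth0

# after restarting the neutron agents, create a shared flat network on 10.0.0.0/24
[root@controller ~]# neutron net-create management --shared \
  --provider:network_type flat --provider:physical_network physmgmt
[root@controller ~]# neutron subnet-create --name management \
  --allocation-pool start=10.0.0.100,end=10.0.0.200 \
  --gateway 10.0.0.1 management 10.0.0.0/24

Instances attached to such a network would get a 10.0.0.XX address directly, so they should be able to reach the vSphere VMs without NAT, while floating IPs from the provider network could still be used from the outside world.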

Thanks for your help !

Michel

2016-10-03 02:33:20 -0500 received badge  Notable Question (source)
2016-10-03 02:33:20 -0500 received badge  Popular Question (source)
2016-09-07 10:44:09 -0500 received badge  Notable Question (source)
2016-09-01 04:11:08 -0500 received badge  Famous Question (source)
2016-08-26 12:55:28 -0500 received badge  Notable Question (source)
2016-08-26 06:37:59 -0500 asked a question mitaka : Windows Server 2012 - Boot failed from CDROM (0004)

Hi,

I am trying to create a Windows 2012 server image as described here : http://docs.openstack.org/image-guide/windows-image.html

[root@compute1 home]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------
 23    instance-00000020              running
 24    instance-00000021              running
 28    instance-00000024              running

[root@compute1 home]# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     no            yes

[root@compute1 home]# ls -lrt
total 3765548
-rw-r--r--. 1 root root     197120 Aug 26 09:32 ws2012.qcow2
-rw-r--r--. 1 qemu qemu 3694962688 Aug 26 10:35 9200.16384.WIN8_RTM.120725-1247_X64FRE_SERVER_EVAL_EN-US-HRM_SSS_X64FREE_EN-US_DV5.ISO
-rw-r--r--. 1 qemu qemu  160755712 Aug 26 10:38 virtio-win-0.1.102.iso

The Windows ISO md5sum seems OK:

md5sum 9200.16384.WIN8_RTM.120725-1247_X64FRE_SERVER_EVAL_EN-US-HRM_SSS_X64FREE_EN-US_DV5.ISO

8503997171f731d9bd1cb0b0edc31f3d

After launching virt-install:

virt-install --connect qemu:///system  --name ws2012 --ram 2048 --vcpus 2  --network network=default,model=virtio --disk path=ws2012.qcow2,format=qcow2,device=disk,bus=virtio --cdrom 9200.16384.WIN8_RTM.120725-1247_X64FRE_SERVER_EVAL_EN-US-HRM_SSS_X64FREE_EN-US_DV5.ISO --disk path=virtio-win-0.1.102.iso,device=cdrom --vnc --os-type windows --os-variant win2k8

I see in virt-manager :

Boot failed : Could not read from CDROM (code 0004)

Did I do something wrong?
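One thing I plan to try next (just a guess based on similar reports, not a confirmed fix): attach the install ISO as an explicit IDE CD-ROM and force the boot order, instead of passing it with --cdrom:

virt-install --connect qemu:///system --name ws2012 --ram 2048 --vcpus 2 \
  --network network=default,model=virtio \
  --disk path=ws2012.qcow2,format=qcow2,device=disk,bus=virtio \
  --disk path=9200.16384.WIN8_RTM.120725-1247_X64FRE_SERVER_EVAL_EN-US-HRM_SSS_X64FREE_EN-US_DV5.ISO,device=cdrom,bus=ide \
  --disk path=virtio-win-0.1.102.iso,device=cdrom,bus=ide \
  --boot cdrom,hd --vnc --os-type windows --os-variant win2k8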

Thanks for your help !

Michel

2016-08-26 06:10:36 -0500 answered a question Mitaka : cannot install CentOS instance - installation destination is full

I had launched the CentOS 7 ISO instead of the CentOS 7 image.

Thanks

2016-08-25 06:41:53 -0500 received badge  Famous Question (source)
2016-08-22 09:01:57 -0500 commented question Mitaka : access instance via private ip

From the controller, I am able to access my VM using:

ip netns exec qrouter-xxxxx ssh cirros@10.0.0.X

Is there a way to use ssh cirros@10.0.0.X directly, without the ip netns exec qrouter-xxxxx prefix?

2016-08-20 22:44:16 -0500 received badge  Popular Question (source)
2016-08-18 17:11:47 -0500 received badge  Popular Question (source)
2016-08-18 07:45:06 -0500 asked a question Mitaka : selfservice network - ssh via private network

Hi,

After installing cirros via self-service network :

+-------------------------+----------------------+--------+----------------------------------------+
| ID                      | Name                 | Status | Networks                               |
+-------------------------+----------------------+--------+----------------------------------------+
| 24cc35ca-4724-46a9-9... | selfservice-instance | ACTIVE | selfservice=172.16.1.4, xxx.xxx.xx.190 |
+-------------------------+----------------------+--------+----------------------------------------+

I am able to access the instance via my public IP: ssh cirros@xxx.xxx.xx.190

but to access the instance via the private IP, I have to do this (from the controller node):

ip netns exec qrouter-dd19a814-d681-4b0c-8bf7-0ffd79c2ac73 ssh cirros@172.16.1.4

$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:DB:33:92  
          inet addr:172.16.1.4  Bcast:172.16.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fedb:3392/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:844 errors:0 dropped:0 overruns:0 frame:0
          TX packets:846 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:92004 (89.8 KiB)  TX bytes:87639 (85.5 KiB)

Is there a way to access the instance via the private IP without using ip netns exec qrouter-dd19a814-d681-4b0c-8bf7-0ffd79c2ac73 (just ssh cirros@172.16.1.4)?
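One workaround I am considering (it only hides the prefix, and only works from the controller node where the qrouter namespace lives; it assumes ssh is run as root and nc is installed): wrap the namespace call in an ssh ProxyCommand:

# /root/.ssh/config on the controller
Host 172.16.1.*
    ProxyCommand ip netns exec qrouter-dd19a814-d681-4b0c-8bf7-0ffd79c2ac73 nc %h %p

# then, from the controller:
[root@controller ~]# ssh cirros@172.16.1.4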

Thanks & Regards,

Michel

2016-08-17 10:07:31 -0500 asked a question Mitaka : cannot install CentOS instance - installation destination is full

Hi,

After successfully booting a CentOS 7.0 image, I can't install it because the installer reports the installation destination as full.

I see 40 GiB (Virtio Block Device) vda with 0 B free.
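In case it helps, this is what I am checking on my side (just diagnostics: comparing the flavor's disk with what the image declares):

[root@controller ~]# openstack flavor show 3 -c disk -c ram
[root@controller ~]# openstack image show CentOS7 -c disk_format -c min_disk -c size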

Thanks for your help !

Boot debug :

[root@controller neutron]#  openstack --debug server create --flavor 3  --image CentOS7  --nic net-id=0557444b-2627-4ecc-806d-d9f73814e877 --security-group default --key-name mykey CentOS_7
START with options: ['--debug', 'server', 'create', '--flavor', '3', '--image', 'CentOS7', '--nic', 'net-id=0557444b-2627-4ecc-806d-d9f73814e877', '--security-group', 'default', '--key-name', 'mykey', 'CentOS_7']
options: Namespace(access_token_endpoint='', auth_type='', auth_url='http://controller:35357/v3', cacert='', client_id='', client_secret='***', cloud='', debug=True, default_domain='default', deferred_help=False, domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='', log_file=None, os_compute_api_version='', os_data_processing_api_version='1.1', os_data_processing_url='', os_dns_api_version='2', os_identity_api_version='3', os_image_api_version='2', os_network_api_version='', os_object_api_version='', os_orchestration_api_version='1', os_project_id=None, os_project_name=None, os_volume_api_version='', password='***', profile=None, project_domain_id='', project_domain_name='default', project_id='', project_name='admin', protocol='', region_name='', scope='', service_provider_endpoint='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='default', user_id='', username='admin', verbose_level=3, verify=None)
defaults: {u'auth_type': 'password', u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': u'2', 'cacert': None, u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', u'interface': None, u'network_api_version': u'2', u'image_format': u'qcow2', u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 'verify': True, u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'container_api_version': u'1', u'dns_api_version': u'2', u'object_store_api_version': u'1', u'disable_vendor_agent': {}}
cloud cfg: {'auth_type': 'password', u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 'data_processing_api_version': '1.1', u'network_api_version': u'2', u'image_format': u'qcow2', u'image_api_version': '2', 'verify': True, u'dns_api_version': '2', u'object_store_api_version': u'1', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': {'username': 'admin', 'project_name': 'admin', 'user_domain_name': 'default', 'auth_url': 'http://controller:35357/v3', 'password': '***', 'project_domain_name': 'default'}, 'default_domain': 'default', u'container_api_version': u'1', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': '1', 'timing': False, 'cacert': None, u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', 'debug': True, u'interface': None, u'disable_vendor_agent': {}}
compute API version 2, cmd group openstack.compute.v2
network API version 2, cmd group openstack.network.v2
image API version 2, cmd group openstack.image.v2
volume API version 2, cmd group openstack.volume.v2
identity API version 3, cmd group openstack.identity.v3
object_store API version 1, cmd group openstack.object_store.v1
dns API version 2, cmd group openstack.dns.v2
data_processing API version 1.1, cmd group openstack.data_processing.v1
orchestration API version 1, cmd group openstack.orchestration.v1
command: server create -> openstackclient.compute.v2.server.CreateServer
Auth plugin password selected
auth_type: password
Using auth plugin: password
Using parameters {'username': 'admin', 'project_name': 'admin', 'auth_url': 'http://controller:35357/v3', 'user_domain_name': 'default', 'password': '***', 'project_domain_name': 'default'}
Get auth_ref
REQ: curl -g -i -X GET http://controller:35357/v3 -H "Accept: application/json" -H "User-Agent: python-openstackclient keystoneauth1/2.4.1 python-requests/2.10.0 CPython/2.7.5"
Starting new HTTP ...
(more)
2016-08-17 09:19:28 -0500 received badge  Popular Question (source)
2016-08-16 08:44:58 -0500 asked a question Mitaka : access instance via private ip

Hi,

After successfully creating the cirros instance, I am able to access it via SSH on its public IP.

The controller node (VM) and compute node (G8) are installed on my private LAN (10.0.0.0/24).

I am using the provider network option with a range of public IPs:

+--------------------------------------+-------------+------------------------------------------------+
| id                                   | name        | subnets                                        |
+--------------------------------------+-------------+------------------------------------------------+
| bf33523d-589c-40d9-b465-b0c5dfd708ae | provider    | 34f4e7ab-490c-4926-a56...  212.XXX.XX.XXX/XX   |
+--------------------------------------+-------------+------------------------------------------------+


[root@controller ~]# openstack server list
+--------------------------------------+-------------------+--------+-------------------------+
| ID                                   | Name              | Status | Networks                |
+--------------------------------------+-------------------+--------+-------------------------+
| a5de1441-15e9-435f-aa3f-c478633d1df5 | provider-instance | ACTIVE | provider=212.XXX.XX.XXX |
+--------------------------------------+-------------------+--------+-------------------------+

I am able to access the cirros instance via the public IP:

[root@controller ~]# ssh cirros@212.XXX.XXX.XXX
...
$ ifconfig
eth0     ...

I would like to know if it's possible to assign a private IP to this instance and then access it via SSH:

ssh cirros@10.0.0.X

Thanks & Regards,

Michel

2016-08-16 07:16:53 -0500 commented answer Mitaka : Instance in error state due to Failed to notify nova on events

It's fixed now... An entry was missing in /etc/neutron/neutron.conf. Many thanks for your help!
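For reference, I don't remember exactly which line it was, but the nova notification block that the Mitaka install guide puts in /etc/neutron/neutron.conf looks roughly like this (NOVA_PASS and the region are placeholders):

[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS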

2016-08-15 20:26:22 -0500 received badge  Notable Question (source)
2016-08-14 07:25:46 -0500 received badge  Popular Question (source)
2016-08-12 09:31:43 -0500 answered a question Mitaka : Instance in error state due to Failed to notify nova on events

Hi Barry,

Thanks for your answer !

Here are my demo and admin files:

[root@controller openstack]# cat /usr/openstack/admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PSW
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

[root@controller openstack]# cat /usr/openstack/demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PSW
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

I am able to request tokens but I am still facing the same issue...

[root@controller openstack]# . admin-openrc
[root@controller openstack]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2016-08-12T14:59:17.995362Z                                                                                                                                                             |
| id         | gAAAAABXrdY1e92hOP-XbMptyTNP8qqlWGSmwrOyUn7R0i-XJ8qXNURiUrX5v5r0toUegpH4du3S4g7kA-dFEStkX_7ofddwH_mnsvXg7De1rZaQFe9NK3iRjjwLy-Yvku9AI5l9tshaQIus32AeTFPrQLtlo9jsbvF3DxR5j-ED_jBoQo157is |
| project_id | 9004b5f9eb7a40b4906d7e197abf6903                                                                                                                                                        |
| user_id    | 40ea01e3ad834163b400129f6dae4b15                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller openstack]# . demo-openrc
[root@controller openstack]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2016-08-12T14:59:44.458172Z                                                                                                                                                             |
| id         | gAAAAABXrdZQGPTq2RecbgcPiQOptO3X1XiIp3Vj7_EaMHCvk2Pe_mQcFKLAUQ-mghF-3pvaiBiEUIx3TWykJkrkD0Nn5LBKtx-nq9tVeH1LJnn1bYM-lIkU1q7-LTtKH3x_0HqP7PzC65CNo2YbTagXYWwB-_0-dccfsV4SNsMTVxee73yh1qw |
| project_id | d1aa453f237041239787fa046a734c11                                                                                                                                                        |
| user_id    | f198c46485bb4de68f99a359c86483e6                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2016-08-12 08:42:48 -0500 asked a question Mitaka : Instance in error state due to Failed to notify nova on events

Hi,

When launching a cirros instance, the compute node waits for the vif-plugging callback from the controller. Unfortunately the neutron controller is not able to send the event, so the compute node deletes the instance (timeout).

This is a fresh install of Mitaka (no upgrade from an older version). There is no such issue with Kilo in the same environment (3 nodes).

Any help would be highly appreciated

Thanks & Regards,

Michel

Controller: VM - CentOS 7
Compute: G8 - CentOS 7

SELinux: permissive on both nodes
Firewalld: disabled on both nodes

From the controller's neutron server.log:

2016-08-12 14:51:30.067 3070 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'6a99763c-8814-4659-9ae5-d4db6c50f7eb', 'name': 'network-vif-plugged', 'server_uuid': u'433be2f0-91a2-4a23-a9ee-e118c228ab62'}]

2016-08-12 14:51:22.370 3068 INFO neutron.wsgi [req-3a96bcca-05c2-4955-9972-4c47bead52ee 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 - - [12/Aug/2016 14:51:22] "GET /v2.0/networks/bf33523d-589c-40d9-b465-b0c5dfd708ae HTTP/1.1" 200 877 0.076181
2016-08-12 14:51:22.528 3068 INFO neutron.wsgi [req-a7152096-9516-4e25-b203-4b7391c30992 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 - - [12/Aug/2016 14:51:22] "GET /v2.0/networks.json?id=bf33523d-589c-40d9-b465-b0c5dfd708ae HTTP/1.1" 200 880 0.074319
2016-08-12 14:51:22.539 3068 INFO neutron.wsgi [req-7e755bc6-cca2-483b-9888-7b89c2e6ec8e 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 - - [12/Aug/2016 14:51:22] "GET /v2.0/quotas/9004b5f9eb7a40b4906d7e197abf6903.json HTTP/1.1" 200 385 0.007345
2016-08-12 14:51:22.604 3068 INFO neutron.wsgi [req-1a3fd2e2-3169-45cf-9e72-1b000aea6803 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 - - [12/Aug/2016 14:51:22] "GET /v2.0/ports.json?fields=id&tenant_id=9004b5f9eb7a40b4906d7e197abf6903 HTTP/1.1" 200 369 0.061353
2016-08-12 14:51:23.311 3068 INFO neutron.wsgi [req-8831b616-2a4d-41f7-a8f9-72f863822bf7 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 - - [12/Aug/2016 14:51:23] "GET /v2.0/ports.json?device_id=433be2f0-91a2-4a23-a9ee-e118c228ab62 HTTP/1.1" 200 226 0.070872
2016-08-12 14:51:23.993 3068 INFO neutron.wsgi [req-5564fca4-13fc-4821-9c27-2fe58a181062 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.200 - - [12/Aug/2016 14:51:23] "GET /v2.0/extensions.json HTTP/1.1" 200 6007 0.005699
2016-08-12 14:51:24.128 3068 INFO neutron.wsgi [req-bfd0e2da-3ae0-4927-8202-1547ba63c891 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.200 - - [12/Aug/2016 14:51:24] "GET /v2.0/networks.json?id=bf33523d-589c-40d9-b465-b0c5dfd708ae HTTP/1.1" 200 880 0.127760
2016-08-12 14:51:24.240 3068 INFO neutron.wsgi [req-28ed606c-e091-4b7e-b1cd-8bbf72da51da 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.200 - - [12/Aug/2016 14:51:24] "GET /v2.0/security-groups.json?tenant_id=9004b5f9eb7a40b4906d7e197abf6903 HTTP/1.1" 200 2532 0.099570
2016-08-12 14:51:25.451 3068 INFO neutron.wsgi [req-1b74a43d-9d85-43cd-a13f-d72d3f116296 1a2e4525db5448d8a9985df7b79090c9 273f07c1bcad4ae5a8c607286cccc37a - - -] 10.0.0.200 - - [12/Aug/2016 14:51:25] "POST /v2.0/ports.json HTTP/1.1" 201 1071 0.877761
2016-08-12 14:51:25.536 3068 INFO neutron.wsgi [req-23a0aa5a-a7c1-4d27-b8ab-8a3f884f542c 1a2e4525db5448d8a9985df7b79090c9 273f07c1bcad4ae5a8c607286cccc37a - - -] 10.0.0.200 - - [12/Aug/2016 14:51:25] "GET /v2.0/ports.json?tenant_id=9004b5f9eb7a40b4906d7e197abf6903&device_id=433be2f0-91a2-4a23-a9ee-e118c228ab62 HTTP/1.1" 200 1069 0.049650
2016-08-12 14:51:25.559 3068 INFO neutron.wsgi [req-4bd3bc97-5a1e-41c8-a648-39fca9118c7a 1a2e4525db5448d8a9985df7b79090c9 273f07c1bcad4ae5a8c607286cccc37a - - -] 10.0.0.200 - - [12/Aug/2016 14:51:25] "GET /v2.0/floatingips.json?fixed_ip_address=212.166.53 ...
(more)
2016-08-12 06:29:46 -0500 commented answer Mitaka Server List Status:ERROR

Hi Barry,

Indeed :

From keystone log :

2016-08-12 13:25:35.989 5752 WARNING keystone.common.wsgi [req-8b382614-3a8e-42a0-96b5-1af0784900e4 - - - - -] Authorization failed. The request you have made requires authentication. from 10.0.0.40

2016-08-12 04:17:56 -0500 answered a question Mitaka Server List Status:ERROR

Hi,

I have the same issue when creating an instance of Cirros (fresh installation of Mitaka)

No issue with Kilo on the same environment.

Any help would be highly appreciated

Thanks & Regards,

Michel

Controller: VM - CentOS 7

Compute: G8 - CentOS 7

SELinux: permissive on both nodes
Firewalld: disabled on both nodes

[root@controller log]# openstack flavor  list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0  | m1.nano   |    64 |    1 |         0 |     1 | True      |
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 31682baf-b55c-47ef-b773-9c8f462006d1 | cirros | active |
+--------------------------------------+--------+--------+

From nova-api.log :

2016-08-12 10:35:05.636 3331 INFO nova.api.openstack.wsgi [req-7f4dec2e-98ef-41dd-9841-f1a22d646a30 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] HTTP exception thrown: Image not found.

2016-08-12 10:35:05.775 3331 INFO nova.api.openstack.wsgi [req-bb87361e-94d3-4073-842c-f188a97e48f2 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] HTTP exception thrown: Flavor m1.nano could not be found.

2016-08-12 10:35:05.636 3331 INFO nova.api.openstack.wsgi [req-7f4dec2e-98ef-41dd-9841-f1a22d646a30 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] HTTP exception thrown: Image not found.
2016-08-12 10:35:05.637 3331 INFO nova.osapi_compute.wsgi.server [req-7f4dec2e-98ef-41dd-9841-f1a22d646a30 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 "GET /v2.1/9004b5f9eb7a40b4906d7e197abf6903/images/cirros HTTP/1.1" status: 404 len: 351 time: 0.3375230
2016-08-12 10:35:05.698 3331 INFO nova.osapi_compute.wsgi.server [req-6263c4fb-024f-49e2-97ec-9e1edbaf59a0 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 "GET /v2.1/9004b5f9eb7a40b4906d7e197abf6903/images HTTP/1.1" status: 200 len: 1247 time: 0.0553460
2016-08-12 10:35:05.747 3331 INFO nova.osapi_compute.wsgi.server [req-3518cede-c8c9-49ec-8099-535e250ae80a 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 "GET /v2.1/9004b5f9eb7a40b4906d7e197abf6903/images/31682baf-b55c-47ef-b773-9c8f462006d1 HTTP/1.1" status: 200 len: 945 time: 0.0432029
2016-08-12 10:35:05.775 3331 INFO nova.api.openstack.wsgi [req-bb87361e-94d3-4073-842c-f188a97e48f2 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] HTTP exception thrown: Flavor m1.nano could not be found.
2016-08-12 10:35:05.777 3331 INFO nova.osapi_compute.wsgi.server [req-bb87361e-94d3-4073-842c-f188a97e48f2 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 "GET /v2.1/9004b5f9eb7a40b4906d7e197abf6903/flavors/m1.nano HTTP/1.1" status: 404 len: 369 time: 0.0242729
2016-08-12 10:35:05.810 3331 INFO nova.osapi_compute.wsgi.server [req-2acdd9d1-bbbe-400b-9437-91cd386420f8 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 "GET /v2.1/9004b5f9eb7a40b4906d7e197abf6903/flavors HTTP/1.1" status: 200 len: 1728 time: 0.0285199
2016-08-12 10:35:05.840 3331 INFO nova.osapi_compute.wsgi.server [req-413d2009-bd66-4c0b-bb59-669c9d20d873 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] 10.0.0.40 "GET /v2.1/9004b5f9eb7a40b4906d7e197abf6903/flavors/0 HTTP/1.1" status: 200 len: 687 time: 0.0243511
2016-08-12 10:35:06.366 3331 INFO nova.osapi_compute.wsgi.server [req-85a0908c-80c6-4cb3-bf51-767016d41421 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -]

From glance api.log :

2016-08-12 09:32:38.989 2860 ERROR glance.registry.client.v1.client [req-07785b96-82d4-4c98-8a30-68f52ef804e8 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] Registry client request GET /images/cirros raised NotFound

2016-08-12 09:32:38.989 2860 ERROR glance.registry.client.v1.client [req-07785b96-82d4-4c98-8a30-68f52ef804e8 40ea01e3ad834163b400129f6dae4b15 9004b5f9eb7a40b4906d7e197abf6903 - - -] Registry client request GET /images/cirros raised NotFound
2016-08-12 09:32:38.989 2860 ERROR glance.registry.client.v1.client Traceback (most recent call last):
2016-08-12 ...
(more)
2016-08-11 10:42:56 -0500 received badge  Notable Question (source)
2016-08-11 10:42:56 -0500 received badge  Popular Question (source)
2016-08-02 03:12:49 -0500 received badge  Enthusiast
2016-07-25 10:38:00 -0500 answered a question Exception occurred processing WSGI script

Using

curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=kilo-eol | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin

seems to solve the issue.
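A quick way to check that the tee'd files really contain the Python script and not the cgit HTML page is to look at their first line, which should be a Python comment rather than the DOCTYPE declaration:

[root@controller ~]# head -n 1 /var/www/cgi-bin/keystone/main
# Copyright 2013 OpenStack Foundation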

But I still have an issue when launching the demo instance:

[root@controller nova]# neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 7d43ee10-baa1-4463-aaf0-f8f3b890a01a | ext-net  | 1744efc4-a135-4a4f-ac81-0dcbfef0657a XXXXXX         |
| dba911ce-0bed-4d6d-945e-a19789f36197 | demo-net | b8ed02e6-9cc4-4d82-9f81-2fc5191cdf73 192.168.1.0/24 |
+--------------------------------------+----------+-----------------------------------------------------+

nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=dba911ce-0bed-4d6d-945e-a19789f36197 --security-group default --key-name demo-key demo-instance1

nova-api.log :

: 22486 time: 0.9127622
2016-07-25 17:06:06.464 21586 INFO nova.osapi_compute.wsgi.server [req-0418e4c6-b143-4bf6-bb5a-1c593c7324e5 97582299ac9e42aeb4377ff9224dc2d9 a7107abaece04e43adc2cd299d87e2b1 - - -] 10.0.0.25 "
len: 214 time: 0.0154979
2016-07-25 17:06:06.513 21586 INFO nova.osapi_compute.wsgi.server [req-03e6ab23-3faf-411b-baea-a4722dea203a 97582299ac9e42aeb4377ff9224dc2d9 a7107abaece04e43adc2cd299d87e2b1 - - -] 10.0.0.25 "
" status: 200 len: 1294 time: 0.0452380
2016-07-25 17:06:10.393 21585 INFO nova.osapi_compute.wsgi.server [req-59ebaf54-4a0d-478f-a7a1-7e5f131f1ff1 97582299ac9e42aeb4377ff9224dc2d9 a7107abaece04e43adc2cd299d87e2b1 - - -] 10.0.0.25 "
t=21 HTTP/1.1" status: 200 len: 211 time: 0.2834220
2016-07-25 17:10:21.823 21588 INFO nova.osapi_compute.wsgi.server [req-58e64ecd-4f9e-434c-85af-cfac303aa148 97582299ac9e42aeb4377ff9224dc2d9 a7107abaece04e43adc2cd299d87e2b1 - - -] 10.0.0.25 "
len: 889 time: 0.3874590
2016-07-25 17:11:07.121 21588 INFO nova.osapi_compute.wsgi.server [req-c348b49c-c80e-4eed-aabc-4979d4d02262 97582299ac9e42aeb4377ff9224dc2d9 a7107abaece04e43adc2cd299d87e2b1 - - -] 10.0.0.25 "
5 time: 0.3618941
2016-07-25 17:11:07.169 21588 ERROR nova.api.openstack [req-52bcf50c-b9a8-4b2d-a9fa-63e213fd2a83 97582299ac9e42aeb4377ff9224dc2d9 a7107abaece04e43adc2cd299d87e2b1 - - -] Caught error: id
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack Traceback (most recent call last):
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 125, in __call__
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack     return req.get_response(self.application)
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack     response = self.app(environ, start_response)
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack     return resp(environ, start_response)
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 756, in __call__
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack     content_type, body, accept)
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 821, in _process_stack
2016-07-25 17:11:07.169 21588 TRACE nova.api.openstack     action_result = self.dispatch(meth, request, action_args)
2016-07-25 17:11:07.169 21588 TRACE nova ...
(more)
2016-07-25 10:03:23 -0500 asked a question Exception occurred processing WSGI script

Hi,

I am not able to use the httpd service due to the following error:

2016-07-25 15:35:27.538611 mod_wsgi (pid=23013): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/admin'.
2016-07-25 15:35:27.538690   File "/var/www/cgi-bin/keystone/admin", line 1
2016-07-25 15:35:27.538701     <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"

This is because the Kilo user guide asks us to run the following command:

curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin

[root@controller conf.d]# curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns='http://www.w3.org/1999/xhtml' xml:lang='en' lang='en'>
<head>
<title>openstack/keystone - OpenStack Identity (Keystone)</title>
<meta name='generator' content='cgit v0.12'/>
<meta name='robots' content='index, nofollow'/>
<link rel='stylesheet' type='text/css' href='/static/openstack.css'/>
<link rel='shortcut icon' href='/static/favicon.ico'/>

but then an HTML file is copied to /var/www/cgi-bin/keystone/main (and admin) instead of a Python script.

I have tried with curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py (without the branch parameter):

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from oslo_log import log
from oslo_log import versionutils

from keystone.i18n import _LW
from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)
LOG = log.getLogger(__name__)


def deprecation_warning():
    versionutils.report_deprecated_feature(
        LOG,
        _LW('httpd/keystone.py is deprecated as of Mitaka'
            ' in favor of keystone-wsgi-admin and keystone-wsgi-public'
            ' and may be removed in O.')
    )

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(
    name,
    post_log_configured_function=deprecation_warning)

but then I face another issue :

2016-07-25 16:13:50.609105 mod_wsgi (pid=23641): Target WSGI script '/var/www/cgi-bin/keystone/admin' cannot be loaded as Python module.
2016-07-25 16:13:50.609139 mod_wsgi (pid=23641): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/admin'.
2016-07-25 16:13:50.609168 Traceback (most recent call last):
2016-07-25 16:13:50.609188   File "/var/www/cgi-bin/keystone/admin", line 18, in <module>
2016-07-25 16:13:50.609275 ...
(more)