
Jorge Florencio's profile - activity

2018-04-16 03:27:26 -0500 received badge  Nice Question (source)
2017-08-30 21:34:51 -0500 received badge  Favorite Question (source)
2017-06-19 05:55:26 -0500 received badge  Nice Question (source)
2017-06-08 01:44:14 -0500 received badge  Famous Question (source)
2017-03-30 05:19:02 -0500 received badge  Notable Question (source)
2017-03-30 05:06:29 -0500 commented answer Glance is using 'localhost' for mysql connection even if another host is configured

Great!!! This was the issue.

Thank you very much for your help!

2017-03-30 02:23:22 -0500 commented answer Glance is using 'localhost' for mysql connection even if another host is configured

It failed again, even when I use the IP instead of the hostname.

2017-03-30 02:22:10 -0500 answered a question Glance is using 'localhost' for mysql connection even if another host is configured

The hostname used in the configuration files is the same name defined in the /etc/hosts file:

192.168.1.11    newton01

I've also replaced the hostname with the IP in the configuration files, with the same result: the parameter is being ignored.

This is the content of the api.log file:

2017-03-30 11:07:39.078 27711 CRITICAL glance [-] OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'glance'@'localhost' (using password: YES)")
2017-03-30 11:07:39.078 27711 ERROR glance Traceback (most recent call last):
2017-03-30 11:07:39.078 27711 ERROR glance   File "/usr/bin/glance-manage", line 10, in <module>
2017-03-30 11:07:39.078 27711 ERROR glance     sys.exit(main())
...
...
...
2017-03-30 11:07:39.078 27711 ERROR glance     return Connection(*args, **kwargs)
2017-03-30 11:07:39.078 27711 ERROR glance   File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 193, in __init__
2017-03-30 11:07:39.078 27711 ERROR glance     super(Connection, self).__init__(*args, **kwargs2)
2017-03-30 11:07:39.078 27711 ERROR glance OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'glance'@'localhost' (using password: YES)")
2017-03-30 11:07:39.078 27711 ERROR glance
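
For reference, the install-guide style grants that this connection string expects look like the sketch below (the password placeholder matches the connection option in the config below; whether these grants actually exist on the server is not shown in this log):

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glancedbpassword';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glancedbpassword';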

This is the content of glance-api.conf:

[DEFAULT]
debug = true
verbose = true
[cors]
[cors.subdomain]
[database]
connection mysql+pymysql://glance:glancedbpassword@newton01/glance
[glance_store]
stores = file
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://newton01:5000
auth_url = http://newton01:35357
memcached_servers = newton01:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glancepassword
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]

And this is the content of glance-registry.conf:

[DEFAULT]
debug = true
verbose = true
[database]
connection = mysql+pymysql://glance:glancedbpassword@newton01/glance
[keystone_authtoken]
auth_uri = http://newton01:5000
auth_url = http://newton01:35357
memcached_servers = newton01:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glancepassword
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]

Regards, Jorge.

2017-03-30 01:47:51 -0500 received badge  Popular Question (source)
2017-03-29 11:40:57 -0500 asked a question Glance is using 'localhost' for mysql connection even if another host is configured

Hi folks, I'm trying to install OpenStack Newton on CentOS 7.3 and glance fails to create its tables in the database. After executing this command:

su -s /bin/sh -c "glance-manage db_sync" glance

This error is displayed in the log:

Access denied for user 'glance'@'localhost' (using password: YES)

This is the configuration in the glance-registry.conf and glance-api.conf files:

[database]
connection = mysql+pymysql://glance:password@controller/glance

I can successfully access the database using the configured hostname, but I don't know why glance is trying to use 'localhost' instead of the hostname configured in the files.
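
A manual connection test with the same credentials looks like this (hostname and database name taken from the connection string above; the client prompts for the password):

# mysql -h controller -u glance -p glance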

Many thanks in advance,

Jorge.

2017-02-08 03:30:46 -0500 received badge  Student (source)
2017-01-30 05:29:37 -0500 received badge  Famous Question (source)
2017-01-27 08:58:44 -0500 received badge  Notable Question (source)
2017-01-27 07:26:51 -0500 received badge  Popular Question (source)
2017-01-27 02:31:10 -0500 commented answer Instance gets stuck when booting GRUB

Nested virtualization was already enabled. I still can't boot instances.

2017-01-26 11:28:53 -0500 asked a question Instance gets stuck when booting GRUB

Hi folks, this is OpenStack Mitaka running on CentOS 7.3 on top of VMware ESXi 5.5 (nested virtualization).

I started a CirrOS instance and the console gets stuck while booting GRUB:

Booting from Hard Disk...
GRUB

The nova-compute.log doesn't display any errors.

If I change virt_type to qemu in /etc/nova/nova.conf, the instance boots successfully, but then I can't use hardware virtualization.

The KVM module is loaded on the compute nodes:

# lsmod |grep kvm
kvm_intel             170181  0
kvm                   554609  1 kvm_intel
irqbypass              13503  1 kvm
# ls -la /dev/kvm
crw-rw-rw- 1 root kvm 10, 232 Jan 26 17:28 /dev/kvm
#
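
For reference, a sketch of the related checks and of the exact option being toggled (the comments note expected results, not output captured from this host):

# On the compute node (an ESXi guest), this should be greater than 0 if ESXi
# really exposes hardware-assisted virtualization to the VM:
egrep -c '(vmx|svm)' /proc/cpuinfo

# The option being switched lives in the [libvirt] section of /etc/nova/nova.conf:
# [libvirt]
# virt_type = kvm    # 'qemu' boots the instance, but without hardware acceleration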

Any suggestion would be appreciated.

Thanks! Jorge.

2017-01-15 22:06:10 -0500 received badge  Notable Question (source)
2017-01-15 22:06:10 -0500 received badge  Famous Question (source)
2016-10-27 22:53:00 -0500 received badge  Famous Question (source)
2016-08-09 07:25:06 -0500 received badge  Notable Question (source)
2016-08-09 07:25:06 -0500 received badge  Popular Question (source)
2016-06-03 17:38:13 -0500 asked a question How can I change where ephemeral and swap disks are stored?

Hi folks, when I create a new instance with ephemeral and swap disks, the root disk is created on my default cinder backend, but the ephemeral and swap disks are created as two files (disk.eph0 and disk.swap) in the directory /var/lib/nova/instances/instance-id

With this configuration I can't live-migrate instances, because the /var/lib/nova/instances directory is not shared between the nova hosts.

The cinder backend is shared storage; how can I configure nova to create the ephemeral and swap disks on the shared storage instead of the default directory?
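
A minimal sketch of the nova.conf option usually involved here (the option name is real; treating its directory as a shared-storage mount point is the assumption):

[DEFAULT]
# nova keeps the ephemeral and swap files (disk.eph0, disk.swap) under instances_path.
# Mounting this directory from shared storage on every compute node is one way
# to make those files available to live migration.
instances_path = /var/lib/nova/instances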

Thank you! Jorge.

2016-04-06 12:37:08 -0500 received badge  Popular Question (source)
2016-03-29 08:16:16 -0500 asked a question Liberty Cinder volume multi-attach

Hi folks, I'm trying to create a volume using the --allow-multiattach flag, but the task fails. The scheduler.log shows this error:

Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid host was found. No weighed hosts available
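
The create command in question is along these lines (the size is just an example):

# cinder create --allow-multiattach 1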

Is multiattach supported on cinder for all backend types?

Thank you! Jorge.

2015-04-17 04:55:14 -0500 received badge  Famous Question (source)
2015-04-08 10:44:34 -0500 received badge  Notable Question (source)
2015-04-01 09:07:48 -0500 received badge  Enthusiast
2015-03-30 16:00:28 -0500 received badge  Popular Question (source)
2015-03-30 07:18:53 -0500 asked a question Default volume-type for Cinder volumes?

Hi, I have two backends configured in Cinder:

[DEFAULT]
...
default_volume_type=lvm
enabled_backends=lvm,scaleio

[lvm]
...
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

[scaleio]
...
volume_driver=cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name=ScaleIO

I have created two volume types, one for each backend.

# cinder type-list
+--------------------------------------+---------+
|                  ID                  |   Name  |
+--------------------------------------+---------+
| cb87ac68-7f17-4c3a-b5d3-a23daaa30549 | scaleio |
| f66dd86d-f32e-4d31-a7b5-5f8528d02ae4 |   lvm   |
+--------------------------------------+---------+

# cinder extra-specs-list
+--------------------------------------+---------+---------------------------------------------------------------------------------------------------------------------------------+
|                  ID                  |   Name  |                                                           extra_specs                                                           |
+--------------------------------------+---------+---------------------------------------------------------------------------------------------------------------------------------+
| cb87ac68-7f17-4c3a-b5d3-a23daaa30549 | scaleio | {u'sio:pd_name': u'domain1', u'sio:provisioning_type': u'thin', u'sio:sp_name': u'default', u'volume_backend_name': u'ScaleIO'} |
| f66dd86d-f32e-4d31-a7b5-5f8528d02ae4 |   lvm   |                                              {u'volume_backend_name': u'LVM_iSCSI'}                                             |
+--------------------------------------+---------+---------------------------------------------------------------------------------------------------------------------------------+

When I create a volume without specifying the volume type, the volume is created with the "scaleio" volume type instead of the "lvm" volume type.

Why is the "default_volume_type" parameter in cinder.conf being ignored?
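
For comparison, explicitly requesting a type (which routes the request through the volume_backend_name extra spec shown above) looks like this; the size is just an example:

# cinder create --volume-type lvm 1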

Thank you, Jorge

2014-12-13 13:26:59 -0500 received badge  Nice Answer (source)
2014-11-21 00:56:35 -0500 received badge  Famous Question (source)
2014-11-13 03:13:03 -0500 marked best answer Can't ping external network from router or instances

Hello, I'm running the OpenStack Juno release on CentOS 7.0 with three networks:

  • Private Network: 192.168.10.128/27 (for gre tunnel)
  • External Network: 192.168.10.160/27 (for management & external access)
  • VM Network: 192.168.10.192/27 (for instances)

Here is the bridge configuration:

# ovs-vsctl show
30e10303-8042-4eab-921e-e07b7de4657f
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-c0a80a8e"
            Interface "gre-c0a80a8e"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.10.141", out_key=flow, remote_ip="192.168.10.142"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap8ab8fa25-08"
            tag: 1
            Interface "tap8ab8fa25-08"
                type: internal
        Port "qr-11a3dee8-a4"
            tag: 1
            Interface "qr-11a3dee8-a4"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port "qg-21faed98-74"
            Interface "qg-21faed98-74"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "ens224"
            Interface "ens224"
    ovs_version: "2.1.3"

These are the IPs inside the qrouter namespace:

# ip netns exec qrouter-394aece6-82e3-4f25-b44d-7592c7833493 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
14: qr-11a3dee8-a4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:bc:b4:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.193/27 brd 192.168.10.223 scope global qr-11a3dee8-a4
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:febc:b417/64 scope link
       valid_lft forever preferred_lft forever
15: qg-21faed98-74: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:9a:8a:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.164/27 brd 192.168.10.191 scope global qg-21faed98-74
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe9a:8aa7/64 scope link
       valid_lft forever preferred_lft forever

I can ping an instance:

# ip netns exec qrouter-394aece6-82e3-4f25-b44d-7592c7833493 ping -c 4 192.168.10.196
PING 192.168.10.196 (192.168.10.196) 56(84) bytes of data.
64 bytes from 192.168.10.196: icmp_seq=1 ttl=64 time=0.926 ms
64 bytes from 192.168.10.196: icmp_seq=2 ttl=64 time=0.531 ms
64 bytes from 192.168.10.196: icmp_seq=3 ttl=64 time=0.499 ms
64 bytes from 192.168.10.196: icmp_seq=4 ttl=64 time=0.439 ms

--- 192.168.10.196 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.439/0.598/0.926/0.194 ms

But I can't ping the external gateway:

# ip netns exec qrouter-394aece6-82e3-4f25-b44d-7592c7833493 ping -c 4 192.168.10.161
PING 192.168.10.161 (192.168.10.161) 56(84) bytes of data.

--- 192.168.10.161 ping statistics ---
4 ...
(more)
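
A useful follow-up check (a sketch using the interface and namespace names shown above, not output from the original post) is to watch whether the ICMP traffic actually reaches the physical interface attached to br-ex:

# On the network node, in one terminal:
tcpdump -n -e -i ens224 icmp
# In another terminal, repeat the ping from the router namespace:
ip netns exec qrouter-394aece6-82e3-4f25-b44d-7592c7833493 ping -c 4 192.168.10.161
# If nothing shows up on ens224, the traffic is being dropped inside OVS;
# if packets leave but nothing answers, look at the upstream gateway or VLAN.
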
2014-11-13 03:13:03 -0500 received badge  Self-Learner (source)
2014-11-13 03:13:03 -0500 received badge  Teacher (source)
2014-11-11 16:33:49 -0500 received badge  Supporter (source)
2014-11-11 16:33:47 -0500 received badge  Scholar (source)
2014-11-11 16:30:07 -0500 commented answer iSCSI target not provisioned

Great!! Now it's working!!

2014-11-11 03:18:51 -0500 received badge  Famous Question (source)
2014-11-11 03:13:04 -0500 received badge  Editor (source)
2014-11-11 03:04:32 -0500 answered a question iSCSI target not provisioned

Hello, on my installation there are not two backends; the configuration is the same, but with only one backend. Here is the content of cinder.conf on the storage node:

[DEFAULT]
rabbit_host=*host*
rabbit_userid=*user*
rabbit_password=*pass*
rpc_backend=rabbit
my_ip=*ip*
glance_host=*host*
auth_strategy=keystone
verbose=true
iscsi_ip_address=*ip*
volume_backend_name=LVM_iSCSI
iscsi_helper=lioadm
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[database]
connection=mysql://cinder:*pass*@*host*/cinder
[fc-zone-manager]
[keymgr]
[keystone_authtoken]
auth_uri=http://*host*:5000/v2.0
identity_uri=http://*host*:35357
admin_user=cinder
admin_password=*pass*
admin_tenant_name=service
[matchmaker_redis]
[matchmaker_ring]
[profiler]
[ssl]

Thank you,

Jorge.

2014-11-10 02:04:01 -0500 received badge  Notable Question (source)
2014-11-09 23:19:51 -0500 received badge  Popular Question (source)
2014-11-09 07:10:07 -0500 received badge  Notable Question (source)
2014-11-08 18:37:09 -0500 received badge  Taxonomist
2014-11-08 18:37:09 -0500 asked a question iSCSI target not provisioned

Hello, while installing OpenStack Juno on CentOS 7 I have configured the LIO target as the Cinder iSCSI backend in /etc/cinder/cinder.conf:

iscsi_helper=lioadm
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

When I create a new volume, the iSCSI target is not provisioned, so the compute nodes cannot access the volume:

# cinder create --display-name test 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-11-09T00:26:55.541777      |
| display_description |                 None                 |
|     display_name    |                 test                 |
|      encrypted      |                False                 |
|          id         | 6bf30407-71f4-4f92-9650-d4afb19e1934 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

There are no errors in the Cinder log:

2014-11-09 01:26:55.671 24762 INFO cinder.volume.flows.manager.create_volume [req-35cf7097-65db-487a-a658-d2f6c4eb2d31 224904d37b3b4e26aaaf11739cdfae00 e832975e17064d399d982e4b379169e3 - - -] Volume 6bf30407-71f4-4f92-9650-d4afb19e1934: being created as raw with specification: {'status': u'creating', 'volume_size': 1, 'volume_name': u'volume-6bf30407-71f4-4f92-9650-d4afb19e1934'}

2014-11-09 01:26:55.855 24762 INFO cinder.volume.flows.manager.create_volume [req-35cf7097-65db-487a-a658-d2f6c4eb2d31 224904d37b3b4e26aaaf11739cdfae00 e832975e17064d399d982e4b379169e3 - - -] Volume volume-6bf30407-71f4-4f92-9650-d4afb19e1934 (6bf30407-71f4-4f92-9650-d4afb19e1934): created successfully

But the iSCSI configuration is empty:

# targetcli ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- block .............................................. [Storage Objects: 0]
  | o- fileio ............................................. [Storage Objects: 0]
  | o- pscsi .............................................. [Storage Objects: 0]
  | o- ramdisk ............................................ [Storage Objects: 0]
  o- iscsi ........................................................ [Targets: 0]
  o- loopback ..................................................... [Targets: 0]
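
One basic thing worth verifying at this point (a sketch; package and service names assume the stock CentOS 7 / RDO packaging) is that the LIO target service is installed and running on the storage node, and that cinder-volume was restarted after iscsi_helper was changed:

# yum install -y targetcli python-rtslib
# systemctl enable target
# systemctl start target
# systemctl restart openstack-cinder-volume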

Any idea? Am I missing some configuration or software installation?

Thank you!