
Rodrigo's profile - activity

2016-09-20 08:44:51 -0500 received badge  Famous Question (source)
2016-09-20 08:44:51 -0500 received badge  Notable Question (source)
2016-05-16 14:28:02 -0500 received badge  Famous Question (source)
2016-04-07 02:13:46 -0500 received badge  Famous Question (source)
2016-03-14 03:44:20 -0500 received badge  Notable Question (source)
2016-03-13 08:24:44 -0500 commented question Migration operation has aborted.

That was it! Perfect! Works like a charm.

2016-03-12 16:25:43 -0500 commented question Migration operation has aborted.

Here they are:

disk_cachemodes="file=directsync,block=none"
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
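
(For context, a minimal sketch assuming the Kilo packaging: both options belong in the [libvirt] section of /etc/nova/nova.conf on every compute node, and nova-compute has to be restarted afterwards; instances that are already running may need a hard reboot to pick up the new cache mode.)

    [libvirt]
    # cache modes suggested above for live migration over NFS
    disk_cachemodes="file=directsync,block=none"
    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST

    # apply the change
    service nova-compute restart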

2016-03-07 11:26:40 -0500 answered a question Mirantis 8 won't install when using Advanced Install

Are you using the fake RAID controller?

2016-03-07 02:53:48 -0500 received badge  Popular Question (source)
2016-03-03 09:29:19 -0500 asked a question Migration operation has aborted.

Can I perform live migration using the NFS backend?

When I try, I receive the following error:

[instance: 02c278ba-b79a-49bb-bfd0-b5e7c5667679] Live Migration failure: Unsafe migration: Migration may lead to data corruption if disks use cache != none
[instance: 02c278ba-b79a-49bb-bfd0-b5e7c5667679] Migration operation has aborted

My Cinder config:

[cinder_netapp]
netapp_login=root
netapp_eseries_host_type=linux_dm_mp
netapp_controller_ips=
netapp_size_multiplier=1.2
netapp_vserver=
netapp_server_port=80
nfs_shares_config=/etc/cinder/shares.conf
netapp_sa_password=
thres_avl_size_perc_start=20
netapp_storage_pools=
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_protocol=nfs
netapp_storage_family=ontap_7mode
volume_backend_name=cinder_netapp
thres_avl_size_perc_stop=60
netapp_vfiler=
netapp_volume_list=
netapp_server_hostname=10.1.0.25
expiry_thres_minutes=2160
netapp_transport_type=http
netapp_password=manager
netapp_webservice_path=/devmgr/v2
netapp_copyoffload_tool_path=
nfs_mount_options=rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,sec=sys

root@node-3:~# cat /etc/cinder/shares.conf
10.250.3.253:/vol/openstack

Mounts:

root@node-3:~# df -h
Filesystem                        Size  Used Avail Use% Mounted on
udev                               16G   12K   16G   1% /dev
tmpfs                             3.2G   13M  3.2G   1% /run
/dev/dm-1                          27G  2.8G   23G  12% /
none                              4.0K     0  4.0K   0% /sys/fs/cgroup
none                              5.0M     0  5.0M   0% /run/lock
none                               16G     0   16G   0% /run/shm
none                              100M     0  100M   0% /run/user
/dev/sda3                         196M   44M  143M  24% /boot
/dev/mapper/vm-nova                47G   33M   47G   1% /var/lib/nova
10.250.3.253:/vol/openstack_inst   31G   21G   11G  67% /var/lib/nova/instances
10.250.3.253:/vol/openstack       750G  642G  109G  86% /var/lib/cinder/mnt/36f09148d1243e64fbe04aa72c497a2e
10.250.3.253:/vol/openstack       750G  642G  109G  86% /var/lib/nova/mnt/36f09148d1243e64fbe04aa72c497a2e
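
To confirm which disk is using an unsafe cache mode, one diagnostic sketch (instance-0000001a below is only a placeholder for the libvirt domain name that nova show reports):

    # find the libvirt domain name of the instance
    nova show 02c278ba-b79a-49bb-bfd0-b5e7c5667679 | grep instance_name

    # inspect the cache attribute of each disk in the domain XML
    virsh dumpxml instance-0000001a | grep "cache="
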
2016-02-21 19:06:18 -0500 received badge  Notable Question (source)
2016-02-21 19:06:18 -0500 received badge  Famous Question (source)
2016-02-21 19:06:18 -0500 received badge  Popular Question (source)
2016-02-17 18:49:36 -0500 asked a question Live Migrate kilo NFS storage

Hello guys! I am using the NetApp Cinder NFS driver on the Kilo release. When I try to execute a live migration, I receive this error:

f3d3e2ac4e420fb3951984998c3f66 3336ff9d2c464d78b39d8458af37c866 - - -] [instance: 02c278ba-b79a-49bb-bfd0-b5e7c5667679] Live Migration failure: Unsafe migration: Migration may lead to data corruption if disks use cache != none

What's the problem?

All the best!
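
One quick check (a sketch; paths assume the stock Ubuntu/Kilo packaging) is whether the compute nodes define any cache mode at all:

    # run on each compute node
    grep -E "disk_cachemodes|live_migration_flag" /etc/nova/nova.conf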

2016-02-02 20:26:02 -0500 received badge  Notable Question (source)
2016-02-01 17:17:00 -0500 received badge  Scholar (source)
2016-02-01 17:16:47 -0500 received badge  Popular Question (source)
2016-02-01 17:16:38 -0500 answered a question Cinder Multibacked netapp driver - Kilo

The user "dcain" on the NetApp community forum posted the solution:

http://community.netapp.com/t5/OpenSt...

2016-01-29 13:47:07 -0500 asked a question Cinder Multibacked netapp driver - Kilo

I configured the Cinder driver for NetApp successfully: I can create volumes and instances using the NFS backend. But when I create a new instance through Horizon, the hypervisor's internal storage is marked as in use on the local disk, even though it is not actually used. The instance's volume is created on the NFS share, yet the LOCAL STORAGE on the hypervisor is reported as used.

# nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 2     |
| current_workload     | 0     |
| disk_available_least | 470   |
| free_disk_gb         | 10    |
| free_ram_mb          | 48428 |
| local_gb             | 470   |
| local_gb_used        | 460   |
| memory_mb            | 96556 |
| memory_mb_used       | 48128 |
| running_vms          | 23    |
| vcpus                | 32    |
| vcpus_used           | 23    |
+----------------------+-------+

When all the space on the hypervisor's local storage is marked as in use, I receive the error:

No valid host was found. There are not enough hosts available.

But I have space on the NFS share and all volumes were created on the NFS; I don't have any volumes on the computes' local storage.
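
One interim workaround (a sketch only, assuming it is the scheduler's DiskFilter that rejects the hosts because of this local-disk accounting) would be to relax disk accounting in nova.conf on the controllers and restart the scheduler; whether this is the right fix for the NetApp NFS driver is an open question:

    [DEFAULT]
    # let the scheduler overcommit the reported local disk, since the real
    # capacity lives on the NFS share (assumes DiskFilter is enabled)
    disk_allocation_ratio=2.0
    # or drop DiskFilter from scheduler_default_filters, if it is listed there

    # restart the scheduler to apply
    service nova-scheduler restart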

- cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
enable_v2_api=True
iscsi_protocol=iscsi
use_syslog=True
api_paste_config=/etc/cinder/api-paste.ini
glance_num_retries=0
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
use_stderr=False
debug=False
storage_availability_zone=nova
glance_api_ssl_compression=False
glance_api_insecure=False
syslog_log_facility=LOG_LOCAL3
log_dir=/var/log/cinder
iscsi_ip_address=10.250.3.131
glance_api_servers=10.1.3.130:9292
volume_backend_name=DEFAULT
default_availability_zone=nova
use_syslog_rfc_format=True
control_exchange=cinder
enable_v1_api=True
glance_api_version=2
amqp_durable_queues=False
rpc_backend=cinder.openstack.common.rpc.impl_kombu
kombu_reconnect_delay=5.0
enabled_backends=cinder_isci,cinder_netapp
default_volume_type=cinder_netapp

[oslo_messaging_rabbit]
rabbit_ha_queues=True
rabbit_use_ssl=False
rabbit_hosts=10.1.3.131:5673, 10.1.3.132:5673, 10.1.3.134:5673
rabbit_virtual_host=/
rabbit_password=password
rabbit_userid=nova

[keystone_authtoken]
signing_dir=/tmp/keystone-signing-cinder
admin_password=password
admin_tenant_name=services
auth_uri=http://10.1.3.130:5000/
identity_uri=http://10.1.3.130:5000/
admin_user=cinder
signing_dirname=/tmp/keystone-signing-cinder

[database]
idle_timeout=3600
max_pool_size=30
max_retries=-1
retry_interval=10
max_overflow=60
min_pool_size=1
connection=mysql://cinder:password@10.1.3.130/cinder?charset=utf8&read_timeout=60

[keymgr]
fixed_key=8756254f7e8ee8c57f3858af6c47c420b074c2608e3932ca6720538112ee99d5

[cinder_isci]
iscsi_helper=tgtadm
volume_backend_name=cinder_isci
volume_group=cinder

[cinder_netapp]
netapp_login=root
netapp_eseries_host_type=linux_dm_mp
netapp_controller_ips=
netapp_size_multiplier=1.2
netapp_vserver=
netapp_server_port=80
nfs_shares_config=/etc/cinder/shares.conf
netapp_sa_password=
thres_avl_size_perc_start=20
netapp_storage_pools=
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_protocol=nfs
netapp_storage_family=ontap_7mode
volume_backend_name=cinder_netapp
thres_avl_size_perc_stop=60
netapp_vfiler=
netapp_volume_list=
netapp_server_hostname=10.1.0.25
expiry_thres_minutes=2160
netapp_transport_type=http
netapp_password=password
netapp_webservice_path=/devmgr/v2
netapp_copyoffload_tool_path=

- shares.conf
10.250.3.253:/vol/openstack

# df -h
Filesystem                   Size  Used Avail Use% Mounted on
udev                          16G   12K   16G   1% /dev
tmpfs                        3.2G  5.4M  3.2G   1% /run
/dev/dm-1                     27G  2.1G   24G   9% /
none                         4.0K     0  4.0K   0% /sys/fs/cgroup
none                         5.0M     0  5.0M   0% /run/lock
none                          16G     0   16G   0% /run/shm
none                         100M     0  100M   0% /run/user
/dev/sda3                    196M   44M  143M  24% /boot
/dev/mapper/vm-nova           91G   33M   91G   1% /var/lib/nova
10.250.3.253:/vol/openstack ...
2016-01-28 17:59:18 -0500 commented question cinder nfs hypervisor tab doesnt show nfs share disk space on horizon

I have the same problem!

2016-01-28 17:58:42 -0500 received badge  Critic (source)
2016-01-19 10:58:04 -0500 received badge  Enthusiast
2016-01-16 13:18:03 -0500 received badge  Popular Question (source)
2016-01-14 19:16:28 -0500 asked a question Fuel upgrade from 6.1 to 7.0 failed

I tried to upgrade my production environment from Fuel 6.1 to 7.0, but I got an error:

"Failed to call refresh: ceilometer-dbsync --config-file=/etc/ceilometer/ceilometer.conf returned 1 instead of one of [0]"

I followed this procedure: https://docs.mirantis.com/openstack/f... The error happens at step 4:

octane upgrade-node --isolated $SEED_ID <id>

I pasted the full log output in http://paste.openstack.org/show/483951/

My original environment is online, but one controller was removed. How can I resolve this problem or roll back the controller?
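
For what it's worth, re-running the failing step by hand on the affected controller may show the underlying error (a diagnostic sketch only; the log location depends on how ceilometer logging is configured by Fuel):

    # re-run the command that octane reports as failing
    ceilometer-dbsync --config-file=/etc/ceilometer/ceilometer.conf
    echo $?

    # then inspect the ceilometer logs (or syslog, if use_syslog is set)
    tail -n 100 /var/log/ceilometer/*.log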

2015-11-28 04:18:36 -0500 received badge  Famous Question (source)
2015-11-18 10:01:21 -0500 received badge  Taxonomist
2015-08-11 10:14:32 -0500 commented answer Radosgw s3 access

But the radosgw is configured to use Keystone as its backend; do I need to create a separate user? For example, Swift uses the Keystone backend too and I can connect with my credentials.

2015-08-11 10:10:24 -0500 received badge  Notable Question (source)
2015-08-09 01:17:06 -0500 received badge  Popular Question (source)
2015-08-06 15:14:29 -0500 asked a question Radosgw s3 access

I deployed my environment with Fuel 6.1 successfully, using Ceph. Now I am trying to access the object storage.

I can connect successfully using the "CloudBerry Explorer for OpenStack Storage" client (Swift API).

But I cannot connect using "CloudBerry Explorer for Amazon S3" (S3 API) with the same username and password. My config in the client is:

    Service Point: MYPUBLICIP:8080
    Access Key: myusername
    Secret Key: mypassword

Do I need additional configuration?
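
In case it is relevant, the S3 API usually expects EC2-style access/secret keys rather than the Keystone username and password; on a Kilo-era client they can be generated like this (a sketch, assuming radosgw has `rgw s3 auth use keystone` enabled in ceph.conf):

    # create EC2-style credentials for the current Keystone user/tenant
    keystone ec2-credentials-create

    # list them; use the "access" value as Access Key and "secret" as Secret Key
    keystone ec2-credentials-list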

2015-07-20 01:52:19 -0500 received badge  Good Question (source)
2015-07-19 00:25:14 -0500 received badge  Nice Question (source)
2015-07-19 00:24:52 -0500 received badge  Famous Question (source)
2015-07-17 10:32:53 -0500 received badge  Student (source)
2015-07-17 02:48:59 -0500 received badge  Notable Question (source)
2015-07-16 21:37:20 -0500 received badge  Popular Question (source)
2015-07-16 16:32:37 -0500 commented question How to restrict bandwidth for instances

Yes, they are enabled. The other limits (vCPU count, number of volumes) are OK!

2015-07-16 11:41:16 -0500 asked a question How to restrict bandwidth for instances

Hello there!

I deployed my first OpenStack environment using Fuel 6.1 with:

  • 3x controllers
  • 1x mongodb
  • 2x computes
  • 2x ceph storages

Now I need to control the bandwidth of the instances (VMs). I created a new flavor and defined the vif_* parameters, for example:

    nova flavor-key silver_1  set quota:vif_inbound_average=1024
    nova flavor-key silver_1  set quota:vif_outbound_average=1024

and I launched a new instance using this flavor, but the bandwidth control doesn't work. Is this the correct way to limit bandwidth?
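
For reference, these are the related peak/burst keys and one way to check whether libvirt actually added a <bandwidth> element to the instance's interface (a sketch; instance-00000042 is a placeholder libvirt domain name and the values are only examples):

    # optional peak (kB/s) and burst (kB) limits on the same flavor
    nova flavor-key silver_1 set quota:vif_inbound_peak=2048 quota:vif_inbound_burst=1024
    nova flavor-key silver_1 set quota:vif_outbound_peak=2048 quota:vif_outbound_burst=1024

    # verify on the compute node that the limit reached libvirt
    virsh dumpxml instance-00000042 | grep -A4 "<bandwidth"

Note that the limits only apply to instances launched after the extra specs are set, and only for VIF types that libvirt can shape.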