DanielJ's profile - activity

2018-08-25 16:12:49 -0500 received badge  Famous Question (source)
2018-02-09 01:47:44 -0500 received badge  Student (source)
2016-03-15 16:05:14 -0500 received badge  Famous Question (source)
2016-02-08 10:26:39 -0500 received badge  Notable Question (source)
2016-02-08 10:26:39 -0500 received badge  Popular Question (source)
2016-01-20 04:02:38 -0500 answered a question Unable to detach volume from instance during its termination process

The problem was that the Ceph configuration file did not contain the authentication token information on two of the three storage nodes. After adding the missing information and restarting the cinder-volume service, everything worked fine again.
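
For reference, a rough sketch of the kind of entries that were missing in /etc/ceph/storage.conf on the affected nodes (the client name and keyring path are placeholders, not my exact values):

[client.cinder]
keyring = <path_to_cinder_keyring>

Afterwards, the volume service was restarted on those nodes:

service cinder-volume restart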

2016-01-20 03:58:19 -0500 received badge  Famous Question (source)
2016-01-20 03:58:11 -0500 answered a question Best way to delete VM from database if it is already deleted from computing node

The reason I could not terminate the instance was that Cinder could not detach the volume from the VM (see https://ask.openstack.org/en/question/87681/unable-to-detach-volume-from-instance-during-its-termination-process/ (Unable to detach volume from instance during its termination process)). After solving that issue, the instances could be terminated normally.

2016-01-19 09:15:19 -0500 asked a question Unable to detach volume from instance during its termination process

Hello all,

When I want to terminate or resize an instance that has already been running for several months, the connection to the volume cannot be terminated and the termination of the instance fails. As a result, the instance is deleted from the compute node and its tap device and bridges are removed; its volume remains in the storage backend (as it should), but the instance stays in the database in ERROR state with the volume still attached to it (see https://ask.openstack.org/en/question/87364/best-way-to-delete-vm-from-database-if-it-is-already-deleted-from-computing-node/ (Best way to delete VM from database if it is already deleted from computing node)). Do you have any idea why terminating the connection fails? Do you have any suggestions on what I can do so that the other "old" instances can be terminated successfully?

However, if I create a new instance, its termination succeeds, even if it runs on the same compute node as the failed instances.

The cloud is set up as follows: one controller node runs cinder-api, cinder-scheduler and the corresponding nova services. The three storage nodes run a Ceph cluster, and each of them also runs cinder-volume. The compute nodes run nova-compute. The instances are created such that the compute nodes run the instances while their volumes are stored only in Ceph.
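
In case it helps, these are the kinds of checks I can run and post output for (the volume ID is a placeholder):

cinder service-list                                  # are all three cinder-volume services reported as up?
cinder show <volume_id>                              # current status and attach state of the affected volume
grep -i detach /var/log/cinder/cinder-volume.log     # detach errors on the storage nodes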

If it would help, I can provide the entries from the nova-api.log of the compute node on which the instance whose termination failed was located.

Regards,

Daniel

2016-01-19 05:11:18 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node
  1. nova.instance_system_metadata: I could observe a new row with key clean_attempts. Would a manually inserted row prevent the system from cleaning up the database?
  2. nova.instance_actions, column request_id: Can I safely generate a random request ID?
2016-01-19 05:09:52 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node

For most of the required changes, I know which values to use. But for some of them, I am not sure what to do:

  1. nova.reservations, column uuid: Which uuid should I use?
2016-01-19 05:04:02 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node

Then I had a look at the entries of the VM in ERROR state. I figured out that the changes in the neutron database have already been performed, but the changes in the cinder and nova databases still have to be made.

2016-01-19 04:59:10 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node
  • nova.instance_extra
  • nova.instance_info_caches
  • nova.instance_system_metadata
  • nova.quota_usages
  • nova.reservations
2016-01-19 04:56:57 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node

(continued) I could observe changes in the following tables:

  • cinder.volumes
  • cinder.volume_admin_metadata
  • neutron.floatingips
  • neutron.ipallocations
  • neutron.ports
  • nova.instances
  • nova.block_device_mapping
  • nova.instance_actions
  • nova.instance_actions_events
2016-01-19 04:54:01 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node

To understand what happens in the database, I created a new VM with a floating IP and stored all database entries related to it. Then I terminated the instance and inspected the changes. (to be continued)
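
A sketch of how such before/after snapshots can be taken (database and table names as in the lists above; credentials omitted):

mysqldump nova instances block_device_mapping instance_actions instance_actions_events > nova_before.sql
mysqldump cinder volumes volume_admin_metadata > cinder_before.sql
mysqldump neutron ports floatingips ipallocations > neutron_before.sql
# terminate the test VM, dump the same tables again, then diff the before/after files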

2016-01-19 04:48:44 -0500 commented question Best way to delete VM from database if it is already deleted from computing node

(continued) Then I reset its state to active with nova. But when I tried to terminate it again, the VM was deleted from the compute node, while in nova it returned to the ERROR state.

2016-01-19 04:46:20 -0500 commented question Best way to delete VM from database if it is already deleted from computing node

I have tried to manually recreate the VM. To do so, I used virsh dumpxml on a different instance on the same compute node, adapted its configuration to the values of the VM in ERROR state, and commented out the network interface. Then I started the instance. (to be continued)
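
Roughly the commands used (file and instance names are placeholders):

virsh dumpxml <existing_instance> > recreated.xml
# edit recreated.xml: set name/uuid to the values of the VM in ERROR state, comment out the <interface> section
virsh define recreated.xml
virsh start <recreated_instance>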

2016-01-19 04:46:20 -0500 received badge  Commentator
2016-01-13 09:47:43 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node

If I mark the instance as deleted in the instances table, will OpenStack automatically, e.g., detach the volume from the machine, free the IP addresses, mark the port of the subnet as deleted, and so on?

2016-01-13 05:43:54 -0500 received badge  Notable Question (source)
2016-01-13 03:13:21 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node

Since all the solutions I could find are quite outdated or not for Juno, I am a bit afraid of causing an inconsistency in the database. That is why I hoped to get an already tested solution.

2016-01-13 03:07:34 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node

I would prefer marking the instance as deleted instead of actually deleting it from the database. I also found https://stackoverflow.com/questions/22194965/openstack-can-not-delete-instance-from-dashboard (a description of marking an instance as deleted, for Icehouse).
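
The kind of update described there looks roughly like this (a sketch only, not verified for Juno; credentials omitted, the UUID is a placeholder, and I would back up the database first):

mysql nova -e "UPDATE instances SET deleted = id, deleted_at = NOW(), vm_state = 'deleted', task_state = NULL WHERE uuid = '<instance_uuid>';"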

2016-01-13 02:57:40 -0500 commented answer Best way to delete VM from database if it is already deleted from computing node

I have already tried that. I could reset it to the active state, but when I try to delete it either way, it returns to the ERROR state.

2016-01-13 02:56:15 -0500 commented question Best way to delete VM from database if it is already deleted from computing node

Yes, the Horizon dashboard shows the VMs, and it also shows that they are in ERROR state.

2016-01-12 22:56:17 -0500 received badge  Popular Question (source)
2016-01-12 09:03:10 -0500 asked a question Best way to delete VM from database if it is already deleted from computing node

Hello all,

I have some old virtual machines (VMs) that are in an ERROR state because some error occurred during their deletion. If I try to delete such a machine, it stays in ERROR state. When I had a look at the compute node on which it should be running, I saw that it had already been deleted from libvirt. Thus, only the entry in the MySQL database remains.

Deleting this entry via the OpenStack Dashboard, nova delete or nova force-delete results in an error. What is the best way to remove this entry from the database? (I use OpenStack Juno.)

  1. Manually create a VM that matches the data in the database and then try to delete it with nova.
  2. Change the database manually.

Can you provide me with the required commands for either of these alternatives, or a better solution?

Just a remark: if I create new VMs now, they are deleted without any errors.

Regards,

Daniel

2016-01-08 08:57:05 -0500 received badge  Notable Question (source)
2016-01-07 06:17:20 -0500 answered a question Use floating IP of instances in outgoing packages

It seems that the given iptables rule was manually created by a former administrator. With the command

iptables -L -t nat --line-numbers

you can see the line number of the MASQUERADE rule. With this number you can execute

iptables -t nat -D POSTROUTING <line_number>

to remove this rule. Thereafter, the packets of outgoing connections from VMs keep their floating IPs as source address.
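
To verify, the rule should no longer show up in

iptables -t nat -L POSTROUTING --line-numbers

and an outgoing connection from a VM should now appear to external hosts with the VM's floating IP as source.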

2016-01-07 03:03:40 -0500 received badge  Popular Question (source)
2016-01-05 06:52:41 -0500 asked a question Use floating IP of instances in outgoing packages

Hi all,

In our current cloud setup, we have a separate controller node that also serves as the network node. The setup basically follows the official installation guide for OpenStack Juno with Open vSwitch, L3, ML2, etc. The virtual machines (instances) run on compute nodes with GRE tunnels to the network node.

If a TCP connection is established from somewhere on the internet to a VM, its floating IP is used in the IP headers of the packets. But when the VM initiates a TCP connection, the IP of the network node is used in the IP headers instead.

Is it possible to always use the floating IP of the VM? If so, how can I configure the services accordingly?

Update:

If I run iptables-save on the network node, I see the following NAT rule:

-A POSTROUTING -o external-network-interface -j MASQUERADE

If I understand this rule correctly, the source addresses of all packets leaving the node via the external network interface are replaced by the IP of the network node. How can this individual rule be removed without changing any other rule?

How can I configure neutron or any other service so that this rule no longer appears?
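
One way I could try to tell whether neutron created this rule (an assumption on my part): NAT rules managed by the L3 agent live in neutron-* chains (and, with namespaces, inside the qrouter namespaces), so a plain MASQUERADE rule directly in the POSTROUTING chain of the root namespace was probably added by hand.

iptables-save -t nat | grep -E 'MASQUERADE|neutron'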

Regards,

Daniel

2015-11-19 02:02:02 -0500 commented answer Why do two cinder-scheduler_fanout queues have no consumer?

The parameters of these queues are: arguments: x-ha-policy: all, auto-delete: true. It seems that they are not durable. A re-installation is not an option since the cloud is already in use. My current workaround is a cron job that purges these queues every week.
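
The purge itself is roughly the following (the queue name is a placeholder; this assumes the RabbitMQ management plugin and rabbitmqadmin are available):

rabbitmqadmin purge queue name=<orphaned_fanout_queue>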

2015-11-05 13:40:37 -0500 received badge  Famous Question (source)
2015-11-05 13:40:37 -0500 received badge  Notable Question (source)
2015-11-05 13:40:37 -0500 received badge  Popular Question (source)
2015-09-30 09:46:56 -0500 commented answer Why do two cinder-scheduler_fanout queues have no consumer?

But why do queues like cinder-scheduler occur only once? How can I check whether a queue is set to durable?
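
For the second point, the queue properties can be listed directly, e.g.:

rabbitmqctl list_queues name durable auto_delete consumers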

2015-09-30 09:40:22 -0500 commented answer Why do two cinder-scheduler_fanout queues have no consumer?

During the OpenStack setup, the controller node running Cinder changed several times without reinstalling RabbitMQ, so these queues without consumers might have survived from a previous setup. In that case, would simply deleting these queues be a solution?

2015-09-24 09:35:56 -0500 asked a question Why do two cinder-scheduler_fanout queues have no consumer?

Hi everyone,

When I had a look at the RabbitMQ queues on my controller nodes, I was surprised to see three cinder-scheduler_fanout queues. Two of them have no consumer, leading to continuously growing queue sizes. Each of these queues already requires more than 1 GB of memory. I have manually purged them, but new messages keep arriving.

The third queue receives messages too, but they are consumed immediately.

To keep the queues from consuming all memory, I tried to identify which service is sending the messages to these queues, but I had no luck. Does anyone have an idea which service sends messages to these queues and how I can reconfigure it so that these queues are no longer used?
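
For reference, the queues and their consumer counts can be listed with something like:

rabbitmqctl list_queues name messages consumers | grep cinder-scheduler_fanout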

Cloud Setup

  • OpenStack Juno on Ubuntu 14.04
  • 3 controller nodes
    • each running RabbitMQ
    • one controller running cinder-scheduler and cinder-api
  • 3 storage nodes
    • running a Ceph cluster as the storage backend
    • each running a Ceph monitor and a cinder-volume service

Example Message Sent to Queues

{
"oslo.message": "{
    \"_context_domain\":            null, 
    \"_context_request_id\":        \"req-4b04bf3f-5efc-4cf3-84e2-8c594fa027a6\", 
    \"_context_quota_class\":       null, 
    \"_context_service_catalog\":   [], 
    \"_context_auth_token\":        null, 
    \"_context_user_id\":           null, 
    \"_context_is_admin\":      true, 
    \"version\":                \"1.0\", 
    \"_context_timestamp\":     \"2015-03-04T13:21:42.034252\", 
    \"_context_project_domain\":    null, 
    \"_context_user\":          null, 
    \"method\":             \"update_service_capabilities\", 
    \"_context_remote_address\":    null, 
    \"_context_roles\":         [\"admin\"], 
    \"args\":                   {
        \"service_name\":   \"volume\", 
        \"host\":           \"storage3@rbd-storage\", 
        \"capabilities\":       {
            \"volume_backend_name\":    \"RBD_STORAGE\", 
            \"free_capacity_gb\":       239266, 
            \"driver_version\":     \"1.1.0\", 
            \"total_capacity_gb\":      244822, 
            \"reserved_percentage\":    0, 
            \"vendor_name\":        \"Open Source\", 
            \"storage_protocol\":       \"ceph\"
        }
    },
    \"_unique_id\":             \"014c9c97b86b4c6b9eb8f7207af1ba5b\", 
    \"_context_project_name\":      null, 
    \"_context_read_deleted\":      \"no\", 
    \"_context_user_identity\":     \"- - - - -\", 
    \"_context_tenant\":            null, 
    \"_context_project_id\":        null, 
    \"_context_user_domain\":       null
}", 
"oslo.version":                 "2.0"
}

cinder.conf

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
volume_name_template = volume-%s
#verbose = True
#debug = true
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder

rpc_backend = rabbit
rabbit_hosts = <rabbit1>,<rabbit2>,<rabbit3>
rabbit_password = <rabbit_password>
rabbit_userid = <rabbit_user>
rabbit_virtual_host = /
rabbit_durable_queues = False
rabbit_ha_queues = true

auth_strategy = keystone

my_ip = <myIP>
glance_host = <glanceHostIP>
glance_api_version = 2

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/storage.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups_storage
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
# the topic volume backup nodes listen on (string value)
backup_topic=cinder-backup


enabled_backends=rbd-storage,rbd-compute
default_volume_type=storage

[rbd-storage]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes_storage
volume_backend_name=RBD_STORAGE
rbd_ceph_conf = /etc/ceph/storage.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = <user>
rbd_secret_uuid = <uuid>

[rbd-compute]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes_compute
volume_backend_name=RBD_COMPUTE
rbd_ceph_conf = /etc/ceph/storage.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = <user>
rbd_secret_uuid = <uuid>

[keystone_authtoken]
auth_uri = <keystone>/v2.0
identity_uri = <keystoneIdentit>
admin_tenant_name = <tenantName>
admin_user = <userName>
admin_password = <userPassword>

[database]
connection = <databaseConnection>

ceph/storage.conf

[global]
fsid = <fsid>
mon_initial_members = storage1, storage2, storage3
mon_host = <host1>,<host2>,<host3>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = <networkIP>
cluster_network = <networkIP>
osd_journal_size = 40000
mon_clock_drift_allowed = 1

[client.cinder]
keyring = <keyring>

[client.glance]
keyring = <keyring>

[client.cinder-backup]
keyring = <keyring>
2015-08-07 06:45:54 -0500 commented answer Change keystone token backend from SQL to memcached

Based on my understanding of the official documentation, the [cache] section configures keystone to merely cache the most recent tokens in memcached while all tokens remain in the database. Is this correct? I would prefer replacing the database with memcached.

2015-08-05 13:31:51 -0500 received badge  Famous Question (source)
2015-08-04 12:41:36 -0500 received badge  Popular Question (source)
2015-08-04 12:41:36 -0500 received badge  Notable Question (source)
2015-05-28 11:19:26 -0500 asked a question Change keystone token backend from SQL to memcached

I have installed OpenStack Juno on Ubuntu. So far, keystone has stored its tokens in an SQL database. Now I want to use memcached to store the tokens. Therefore, I installed the packages memcached and python-memcache, updated the keystone configuration file as shown below, and restarted the keystone service. But since then, keystone only logs the following messages:

WARNING keystone.middleware.core [-] RBAC: Invalid token
WARNING keystone.common.wsgi [-] The request you have made requires authentication.

Even after waiting for 15 minutes, these warnings are still being logged. How can I configure keystone correctly to use memcached instead of SQL?

I have now changed the configuration back to SQL by (un)commenting the appropriate driver lines in the configuration file and restarting keystone, but I still get these warnings.

Does it simply require more time until all OpenStack services have received new tokens?

Here is the keystone.conf file (some sensitive information is replaced by <> tags):

[DEFAULT]
admin_token=<admin_token>
log_dir=/var/log/keystone
rabbit_hosts=<rabbit_server1>,<rabbit_server2>,<rabbit_server3>
rabbit_userid=<rabbit_user>
rabbit_password=<rabbit_password>
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_durable_queues = false
rabbit_ha_queues=true

[catalog]
driver=keystone.catalog.backends.sql.Catalog

[database]
connection = <database_connection>
use_db_reconnect = True

[token]
provider = keystone.token.providers.uuid.Provider
#driver = keystone.token.persistence.backends.sql.Token
driver = keystone.token.persistence.backends.memcache.Token

[extra_headers]
Distribution = Ubuntu
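
One more thing I still have to double-check (an assumption on my side, not yet verified): if I understand the documentation correctly, the memcache token backend also needs to know where memcached runs, which is configured in the [memcache] section, e.g.:

[memcache]
servers = localhost:11211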