
mike11's profile - activity

2019-01-30 06:17:07 -0500 received badge  Notable Question (source)
2019-01-26 01:52:13 -0500 received badge  Popular Question (source)
2018-12-11 06:59:47 -0500 commented question cloud-init user-data flag

The instance is on an all-in-one deployment, so the network is local. My metadata agent is active and running.

2018-12-11 05:30:59 -0500 commented question cloud-init user-data flag

Hi Bernd, thank you for answering :) I updated the post above. The error I get is "Failed to connect to 169.254.169.254 port 80: No route to host". The full log is in the pastebin: https://pastebin.com/jyq0gRE1

Thank you

2018-12-11 03:41:49 -0500 asked a question cloud-init user-data flag

Hi All,

Has anyone used the nova boot command with the --user-data flag? When I boot an image with user-data, the instance boots up properly, but none of the input from the user-data is transmitted to the actual instance. When I boot the VM I do see a user-data hash, as in the image below: image description

When I select config drive, the instance does get the user-data and modifies the VM. I installed neutron-dhcp-agent and neutron-metadata-agent; however, the only thing I see in the instance's cloud-init logs is that it is not able to find a datasource. I added the following to nova.conf:

metadata_listen = controller
metadata_host = controller
verbose = True
ec2_private_dns_show_ip = True
enabled_apis = ec2,osapi_compute,metadata
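If I read the install guide correctly, nova also needs the metadata proxy wired to neutron with a shared secret on both sides; I am not sure my setup has this part right (the secret value below is a placeholder):

[neutron]
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

with the same metadata_proxy_shared_secret value in metadata_agent.ini.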

and in /etc/neutron/dhcp_agent.ini:

interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_metadata_network = True
dhcp_broadcast_reply = True

and in the metadata_agent.ini we have:

auth_region = RegionOne
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = password
nova_metadata_ip = controller
nova_metadata_port = 8775
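For debugging, these are the checks I run on the controller (router and network IDs are placeholders for my own values):

ip netns list
# a qrouter-<router-id> and/or qdhcp-<network-id> namespace should exist for my network;
# with enable_isolated_metadata, the DHCP namespace should carry 169.254.169.254 itself:
ip netns exec qdhcp-<network-id> ip addr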

I tried multiple examples of user data; the one I am currently trying is below:

#cloud-config
users:
  - name: fred  # UNIX username
    sudo: ['ALL=(ALL) NOPASSWD:ALL']  # set up sudo privileges if required
  - name: wilma
    ssh-authorized-keys:  # insert SSH keys to enable passwordless access
      - [insert key]
    lock_passwd: false  # needed so the plain-text password below actually works
    plain_text_passwd: 'bedrock'  # provide a password; very insecure, use SSH keys where possible
write_files:
  - path: /home/README2018.md
    content: |
      ==== Fred instructions ====
      1. Write some instructions!
      ===========================

The full log can be viewed in the pastebin: https://pastebin.com/jyq0gRE1
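For completeness, this is how I boot the instance and how I check the metadata service from inside it (flavor, image, and net-id are placeholders for my own values):

nova boot --flavor m1.small --image ubuntu \
  --nic net-id=<private-net-id> \
  --user-data ./user-data.yaml test-vm

# from inside the instance; this is the call that fails with "No route to host":
curl http://169.254.169.254/openstack/latest/user_data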

2018-10-15 08:36:43 -0500 received badge  Organizer (source)
2018-10-15 08:11:44 -0500 asked a question Unable to use magnum cluster-create

Hi Stackers,

Following the procedure to use containers in OpenStack with Magnum, I get stuck at the step of creating the cluster. It seems like there is some issue with the certificates or RabbitMQ; part of the magnum-conductor log is below. Every time I use cluster-create I get CREATE_FAILED. image description

My config file is:

[DEFAULT]

pybasedir = /usr/lib/python2.7/dist-packages/magnum
bindir = /usr/bin
state_path = /var/lib/magnum
# Path to the rootwrap configuration file to use for running commands
# as root. (string value)
#rootwrap_config = /etc/magnum/rootwrap.conf

debug = true
verbose = true
publish_errors = true

transport_url = rabbit://openstack:mypassword@controller
rpc_backend = rabbit

[api]

# The listen IP for the Magnum API server. (IP address value)
host = 0.0.0.0
[certificates]

cert_manager_type = local

[cors]

allowed_origin = controller

[database]

connection = mysql+pymysql://magnum:mypassword@controller/magnum

[keystone_authtoken]

auth_uri = http://controller:5000/v3
auth_url = http://controller:35357
auth_version = v3
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = magnum
password = mypassword
region_name = RegionOne
memcached_servers = controller:11211
admin_user = magnum
admin_password = mypassword
admin_tenant_name = service


[oslo_concurrency]
lock_path = /var/lock/magnum


[oslo_messaging_amqp]

trace = true
username = magnum
password = mypassword

[oslo_messaging_notifications]

driver = messaging

[oslo_messaging_rabbit]

rabbit_host = controller
rabbit_userid = openstack
rabbit_password = mypassword

[trust]

trustee_domain_name = magnum
trustee_domain_admin_name = magnum_domain_admin
trustee_domain_admin_password = mypassword
trustee_keystone_interface = KEYSTONE_INTERFACE

The logs from magnum-conductor are:

DEBUG magnum.conductor.handlers.common.cert_manager [req-73 admin admin - - -] CA cert is created: 3d4d6f9
DEBUG magnum.conductor.handlers.common.cert_manager [req-73 admin admin - - -] Magnum client cert is created: fbf7

ERROR oslo_messaging.rpc.server [req-73 admin admin - - -] Exception during message handling
ERROR oslo_messaging.rpc.server   File "/...dispatcher.py", line 121, in _do_dispatch
ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
ERROR oslo_messaging.rpc.server   File "/usr...handlers/cluster_conductor.py", line 165, in cluster_create
ERROR oslo_messaging.rpc.server     create_timeout)
ERROR oslo_messaging.rpc.server   File "/usr/.../cluster_conductor.py", line 97, in _create_stack
ERROR oslo_messaging.rpc.server     _extract_template_definition(context, cluster))
ERROR oslo_messaging.rpc.server   File "/usr/.../cluster_conductor.py", line 82, in _extract_template_definition
ERROR oslo_messaging.rpc.server     scale_manager=scale_manager)
ERROR oslo_messaging.rpc.server   File "/usr/.../template_def.py", line 337, in extract_definition
ERROR oslo_messaging.rpc.server     self.get_params(context, cluster_template, cluster, **kwargs),
ERROR oslo_messaging.rpc.server   File "/usr/.../swarm_fedora_atomic_v1/template_def.py", line 95, in get_params
ERROR oslo_messaging.rpc.server     extra_params['discovery_url'] = self.get_discovery_url(cluster)
ERROR oslo_messaging.rpc.server   File "/usr/...on/template_def.py", line ...
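The traceback ends in get_discovery_url; as far as I understand, when no discovery_url is passed Magnum asks the public etcd discovery service (the default etcd_discovery_service_endpoint_format), so as a sanity check I verified the controller can reach it:

curl "https://discovery.etcd.io/new?size=1"   # should print a discovery URL

The cluster-create call itself looks like this (names are placeholders for my template and cluster):

magnum cluster-create --name swarm-cluster --cluster-template swarmtemplate --node-count 1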
2018-07-11 09:21:18 -0500 received badge  Student (source)
2018-06-25 04:09:39 -0500 received badge  Famous Question (source)
2018-06-25 04:09:39 -0500 received badge  Notable Question (source)
2018-06-20 02:31:52 -0500 received badge  Popular Question (source)
2018-06-20 02:30:32 -0500 received badge  Popular Question (source)
2018-06-18 03:24:58 -0500 received badge  Supporter (source)
2018-06-18 03:15:47 -0500 received badge  Editor (source)
2018-06-18 03:12:09 -0500 asked a question Uploading glance image with direct flag

Hi Everyone,

There is a flag in local_settings.py which enables direct upload of images (bypassing the temporary storage of the image). However, when I use the flag I get the error (only in Horizon): TypeError: Cannot read property 'data' of undefined. My setup is an all-in-one. I changed the [cors] section in the glance-api.conf file; however, I still get this error and the image gets stuck in the queued state. Below are the settings I put in [cors].

[cors]
allowed_origin = http://169.19.1.80/horizon,https://localhost.local,http://localhost.local,https://0.0.0.0:0,http://0.0.0.0:0,http://169.19.1.80/dashboard
allow_methods = GET,PUT,POST,DELETE
allow_headers = X-Custom-Header
expose_headers = X-Custom-Header
max_age = 3600

In allowed_origin I tried multiple settings, which is why there are multiple addresses. Source: https://docs.openstack.org/horizon/latest/configuration/settings.html
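For reference, the local_settings.py flag I am referring to (the upload mode option from the Horizon settings documentation linked above):

HORIZON_IMAGES_UPLOAD_MODE = 'direct'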

Thank you for any input, Michael

2018-06-18 03:00:19 -0500 commented answer Glance image upload uses swap

Hi - Yes, that is the first thing that I did, which unfortunately did not help. As I understand it, this is how Linux works, but it would be nice if there were a way to bypass it, since loading large files onto the controller creates huge buffer/cache pressure and services start using swap. Best regards

2018-06-18 03:00:19 -0500 received badge  Commentator
2018-06-18 02:55:55 -0500 received badge  Famous Question (source)
2018-06-10 07:19:07 -0500 received badge  Famous Question (source)
2018-06-07 02:29:51 -0500 received badge  Notable Question (source)
2018-06-06 08:59:22 -0500 received badge  Popular Question (source)
2018-06-06 08:21:44 -0500 commented answer Upload images in small chunks

Thank you for your reply :) My setup is an all-in-one (which is why I am so limited in space), and the backend is sqlalchemy. I haven't tried your solution yet, but is it a CLI-only solution? Is there a way to change the config so I can use Horizon? Thank you

2018-06-05 23:48:58 -0500 received badge  Popular Question (source)
2018-06-05 10:43:31 -0500 asked a question Upload images in small chunks

Hi Everyone,

Is there a way to upload large images in small chunks? My issue is that I have only 1 GB of cache memory, and when I upload large images (around 3.5 GB) Linux starts to use swap, which slows the system dramatically, especially since it is swapping OpenStack services and not only the image.
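As a workaround I can upload via the CLI, which as far as I understand streams the file rather than buffering it whole (image name and path are placeholders), but I would prefer a Horizon-side fix:

openstack image create "my-image" --disk-format qcow2 --container-format bare --file /mnt/images/my-image.qcow2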

Thank you for any input, Michael

2018-06-04 08:57:08 -0500 asked a question Glance image upload uses swap

Hi Everyone,

When uploading images to the controller, the controller starts to swap, which slows my system down. It looks like the buffer/cache fills up, and then instead of evicting the cache and writing there, Linux continues to load the image and writes it into swap. I am looking for a setting, if one exists, rather than constantly dropping the cache.
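The only generic knob I have found so far is kernel-side rather than OpenStack-side; it lowers the tendency to swap but does not stop the cache from growing:

sysctl vm.swappiness=10   # default is usually 60; lower values make the kernel prefer dropping cache over swapping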

Thank you, Michael

2018-06-04 08:50:30 -0500 commented answer Glance change the default tmp download directory

This actually worked! My problem was the path /mnt/my_tmp, which may not have existed in the file system; I guess there was an issue with that. Thank you!

2018-06-04 08:49:40 -0500 answered a question Glance change the default tmp download directory

This actually worked! My problem was the path /mnt/my_tmp, which may not have existed in the file system; I guess there was an issue with that. Thank you!

2018-06-04 06:00:18 -0500 commented answer Glance change the default tmp download directory

Hi, thank you for your answer! I tried this, but what happens is that it loads the image into swap and then errors out at the end, which is strange. I also added the line FILE_UPLOAD_MAX_MEMORY_SIZE = '1', but it didn't help. Do you see anything I am missing? Thanks,

2018-06-04 02:34:12 -0500 asked a question Glance change the default tmp download directory

Hi All,

The setup I have has very limited hard drive space on the controller, and when I upload a large image (through Horizon), Horizon keeps the image in /tmp. This causes my disk to fill up (with 3.5 GB images). The best solution for me would be to change the default /tmp download directory to a different directory, for example /mnt/tmp.
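What I am experimenting with so far (a sketch: FILE_UPLOAD_TEMP_DIR is Django's upload-buffer setting, and the web server user depends on the distribution):

# in /etc/openstack-dashboard/local_settings.py:
FILE_UPLOAD_TEMP_DIR = '/mnt/tmp'

# and on the controller (assuming Apache runs as www-data, as on Ubuntu):
mkdir -p /mnt/tmp
chown www-data /mnt/tmp
systemctl restart apache2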

Thank you so much for any help. Michael

2018-05-15 02:51:00 -0500 commented answer Cinder volume as file system

That is exactly what I am doing; Cinder is on my compute node. What I am trying to do is use some NFS with the server as localhost.

2018-05-14 08:07:32 -0500 received badge  Famous Question (source)
2018-05-14 07:33:14 -0500 commented answer Cinder volume as file system

Hi cinderela, this is the local compute node that the VM is running on, and I cannot connect an external array to it.

2018-05-14 02:52:19 -0500 commented answer Cinder volume as file system

As for why we would prefer not to have another block storage device: we are short on space and cannot allocate all of the potential space for the volume up front. I would like to allocate the space dynamically. Does anyone have any other ideas?

2018-05-14 02:52:05 -0500 commented answer Cinder volume as file system

Thanks Bernd for your answer! The Manila solution is something I was looking at as well, but the issue there is that I do not need to share the files, which is why I think it would not fit for security reasons, and it would also use more resources than needed (an additional service).

2018-05-14 02:51:00 -0500 answered a question Cinder volume as file system

Thanks Bernd for your answer! The Manila solution is something I was looking at as well, but the issue there is that I do not need to share the files, which is why I think it would not fit for security reasons, and it would also use more resources than needed (an additional service). As for why we would prefer not to have another block storage device: we already have too many mounted devices. Does anyone have any other ideas?

Thanks, Michael

2018-05-13 23:04:03 -0500 received badge  Notable Question (source)
2018-05-13 16:48:32 -0500 received badge  Popular Question (source)
2018-05-09 02:16:31 -0500 asked a question Cinder volume as file system

Hi Everyone,

Is there a way to create a volume with Cinder that is not backed by an actual disk? I hope this is clear: what I would like to do is write data dynamically to a local file which is then exposed as a persistent drive. For example, with NFS we write to a local file which is then written to the remote FS; here I would like to write to a local file and never send it over the network. I was thinking of using SambaFS, for example, but I could not find a proper procedure or where to actually download the driver, and I am also not sure it would work with SambaFS. I would appreciate any input on this.
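One idea I am looking at is backing the Cinder LVM volume group with a sparse file instead of a real disk, roughly like this (sizes, paths, and the loop device number are illustrative):

truncate -s 20G /var/lib/cinder/cinder-volumes.img   # sparse file, grows only as data is written
losetup /dev/loop0 /var/lib/cinder/cinder-volumes.img
pvcreate /dev/loop0
vgcreate cinder-volumes /dev/loop0                   # name must match volume_group in cinder.conf

but I do not know whether this is considered safe for anything beyond testing.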

Thank you kindly, Michael

2018-04-16 02:22:10 -0500 received badge  Notable Question (source)
2018-04-15 04:39:41 -0500 received badge  Popular Question (source)
2018-04-08 07:32:45 -0500 asked a question how to set mrg_rxbuf=off

Hi All,

Does anyone know how to set mrg_rxbuf=off for an instance using Horizon?

Thank you, Michael