
Franjo's profile - activity

2020-07-14 11:04:09 -0500 received badge  Notable Question (source)
2020-05-14 09:35:13 -0500 answered a question HEAT stack stuck on "Create In Progress"

Right, so my problem wasn't with the Heat config. It was Keystone. Specifically, the line below in /etc/keystone/keystone.conf:

public_endpoint=http://127.0.0.1:5000

Heat sends this to the VMs as the auth URL, and since the VMs can't reach that address, the whole stack hangs. I changed it to the controller IP address that the VMs can reach:

public_endpoint=http://192.168.5.100:5000

After the change, everything worked.
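
In case it helps anyone else hitting this, a minimal sketch of the change and the restart that goes with it (this assumes Keystone runs under httpd, as in a typical packstack/RDO install; adjust the service name if your deployment differs):

# /etc/keystone/keystone.conf on the controller
[DEFAULT]
public_endpoint=http://192.168.5.100:5000

# restart whatever serves the Keystone API so the new value is picked up
systemctl restart httpd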

2020-05-14 07:17:39 -0500 received badge  Popular Question (source)
2020-05-13 19:20:54 -0500 received badge  Famous Question (source)
2020-04-30 11:55:33 -0500 received badge  Notable Question (source)
2020-04-30 11:55:33 -0500 received badge  Popular Question (source)
2020-04-10 05:12:28 -0500 asked a question HEAT stack stuck on "Create In Progress"

Hi, I'm trying to deploy a Kubernetes cluster via Magnum. I create the cluster as follows:

openstack coe cluster template create clustertemplate3 \
--coe kubernetes \
--server-type vm \
--image Fedora-Atomic-27-20180419.0.x86_64 \
--external-network provider \
--network-driver calico \
--volume-driver cinder \
--docker-volume-size 1 \
--docker-storage-driver cinder \
--dns-nameserver 8.8.8.8 \
--flavor m1.small \
--registry-enabled \
--labels cloud_provider_enabled=true

openstack coe cluster create \
--cluster-template clustertemplate3 \
--docker-volume-size 3 \
--keypair keypair1 \
--master-count 1 \
--node-count 1 \
--master-flavor m1.small \
--flavor m1.small \
--floating-ip-enabled \
cluster

The cluster status hangs at "CREATE_IN_PROGRESS" and eventually times out. The issue is with creating the kube-master: the VM itself gets created fine, but I guess provisioning fails. I'm using Fedora-Atomic-27-20180419.0.x86_64.qcow2 for the image, because I had less luck with newer ones. Here is the full "journalctl" output of the kube-master VM: https://pastebin.com/VRkvPvA0

I'm guessing the issue lies with:

Apr 10 09:18:02 cluster-kxo37xfwor3i-master-0.novalocal runc[1507]: Authorization failed: Unable to establish connection to http://127.0.0.1:5000/v3/auth/tokens
Apr 10 09:18:02 cluster-kxo37xfwor3i-master-0.novalocal runc[1507]: Source [heat] Unavailable.
Apr 10 09:18:02 cluster-kxo37xfwor3i-master-0.novalocal runc[1507]: /var/lib/os-collect-config/local-data not found. Skipping

...but 127.0.0.1:5000 isn't the IP I set in heat.conf. If I curl the controller IP address, I can see that it is reachable from the VM:

[root@cluster-kxo37xfwor3i-master-0 ~]# curl http://192.168.5.100:5000/v3/auth/tokens
{"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}

In the VM,

/var/lib/os-collect-config/heat_local.json

and

/etc/os-collect-config.conf

both point to the correct IP of the OpenStack controller node.

[root@cluster-kxo37xfwor3i-master-0 ~]# cat /var/lib/os-collect-config/heat_local.json
{
 "os-collect-config": {
  "heat": {
   "password": "!WSa!NEmYAzVE@YBixypi5eZ3NpR80U8",
   "user_id": "7dba31ab737c4573b1b0fb48f82d9cc3",
   "region_name": null,
   "stack_id": "cluster-kxo37xfwor3i-kube_masters-q67abiuxjita-0-pbgqfoultwol/1335ffa0-d82e-4b1a-9c0b-874a97b8e7ba",
   "resource_name": "kube-master",
   "auth_url": "http://192.168.5.100:5000/v3/",
   "project_id": "0b3a89797867438daea494732195a714"
  },
  "collectors": [
   "ec2",
   "heat",
   "local"
  ]
 },
 "deployments": []
}
[root@cluster-kxo37xfwor3i-master-0 ~]#
[root@cluster-kxo37xfwor3i-master-0 ~]# cat /etc/os-collect-config.conf 
[DEFAULT]
command = os-refresh-config
collectors = ec2
collectors = heat
collectors = local


[heat]
auth_url = http://192.168.5.100:5000/v3/
user_id = 7dba31ab737c4573b1b0fb48f82d9cc3
password = !WSa!NEmYAzVE@YBixypi5eZ3NpR80U8
project_id = 0b3a89797867438daea494732195a714
stack_id = cluster-kxo37xfwor3i-kube_masters-q67abiuxjita-0-pbgqfoultwol/1335ffa0-d82e-4b1a-9c0b-874a97b8e7ba
resource_name = kube-master
region_name =

Here is my heat.conf:

[DEFAULT]
heat_metadata_server_url = http://192.168.5.100:8000
heat_waitcondition_server_url = http://192.168.5.100:8000/v1/waitcondition
stack_user_domain_name = heat
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = PASSWORD
transport_url = rabbit://guest:guest@192.168.5.100
[auth_password]
[clients]
endpoint_type = public
[clients_aodh]
[clients_barbican]
[clients_cinder]
[clients_designate]
[clients_glance]
[clients_heat]
[clients_keystone]
endpoint_type = public
auth_uri = http://192.168.5.100:5000
[clients_magnum]
endpoint_type = public
[clients_manila]
[clients_mistral]
[clients_monasca]
[clients_neutron]
[clients_nova]
[clients_octavia]
[clients_sahara]
[clients_senlin]
[clients_swift]
[clients_trove]
[clients_zaqar]
[cors]
[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@192.168.5.100/heat
[ec2authtoken]
[eventlet_opts]
[healthcheck]
[heat_api]
[heat_api_cfn]
[heat_api_cloudwatch]
[noauth]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[profiler]
[revision]
[ssl]
[keystone_authtoken]
www_authenticate_uri = http://192.168.5.100:5000
auth_url = http://192.168.5.100:5000
memcached_servers = 192.168.5.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = services
username = heat
password = PASSWORD
[trustee]
auth_type = password
auth_url = http://192.168.5.100:5000 ...
2020-04-09 19:20:56 -0500 answered a question neutron warning No controller found for: floatingips

I'm probably two years late, but I just had the same issue on the Train release. I also had router and floating IP errors and couldn't even list them via:

openstack floating ip list
openstack router list

Related error in neutron logs:

3044 WARNING neutron.pecan_wsgi.controllers.root [req-1ccb5282-df34-481a-a1b8-f5a4b6e75c73 51e54f8ec02f41e3903e0f40ced8a6db da668bc102784935b284613d29827e05 - default default] No controller found for: routers - returning response code 404: PecanNotFound

The fix was to enable the router service plugin in neutron.conf:

service_plugins=router,metering

That fixed the router and floating IP 404 messages. Hope this answer helps someone out one day.
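
For context, a minimal sketch of where that line lives and the restart that goes with it (service name assumed for a systemd/RDO-style install; adjust if yours differs):

# /etc/neutron/neutron.conf on the controller
[DEFAULT]
service_plugins = router,metering

# restart the Neutron API so the router plugin gets loaded
systemctl restart neutron-server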

2020-04-09 06:41:10 -0500 asked a question Magnum deploys cluster VMs on controller node instead of compute node

Hi, I'm having trouble deploying a Kubernetes cluster on my two-node setup. I have one controller and one compute node. The controller node has one NIC and the compute node has two, one of which is used for the flat provider network. Spinning up regular KVM VMs works well; they get deployed on the compute node and networking is fine.

I'm trying to deploy a Kubernetes cluster via Magnum, and it fails at the networking step. In the Neutron logs on the controller I can see:

ERROR neutron.plugins.ml2.managers [req-36b007f4-8da0-42b5-9b25-3c4928b32d12 415d47fb69d94d8b8b964d92cdb60e56 db07ee3987614128bc3c93527c04d1a9 - default default] Failed to bind port ed644ee8-76b0-4ff1-80e3-30ff2c7d21c6 on host openstack-control for vnic_type normal using segments [{'network_id': '07a06eaf-223e-4c53-9a69-4164b5d1b3d8', 'segmentation_id': None, 'physical_network': u'provider', 'id': '90260147-33df-4e08-a953-10a1aa5f6834', 'network_type': u'flat'}]

The Nova and Heat logs show similar errors, which is expected, because the port doesn't exist on the controller node. It does exist on the compute node, but I can't figure out how to get Magnum to deploy the cluster there.
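
(For reference, which host Nova actually picked for the master VM can be double-checked with something like the following; the Host column is only populated with admin credentials:)

openstack server list --long -c ID -c Name -c Status -c Host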

I also tried to disable the nova-compute service on the controller node:

[root@openstack-control ~(keystone_admin)]# openstack compute service set  --disable --disable-reason testing openstack-control nova-compute

...but then Nova just reports:

ERROR nova.conductor.manager [req-01fedb41-3102-4eb6-b88a-ca99f3926037 51e54f8ec02f41e3903e0f40ced8a6db da668bc102784935b284613d29827e05 - default default] Failed to schedule instances: NoValidHost_Remote: No valid host was found.
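
(When that happens, the nova-scheduler log is usually the quickest way to see which filter rejected each host; the log path below assumes an RDO-style install:)

# on the controller: look for the filter that dropped all hosts
grep -i 'returned 0 hosts' /var/log/nova/nova-scheduler.log | tail -n 20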

I create the cluster as follows:

openstack coe cluster template create clustertemplate2 \
--coe kubernetes \
--server-type vm \
--image coreos_production_openstack \
--external-network provider \
--network-driver calico \
--volume-driver cinder \
--docker-volume-size 1 \
--docker-storage-driver cinder \
--dns-nameserver 8.8.8.8 \
--flavor m1.tiny \
--registry-enabled \
--labels cloud_provider_enabled=true

openstack coe cluster create \
--cluster-template clustertemplate2 \
--docker-volume-size 3 \
--keypair keypair1 \
--master-count 1 \
--node-count 1 \
--master-flavor m1.small \
--flavor m1.small \
--fixed-network provider  \
--fixed-subnet provider   \
--floating-ip-enabled \
cluster

Here are some service outputs:

[root@openstack-control ~(keystone_admin)]# openstack coe service list
+----+------+------------------+-------+----------+-----------------+---------------------------+---------------------------+
| id | host | binary           | state | disabled | disabled_reason | created_at                | updated_at                |
+----+------+------------------+-------+----------+-----------------+---------------------------+---------------------------+
|  1 | None | magnum-conductor | up    | False    | None            | 2020-04-06T18:50:09+00:00 | 2020-04-09T10:42:42+00:00 |
+----+------+------------------+-------+----------+-----------------+---------------------------+---------------------------+
[root@openstack-control ~(keystone_admin)]# openstack network agent list
+--------------------------------------+--------------------+--------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host               | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+--------------------+-------------------+-------+-------+---------------------------+
| 2b24e18b-e325-413d-9071-99204d794e68 | Linux bridge agent | openstack-compute1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 35df3f47-f8d2-4393-844f-ddf51392e192 | DHCP agent         | openstack-compute1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 7ab7a882-b855-45e8-a73f-66da41f8827f | Metadata agent     | openstack-compute1 | None              | :-)   | UP    | neutron-metadata-agent    |
| b1f2bb45-64fe-467c-82df-1d229a700465 | DHCP agent         | openstack-control  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| db9e3ee7-ec45-4122-93bc-b94f520d0f2c | Metadata agent     | openstack-control  | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+--------------------+-------------------+-------+-------+---------------------------+
[root@openstack-control ~(keystone_admin)]# openstack compute service list
+----+----------------+--------------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host               | Zone     | Status  | State | Updated At                 |
+----+----------------+--------------------+----------+---------+-------+----------------------------+
|  2 | nova-conductor | openstack-control  | internal | enabled | up    | 2020-04-09T10:43:45.000000 |
|  3 | nova-scheduler | openstack-control  | internal | enabled | up    | 2020-04-09T10:43:46.000000 |
|  4 | nova-compute   | openstack-control  | nova     | enabled | up    | 2020-04-09T10:43:43.000000 |
|  5 | nova-compute   | openstack-compute1 | nova     | enabled | up    | 2020-04-09T10:43:47.000000 |
+----+----------------+--------------------+----------+---------+-------+----------------------------+

I'm obviously missing something on the compute node, since the cluster VMs don't get scheduled there, but I have no idea what. Can anyone please help out? What am I missing for the cluster VMs to be deployed on the compute node?