
stef97's profile - activity

2019-05-16 16:47:32 -0500 commented answer openstack volume list The resource could not be found

If I add "--os-volume-api-version 3" it works fine. It seems the openstack client uses volume API version 2 by default, as per "openstack --help". Many thanks for your help.
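For anyone hitting the same thing, the workaround can be applied per command or set once in the shell (a small sketch; OS_VOLUME_API_VERSION is the environment equivalent of the flag above):

    # per command
    openstack --os-volume-api-version 3 volume list

    # or once per shell session
    export OS_VOLUME_API_VERSION=3
    openstack volume list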

2019-05-16 10:28:08 -0500 received badge  Editor (source)
2019-05-16 08:28:54 -0500 answered a question openstack volume list The resource could not be found

Thanks for taking the trouble to provide guidance. Adding --debug yields the following results (excerpts). Any further help will be appreciated.

Authentication seems to work:

Making authentication request to http://10.10.30.158:5000/v3/auth/tokens
http://10.10.30.158:5000 "POST /v3/auth/tokens HTTP/1.1" 201 8715

 Instantiating volume client: <class 'cinderclient.v2.client.Client'>
Instantiating compute client for API Version Major: 2, Minor: 0
Instantiating compute api: <class 'openstackclient.api.compute_v2.APIv2'>
Instantiating identity client: <class 'keystoneclient.v3.client.Client'>
Making authentication request to http://10.10.30.158:5000/v3/auth/tokens
http://10.10.30.158:5000 "POST /v3/auth/tokens HTTP/1.1" 201 8715

but getting data from the endpoint does not, despite the endpoint being up and running:

 Starting new HTTP connection (1): 10.10.30.158
http://10.10.30.158:8776 "GET /v2/b7d003aea29e4ebab92315a3f15e9987/volumes/detail HTTP/1.1" 404 112
RESP: [404] Connection: keep-alive Content-Length: 112 Content-Type: application/json Date: Thu, 16 May 2019 12:57:30 GMT
RESP BODY: {"message": "The resource could not be found.<br /><br />\n\n\n", "code": "404 Not Found", "title": "Not Found"}
The resource could not be found.<br /><br />


openstack endpoint list | grep 8776 | grep v2
| 473244d56b144b11aa6c4a69593175ca | RegionOne | cinderv2     | volumev2        | True    | public    | http://10.10.30.158:8776/v2/%(tenant_id)s      |
| 8d75c76384414574a6293c82ac5e9938 | RegionOne | cinderv2     | volumev2        | True    | internal  | http://10.10.30.158:8776/v2/%(tenant_id)s      |
| deb8d6e18525449a9fa12a9ed14663ec | RegionOne | cinderv2     | volumev2        | True    | admin     | http://10.10.30.158:8776/v2/%(tenant_id)s      |
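(Side note: the grep above only shows v2. To check whether a volumev3 service/endpoint is registered at all — and roughly how one would be added if it is missing — something like the commands below should work. The URLs simply mirror the v2 entries above, so treat this as a sketch rather than a confirmed fix.)

    openstack endpoint list --service volumev3

    # if nothing comes back, register the v3 service and endpoints (sketch):
    openstack service create --name cinderv3 --description "OpenStack Block Storage v3" volumev3
    openstack endpoint create --region RegionOne volumev3 public 'http://10.10.30.158:8776/v3/%(tenant_id)s'
    openstack endpoint create --region RegionOne volumev3 internal 'http://10.10.30.158:8776/v3/%(tenant_id)s'
    openstack endpoint create --region RegionOne volumev3 admin 'http://10.10.30.158:8776/v3/%(tenant_id)s'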

api.log does not have anything useful (only the 404 error). Below are excerpts from "cinder --debug list" (which works fine); the difference seems to be that it uses API version 3 instead of 2.

DEBUG:keystoneauth.session:REQ: curl -g -i -X GET http://10.10.30.158:5000/v3 -H "Accept: application/json" -H "User-Agent: cinder keystoneauth1/3.10.0 python-requests/2.19.1 CPython/2.7.5"
DEBUG:keystoneauth.session:RESP: [200] Connection: Keep-Alive Content-Encoding: gzip Content-Length: 193 Content-Type: application/json Date: Thu, 16 May 2019 15:23:43 GMT Keep-Alive: timeout=15, max=100 Server: Apache/2.4.6 (CentOS) Vary: X-Auth-Token,Accept-Encoding x-openstack-request-id: req-af5efd23-4154-4af5-a7be-de53907e80bb
DEBUG:keystoneauth.session:RESP BODY: {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://10.10.30.158:5000/v3/", "rel": "self"}]}}
DEBUG:keystoneauth.session:GET call to http://10.10.30.158:5000/v3 used request id req-af5efd23-4154-4af5-a7be-de53907e80bb
DEBUG:keystoneauth.identity.v3.base:Making authentication request to http://10.10.30.158:5000/v3/auth/tokens

DEBUG:keystoneauth:GET call to volumev3 for http://10.10.30.158:8776/v3/b7d003aea29e4ebab92315a3f15e9987/volumes/detail used request id req-3d19e750-8aea-4697-b2ba-c36609b03b57

I am running both the openstack and cinder commands in the same shell with the same sourced environment. Why the different results?

cat keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='CCCCCC'
export OS_AUTH_URL=http://10.10.30.158:5000/v3
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME ...
2019-05-15 16:55:04 -0500 asked a question openstack volume list The resource could not be found

Hi, if I run the command "openstack volume list" I get this response: "The resource could not be found".

However, "cinder list" and listing the volumes from the GUI (Horizon) both work fine.

Any suggestions / ideas will be appreciated. I am using Ceph for volumes and images.

Here is cinder.conf:

[root@ops-ctrl-dev ~]# cat /etc/cinder/cinder.conf

[DEFAULT]
enabled_backends = ceph
default_volume_type = ceph
backup_swift_url=http://10.10.30.158:8080/v1/AUTH_
backup_swift_container=volumebackups
backup_driver=cinder.backup.drivers.swift
enable_v3_api=True
enable_v2_api=false
storage_availability_zone=nova
default_availability_zone=nova
auth_strategy=keystone
osapi_volume_listen=0.0.0.0
osapi_volume_workers=2
enable_force_upload = true
debug=True
log_dir=/var/log/cinder
transport_url=rabbit://guest:guest@10.10.30.158:5672/
control_exchange=openstack
api_paste_config=/etc/cinder/api-paste.ini
glance_host=10.10.30.158
#image_conversion_dir = /mnt/nfs/img/conversion
#image_conversion_dir = $state_path/conversion
[database]
connection=mysql+pymysql://cinder:92fcdb6ddf42473e@10.10.30.158/cinder
[key_manager]
backend=cinder.keymgr.conf_key_mgr.ConfKeyManager

[keystone_authtoken]
www_authenticate_uri=http://10.10.30.158:5000/v3
auth_uri=http://10.10.30.158:5000/v3
auth_type=password
auth_url=http://10.10.30.158:35357/v3
username=cinder
password=8a68c661ca6e4de6
user_domain_name=Default
project_name=services
project_domain_name=Default

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = 78720044-69a4-4256-a2a1-22f55ca1de11
rbd_keyring_config = /etc/ceph/ceph.client.cinder.keyring
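Since this config disables the v2 API (enable_v2_api=false) while the catalog points the openstack client at /v2/, one quick check is to ask the cinder API itself which versions it serves — just a rough diagnostic, assuming the API listens on port 8776 as configured:

    # list the API versions advertised by cinder-api
    curl -s http://10.10.30.158:8776/ | python -m json.tool
    # a v2 URL should then return 404 if only v3 is enabled:
    curl -i http://10.10.30.158:8776/v2/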

2019-05-08 10:09:19 -0500 received badge  Popular Question (source)
2019-05-08 10:09:19 -0500 received badge  Famous Question (source)
2019-05-08 10:09:19 -0500 received badge  Notable Question (source)
2019-05-06 04:48:10 -0500 received badge  Famous Question (source)
2019-05-03 18:17:27 -0500 commented answer dhcp lease not issued for new networks

I ended up copying /etc/sysconfig/iptables from a working node and that solved my issue. Still not sure why iptables did not get set up properly when I deployed the new node.
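For reference, the workaround was roughly the following (the node name is a placeholder; assumes CentOS 7 with the iptables-services package):

    # copy the rules from a known-good node, then reload them
    scp root@working-node:/etc/sysconfig/iptables /etc/sysconfig/iptables
    systemctl restart iptables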

2019-04-24 09:41:02 -0500 answered a question dhcp lease not issued for new networks

I apologize for jumping the gun and asking for help without doing my due diligence. I spent the last few days investigating, and the only difference I found between the current nodes and the new one is a missing iptables rule for the neutron tunnel port on the controller.

Current nodes have something like the rule below; the new nodes don't.

ACCEPT udp -- any any ops03.tor.medavail.net anywhere multiport dports 4789 /* 001 neutron tunnel port incoming neutron_tunnel_10.10.30.158_10.10.30.223 */

We are using VXLAN.

Although adding it manually does NOT solve my issue, I am thinking that there might be other rules missing
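For reference, adding it by hand looked roughly like this (the source address is taken from the rule comment above and may differ per node; persisting the rule assumes the iptables-services package):

    # allow VXLAN (udp/4789) from the peer node, mirroring the rule on the working nodes
    iptables -I INPUT -s 10.10.30.223 -p udp -m multiport --dports 4789 \
      -m comment --comment "001 neutron tunnel port incoming" -j ACCEPT
    # persist across reboots
    service iptables save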

Any idea why my iptables rules are not being configured properly when adding nodes with Packstack? Any resources / tutorials for adding a compute/neutron CentOS 7 node manually?

Steven

2019-04-23 08:48:01 -0500 received badge  Enthusiast
2019-04-22 16:47:30 -0500 asked a question dhcp lease not issued for new networks

Hi, after I added a new compute and network node, I noticed that any new network I create does not provide DHCP leases. Horizon reports the IP was successfully assigned, but the VM does not get it. If I attach the VM to one of the old networks while it is running on the new node, it works fine; if I move the VM to one of the old nodes, it also works fine. The above happens irrespective of which host the dnsmasq/namespace is running on, and running tcpdump on the namespace interface appears to indicate that all is good.

I would really appreciate any help, as it does not make any sense to me.

Below are some technical details.

ip netns exec qdhcp-77011d2e-259e-4788-8b06-6f4fee8c350e tcpdump -vnes0 -i tapda8686cd-dd

14:01:59.165236 fa:16:3e:35:52:7c > Broadcast, ethertype IPv4 (0x0800), length 332: (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto UDP (17), length 318)
    0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:35:52:7c, length 290, xid 0xe90e6162, secs 60, Flags [none]
      Client-Ethernet-Address fa:16:3e:35:52:7c
      Vendor-rfc1048 Extensions
        Magic Cookie 0x63825363
        DHCP-Message Option 53, length 1: Discover
        Client-ID Option 61, length 7: ether fa:16:3e:35:52:7c
        MSZ Option 57, length 2: 576
        Parameter-Request Option 55, length 9:
          Subnet-Mask, Default-Gateway, Domain-Name-Server, Hostname
          Domain-Name, MTU, BR, NTP
          Classless-Static-Route
        Vendor-Class Option 60, length 12: "udhcp 1.20.1"
        Hostname Option 12, length 6: "cirros"
14:01:59.165444 fa:16:3e:47:a4:7b > fa:16:3e:35:52:7c, ethertype IPv4 (0x0800), length 370: (tos 0xc0, ttl 64, id 3425, offset 0, flags [none], proto UDP (17), length 356)
    10.10.200.10.bootps > 10.10.200.26.bootpc: BOOTP/DHCP, Reply, length 328, xid 0xe90e6162, secs 60, Flags [none]
      Your-IP 10.10.200.26
      Server-IP 10.10.200.10
      Client-Ethernet-Address fa:16:3e:35:52:7c
      Vendor-rfc1048 Extensions
        Magic Cookie 0x63825363
        DHCP-Message Option 53, length 1: Offer
        Server-ID Option 54, length 4: 10.10.200.10
        Lease-Time Option 51, length 4: 86400
        RN Option 58, length 4: 43200
        RB Option 59, length 4: 75600
        Subnet-Mask Option 1, length 4: 255.255.255.0
        BR Option 28, length 4: 10.10.200.255
        Domain-Name Option 15, length 14: "openstacklocal"
        Default-Gateway Option 3, length 4: 10.10.200.1
        Classless-Static-Route Option 121, length 14: (169.254.169.254/32:10.10.200.10),(default:10.10.200.1)
        Domain-Name-Server Option 6, length 4: 10.9.0.2
        MTU Option 26, length 2: 1450

openstack server show test3
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | AUTO                                                     |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | ops04.tor.medavail.net                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname | ops04.tor.medavail.net                                   |
| OS-EXT-SRV-ATTR:instance_name       | instance-0000012e                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2019-04-17T16:52:35.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | 2testnetwork=10.10.200.26

2019-03-25 05:30:24 -0500 received badge  Notable Question (source)
2019-03-10 16:34:48 -0500 commented question magnum kubernetes NoSchedule taint

Yes, many thanks. Adding cluster_user_trust = true under [trust] in magnum.conf took care of the tainting.
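For anyone else hitting this, the change was just the following (magnum.conf excerpt; the systemd unit names are the RDO/CentOS ones, adjust for your packaging):

    # /etc/magnum/magnum.conf
    [trust]
    cluster_user_trust = true

    # then restart the magnum services
    systemctl restart openstack-magnum-api openstack-magnum-conductor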

2019-03-10 08:45:41 -0500 commented answer magnum fedora-atomic no available network renderers found

I was able to get this working after I added "region_name_for_services = RegionOne" to heat.conf. I tested it only with local certificates.
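For reference, the change was just this (heat.conf excerpt; restarting the heat services afterwards is assumed):

    # /etc/heat/heat.conf
    [DEFAULT]
    region_name_for_services = RegionOne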

2019-03-08 15:20:28 -0500 asked a question magnum kubernetes NoSchedule taint

Hi,

My Kubernetes clusters have to be manually modified (taint removed) in order to allow pods to be created. Is this expected?

Here is the command I had to use to remove the taint from the minion:

kubectl taint nodes k8tls-29-ggwvjhvnerad-minion-0 node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-

I created a cluster with 1 node and 1 master using the latest Fedora Atomic 29.

Here is the "kubectl describe node" output for the minion:

Name:               k8tls-29-ggwvjhvnerad-minion-0
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/hostname=k8tls-29-ggwvjhvnerad-minion-0
Annotations:        node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 08 Mar 2019 14:27:20 -0500
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
Unschedulable:      false
Conditions:
  Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----            ------  -----------------                ------------------               ------                      -------
  OutOfDisk       False   Fri, 08 Mar 2019 15:44:36 -0500  Fri, 08 Mar 2019 14:27:19 -0500  KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Fri, 08 Mar 2019 15:44:36 -0500  Fri, 08 Mar 2019 14:27:19 -0500  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Fri, 08 Mar 2019 15:44:36 -0500  Fri, 08 Mar 2019 14:27:19 -0500  KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Fri, 08 Mar 2019 15:44:36 -0500  Fri, 08 Mar 2019 14:27:19 -0500  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Fri, 08 Mar 2019 15:44:36 -0500  Fri, 08 Mar 2019 14:27:19 -0500  KubeletReady                kubelet is posting ready status
Addresses:
Capacity:
  cpu:                1
  ephemeral-storage:  6130Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2038528Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  5784993783
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1936128Ki
  pods:               110
System Info:
  Machine ID:                 6db739bb62eb47698add665127cb335a
  System UUID:                6db739bb-62eb-4769-8add-665127cb335a
  Boot ID:                    4dbcee13-4378-4918-85c6-85d4a74a414a
  Kernel Version:             4.20.8-200.fc29.x86_64
  OS Image:                   Debian GNU/Linux 9 (stretch)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://1.13.1
  Kubelet Version:            v1.11.1
  Kube-Proxy Version:         v1.11.1
Non-terminated Pods:          (0 in total)
  Namespace  Name  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------  ----  ------------  ----------  ---------------  -------------  ---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:
  Type    Reason                   Age                From                                     Message
  ----    ------                   ---                ----                                     -------
  Normal  Starting                 53s                kubelet, k8tls-29-ggwvjhvnerad-minion-0  Starting kubelet.
  Normal  NodeHasSufficientDisk    52s (x2 over 52s)  kubelet, k8tls-29-ggwvjhvnerad-minion-0  Node k8tls-29-ggwvjhvnerad-minion-0 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  52s (x2 over 52s)  kubelet, k8tls-29-ggwvjhvnerad-minion-0  Node k8tls-29-ggwvjhvnerad-minion-0 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    52s (x2 over 52s)  kubelet, k8tls-29-ggwvjhvnerad-minion-0  Node k8tls-29-ggwvjhvnerad-minion-0 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     52s (x2 over 52s)  kubelet, k8tls-29-ggwvjhvnerad-minion-0  Node k8tls-29-ggwvjhvnerad-minion-0 status is now: NodeHasSufficientPID

2019-03-08 06:39:29 -0500 received badge  Popular Question (source)
2019-03-07 17:34:05 -0500 answered a question magnum fedora-atomic no available network renderers found

I have been unable to make it work with TLS enabled. The error message from the master's cloud-init is "waiting for CA to be made available for certificate manager api".

I tried both barbican and local with the same result. I also added the 'cert_manager_api=true' label to the template.

All I want is a Kubernetes cluster, so if anyone is aware of a winning combination of Fedora Atomic version and Magnum, with or without TLS, it would be really appreciated if they could share their knowledge / config files.
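For context, the template and cluster I am creating look roughly like this (image name, external network, flavors and labels are from my environment and not a known-good recipe):

    openstack coe cluster template create k8s-atomic \
      --image Fedora-AtomicHost-28-20180806.0 \
      --coe kubernetes \
      --external-network public \
      --dns-nameserver 8.8.8.8 \
      --master-flavor m1.small --flavor m1.small \
      --network-driver flannel \
      --labels cert_manager_api=true

    openstack coe cluster create k8tls --cluster-template k8s-atomic --master-count 1 --node-count 1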

2019-03-07 15:40:47 -0500 answered a question barbican installation on rocky - not working

Hi, I've used this and it is working now: https://www.server-world.info/en/note?os=CentOS_7&p=openstack_ocata2&f=18

Barbican is working fine as a standalone service - not sure why the docs suggested adding it to Apache.
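In case it helps others, this is roughly how I verified the standalone service afterwards (the systemd unit name is the RDO/CentOS 7 one; adjust for your packaging):

    systemctl enable --now openstack-barbican-api
    ss -lntp | grep 9311              # something should now be listening on 9311
    curl -s http://10.10.30.158:9311/ # version discovery document
    openstack secret list             # end-to-end check through keystone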

2019-03-07 15:40:47 -0500 asked a question barbican installation on rocky - not working

Hi,

I have installed Barbican using the official documentation: https://docs.openstack.org/barbican/latest/install/verify.html

Everything went fine except that nothing is listening on port 9311. There is no log file in /var/log/barbican and nothing useful in the httpd logs (except complaints that SSL is disabled).

The endpoints are correct (no version suffix at the end).

I must be missing something really simple/basic, as there do not seem to be many other people having installation issues.

Any hints on how to troubleshoot this would be appreciated.

openstack secret store --name mysecret --payload j4=]d21
Failed to contact the endpoint at http://10.10.30.158:9311 for discovery. Fallback to using that endpoint as the base url.

openstack endpoint list | grep barbican
| 20c900918bc74f2794a21b671ff2fb84 | RegionOne | barbican | key-manager | True | internal | http://10.10.30.158:9311

/etc/barbican/barbican.conf

[DEFAULT]
host_href = http://10.10.30.158:9311
sql_connection = mysql+pymysql://barbican:@10.10.30.158/barbican
db_auto_create = false
debug = true
transport_url=rabbit://guest:guest@10.10.30.158:5672/
[certificate]
[certificate_event]
[cors]
[crypto]
[dogtag_plugin]
[keystone_authtoken]
www_authenticate_uri=http://10.10.30.158:5000/v3
memcached_servers=10.10.30.158:11211
auth_type=password
auth_url=http://10.10.30.158:5000
username=barbican
password=*
user_domain_name=Default
project_name=services
project_domain_name=Default

Thanks

2019-03-06 10:14:35 -0500 answered a question magnum fedora-atomic no available network renderers found

Thanks for taking the trouble to respond. Using Fedora-AtomicHost-28-20180806.0.x86_64.qcow2 got rid of the network issue, but I am running into a certificate one now: "creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied". Looks like there are a few documented bugs about it: https://bugs.launchpad.net/ubuntu/+source/magnum/+bug/1809254 and https://bugs.launchpad.net/magnum/+bug/1714880

Any ideas / suggestions will be appreciated. Note: I'll try to start a cluster with TLS enabled.

Steven

2019-03-05 15:33:28 -0500 asked a question magnum fedora-atomic no available network renderers found

Hi, I have been struggling to get a Kubernetes cluster deployed using Magnum; the cluster times out. I noticed the following in the cloud-init-output.log of the deployed minion/master:

"stages.py[ERROR]: Unable to render networking. Network config is likely broken: No available network renderers found. Searched through list: ['eni', 'sysconfig', 'netplan'] ."

I am using the latest Fedora Atomic (version 29, which has cloud-init version 17.1) and OpenStack Rocky.

I was thinking of trying a different image, but before I go that route I am hoping someone can either suggest an image that works or a way to fix the current one.

Any help/hint would be greatly appreciated.
