Ubuntu cloud-init can't get metadata at boot
Hi, this is my environment: Juno, ML2 with Open vSwitch, VLAN.
node1 - controller (controller + network node, Neutron)
node2 - compute1 (Hyper-V)
node3 - compute2 (Hyper-V)
My Ubuntu image, which I built manually (VHD/VHDX), does not get metadata at boot, so it neither renames the hostname nor resizes the root partition.
But the crazy part is that I can curl the metadata URLs "http://169.254.169.254/2009-04-04/meta-data/instance-id", ".../meta-data/hostname", etc. from inside the instance.
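Concretely, both of these return data when run inside the already-booted instance:

    curl http://169.254.169.254/2009-04-04/meta-data/instance-id
    curl http://169.254.169.254/2009-04-04/meta-data/hostname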
Also, the hostname change of a Windows instance (VM) in OpenStack with cloud-init works fine.
I have installed cloud-init and cloud-utils in the image and tried pushing it into Glance as both VHD and VHDX; both gave the same result.
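For reference, the upload looked roughly like this (the image name and file path are just placeholders):

    glance image-create --name ubuntu-vhd --disk-format vhd \
        --container-format bare --is-public True --file ubuntu.vhd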
The complete cloud-init log from my Ubuntu instance is here: link
The hostname-related DEBUG lines are below:
- importer.py[DEBUG]: Failed at attempted import of 'cc_update_hostname' due to: No module named cc_update_hostname
- importer.py[DEBUG]: Found cc_update_hostname with attributes ['handle'] in ['cloudinit.config.cc_update_hostname']
It says "Found cc_update_hostname" above, but the instance just keeps the default hostname "adminvm" that I used while creating the image.
One more warning in the log:
- url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 101] Network is unreachable)]
- DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 126 seconds
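Since DataSourceEc2 is the one giving up, would pinning the datasources along the lines of the sketch below be the right direction, or should I leave the defaults alone? (The file name and values here are only my assumption, not something from my exact cloud-init version's docs.)

    # /etc/cloud/cloud.cfg.d/90_datasources.cfg  (hypothetical file name)
    datasource_list: [ ConfigDrive, Ec2, None ]
    datasource:
      Ec2:
        metadata_urls: [ 'http://169.254.169.254' ]
        max_wait: 120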
Some details about my network:
- eth0 --> Management_NIC
- eth1 --> VM_DATA_NETWORK (br-eth1)
- eth2 --> External_Network (br-ex)
The compute nodes have eth0 and eth1 (br-eth1), but the instance (VM on the compute node) has its port on eth1 (i.e. br-eth1) alone.
ovs-vsctl show (on node1):
57812622-2530-4afe-8f22-1bcf81ea7aff
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-741aae57-35"
            Interface "qg-741aae57-35"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port "tap46945191-d3"
            tag: 2
            Interface "tap46945191-d3"
                type: internal
        Port "tapd7668caa-ea"
            tag: 1
            Interface "tapd7668caa-ea"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-4cfdd1c9-73"
            tag: 1
            Interface "qr-4cfdd1c9-73"
                type: internal
    Bridge "br-eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    ovs_version: "2.0.2"
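If it helps, is the router namespace on node1 the right place to verify the metadata proxy? These are the checks I had in mind (the qrouter UUID below is a placeholder):

    ip netns list
    ip netns exec qrouter-<router-uuid> netstat -lnpt | grep 9697
    ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep 169.254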
/etc/neutron/metadata_agent.ini (on node1):
verbose = True
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_user_pass
nova_metadata_ip = controller
metadata_proxy_shared_secret = metadata_pass
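After touching this file, is restarting the agents like this enough to apply the changes (assuming the stock Ubuntu service names)?

    service neutron-metadata-agent restart
    service neutron-l3-agent restart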
/etc/nova/nova.conf (on node1):
[neutron]
...
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata_pass
....
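On the Nova side, is restarting the API service enough for the metadata proxy settings to be picked up (again assuming the stock Ubuntu service name)?

    service nova-api restart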
route -n on node1 (eth0 is the management interface):
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.1.3.9        0.0.0.0         UG    0      0        0 br-ex
10.1.3.0        0.0.0.0         255.255.255.0   U     0      0        0 br-ex
10.2.3.0        0.0.0.0         255.255.255.0   U     0      0        0 br-eth1
10.20.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
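Finally, the only other boot-time evidence I can think of collecting is the instance's console output (the instance name below is a placeholder):

    nova console-log ubuntu-test-vm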