0

Cannot log in to instance?

asked 2016-12-29 10:53:00 -0500

mkhan

updated 2017-01-06 09:09:22 -0500

I have deployed OpenStack Liberty, Mitaka, and Newton several times and am quite familiar with the installation. With all of these versions I am now facing the same issue: I cannot access the instances I launch. I have deployed in different environments (vCenter and VMware Fusion); both show the same behaviour, and in each case I followed the instructions from Packstack. When I try to access the console through the dashboard, it keeps saying it is starting up, and when I connect with virsh console <instance> I get: error: internal error: character console0 is not using PTY.
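For reference, these are the commands I use to look at the console device and boot output of the instance (the libvirt domain name instance-00000001 below is just an example; the real name comes from virsh list):

virsh list --all
virsh dumpxml instance-00000001 | grep -A4 "<console"
nova console-log test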

[root@mitaka ~(keystone_admin)]# nova service-list
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| 3  | nova-cert        | mitaka | internal | enabled | up    | 2017-01-05T14:39:22.000000 | -               |
| 4  | nova-consoleauth | mitaka | internal | enabled | up    | 2017-01-05T14:39:32.000000 | -               |
| 5  | nova-scheduler   | mitaka | internal | enabled | up    | 2017-01-05T14:39:23.000000 | -               |
| 6  | nova-conductor   | mitaka | internal | enabled | up    | 2017-01-05T14:39:22.000000 | -               |
| 7  | nova-compute     | mitaka | nova     | enabled | up    | 2017-01-05T14:39:27.000000 | -               |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+


[root@mitaka ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+--------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host   | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+--------+-------------------+-------+----------------+---------------------------+
| 0c1946c2-a57a-4613-acae-b6c693e9d4a3 | Metadata agent     | mitaka |                   | :-)   | True           | neutron-metadata-agent    |
| 2e8d7acf-85cc-4fe4-b473-b3d0701f62b0 | Open vSwitch agent | mitaka |                   | :-)   | True           | neutron-openvswitch-agent |
| 46bfcd3e-9565-4344-a2f3-3b7be6580001 | Metering agent     | mitaka |                   | :-)   | True           | neutron-metering-agent    |
| a9c91dfc-8a79-4e19-afa5-d9e61471c323 | DHCP agent         | mitaka | nova              | :-)   | True           | neutron-dhcp-agent        |
| fb8405d1-b13b-4479-b953-82f9580e3890 | L3 agent           | mitaka | nova              | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+--------+-------------------+-------+----------------+---------------------------+

[root@mitaka ~(keystone_admin)]# nova list
+--------------------------------------+------+--------+------------+-------------+-----------------+
| ID                                   | Name | Status | Task State | Power State | Networks        |
+--------------------------------------+------+--------+------------+-------------+-----------------+
| d138504d-c73f-4644-ae92-04e685085965 | test | ACTIVE | -          | Running     | Net1=10.10.10.3 |
+--------------------------------------+------+--------+------------+-------------+-----------------+

[root@mitaka ~(keystone_admin)]# grep -v \# /etc/nova/nova.conf | grep -v ^$
[DEFAULT]
notify_api_faults=False
state_path=/var/lib/nova
report_interval=10
enabled_apis=osapi_compute,metadata
osapi_compute_listen=0.0.0.0
osapi_compute_listen_port=8774
osapi_compute_workers=2
metadata_listen=0.0.0.0
metadata_listen_port=8775
metadata_workers=2
compute_manager=nova.compute.manager.ComputeManager
service_down_time=60
rootwrap_config=/etc/nova/rootwrap.conf
volume_api_class=nova.volume.cinder.API
auth_strategy=keystone
use_forwarded_for=False
fping_path=/usr/sbin/fping
heal_instance_info_cache_interval=60
reserved_host_memory_mb=512
cpu_allocation_ratio=16.0
ram_allocation_ratio=1.5
allow_resize_to_same_host=False
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
compute_driver=libvirt.LibvirtDriver
vif_plugging_is_fatal=True
vif_plugging_timeout=300
firewall_driver=nova.virt.firewall.NoopFirewallDriver
force_raw_images=True
api_paste_config=api-paste.ini
use_neutron=True
default_floating_pool=public
force_snat_range =0.0.0.0/0
metadata_host=192.168.18.187
dhcp_domain=novalocal
security_group_api=neutron
debug=False
verbose=True
log_dir=/var/log/nova
rpc_backend=rabbit
notification_driver=nova.openstack.common.notifier.rabbit_notifier,ceilometer.compute.nova_notifier
image_service=nova.image.glance.GlanceImageService
notification_topics=notifications
osapi_volume_listen=0.0.0.0
novncproxy_host=0.0.0.0
novncproxy_port=6080
[api_database]
connection=mysql+pymysql://nova_api:97a16d2d447f41ed@192.168.18.187/nova_api
[barbican]
[cache]
[cells]
[cinder]
catalog_info=volumev2:cinderv2:publicURL
[conductor]
use_local=False
[cors]
[cors.subdomain]
[database]
connection=mysql+pymysql://nova:97a16d2d447f41ed@192.168.18.187/nova
[ephemeral_storage_encryption]
[glance]
api_servers=192.168.18.187:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri=http://192.168.18.187:5000/v2.0
admin_password=3c95bc72f0a24597
admin_user=nova
admin_tenant_name=services
identity_uri=http://192.168.18.187:35357
[libvirt]
virt_type=qemu
inject_password=False
inject_key=False
inject_partition=-1
live_migration_uri=qemu+tcp://nova@%s/system
cpu_mode=host-model
vif_driver=nova.virt ...
(more)
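The paste above is cut off before the [vnc] section. For reference, the VNC settings I would expect Packstack to have written look roughly like this (the address is only an assumption mirroring metadata_host above, not a copy of my actual file):

[vnc]
enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.18.187
novncproxy_base_url=http://192.168.18.187:6080/vnc_auto.html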

3 answers

1

answered 2017-01-05 23:33:31 -0500

updated 2017-01-05 23:35:24 -0500

Hi Khan,

I have gone through the issue you describe above. Please help by providing the clarifications below.

  1. Are you able to access the console of the VM through virt-manager? If yes, please check whether the VM has a valid IP.
  2. Has the network namespace been created successfully? If the VM has a valid IP, can you reach that IP from inside the namespace (see the example after this list)?
  3. If possible, please share the dump details of the instance so that all the parameters can be verified.

Since you tried the vCenter environment and hit the same issue there, could you please share some relevant logs that would help us debug this more deeply?
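For the namespace check in point 2, something along these lines should work (the network ID is a placeholder; 10.10.10.3 is the fixed IP from your nova list output):

ip netns
ip netns exec qdhcp-<network-id> ping -c 3 10.10.10.3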

Regards, Pradipta

0

answered 2017-01-16 10:54:59 -0500

mkhan

tail nova-compute.log
2017-01-16 11:49:39.259 12925 WARNING nova.virt.libvirt.driver [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] couldn't obtain the vcpu count from domain id: f1c20077-c836-4e08-8d0e-69cd5a8c58d8, exception: Requested operation is not valid: cpu affinity is not supported
2017-01-16 11:49:40.138 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Total usable vcpus: 4, total allocated vcpus: 2
2017-01-16 11:49:40.139 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Final resource view: name=mitaka-controller1 phys_ram=12287MB used_ram=1536MB phys_disk=44GB used_disk=2GB total_vcpus=4 used_vcpus=2 pci_stats=[]
2017-01-16 11:49:40.174 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Compute_service record updated for mitaka-controller1:mitaka-controller1
2017-01-16 11:50:41.246 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Auditing locally available compute resources for node mitaka-controller1
2017-01-16 11:50:41.274 12925 WARNING nova.virt.libvirt.driver [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] couldn't obtain the vcpu count from domain id: d3498b3f-998f-4ef9-8c97-45567eae92df, exception: Requested operation is not valid: cpu affinity is not supported
2017-01-16 11:50:41.280 12925 WARNING nova.virt.libvirt.driver [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] couldn't obtain the vcpu count from domain id: f1c20077-c836-4e08-8d0e-69cd5a8c58d8, exception: Requested operation is not valid: cpu affinity is not supported
2017-01-16 11:50:42.961 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Total usable vcpus: 4, total allocated vcpus: 2
2017-01-16 11:50:42.962 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Final resource view: name=mitaka-controller1 phys_ram=12287MB used_ram=1536MB phys_disk=44GB used_disk=2GB total_vcpus=4 used_vcpus=2 pci_stats=[]
2017-01-16 11:50:43.017 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Compute_service record updated for mitaka-controller1:mitaka-controller1

[root@mitaka-controller1 nova]# tail -n 20 nova-compute.log
2017-01-16 11:48:39.574 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Final resource view: name=mitaka-controller1 phys_ram=12287MB used_ram=1536MB phys_disk=44GB used_disk=2GB total_vcpus=4 used_vcpus=2 pci_stats=[]
2017-01-16 11:48:39.605 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Compute_service record updated for mitaka-controller1:mitaka-controller1
2017-01-16 11:49:39.253 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Auditing locally available compute resources for node mitaka-controller1
2017-01-16 11:49:39.258 12925 WARNING nova.virt.libvirt.driver [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] couldn't obtain the vcpu count from domain id: d3498b3f-998f-4ef9-8c97-45567eae92df, exception: Requested operation is not valid: cpu affinity is not supported
2017-01-16 11:49:39.259 12925 WARNING nova.virt.libvirt.driver [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] couldn't obtain the vcpu count from domain id: f1c20077-c836-4e08-8d0e-69cd5a8c58d8, exception: Requested operation is not valid: cpu affinity is not supported
2017-01-16 11:49:40.138 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Total usable vcpus: 4, total allocated vcpus: 2
2017-01-16 11:49:40.139 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Final resource view: name=mitaka-controller1 phys_ram=12287MB used_ram=1536MB phys_disk=44GB used_disk=2GB total_vcpus=4 used_vcpus=2 pci_stats=[]
2017-01-16 11:49:40.174 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Compute_service record updated for mitaka-controller1:mitaka-controller1
2017-01-16 11:50:41.246 12925 INFO nova.compute.resource_tracker [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] Auditing locally available compute resources for node mitaka-controller1
2017-01-16 11:50:41.274 12925 WARNING nova.virt.libvirt.driver [req-34de1f6d-e312-497c-b241-7258f6e1a411 - - - - -] couldn't obtain the vcpu count from domain id: d3498b3f-998f-4ef9-8c97-45567eae92df, exception: Requested operation is ... (more)

-1

answered 2017-01-05 22:54:13 -0500

Anand TS

I see that you have set virt_type=qemu in your Nova configuration file.

Please run the command below and verify the output:

egrep -c '(vmx|svm)' /proc/cpuinfo

If the output is:

  1. 0, then virt_type = qemu should be set in the nova.conf of the compute node.
  2. any number greater than 0, then virt_type = kvm should be set in the nova.conf of the compute node (a minimal sketch follows below).
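A minimal sketch of that change, assuming a Packstack/RDO compute node (the service name may differ on other distributions):

# /etc/nova/nova.conf on the compute node
[libvirt]
virt_type = kvm

# then restart the compute service
systemctl restart openstack-nova-compute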

Please try this out and let me know if it helps.

Thanks, Anand TS


Comments

I have changed virt_type to kvm because the value was greater than 0, but I am still having the same issue and cannot access the console.

The console displays: Booting from hard disk GRUB
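For reference, this is how I check that hardware virtualization is actually passed through to this VM (Intel host assumed; kvm_intel must be loaded):

lsmod | grep kvm
cat /sys/module/kvm_intel/parameters/nested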

mkhan ( 2017-01-16 10:44:09 -0500 )

Hello, how did you solve your issue?

Alex Sousa ( 2018-10-29 09:28:40 -0500 )
