
kildarejoe's profile - activity

2017-03-15 07:08:08 -0600 received badge  Famous Question (source)
2016-09-07 04:48:02 -0600 asked a question Openstack kilo Compute Hosts - Ubuntu 14.04 - Guest Windows Server 2012 R2 will not boot.

Hi,

Could anybody be of assistance? Our compute hosts are a mix of DL360 G5s and G7s. To enable live migration of guests between the two different server generations, we enabled the following in the nova.conf file on the computes:

/etc/nova/nova.conf on computes 1, 5, 10 (G7s):

[DEFAULT]
vcpu_pin_set = "0-3"

[libvirt]
cpu_mode = custom
cpu_model = kvm64

/etc/nova/nova.conf on computes 2, 3, 4, 6, 7, 8, 9, 11, 12:

[libvirt]
cpu_mode = custom
cpu_model = kvm64

This enabled live migration of VMs across all computes, but the problem we are having in this configuration is that Windows Server 2012 R2 guest VMs will not boot on any node - we get a generic boot error on the console of the VM.

CentOS, Ubuntu, Fedora and Windows Server 2008 R2 VMs have no problem booting in this configuration.
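For reference, here is one way to ask libvirt for the widest CPU model both server generations support (a sketch - the xmllint extraction and file names are just illustrative):

# on a G5 and on a G7, capture the host CPU description
virsh capabilities | xmllint --xpath '/capabilities/host/cpu' - > cpu-g5.xml
virsh capabilities | xmllint --xpath '/capabilities/host/cpu' - > cpu-g7.xml

# feed both descriptions to libvirt; it prints a <cpu> element naming the
# common baseline model and features, a candidate for cpu_model in nova.conf
cat cpu-g5.xml cpu-g7.xml > all-cpus.xml
virsh cpu-baseline all-cpus.xml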

Thanks.

2016-07-20 01:29:51 -0600 received badge  Notable Question (source)
2016-07-08 22:08:46 -0600 received badge  Popular Question (source)
2016-07-07 08:54:25 -0600 commented answer Openstack Security Group - default Rules

OK, many thanks for your reply. We were looking for a way to measure connectivity for tenant VMs for SLA purposes - we may need to take a different direction then.

Cheers..

2016-07-07 03:01:08 -0600 asked a question Openstack Security Group - default Rules

Hi,

I am trying to add security rules to the default security group so that when a new tenant is created, the rules are included in that tenant's default security group. Is there a way to do this?
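For context, we know how to add rules to an existing tenant's default group from the CLI - e.g. this opens ingress ICMP (a sketch using the Kilo-era neutron client):

neutron security-group-rule-create --protocol icmp --direction ingress default

But that only changes the current tenant's group, not whatever is used as a template when new tenants are created.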

Thanks..

2016-06-13 07:32:08 -0600 received badge  Famous Question (source)
2016-06-13 07:32:08 -0600 received badge  Notable Question (source)
2016-06-09 03:13:40 -0600 received badge  Popular Question (source)
2016-06-08 04:32:44 -0600 answered a question OPENSTACK KILO - VM's stuck in scheduling state

Maybe a problem with Keystone?

Verify correct functionality by requesting an auth token and the service catalog:

curl -i \
  -H "Content-Type: application/json" \
  -d '
{ "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "admin",
          "domain": { "id": "default" },
          "password": "Your_Admin_Password"
        }
      }
    }
  }
}' \
  http://HA_Proxy_VIP_Controller_IP:353... ; echo

Just substitute in your admin password and VIP/controller IP. Is that 172.x.x.x a VIP or a controller IP?
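If Keystone is healthy, the response should start like this (abridged; the token value is elided):

HTTP/1.1 201 Created
X-Subject-Token: <token-id>

Anything else - a 404, a 401, or a proxy error page - points at Keystone or HAProxy.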

2016-06-08 04:18:28 -0600 asked a question Troubleshooting OpenStack

Hi,

Just want to get people's opinions on how they go about troubleshooting an OpenStack infrastructure, particularly the OpenStack services.

I usually follow the VM provisioning life cycle within OpenStack to track down where there might be problems.

Pulled this from ilearnstack.com:

1. Dashboard or CLI gets the user credentials and makes a REST call to Keystone for authentication.
2. Keystone authenticates the credentials and generates and sends back an auth token, which will be used for sending requests to the other components through REST calls.
3. Dashboard or CLI converts the new instance request specified in the 'launch instance' form or 'nova boot' command into a REST API request and sends it to nova-api.
4. nova-api receives the request and sends it to Keystone for validation of the auth token and access permissions.
5. Keystone validates the token and sends back updated auth headers with roles and permissions.
6. nova-api interacts with nova-database.
7. Creates the initial DB entry for the new instance.
8. nova-api sends an rpc.call request to nova-scheduler, expecting to get back an updated instance entry with the host ID specified.
9. nova-scheduler picks up the request from the queue.
10. nova-scheduler interacts with nova-database to find an appropriate host via filtering and weighing.
11. Returns the updated instance entry with the appropriate host ID after filtering and weighing.
12. nova-scheduler sends an rpc.cast request to nova-compute for launching the instance on the appropriate host.
13. nova-compute picks up the request from the queue.
14. nova-compute sends an rpc.call request to nova-conductor to fetch instance information such as host ID and flavor (RAM, CPU, disk).
15. nova-conductor picks up the request from the queue.
16. nova-conductor interacts with nova-database.
17. Returns the instance information.
18. nova-compute picks up the instance information from the queue.
19. nova-compute makes a REST call, passing the auth token to glance-api, to get the image URI by image ID from Glance and upload the image from image storage.
20. glance-api validates the auth token with Keystone.
21. nova-compute gets the image metadata.
22. nova-compute makes a REST call, passing the auth token to the Network API, to allocate and configure the network so that the instance gets an IP address.
23. neutron-server validates the auth token with Keystone.
24. nova-compute gets the network info.
25. nova-compute makes a REST call, passing the auth token to the Volume API, to attach volumes to the instance.
26. cinder-api validates the auth token with Keystone.
27. nova-compute gets the block storage info.
28. nova-compute generates data for the hypervisor driver and executes the request on the hypervisor (via libvirt or its API).

The table represents the instance state at various steps during provisioning:

Status   Task                   Power state   Steps
Build    scheduling             None          3-12
Build    networking             None          22-24
Build    block_device_mapping   None          25-27
Build    spawning               None          28
Active   none                   Running       -
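To watch an instance walk through those task states while it builds, something like this works (the instance ID is a placeholder):

watch -n 2 "nova show <instance-id> | grep -E 'status|OS-EXT-STS:task_state'"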

For AMQP message flows I have trace options turned on in the RabbitMQ web console, so I can see message flow to the worker processes. For API queries, I use curl or the CLI clients to verify correct working of a particular API.
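The same firehose tracing can be switched on from the CLI (this assumes the default vhost "/"):

rabbitmqctl trace_on -p /
# traced messages are republished to the amq.rabbitmq.trace exchange,
# where a throwaway queue can be bound to inspect them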

If a process is dying silently, I will run it from the command line with debug turned on in the configuration file, while also checking the log files - e.g. nova-compute: /usr/bin/nova-compute.
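As a sketch, for nova-compute on Ubuntu that looks something like this (flag names assume the usual oslo options; debug = True in nova.conf has the same effect):

service nova-compute stop
/usr/bin/nova-compute --config-file /etc/nova/nova.conf --debug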

Just wondering what other troubleshooting techniques people use to solve problems within an OpenStack infrastructure - if this is being asked in the ... (more)

2016-06-02 08:07:08 -0600 answered a question unable to connect to controller:9696/extensions

Hi, can you run neutron --debug net-list and post the output?

Can you also run service neutron-server status on the controller, and a netstat to see if the port is open?

netstat -a | grep 9696

2016-06-02 08:00:58 -0600 answered a question nova-compute node stuck in down state

Have you run neutron-openvswitch-agent from the command line?

/usr/bin/neutron-openvswitch-agent should show the agent connecting to the rabbit queue successfully, or whereabouts in the process it is having a problem. Once the agent connects to the rabbit cluster and creates the channel to publish/consume messages, it should notify all the other compute/neutron nodes to create the overlay networks for layer 2 communication.

Then do an ovs-vsctl show to see the veth pair interface connections and check that the GRE/VXLAN tunnels have been established.
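For reference, an established GRE tunnel port in ovs-vsctl show looks roughly like this (port names and addresses are illustrative):

    Bridge br-tun
        Port "gre-ac100105"
            Interface "gre-ac100105"
                type: gre
                options: {in_key=flow, local_ip="172.16.1.6", out_key=flow, remote_ip="172.16.1.5"}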

2016-06-02 07:44:25 -0600 answered a question Failed to bind port on host

Hi,

What hypervisor are you running - KVM/QEMU? If so, have you checked the libvirt logs?

If you do a brctl show, has the tap device been created?

Have you checked the XML file of the VM (presuming KVM) to confirm the tap device is declared?
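A quick way to check, assuming libvirt/KVM (the domain name is a placeholder - virsh list --all shows the real ones):

virsh dumpxml instance-0000004a | grep -A5 '<interface'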

2016-06-02 07:31:19 -0600 answered a question please help me with floating IP issue

Hi,

As far as I know you should be setting an IP on the network interface on your neutron node on the external network. Also, have you tied the physical interface, e.g. eth0, to the OVS bridge br-ex? For example:

ovs-vsctl show
58e4fc1f-8035-4eeb-bdb1-fe82025dbec0
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
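If eth0 isn't attached yet, the commands are roughly these (assuming the bridge is named br-ex; note that moving eth0 into the bridge drops any IP configured directly on it):

ovs-vsctl --may-exist add-br br-ex
ovs-vsctl --may-exist add-port br-ex eth0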

Also, I don't think you have to create a virtual router - just set the gateway IP on your physical layer 3 device for the external range of your subnet.

Could you share the results of:

neutron net-show public
neutron subnet-show UUID_of_subnet

2015-07-01 11:14:25 -0600 received badge  Famous Question (source)
2015-06-23 01:02:36 -0600 received badge  Notable Question (source)
2015-06-09 02:25:03 -0600 received badge  Popular Question (source)
2015-06-08 06:08:03 -0600 answered a question Kilo deployment using packstack fails with 403-error on "/usr/bin/openstack service list"

Hi,

I am having the same problem with an Ubuntu 14.04 Kilo install:

root@kloud-controller1:~# netstat -nltp | grep 35357
tcp6       0      0 :::35357        :::*        LISTEN      23385/apache2

Is there any solution?

2015-06-08 05:31:39 -0600 answered a question openstack Could not determine a suitable URL for the plugin

Hi,

I have this same problem - did you find a solution?

Thanks.

2015-06-08 04:41:46 -0600 asked a question OPENSTACK KILO - VM's stuck in scheduling state

Hi,

I wonder if the community could help me. I have an OpenStack environment built as follows, but I cannot boot VMs - they are stuck in the scheduling state. (Please let me know if I need to provide more info, or if there is a troubleshooting process you could suggest.)

I have 3 KVM hosts running the OpenStack management services (each OpenStack component replicated 3 times for HA): VMs 1, 2 and 3 - kloud-controller1, 2 and 3 - run the Cinder, Glance, Nova and Neutron APIs. These VMs are attached to a Ceph backend which stores the images and volumes for Glance and Cinder. 3 VMs run the RabbitMQ messaging queue, 2 MySQL VMs run Galera for active-active, and 3 VMs run Neutron nodes with the ML2 plugin.

There is a Ceph cluster with 3 monitors and OSD daemons on each of the 3 nodes, with DAS attached to each host. The Ceph cluster is in a healthy state and I have previously been able to boot virtual machines.

When I do a nova agent-list, it returns nothing:

root@kloud-controller1:~# nova agent-list
+----------+------------+----+--------------+---------+---------+-----+
| Agent_id | Hypervisor | OS | Architecture | Version | Md5hash | Url |
+----------+------------+----+--------------+---------+---------+-----+
+----------+------------+----+--------------+---------+---------+-----+

If I do a nova-manage service list, it shows the services are fine:

root@kloud-controller1:~# nova-manage service list
No handlers could be found for logger "oslo_config.cfg"
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        kloud-controller2                    internal         enabled    :-)   2015-06-08 09:25:27
nova-conductor   kloud-controller2                    internal         enabled    :-)   2015-06-08 09:25:27
nova-consoleauth kloud-controller2                    internal         enabled    :-)   2015-06-08 09:25:25
nova-scheduler   kloud-controller2                    internal         enabled    :-)   2015-06-08 09:25:28
nova-consoleauth kloud-controller1                    internal         enabled    :-)   2015-06-08 09:25:27
nova-cert        kloud-controller1                    internal         enabled    :-)   2015-06-08 09:25:27
nova-scheduler   kloud-controller1                    internal         enabled    :-)   2015-06-08 09:25:29
nova-conductor   kloud-controller1                    internal         enabled    :-)   2015-06-08 09:25:34
nova-consoleauth kloud-controller3                    internal         enabled    :-)   2015-06-08 09:25:32
nova-scheduler   kloud-controller3                    internal         enabled    :-)   2015-06-08 09:25:24
nova-cert        kloud-controller3                    internal         enabled    :-)   2015-06-08 09:25:33
nova-conductor   kloud-controller3                    internal         enabled    :-)   2015-06-08 09:25:31
nova-compute     kloud-compute1                       nova             enabled    :-)   2015-06-08 09:25:31
nova-compute     kloud-compute3                       nova             enabled    :-)   2015-06-08 09:25:25
nova-compute     kloud-compute4                       nova             enabled    :-)   2015-06-08 09:25:31
nova-compute     kloud-compute2                       nova             enabled    :-)   2015-06-08 09:25:

cat openrc.sh

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=xxxxxx
export OS_AUTH_URL=http://172.x.x.x:35357/v3
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2
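With those variables sourced, a quick sanity check of the v3 endpoint (assuming python-openstackclient is installed):

source openrc.sh
openstack token issue    # should print a token id if the endpoint and credentials line up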

keystone --debug catalog

root@kloud-controller1:~# keystone --debug catalog
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://172.x.x.x:35357/v3/tokens
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 172.x.x.x
DEBUG:urllib3.connectionpool:"POST /v3/tokens HTTP/1.1" 404 93
DEBUG:keystoneclient.session:Request returned failure status: 404
Authorization Failed: The resource could not be found. (HTTP 404) (Request-ID: req-b03f47bc-23cb-4383-9671-2ce1279b6738)

I have turned off all the redundant VMs so that I have just one instance of each OpenStack service, and checked the nova-scheduler log and cinder-scheduler log.

root@kloud-controller1:~# tail -5 /var/log/nova/nova-scheduler.log
2015-06-08 10:37:39.746 1120 INFO nova.scheduler.host_manager [req-4721db32-89ac-47ad-9c54-ad37879572dc - - - - -] Successfully synced instances from host 'kloud-compute4 ...
(more)
2015-02-09 02:43:51 -0600 received badge  Famous Question (source)
2015-02-05 12:38:40 -0600 received badge  Notable Question (source)
2015-02-05 02:56:52 -0600 received badge  Enthusiast
2015-02-05 00:59:31 -0600 received badge  Popular Question (source)
2015-02-04 07:52:29 -0600 asked a question Glance Image Creation problem

I have been creating an HA environment based on this article:

http://behindtheracks.com/2014/04/ope...

Would anybody have any ideas? Is there a standard troubleshooting guide that one can use? From Google, it would appear the error is being thrown by the registry service.

When I get down to creating the Glance service, I get the following when creating an image:

root@controller1:~# glance image-create --name cirrosRaw --is-public=true --disk-format=raw --container-format=bare < cirros-0.3.1-x86_64-disk.raw
Request returned failure status.
500 Internal Server Error
Failed to upload image 1e8e8d47-befe-484c-9df9-1f86b18e8960
    (HTTP 500)

API log:
2015-02-04 13:45:39.928 29729 DEBUG glance.api.middleware.version_negotiation [-] Determining version of request: OPTIONS / Accept:  process_request /usr/lib/python2.7/dist-packages/glance/api/middleware/version_negotiation.py:44
2015-02-04 13:45:39.928 29729 DEBUG glance.api.middleware.version_negotiation [-] Using url versioning process_request /usr/lib/python2.7/dist-packages/glance/api/middleware/version_negotiation.py:57
2015-02-04 13:45:39.929 29729 DEBUG glance.api.middleware.version_negotiation [-] Unknown version. Returning version choices. process_request /usr/lib/python2.7/dist-packages/glance/api/middleware/version_negotiation.py:64
2015-02-04 13:45:39.929 29729 INFO glance.wsgi.server [-] 172.16.1.23 - - [04/Feb/2015 13:45:39] "OPTIONS / HTTP/1.0" 300 697 0.001705

Registry log:
2015-02-04 13:46:12.593 29712 DEBUG routes.middleware [f71f0d99-252f-4c97-aacf-43d52a5f49c9 - - - - -] Route path: '/', defaults: {'action': u'index', 'controller': <glance.common.wsgi.Resource object at 0x7fefadba5790>} __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:102
2015-02-04 13:46:12.593 29712 DEBUG routes.middleware [f71f0d99-252f-4c97-aacf-43d52a5f49c9 - - - - -] Match dict: {'action': u'index', 'controller': <glance.common.wsgi.Resource object at 0x7fefadba5790>} __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:103
2015-02-04 13:46:12.618 29712 INFO glance.registry.api.v1.images [f71f0d99-252f-4c97-aacf-43d52a5f49c9 - - - - -] Returning image list
2015-02-04 13:46:12.619 29712 INFO glance.wsgi.server [f71f0d99-252f-4c97-aacf-43d52a5f49c9 - - - - -] 172.16.1.23 - - [04/Feb/2015 13:46:12] "OPTIONS / HTTP/1.0" 200 222 0.028079
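One way to see whether HAProxy is in the path of the failure is to talk to a glance-api backend directly, bypassing the .90 VIP (the controller IP and $TOKEN are placeholders):

curl -i http://<controller1-ip>:9292/v1/images -H "X-Auth-Token: $TOKEN"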

glance-api.conf (the .90 address is a VIP on the HAProxy nodes):

[database]
connection = mysql://glance:Password12@172.16.1.90/glance
backend = sqlalchemy

[keystone_authtoken]

auth_uri = http://172.16.0.90:5000/v2.0
identity_uri = http://172.16.1.90:35357
#auth_host = 172.16.1.90
#auth_port = 35357
#auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = Password12

glance-registry.conf file :

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = True

# Address to bind the registry server
bind_host = 0.0.0.0

# Port the bind the registry server to
bind_port = 9191

log_file = /var/log/glance/registry.log

# Backlog requests when creating socket
backlog = 4096

# Enable Registry API versions individually or simultaneously
#enable_v1_registry = True
#enable_v2_registry = True

api_limit_max = 1000
limit_param_default = 25

[database]
connection = mysql://glance:Password12@172.16.1.90/glance

#backend = sqlalchemy

[keystone_authtoken]
auth_host = 172.16.1.90
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = Password12

HAPROXY config:
listen glance-api 172.16.1.90:9292
        balance source
        option tcpka
        option httpchk
        maxconn 10000
        server controller2 172.16 ...
(more)
2014-09-05 08:42:10 -0600 received badge  Famous Question (source)
2014-09-04 19:50:49 -0600 received badge  Famous Question (source)
2014-09-02 15:45:05 -0600 received badge  Notable Question (source)
2014-09-02 10:23:55 -0600 answered a question Problem launching VM on compute nodes

Hi,

I changed the ml2_conf.ini file as per that bug report and I get the following error. The file:

[root@compute1 ml2]# cat ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch,mlnx

# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan
# Example: type_drivers = flat,vlan,gre,vxlan

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
tunnel_id_ranges = 1:1000
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating
# ranges of GRE tunnel IDs that are available for tenant network allocation
# tunnel_id_ranges =

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True

[ovs]
local_ip = 172.16.1.6
tunnel_types = gre
enable_tunneling = True

The error:

Starting neutron-openvswitch-agent:                        [  OK  ]
[root@compute1 ml2]# service openvswitch restart
ovs-vswitchd is not running.
Killing ovsdb-server (6559)                                [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module FATAL: Module openvswitch not found.  [FAILED]
not removing bridge module because bridges exist (virbr0) ... (warning).
Enabling remote OVSDB managers                             [  OK  ]

2014-09-02 07:01:54 -0600 received badge  Popular Question (source)
2014-09-01 17:36:49 -0600 received badge  Editor (source)
2014-09-01 17:35:10 -0600 asked a question Problem launching VM on compute nodes

Hi, I have 2 compute nodes - an Icehouse install using Neutron networking. I cannot launch a VM from the dashboard: I can see one node, compute1, on the dashboard, and openstack-nova-compute is dead on the second, compute2.

I would like to deal with getting the VM deployed on compute1 first, if people can help me - then we can move on to getting the process working on compute2. Thanks.

Here is the nova.conf on compute1:

[DEFAULT]
auth_strategy = keystone
rpc_backend = qpid
qpid_hostname = controller
my_ip = *******
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = ******
novncproxy_base_url = http://controller:6080/vnc_auto.html
glance_host = controller
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = neutron
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

[database]
connection = mysql://nova:nova@controller/nova


[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_user = nova
admin_tenant_name = service
admin_password = nova

When I try to launch the VM, I get the following messages in the scheduler log:

2014-09-01 22:59:42.999 5547 ERROR nova.scheduler.filter_scheduler [req-e8265a50-e3cd-43a3-b8d7-7c2a083ea6a9 b684f4ee84ec4a929e43451eff2213a3 1e8bb30522b84783a2b321db35d31489] [instance: a44b06ae-bf26-485f-8e13-840029002f11] Error from last host: compute1 (node compute1): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1305, in _build_instance\n    set_access_ip=set_access_ip)\n', u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 393, in decorated_function\n    return function(self, context, *args, **kwargs)\n', u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1717, in _spawn\n    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u'  File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n    six.reraise(self.type_, self.value, self.tb)\n', u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1714, in _spawn\n    block_device_info)\n', u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2262, in spawn\n    write_to_disk=True)\n', u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3443, in to_xml\n    disk_info, rescue, block_device_info)\n', u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3259, in get_guest_config\n    flavor)\n', u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py", line 384, in get_config\n    _("Unexpected vif_type=%s") % vif_type)\n', u'NovaException: Unexpected vif_type=binding_failed\n']
2014-09-01 22:59:43.008 5547 WARNING nova.scheduler.driver [req-e8265a50-e3cd-43a3-b8d7-7c2a083ea6a9 b684f4ee84ec4a929e43451eff2213a3 1e8bb30522b84783a2b321db35d31489] [instance: a44b06ae-bf26-485f-8e13-840029002f11] Setting instance to ERROR state.

The compute log on compute1:

2014-09-01 22:59:42.952 29747 TRACE nova.compute.manager [instance: a44b06ae-bf26-485f-8e13-840029002f11]     disk_info, rescue, block_device_info)
2014-09-01 22:59:42.952 29747 TRACE nova.compute.manager [instance: a44b06ae-bf26-485f-8e13-840029002f11]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3259, in get_guest_config
2014-09-01 22:59:42.952 29747 TRACE nova.compute.manager [instance: a44b06ae-bf26-485f-8e13-840029002f11]     flavor)
2014-09-01 22:59:42.952 29747 TRACE nova.compute.manager [instance: a44b06ae-bf26-485f-8e13-840029002f11]   File "/usr/lib/python2.6 ...
(more)
2014-08-14 06:38:31 -0600 commented question What shared storage would I use for live migration of VM's?

Hi SGPJ, no issue really - just looking for the correct technology to use for shared storage. As I read on the Red Hat OpenStack site (though I can't seem to find the link now), NFS is not suitable for shared storage for production purposes once you start to scale - so I just want to get some opinions.

2014-08-14 06:38:16 -0600 received badge  Notable Question (source)
2014-08-14 04:16:11 -0600 answered a question What shared storage would I use for live migration of VM's?

Hi jproulx, many thanks for the response. So would you say, in your opinion, that Ceph would be the best choice at this stage? We currently run VMware and Hyper-V clouds, but wish to offer customers a choice of three hypervisors - KVM, ESX and Hyper-V - with OpenStack managing all three. Obviously I have a way to go before this is realized. I use an iSCSI SAN for the VMware and Hyper-V environments, so using an iSCSI SAN for all three would probably be easiest for backend storage, for management purposes.

2014-08-13 19:30:19 -0600 received badge  Popular Question (source)
2014-08-13 13:17:00 -0600 received badge  Student (source)