dedicated network between cinder and compute

asked 2018-05-15 05:03:52 -0500

AB239

Hello all,

Setup information:

OpenStack Version: Newton

1 KVM controller VM on Server 1 (one 1Gbps NIC attached at eth0)

1 KVM Cinder VM on Server 2. Server 2 has a 120G SSD attached as a raw block device (no file system created on the SSD). This VM has a 1Gbps NIC attached at eth0.

1 compute node on Server 3. The compute node has one 1Gbps NIC, which currently carries all types of traffic.

I want to create a dedicated 10Gbps network (connected via a crossover cable) between Server 2 (Cinder) and Server 3 (compute). I currently have a default 1Gbps connection, but I cannot get high read/write throughput to the SSD because of the 1Gbps network limit.
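To make this concrete, here is a rough sketch of what I have in mind for the point-to-point link itself, assuming Ubuntu-style /etc/network/interfaces; the interface name eth1 and the 10.10.10.0/30 subnet are just placeholders I picked for illustration:

    # On Server 2 (Cinder) -- second NIC on the crossover link
    auto eth1
    iface eth1 inet static
        address 10.10.10.1
        netmask 255.255.255.252

    # On Server 3 (compute) -- the other end of the crossover link
    auto eth1
    iface eth1 inet static
        address 10.10.10.2
        netmask 255.255.255.252

A /30 subnet gives exactly two usable host addresses, which should be enough for a back-to-back link like this.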

Requirements:

1) Please suggest whether I can create a 10Gbps network between the compute and Cinder nodes. Where do I have to configure the additional interface on each side?

2) I am using the LVM driver in Cinder. I can see there is an option iscsi_ip_address in cinder.conf in the reference config files available on openstack.org, but I cannot find a similar configuration parameter on the compute side. It looks like some configuration may be needed on the Neutron side, but I am unsure at this point. (See the sketch after this list for what I would try in cinder.conf.)

3) Is there any documentation where an end-to-end OpenStack deployment with multiple NICs is explained?
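For item 2, this is a minimal sketch of what I would try in cinder.conf on Server 2, assuming the 10G interface has the placeholder address 10.10.10.1 from the snippet above; iscsi_ip_address is the option from the Newton reference configuration, and the backend name lvm is my own choice:

    [DEFAULT]
    enabled_backends = lvm

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    iscsi_protocol = iscsi
    iscsi_helper = tgtadm
    # Advertise iSCSI targets on the dedicated 10G link instead of eth0
    iscsi_ip_address = 10.10.10.1

My understanding is that Nova learns the target address from the connection info Cinder returns at attach time, so there may be no matching option needed on the compute side, but I would like confirmation of that.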

Looking forward to your replies.

TIA


Comments

How about https://ask.openstack.org/en/question... (thank Google, not me:)

Bernd Bausch ( 2018-05-15 05:20:47 -0500 )