
Create a Nova compute instance with virtual PCI devices (NICs)

Hello,

I'd like to create an instance to test an application that uses DPDK as its driver.

The instance basically needs a default SSH port to manage it (it doesn't technically even need internet access, although that could be handy). But it should have multiple NICs: not just interface aliases as shown by ifconfig, but real PCI devices that show up in lspci.

Doing this with a plain virtual machine is rather "straightforward": I can edit the Vagrantfile to define all these NICs and even map them to the correct PCI domain, bus, slot, and so on. On my host computer I can just spawn two of these virtual machines, create a bridge on the host, and link them together.
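
For reference, this is roughly the equivalent of what my Vagrant setup does, expressed directly against libvirt (which is also what Nova's KVM driver uses under the hood). It's only a sketch; the domain name "testvm", the bridge "br0" and the PCI slot are placeholders:

```python
# Rough sketch: attach one extra emulated NIC at a fixed PCI address to an
# existing libvirt/KVM domain. "testvm", "br0" and the slot number are
# placeholders, nothing OpenStack-specific.
import libvirt

NIC_XML = """
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <!-- pin the device to a specific PCI address so it shows up predictably in lspci -->
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</interface>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("testvm")
# AFFECT_CONFIG persists the NIC in the domain XML; add VIR_DOMAIN_AFFECT_LIVE
# to hot-plug it into a running guest as well.
dom.attachDeviceFlags(NIC_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```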

However, doing all of this on a host machine is rather tedious. If possible I'd like to automate it with CI (Ansible, most likely) and let the CI create these instances on OpenStack. The CI would also create the NICs, assign them the correct PCI settings, and link them together outside of the virtual machines through a virtual bridge.
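
The OpenStack side of that automation would look something like the following, driven from the CI job. Again just a sketch using the openstacksdk; the cloud name, image, flavor, network names and port count are all assumptions on my part:

```python
# Rough sketch of what the CI job would drive through the openstacksdk:
# a management NIC plus several extra data-plane ports on one test network.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

data_net = conn.network.create_network(name="dpdk-test-net")
conn.network.create_subnet(
    network_id=data_net.id, ip_version=4,
    cidr="192.168.100.0/24", name="dpdk-test-subnet",
)

# One Neutron port per extra NIC the guest should see in lspci
ports = [
    conn.network.create_port(network_id=data_net.id, name=f"dpdk-port-{i}")
    for i in range(4)
]

mgmt_net = conn.network.find_network("private")  # existing management/SSH network

server = conn.compute.create_server(
    name="dpdk-test-vm",
    image_id=conn.image.find_image("ubuntu-22.04").id,
    flavor_id=conn.compute.find_flavor("m1.medium").id,
    networks=[{"uuid": mgmt_net.id}] + [{"port": p.id} for p in ports],
)
conn.compute.wait_for_server(server)
```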

I'm aware that I'll lose almost all of the speed benefit of DPDK this way, even though speed is the reason we're using DPDK in the first place. However, this is purely a test setup; we just need to check that the application we're writing works properly.

So far I've found ways to do PCI passthrough from the host to a nova compute instance; however, I want to create an entirely new virtual PCI device (NIC) and attach it to the nova compute instance.
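
For the attach part, I've been experimenting with attaching extra Neutron ports to a running instance (the API equivalent of `openstack server add port`). A rough sketch with the openstacksdk, names carried over from the sketch above and only placeholders:

```python
# Rough sketch: create one more Neutron port and attach it to the running
# instance, so the guest gets an additional virtio NIC.
import openstack

conn = openstack.connect(cloud="mycloud")
server = conn.compute.find_server("dpdk-test-vm")
net = conn.network.find_network("dpdk-test-net")

port = conn.network.create_port(network_id=net.id, name="dpdk-extra-port")
conn.compute.create_server_interface(server, port_id=port.id)
```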

Almost all virtualization software has this type of setup; just look at Vagrant, VirtualBox, and so on. They all have options to add more network adapters, specify their types, etc.

Any suggestions on where I should look for documentation on this?

Best regards, Mathieu