# Revision history [back]

1) What is the function of both?

The terms "compute node" and "controller" describe a fairly simplistic deployment model with only two roles:

• A "compute" node runs a hypervisor and the nova-compute service. This is where virtual instances actually run.

• The "controller" is basically everything else -- public-facing APIs, web interface, scheduler, database server, message queue, etc.
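The two-role split can be sketched as a simple mapping. This is a minimal illustration, not a real deployment manifest: the service names are actual OpenStack components, but the grouping is just the simplistic model described above.

```python
# Hypothetical two-role model: which role hosts which service.
# Service names are real OpenStack components; the grouping is
# only the simple "compute vs. controller" split, not a recommendation.
ROLES = {
    "compute": ["nova-compute", "kvm-hypervisor"],
    "controller": [
        "nova-api", "nova-scheduler",   # public APIs, scheduling
        "horizon",                      # web interface
        "mysql", "rabbitmq",            # database server, message queue
    ],
}

def role_of(service):
    """Return which role hosts a given service in this simple model."""
    for role, services in ROLES.items():
        if service in services:
            return role
    raise ValueError("unknown service: " + service)
```

So in this model, anything that isn't the hypervisor side ends up lumped onto the controller.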

2) Do we always need both of them?

It really depends on what you are trying to do. You need all of the services that comprise these two roles, but they don't need to be organized like this.

It's pretty common to have a "compute" role in any model. On the other hand, the stuff on the "controller" is often spread across several systems:

• Dedicated network hosts to isolate tenant traffic
• Dedicated storage nodes providing glance/cinder/etc., or providing a shared filesystem like NFS or GlusterFS for use by the rest of the cluster
• Dedicated database servers
• Dedicated web servers to clearly separate "ui" from "api services"

Etc.
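As a concrete (and entirely hypothetical) example of spreading the controller role out, here is one possible multi-host layout. The hostnames are invented; the services are real OpenStack pieces:

```python
# Hypothetical split of the "controller" role across dedicated hosts.
# Hostnames are made up for illustration; nothing here is prescriptive.
LAYOUT = {
    "net01":     ["neutron-server", "neutron-l3-agent"],  # isolate tenant traffic
    "storage01": ["glance-api", "cinder-volume"],         # image/block storage
    "db01":      ["mysql"],                               # dedicated database server
    "web01":     ["horizon"],                             # UI separated from APIs
    "api01":     ["nova-api", "keystone"],                # API services
}

# Every controller-side service still exists somewhere;
# it is just no longer on a single box.
all_services = {s for services in LAYOUT.values() for s in services}
```

The point is that the controller is a bundle of services, not a fixed machine: you can slice that bundle however your goals dictate.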

How you structure your deployment depends heavily on your goals: do you need HA? Are you trying to optimize storage performance? Are you trying to minimize cost?
