
Sahara, via Heat services, forks a Hadoop cluster of at least 4 VMs (Vanilla 2.7.1) or more.
Targeting this task, the first thing I would avoid is Ubuntu 14.04. I would attempt to set up a 3-node
cluster on RDO Mitaka on CentOS 7.2 via a pretty simple answer file:

1. Controller/Network + Sahara Node + Compute Node (the more powerful, the better)

or

2. Controller/Network + Compute/SAHARA (the more powerful, the better)

Setting in the answer file:

CONFIG_SAHARA_INSTALL=y
CONFIG_HEAT_INSTALL=y
CONFIG_CONTROLLER_HOST=IP1
CONFIG_COMPUTE_HOST=IP2
CONFIG_NETWORK_HOST=IP1
CONFIG_UNSUPPORTED=y
CONFIG_SAHARA_HOST=IP2 (or IP3, matching Config 1)
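
A sketch of how these settings could be patched into a generated answer file. The stub file below merely stands in for the real output of packstack --gen-answer-file; set_opt is a hypothetical helper, and IP1/IP2/IP3 are placeholders for real addresses:

```shell
# Stub answer file; in a real deployment it would come from:
#   packstack --gen-answer-file=answers.txt
cat > answers.txt <<'EOF'
CONFIG_SAHARA_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_CONTROLLER_HOST=192.168.1.1
CONFIG_COMPUTE_HOST=192.168.1.1
CONFIG_NETWORK_HOST=192.168.1.1
CONFIG_UNSUPPORTED=n
CONFIG_SAHARA_HOST=192.168.1.1
EOF

# Hypothetical helper: replace KEY=... with KEY=VALUE in the answer file
set_opt() {
  sed -i "s/^$1=.*/$1=$2/" answers.txt
}

set_opt CONFIG_SAHARA_INSTALL y
set_opt CONFIG_HEAT_INSTALL y
set_opt CONFIG_CONTROLLER_HOST IP1
set_opt CONFIG_COMPUTE_HOST IP2
set_opt CONFIG_NETWORK_HOST IP1
set_opt CONFIG_UNSUPPORTED y
set_opt CONFIG_SAHARA_HOST IP3

cat answers.txt
# The deployment itself would then be: packstack --answer-file=answers.txt
```
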

Worst-case scenario: I cannot split the Sahara host from the Controller (CONFIG_UNSUPPORTED doesn't work
for CONFIG_SAHARA_HOST), then:

Controller/SAHARA + Network + Compute

Updating the answer file correspondingly.
In both scenarios I would run yum install centos-release-openstack-mitaka on the Compute node
so it can run qemu-kvm-ev 2.3.0, to support the Hadoop VM cluster running on Compute.
I cannot explain exactly why, but I would start by attempting Config 2.
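
As a provisioning sketch, the Compute-node step above might look like the following, run as root; note that the exact qemu-kvm-ev package name and binary path are assumptions based on the CentOS Virt SIG packaging, so this is not runnable as-is in an arbitrary environment:

```shell
# On the Compute node: enable the RDO Mitaka repository,
# which pulls in the Virt SIG packages
yum -y install centos-release-openstack-mitaka

# Install the enhanced-virtualization QEMU build (replaces stock qemu-kvm)
yum -y install qemu-kvm-ev

# Verify the emulator version nova-compute will use (expecting 2.3.0)
/usr/libexec/qemu-kvm --version
```
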



In general, if CONFIG_UNSUPPORTED=y works for CONFIG_SAHARA_HOST, the classic design would be Config 1.