I would advise testing TripleO QuickStart using your box as the VIRTHOST. Any Core2Duo PC is good enough as the WKS (workstation).
See for instance http://lxer.com/module/newswire/view/...
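Preparing the WKS is just a matter of cloning tripleo-quickstart and pulling in its dependencies; a minimal sketch, assuming the upstream repo and passwordless ssh to root on the VIRTHOST:
# On the WKS: clone upstream tripleo-quickstart and install its dependencies
git clone https://github.com/openstack/tripleo-quickstart.git
cd tripleo-quickstart
bash quickstart.sh --install-deps
# Passwordless ssh to root@$VIRTHOST must work before starting a deployment
ssh root@$VIRTHOST uname -a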
Just deploy
3 Ceph VMs to get the standard 3 replicas on 3 OSD nodes. Each HA controller in the PCS controllers' cluster would
run its own Ceph Monitor daemon. To deploy the overcloud :-
#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
--control-scale 3 --compute-scale 5 --ceph-storage-scale 3 \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /usr/share/openstack-tripleo-heat-templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml
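The network_env.yaml passed above lives outside the templates tree and is site specific; a rough sketch of what it could contain, assuming the quickstart default 192.0.2.0/24 ctlplane and a purely illustrative external VLAN (all CIDRs, VLAN IDs and addresses here are assumptions, not values from this setup):
# Illustrative network_env.yaml -- adjust every value to your environment
parameter_defaults:
  # ctlplane details as configured by the undercloud
  ControlPlaneSubnetCidr: "24"
  ControlPlaneDefaultRoute: 192.0.2.1
  EC2MetadataIp: 192.0.2.1
  DnsServers: ["8.8.8.8", "8.8.4.4"]
  # External (floating IP) network carried on a VLAN of the single NIC
  ExternalNetCidr: 10.0.0.0/24
  ExternalAllocationPools: [{"start": "10.0.0.10", "end": "10.0.0.50"}]
  ExternalInterfaceDefaultRoute: 10.0.0.1
  ExternalNetworkVlanID: 10
  NeutronExternalNetworkBridge: "''"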
Using an Ansible template (for instance):
# Sample ha.yml deploying 5 compute nodes and 3 ceph nodes to get a healthy cluster,
# as long as hardware resources are sufficient
control_memory: 7000
compute_memory: 6700
undercloud_memory: 12000
# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 4
# This enables TLS for the undercloud which will also make haproxy bind to the
# configured public-vip and admin-vip.
undercloud_generate_service_certificate: True
# Create three controller nodes, five compute nodes and three ceph nodes.
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: control_1
    flavor: control
  - name: control_2
    flavor: control
  - name: compute_0
    flavor: compute
  - name: compute_1
    flavor: compute
  - name: compute_2
    flavor: compute
  - name: compute_3
    flavor: compute
  - name: compute_4
    flavor: compute
  - name: ceph_0
    flavor: ceph
  - name: ceph_1
    flavor: ceph
  - name: ceph_2
    flavor: ceph
# We intend to test introspection in the VENV
step_introspect: true
# Tell tripleo about our environment.
network_isolation: true
extra_args: >-
  --control-scale 3
  --compute-scale 5
  --ceph-storage-scale 3
  --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  --ntp-server pool.ntp.org
  -e {{overcloud_templates_path}}/environments/storage-environment.yaml
test_ping: true
enable_pacemaker: true
run_tempest: false
Start the deployment from the WKS with :-
[user1@fedora24wks tripleo-quickstart]$ bash quickstart.sh -R newton --config ./config/general_config/ha.yml $VIRTHOST
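When the run completes, the HA controllers and the Ceph layout can be sanity-checked from the undercloud VM; a rough sketch (heat-admin and the 192.0.2.0/24 ctlplane addressing are quickstart defaults, the controller IP below is illustrative):
# On the undercloud VM
source /home/stack/stackrc
nova list                                  # overcloud nodes and their ctlplane IPs
# Pick any controller IP from the listing and check the PCS cluster
ssh heat-admin@192.0.2.10 'sudo pcs status'
# Expect 3 Ceph monitors (controllers) and 3 OSDs (ceph nodes) reported healthy
ssh heat-admin@192.0.2.10 'sudo ceph -s'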
How much RAM? With an Intel Xeon (8 cores + HT) and 32-48 GB RAM, I would use the box as the VIRTHOST for a TripleO QuickStart VENV deployment, with /home/stack created on NVMe drives. The CTLPLANE is already a routable network in QuickStart.
Another idea: set the box up on a Linux bridge and proceed with a packstack VENV deployment, using the office network as the external network for the VMs hosting the RDO nodes, with /var/lib/libvirt/images on NVMe drives and KSM and nested KVM enabled. The VIRTHOST OS in this case might be F25/F24.
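Checking and enabling nested KVM and KSM on the VIRTHOST can be done roughly as below (Intel CPU assumed; package and service names may differ slightly between F24/F25 and CentOS):
# Nested KVM: expect Y (or 1)
cat /sys/module/kvm_intel/parameters/nested
# If disabled, set it persistently and reload the module while no VMs run
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm_intel.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
# KSM merges identical memory pages across the many undercloud/overcloud VMs
sudo systemctl enable --now ksm ksmtuned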
@dbaxps, I have 28 cores with hyper-threading disabled because this was formerly an HPC node (I am thinking of enabling hyper-threading) and 512 GB of RAM.
Why do I suggest the TripleO QS VENV? RH will push TripleO forward onto bare metal whatever it takes.
Packstack will be left in the past.