Hardware requirements

While Warren runs on most commodity hardware, minimum requirements are still in place to ensure a redundant and reliable IaaS cluster.

The hardware specifications below are a good indication of a production-grade setup to start with and grow upon.

Recommended hardware

The following hardware recommendations suit most commercial use cases, including launching a regional cloud business.
Control
3 x Warren Control Nodes, for a resilient IaaS solution

  • CPU: 2 x Intel Xeon E5-2xxx (or better with a minimum of 10 cores)
  • RAM: 128GB ECC DDR4+ (or more)
  • Storage (OS): 2 x 512GB SSD (RAID 1)
  • NIC (management): 1 x 1Gbit/s
  • NIC: 2 x 25Gbit/s
Compute
3 x Compute nodes, used to run customer workloads (a minimum of 3 nodes enables live migration in case of hardware failure)

  • CPU: 2 x Intel Xeon Silver/Gold/Platinum or AMD EPYC (a minimum of 20 cores per CPU for an optimal RAM-to-vCPU ratio)
  • RAM: 512GB ECC DDR4+ (optimal)
  • Storage (OS): 2 x 256GB SSD (RAID 1)
  • NIC (management): 1 x 1Gbit/s
  • NIC: 2 x 25Gbit/s
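The 20-core minimum keeps the RAM-to-vCPU ratio balanced. As a rough illustration (the figures below are assumptions for a typical node, not Warren requirements), a dual-socket 20-core node with SMT exposes 80 hardware threads, leaving 6.4 GB of RAM per vCPU before any overcommit:

```python
# Illustrative sizing arithmetic for one compute node.
# Assumed figures (not taken from Warren documentation):
SOCKETS = 2            # 2 x Xeon Silver/Gold/Platinum or AMD EPYC
CORES_PER_SOCKET = 20  # the recommended minimum per CPU
THREADS_PER_CORE = 2   # SMT / Hyper-Threading enabled
RAM_GB = 512           # the recommended RAM per node

# Hardware threads the hypervisor can hand out as vCPUs.
vcpu_threads = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE

# RAM available per vCPU before any CPU overcommit.
ram_per_vcpu_gb = RAM_GB / vcpu_threads

print(vcpu_threads, ram_per_vcpu_gb)
```

With fewer cores per CPU the same 512 GB of RAM would sit idle behind too few schedulable threads, which is why the core minimum and the RAM figure go together.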
Storage
3 x Ceph nodes, for distributed block storage resilient to most hardware failures

  • CPU: 2 x Intel Xeon E5-2xxx (or similar with a minimum of 12 cores)
  • RAM: 128GB ECC DDR4+ (or more)
  • Storage (OS): 2 x 512GB NVMe (MON data, WAL, RocksDB)
  • Storage (OSD): 4 x 3.84TB NVMe (Block storage)
  • NIC (management): 1 x 1Gbit/s
  • NIC: 2 x 25Gbit/s
  • Please refer to Croit documentation for planning out your Ceph storage cluster.
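For a rough feel of what this tier yields, usable capacity can be estimated from the raw OSD capacity. This is a back-of-the-envelope sketch; the replication factor and fill target below are assumptions based on common Ceph defaults, not Warren-specific values:

```python
# Back-of-the-envelope usable capacity for the block storage tier.
NODES = 3           # 3 x Ceph nodes, as recommended above
OSDS_PER_NODE = 4   # 4 x 3.84TB NVMe per node
OSD_TB = 3.84
REPLICAS = 3        # assumed: default Ceph replicated pool size
FILL_TARGET = 0.85  # assumed: headroom before Ceph nearfull warnings

raw_tb = NODES * OSDS_PER_NODE * OSD_TB      # total raw capacity
usable_tb = raw_tb / REPLICAS * FILL_TARGET  # what you can actually sell

print(round(raw_tb, 2), round(usable_tb, 2))
```

So roughly 46 TB of raw NVMe translates to on the order of 13 TB of sellable block storage with 3x replication, which is worth keeping in mind when pricing the tier.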
Object Storage (S3, optional)
2 x Ceph nodes (in addition to the standard block storage nodes), for distributed object storage resilient to most hardware failures

  • CPU: 2 x Intel Xeon E5-2xxx (or similar with a minimum of 12 cores)
  • RAM: 128GB ECC DDR4+ (or more)
  • Storage (OS): 2 x 512GB NVMe (MON data, WAL, RocksDB)
  • Storage (OSD): 4 x 18TB SAS (Object storage)
  • NIC (management): 1 x 1Gbit/s
  • NIC: 2 x 25Gbit/s
Network
2 x ToR switches & 1 x management switch, for a redundant high-speed network

  • 2 x ToR switch, 25G (48-port)
  • 1 x management switch, 1G (48-port)

Why does Warren need 3 physical control nodes to operate?

Since the software is built exclusively for production-grade setups, the Warren team has decided to deliver only highly redundant and resilient deployments, even when the initial purpose of the setup is a pilot or proof of concept.

What actually runs on the 3 physical control nodes?

Delivering a turnkey IaaS solution involves much more than, for instance, vanilla OpenStack or CloudStack provides. The following breakdown gives a high-level overview of some of the components running on the 3 Warren Control Nodes.

The NixOS operating system is used to declaratively deploy identical cloud systems for maximal fault tolerance. Every time a new release is deployed, the prior version is discarded and every cloud provider receives the new features with zero downtime (usually twice a month).
Some of the open-source software hosted on the control plane:
  • Tungsten Fabric – Software Defined Networking.
  • Ceph – Software Defined Storage.
  • Prometheus, Loki & Grafana – Monitoring managed workloads remotely and locally.
  • Apache Mesos & Marathon – Compute cluster manager that handles workloads in a distributed environment through dynamic resource sharing and isolation.
  • Kong – Gateway for presenting endpoints of most Warren components from a single root API.
  • Consul – Service discovery & key-value store.
Warren platform components / microservices:
  • Base operator – Virtual machine and compute instance management.
  • User Interface – Each UI user gets an individual UI Docker container of JupyterLab for isolation.
  • User Resource Interface – User management, token management and managed services management.
  • Storage Manager – Blocks storage management, object storage management.
  • Charging Engine – Collecting user resource usage (currently at hour precision).
  • Network Manager – VPC, FIP and Load Balancer management.
  • Payment – Invoices, payment gateways, pricing management.
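Since the Charging Engine meters usage at hour precision, a usage interval presumably maps to whole billable hours before the Payment component prices it. A minimal sketch of that idea, assuming intervals are rounded up to the next full hour (the actual Warren metering rules may differ):

```python
import math
from datetime import datetime, timezone

def billable_hours(start: datetime, end: datetime) -> int:
    """Round a usage interval up to whole hours (hour-precision metering)."""
    seconds = (end - start).total_seconds()
    return max(0, math.ceil(seconds / 3600))

# A VM that ran for 2h10m is billed for 3 full hours at hour precision.
start = datetime(2024, 1, 1, 10, 0, tzinfo=timezone.utc)
end = datetime(2024, 1, 1, 12, 10, tzinfo=timezone.utc)
print(billable_hours(start, end))
```

The `billable_hours` helper is hypothetical; it only illustrates what "hour precision" implies for a metered interval.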

Rack planning

Configurations:

  • BASE – 12 nodes, standalone rack
  • GROWTH – 32 nodes, standalone rack
  • SCALE – 31 nodes, 3-rack minimum

Each configuration combines aggregation nodes, control nodes, compute nodes, network nodes, block storage nodes and object storage nodes (optional).

Routers are not included.
Juniper routers are required for IPv6 support.

Have a question or want to learn more about Warren Metal Stack?