RESOURCES

Hardware Recommendation

Clear, production-ready baseline for control plane, compute, storage and networking. Start with a single rack, then grow by adding compute and storage nodes as demand increases. This configuration provides high availability and fits most general-purpose workloads.

RECOMMENDED SETUP

We recommend a disaggregated setup

Separation of control, compute and storage keeps the platform reliable, easy to grow, and simpler to operate over time.

CONTROL PLANE

Always 3 nodes

Runs the Warren control plane and core services. Three nodes ensure quorum, safe upgrades, and maintenance without tenant disruption.

COMPUTE PLANE

Starts at 2 nodes

Hosts user workloads (VMs and services). Live migration is already available at 2 nodes. For licensed Windows workloads, we recommend starting with 4 compute nodes for licensing cost efficiency.
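As a rough illustration of compute-plane sizing, the sketch below estimates sellable vCPUs per node. The core count matches the recommended compute-node minimum in the specifications table; the SMT factor and 4:1 oversubscription ratio are planning assumptions to tune for your workload mix, not Warren defaults.

```python
# Illustrative vCPU capacity estimate for one compute node.
# SMT and the oversubscription ratio are assumptions, not spec values.
SOCKETS = 2
CORES_PER_SOCKET = 20  # recommended minimum per the specifications table
SMT = 2                # threads per core (hyper-threading assumed enabled)
OVERSUBSCRIPTION = 4   # vCPU : physical-thread ratio (assumption)

physical_threads = SOCKETS * CORES_PER_SOCKET * SMT
sellable_vcpus = physical_threads * OVERSUBSCRIPTION
print(physical_threads, sellable_vcpus)  # 80 320
```

Lower the oversubscription ratio toward 1:1 for latency-sensitive or CPU-bound tenants.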

STORAGE PLANE

Starts at 3 nodes

Provides block (and optional S3 object) storage for all compute nodes. Block-only storage starts at 3 nodes; to also offer S3 object storage, plan for at least 5 nodes.

Specifications

Warren is flexible with hardware, but we recommend certain minimum thresholds to achieve predictable performance and availability. Use this as a baseline and adjust for your workload mix.

| Component | Control node | Compute node | Storage node |
| --- | --- | --- | --- |
| CPU | 2 x Intel Xeon E5-2xxx (or better, with a minimum of 10 cores) | 2 x Intel Xeon Silver/Gold/Platinum or AMD EPYC (minimum 20 cores per CPU for an optimal RAM-to-vCPU ratio) | 2 x Intel Xeon E5-2xxx (or similar, with a minimum of 12 cores) |
| RAM | 128GB ECC DDR4+ (or more) | 512GB ECC DDR4+ (optimal) | 128GB ECC DDR4+ (or more) |
| Storage (OS) | 2 x 512GB SSD (RAID 1) | 2 x 256GB SSD (RAID 1) | |
| NIC (management) | 1 x 1Gbit/s | 1 x 1Gbit/s | 1 x 1Gbit/s |
| NICs (data) | 2 x 25Gbit/s | 2 x 25Gbit/s | 2 x 25Gbit/s |
| Ceph MON data, WAL, RocksDB | - | - | 2 x 512GB NVMe |
| Ceph block storage (OSD) | - | - | 4 x 3.84TB NVMe |
| Ceph S3 object storage (OSD) * | - | - | 4 x 18TB SAS/SATA |
| Number of nodes | 3 | 2+ (scale as needed) | 3+ (block only), 5+ (block + object) |

* SAS/SATA disks for object storage are only needed if you plan to offer S3.
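To translate the table into usable capacity, the sketch below estimates the minimum block storage plane (3 nodes x 4 x 3.84TB NVMe from the table). The replication factor and near-full fraction are Ceph's common defaults, stated here as assumptions; your pool configuration may differ.

```python
# Rough usable-capacity estimate for the minimum block storage plane.
# Node and OSD counts come from the specifications table; replication
# factor and near-full fraction are assumed Ceph defaults.
NODES = 3
OSDS_PER_NODE = 4
OSD_TB = 3.84
REPLICATION = 3   # replicated pool size (assumed default)
NEARFULL = 0.85   # usable fraction before near-full warnings (assumed)

raw_tb = NODES * OSDS_PER_NODE * OSD_TB
usable_tb = raw_tb / REPLICATION * NEARFULL
print(f"raw {raw_tb:.2f} TB, usable ~{usable_tb:.2f} TB")
```

With 3x replication, roughly 46TB of raw NVMe yields on the order of 13TB of comfortably usable block capacity; erasure coding improves this ratio at larger node counts.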

Network equipment

The standard setup fits into a single rack. As you grow, you can stack additional switches and repeat the same pattern across racks or locations.

| Component | Specification |
| --- | --- |
| Top-of-rack | 2 x 25Gbit/s ToR, 48-port each (stacked) |
| Management | 1 x 1Gbit/s, 48-port |
| Uplinks | Dual 100G (or according to DC design and upstream capacity) |
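A quick sanity check on the rack design above: comparing total access bandwidth to uplink bandwidth gives the oversubscription ratio. Treating all 48 access ports of one ToR as fully utilised is a worst-case assumption; east-west storage traffic typically stays within the rack.

```python
# ToR uplink oversubscription for the single-rack baseline.
# Port counts and speeds come from the network equipment table;
# full utilisation of every access port is a worst-case assumption.
ACCESS_PORTS = 48
ACCESS_GBPS = 25
UPLINKS = 2
UPLINK_GBPS = 100

downstream_gbps = ACCESS_PORTS * ACCESS_GBPS  # 1200 Gbit/s per switch
upstream_gbps = UPLINKS * UPLINK_GBPS         # 200 Gbit/s
print(downstream_gbps / upstream_gbps)        # 6.0 -> 6:1 oversubscription
```

A 6:1 ratio is acceptable for many general-purpose clouds; add uplinks if tenants drive sustained north-south traffic.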

ARCHITECTURE OVERVIEW

How control, compute and storage work together

Each node group has a distinct role. Separating them keeps failure domains small, simplifies upgrades and lets you scale capacity where it is actually needed.

Control nodes

Run the Warren control plane, APIs and management services.

Store cluster configuration, metadata, billing and API endpoints.

Orchestrate cluster-wide tasks such as upgrades, monitoring and scheduling.

Always deployed as 3 nodes for high availability and safe rolling upgrades.

Compute nodes

Run tenant workloads: virtual machines and managed services.

Consume block storage from the storage cluster over high-speed networks.

Scale linearly as usage grows; capacity can be added node-by-node.

Can be specialised into pools of high-memory, high-CPU, GPU or other instance types as needed.

Storage nodes

Run the distributed storage cluster for block and optional S3 object storage.

Use NVMe/SSD for performance-critical block storage and HDD/SAS for deep capacity.

Maintain availability during disk or node failures through replication or erasure coding.

Increase capacity and throughput by adding storage nodes without any interruption.

Planning at scale

Rack planning for growth

If you are planning a larger deployment, including multi-rack or high-density layouts, our team can help review the design and ensure smooth scaling.

| Configuration | Number of server nodes |
| --- | --- |
| Minimum | 8 nodes (3 control + 2 compute + 3 storage, matching the baseline above) |
| Fully featured | 12 nodes |
| Multi-rack | Depends on the design |

These configurations combine aggregation nodes *, control nodes, compute nodes, network nodes, storage nodes and, optionally, object storage nodes.

* One in every other rack in multi-rack deployments.

Alternatives

Planning a hyper-converged infrastructure (HCI) or other special setup?

If you are designing an HCI cluster, mixing GPU nodes, or re-using existing hardware, our team can help validate the design and trade-offs for Warren.