Performance at Scale: MinIO Pushes Past 1.4 Terabits per Second with 256 NVMe Drives

It is no secret that MinIO is fast. We routinely publish our benchmarks and have released comparison work against HDFS and AWS (Spark + Presto) in addition to our HDD and NVMe numbers.

We recently discovered that AWS offers NVMe instances larger than any we had seen before. We provisioned 32 i3en.24xlarge instances, each with 8 NVMe drives, for a total of 256 drives. This is four times larger than the 8-node setup we used for our initial benchmarks.

Once again, MinIO selected the S3-benchmark by wasabi-tech to run these tests. The tool benchmarks from a single client against a single endpoint, and during our evaluation it produced consistent, reproducible results across multiple runs.
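For a sense of what such a single-client test looks like, here is a minimal Go sketch of the same idea built on the minio-go SDK rather than the wasabi-tech tool itself: a fixed number of goroutines PUT and then GET fixed-size objects against one endpoint and report aggregate throughput. The endpoint, credentials, bucket name, object size, and thread count below are placeholders, not the values from our runs.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"sync"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

const (
	endpoint   = "minio.example.com:9000" // placeholder endpoint
	accessKey  = "ACCESS_KEY"             // placeholder credentials
	secretKey  = "SECRET_KEY"
	bucket     = "benchmark" // assumed to already exist
	objectSize = 4 << 20     // 4 MiB objects
	threads    = 32          // parallel client threads
	objsPerThr = 64          // objects handled by each thread
)

func main() {
	client, err := minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(accessKey, secretKey, ""),
		Secure: false,
	})
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	payload := bytes.Repeat([]byte("x"), objectSize)

	// run executes op across all threads/objects and prints aggregate throughput.
	run := func(name string, op func(thread, obj int) error) {
		start := time.Now()
		var wg sync.WaitGroup
		for t := 0; t < threads; t++ {
			wg.Add(1)
			go func(t int) {
				defer wg.Done()
				for o := 0; o < objsPerThr; o++ {
					if err := op(t, o); err != nil {
						fmt.Println(name, "error:", err)
					}
				}
			}(t)
		}
		wg.Wait()
		secs := time.Since(start).Seconds()
		moved := float64(threads*objsPerThr) * float64(objectSize)
		fmt.Printf("%s: %.2f GB/s\n", name, moved/secs/1e9)
	}

	// PUT phase: upload fixed-size objects.
	run("PUT", func(t, o int) error {
		key := fmt.Sprintf("obj-%d-%d", t, o)
		_, err := client.PutObject(ctx, bucket, key,
			bytes.NewReader(payload), int64(objectSize), minio.PutObjectOptions{})
		return err
	})

	// GET phase: read the same objects back and discard the data.
	run("GET", func(t, o int) error {
		key := fmt.Sprintf("obj-%d-%d", t, o)
		obj, err := client.GetObject(ctx, bucket, key, minio.GetObjectOptions{})
		if err != nil {
			return err
		}
		defer obj.Close()
		_, err = io.Copy(io.Discard, obj)
		return err
	})
}
```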

Architecture


Measuring JBOD Performance

JBOD performance with O_DIRECT was measured using iozone, a filesystem benchmark tool that generates and measures read and write performance, among other operations. The iozone command was run with 64 parallel threads, a 4 MB block size, and the O_DIRECT option.
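As an illustration only, a run along these lines could be driven with a small Go wrapper like the sketch below. The iozone flags shown (-I for O_DIRECT, -t for the thread count, -r for the record size, -s for the per-thread file size, -i 0 and -i 1 for the write and read tests, -F for the target file list) follow the parameters described above, but the mount points, file size, and file layout are assumptions rather than the exact command we used.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumed layout: eight NVMe drives mounted at /data1 .. /data8,
	// with the 64 iozone threads spread evenly across them.
	args := []string{
		"-I",       // use O_DIRECT, bypassing the page cache
		"-t", "64", // 64 parallel threads (throughput mode)
		"-r", "4m", // 4 MB record (block) size
		"-s", "4g", // per-thread file size (assumption)
		"-i", "0", // test 0: write/rewrite
		"-i", "1", // test 1: read/reread
		"-F", // explicit per-thread file list follows
	}
	for t := 0; t < 64; t++ {
		drive := (t % 8) + 1
		args = append(args, fmt.Sprintf("/data%d/iozone.%d", drive, t))
	}

	cmd := exec.Command("iozone", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "iozone failed:", err)
		os.Exit(1)
	}
}
```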

The maximum JBOD performance measured on a single node was 23.98 GB/sec of read throughput and 12.939 GB/sec of write throughput, combining the throughput of all eight drives.

Network Performance

The network hardware on these nodes allows a maximum of 100 Gbit/sec. 100 Gbit/sec equates to 12.5 Gbyte/sec (1 Gbyte = 8 Gbit).

Therefore, the maximum throughput that can be expected from each of these nodes is 12.5 Gbyte/sec. Since a single node's drives can read at nearly twice that rate, the network, not the storage, is the limiting factor.
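The arithmetic behind that per-node ceiling, and the cluster-wide bandwidth it implies, is easy to verify; a quick sketch using the figures above:

```go
package main

import "fmt"

func main() {
	const (
		nodes       = 32    // i3en.24xlarge instances in the cluster
		linkGbitSec = 100.0 // NIC speed per node, Gbit/sec
		bitsPerByte = 8.0
	)

	perNodeGByteSec := linkGbitSec / bitsPerByte   // 100 / 8 = 12.5 Gbyte/sec
	clusterTbitSec := nodes * linkGbitSec / 1000.0 // 32 * 100 = 3,200 Gbit/sec = 3.2 Tbit/sec

	fmt.Printf("per-node ceiling:  %.1f Gbyte/sec\n", perNodeGByteSec)
	fmt.Printf("cluster bandwidth: %.1f Tbit/sec\n", clusterTbitSec)
}
```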

Results

With just 32 nodes, MinIO delivered 183.2 Gbyte/sec (1.46 Tbps) on reads and 171.3 Gbyte/sec (1.37 Tbps) on writes, against 3.2 Tbps of total available network bandwidth. On average, each node contributed 45.8 Gbps on reads and 43 Gbps on writes.
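Working back from the aggregate numbers to the per-node averages quoted above is again simple arithmetic; a quick sketch:

```go
package main

import "fmt"

func main() {
	const (
		nodes         = 32
		readGByteSec  = 183.2 // measured aggregate read throughput
		writeGByteSec = 171.3 // measured aggregate write throughput
		bitsPerByte   = 8
	)

	// Convert aggregate Gbyte/sec to Gbit/sec, then divide across the nodes.
	readPerNodeGbps := readGByteSec * bitsPerByte / nodes   // ~45.8 Gbps
	writePerNodeGbps := writeGByteSec * bitsPerByte / nodes // ~42.8 Gbps, rounded to 43 above

	fmt.Printf("per-node reads:  %.1f Gbps\n", readPerNodeGbps)
	fmt.Printf("per-node writes: %.1f Gbps\n", writePerNodeGbps)
}
```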

The network was almost entirely saturated during these tests. In this setup, MinIO shared the same network for server-client and server-server communication. The throughput could nearly be doubled if a dedicated network were available for internode traffic.

The best part about this benchmark (and all of our benchmarks for that matter) is that it can be replicated by anyone with the requisite interest and a credit card. For context, this benchmark cost about $1K to run.

We will publish a full write-up shortly under Resources -> Benchmarks, but we felt this was worth sharing now.
