The 2022 Cloud Report represents our deepest dive yet into how the three public clouds – AWS, GCP, and Azure – perform for OLTP applications.
And while we’ve highlighted some of the most interesting findings in the report’s Insights section, the full report runs nearly 80 pages. There are many interesting details that didn’t make it into that top-level summary.
In this article, we’ll take a look at four of the more interesting lessons from our testing this year.
First, though, a quick note: for more details about both the how and the why of our testing, please consult the full report!
While achieving low latency is critical for building a great customer experience in any application, our testing found that latency wasn’t a significant differentiator between the three clouds.
Specifically, for the 2022 Cloud Report we tested both intra-AZ and cross-region latency on AWS, GCP, and Azure using netperf 2.7.1, and found that the differences between the three were not particularly significant.
In our intra-AZ testing, all three clouds fell within a range of 0.05 milliseconds – close enough for us to conclude that they were effectively tied.
In our cross-region testing, the gap was a bit wider, but all three of the clouds offered reasonable latency.
Overall, AWS, GCP, and Azure were all close enough that any cloud would likely work well for most applications, although for very high-performance use cases it may be worth taking a closer look at the cloud-by-cloud results in the report.
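To make the "effectively tied" claim concrete, here's a minimal sketch of the spread calculation. The latency figures below are invented placeholders, not the report's measured numbers; the point is only that a best-to-worst gap of 0.05 ms or less is noise for most workloads.

```python
# Hypothetical intra-AZ round-trip latencies in milliseconds.
# The real per-cloud numbers are in the full report.
intra_az_ms = {"AWS": 0.06, "GCP": 0.08, "Azure": 0.10}

def spread(latencies: dict) -> float:
    """Gap between the best- and worst-performing cloud, in ms."""
    return max(latencies.values()) - min(latencies.values())

# A spread at or under ~0.05 ms is effectively a tie for most applications.
print(f"intra-AZ spread: {spread(intra_az_ms):.2f} ms")
```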
Our testing found that for most use cases, high-performance storage (pd-extreme on GCP, io2 on AWS, ultra-disk on Azure) probably isn’t worth it.
That’s not to say that the clouds aren’t offering what they promise – they are. But moving from a mid-tier option (pd-ssd on GCP, gp3 on AWS, premium-disk on Azure) to a high-performance option has a significant impact on the overall price of running a node. In fact, high-performance storage accounted for nearly 70% of the total cost of running a node across all three clouds (the cost of mid-tier storage was closer to 30%). For most use cases, that added cost simply isn’t going to be worth it.
The caveat here, though, is that the extra cost can be worth it in specific cases. If you’re running very large (96+ vCPU) nodes or have a high-performance computing workload, for example, the higher-tier storage options may offer the best bang for your buck.
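The 70%-vs-30% split above is easy to sanity-check with back-of-the-envelope arithmetic. The dollar figures in this sketch are made up for illustration (actual prices vary by cloud, region, and disk size); only the resulting proportions mirror what the report describes.

```python
# Illustrative (made-up) monthly costs for a single node.
def storage_share(compute_cost: float, storage_cost: float) -> float:
    """Fraction of total node cost attributable to storage."""
    return storage_cost / (compute_cost + storage_cost)

compute = 1_000.0                            # hypothetical compute cost ($/month)
mid_tier = storage_share(compute, 430.0)     # works out to ~30% of the total
high_perf = storage_share(compute, 2_300.0)  # works out to ~70% of the total

print(f"mid-tier storage:         {mid_tier:.0%} of node cost")
print(f"high-performance storage: {high_perf:.0%} of node cost")
```

At those proportions, stepping up a storage tier more than doubles the total node cost, which is why the upgrade only pays off for the specific workloads noted above.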
One of the big takeaways from the 2022 Cloud Report was that small instance types (8 vCPUs) outperformed large instance types (~32 vCPUs) in our OLTP benchmark. Notably, small instance types weren't just the winners in outright performance; they also came out on top in our price-for-performance comparisons.
This was true across all three clouds, although some large instance types such as GCP’s t2d-standard-32 offered excellent price for performance as well. In general, instances with AMD Milan processors, which dominated the overall performance rankings, also performed quite well in our $/TPM price-for-performance metric.
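The $/TPM metric itself is simple: divide what an instance costs per unit time by the transactions per minute it sustains, and a lower number wins. The sketch below uses hypothetical prices and throughput figures (the instance sizes echo the report, but none of these numbers are its measured results).

```python
# Price-for-performance: dollars per transaction-per-minute (lower is better).
def dollars_per_tpm(hourly_price: float, tpm: float) -> float:
    return hourly_price / tpm

instances = {
    # name: (hourly price in $, sustained throughput in TPM) -- hypothetical
    "small-8vcpu":  (0.40, 12_000),
    "large-32vcpu": (1.60, 40_000),
}

# Rank instances from best (cheapest per TPM) to worst.
for name, (price, tpm) in sorted(
    instances.items(), key=lambda kv: dollars_per_tpm(*kv[1])
):
    print(f"{name}: ${dollars_per_tpm(price, tpm):.6f}/TPM")
```

With these placeholder numbers the small instance ranks first even though the large one pushes more raw throughput, which is the shape of the result the report describes.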
In our networking benchmarks, some GCP and Azure instances had runs that exceeded their advertised throughput numbers. No instance type did so consistently, but some did see performance bursts well above what GCP and Azure promise.
Because they weren’t consistent, these bursts can only be seen as a nice-to-have – applications that require high performance should pay for guaranteed throughput to match or exceed their needs. For everyone else, though, it’s nice to know that at least on GCP and Azure, you can occasionally get more than what you paid for!
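One way to see why inconsistent bursts shouldn't factor into capacity planning is to count how often they actually occur across runs. The advertised rate and per-run measurements below are invented for illustration, not taken from the report.

```python
# Count runs whose measured throughput beat the advertised rate.
def burst_runs(samples_gbps: list, advertised_gbps: float) -> list:
    """Return the runs that exceeded the advertised throughput."""
    return [s for s in samples_gbps if s > advertised_gbps]

advertised = 10.0                   # hypothetical advertised throughput (Gbps)
runs = [9.8, 10.4, 9.9, 11.2, 9.7]  # hypothetical per-run measurements

bursts = burst_runs(runs, advertised)
print(f"{len(bursts)}/{len(runs)} runs exceeded the advertised {advertised} Gbps")
```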
Want more insights? Download the 2022 Cloud Report today – it’s free – and dig into the results for yourself!