As organizations transition to the cloud, they eventually find that the legacy relational databases behind some of their most critical applications simply do not take advantage of the promise of the cloud and are difficult to scale. It is the database that is limiting the speed and effectiveness of this transition. To address this, organizations want the reliability of a tested relational data store, such as Oracle, SQL Server, Postgres, or MySQL, but with the benefits of scale and global coverage that come with the cloud.
Some have turned to NoSQL stores to try to meet these requirements. These alternatives can typically meet the scale requirements but then fall short as a transactional database because they were not designed from the ground up to provide true consistency. Recently, some NoSQL solutions have offered “ACID transactions,” but these come with caveats and fail to deliver the isolation levels necessary for mission-critical workloads like financial ledgers, inventory control, and identity management.
Some of the most successful companies operating at global scale have actually solved this problem by purpose-building databases to handle it. The most public example is Google Cloud Spanner. In 2012, Google published a paper on Spanner that demonstrated a new way of looking at databases, one rooted in distributed systems and global scale.
“Spanner is Google’s scalable, multi-version, globally distributed, and synchronously-replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions.”
– Spanner: Google’s Globally-Distributed Database
There is a lot wrapped up in that description, and the 14-page paper goes into explicit detail about how they were able to build a consistent AND scalable database. The paper is pure genius and outlines the foundation of the next evolution of the database: Distributed SQL.
Distributed SQL is a single logical database deployed across multiple physical nodes in a single data center or across many data centers if need be; all of which allow it to deliver elastic scale and bulletproof resilience.
Several attempts have been made to deliver truly scalable SQL in a distributed environment. Some have tried to retrofit existing databases to meet their needs, but this ultimately does not deliver on the promise of a truly distributed SQL database. So then, what makes up a distributed SQL database? The requirements can be summarized in five core conditions:
A distributed SQL database must seamlessly scale in order to mirror the capabilities of cloud environments without introducing operational complexity. Just as we can scale up compute without heavy lifting, the database should be able to scale as well. This includes the ability to evenly distribute data across the nodes that participate in the database.
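To make that concrete, most distributed SQL databases split each table into chunks and rebalance those chunks across nodes automatically. The sketch below uses CockroachDB-specific syntax to peek at that distribution; the table and columns are hypothetical, and other systems expose this differently.

```sql
-- Hypothetical table; CockroachDB splits its rows into ranges and
-- rebalances those ranges across nodes as the cluster grows or shrinks.
CREATE TABLE user_events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL,
    payload JSONB
);

-- Each row of the output describes one range: its key span, the nodes
-- holding its replicas, and its current leaseholder.
SHOW RANGES FROM TABLE user_events;
```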
A distributed SQL database must deliver a high level of isolation in a distributed environment. In a cloud-based world where distributed systems and microservices are the default architectures, transactional consistency becomes difficult because multiple actors may be trying to work on the same data. The database should mediate contention and deliver the same level of transaction isolation we expect from a single-instance database.
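Here is a minimal sketch of the kind of contended transaction the database itself should mediate. The accounts table and values are hypothetical; the syntax follows PostgreSQL conventions that CockroachDB and others accept.

```sql
-- Hypothetical ledger table.
CREATE TABLE accounts (id INT PRIMARY KEY, balance DECIMAL NOT NULL);
INSERT INTO accounts VALUES (1, 500), (2, 500);

-- If two copies of this transaction race from different nodes, a database
-- offering serializable isolation makes them behave as if they ran one
-- after the other: no lost updates, no money created or destroyed.
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```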
A distributed SQL database must naturally deliver the highest level of resiliency without the need for external tooling. The cloud presents an always-on environment for our workloads, and the database should have the same properties. With a distributed database we can reduce the time it takes to recover from a failure down to near zero and replicate data naturally, without any external configuration.
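As an example of what built-in replication can look like, the sketch below uses CockroachDB-specific zone configuration syntax (other distributed SQL databases express this differently) to raise the replication factor so data survives multiple simultaneous node failures.

```sql
-- CockroachDB-specific: keep five replicas of every range so the cluster
-- can lose two nodes and continue serving reads and writes.
ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;

-- List the zone configurations now in effect.
SHOW ALL ZONE CONFIGURATIONS;
```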
A distributed SQL database should allow for distribution of data throughout a complex, widely dispersed geographic environment. The cloud offers the ability to reach every corner of the globe with an acceptable quality of service, and the database should not restrict your applications from doing so. It should perform to meet your expectations.
And while these four technical requirements are paramount, there is one key prerequisite above all. The database must speak SQL. It is the language of data and the default for all application logic. We should not have to retrain developers to use the database. They should be able to use the SQL dialect they are already familiar with.
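In practice, that means the queries a team already writes against Postgres or MySQL should work unchanged. A trivial, hypothetical example of the kind of familiar SQL a distributed SQL database should accept as-is:

```sql
-- Familiar PostgreSQL-style schema and query, written exactly as they would
-- be against a single-instance database. Names are hypothetical.
CREATE TABLE customers (id INT PRIMARY KEY, email TEXT NOT NULL);
CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT REFERENCES customers (id),
    total DECIMAL NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Top spenders over the last week.
SELECT c.email, SUM(o.total) AS total_spend
FROM customers AS c
JOIN orders AS o ON o.customer_id = c.id
WHERE o.created_at > now() - INTERVAL '7 days'
GROUP BY c.email
ORDER BY total_spend DESC
LIMIT 10;
```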
There are a few databases that meet these requirements. The list includes Spanner, of course, but you could also consider Amazon Aurora, Yugabyte, FaunaDB, and CockroachDB as members of this new category. All of them meet the requirements in some form, some better than others. Noticeably missing from this list are Oracle, Postgres, MySQL, and all of the NoSQL options. While each may meet some of the requirements, none of them meets all of them, so they cannot be considered part of this category.
Once you live in a distributed world, it becomes apparent that the database itself could actually take care of domiciling data. With participants located in various regions or data centers, it becomes possible to know the location of each participant and tie the data it stores to that location. Some application architects have implemented this as part of the application, but this approach is error-prone and brittle. Using the database to geo-partition data based on some field in a table is a new requirement for distributed SQL. It allows you to use the database to address data sovereignty concerns. It can also be used to have data follow a user so you can ensure low-latency access to their information, or to tie data to a specific cloud so you can minimize egress charges.
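As a sketch of what this can look like in practice, the example below uses CockroachDB-style partitioning syntax; the table, columns, partition names, and regions are all hypothetical, and other distributed SQL databases express locality differently.

```sql
-- Geo-partition rows by a country column that is part of the primary key,
-- then pin each partition's replicas to nodes in the matching region.
CREATE TABLE users (
    country STRING NOT NULL,
    id UUID NOT NULL DEFAULT gen_random_uuid(),
    email STRING,
    PRIMARY KEY (country, id)
) PARTITION BY LIST (country) (
    PARTITION europe VALUES IN ('DE', 'FR', 'NL'),
    PARTITION north_america VALUES IN ('US', 'CA')
);

ALTER PARTITION europe OF TABLE users
    CONFIGURE ZONE USING constraints = '[+region=europe-west1]';
ALTER PARTITION north_america OF TABLE users
    CONFIGURE ZONE USING constraints = '[+region=us-east1]';
```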
A unique trait of a distributed SQL database is that it is made up of semi-autonomous units that all participate in a larger system. Each unit should be able to be deployed by itself and then join the larger system (in our case, the CockroachDB cluster). This is an inherent trait that fuels the first five requirements listed above. However, it can also be used to extend the database to be truly multi-cloud. The database should not rely on a single network to accomplish distribution. It should be divorced from these limits so that a participant can be located anywhere: in any public cloud, in a private cloud, or even as a single on-premises instance. This requirement is important for the future of compute, where we live in a distributed, hybrid, and multi-cloud world.
While the aforementioned seven requirements are unique to distributed SQL (well, all of them except the SQL thing), it is important to note that it is still a database and, of course, is required to deliver on the baseline requirements of any database.
These “foundational” requirements are critical and signal a more mature, enterprise-ready database. They may not be the most exciting features, but they are essential for adoption and success within any project.
CockroachDB is a wonderful option for your cloud-native distributed SQL database. It has helped hundreds of organizations move workloads to the cloud, from the mundane to the most mission-critical. It has been the foundation of a cloud-native strategy within more advanced orchestrated environments. We are quite proud of what we have built.
We are also advocates of this emerging category and believe that distributed SQL is a proper evolution of the database and the future of the way we manage data in the cloud. To this end, we feel strongly that our solution, and every other, should be held to the highest standard when it comes to these core requirements.
A database that meets all of these requirements has matured enough to be trusted for the mission-critical (and not so mission-critical) applications in the cloud. And some of these requirements are not simple; they are advanced topics that take time to get right. When you discuss these items with your vendor, we encourage you to go deep on concepts like consistency and locality. While everyone has read the same papers, ultimately it comes down to implementation and, more importantly, production use. As with any new category, this becomes a paramount concern because it is only in production that the complex corner cases and issues can be identified and fixed. So I guess, then, the final (and 12th) requirement is maturity.