Moving is challenging. I recently moved into a new house. Boxes are everywhere. Little pieces of styrofoam surprise me in most rooms. At night the house makes noises I don’t understand. And the electric drill I used maybe twice a year is sitting on my kitchen counter. I use it daily.
But I didn’t move into the house because I thought moving would be pleasant. I moved because my family needed more space.
There are plenty of other reasons someone might move: Cost. Quality of life. Proximity to the people who are important to you. Database migrations are the same.
With that context, let’s review the top 3 reasons to migrate databases in 2023, and some of the ways migrations can be simpler than you imagine.
A product was built on a legacy relational database like Postgres or MySQL. Now the app has grown and the database is the bottleneck. Sharding will add unwanted complexity. It’s time to migrate.
Many enduring businesses are undergoing tectonic architecture and infrastructure shifts as they move to the cloud and adopt distributed systems. We don’t have to unpack all the benefits of the cloud here; suffice it to say that the shift makes leveraging a distributed database easier and creates opportunities to reduce cost, improve availability, and increase efficiency.
Initially, it made sense to build on something like MongoDB or Cassandra, but as the application took shape it became clear that transactional consistency and the power of SQL were priorities, and a new database was needed.
There is particularly green grass on the distributed database side of the fence. Maybe you want the assurance that you can scale your workload at any time without any application changes. Maybe you want to survive not just a node going down but an entire cloud region failing. Or perhaps you’re trying to deliver a true low-latency experience for users spread across the world.
Regardless of why you want to migrate, there is always going to be some apprehension about the difficulty of migrating a database. But, if you’re migrating to CockroachDB, it doesn’t have to be so challenging.
A typical journey to CockroachDB starts with converting your schema and then loading your data into the new cluster.
To simplify several steps in this migration process, we’ve introduced two new tools: a Schema Conversion Tool and support for CockroachDB as an AWS DMS target. These tools are the foundation of a new migration suite we’re calling MOLT (short for “Migrate Off Legacy Technology”).
The beauty of the Schema Conversion Tool is that it’s simple to use. You upload a schema file (such as the output of pg_dump) from a PostgreSQL, Oracle, MySQL, or SQL Server instance, and the tool walks through your schema line by line, showing which statements work with CockroachDB as-is and which need changes. It also suggests changes that will help you achieve the performance characteristics you want from a distributed database.
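To illustrate the kind of review-and-flag pass described above, here is a minimal sketch. The pattern list and notes are illustrative assumptions, not the Schema Conversion Tool’s actual rules, and a real converter parses SQL properly rather than matching text:

```python
import re

# Illustrative patterns only (assumptions, not the tool's actual rule set):
# statements that often deserve a second look when moving a schema to a
# distributed SQL database.
REVIEW_PATTERNS = {
    r"\bCREATE\s+SEQUENCE\b": "sequential keys can create hotspots; consider UUID primary keys",
    r"\bCREATE\s+TRIGGER\b": "triggers may need rework on the target database",
}

def flag_statements(schema_sql):
    """Return (statement_index, note) pairs for statements matching a pattern."""
    flags = []
    for i, stmt in enumerate(schema_sql.split(";")):
        for pattern, note in REVIEW_PATTERNS.items():
            if re.search(pattern, stmt, re.IGNORECASE):
                flags.append((i, note))
    return flags
```

The point is the workflow, not the rules: each statement is checked individually, and anything flagged comes back with a human-readable suggestion.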
The schema conversion tool allows you to address the incompatibilities between the legacy and distributed systems quickly and easily. Check out the demo below in which we begin with zero CockroachDB presence, create a new serverless cluster, migrate our schema to CockroachDB, and create a database containing the migrated schema. After nine minutes you can have a migrated schema in a CockroachDB cluster into which you can now start loading data.
After migrating the schema you need to get the data into CockroachDB. There are a handful of ways to do that. You can read about each of them in our documentation or watch the clip of Adam Storm, Senior Director of Engineering, explaining each option. For the purposes of this blog, I want to introduce our support for AWS DMS.
AWS DMS (Database Migration Service) allows you to migrate between any two database endpoints, provided that at least one of them lives on AWS. It works with SQL Server, Oracle, MySQL, PostgreSQL, and many other database sources.
When configuring CockroachDB as your migration target, you’ll configure it as a PostgreSQL target. This leverages CockroachDB’s PostgreSQL wire compatibility: DMS believes it’s moving data into PostgreSQL, while the data actually lands in CockroachDB. Importantly, your cluster does not need to be running on AWS to use DMS; DMS requires only that one of the endpoints, either the source or the target, be on AWS. So if you’re moving to a CockroachDB cloud instance on AWS, your source database could be self-hosted or in another cloud provider.
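As a sketch of what that target configuration looks like, here is a hedged example using boto3’s DMS client. The endpoint name, host, database, and credentials are placeholders, and the parameter choices reflect the “PostgreSQL target” approach described above rather than a verified production setup:

```python
def cockroach_target_endpoint_params(host, database, user, password):
    """Build DMS target-endpoint parameters for a CockroachDB cluster.

    Because CockroachDB speaks the PostgreSQL wire protocol, the engine
    is declared as "postgres". Port 26257 is CockroachDB's default SQL port.
    """
    return {
        "EndpointIdentifier": "cockroachdb-target",  # placeholder name
        "EndpointType": "target",
        "EngineName": "postgres",   # wire-compatible, so "postgres" works
        "ServerName": host,
        "Port": 26257,
        "DatabaseName": database,
        "Username": user,
        "Password": password,
        "SslMode": "require",
    }

def create_endpoint(params):
    # Requires boto3 installed and AWS credentials configured.
    import boto3
    dms = boto3.client("dms")
    return dms.create_endpoint(**params)
```

The key detail is the `EngineName` of `postgres` combined with CockroachDB’s host and port; everything else is standard DMS endpoint configuration.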
There are essentially three steps for using CockroachDB with AWS DMS, and Adam explains each of them in this clip:
This is a simple example of AWS DMS, but we have customers who have been moving large amounts of data seamlessly and quickly with the tool. If you’re interested in learning more, our documentation fully explains how to set up CockroachDB with AWS DMS.
Recently, in the 23.1 release of CockroachDB, we announced integrations with both Qlik and Striim to give you more options for extracting and loading data.
If you’re looking for more info on CockroachDB migrations, there is also a webinar in which we demo the migration suite and go into more depth on the benefits of migrating to a distributed database. If you have further questions, please connect with us in the CockroachDB Community Slack. And remember, you can get started with CockroachDB for free right here.