Use the Live Migration Service

Note:

This feature is in preview and subject to change. To share feedback or report issues, contact Support.

MOLT LMS (Live Migration Service) is used to perform a live migration to CockroachDB.

The LMS is a self-hosted, horizontally scalable proxy that routes traffic between an application, a source database, and a target CockroachDB database. You use the LMS to control which database, as the "source of truth", serves reads and writes to the application. You can optionally configure the LMS to shadow production traffic from the source database and validate the query results on CockroachDB. When you have sufficiently tested your application and are confident in its consistency and performance on CockroachDB, you use the LMS to perform the cutover to CockroachDB.

MOLT LMS is self-hosted on Kubernetes and configured using Helm. At a high level, the LMS consists of the following:

  • A number of proxy instances (running in separate Kubernetes pods) across which application traffic is distributed and routed to the source and target databases.
  • An "orchestrator" service (running in a single Kubernetes pod) that coordinates the proxy instances and sends the cutover commands.

This page describes how to install, configure, secure, and use the LMS to perform a live migration.

Terminology

  • A live migration keeps two production databases online (a source and a target database) and uses either replication or dual writing to keep data identical between them until a final cutover.
  • The source of truth is the database that serves reads and writes to the application during a live migration. A cutover switches the source of truth.
  • Shadowing is the execution of source SQL statements on the target database in parallel. The LMS supports multiple shadowing modes.

Requirements

Supported database technologies

Installation

  1. Add the Helm chart repository at https://molt.cockroachdb.com/lms/charts/ with helm repo add. Then install the chart with helm install. For example:

    helm repo add lms https://molt.cockroachdb.com/lms/charts/
    helm install lms lms/lms
    
  2. Port-forward from your local machine to the orchestrator, using the release name that you specified with helm install. The orchestrator port is configurable and is 4200 by default.

    kubectl port-forward svc/{releasename}-lms-orchestrator 4200:4200 
    
    Tip:

    If you named the release lms, exclude {releasename}- from the command.

  3. To set up the LMS resources, install molt-lms-cli and run the following command, specifying the orchestrator URL:

    molt-lms-cli initialize --orchestrator-url localhost:4200
    
  4. Verify that the LMS proxy instances and the orchestrator are running as Kubernetes pods:

    kubectl get pods
    
    NAME                                READY   STATUS    RESTARTS   AGE
    lms-orchestrator-86779b87f7-qrk9q   1/1     Running   0          52s
    lms-576bffdd8c-pmh6g                1/1     Running   0          52s
    lms-576bffdd8c-pbdvl                1/1     Running   0          52s
    lms-576bffdd8c-s7kx4                1/1     Running   0          52s
    ...
    

    You will see lms pods that match the configured number of LMS instances, along with one lms-orchestrator pod.

    The pod names are prefixed with the release name you specified when running helm install, unless you named the release lms.

Configuration

To configure the LMS, override the Helm chart values. Overriding values triggers a rolling restart of your pods. For information on setting Helm chart values, see the Helm documentation.

This section describes the most important and commonly used values. For details on all configurable values, refer to the values.yaml file.
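As a sketch of the workflow, you might collect the values described in this section in an override file and apply it with helm upgrade. The release name lms and the postgres dialect below are illustrative, not defaults; adjust them to your deployment:

```shell
# Collect overrides in a file (the dialect and counts shown are illustrative).
cat > lms-values.yaml <<'EOF'
lms:
  sourceDialect: "postgres"
  shadowMode: none
  replicaCount: 3
orchestrator:
  sourceDialect: "postgres"
EOF
```

Apply the overrides with helm upgrade lms lms/lms -f lms-values.yaml, which performs the rolling restart of the pods.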

Source dialect

lms:
  sourceDialect: ""
...
orchestrator:
  sourceDialect: ""

You must provide a string value for sourceDialect, which specifies the dialect of your source database. Supported dialects are:

  • postgres: PostgreSQL
  • mysql: MySQL
  • cockroach: CockroachDB

Shadowing

lms:
  shadowMode: none

lms.shadowMode specifies the shadowing behavior used by the LMS. The appropriate mode depends on your migration requirements. For details, see Shadowing modes.

LMS instances

lms:
  replicaCount: 3

lms.replicaCount determines the number of LMS instances created as lms pods on the Kubernetes cluster, across which application traffic is distributed. This defaults to 3.

Connection strings

The following connection strings are specific to your configuration:

  • External connection string for the source database.
  • External connection string for the target CockroachDB database.
  • Internal connection string for the LMS.

You should specify these in external Kubernetes secrets. For details, see Manage external secrets.

Warning:

Storing sensitive keys in external secrets is strongly recommended.

Service type

lms:
  service:
    type: ClusterIP
    port: 9043
    metricsPort: 9044
...
orchestrator:
  service:
    type: ClusterIP
    port: 4200
    metricsPort: 4201

service specifies the Kubernetes service type and ports for the LMS instances and orchestrator.
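For example, if your application connects from outside the Kubernetes cluster, you might change the LMS service type. This is a sketch, not a default; whether it is appropriate depends on your network topology. The ports shown are the defaults:

```yaml
lms:
  service:
    type: LoadBalancer   # expose the LMS proxy outside the cluster (illustrative)
    port: 9043
    metricsPort: 9044
```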

Prometheus Operator

serviceMonitor:
  enabled: false
  labels: {}
  annotations: {}
  interval: 30s
  namespaced: false

serviceMonitor is a custom resource used with the Prometheus Operator for monitoring Kubernetes. For more information, see the Prometheus Operator documentation.
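For example, to have the Prometheus Operator scrape the LMS metrics ports, you might enable the monitor and add the labels your Prometheus instance's selector expects. The release: prometheus label below is an assumption; match it to your own Operator configuration:

```yaml
serviceMonitor:
  enabled: true
  labels:
    release: prometheus   # must match your Prometheus Operator's selector
  interval: 30s
```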

Security

Warning:

Cockroach Labs strongly recommends the following:

Manage external secrets

Cockroach Labs recommends using External Secrets Operator to create and manage Kubernetes secrets that contain:

For information on Kubernetes secrets, see the Kubernetes documentation.

Configure an LMS secret

Create an external secret that specifies the connection strings for the source database and the target CockroachDB database.

For example, the following ExternalSecret called lms-config uses AWS Secrets Manager as the SecretStore, and references a remote AWS secret called lms-secret:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: lms-config
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secret-store
    kind: SecretStore
  target:
    name: lms-config
    creationPolicy: Owner
    template:
      engineVersion: v2
      data:
        config.json: |
          {
            "INIT_SOURCE": "{{ .source }}",
            "INIT_TARGET": "{{ .target }}"
          }
  data:
  - secretKey: source
    remoteRef:
      key: lms-secret
      property: INIT_SOURCE
  - secretKey: target
    remoteRef:
      key: lms-secret
      property: INIT_TARGET

The connection strings are specified with the following keys inside config.json:

  • INIT_SOURCE: External connection string for the source database, including the paths to your client certificate and keys.
  • INIT_TARGET: External connection string for the CockroachDB database, including the paths to your client certificate and keys.

The remote secret lms-secret will contain the full connection strings and paths, such that the config.json keys resolve to:

"INIT_SOURCE": "mysql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert=path/to/mysql.ca&sslcert=path/to/mysql.crt&sslkey=path/to/mysql.key",
"INIT_TARGET": "postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert=path/to/ca.crt&sslcert=path/to/client.username.crt&sslkey=path/to/client.username.key"

In the Helm configuration, lms.configSecretName must specify the external secret name:

lms:
  configSecretName: "lms-config"

Configure an orchestrator secret

Create an external secret that specifies the connection strings for the LMS and target CockroachDB database.

For example, the following ExternalSecret called orch-config uses AWS Secrets Manager as the SecretStore, and references a remote AWS secret called orch-secret:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: orch-config
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secret-store
    kind: SecretStore
  target:
    name: orch-config
    creationPolicy: Owner
    template:
      engineVersion: v2
      data:
        config.json: |
          {
            "LMS_URL": "{{ .lmsUrl }}",
            "CRDB_URL": "{{ .crdbUrl }}"
          }
  data:
  - secretKey: lmsUrl
    remoteRef:
      key: orch-secret
      property: LMS_URL
  - secretKey: crdbUrl
    remoteRef:
      key: orch-secret
      property: CRDB_URL

The connection strings are specified with the following keys inside config.json:

  • LMS_URL: Internal connection string for the LMS, specifying the username and password of the source database. The format depends on your source dialect:

    • MySQL: {username}:{password}@({releasename}-lms.{namespace}.svc.cluster.local:{port})/{database}
    • PostgreSQL: postgresql://{username}:{password}@{releasename}-lms.{namespace}.svc.cluster.local:{port}/{database}
    Tip:

    If you named the release lms during installation, exclude {releasename}- from the LMS connection string.

  • CRDB_URL: External connection string for the CockroachDB database, including the paths to your client certificate and keys.

The remote secret orch-secret will contain the full connection strings, such that the config.json keys resolve to:

"LMS_URL": "{username}:{password}@({releasename}-lms.{namespace}.svc.cluster.local:{port})/{database}",
"CRDB_URL": "postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full&sslrootcert=path/to/ca.crt&sslcert=path/to/client.username.crt&sslkey=path/to/client.username.key"

In the Helm configuration, orchestrator.configSecretName must specify the external secret name:

orchestrator:
  configSecretName: "orch-config"

Configure the LMS certificates

Create an external secret that specifies the LMS certificate, key, and (optional) CA certificate.

For example, the following ExternalSecret called lms-tls uses AWS Secrets Manager as the SecretStore, and references a remote AWS secret called lms-certs:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: lms-tls
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secret-store
    kind: SecretStore
  target:
    name: lms-tls
    creationPolicy: Owner
    template:
      engineVersion: v2
      data:
        lms-ca.crt: '{{ .caCert }}'
        lms-tls.crt: '{{ .serverCert }}'
        lms-tls.key: '{{ .serverKey }}'
  data:
  - secretKey: caCert
    remoteRef:
      key: lms-certs
      property: caCert
  - secretKey: serverCert
    remoteRef:
      key: lms-certs
      property: serverCert
  - secretKey: serverKey
    remoteRef:
      key: lms-certs
      property: serverKey

In the preceding example, each .crt and .key filename is associated with its corresponding value in the remote secret lms-certs.

In the Helm configuration, lms.sslVolumes and lms.sslVolumeMounts must specify volumes and mount paths that contain the server-side certificates. The path to each file is specified as an environment variable in lms.env. Cockroach Labs recommends mounting certificates to /app/certs.

lms:
  sslVolumes:
    - name: lms-tls
      secret:
        secretName: lms-tls
  sslVolumeMounts:
    - mountPath: "/app/certs"
      name: lms-tls
      readOnly: true
  env:
    - name: LMS_SSL_CA
      value: /app/certs/lms-ca.crt
    - name: LMS_SSL_CERT
      value: /app/certs/lms-tls.crt
    - name: LMS_SSL_KEY
      value: /app/certs/lms-tls.key

Configure the orchestrator and client certificates

Create an external secret that specifies the orchestrator certificate, key, and (optional) CA certificate.

For example, the following ExternalSecret called orch-tls uses AWS Secrets Manager as the SecretStore, and references a remote AWS secret called orch-certs:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: orch-tls
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secret-store
    kind: SecretStore
  target:
    name: orch-tls
    creationPolicy: Owner
    template:
      engineVersion: v2
      data:
        orch-ca.crt: '{{ .caCert }}'
        orch-tls.crt: '{{ .serverCert }}'
        orch-tls.key: '{{ .serverKey }}'
  data:
  - secretKey: caCert
    remoteRef:
      key: orch-certs
      property: caCert
  - secretKey: serverCert
    remoteRef:
      key: orch-certs
      property: serverCert
  - secretKey: serverKey
    remoteRef:
      key: orch-certs
      property: serverKey

In the preceding example, each .crt and .key filename is associated with its corresponding value in the remote secret orch-certs.

In the Helm configuration, orchestrator.sslVolumes and orchestrator.sslVolumeMounts must specify volumes and mount paths that contain the server-side certificates. The path to each file is specified as an environment variable in orchestrator.env. Cockroach Labs recommends mounting certificates to /app/certs.

orchestrator:
  sslVolumes:
    - name: orch-tls
      secret:
        secretName: orch-tls
  sslVolumeMounts:
    - mountPath: "/app/certs"
      name: orch-tls
      readOnly: true
  env:
    - name: ORCH_CA_TLS_CERT
      value: /app/certs/orch-ca.crt
    - name: ORCH_TLS_CERT
      value: /app/certs/orch-tls.crt
    - name: ORCH_TLS_KEY
      value: /app/certs/orch-tls.key

You will also need to create and specify a CLI client certificate, key, and (optional) CA certificate. It's easiest to specify these as environment variables in the shell that is running molt-lms-cli:

export CLI_TLS_CA_CERT="{path-to-cli-ca-cert}"
export CLI_TLS_CLIENT_CERT="{path-to-cli-client-cert}"
export CLI_TLS_CLIENT_KEY="{path-to-cli-client-key}"

molt-lms-cli

The molt-lms-cli command-line interface is used to inspect the LMS instances and perform cutover.

To install molt-lms-cli, download the latest binary for your operating system (Windows, Linux, or Mac) and architecture (AMD 64-bit or ARM 64-bit).
Tip:

For previous binaries, see the MOLT version manifest.

Commands

Command Usage
initialize Set up the required objects for running the LMS. You must run this before using the LMS.
connections list List all client connections to the LMS and their most recent queries.
cutover consistent Specify a consistent cutover. You must also specify begin, commit, or abort. For usage details, see Consistent cutover.
begin Begin a consistent cutover. This pauses traffic to the source database.
commit Commit a consistent cutover. This resumes traffic on the target database. This is only effective after running cutover consistent begin.
abort Abort a consistent cutover after running cutover consistent begin, unless you have already run cutover consistent commit. This resumes traffic to the source database.
status Display the current configuration of the LMS instances.

Flags

Flag Description
--orchestrator-url The URL for the orchestrator, using the configured port. Prefix the URL with https instead of http when using certificates. This flag is required unless the value is exported as an environment variable using export CLI_ORCHESTRATOR_URL="{orchestrator-URL}".
--tls-ca-cert The path to the CA certificate. This can also be exported as an environment variable using export CLI_TLS_CA_CERT="{path-to-cli-ca-cert}".
--tls-client-cert The path to the client certificate. This can also be exported as an environment variable using export CLI_TLS_CLIENT_CERT="{path-to-cli-client-cert}".
--tls-client-key The path to the client key. This can also be exported as an environment variable using export CLI_TLS_CLIENT_KEY="{path-to-cli-client-key}".
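Rather than passing flags on every invocation, you can export the environment-variable equivalents once per shell session. A sketch, with a placeholder URL and certificate paths:

```shell
# Export the CLI connection settings once; subsequent molt-lms-cli
# commands can then omit the corresponding flags.
export CLI_ORCHESTRATOR_URL="https://localhost:4200"
export CLI_TLS_CA_CERT="certs/ca.crt"
export CLI_TLS_CLIENT_CERT="certs/client.crt"
export CLI_TLS_CLIENT_KEY="certs/client.key"

# Example (not run here): molt-lms-cli status
```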

Shadowing modes

The LMS can be configured to shadow production traffic from the source database and validate the query results on the target. The exact behavior is configured with the shadowMode Helm value.

none


shadowMode: none disables shadowing.

  • The LMS sends application requests to the source of truth only.
  • Query results from the source of truth are returned to the application.
  • Writes must be manually replicated from the source database to the target database.

You can use this mode to perform a consistent cutover, along with a database replication technology that replicates writes to the target database.

async


shadowMode: async writes to both databases.

  • The LMS sends application requests to the source of truth and target database in asynchronous threads, and waits only for the source of truth to respond.
  • Query results from the source of truth are returned to the application.
  • If an asynchronous request has not yet completed, subsequent asynchronous requests will be permanently dropped.

You can use this mode to confirm that your queries succeed on CockroachDB without verifying performance or correctness.

Note:

async mode is intended for testing purposes.

sync


shadowMode: sync writes to both databases.

  • The LMS sends application requests to the source of truth and the target database, and waits for each to respond.
  • Query results from the source of truth are returned to the application.
  • Query results from the non-source of truth are discarded.

strict-sync


shadowMode: strict-sync writes to both databases and enforces correctness on both databases.

  • The LMS sends application requests to the source of truth and the target database, and waits for each to respond.
  • Query results from the source of truth are returned to the application.
  • If the query returns an error on the source of truth, that error is returned to the application. If the query succeeds on the source of truth but fails on the target, the error from the target is returned to the application.
  • If the query fails on both databases, the error from the source of truth is returned to the application.

Perform a cutover

Consistent cutover

A consistent cutover maintains data consistency with minimal downtime. The goal of consistent cutover is to stop application traffic long enough for replication to catch up and ensure that the cutover achieves consistency across the two databases.

When using the LMS, consistent cutover is handled with the molt-lms-cli commands cutover consistent begin and cutover consistent commit. Between these commands, application requests are queued and are responded to only after the cutover completes. The length of this delay depends on the maximum duration of any in-flight transactions and queries that must complete, plus the time it takes for replication to catch up from the source to the target database.

Note:

These steps assume you have already followed the overall steps to prepare for migration. In particular, update your schema and application queries to work with CockroachDB.

To perform a consistent cutover with the LMS:

  1. Configure the LMS with your deployment details, and follow our security recommendations.

  2. Set the shadowing mode to none.

  3. Set up ongoing replication between the source database and CockroachDB, using a tool that replicates writes to the target database.

  4. Send application requests to the LMS, which routes the traffic to the source database. The source database is designated the source of truth.

  5. Use MOLT Verify to validate that the replicated data on CockroachDB is consistent with the source of truth.

  6. Begin the consistent cutover. Requests are now queued in the LMS, including queries from existing connections and new connection requests to the LMS:

    molt-lms-cli cutover consistent begin {flags}
    

    This command tells the LMS to pause all application traffic to the source of truth. The LMS then waits for transactions to complete and prepared statements to close.

  7. Verify that replication on CockroachDB has caught up with the source of truth. For example, insert a row on the source database and check that the row exists on CockroachDB.

    If you have an implementation that replicates from CockroachDB back to the source database, enable it before committing the cutover.

  8. Once all writes have been replicated to the target database, commit the consistent cutover:

    molt-lms-cli cutover consistent commit {flags}
    

    This command tells the LMS to switch the source of truth to the target database. Application traffic is now routed to the target database, and requests are processed from the queue in the LMS.

    To verify that CockroachDB is now the source of truth, you can run molt-lms-cli status.

  9. Again, use MOLT Verify to validate that the data on the source database and CockroachDB are consistent.

If any problems arise during a consistent cutover:

  • After running cutover consistent begin:

    molt-lms-cli cutover consistent abort {flags}
    

    This command tells the LMS to resume application traffic to the source of truth, which has not yet been switched. Cutover cannot be aborted after running cutover consistent commit.

  • After running cutover consistent commit:

    Reissue the cutover consistent begin and cutover consistent commit commands to revert the source of truth to the source database.
