2. Pair Clusters


Understand cluster pairing

To fail over an application running on one Kubernetes cluster to another, you need to migrate its resources between the clusters. In Kubernetes, you define a trust object called a ClusterPair, which is required to communicate with the other cluster. This pairs the two clusters at the scheduler (Kubernetes) level so that all Kubernetes resources can be migrated between them. Throughout this section, the notion of source and destination clusters applies only at the Kubernetes level and does not apply to storage, as a single Portworx storage fabric runs across both clusters. Because Portworx is stretched across them, the volumes do not need to be migrated.

For reference:

  • Source Cluster is the Kubernetes cluster where your applications are running.
  • Destination Cluster is the Kubernetes cluster where the applications will be failed over, in case of a disaster in the source cluster.

Generate and Apply a ClusterPair Spec

In Kubernetes, you must define a trust object called ClusterPair. Portworx requires this object to communicate with the destination cluster. The ClusterPair object pairs the Portworx storage driver with the Kubernetes scheduler, allowing the volumes and resources to be migrated between clusters.

The ClusterPair is generated and used in the following way:

  • The ClusterPair spec is generated on the destination cluster.
  • The generated spec is then applied on the source cluster.

Perform the following steps to create a cluster pair:

Create object store credentials for cloud clusters

If you are running Kubernetes on-premises, you may skip this section. If your Kubernetes clusters are on the cloud, you must create object store credentials on both the destination and source clusters before you can create a cluster pair.

The options you use to create your object store credentials differ based on which object store you use:

Create Amazon S3 credentials

Create the credentials by entering the pxctl credentials create command, specifying the following:

  • --provider as s3
  • --s3-access-key with your AWS access key
  • --s3-secret-key with your AWS secret key
  • --s3-region with your region
  • --s3-endpoint with s3.amazonaws.com
  • clusterPair_ with the UUID of your destination cluster appended
/opt/pwx/bin/pxctl credentials create --provider s3 --s3-access-key <aws_access_key> --s3-secret-key <aws_secret_key> --s3-region us-east-1  --s3-endpoint s3.amazonaws.com clusterPair_<UUID_of_destination_cluster>
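
The clusterPair_ suffix in this command (and in the Azure and Google variants below) must be the UUID of your destination cluster. If you do not have it handy, it is typically printed in the pxctl status output on any destination cluster node; a quick way to read it, assuming the default install path:

/opt/pwx/bin/pxctl status | grep UUID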

Create Microsoft Azure credentials

Create the credentials by entering the pxctl credentials create command, specifying the following:

  • --provider as azure
  • --azure-account-name with your Azure storage account name
  • --azure-account-key with your Azure storage account key
  • clusterPair_ with the UUID of your destination cluster appended
/opt/pwx/bin/pxctl credentials create --provider azure --azure-account-name <your_azure_account_name> --azure-account-key <your_azure_account_key> clusterPair_<UUID_of_destination_cluster>

Create Google Cloud Platform credentials

Create the credentials by entering the pxctl credentials create command, specifying the following:

  • --provider as google
  • --google-project-id with your Google project ID
  • --google-json-key-file with the filename of your GCP JSON key file
  • clusterPair_ with the UUID of your destination cluster appended
/opt/pwx/bin/pxctl credentials create --provider google --google-project-id <your_google_project_ID> --google-json-key-file <your_GCP_JSON_key_file> clusterPair_<UUID_of_destination_cluster>
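
Whichever provider you use, it is worth confirming the credentials were stored correctly before pairing. As a sketch, assuming your Portworx version includes the credentials list and validate subcommands:

/opt/pwx/bin/pxctl credentials list
/opt/pwx/bin/pxctl credentials validate clusterPair_<UUID_of_destination_cluster>

If validation fails, fix the credentials before generating the ClusterPair, as migrations rely on them.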

Generate a ClusterPair on the destination cluster

To generate the ClusterPair spec, run the following command on the destination cluster:

storkctl generate clusterpair -n migrationnamespace remotecluster

Here, remotecluster is the name of the ClusterPair object that will be created on the source cluster to represent the pair relationship.

During the actual migration, you will reference this name to identify the destination of your migration.
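
The command prints the generated spec to standard output. A minimal sketch for capturing it into a file, assuming the migrationnamespace namespace exists on both clusters (create it first if it does not):

kubectl create namespace migrationnamespace
storkctl generate clusterpair -n migrationnamespace remotecluster > clusterpair.yaml

The generated spec will look similar to the following: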

apiVersion: stork.libopenstorage.org/v1alpha1
kind: ClusterPair
metadata:
  creationTimestamp: null
  name: remotecluster
  namespace: migrationnamespace
spec:
  config:
    clusters:
      kubernetes:
        LocationOfOrigin: /etc/kubernetes/admin.conf
        certificate-authority-data: <CA_DATA>
        server: https://192.168.56.74:6443
    contexts:
      kubernetes-admin@kubernetes:
        LocationOfOrigin: /etc/kubernetes/admin.conf
        cluster: kubernetes
        user: kubernetes-admin
    current-context: kubernetes-admin@kubernetes
    preferences: {}
    users:
      kubernetes-admin:
        LocationOfOrigin: /etc/kubernetes/admin.conf
        client-certificate-data: <CLIENT_CERT_DATA>
        client-key-data: <CLIENT_KEY_DATA>
  options:
    <insert_storage_options_here>: ""
    mode: DisasterRecovery
status:
  remoteStorageId: ""
  schedulerStatus: ""
  storageStatus: ""

In the generated ClusterPair spec, you need to make the following modifications:

  • You will see an unpopulated options section. It expects the options required to pair storage. However, since there is a single storage fabric stretched across both clusters, storage pairing is not needed: delete the <insert_storage_options_here>: "" line, so the section looks like the sketch below this list.
  • Under the options section, mode is set to DisasterRecovery. This is required for scheduling periodic migrations, covered in the next step.
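
After deleting the placeholder line, the options section of the spec reduces to:

  options:
    mode: DisasterRecovery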

Once the modifications are done, save the spec as clusterpair.yaml.

Apply the generated ClusterPair on the source cluster

On the source cluster, create the ClusterPair by applying the modified spec:

kubectl create -f clusterpair.yaml
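
Since the spec carries its own namespace (migrationnamespace), the object is created there. If you want to confirm it exists before checking its status, you can query it with kubectl:

kubectl get clusterpair -n migrationnamespace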

Verify the Pair status

Once you apply the above spec on the source cluster, you can check the status of the pairing using storkctl on the source cluster:

storkctl get clusterpair -n migrationnamespace
NAME               STORAGE-STATUS   SCHEDULER-STATUS   CREATED
remotecluster      NotProvided      Ready              09 Apr 19 18:16 PDT

On a successful pairing, you should see the scheduler status as Ready and the storage status as NotProvided, which is expected since storage is not paired in this stretched-cluster setup.

Once the pairing is configured, applications can fail over from one cluster to the other. To achieve that, you need to migrate the Kubernetes resources to the destination cluster. The next step will help you synchronize the Kubernetes resources between your clusters.


