Upgrade on Kubernetes


This guide describes the procedure to upgrade Portworx running as an OCI container, using talisman.

Upgrading Portworx

To upgrade to the 2.1.2 release (the latest stable at the time of this writing), run the following command:

curl -fsL https://install.portworx.com/2.1.2/upgrade | bash -s

This runs a script that will start a Kubernetes Job to perform the following operations:

  1. Updates RBAC objects that are being used by Portworx with the latest set of permissions that are required
  2. Triggers RollingUpdate of the Portworx DaemonSet to the default stable image and monitors that for completion

If you’re running Portworx 2.0.3.7, we recommend upgrading directly to 2.1.2 or later, as that release fixes several issues present in the previous build. Please see the release notes page for more details.
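
If you want to follow the upgrade as it runs, you can inspect the Job and its logs. The job name talisman below is an assumption based on the spec the default upgrade script applies; adjust it if your Job is named differently.

kubectl get job talisman -n kube-system
kubectl logs -n kube-system -l job-name=talisman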

Upgrading Stork

Fetch the latest Stork specs using the following curl command. Run these commands on any machine that has kubectl access to your cluster.

If you are using your own private/custom registry for container images, add &reg=<your-registry-url> to the below curl command (e.g. &reg=artifactory.company.org:6555).

KBVER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}')
curl -fsL -o stork-spec.yaml "https://install.portworx.com/2.1?kbver=$KBVER&comp=stork"

Next, apply it in your cluster.

kubectl apply -f stork-spec.yaml
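
To verify that the new Stork version rolled out, check the deployment status. This assumes Stork runs as the stork deployment in the kube-system namespace, which is where the generated spec places it:

kubectl rollout status deployment/stork -n kube-system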

Customizing the upgrade process

Specify a different Portworx upgrade image

You can invoke the upgrade script with the -t option to override the default Portworx image. For example, the below command upgrades Portworx to the portworx/oci-monitor:2.0.3.4 image.

curl -fsL https://install.portworx.com/2.1/upgrade | bash -s -- -t 2.0.3.4
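
After the upgrade completes, you can confirm which image the DaemonSet now references. This assumes the default DaemonSet name portworx in the kube-system namespace:

kubectl get daemonset portworx -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'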

Airgapped clusters

When upgrading Portworx in Kubernetes using the curl command in the examples above, a number of Docker images are fetched from container registries on the Internet (e.g. docker.io, gcr.io). If your nodes don’t have access to these registries, you first need to pull the required images into your cluster and then provide the precise image names to the upgrade process.

The sections below outline the exact steps.

Step 1: Pull the required images

Set the PX_VER variable to the version you want to upgrade to. To upgrade to the latest 2.1 stable release, use the curl expression below to look up the current tag; to upgrade to a specific 2.1 release, set PX_VER to that version instead.

# To determine the latest minor 2.1 release currently available, use the curl expression below
# Alternatively, you can specify the version yourself, e.g.: PX_VER=2.0.2.3
export PX_VER=$(curl -fs https://install.portworx.com/2.1/upgrade | awk -F'=' '/^OCI_MON_TAG=/{print $2}')

Now pull the required Portworx images.

export PX_IMGS="portworx/oci-monitor:$PX_VER portworx/px-enterprise:$PX_VER portworx/talisman:latest"

echo $PX_IMGS | xargs -n1 docker pull
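
To double-check that every image was pulled, you can list the local Portworx images by reference (a convenience check, not required by the upgrade):

docker images --filter=reference='portworx/*'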

Step 2: Loading Portworx images on your nodes

If your nodes have access to a private registry, follow Step 2a: Push to local registry server, accessible by air-gapped nodes.

Otherwise, follow Step 2b: Push directly to nodes using tarball.

Step 2a: Push to local registry server, accessible by air-gapped nodes

  1. Export your registry location:

    export REGISTRY=myregistry.net:5443

    The registry location above can be a registry and its port (e.g. myregistry.net:5443), or it can include your own repository in the registry (e.g. myregistry.net:5443/px-images).
  2. Push images to the above registry:

    # Trim trailing slashes:
    REGISTRY=${REGISTRY%/}
    # re-tag and push into custom/local registry defined previously
    # Check if using custom registry+repository (e.g. `REGISTRY=myregistry.net:5443/px-images`)
    # or just the registry (e.g. `REGISTRY=myregistry.net:5443`)
    if echo "$REGISTRY" | grep -q /; then
        # registry + repo are used -- we'll strip original image repositories
        for i in $PX_IMGS; do tg="$REGISTRY/$(basename $i)"; docker tag $i $tg; docker push $tg; done
    else
        # only registry used -- we'll keep original image repositories
        for i in $PX_IMGS; do tg="$REGISTRY/$i"; docker tag $i $tg; docker push $tg; done
    fi
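
To confirm the pushes succeeded, you can query your registry's catalog via the Docker Registry HTTP API v2, assuming your registry exposes it (add -k or the proper CA options if it uses a self-signed certificate):

# Strip any repository path from $REGISTRY before querying the API
curl -fsS "https://${REGISTRY%%/*}/v2/_catalog"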

Now that you have the images in your registry, continue with Step 3: Start the upgrade.

Step 2b: Push directly to nodes using tarball

The below steps save all Portworx images into a tarball, which you can then load onto each node individually.

  1. Save all Portworx images into a tarball called px-offline.tar.

    docker save -o px-offline.tar $PX_IMGS
  2. Load images from tarball

    You can load all images from the tarball on a node using the docker load command. The below command uses ssh to copy the tarball to nodes node1, node2, and node3 and load it there. Change the node names to match your environment.

    for no in node1 node2 node3; do
        cat px-offline.tar | ssh $no docker load
    done
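
To verify that the images landed on each node, you can list them over the same ssh connections (the node names are the same placeholders as above):

for no in node1 node2 node3; do
    ssh $no "docker images --filter=reference='portworx/*'"
done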

Step 3: Start the upgrade

Run the below script to start the upgrade on your airgapped cluster.

# Default image names
TALISMAN_IMAGE=portworx/talisman
OCIMON_IMAGE=portworx/oci-monitor

# Do we have a container registry override?
if [ -n "$REGISTRY" ]; then
   if echo "$REGISTRY" | grep -q /; then   # REGISTRY defines both registry and repository
      TALISMAN_IMAGE=$REGISTRY/talisman
      OCIMON_IMAGE=$REGISTRY/oci-monitor
   else                                    # REGISTRY contains only the registry; assume default repositories
      TALISMAN_IMAGE=$REGISTRY/portworx/talisman
      OCIMON_IMAGE=$REGISTRY/portworx/oci-monitor
   fi
fi

[[ -z "$PX_VER" ]] || ARG_PX_VER="-t $PX_VER"

curl -fsL https://install.portworx.com/2.1/upgrade | bash -s -- -I $TALISMAN_IMAGE -i $OCIMON_IMAGE $ARG_PX_VER
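
While the upgrade Job runs, you can monitor the DaemonSet rollout it triggers. This assumes the default DaemonSet name portworx in the kube-system namespace:

kubectl rollout status daemonset/portworx -n kube-system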

Troubleshooting

Find out status of Portworx pods

To get more information about the status of the Portworx DaemonSet across the nodes, run:

kubectl get pods -o wide -n kube-system -l name=portworx
NAME             READY   STATUS              RESTARTS   AGE   IP              NODE
portworx-9njsl   1/1     Running             0          16d   192.168.56.73   minion4
portworx-fxjgw   1/1     Running             0          16d   192.168.56.74   minion5
portworx-fz2wf   1/1     Running             0          5m    192.168.56.72   minion3
portworx-x29h9   0/1     ContainerCreating   0          0s    192.168.56.71   minion2

As we can see in the example output above:

  • looking at STATUS and READY, we can tell that the rolling upgrade is currently creating the container on the “minion2” node
  • looking at AGE, we can tell that:
    • “minion4” and “minion5” have had Portworx up for 16 days (likely still on the old version, waiting to be upgraded), while
    • “minion3” has had Portworx up for only 5 minutes (likely just finished the upgrade and restarted Portworx)
  • if we keep monitoring, we will observe that the upgrade does not move on to the “next” node until STATUS is “Running” and READY is 1/1 (meaning the readinessProbe reports the Portworx service as operational).
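
To watch the rollout progress continuously instead of re-running the command, add the -w (watch) flag:

kubectl get pods -o wide -n kube-system -l name=portworx -w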

Find out version of all nodes in Portworx cluster

Run the following command to inspect the Portworx cluster:

PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl cluster list
[...]
Nodes in the cluster:
ID      DATA IP         CPU        MEM TOTAL  ...   VERSION             STATUS
minion5 192.168.56.74   1.530612   4.0 GB     ...   1.2.11.4-3598f81    Online
minion4 192.168.56.73   3.836317   4.0 GB     ...   1.2.11.4-3598f81    Online
minion3 192.168.56.72   3.324808   4.1 GB     ...   1.2.11.10-421c67f   Online
minion2 192.168.56.71   3.316327   4.1 GB     ...   1.2.11.10-421c67f   Online
From the output above, we can confirm that:

  • “minion4” and “minion5” are still on the old Portworx version (1.2.11.4), while
  • “minion3” and “minion2” have already been upgraded to the latest version (in our case, 1.2.11.10).
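
As a quicker cross-check that doesn't require exec'ing into a pod, you can list the oci-monitor image each Portworx pod was started with. Note that this shows the image tag in the pod spec, not necessarily the running version pxctl reports:

kubectl get pods -n kube-system -l name=portworx \
    -o jsonpath='{range .items[*]}{.spec.nodeName}{"\t"}{.spec.containers[0].image}{"\n"}{end}'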
