
Kubernetes Migration: How To Move Data Freely Across Clusters

This blog focuses on migrating Kubernetes workloads from one cloud provider to another. We will migrate all of our data from Google Kubernetes Engine (GKE) to Azure Kubernetes Service (AKS) using Velero.

Prerequisite

  • A Kubernetes cluster running version 1.10 or later

Set Up Velero with Restic Integration

Like Helm, Velero consists of a client installed on your local machine and a server that runs in your Kubernetes cluster.

Installing Velero Client

You can find the latest release for your OS and architecture on the Velero releases page and download it from there:

$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.3.1/velero-v1.3.1-linux-amd64.tar.gz

Extract the tarball (adjust the version and platform to match your download) and move the Velero binary to /usr/local/bin:

$ tar -xvzf velero-v1.3.1-linux-amd64.tar.gz
$ sudo mv velero-v1.3.1-linux-amd64/velero /usr/local/bin/
$ velero help

Create a Bucket for Velero on GCP

Velero needs an object storage bucket where it will store the backup. Create a GCS bucket using:

$ gsutil mb gs://<bucket-name>
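For example, with a hypothetical bucket name (GCS bucket names must be globally unique):

$ gsutil mb gs://velero-gke-aks-migration-demo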

Create a Service Account for Velero

# Create a service account
gcloud iam service-accounts create velero --display-name "Velero service account"
SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:Velero service account" \
  --format 'value(email)')

# Define the permissions the service account needs
ROLE_PERMISSIONS=(
  compute.disks.get
  compute.disks.create
  compute.disks.createSnapshot
  compute.snapshots.get
  compute.snapshots.create
  compute.snapshots.useReadOnly
  compute.snapshots.delete
  compute.zones.get
)

# Create a custom role for Velero
PROJECT_ID=$(gcloud config get-value project)
gcloud iam roles create velero.server \
  --project $PROJECT_ID \
  --title "Velero Server" \
  --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

# Bind the role to the service account
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
  --role projects/$PROJECT_ID/roles/velero.server

# Grant the service account objectAdmin access to the bucket
gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://<bucket-name>

# Generate a service account key file for Velero and save it for later
gcloud iam service-accounts keys create credentials-velero \
  --iam-account $SERVICE_ACCOUNT_EMAIL
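A quick sanity check, assuming the same shell session, confirms that the key file was written and the role binding took effect:

$ ls -l credentials-velero
$ gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:$SERVICE_ACCOUNT_EMAIL" \
  --format="value(bindings.role)"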

Install Velero Server on GKE and AKS

Use the --use-restic flag on the velero install command to enable restic integration.

$ velero install \
  --provider gcp \
  --bucket <bucket-name> \
  --secret-file ./credentials-velero \
  --use-restic \
  --use-volume-snapshots=false \
  --plugins velero/velero-plugin-for-gcp:v1.0.1
$ velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0
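Run the same install against the AKS cluster as well, pointing at the same GCS bucket and credentials file, so the restore can read the backups written from GKE. A sketch, assuming your kubeconfig has an AKS context:

$ kubectl config use-context <aks-context>
$ velero install \
  --provider gcp \
  --bucket <bucket-name> \
  --secret-file ./credentials-velero \
  --use-restic \
  --use-volume-snapshots=false \
  --plugins velero/velero-plugin-for-gcp:v1.0.1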

After that, you can see a restic DaemonSet and a Velero Deployment in your Kubernetes cluster.

$ kubectl get po -n velero

Restic Components

In addition, the restic integration introduces three Custom Resource Definitions (CRDs) and their associated controllers:

Restic Repository

  • Maintains the complete lifecycle of Velero’s restic repositories.
  • Restic lifecycle commands such as restic init, restic check, and restic prune are handled by this CRD’s controller.

PodVolumeBackup

  • This CRD backs up persistent volumes of annotated pods in the selected namespaces.
  • Its controller executes backup commands on the pods to initiate backups.

PodVolumeRestore

  • This controller restores the volumes captured in restic backups into the respective pods and is responsible for executing the restore commands.
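You can inspect these resources directly once backups and restores start running; for example, in the default velero namespace:

$ kubectl -n velero get resticrepositories,podvolumebackups,podvolumerestores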

Back up an application on GKE

For this blog post, we assume the cluster already runs an application that uses persistent volumes. Alternatively, you can install WordPress as an example, as explained here.

We will migrate GKE Persistent Disks to Azure Disks using Velero.

Follow the steps below:

  1. Check the Deployment or StatefulSet for the name of the volume that mounts the persistent volume you want to back up; that name goes into the annotation in the next step. For example, here the volume is named “data”:

volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mongodb
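If you are unsure of the volume names, you can list them straight from the pod spec. A quick sketch, assuming the hypothetical pod name used later in this post:

$ kubectl -n application get pod wordpress-pod -o jsonpath='{.spec.volumes[*].name}'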

  2. Annotate the pods with the volume names you’d like to back up; only those volumes will be backed up:

$ kubectl -n NAMESPACE annotate pod/POD_NAME backup.velero.io/backup-volumes=VOLUME_NAME1,VOLUME_NAME2

For example, 

$ kubectl -n application annotate pod/wordpress-pod backup.velero.io/backup-volumes=data
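To confirm the annotation was applied, one quick check:

$ kubectl -n application get pod wordpress-pod -o yaml | grep backup-volumes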

  3. Take a backup of the namespace in which the application is running. You can also specify multiple namespaces, or skip the flag to back up all namespaces by default. In this blog, we back up only one namespace.

$ velero backup create testbackup --include-namespaces application
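For multiple namespaces, the flag accepts a comma-separated list; for example, with hypothetical namespace names:

$ velero backup create testbackup2 --include-namespaces application,database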

  4. Monitor the progress of the backup:

$ velero backup describe testbackup --details       

Once the backup is complete, you can list it using:

$ velero backup get

You can also check the backup in the GCP Console under Storage. Select the bucket you created and you should see a directory structure similar to this:

[Image: Velero GCP bucket]

Restore the application to AKS

Follow the steps below to restore the backup:

  1. Make sure the same StorageClass used by the GKE Persistent Volumes is available in Azure. For example, if the StorageClass of the PVs is “persistent-ssd”, create one with the same name on AKS using the template below:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: persistent-ssd   # same name as the GKE StorageClass
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
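Save the manifest and apply it to the AKS cluster, then verify it exists; a sketch, assuming a hypothetical file name:

$ kubectl apply -f storageclass-persistent-ssd.yaml
$ kubectl get storageclass persistent-ssd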

  2. Run the Velero restore:

$ velero restore create testrestore --from-backup testbackup 

You can monitor the progress of the restore:

$ velero restore describe testrestore --details

You can also check in the GCP Console: a new folder, “restores”, is created under the bucket.

After some time, you should see that the application namespace is back and the WordPress and MySQL pods are running again.
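You can verify this from the AKS side:

$ kubectl -n application get pods,pvc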

Troubleshooting

For any errors or issues related to Velero, the commands below are helpful for debugging:

# Describe the backup to see the status
$ velero backup describe testbackup --details
# Check backup logs, and look for errors if any
$ velero backup logs testbackup
# Describe the restore to see the status
$ velero restore describe testrestore --details
# Check restore logs, and look for errors if any
$ velero restore logs testrestore
# Check the Velero and restic pod logs, and look for errors if any
$ kubectl -n velero logs <velero-or-restic-pod-name>
# NOTE: You can change the default log level to debug by adding --log-level=debug
# as an argument to the container command in the Velero pod template spec.
# Describe the BackupStorageLocation resource and look for any errors in Events
$ kubectl describe BackupStorageLocation default -n velero
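To apply the debug log level mentioned in the note above without editing the manifest by hand, one option is a JSON patch against the Velero Deployment (a sketch; confirm the container and args layout in your cluster first):

$ kubectl -n velero patch deployment velero --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--log-level=debug"}]'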

Conclusion

Migrating persistent workloads across Kubernetes clusters on different cloud providers is difficult, but the restic integration in the Velero backup tool makes it possible. The integration is still described as beta quality on the official site. I performed a GKE-to-AKS migration successfully, and you can try combinations of other cloud providers as well.

The only drawback of using Velero to migrate data is that very large datasets take a while to move: it took me almost a day to migrate a 350 GB disk from GKE to AKS. For smaller datasets, though, this is an efficient and hassle-free way to migrate.


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them: our best-in-class engineering team.

We're looking for talented developers who are passionate about emerging technologies. If that's you, get in touch with us.

Explore current openings
