Installing Redis Cluster with Persistent Storage on Mesosphere DC/OS

Parvez Kazi

Cloud & DevOps

In the first part of this blog, we saw how to install a standalone Redis service on DC/OS with persistent storage using RexRay and AWS EBS volumes.

A single server is a single point of failure in any system, so to ensure high availability of the Redis database, we can deploy a master-slave cluster of Redis servers. In this blog, we will see how to set up such a 6-node (3 master, 3 slave) Redis cluster and persist its data using RexRay and AWS EBS volumes. After that, we will see how to import existing data into this cluster.

Redis Cluster

Redis Cluster is a form of replicated Redis deployment with a multi-master architecture. All data is sharded into 16384 hash slots; every master node is assigned a subset of these slots (generally distributed evenly), and each master is replicated by its slaves. This provides more resilience and scaling for production-grade deployments where heavy workloads are expected. Applications can connect to any node in cluster mode, and requests are redirected to the respective master node.
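For example, you can see which of the 16384 slots a given key hashes to (CRC16 of the key modulo 16384) with the CLUSTER KEYSLOT command, run against any node of a running cluster; the host and key name below are placeholders:

# shows the hash slot a key maps to
redis-cli -h <cluster-node-ip> -p 6379 cluster keyslot user:1001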

     

Redis Cluster (Source: Octo)

Objective: To create a Redis cluster from a number of services in a DC/OS environment with persistent storage, and to import existing Redis dump.rdb data into the cluster.

Prerequisites:

  • Make sure the RexRay component is running and in a healthy state on the DC/OS cluster.
DCOS Cluster

Steps:

  • As per the Redis documentation, a minimal cluster should have at least 3 master and 3 slave nodes, making a total of 6 Redis services.
  • All services will use a similar JSON configuration, differing only in the service name, external volume name, and port mappings.
  • We will deploy one Redis service for each Redis cluster node, and once all services are running, we will form the cluster among them.
  • We will use the host network for the Redis node containers; for that, we will pin each Redis node to a particular DC/OS node. This helps with troubleshooting the cluster (fixed IPs, so we can restart a Redis node at any time without data loss).
  • Using the host network adds a prerequisite: the number of DC/OS nodes must be >= the number of Redis nodes (a quick check is sketched below).
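A minimal way to confirm that prerequisite, assuming the DC/OS CLI is configured against your cluster (the assumption here is that dcos node prints one agent per line after a header row):

# count DC/OS agent nodes; we need at least 6 for a 6-node Redis cluster
dcos node | tail -n +2 | wc -l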
  1. First, create the Redis node services on DC/OS.
  2. Click the Add button in the Services tab of the DC/OS UI.
DCOS UI
  3. Click on JSON Configuration.
JSON Configuration
  4. Add the below JSON config for the Redis service, replacing the values written in BLOCK letters with # as prefix and suffix:
  • #NODENAME# - Name of the Redis node (e.g. redis-node-1)
  • #NODEHOSTIP# - IP of the DC/OS node on which this Redis node will run. This IP must be unique for each Redis node. (e.g. 10.2.12.23)
  • #VOLUMENAME# - Name of the persistent volume; give it a name that identifies the volume on AWS EBS (e.g. <dcos-cluster-name>-redis-node-<node-number>)
  • #NODEVIP# - VIP for the Redis node. It must be ‘Redis’ for the first Redis node; for the others it can be the same as NODENAME (e.g. redis-node-2)

{
  "id": "/#NODENAME#",
  "backoffFactor": 1.15,
  "backoffSeconds": 1,
  "constraints": [
    [
      "hostname",
      "CLUSTER",
      "#NODEHOSTIP#"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "external": {
          "name": "#VOLUMENAME#",
          "provider": "dvdi",
          "options": {
            "dvdi/driver": "rexray"
          }
        },
        "mode": "RW",
        "containerPath": "/data"
      }
    ],
    "docker": {
      "image": "parvezkazi13/redis:latest",
      "forcePullImage": false,
      "privileged": false,
      "parameters": []
    }
  },
  "cpus": 0.5,
  "disk": 0,
  "fetch": [],
  "healthChecks": [],
  "instances": 1,
  "maxLaunchDelaySeconds": 3600,
  "mem": 4096,
  "gpus": 0,
  "networks": [
    {
      "mode": "host"
    }
  ],
  "portDefinitions": [
    {
      "labels": {
        "VIP_0": "/#NODEVIP#:6379"
      },
      "name": "#NODEVIP#",
      "protocol": "tcp",
      "port": 6379
    }
  ],
  "requirePorts": true,
  "upgradeStrategy": {
    "maximumOverCapacity": 0,
    "minimumHealthCapacity": 0.5
  },
  "killSelection": "YOUNGEST_FIRST",
  "unreachableStrategy": {
    "inactiveAfterSeconds": 300,
    "expungeAfterSeconds": 600
  }
}
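If you prefer to fill in the placeholders from the command line instead of editing by hand, a minimal sketch with sed (the file names redis-node-template.json and redis-node-1.json, and the example values, are hypothetical):

# substitute example values into a saved copy of the template above
sed -e 's/#NODENAME#/redis-node-1/' \
    -e 's/#NODEHOSTIP#/10.2.12.23/' \
    -e 's/#VOLUMENAME#/my-cluster-redis-node-1/' \
    -e 's/#NODEVIP#/Redis/' \
    redis-node-template.json > redis-node-1.json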

  • After updating the highlighted fields, copy the above JSON into the JSON configuration box and click the ‘Review & Run’ button in the right corner; this will start the service with the above configuration.
  • Once the above service is up and running, repeat steps 2 to 4 for each Redis node with the respective values for the highlighted fields.
  • So if we go with a 6-node cluster, at the end we will have 6 Redis node services up and running, like:
Redis Nodes

Note: Since we are using an external volume for persistent storage, we cannot scale our services, i.e. each service can have at most one instance. If we try to scale, we will get the below error:

Scale Service Error

2. Form the Redis cluster among the Redis node services:

  • To create or manage the Redis cluster, first deploy the redis-cluster-util container on DC/OS using the below JSON config:

{
  "id": "/infrastructure/redis-cluster-util",
  "backoffFactor": 1.15,
  "backoffSeconds": 1,
  "constraints": [],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/backup",
        "hostPath": "backups",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "parvezkazi13/redis-util",
      "forcePullImage": true,
      "privileged": false,
      "parameters": []
    }
  },
  "cpus": 0.25,
  "disk": 0,
  "fetch": [],
  "instances": 1,
  "maxLaunchDelaySeconds": 3600,
  "mem": 4096,
  "gpus": 0,
  "networks": [
    {
      "mode": "host"
    }
  ],
  "portDefinitions": [],
  "requirePorts": true,
  "upgradeStrategy": {
    "maximumOverCapacity": 0,
    "minimumHealthCapacity": 0.5
  },
  "killSelection": "YOUNGEST_FIRST",
  "unreachableStrategy": {
    "inactiveAfterSeconds": 300,
    "expungeAfterSeconds": 600
  },
  "healthChecks": []
}
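As with the Redis node services, this JSON can also be deployed from the DC/OS CLI instead of the UI (assuming it is saved locally as redis-cluster-util.json, a file name chosen here for illustration):

# deploy the Marathon app definition from the CLI
dcos marathon app add redis-cluster-util.json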

This will run the service as:

Run Redis Service
  • Get the IP addresses of all Redis nodes to form the cluster, as a Redis cluster cannot be created with node hostnames / DNS names. This is an open issue.

Since we are using the host network, we need the IPs of the DC/OS nodes on which the Redis nodes are running.

Running Redis Nodes

Get all the Redis node IPs using:

NODE_BASE_NAME=redis-node
dcos task $NODE_BASE_NAME | grep -E "$NODE_BASE_NAME-\<[0-9]\>" | awk '{print $2":6379"}' | paste -s -d' '

Here, redis-node is the name prefix used for all Redis nodes.

Note the output of this command; we will use it in later steps.
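For the example cluster used in the rest of this post, the output looks like the following (your IPs will differ):

10.0.1.90:6379 10.0.0.19:6379 10.0.9.203:6379 10.0.9.79:6379 10.0.3.199:6379 10.0.9.104:6379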

  • Find the DC/OS node where the redis-cluster-util container is running and SSH to it using:

dcos node ssh --master-proxy --private-ip $(dcos task | grep "redis-cluster-util" | awk '{print $2}')

  • Now find the Docker container ID of redis-cluster-util and exec into it using:

docker exec -it $(docker ps -qf ancestor="parvezkazi13/redis-util") bash  

  • Now we are inside the redis-cluster-util container. Run the below command to form the Redis cluster:

redis-trib.rb create --replicas 1 <Space separated IP address:PORT pair of all Redis nodes>

  • Here, use the Redis node IP addresses retrieved in step 2.

redis-trib.rb create --replicas 1 10.0.1.90:6379 10.0.0.19:6379 10.0.9.203:6379 10.0.9.79:6379 10.0.3.199:6379 10.0.9.104:6379

  • Parameters:
  • The option --replicas 1 means that we want a slave for every master created.
  • The other arguments are the list of addresses (host:port) of the instances we want to use to create the new cluster.
  • Output:
  • Type ‘yes’ when it prompts you to accept the proposed slot configuration.
  • Run the below command to check the status of the newly created cluster:

redis-trib.rb check <Any redis node host:PORT>
Ex:
redis-trib.rb check 10.0.1.90:6379

  • Parameters:
  • host:port of any node from the cluster.
  • Output:
  • If everything is OK, it will show OK in the status; otherwise it will show ERR with the error message.
Output

3. Import existing dump.rdb to Redis cluster

  • At this point, all the Redis nodes should be empty, and each one should have an ID and some assigned slots:

Before reusing existing dump data, we have to reshard all slots onto one instance. We specify the number of slots to move (all, i.e. 16384), the ID of the node we move them to (here node 1, 10.0.1.90:6379), and where we take these slots from (all other nodes).

redis-trib.rb reshard 10.0.1.90:6379  

Parameters:

host:port of any node from the cluster.

Output:

It will prompt for the number of slots to move - here all of them, i.e. 16384

Receiving node ID - here the ID of node 10.0.1.90:6379 (redis-node-1)

Source node IDs - here all, as we want to move all slots to one node.

When prompted to proceed, type ‘yes’.

Output Prompt
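If you would rather skip the interactive prompts, redis-trib.rb also accepts the same answers as flags; a hedged sketch, where the receiving node ID is a placeholder you would copy from the check output:

# move all 16384 slots from every other master onto redis-node-1 without prompting
redis-trib.rb reshard --from all --to <redis-node-1-id> --slots 16384 --yes 10.0.1.90:6379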
  • Now check node 10.0.1.90:6379 again:

redis-trib.rb check 10.0.1.90:6379  

Parameters: host:port of any node from the cluster.

Output: It will show all 16384 slots moved to node 10.0.1.90:6379.

  • The next step is importing our existing Redis dump data.

Now copy the existing dump.rdb to our redis-cluster-util container using the below steps:

- Copy the existing dump.rdb to the DC/OS node on which the redis-cluster-util container is running. You can use scp from any other public server to the DC/OS node.

- Now that we have dump.rdb on the DC/OS node, copy it into the redis-cluster-util container using the below command:

docker cp dump.rdb $(docker ps -qf ancestor="parvezkazi13/redis-util"):/data

Now that we have dump.rdb inside our redis-cluster-util container, we can import it into our Redis cluster. Exec into the redis-cluster-util container using:

docker exec -it $(docker ps -qf ancestor="parvezkazi13/redis-util") bash
view raw cluster_util.sh hosted with ❤ by GitHub

This attaches to the already running redis-cluster-util container and starts a bash shell in it.

Run the below command to import dump.rdb into the Redis cluster:

rdb --command protocol /data/dump.rdb | redis-cli --pipe -h 10.0.1.90 -p 6379  

Parameters:

Path to dump.rdb

host:port of any node from the cluster.

Output:

If successful, you’ll see something like:

All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 4341259

as well as this in the Redis server logs:

95086:M 01 Mar 21:53:42.071 * 10000 changes in 60 seconds. Saving...
95086:M 01 Mar 21:53:42.072 * Background saving started by pid 98223
98223:C 01 Mar 21:53:44.277 * DB saved on disk
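Note: the rdb utility used in the import command above comes from the redis-rdb-tools package. If the util image you are using does not already bundle it (an assumption worth checking), it can typically be installed with pip:

pip install rdbtools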

WARNING:
Just as an Oracle DB instance can have multiple databases, Redis stores keys in multiple keyspaces (databases).
When Redis runs in cluster mode, it does not accept dumps that contain more than one keyspace. As per the documentation:

“Redis Cluster does not support multiple databases like the stand alone version of Redis. There is just database 0 and the SELECT command is not allowed.”

So when importing such a multi-keyspace Redis dump, the server fails at startup with the below error:

23049:M 16 Mar 17:21:17.772 * DB loaded from disk: 5.222 seconds
23049:M 16 Mar 17:21:17.772 # You can't have keys in a DB different than DB 0 when in Cluster mode. Exiting.
Solution / Workaround:

There is a redis-cli command, MOVE, to move keys from one keyspace to another.

You can run the below command to move all keys from keyspace 1 to keyspace 0:

redis-cli -h "$HOST" -p "$PORT" -n 1 --raw keys "*" |  xargs -I{} redis-cli -h "$HOST" -p "$PORT" -n 1 move {} 0
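If the dump uses several keyspaces rather than just keyspace 1, the same one-liner can be repeated per database; a hedged sketch (the range 1-15 assumes the default setting of 16 databases):

# move keys from every non-zero keyspace into keyspace 0 before the cluster import
for DB in $(seq 1 15); do
  redis-cli -h "$HOST" -p "$PORT" -n "$DB" --raw keys "*" | \
    xargs -I{} redis-cli -h "$HOST" -p "$PORT" -n "$DB" move {} 0
done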

  • Verify the import status using the below command (inside the redis-cluster-util container):

redis-cli -h 10.0.1.90 -p 6379 info keyspace

It will run the Redis INFO command on node 10.0.1.90:6379 and fetch the keyspace information, like below:

# Keyspace
db0:keys=33283,expires=0,avg_ttl=0
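As an additional sanity check, you can compare that key count against DBSIZE on the node that currently holds all the slots (a hedged example reusing the IPs from this post):

# DBSIZE should match the keys count reported under # Keyspace above
redis-cli -h 10.0.1.90 -p 6379 dbsize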

  • Now reshard all the slots evenly across all the master instances.

The reshard command will again list the existing nodes, their IDs, and their assigned slots.

redis-trib.rb reshard 10.0.1.90:6379

Parameters:

host:port of any node from the cluster.

Output:

It will prompt for the number of slots to move - here 16384 / 3 masters = 5461

Receiving node ID - here the ID of master node 2

Source node IDs - the ID of the first instance, which currently holds all the slots (master 1)

When prompted to proceed, type ‘yes’.

Repeat the above step, and for the receiving node ID, give the ID of master node 3.

  • After the above step, all 3 masters will have an equal number of slots, and the imported keys will be distributed among the master nodes.
  • Put some keys into the cluster for verification:

redis-cli -h 10.0.1.90 -p 6379 set foo bar
OK
redis-cli -h 10.0.1.90 -p 6379 set foo bar
(error) MOVED 4813 10.0.9.203:6379

The MOVED error above means that this key’s hash slot lives on instance 10.0.9.203:6379, so the client must redirect the request there. To follow redirections automatically, use the -c flag, which enables cluster mode:

redis-cli -h 10.0.1.90 -p 6379 -c set foo bar
OK

Redis Entrypoint

The application entrypoint for the Redis cluster mostly depends on how your Redis client handles cluster support. Generally, connecting to one of the master nodes should do the job.

Use the below host:port in your applications:

redis.marathon.l4lb.thisdcos.directory:6379
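A quick connectivity check against that entrypoint from a node or container inside the cluster (hedged; assumes redis-cli is available and that the first node’s VIP label was set as described earlier so the l4lb name resolves):

# should reply PONG if the VIP resolves and the cluster is healthy
redis-cli -c -h redis.marathon.l4lb.thisdcos.directory -p 6379 ping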

Automation of Redis Cluster Creation

We have an automation script in place that deploys the 6 Redis nodes and forms a cluster among them.

Script location: Github

  • It deploys 6 Marathon apps for the 6 Redis nodes. All Redis nodes are deployed on different DC/OS nodes, with CLUSTER_NAME as a prefix to the volume name.
  • Once all nodes are up and running, it deploys the redis-cluster-util app, which is used to form the Redis cluster.
  • Then it prints the Redis nodes and their IP addresses and prompts the user to proceed with cluster creation.
  • If the user chooses to proceed, it runs the redis-cluster-util app and creates the cluster using the collected IP addresses. The util container will prompt for some input that the user has to confirm.

Conclusion

We learned about Redis cluster deployment on DC/OS with persistent storage using RexRay. We also learned how RexRay automatically manages volumes on AWS EBS and how to integrate them into DC/OS apps/services. We saw how to use the redis-cluster-util container to manage the Redis cluster for different purposes, such as forming the cluster, resharding, and importing existing dump.rdb data. Finally, we looked at automating the whole cluster setup using the DC/OS CLI and bash.



