Redis migration tool: redis-shake
Redis-shake is a tool for synchronizing data between two Redis databases.
1. Download and install
wget https://github.com/alibaba/RedisShake/releases/download/release-v1.6.24-20191220/redis-shake-1.6.24.tar.gz
tar -zxvf redis-shake-1.6.24.tar.gz
cd redis-shake-1.6.24
2. Configuration file (redis-shake.conf)
# this is the configuration of redis-shake.
# if you have any problem, please check https://github.com/alibaba/RedisShake/wiki/FAQ
# id
id = redis-shake
# log file. If not configured, logs are printed to stdout (e.g. /var/log/redis-shake.log).
log.file = /data/db_tools/soft/redis/redis-shake-1.6.24/redis-shake-new.log
# log level: "none", "error", "warn", "info", "debug", "all". default is "info". "debug" == "all"
log.level = info
# pid path (e.g. /var/run/). If not configured, it defaults to the working directory.
# note this is a directory; the actual pid file is `{pid_path}/{id}.pid`.
pid_path =
# pprof port.
system_profile = 9310
# restful port for viewing metrics; -1 means disable. In `restore` mode RedisShake exits
# after finishing the RDB restore only if this value is -1; otherwise it blocks forever.
http_profile = 9320
# number of parallel routines used in RDB file syncing. default is 64.
parallel = 4
# source redis configuration.
# used in `dump`, `sync` and `rump`.
# source redis type, e.g. "standalone" (default), "sentinel" or "cluster".
#   1. "standalone": standalone db mode.
#   2. "sentinel": the redis address is read from sentinel.
#   3. "cluster": the source redis has several dbs.
#   4. "proxy": the proxy address; currently only used in "rump" mode.
# ip:port
# the source address can be one of the following:
#   1. single db address, for "standalone" type.
#   2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address,
#      e.g. mymaster:master@127.0.0.1:26379;127.0.0.1:26380 or @127.0.0.1:26379;127.0.0.1:26380,
#      for "sentinel" type.
#   3. cluster with several db nodes split by semicolon(;), for "cluster" type,
#      e.g. 10.1.1.1:20331;10.1.1.2:20441.
#   4. proxy address (used in "rump" mode only), for "proxy" type.
# for other cluster architectures such as codis, twemproxy or aliyun proxy, configure
# the db addresses of all masters or slaves.
source.address = source_ip:6381
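To make the address formats above concrete, here are a few hypothetical examples (the IPs and the master name are made up for illustration); only one form would appear in a real config, matching the configured source type:

```
# standalone
source.address = 10.0.0.1:6379
# sentinel: ${sentinel_master_name}:${master or slave}@sentinel addresses
source.address = mymaster:master@10.0.0.1:26379;10.0.0.2:26379
# cluster: several db nodes separated by ';'
source.address = 10.1.1.1:20331;10.1.1.2:20441
```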
# password of db/proxy, even if type is sentinel.
source.password_raw = AComCdgN09srE
# auth type, don't modify it
source.auth_type = auth
# tls enable, true or false. Currently only standalone is supported.
# open source redis does NOT support tls so far, but some cloud versions do.
source.tls_enable = false
# input RDB file.
# used in `decode` and `restore`.
# if the input is a list split by semicolon(;), e.g. rdb.0;rdb.1;rdb.2,
# redis-shake will restore them one by one.
source.rdb.input = local
# the concurrency of RDB syncing; default is len(source.address) or len(source.rdb.input).
# used in `dump`, `sync` and `restore`. 0 means default.
# this is useless when the source type isn't cluster or there is only one input RDB.
# e.g. with 5 db nodes / input RDBs and rdb.parallel = 3, only 3 full dumps are pulled
# concurrently; the 4th starts once one of them finishes its RDB and enters the
# incremental stage, and so on. In the end there are len(source.address) or
# len(rdb.input) incremental threads running at the same time.
source.rdb.parallel = 0
# for special cloud vendor: ucloud
# used in `decode` and `restore`.
# RDB files from the ucloud cluster edition carry a slot prefix that is stripped when
# this is set to: ucloud_cluster.
source.rdb.special_cloud =
# target redis configuration. used in `restore`, `sync` and `rump`.
# the type of target redis can be "standalone", "sentinel", "cluster" or "proxy".
#   1. "standalone": standalone db mode.
#   2. "sentinel": the redis address is read from sentinel.
#   3. "cluster": open source cluster (not supported currently).
#   4. "proxy": proxy layer in front of redis. Data is inserted in a round-robin way
#      if more than one proxy is given.
# ip:port
# the target address can be one of the following:
#   1. single db address, for "standalone" type.
#   2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address,
#      e.g. mymaster:master@127.0.0.1:26379;127.0.0.1:26380 or @127.0.0.1:26379;127.0.0.1:26380,
#      for "sentinel" type.
#   3. cluster with several db nodes split by semicolon(;), for "cluster" type.
#   4. proxy address (used in "rump" mode only), for "proxy" type.
target.address = target_cluster-01_ip:6379;target_cluster-02_ip:6379;target_cluster-03_ip:6379
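Both redis-shake and redis-full-check take this semicolon-separated address list. As a quick sanity check, the list can be split into individual nodes in bash the same way (a sketch using the placeholder addresses from this config):

```shell
# Split a ';'-separated cluster address list into individual host:port entries.
addrs="target_cluster-01_ip:6379;target_cluster-02_ip:6379;target_cluster-03_ip:6379"
IFS=';' read -r -a nodes <<< "$addrs"
printf '%s\n' "${nodes[@]}"   # prints one node per line
```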
# password of db/proxy, even if type is sentinel.
target.password_raw = AComCdgN09srE
# auth type, don't modify it
target.auth_type = auth
# all the data will be written into this db. < 0 means disable.
target.db = -1
# tls enable, true or false. Currently only standalone is supported.
# open source redis does NOT support tls so far, but some cloud versions do.
target.tls_enable = false
# output RDB file prefix.
# used in `decode` and `dump`.
# e.g. with 3 source dbs, dump produces ${output_rdb}.0, ${output_rdb}.1 and ${output_rdb}.2.
target.rdb.output = local_dump
# some redis proxies like twemproxy don't support fetching the version, so please set it here.
# e.g., target.version = 4.0
target.version =
# used for expiring keys: set the time gap to add on the target when the source and
# target timestamps are not the same.
fake_time =
# force rewrite when the key already exists on the destination.
# used in `restore`, `sync` and `rump`.
rewrite = true
# filter db, key, slot, lua.
# filter db.
# used in `restore`, `sync` and `rump`.
# e.g., "0;5;10" means match db0, db5 and db10.
# at most one of `filter.db.whitelist` and `filter.db.blacklist` can be given.
# if filter.db.whitelist is not empty, the listed dbs are passed while others are filtered.
# if filter.db.blacklist is not empty, the listed dbs are filtered while others are passed.
# all dbs are passed if no condition is given.
filter.db.whitelist =
filter.db.blacklist =
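As a concrete (hypothetical) example: to migrate only db0 and db5 and drop everything else, set the whitelist and leave the blacklist empty, since at most one of the two may be given:

```
filter.db.whitelist = 0;5
filter.db.blacklist =
```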
# filter keys by prefix string. multiple prefixes are separated by ';'.
# e.g., "abc;bzz" matches "abc", "abc1", "abcxxx", "bzz" and "bzzwww".
# used in `restore`, `sync` and `rump`.
# at most one of `filter.key.whitelist` and `filter.key.blacklist` can be given.
# if filter.key.whitelist is not empty, keys with the listed prefixes are passed while
# others are filtered.
# if filter.key.blacklist is not empty, keys with the listed prefixes are filtered while
# others are passed.
# all keys are passed if no condition is given.
filter.key.whitelist =
filter.key.blacklist =
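The prefix semantics can be sketched in shell: a key passes the whitelist if any ';'-separated prefix is a prefix of the key. The prefixes below mirror the "abc;bzz" example in the comments; `key_passes` is an illustrative helper, not part of redis-shake:

```shell
# Return 0 (pass) if the key starts with any ';'-separated prefix in $whitelist.
whitelist="abc;bzz"
key_passes() {
  local key=$1 p
  IFS=';' read -r -a prefixes <<< "$whitelist"
  for p in "${prefixes[@]}"; do
    [[ $key == "$p"* ]] && return 0
  done
  return 1
}
key_passes abcxxx && echo passed    # "abc" is a prefix of "abcxxx"
key_passes zzz    || echo filtered  # no whitelist prefix matches "zzz"
```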
# filter the given slots; multiple slots are separated by ';'.
# e.g., 1;2;3
# used in `sync`.
filter.slot =
# filter lua scripts. true means they do not pass. However, in redis 5.0 lua is
# converted to a transaction (multi+{commands}+exec), which is passed.
filter.lua = false
# big key threshold; the default is 500 * 1024 * 1024 bytes. Normal keys are written to
# the target directly with restore; if a key's value exceeds this threshold, its fields
# are split and written to the target one by one, in order. If the target Redis type is
# Codis, this must be set to 1; please check the FAQ for the reason. Setting it to 1 is
# also recommended when the target's major version is lower than the source's.
big_key_threshold = 524288000
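The default shown here can be double-checked with shell arithmetic: 500 * 1024 * 1024 bytes is exactly the 524288000 configured above.

```shell
# big_key_threshold default: 500 MiB expressed in bytes.
threshold=$((500 * 1024 * 1024))
echo "$threshold"   # 524288000
```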
# use the psync command.
# used in `sync`.
# psync is used by default; set to false to sync with the sync command instead. The code
# automatically falls back to sync for versions before 2.8.
psync = true
# enable metrics
# used in `sync`.
metric = true
# print metrics in the log
metric.print_log = false
# sender information.
# sender flush buffer size in bytes.
# used in `sync`. the buffer is flushed and sent once it exceeds this threshold.
sender.size = 104857600
# sender flush buffer size in number of oplogs.
# used in `sync`. the buffer is flushed when bigger than this threshold. When the target
# is a cluster, increasing this value uses some extra memory.
# delay channel size. once an oplog is sent to the target redis, its id and timestamp
# are also stored in this delay queue. the timestamp is used to calculate the delay
# when the ack is received from the target redis.
# used in `sync`.
sender.delay_channel_size = 65535
# enable the keep_alive option in TCP when connecting to redis.
# the unit is seconds. 0 means disable.
keep_alive = 0
# used in `rump`.
# number of keys captured on each scan. default is 100.
scan.key_number = 50
# used in `rump`.
# we support some special redis flavors that don't use the default `scan` command,
# currently tencent cloud cluster ("tencent_cluster") and alibaba cloud cluster
# ("aliyun_cluster").
scan.special_cloud =
# used in `rump`.
# we support fetching the data from a given file that lists the keys, one key per line,
# for cloud versions that support neither sync/psync nor scan.
scan.key_file =
# limit the transfer rate. Only used in `rump` currently.
# e.g., qps = 1000 means passing 1000 keys per second. default is 500,000 (0 means default).
qps = 200000
# ----------------splitter----------------
# the variables below are useless for the current open source version, so don't set them.
# replace hash tag.
# used in `sync`.
replace_hash_tag = false
3. Start
./redis-shake.linux -f -type=xxx  # xxx is one of sync, restore, dump, decode or rump; "sync" does full + incremental
4. Exporting and importing data
Export:
./redis-shake.linux -f -type=dump
Import:
./redis-shake.linux -f -type=restore
Note: when importing, configure the restore type and the data source, e.g.:
source.rdb.input = local_dump.0
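Putting the two steps together, a dump-then-restore migration could look like the following sketch (the conf file name after -f is an assumption; run the dump first, then edit source.rdb.input before restoring):

```
# 1. dump the source to local RDB files (one per source db: local_dump.0, ...)
./redis-shake.linux -f redis-shake.conf -type=dump
# 2. set source.rdb.input = local_dump.0 in the conf, then restore into the target
./redis-shake.linux -f redis-shake.conf -type=restore
```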
5. Verify sync completeness with redis-full-check (if the source or target is a cluster, separate the cluster addresses with semicolons (;) and wrap the whole list in double quotes (""))
5.1 Download and install
wget https://github.com/alibaba/RedisFullCheck/releases/download/release-v1.4.7-20191203/redis-full-check-1.4.7.tar.gz
tar -zxvf redis-full-check-1.4.7.tar.gz && cd redis-full-check-1.4.7
5.2 Run the check
./redis-full-check -s source_ip:6381 -p "AComCdgN09srE" -t "target_cluster-01_ip:6379;target_cluster-02_ip:6379;target_cluster-03_ip:6379" -a "AComCdgN09srE" --comparemode=1 --comparetimes=1 --qps=10 --batchcount=100 --sourcedbtype=0
Note: if the target is a cluster, set --targetdbtype=1; if the source is a standalone node, set --sourcedbtype=0.
5.3 Check result:
[INFO 2021-12-28-16:05:14 :328]: --------------- finished! ----------------
all finish successfully, totally 0 key(s) and 0 field(s) conflict
5.4 Inspect the result database:
sqlite3 result.db.1
5.5 Query the conflicting data:
select * from key;
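Inside the sqlite shell, the conflicts can be inspected with a couple of queries; redis-full-check writes differing keys into a `key` table and, for big keys, the differing fields into a `field` table:

```
-- keys that differ between source and target
select * from key;
-- individual conflicting fields of big keys
select * from field;
```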
6. More parameters: run ./redis-full-check --help
Selected parameters:
-s  Source Redis address and port. If the source is a cluster, separate the addresses with semicolons (;) and wrap the list in double quotes ("). Required.
-p  Source Redis password.
-t  Target Redis address and port. If the target is a cluster, separate the addresses with semicolons (;) and wrap the list in double quotes ("). Required.
-a  Target Redis password.
--sourcedbtype  Source type: 0 = standalone/master-replica; 1 = cluster; 2 = Alibaba Cloud/Tencent Cloud. e.g. --sourcedbtype=1
--sourcedbfilterlist  DBs on the source to check. Not needed for open source cluster Redis; for non-cluster Redis, omitting it checks all DBs. Separate multiple DBs with semicolons (;). e.g. --sourcedbfilterlist=0;1;2
--targetdbtype  Target type: 0 = standalone/master-replica; 1 = cluster; 2 = Alibaba Cloud/Tencent Cloud. e.g. --targetdbtype=0
--targetdbfilterlist  DBs on the target to check; same rules as --sourcedbfilterlist. e.g. --targetdbfilterlist=0;1;2
-d  File name for the conflicting-data list; default result.db.
--comparetimes  Number of comparison rounds. Default 3; minimum 1; no maximum, though no more than 5 is recommended. e.g. --comparetimes=1
-m  Compare mode: 1 = full comparison; 2 = compare only value lengths; 3 = compare only key existence; 4 = full comparison but skip big keys.
--qps  Rate limit. Minimum 1; the maximum depends on server performance. e.g. --qps=10
--filterlist  Keys to compare, separated by vertical bars (|). abc* matches all keys starting with abc; abc matches only the key abc. e.g. --filterlist=abc*|efg|m*