• redis-shake 2.x Notes



    Official site: https://github.com/aliyun/redis-shake

    I. Download

    Docs and downloads: https://github.com/alibaba/RedisShake
    Docs: https://github.com/alibaba/RedisShake/wiki
    Downloads: https://github.com/alibaba/RedisShake/releases

    wget https://github.com/alibaba/RedisShake/releases/download/release-v2.1.2-20220329/release-v2.1.2-20220329.tar.gz
    mkdir -p bb && tar -zxf release-v2.1.2-20220329.tar.gz -C bb/
    

    II. The redis-shake.conf file

    1. Basic configuration

    source.type: the type of the source redis. Four types are supported:
        standalone: single db node / master-replica mode. Choose this mode when pulling from db nodes directly, even for open-source proxy-db architectures such as codis.
        sentinel: sentinel mode.
        cluster: cluster mode. On Alibaba Cloud, users currently cannot obtain the db node addresses, so only proxy is possible there.
        proxy: proxy mode. For an Alibaba Cloud cluster edition, choose proxy when pulling from / writing to the proxy, and cluster when pulling from db nodes. For a normal cluster-to-cluster sync, choose cluster on the source side; proxy mode is currently only used for rump.
    source.address: the address of the source redis. Cluster editions are supported since version 1.6; each type takes a different address format:
        In standalone mode, fill in the address of the single db node; for a master-replica setup, use the master's or a slave's address.
        In sentinel mode, use the form sentinel_master_name:master_or_slave@sentinel_cluster_address. sentinel_master_name is the master name configured in sentinel, master_or_slave selects whether the db picked via sentinel is the master or a slave, and sentinel_cluster_address is a single sentinel node or a sentinel cluster address, with cluster addresses separated by semicolons (;). For example: mymaster:master@127.0.0.1:26379;127.0.0.1:26380. Note that sentinel mode can currently pull only one master or slave; to pull several nodes, start several shake instances. (A configuration sketch follows this list.)
        In cluster mode, fill in the cluster addresses, separated by semicolons (;). For example: 10.1.1.1:20331;10.1.1.2:20441. The auto-discovery mechanism described for sentinel above is supported too; just include an @ (see section 5 below).
        In proxy mode, fill in the address of a single proxy; this mode is currently only used for rump.
    source.password_raw: the password of the source redis.
    target.type: the type of the destination redis; same options as source.type. Note: if the destination is an Alibaba Cloud cluster edition, set this to proxy; setting cluster would only sync db0.
    target.address: the address of the destination redis. Cluster editions are supported since version 1.6; each type takes a different address format:
        In standalone mode, see source.address.
        In sentinel mode, use the form sentinel_master_name@sentinel_cluster_address. sentinel_master_name is the master name configured in sentinel, and sentinel_cluster_address is a single sentinel node or a sentinel cluster address, with cluster addresses separated by semicolons (;). For example: mymaster@127.0.0.1:26379;127.0.0.1:26380
        In cluster mode, see source.address.
        In proxy mode, fill in the proxy address(es). With multiple proxies, connections are balanced round-robin so that each source db connection maps to exactly one proxy. Choose this mode for an Alibaba Cloud cluster edition.
    target.password_raw: the password of the destination redis.
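
    For instance, a minimal sketch of a sentinel-source configuration following the address formats above (the master name mymaster and all addresses are placeholders):

    source.type: sentinel
    source.address: mymaster:master@127.0.0.1:26379;127.0.0.1:26380
    source.password_raw: 12345

    target.type: standalone
    target.address: 192.168.180.46:6379
    target.password_raw: 12345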
    

    2. target.db and filter.db.whitelist

    target.db = -1    # With filter.db.whitelist unset: standalone-to-standalone imports each source db into the same db on the target; standalone-to-cluster imports only the source db0 into the cluster's db0; cluster-to-standalone imports the cluster's db0 into the standalone db0. With filter.db.whitelist = 1;3: standalone-to-standalone imports only db1 and db3; standalone-to-cluster imports nothing, because a cluster only has db0; cluster-to-standalone likewise imports nothing, because the cluster only has db0.
    target.db = X    # If the target redis is standalone, X ranges from 0 to 15. With filter.db.whitelist unset: standalone-to-standalone imports all source dbs into the target dbX; standalone-to-cluster (X can only be 0 for a cluster) imports all source dbs into the cluster's db0; cluster-to-standalone imports the cluster's db0 into dbX. With filter.db.whitelist = 1;2: standalone-to-standalone imports the source db1 and db2 into dbX; standalone-to-cluster (X can only be 0) imports db1 and db2 into db0; cluster-to-standalone imports nothing, because the cluster only has db0.
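
    For instance, a minimal sketch of the second case that folds the source db1 and db2 into db0 on the target:

    target.db = 0
    filter.db.whitelist = 1;2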
    

    Note: because my cluster runs a higher redis version than the standalone instance, I changed the following parameter when importing from the cluster into the standalone instance:

    # If the destination's major version is lower than the source's, setting this to 1 is also recommended.
    #big_key_threshold = 524288000
    big_key_threshold = 1
    

    3. Example: single node to single node

    source.type: standalone
    source.address: 192.168.180.45:6379
    source.password_raw: 12345
    
    target.type: standalone
    target.address: 192.168.180.46:6379
    target.password_raw: 12345
    

    4. Example: master-replica / single node to cluster

    source.type: standalone
    source.address: 192.168.180.45:6379
    source.password_raw: 12345
    
    target.type: cluster
    target.address: 192.168.180.45:3791;192.168.180.45:3792;192.168.180.45:3793
    target.password_raw: 12345
    

    5. Example: cluster to cluster

    source.type: cluster
    source.address: 10.1.1.1:20441;10.1.1.1:20443;10.1.1.1:20445
    source.password_raw: 12345
    target.type: cluster
    target.address: 10.1.1.1:20551;10.1.1.1:20553;10.1.1.1:20555
    target.password_raw: 12345
    

    For source.address and target.address you need to list all db nodes of the source cluster and all db nodes of the destination cluster. Alternatively, enable auto-discovery: start the address with '@' and redis-shake will run the cluster nodes command to detect the nodes itself. For source.address, you can put master (the default) or slave in front of the '@' to pull from the masters or from the slaves respectively; for target.address, only master (or nothing) may be given:

    source.type: cluster
    source.address: master@10.1.1.1:20441 # auto-discovers all nodes of the cluster that 10.1.1.1:20441 belongs to and pulls from every master; likewise, slave@10.1.1.1:20441 would scan all slave nodes of the cluster.
    source.password_raw: 12345
    target.type: cluster
    target.address: @10.1.1.1:20551 # auto-discovers all nodes of the cluster that 10.1.1.1:20551 belongs to and writes to every master.
    target.password_raw: 12345
      The above describes an open-source cluster. The source can also be some other cluster architecture, for example a proxied cluster (codis, or another cloud's cluster architecture; some of these do not support auto-discovery, in which case every master or slave address must be configured by hand). In that case pull from the db nodes: still set source.type to cluster, and list all db addresses in source.address (the master or one slave of each shard is enough).

    6. Example: cluster to single node

    source.type: cluster
    source.address: 192.168.180.45:3791;192.168.180.45:3792;192.168.180.45:3793
    source.password_raw: 12345
    
    target.type: standalone
    target.address: 192.168.180.45:6379
    target.password_raw: 12345
    
    target.db = 0
    

    7. Example: cluster to proxy

    source.type: cluster
    source.address: 10.1.1.1:20441;10.1.1.1:20443;10.1.1.1:20445;10.1.1.1:20447
    source.password_raw: 12345
    target.type: proxy
    target.address: 10.1.1.1:30331;10.1.1.1:30441;10.1.1.1:30551
    target.password_raw: 12345
    

    source.address supports the auto-discovery mechanism as well. target.address holds the proxy addresses; writes are distributed round-robin across the proxies. For this configuration that means 10.1.1.1:20441 and 10.1.1.1:20447 write to 10.1.1.1:30331, 10.1.1.1:20443 writes to 10.1.1.1:30441, and 10.1.1.1:20445 writes to 10.1.1.1:30551.
      As noted in section 5, the source can also be some other cluster architecture.

    8. How do I use dump mode?

    Configure source.type, source.address and source.password_raw; the rdb file is then dumped automatically, named after the rdb output parameter (target.rdb.output in the 2.x conf). In cluster mode (see the source.* settings in section 5) there will be multiple rdb files.
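
    A minimal sketch; target.rdb.output is the 2.x conf key for the dump file prefix (local_dump is the default visible in the startup log below, and the files get per-node numeric suffixes):

    # redis-shake.conf (excerpt)
    source.type = standalone
    source.address = 192.168.180.45:6379
    source.password_raw = 12345
    target.rdb.output = local_dump

    # then run:
    ./redis-shake.linux -conf=redis-shake.conf -type=dump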

    9. How do I use decode mode?

    Just set source.rdb.input to the rdb file(s) to decode.
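
    A minimal sketch, assuming a dump file named local_dump.0 produced by dump mode; decode parses the rdb entries into human-readable output:

    source.rdb.input = local_dump.0
    ./redis-shake.linux -conf=redis-shake.conf -type=decode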

    10. How do I use restore mode (restore from an input rdb)?

    Set source.rdb.input to the list of input rdb files (separated by semicolons ";"), and configure target.type, target.address and target.password_raw for the destination to restore into (see the target.* settings in section 5).
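
    A minimal sketch restoring one dump file into a standalone destination (addresses are placeholders):

    # redis-shake.conf (excerpt)
    source.rdb.input = local_dump.0
    target.type = standalone
    target.address = 192.168.180.46:6379
    target.password_raw = 12345

    # then run:
    ./redis-shake.linux -conf=redis-shake.conf -type=restore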

    11. How do I use rump mode?

    Configure the type, address and password of the source and destination redis. If the source is an Alibaba Cloud or Tencent Cloud cluster edition, set scan.special_cloud to aliyun_cluster or tencent_cluster; keys are then synced to the destination by scanning the source. If the source does not support the scan command, you can instead set scan.key_file to a file listing input keys; shake fetches those keys from the source and syncs them. Example configuration:

    source.type: proxy
    source.address: 10.1.1.1:20441 # address of the source proxy
    source.password_raw: 12345
    target.type: proxy
    target.address: 10.1.1.1:30331;10.1.1.1:30441;10.1.1.1:30551 # addresses of the destination proxies; listing several load-balances the writes
    target.password_raw: 12345
    scan.special_cloud: aliyun_cluster
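
    If the source cannot run scan, a hedged variant uses scan.key_file instead of scan.special_cloud (keys.txt is a placeholder file listing the keys to fetch):

    scan.key_file = keys.txt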
    

    12. How do I improve full-sync performance?

    Increase the parallel setting.
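
    For example, in redis-shake.conf (32 is the value visible in the startup log below; tune it to your hardware):

    parallel = 32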

    13. How do I enable filtering?

    filter.db.whitelist lets only the listed dbs through and filters out the rest (filter.db.blacklist does the opposite); filter.key.whitelist/blacklist and filter.slot work the same way.
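
    A sketch of whitelist filtering, assuming 2.x's prefix semantics for filter.key (all values are placeholders):

    filter.db.whitelist = 0;1
    filter.key.whitelist = user_;order_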

    14. How do I control concurrency?

    Set source.rdb.parallel; see the related GitHub issue. Applies to the dump, restore and sync modes.
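
    A sketch (the value is a placeholder; it caps how many rdb files / source nodes are processed concurrently):

    source.rdb.parallel = 4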

    15. How do I sync different logical dbs to a cluster edition?

    target.db = 0 maps every source db to db0 on the destination. If you only want the source db3 synced to the destination db0 and nothing else, additionally set filter.db.whitelist = 3.
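
    Put together, a minimal sketch:

    target.db = 0
    filter.db.whitelist = 3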

    16. A complete example:

    In most cases, only the following parameters need changing:

    # source redis
    source.type = cluster
    source.address = 192.168.1.21:6380;192.168.1.22:6380;192.168.1.23:6380
    source.password_raw = root123456

    # target redis
    target.type = cluster
    target.address = 192.168.1.21:6380;192.168.1.22:6380;192.168.1.23:6380
    target.password_raw = root123456

    # redis keys to migrate
    filter.key.whitelist = redis_ceshi_key_01;redis_ceshi_key_02

    # policy when a key already exists on the target (overwrite)
    key_exists = rewrite
    

    III. Startup

    1. Run the binary:

    ./redis-shake.linux -conf=redis-shake.conf -type=xxx

    xxx is one of sync, restore, dump, decode, rump.
    For full + incremental synchronization, choose sync.
    On macOS use redis-shake.darwin; on Windows use redis-shake.windows.

    2. Log output

    sync log output

    During the full-sync phase, progress is shown as a percentage:
    
    [root@app01 bin]# ./redis-shake.linux -conf=redis-shake.conf -type=sync
    2023/02/06 11:13:01 [WARN] source.auth_type[auth] != auth
    2023/02/06 11:13:01 [WARN] target.auth_type[auth] != auth
    2023/02/06 11:13:01 [INFO] source rdb[192.168.180.45:6379] checksum[yes]
    2023/02/06 11:13:01 [WARN]
    ______________________________
    \                             \           _         ______ |
     \                             \        /   \___-=O'/|O'/__|
      \   RedisShake, here we go !! \_______\          / | /    )
      /                             /        '/-==__ _/__|/__=-|  -GM
     /        Alibaba Cloud        /         *             \ | |
    /                             /                        (o)
    ------------------------------
    if you have any problem, please visit https://github.com/alibaba/RedisShake/wiki/FAQ
    
    2023/02/06 11:13:01 [INFO] redis-shake configuration: {"ConfVersion":1,"Id":"redis-shake","LogFile":"","LogLevel":"info","SystemProfile":9310,"HttpProfile":9320,"Parallel":32,"SourceType":"standalone","SourceAddress":"192.168.180.45:6379","SourcePasswordRaw":"***","SourcePasswordEncoding":"***","SourceAuthType":"auth","SourceTLSEnable":false,"SourceTLSSkipVerify":false,"SourceRdbInput":[],"SourceRdbParallel":1,"SourceRdbSpecialCloud":"","TargetAddress":"192.168.180.46:6379","TargetPasswordRaw":"***","TargetPasswordEncoding":"***","TargetDBString":"-1","TargetDBMapString":"","TargetAuthType":"auth","TargetType":"standalone","TargetTLSEnable":false,"TargetTLSSkipVerify":false,"TargetRdbOutput":"local_dump","TargetVersion":"7.0.8","FakeTime":"","KeyExists":"rewrite","FilterDBWhitelist":[],"FilterDBBlacklist":[],"FilterKeyWhitelist":[],"FilterKeyBlacklist":[],"FilterSlot":[],"FilterCommandWhitelist":[],"FilterCommandBlacklist":[],"FilterLua":false,"BigKeyThreshold":524288000,"Metric":true,"MetricPrintLog":false,"SenderSize":104857600,"SenderCount":4095,"SenderDelayChannelSize":65535,"SenderTickerMs":20,"KeepAlive":0,"PidPath":"","ScanKeyNumber":50,"ScanSpecialCloud":"","ScanKeyFile":"","Qps":200000,"ResumeFromBreakPoint":false,"Psync":true,"NCpu":0,"HeartbeatUrl":"","HeartbeatInterval":10,"HeartbeatExternal":"","HeartbeatNetworkInterface":"","ReplaceHashTag":false,"ExtraInfo":false,"SockFileName":"","SockFileSize":0,"FilterKey":null,"FilterDB":"","Rewrite":false,"SourceAddressList":["192.168.180.45:6379"],"TargetAddressList":["192.168.180.46:6379"],"SourceVersion":"4.0.14","HeartbeatIp":"127.0.0.1","ShiftTime":0,"TargetReplace":false,"TargetDB":-1,"Version":"develop,e43689343aa046b19965854b5794828b1484e457,go1.17,2022-03-29_15:03:51","Type":"sync","TargetDBMap":null}
    2023/02/06 11:13:01 [INFO] DbSyncer[0] starts syncing data from 192.168.180.45:6379 to [192.168.180.46:6379] with http[9321], enableResumeFromBreakPoint[false], slot boundary[-1, -1]
    2023/02/06 11:13:01 [INFO] DbSyncer[0] psync connect '192.168.180.45:6379' with auth type[auth] OK!
    2023/02/06 11:13:01 [INFO] DbSyncer[0] psync send listening port[9320] OK!
    2023/02/06 11:13:01 [INFO] DbSyncer[0] try to send 'psync' command: run-id[?], offset[-1]
    2023/02/06 11:13:01 [INFO] Event:FullSyncStart  Id:redis-shake
    2023/02/06 11:13:01 [INFO] DbSyncer[0] psync runid = ab2acc9e925de216328d0fc501ceed4a9f6730f4, offset = 0, fullsync
    2023/02/06 11:13:01 [INFO] DbSyncer[0] rdb file size = 318
    2023/02/06 11:13:01 [INFO] Aux information key:redis-ver value:4.0.14
    2023/02/06 11:13:01 [INFO] Aux information key:redis-bits value:64
    2023/02/06 11:13:01 [INFO] Aux information key:ctime value:1675653181
    2023/02/06 11:13:01 [INFO] Aux information key:used-mem value:2107344
    2023/02/06 11:13:01 [INFO] Aux information key:repl-stream-db value:0
    2023/02/06 11:13:01 [INFO] Aux information key:repl-id value:ab2acc9e925de216328d0fc501ceed4a9f6730f4
    2023/02/06 11:13:01 [INFO] Aux information key:repl-offset value:0
    2023/02/06 11:13:01 [INFO] Aux information key:aof-preamble value:0
    2023/02/06 11:13:01 [INFO] db_size:2 expire_size:0
    2023/02/06 11:13:01 [INFO] db_size:2 expire_size:0
    2023/02/06 11:13:01 [INFO] db_size:1 expire_size:0
    2023/02/06 11:13:01 [INFO] db_size:1 expire_size:0
    2023/02/06 11:13:01 [INFO] db_size:1 expire_size:0
    2023/02/06 11:13:01 [INFO] db_size:1 expire_size:0
    2023/02/06 11:13:01 [INFO] DbSyncer[0] total = 318B -         318B [100%]  entry=8
    2023/02/06 11:13:01 [INFO] DbSyncer[0] sync rdb done
    2023/02/06 11:13:01 [INFO] DbSyncer[0] FlushEvent:IncrSyncStart Id:redis-shake
    
    Incremental sync: once "sync rdb done" appears, the dbSyncer has entered the incremental phase:
    
    2023/02/06 11:14:02 [INFO] DbSyncer[0] sync:  +forwardCommands=2      +filterCommands=0      +writeBytes=28
    2023/02/06 11:14:03 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:04 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:05 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:06 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:07 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:08 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:09 [INFO] DbSyncer[0] sync:  +forwardCommands=1      +filterCommands=0      +writeBytes=4
    2023/02/06 11:14:10 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:11 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:12 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    2023/02/06 11:14:13 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
    ^C2023/02/06 11:14:14 [INFO] receive signal: interrupt
    [root@app01 bin]#
    
    Here forwardCommands is the number of commands forwarded, filterCommands is the number of commands filtered out (e.g. opinfo, or anything matched by a configured filter), and writeBytes is the number of bytes written.
    

    rump log output

    [root@app01 bin]# ./redis-shake.linux -conf=redis-shake.conf -type=rump
    2023/02/06 11:10:06 [WARN] source.auth_type[auth] != auth
    2023/02/06 11:10:06 [WARN] target.auth_type[auth] != auth
    2023/02/06 11:10:06 [WARN]
    ______________________________
    \                             \           _         ______ |
     \                             \        /   \___-=O'/|O'/__|
      \   RedisShake, here we go !! \_______\          / | /    )
      /                             /        '/-==__ _/__|/__=-|  -GM
     /        Alibaba Cloud        /         *             \ | |
    /                             /                        (o)
    ------------------------------
    if you have any problem, please visit https://github.com/alibaba/RedisShake/wiki/FAQ
    
    2023/02/06 11:10:06 [INFO] redis-shake configuration: {"ConfVersion":1,"Id":"redis-shake","LogFile":"","LogLevel":"info","SystemProfile":9310,"HttpProfile":9320,"Parallel":32,"SourceType":"standalone","SourceAddress":"192.168.180.46:6379","SourcePasswordRaw":"***","SourcePasswordEncoding":"***","SourceAuthType":"auth","SourceTLSEnable":false,"SourceTLSSkipVerify":false,"SourceRdbInput":[],"SourceRdbParallel":0,"SourceRdbSpecialCloud":"","TargetAddress":"192.168.180.45:6379","TargetPasswordRaw":"***","TargetPasswordEncoding":"***","TargetDBString":"-1","TargetDBMapString":"","TargetAuthType":"auth","TargetType":"standalone","TargetTLSEnable":false,"TargetTLSSkipVerify":false,"TargetRdbOutput":"local_dump","TargetVersion":"4.0.14","FakeTime":"","KeyExists":"rewrite","FilterDBWhitelist":[],"FilterDBBlacklist":[],"FilterKeyWhitelist":[],"FilterKeyBlacklist":[],"FilterSlot":[],"FilterCommandWhitelist":[],"FilterCommandBlacklist":[],"FilterLua":false,"BigKeyThreshold":1,"Metric":true,"MetricPrintLog":false,"SenderSize":104857600,"SenderCount":4095,"SenderDelayChannelSize":65535,"SenderTickerMs":20,"KeepAlive":0,"PidPath":"","ScanKeyNumber":50,"ScanSpecialCloud":"","ScanKeyFile":"","Qps":200000,"ResumeFromBreakPoint":false,"Psync":false,"NCpu":0,"HeartbeatUrl":"","HeartbeatInterval":10,"HeartbeatExternal":"","HeartbeatNetworkInterface":"","ReplaceHashTag":false,"ExtraInfo":false,"SockFileName":"","SockFileSize":0,"FilterKey":null,"FilterDB":"","Rewrite":false,"SourceAddressList":["192.168.180.46:6379"],"TargetAddressList":["192.168.180.45:6379"],"SourceVersion":"7.0.8","HeartbeatIp":"127.0.0.1","ShiftTime":0,"TargetReplace":true,"TargetDB":-1,"Version":"develop,e43689343aa046b19965854b5794828b1484e457,go1.17,2022-03-29_15:03:51","Type":"rump","TargetDBMap":null}
    2023/02/06 11:10:06 [INFO] start dbRumper[0]
    2023/02/06 11:10:06 [INFO] dbRumper[0] get node count: 1
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] start fetcher with special-cloud[]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] fetch db list: [1 5 6 0]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] fetch logical db: 1
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] start fetching node db[1]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] finish fetching db[1]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] fetch logical db: 5
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] start fetching node db[5]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] finish fetching db[5]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] fetch logical db: 6
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] start fetching node db[6]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] finish fetching db[6]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] fetch logical db: 0
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] start fetching node db[0]
    2023/02/06 11:10:06 [INFO] dbRumper[0] executor[0] finish fetching db[0]
    2023/02/06 11:10:07 [INFO] dbRumper[0] total = 4(keys) -          0(keys) [  0%]  entry=0
    2023/02/06 11:10:07 [INFO] dbRumper[0] executor[0] restore big key[key71] with length[19], pttl[0], db[1]
    2023/02/06 11:10:07 [INFO] RestoreBigkey select db[1]
    2023/02/06 11:10:07 [INFO] dbRumper[0] executor[0] restore big key[key75] with length[19], pttl[0], db[5]
    2023/02/06 11:10:07 [INFO] RestoreBigkey select db[5]
    2023/02/06 11:10:07 [INFO] dbRumper[0] executor[0] restore big key[key76] with length[19], pttl[0], db[6]
    2023/02/06 11:10:07 [INFO] RestoreBigkey select db[6]
    2023/02/06 11:10:07 [INFO] dbRumper[0] executor[0] restore big key[key70] with length[19], pttl[0], db[0]
    2023/02/06 11:10:07 [INFO] RestoreBigkey select db[0]
    2023/02/06 11:10:08 [INFO] dbRumper[0] executor[0] finish!
    2023/02/06 11:10:08 [INFO] dbRumper[0] finished!
    2023/02/06 11:10:08 [INFO] all rumpers finish!, total data: [map[Details:map[192.168.180.46:6379:map[0:map[avgSize:19 cCommands:0 keyChan:0 maxSize:19 minSize:19 rBytes:76 rCommands:4 resultChan:0 wBytes:0 wCommands:0]]]]]
    2023/02/06 11:10:08 [INFO] execute runner[*run.CmdRump] finished!
    2023/02/06 11:10:08 [WARN]
                    ##### | #####
    Oh we finish ? # _ _ #|# _ _ #
                   #      |      #
             |       ############
                         # #
      |                  # #
                        #   #
             |     |    #   #      |        |
      |  |             #     #               |
             | |   |   # .-. #         |
                       #( O )#    |    |     |
      |  ################. .###############  |
       ##  _ _|____|     ###     |_ __| _  ##
      #  |                                |  #
      #  |    |    |    |   |    |    |   |  #
       ######################################
                       #     #
                        #####
    [root@app01 bin]#
    
  • Original article: https://blog.csdn.net/lihongbao80/article/details/127766383