First, generate a batch-command script (note: this uses a standalone Redis instance), then execute the commands in bulk through a pipe.
[root@localhost myredis]# for((i=1;i<=100*10000;i++)); do echo "set k$i v$i" >> ./redisTest.txt; done
[root@localhost myredis]# cat redisTest.txt | redis-cli -a 123456 -p 6379 --pipe
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 1000000
[root@localhost myredis]# redis-cli -a 123456 -p 6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6379> DBSIZE
(integer) 1000000
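The shell loop above can also be sketched in Python. This only generates the command file that is later fed to `redis-cli --pipe`; it does not talk to Redis itself, and the function name `gen_commands` is illustrative, not from the original.

```python
# Generate the same "set k<i> v<i>" command file the shell loop produces,
# one command per line, ready for `cat redisTest.txt | redis-cli --pipe`.
def gen_commands(path, count):
    with open(path, "w") as f:
        for i in range(1, count + 1):
            f.write(f"set k{i} v{i}\n")

gen_commands("redisTest.txt", 100 * 10000)  # 1,000,000 commands
```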
How do we find big keys?
1. Use the `redis-cli -a 123456 -p 6379 --bigkeys` command
[root@localhost myredis]# redis-cli -a 123456 -p 6379 --bigkeys
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type. You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).
[00.00%] Biggest string found so far '"k761048"' with 7 bytes
[90.35%] Biggest string found so far '"k1000000"' with 8 bytes
[100.00%] Sampled 1000000 keys so far
-------- summary -------
Sampled 1000000 keys in the keyspace!
Total key length in bytes is 6888896 (avg len 6.89)
Biggest string found '"k1000000"' has 8 bytes
0 lists with 0 items (00.00% of keys, avg size 0.00)
0 hashs with 0 fields (00.00% of keys, avg size 0.00)
1000000 strings with 6888896 bytes (100.00% of keys, avg size 6.89)
0 streams with 0 entries (00.00% of keys, avg size 0.00)
0 sets with 0 members (00.00% of keys, avg size 0.00)
0 zsets with 0 members (00.00% of keys, avg size 0.00)
2. `MEMORY USAGE <key name>`
127.0.0.1:6379> memory usage k232323
(integer) 72
Previously, our code logic might have been written like this:
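The original screenshot is not reproduced here, so the following is a minimal sketch of the usual naive cache-aside read it describes. The cache and DB are simulated with dicts (stand-ins for Redis and MySQL), and the names `get_user`, `cache`, and `db` are illustrative assumptions.

```python
cache = {}                  # stands in for Redis
db = {"1001": "Alice"}      # stands in for MySQL

def get_user(user_id):
    # 1. Try the cache first
    value = cache.get(user_id)
    if value is not None:
        return value
    # 2. Cache miss: read the database, then backfill the cache
    value = db.get(user_id)
    if value is not None:
        cache[user_id] = value
    return value
```

Under low traffic this is fine; the problem appears when a hot key expires and many threads all miss at once.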
The code above has no problem in its business logic, and it is fine for small and mid-size shops (QPS ≤ 1000), but it does not hold up at big-company scale. Let's look at the optimized versions below (only the method bodies are shown…). The extra safeguards are there to prevent a sudden key expiry from hammering MySQL, i.e., to avoid cache breakdown as much as possible. This is the so-called double-check locking strategy.
1. First, look at the first approach: two screenshots, two failure cases.
Of these, both the first and the second can produce data inconsistency under multi-threading.
The last one is the most robust.