This article deploys a Redis cluster on Kubernetes: 6 nodes forming a 3-master / 3-replica cluster.
Redis listens on port 6379 by default; for security this article changes the port to 6360. Redis is started with our own redis.conf, which is mounted into the pod from a ConfigMap volume. One point needs special attention: in the custom redis.conf, "daemonize no" must be set to no, i.e. Redis must run in the foreground rather than as a background daemon. On a plain server Redis usually runs in the background, so why must it run in the foreground inside a container? Because the main process of a container has to stay in the foreground; otherwise the container exits as soon as the process daemonizes. This is basic container behaviour, and the official nginx image does the same in its Dockerfile: ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]. So, to stress it once more: "daemonize no" in redis.conf is mandatory.
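Once the pods defined later in this article are running, you can optionally confirm that redis-server really is the container's foreground process (PID 1). The pod name redis-sts-0 assumes the StatefulSet created below:
kubectl exec redis-sts-0 -- cat /proc/1/cmdline | tr '\0' ' '   #expected to show redis-server /etc/redis/redis.conf plus the announce-ip arguments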
Now let's deploy the Redis cluster.
A Kubernetes cluster normally already has dynamic storage provisioning in place; setting that up is skipped here, see https://blog.csdn.net/MssGuo/article/details/123611986 (the dynamic storage provisioning article).
Note: Redis normally uses /var/lib/redis/ and /etc/redis/ as its default directories, and the official Redis image uses /data as its working directory. To keep the data layout tidy, this article mounts redis.conf under /etc/redis/ and puts the Redis log files and data files under /data.
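Before continuing, it is worth confirming that the StorageClass the PVC template will reference actually exists in your cluster (the name nfs-storageclass is the one used later in this article; substitute your own):
kubectl get storageclass                      #list all StorageClasses
kubectl get storageclass nfs-storageclass     #the one referenced by the volumeClaimTemplates below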
[root@master redis]# vim redis.conf #write a redis.conf configuration file
[root@master redis]# grep -Ev "^$|#" redis.conf #show redis.conf without blank and comment lines; the result is below
bind 0.0.0.0
protected-mode yes
port 6360 #Redis port, changed to 6360 for security
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no #whether Redis runs as a background daemon; must be no inside a container
supervised no
pidfile /data/redis.pid #Redis pid file, kept under /data
loglevel notice
logfile /data/redis_log #Redis log file, kept under /data
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb #this file is created in the /data directory defined by dir
dir /data #data directory
masterauth iloveyou #password the cluster nodes use to authenticate to each other; must match requirepass below
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
requirepass iloveyou #the Redis password
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly no
appendfilename "appendonly.aof" #this file is created in the /data directory defined by dir
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file nodes.conf #this file is created in the /data directory defined by dir
cluster-node-timeout 15000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
[root@master redis]#
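If Docker is available on the workstation, the file can optionally be smoke-tested in the foreground before it goes into the ConfigMap (just a local sanity check, stop it with Ctrl+C; the image tag matches the one used in the StatefulSet below):
docker run --rm -p 6360:6360 -v "$PWD/redis.conf":/etc/redis/redis.conf redis:latest redis-server /etc/redis/redis.conf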
#create a ConfigMap named redis-conf, with key redis.conf and value being the redis.conf file created above
kubectl create configmap redis-conf --from-file=redis.conf=redis.conf #create the ConfigMap
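You can optionally check that the ConfigMap really contains the file:
kubectl get configmap redis-conf -o yaml | head -n 20   #data.redis.conf should hold the file content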
A Redis cluster could be deployed with either a Deployment or a StatefulSet; here we use a StatefulSet, since Redis is a stateful application. A StatefulSet needs a headless Service; in the StatefulSet we mount the ConfigMap volume and use dynamically provisioned PVs for Redis data persistence.
[root@master redis]# cat redis-cluster-sts.yaml
---
apiVersion: v1
kind: Service                  #first create a headless Service
metadata:
  labels:                      #labels of the Service itself
    app: redis-svc
  name: redis-svc              #Service name; the StatefulSet below references this name
spec:
  ports:
  - port: 6360                 #port of the Service itself
    protocol: TCP
    targetPort: 6360           #target port 6360; the Redis default is 6379, changed to 6360 for security
  selector:
    app: redis-sts             #the selector must match the labels of the pods created below
  type: ClusterIP
  clusterIP: None              #clusterIP: None makes this a headless Service
---
apiVersion: apps/v1
kind: StatefulSet              #create a StatefulSet resource
metadata:
  labels:                      #labels of the StatefulSet itself
    app: redis-sts
  name: redis-sts              #resource name
  namespace: default           #namespace the resource belongs to
spec:
  selector:                    #label selector; must match the pod labels defined in the template below
    matchLabels:
      app: redis-sts
  replicas: 6                  #6 replicas; a Redis cluster needs at least 6 nodes to form 3 masters and 3 replicas
  serviceName: redis-svc       #the headless Service created above
  template:
    metadata:
      labels:                  #pod labels; both the headless Service selector and the StatefulSet selector must match these
        app: redis-sts
    spec:
      # affinity:
      #   podAntiAffinity:     #pod anti-affinity would spread the 6 pods across hosts; commented out here because I do not have enough nodes
      #     preferredDuringSchedulingIgnoredDuringExecution:
      #     - weight: 100
      #       podAffinityTerm:
      #         labelSelector:
      #           matchExpressions:
      #           - key: app
      #             operator: In
      #             values:
      #             - redis-sts
      #         topologyKey: kubernetes.io/hostname
      containers:
      - name: redis            #container name
        image: redis:latest    #Redis image
        imagePullPolicy: IfNotPresent   #image pull policy
        command:               #container start command and arguments
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--cluster-announce-ip"   #this argument, together with the next one,
        - "$(POD_IP)"               #makes the cluster re-sync automatically after a pod restarts and its IP changes
        env:
        - name: POD_IP         #POD_IP takes its value from status.podIP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:                 #container ports
        - name: redis-6360     #name of the port
          containerPort: 6360  #container port
        volumeMounts:          #mount points
        - name: "redis-conf"   #references the redis-conf volume defined below
          mountPath: "/etc/redis"   #mount point of the Redis config file
        - name: "redis-data"   #name of the volume to use; here it is the PVC template name defined below
          mountPath: "/data"   #mount point of the Redis data
      restartPolicy: Always
      volumes:
      - name: "redis-conf"     #a configMap volume named redis-conf; the ConfigMap was created earlier
        configMap:
          name: "redis-conf"
          items:
          - key: "redis.conf"
            path: "redis.conf"
  volumeClaimTemplates:        #PVC template
  - metadata:
      name: "redis-data"       #template name
    spec:
      resources:               #resource request
        requests:
          storage: 100M        #request 100M of storage
      accessModes:
      - ReadWriteOnce          #access mode RWO
      storageClassName: "nfs-storageclass"   #StorageClass to use, for dynamic PV provisioning
[root@master redis]#
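Apply the manifest and wait for all 6 pods to become ready:
kubectl apply -f redis-cluster-sts.yaml         #creates the headless Service and the StatefulSet
kubectl rollout status statefulset/redis-sts    #blocks until the rollout is complete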
#check the sts, pod, pvc and pv resources; everything has been created and all 6 pods are running normally
[root@master redis]# kubectl get sts,pods,pvc,pv
NAME READY AGE
statefulset.apps/redis-sts 6/6 54m
NAME READY STATUS RESTARTS AGE
pod/redis-sts-0 1/1 Running 0 54m
pod/redis-sts-1 1/1 Running 0 54m
pod/redis-sts-2 1/1 Running 0 53m
pod/redis-sts-3 1/1 Running 0 53m
pod/redis-sts-4 1/1 Running 0 53m
pod/redis-sts-5 1/1 Running 0 53m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/redis-data-redis-sts-0 Bound pvc-2fbe8103-23bf-4cb5-bb9d-1def8decf564 100M RWO nfs-storageclass 54m
persistentvolumeclaim/redis-data-redis-sts-1 Bound pvc-c4358d43-c34f-4072-ba78-f9f8785ecab9 100M RWO nfs-storageclass 54m
persistentvolumeclaim/redis-data-redis-sts-2 Bound pvc-01808f68-884b-489a-8299-ecc77ed37d19 100M RWO nfs-storageclass 53m
persistentvolumeclaim/redis-data-redis-sts-3 Bound pvc-b82d500d-9f99-4f70-bf65-928d9353efee 100M RWO nfs-storageclass 53m
persistentvolumeclaim/redis-data-redis-sts-4 Bound pvc-8e048240-18aa-4e77-945e-f7cc6c24f194 100M RWO nfs-storageclass 53m
persistentvolumeclaim/redis-data-redis-sts-5 Bound pvc-7e2c97ad-0fb8-4bfa-93f9-7ea407e93da8 100M RWO nfs-storageclass 53m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-01808f68-884b-489a-8299-ecc77ed37d19 100M RWO Retain Bound default/redis-data-redis-sts-2 nfs-storageclass 53m
persistentvolume/pvc-2fbe8103-23bf-4cb5-bb9d-1def8decf564 100M RWO Retain Bound default/redis-data-redis-sts-0 nfs-storageclass 54m
persistentvolume/pvc-7e2c97ad-0fb8-4bfa-93f9-7ea407e93da8 100M RWO Retain Bound default/redis-data-redis-sts-5 nfs-storageclass 53m
persistentvolume/pvc-8e048240-18aa-4e77-945e-f7cc6c24f194 100M RWO Retain Bound default/redis-data-redis-sts-4 nfs-storageclass 53m
persistentvolume/pvc-b82d500d-9f99-4f70-bf65-928d9353efee 100M RWO Retain Bound default/redis-data-redis-sts-3 nfs-storageclass 53m
persistentvolume/pvc-c4358d43-c34f-4072-ba78-f9f8785ecab9 100M RWO Retain Bound default/redis-data-redis-sts-1 nfs-storageclass 54m
[root@master redis]#
All 6 pods have been created and are Running. Next, join the 6 pods into a Redis cluster in 3-master / 3-replica mode.
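Before running the create command, you can print the IP:port list that will be passed to redis-cli, just to eyeball it (the actual IPs depend on your environment):
kubectl get pods -l app=redis-sts -o jsonpath='{range .items[*]}{.status.podIP}:6360 {end}'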
#run the cluster-init command against any one of the Redis pods; you can exec into a pod or run it from outside
#to collect each pod's IP we use kubectl get pods -l app=redis-sts -o jsonpath='{range.items[*]}{.status.podIP}:6360 {end}'
#here we let Redis build the cluster automatically, i.e. we do not decide which nodes become masters and which become replicas; Redis assigns them itself, which is also the recommended approach in production
[root@master redis]# kubectl exec -it redis-sts-0 -- redis-cli -a iloveyou --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-sts -o jsonpath='{range.items[*]}{.status.podIP}:6360 {end}')
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.244.2.22:6360 to 10.244.2.20:6360
Adding replica 10.244.1.20:6360 to 10.244.1.18:6360
Adding replica 10.244.2.21:6360 to 10.244.1.19:6360
M: 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852 10.244.2.20:6360
slots:[0-5460] (5461 slots) master
M: ac6cc9dd3a86cf370333d36933c99df5f13f42ab 10.244.1.18:6360
slots:[5461-10922] (5462 slots) master
M: 18b4ceacd3222e546ab59e041e4ae50e736c5c26 10.244.1.19:6360
slots:[10923-16383] (5461 slots) master
S: 8394ceff0b32fc7119b65704ea78e9b5bbc2fbd7 10.244.2.21:6360
replicates 18b4ceacd3222e546ab59e041e4ae50e736c5c26
S: 565f9f9931323f8ac0376b7a7ec701f0a2955e8b 10.244.2.22:6360
replicates 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852
S: 5c1270743b6a5f81003da4402f39c360631a2d0f 10.244.1.20:6360
replicates ac6cc9dd3a86cf370333d36933c99df5f13f42ab
Can I set the above configuration? (type 'yes' to accept): yes #type yes to accept the slot allocation Redis proposed; the master/replica roles are also assigned by Redis
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 10.244.2.20:6360)
M: 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852 10.244.2.20:6360
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 5c1270743b6a5f81003da4402f39c360631a2d0f 10.244.1.20:6360
slots: (0 slots) slave
replicates ac6cc9dd3a86cf370333d36933c99df5f13f42ab
M: 18b4ceacd3222e546ab59e041e4ae50e736c5c26 10.244.1.19:6360
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ac6cc9dd3a86cf370333d36933c99df5f13f42ab 10.244.1.18:6360
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 565f9f9931323f8ac0376b7a7ec701f0a2955e8b 10.244.2.22:6360
slots: (0 slots) slave
replicates 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852
S: 8394ceff0b32fc7119b65704ea78e9b5bbc2fbd7 10.244.2.21:6360
slots: (0 slots) slave
replicates 18b4ceacd3222e546ab59e041e4ae50e736c5c26
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@master redis]#
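As a quick smoke test, write and read a key through the cluster (-c makes redis-cli follow MOVED redirects; the key name is arbitrary):
kubectl exec -it redis-sts-0 -- redis-cli -c -a iloveyou -p 6360 set testkey hello
kubectl exec -it redis-sts-0 -- redis-cli -c -a iloveyou -p 6360 get testkey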
If required, you can also assign the Redis masters and replicas manually. The steps below are given for reference only; they have not been verified, please verify them yourself.
redis-sts-0 (master) >> redis-sts-1 (replica)
redis-sts-2 (master) >> redis-sts-3 (replica)
redis-sts-4 (master) >> redis-sts-5 (replica)
[root@master redis]# kubectl get pods -l app=redis-sts -o wide #first check each pod's IP
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redis-sts-0 1/1 Running 0 3h12m 10.244.2.20 node2 <none> <none>
redis-sts-1 1/1 Running 0 3h12m 10.244.1.18 node1 <none> <none>
redis-sts-2 1/1 Running 0 3h12m 10.244.1.19 node1 <none> <none>
redis-sts-3 1/1 Running 0 3h12m 10.244.2.21 node2 <none> <none>
redis-sts-4 1/1 Running 0 3h12m 10.244.2.22 node2 <none> <none>
redis-sts-5 1/1 Running 0 3h12m 10.244.1.20 node1 <none> <none>
[root@master redis]#
#manually create the cluster masters; the pods redis-sts-0, redis-sts-2 and redis-sts-4 are chosen as masters, specified here directly by pod IP
[root@master redis]# kubectl exec -it redis-sts-0 -- redis-cli -a iloveyou --cluster create 10.244.2.20:6360 10.244.1.19:6360 10.244.2.22:6360
M: 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852 10.244.2.20:6360
slots:[0-5460] (5461 slots) master
M: 18b4ceacd3222e546ab59e041e4ae50e736c5c26 10.244.1.19:6360
slots:[10923-16383] (5461 slots) master
M: ac6cc9dd3a86cf370333d36933c99df5f13f42ab 10.244.2.22:6360
slots:[5461-10922] (5462 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
.................
#add a replica (slave) node for each master
#the 10.244.2.20:6360 position can be any existing master node; usually the first master, i.e. the IP of redis-sts-0, is used
#--cluster-master-id specifies the node id of the master this slave will replicate
# redis-sts-0 (master) >> redis-sts-1 (replica)
[root@master redis]# kubectl exec -it redis-sts-0 -- redis-cli -a iloveyou --cluster add-node 10.244.1.18:6360 10.244.2.20:6360 --cluster-slave --cluster-master-id 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852
# redis-sts-2 (master) >> redis-sts-3 (replica)
[root@master redis]# kubectl exec -it redis-sts-0 -- redis-cli -a iloveyou --cluster add-node 10.244.2.21:6360 10.244.2.20:6360 --cluster-slave --cluster-master-id 18b4ceacd3222e546ab59e041e4ae50e736c5c26
# redis-sts-4 (master) >> redis-sts-5 (replica)
[root@master redis]# kubectl exec -it redis-sts-0 -- redis-cli -a iloveyou --cluster add-node 10.244.1.20:6360 10.244.2.20:6360 --cluster-slave --cluster-master-id ac6cc9dd3a86cf370333d36933c99df5f13f42ab
Now that the Redis cluster has been created, let's verify it.
#verify the cluster state with the command below; note that --cluster check only needs the IP of any single node, and range.items[0] here picks the IP of the first pod, redis-sts-0; as shown below, the cluster is healthy
[root@master redis]# kubectl exec -it redis-sts-0 -- redis-cli -a iloveyou --cluster check $(kubectl get pods -l app=redis-sts -o jsonpath='{range.items[0]}{.status.podIP}:6360 {end}')
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.244.2.20:6360 (972b376e...) -> 0 keys | 5461 slots | 1 slaves.
10.244.1.19:6360 (18b4ceac...) -> 0 keys | 5461 slots | 1 slaves.
10.244.1.18:6360 (ac6cc9dd...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.244.2.20:6360)
M: 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852 10.244.2.20:6360
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 5c1270743b6a5f81003da4402f39c360631a2d0f 10.244.1.20:6360
slots: (0 slots) slave
replicates ac6cc9dd3a86cf370333d36933c99df5f13f42ab
M: 18b4ceacd3222e546ab59e041e4ae50e736c5c26 10.244.1.19:6360
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ac6cc9dd3a86cf370333d36933c99df5f13f42ab 10.244.1.18:6360
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 565f9f9931323f8ac0376b7a7ec701f0a2955e8b 10.244.2.22:6360
slots: (0 slots) slave
replicates 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852
S: 8394ceff0b32fc7119b65704ea78e9b5bbc2fbd7 10.244.2.21:6360
slots: (0 slots) slave
replicates 18b4ceacd3222e546ab59e041e4ae50e736c5c26
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@master redis]#
#you can also exec into any pod and verify the cluster state from there
[root@master redis]# kubectl exec -it redis-sts-0 -- bash
root@redis-sts-0:/data# redis-cli -a iloveyou --cluster check 10.244.1.20:6360 #check whether the Redis cluster state is healthy
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.244.1.18:6360 (ac6cc9dd...) -> 0 keys | 5462 slots | 1 slaves.
10.244.1.19:6360 (18b4ceac...) -> 0 keys | 5461 slots | 1 slaves.
10.244.2.20:6360 (972b376e...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.244.1.20:6360)
S: 5c1270743b6a5f81003da4402f39c360631a2d0f 10.244.1.20:6360
slots: (0 slots) slave
replicates ac6cc9dd3a86cf370333d36933c99df5f13f42ab
S: 8394ceff0b32fc7119b65704ea78e9b5bbc2fbd7 10.244.2.21:6360
slots: (0 slots) slave
replicates 18b4ceacd3222e546ab59e041e4ae50e736c5c26
M: ac6cc9dd3a86cf370333d36933c99df5f13f42ab 10.244.1.18:6360
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: 18b4ceacd3222e546ab59e041e4ae50e736c5c26 10.244.1.19:6360
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852 10.244.2.20:6360
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 565f9f9931323f8ac0376b7a7ec701f0a2955e8b 10.244.2.22:6360
slots: (0 slots) slave
replicates 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@redis-sts-0:/data#
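Inside a pod you can also query the cluster summary directly; cluster_state should be ok and all 16384 slots assigned:
redis-cli -a iloveyou -p 6360 cluster info | grep -E 'cluster_state|cluster_slots_assigned|cluster_known_nodes'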
#before the test starts, open another terminal, exec into redis-sts-0 and look at the current cluster state
[root@master redis]# kubectl exec -it redis-sts-0 -- bash
root@redis-sts-0:/data# redis-cli -a iloveyou --cluster check 10.244.1.20:6360 #check the Redis cluster state
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.244.1.18:6360 (ac6cc9dd...) -> 0 keys | 5462 slots | 1 slaves.
10.244.1.19:6360 (18b4ceac...) -> 0 keys | 5461 slots | 1 slaves.
10.244.2.20:6360 (972b376e...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.244.1.20:6360)
S: 5c1270743b6a5f81003da4402f39c360631a2d0f 10.244.1.20:6360
slots: (0 slots) slave
replicates ac6cc9dd3a86cf370333d36933c99df5f13f42ab
S: 8394ceff0b32fc7119b65704ea78e9b5bbc2fbd7 10.244.2.21:6360
slots: (0 slots) slave
replicates 18b4ceacd3222e546ab59e041e4ae50e736c5c26
M: ac6cc9dd3a86cf370333d36933c99df5f13f42ab 10.244.1.18:6360 #this is the current IP of redis-sts-1
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: 18b4ceacd3222e546ab59e041e4ae50e736c5c26 10.244.1.19:6360
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852 10.244.2.20:6360
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 565f9f9931323f8ac0376b7a7ec701f0a2955e8b 10.244.2.22:6360 #this is the current IP of redis-sts-4
slots: (0 slots) slave
replicates 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
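Optionally, write a test key first so that, after the pods have been recreated, you can also confirm the data survived the restart (the key name is arbitrary):
kubectl exec -it redis-sts-0 -- redis-cli -c -a iloveyou -p 6360 set restart-test before-delete
#...later, after the pods below have been deleted and recreated:
kubectl exec -it redis-sts-0 -- redis-cli -c -a iloveyou -p 6360 get restart-test   #should still return "before-delete"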
#now delete pods by hand to simulate a pod crashing and being recreated with a new IP, and see whether the Redis cluster stays healthy
[root@master redis]# kubectl delete pod redis-sts-4 #delete redis-sts-4
pod "redis-sts-4" deleted
[root@master redis]# kubectl get pods -l app=redis-sts -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redis-sts-0 1/1 Running 0 3h56m 10.244.2.20 node2 <none> <none>
redis-sts-1 1/1 Running 0 3h56m 10.244.1.18 node1 <none> <none>
redis-sts-2 1/1 Running 0 3h56m 10.244.1.19 node1 <none> <none>
redis-sts-3 1/1 Running 0 3h55m 10.244.2.21 node2 <none> <none>
redis-sts-4 1/1 Running 0 6s 10.244.2.23 node2 <none> <none> #redis-sts-4 got a new IP after being recreated
redis-sts-5 1/1 Running 0 3h55m 10.244.1.20 node1 <none> <none>
[root@master redis]# kubectl delete pod redis-sts-1 #now delete redis-sts-1 as well
pod "redis-sts-1" deleted
[root@master redis]# kubectl get pods -l app=redis-sts -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redis-sts-0 1/1 Running 0 3h57m 10.244.2.20 node2 <none> <none>
redis-sts-1 1/1 Running 0 3s 10.244.1.22 node1 <none> <none> #redis-sts-1 got a new IP after being recreated
redis-sts-2 1/1 Running 0 3h57m 10.244.1.19 node1 <none> <none>
redis-sts-3 1/1 Running 0 3h57m 10.244.2.21 node2 <none> <none>
redis-sts-4 1/1 Running 0 2m38s 10.244.2.23 node2 <none> <none>
redis-sts-5 1/1 Running 0 3h57m 10.244.1.20 node1 <none> <none>
#check the Redis cluster state again to see whether it is still healthy now that the pod IPs have changed
root@redis-sts-0:/data# redis-cli -a iloveyou --cluster check 10.244.1.20:6360
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.244.1.22:6360 (ac6cc9dd...) -> 0 keys | 5462 slots | 1 slaves.
10.244.1.19:6360 (18b4ceac...) -> 0 keys | 5461 slots | 1 slaves.
10.244.2.20:6360 (972b376e...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.244.1.20:6360)
S: 5c1270743b6a5f81003da4402f39c360631a2d0f 10.244.1.20:6360
slots: (0 slots) slave
replicates ac6cc9dd3a86cf370333d36933c99df5f13f42ab
S: 8394ceff0b32fc7119b65704ea78e9b5bbc2fbd7 10.244.2.21:6360
slots: (0 slots) slave
replicates 18b4ceacd3222e546ab59e041e4ae50e736c5c26
M: ac6cc9dd3a86cf370333d36933c99df5f13f42ab 10.244.1.22:6360 #this is the new IP of redis-sts-1; the IP changed but the node id in front of it did not
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: 18b4ceacd3222e546ab59e041e4ae50e736c5c26 10.244.1.19:6360
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852 10.244.2.20:6360
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 565f9f9931323f8ac0376b7a7ec701f0a2955e8b 10.244.2.23:6360 #this is the new IP of redis-sts-4; the IP changed but the node id in front of it did not
slots: (0 slots) slave
replicates 972b376e8cc658b8bf5f2a1a3294cbe2c84ee852
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@redis-sts-0:/data#
#this verification shows that after a Redis cluster pod dies and is recreated, the cluster remains healthy even though the pod IP changes, and the node ids stay the same.
1. Should a stateful application like Redis be deployed on Kubernetes at all, or should the Redis cluster rather be deployed on external servers?
2. When creating the StatefulSet, if the parameters below are not specified, what happens to the Redis cluster state after a pod dies, is recreated and gets a new IP?
        args:
        - "/etc/redis/redis.conf"
        - "--cluster-announce-ip"   #this argument, together with the next one,
        - "$(POD_IP)"               #makes the cluster re-sync automatically after a pod restarts and its IP changes
        env:
        - name: POD_IP         #POD_IP takes its value from status.podIP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
3. A pod IP can change at any time, but the pod domain names, the headless Service name and the StatefulSet name are all fixed and never change. So if the parameters from question 2 are not specified, but the pods are referred to by FQDN when initializing the cluster, i.e. kubectl exec -it redis-sts-0 -- redis-cli -a iloveyou --cluster create --cluster-replicas 1 redis-sts-0.redis-svc.default.svc.cluster.local:6360 redis-sts-1.redis-svc.default.svc.cluster.local:6360 redis-sts-2.redis-svc.default.svc.cluster.local:6360 redis-sts-3.redis-svc.default.svc.cluster.local:6360 redis-sts-4.redis-svc.default.svc.cluster.local:6360 redis-sts-5.redis-svc.default.svc.cluster.local:6360
— can the cluster be created normally, and what happens to the cluster state after a pod is recreated with a new IP?
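One starting point for question 3 is to confirm that the per-pod DNS records created by the headless Service resolve from inside a pod (getent is available in the Debian-based official Redis image; the FQDNs assume the default namespace used in this article):
kubectl exec -it redis-sts-0 -- getent hosts redis-sts-1.redis-svc.default.svc.cluster.local
kubectl exec -it redis-sts-0 -- getent hosts redis-sts-4.redis-svc   #the short form also resolves within the same namespace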