Problem Description
I previously deployed a single node in k8s, created 6 Redis pods on it, and built a Redis cluster that ran fine.
Recently I added a slave node and scheduled some of the Redis pods onto it, after which the cluster would no longer come up. The logs show the following errors:
127.0.0.1:6379> get test
(error) CLUSTERDOWN The cluster is down
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:16384
cluster_slots_ok:0
cluster_slots_pfail:16384
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:14
cluster_my_epoch:14
cluster_stats_messages_ping_sent:4
cluster_stats_messages_sent:4
cluster_stats_messages_received:0
total_cluster_links_buffer_limit_exceeded:0

$ kubectl logs redis-app-5
... ...
1:S 19 Nov 2024 01:58:13.251 * Connecting to MASTER 172.16.43.44:6379
1:S 19 Nov 2024 01:58:13.251 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 01:58:13.251 * Cluster state changed: ok
1:S 19 Nov 2024 01:58:20.754 # Cluster state changed: fail
1:S 19 Nov 2024 01:59:14.979 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 01:59:14.979 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 01:59:14.979 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 02:00:15.422 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 02:00:15.422 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 02:00:15.422 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 02:01:16.357 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 02:01:16.357 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 02:01:16.357 * MASTER <-> REPLICA sync started

Problem Analysis
The Redis pods were restarted, so their IP addresses most likely changed, but the cluster is still configured with the pre-restart addresses, which is why it fails to come up.
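The stale addresses are persisted in each node's cluster config file (nodes.conf). A minimal sketch of what goes stale; the file contents below are simulated (in a real cluster you would `kubectl exec` into a pod and inspect the file, whose path depends on the `cluster-config-file` setting in redis.conf):

```shell
# Simulated nodes.conf entry; format: <id> <ip:port@cport> <flags> <master> ...
# The second field is the peer address recorded when the cluster was built --
# it is NOT updated when the pod is rescheduled and gets a new IP.
cat > /tmp/nodes.conf <<'EOF'
57d9f345d23e7bf7dd2f331e14d9d7143aa9617f 172.16.43.44:6379@16379 master - 0 0 1 connected 0-5460
EOF

# Extract the recorded (now stale) peer address:
awk '{print $2}' /tmp/nodes.conf
```

This is why the replica log above keeps trying to reach 172.16.43.44: that address was recorded before the reschedule.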
My Solution

First, check each Redis pod's information:

$ kubectl describe pod redis-app | grep IP
cni.projectcalico.org/podIP: 172.16.178.201/32
cni.projectcalico.org/podIPs: 172.16.178.201/32
IP:           172.16.178.201
IPs:
  IP:         172.16.178.201
cni.projectcalico.org/podIP: 172.16.178.202/32
cni.projectcalico.org/podIPs: 172.16.178.202/32
IP:           172.16.178.202
IPs:
  IP:         172.16.178.202
cni.projectcalico.org/podIP: 172.16.43.1/32
cni.projectcalico.org/podIPs: 172.16.43.1/32
IP:           172.16.43.1
IPs:
  IP:         172.16.43.1
cni.projectcalico.org/podIP: 172.16.178.203/32
cni.projectcalico.org/podIPs: 172.16.178.203/32
IP:           172.16.178.203
IPs:
  IP:         172.16.178.203
cni.projectcalico.org/podIP: 172.16.43.63/32
cni.projectcalico.org/podIPs: 172.16.43.63/32
IP:           172.16.43.63
IPs:
  IP:         172.16.43.63
cni.projectcalico.org/podIP: 172.16.178.204/32
cni.projectcalico.org/podIPs: 172.16.178.204/32
IP:           172.16.178.204
IPs:
  IP:         172.16.178.204

$ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
redis-app-0   1/1     Running   0          2m34s   172.16.178.201   kevin-s1   <none>           <none>
redis-app-1   1/1     Running   0          2m32s   172.16.178.202   kevin-s1   <none>           <none>
redis-app-2   1/1     Running   0          2m30s   172.16.43.1      kevin-pc   <none>           <none>
redis-app-3   1/1     Running   0          2m26s   172.16.178.203   kevin-s1   <none>           <none>
redis-app-4   1/1     Running   0          2m24s   172.16.43.63     kevin-pc   <none>           <none>
redis-app-5   1/1     Running   0          2m19s   172.16.178.204   kevin-s1   <none>           <none>

Then rebuild the cluster with the pods' latest IPs via inem0o/redis-trib:

$ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379
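Copying six IPs by hand is error-prone; the address list can be assembled from the `kubectl get pods -o wide` output instead. A sketch, with the kubectl output simulated by a here-doc (in a live cluster, replace the `pods` function body with the real `kubectl get pods -o wide` call):

```shell
# Simulated `kubectl get pods -o wide` output (the NAME and IP columns matter).
pods() {
cat <<'EOF'
NAME          READY   STATUS    RESTARTS   AGE     IP               NODE
redis-app-0   1/1     Running   0          2m34s   172.16.178.201   kevin-s1
redis-app-1   1/1     Running   0          2m32s   172.16.178.202   kevin-s1
redis-app-2   1/1     Running   0          2m30s   172.16.43.1      kevin-pc
EOF
}

# Turn every redis-app pod IP into an ip:6379 argument for redis-trib.
addrs=$(pods | awk '/^redis-app/ {printf "%s:6379 ", $6}')
echo "sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 $addrs"
```

The printed command can then be reviewed before running it.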
You may encounter the following error during cluster creation:

$ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379
>>> Creating cluster
[ERR] Node 172.16.178.201:6379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.

This happens because the Redis pod has not been reset. Log in to the affected pod and reset its cluster state with redis-cli:
$ kubectl exec -it redis-app-0 -- redis-cli -h 172.16.178.201 -p 6379
172.16.178.201:6379> CLUSTER RESET
OK
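If several pods report the "not empty" error, the reset can be looped over all of them. A dry-run sketch that only prints the commands (remove the `echo` to actually execute them; inside the pod, plain `redis-cli` talks to localhost, so the `-h`/`-p` flags used above are not needed):

```shell
# Print one CLUSTER RESET command per pod (redis-app-0..5 from the listing above).
for i in 0 1 2 3 4 5; do
  echo kubectl exec -it "redis-app-$i" -- redis-cli CLUSTER RESET
done
```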
Once every affected pod has been reset, rerun the create command and the cluster builds successfully:

$ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.178.201:6379
172.16.178.202:6379
172.16.43.1:6379
Adding replica 172.16.43.63:6379 to 172.16.178.201:6379
Adding replica 172.16.178.204:6379 to 172.16.178.202:6379
Adding replica 172.16.178.203:6379 to 172.16.43.1:6379
M: 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f 172.16.178.201:6379
   slots:0-5460 (5461 slots) master
M: f5d617c0ed655dd6afa32c5d4ec6260713668639 172.16.178.202:6379
   slots:5461-10922 (5462 slots) master
M: 808de7e00f10fe17a5582cd76a533159a25006d8 172.16.43.1:6379
   slots:10923-16383 (5461 slots) master
S: 44ac042b99b9b73051b05d1be3d98cf475f67f0a 172.16.43.63:6379
   replicates 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f
S: 8db8f89b7b28d0ce098de275340e3c4679fd342d 172.16.178.204:6379
   replicates f5d617c0ed655dd6afa32c5d4ec6260713668639
S: 2f5860e62f03ea17d398bbe447a6f1d428ae8698 172.16.178.203:6379
   replicates 808de7e00f10fe17a5582cd76a533159a25006d8
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.
>>> Performing Cluster Check (using node 172.16.178.201:6379)
M: 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f 172.16.178.201:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 44ac042b99b9b73051b05d1be3d98cf475f67f0a 172.16.43.63:6379@16379
   slots: (0 slots) slave
   replicates 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f
M: f5d617c0ed655dd6afa32c5d4ec6260713668639 172.16.178.202:6379@16379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 8db8f89b7b28d0ce098de275340e3c4679fd342d 172.16.178.204:6379@16379
   slots: (0 slots) slave
   replicates f5d617c0ed655dd6afa32c5d4ec6260713668639
S: 2f5860e62f03ea17d398bbe447a6f1d428ae8698 172.16.178.203:6379@16379
   slots: (0 slots) slave
   replicates 808de7e00f10fe17a5582cd76a533159a25006d8
M: 808de7e00f10fe17a5582cd76a533159a25006d8 172.16.43.1:6379@16379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Check the cluster status:
$ kubectl exec -it redis-app-3 -- redis-cli
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:14
cluster_my_epoch:3
cluster_stats_messages_ping_sent:39
cluster_stats_messages_pong_sent:40
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:80
cluster_stats_messages_ping_received:40
cluster_stats_messages_pong_received:36
cluster_stats_messages_received:76
total_cluster_links_buffer_limit_exceeded:0

That solved the problem.

Update 2024.11.20

Restarting a node also restarts the pods on it, and a Redis pod's IP is assigned at startup, so a node restart can bring the cluster down again. An alternative is to build the cluster with each Redis pod's DNS name instead of its IP. The DNS name has the format:
statefulset-name-ordinal.service-name.namespace.svc.cluster.local
where statefulset-name, service-name, and namespace can be read from redis-stateful.yaml, and ordinal is the instance number.
For example, take the following redis-stateful.yaml:
$ cat redis-stateful.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: redis-service
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis
        imagePullPolicy: IfNotPresent
        command: ["redis-server"]
        args:
        - /etc/redis/redis.conf
        - --protected-mode
        - "no"
        ports:
        - name: redis
          containerPort: 6379
          protocol: TCP
        - name: cluster
          containerPort: 16379
          protocol: TCP
        volumeMounts:
        - name: redis-conf
          mountPath: /etc/redis
        - name: redis-data
          mountPath: /var/lib/redis
      volumes:
      - name: redis-conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: redis
      resources:
        requests:
          storage: 1Gi

From this:
statefulset-name is redis-app
service-name is redis-service
namespace is default (configure your own namespace as needed)
since 6 instances are running, ordinal ranges from 0 to 5
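The six DNS names follow mechanically from those four values, so the list can be generated rather than typed. A small sketch:

```shell
# StatefulSet pod DNS pattern:
#   <statefulset>-<ordinal>.<service>.<namespace>.svc.cluster.local
statefulset=redis-app
service=redis-service
namespace=default

# Print one cluster address per instance (ordinals 0..5).
for i in 0 1 2 3 4 5; do
  echo "${statefulset}-${i}.${service}.${namespace}.svc.cluster.local:6379"
done
```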
With this information, the cluster creation command is:
$ kubectl exec -it redis-app-0 -n default -- redis-cli --cluster create
redis-app-1.redis-service.default.svc.cluster.local:6379
redis-app-2.redis-service.default.svc.cluster.local:6379
redis-app-3.redis-service.default.svc.cluster.local:6379
redis-app-4.redis-service.default.svc.cluster.local:6379
redis-app-5.redis-service.default.svc.cluster.local:6379
redis-app-0.redis-service.default.svc.cluster.local:6379 --cluster-replicas 1

The Redis cluster now builds successfully, and after a node restart the nodes can still reach each other via their DNS names:
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:15
cluster_my_epoch:15
cluster_stats_messages_ping_sent:1204
cluster_stats_messages_pong_sent:1195
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:2400
cluster_stats_messages_ping_received:1195
cluster_stats_messages_pong_received:1200
cluster_stats_messages_received:2395
total_cluster_links_buffer_limit_exceeded:0
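The `cluster_state` field above is the one worth monitoring after any reschedule. A sketch of a minimal health check, with the `cluster info` reply simulated by a here-doc (in a live cluster, replace the `info` function body with something like `kubectl exec redis-app-0 -- redis-cli cluster info`):

```shell
# Simulated `cluster info` reply; the real one comes from redis-cli.
info() {
cat <<'EOF'
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
EOF
}

# Extract cluster_state and report health.
# Note: redis-cli replies use CRLF line endings; tr strips any trailing \r.
state=$(info | tr -d '\r' | awk -F: '/^cluster_state/ {print $2}')
if [ "$state" = "ok" ]; then
  echo "cluster healthy"
else
  echo "cluster DOWN (state=$state)" >&2
fi
```

Wiring this into a liveness or readiness probe would catch the CLUSTERDOWN state automatically instead of waiting for failed `get` calls.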