K8s Review Notes 7--K8S Implementation of Redis Standalone and Redis-cluster
2022-08-11 04:55:00 【Shanghai_Operation and Maintenance_Mr.Q】
1. PV/PVC and Redis Persistence
Redis data persistence
Redis provides persistence at two different levels: one is RDB, the other is AOF.
RDB persistence generates point-in-time snapshots of the dataset at configurable intervals.
AOF persistence logs every write command the server executes; on startup, the server replays these commands to rebuild the dataset. Commands in the AOF file are stored in the Redis protocol format, and new commands are appended to the end of the file. Redis can also rewrite the AOF file in the background so that its size never exceeds what is actually needed to represent the current dataset. RDB and AOF can be enabled together; in that case, when Redis restarts it prefers the AOF file to restore the dataset, because the AOF usually holds a more complete dataset than the RDB file. You can even disable persistence entirely, so that data exists only while the server is running.
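As a quick way to see both mechanisms from the client side, here is a sketch against a running instance (standard Redis commands; the 123456 password matches the requirepass set in the redis.conf below):
# Inspect and trigger both persistence mechanisms on a running instance.
redis-cli -a 123456 CONFIG GET save        # RDB snapshot schedule
redis-cli -a 123456 CONFIG GET appendonly  # whether AOF is enabled
redis-cli -a 123456 BGSAVE                 # force a background RDB snapshot
redis-cli -a 123456 BGREWRITEAOF           # compact the AOF file in the background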
2. Building the Redis Image
Redis Dockerfile
#Redis Image
FROM harbor.intra.com/baseimages/centos-base:7.9.2009
ADD redis-4.0.14.tar.gz /usr/local/src
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server /usr/sbin/ && mkdir -pv /data/redis-data
ADD redis.conf /usr/local/redis/redis.conf
ADD run_redis.sh /usr/local/redis/run_redis.sh
EXPOSE 6379
CMD ["/usr/local/redis/run_redis.sh"]
build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.intra.com/wework/redis:${TAG} .
sleep 3
docker push harbor.intra.com/wework/redis:${TAG}
redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
run_redis.sh
#!/bin/bash
# redis.conf sets daemonize yes, so redis-server forks into the background;
# tail -f then keeps the container's PID 1 alive.
/usr/sbin/redis-server /usr/local/redis/redis.conf
tail -f /etc/hosts
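An alternative entrypoint, sketched here under the assumption that daemonize is switched to no, runs Redis in the foreground so the container lives and dies with the server:
#!/bin/bash
# Alternative sketch: run Redis as PID 1 (requires daemonize no in redis.conf).
exec /usr/sbin/redis-server /usr/local/redis/redis.conf --daemonize no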
Build the Redis image
[email protected]:/opt/k8s-data/dockerfile/web/wework/redis# ./build-command.sh v4.0.14
Successfully built f13c1ccdf5d6
Successfully tagged harbor.intra.com/wework/redis:v4.0.14
The push refers to repository [harbor.intra.com/wework/redis]
e045520d5142: Pushed
f7c5723d3227: Pushed
bf4069c34244: Pushed
d383bf570da4: Pushed
6f2f514dbcfd: Pushed
42a5df432d46: Pushed
7a6c7dc8d8df: Pushed
c91e83206e44: Pushed
bf0b39b2f6ed: Pushed
174f56854903: Mounted from wework/tomcat-app1
v4.0.14: digest: sha256:22882e70d65d693933f5cb61b2b449a4cef62ee65c28530030ff94b06a7eee1b size: 2416
[email protected]:/opt/k8s-data/dockerfile/web/wework/redis# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
harbor.intra.com/wework/redis v4.0.14 f13c1ccdf5d6 12 minutes ago 3.28GB
Test whether the image starts
[email protected]:/opt/k8s-data/dockerfile/web/wework/redis# docker run -it --rm harbor.intra.com/wework/redis:v4.0.14
7:C 10 Aug 12:48:10.607 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7:C 10 Aug 12:48:10.609 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=7, just started
7:C 10 Aug 12:48:10.609 # Configuration loaded
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 a9b6b1d26209
3. Redis Standalone YAML
On the NFS server:
mkdir -p /data/k8s/wework/redis-datadir-1
PV YAML
redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
  namespace: wework
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8s/wework/redis-datadir-1
    server: 192.168.31.109
PVC YAML
redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: wework
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
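The volumeName field statically binds this claim to the PV above instead of relying on capacity and access-mode matching. A quick check of the binding (assuming kubectl access to the cluster):
# Confirm the claim is pinned to the intended PV and has bound.
kubectl -n wework get pvc redis-datadir-pvc-1 \
  -o jsonpath='{.spec.volumeName}{"\n"}{.status.phase}{"\n"}'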
Create the PV
[email protected]:/opt/k8s-data/yaml/wework/redis/pv# kubectl apply -f redis-persistentvolume.yaml
persistentvolume/redis-datadir-pv-1 created
[email protected]:/opt/k8s-data/yaml/wework/redis/pv# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
redis-datadir-pv-1 10Gi RWO Retain Available 6s
test 1Gi RWX Retain Available nfs 57d
zookeeper-datadir-pv-1 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-1 15h
zookeeper-datadir-pv-2 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-2 15h
zookeeper-datadir-pv-3 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-3 15h
Create the PVC
[email protected]:/opt/k8s-data/yaml/wework/redis/pv# kubectl apply -f redis-persistentvolumeclaim.yaml
persistentvolumeclaim/redis-datadir-pvc-1 created
[email protected]:/opt/k8s-data/yaml/wework/redis/pv# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
redis-datadir-pv-1 10Gi RWO Retain Bound wework/redis-datadir-pvc-1 2m15s
test 1Gi RWX Retain Available nfs 57d
zookeeper-datadir-pv-1 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-1 15h
zookeeper-datadir-pv-2 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-2 15h
zookeeper-datadir-pv-3 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-3 15h
[email protected]:/opt/k8s-data/yaml/wework/redis/pv# kubectl get pvc -n wework
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-datadir-pvc-1 Pending redis-datadir-pv-1 0 10s
zookeeper-datadir-pvc-1 Bound zookeeper-datadir-pv-1 20Gi RWO 15h
zookeeper-datadir-pvc-2 Bound zookeeper-datadir-pv-2 20Gi RWO 15h
zookeeper-datadir-pvc-3 Bound zookeeper-datadir-pv-3 20Gi RWO 15h
[email protected]:/opt/k8s-data/yaml/wework/redis/pv# kubectl get pvc -n wework
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-datadir-pvc-1 Bound redis-datadir-pv-1 10Gi RWO 14s
zookeeper-datadir-pvc-1 Bound zookeeper-datadir-pv-1 20Gi RWO 15h
zookeeper-datadir-pvc-2 Bound zookeeper-datadir-pv-2 20Gi RWO 15h
zookeeper-datadir-pvc-3 Bound zookeeper-datadir-pv-3 20Gi RWO 15h
Redis Deployment YAML
redis.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: wework
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.intra.com/wework/redis:v4.0.14
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/data/redis-data/"
              name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: wework
spec:
  type: NodePort
  ports:
    - name: http
      port: 6379
      targetPort: 6379
      nodePort: 36379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
Create the Redis Deployment
[email protected]:/opt/k8s-data/yaml/wework/redis# kubectl apply -f redis.yaml
deployment.apps/deploy-devops-redis created
service/srv-devops-redis created
[email protected]:/opt/k8s-data/yaml/wework/redis# kubectl get pods -n wework
NAME READY STATUS RESTARTS AGE
deploy-devops-redis-7864f5d7dc-v9tq8 1/1 Running 0 8m34s
wework-nginx-deployment-cdbb4945f-7xgx5 1/1 Running 0 4h19m
wework-tomcat-app1-deployment-65d8d46957-s4666 1/1 Running 0 4h19m
zookeeper1-699d46468c-8jq4x 1/1 Running 0 167m
zookeeper2-7cc484778-gj45x 1/1 Running 0 167m
zookeeper3-cdf484f7c-jh6hz 1/1 Running 0 167m
[email protected]:/opt/k8s-data/yaml/wework/redis# kubectl get svc -n wework
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
srv-devops-redis NodePort 10.200.67.224 <none> 6379:36379/TCP 9m7s
wework-nginx-service NodePort 10.200.89.252 <none> 80:30090/TCP,443:30091/TCP 47h
wework-tomcat-app1-service ClusterIP 10.200.21.158 <none> 80/TCP 28h
zookeeper ClusterIP 10.200.117.19 <none> 2181/TCP 167m
zookeeper1 NodePort 10.200.167.230 <none> 2181:32181/TCP,2888:31774/TCP,3888:56670/TCP 167m
zookeeper2 NodePort 10.200.36.129 <none> 2181:32182/TCP,2888:46321/TCP,3888:30984/TCP 167m
zookeeper3 NodePort 10.200.190.129 <none> 2181:32183/TCP,2888:61447/TCP,3888:51393/TCP 167m
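With the NodePort Service in place, the instance is reachable from outside the cluster on any node's port 36379. A minimal sanity check (192.168.31.113 is a node IP, the same one the Python script below connects to):
# Authenticate and ping Redis through the NodePort.
redis-cli -h 192.168.31.113 -p 36379 -a 123456 ping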
Write data into Redis, then delete the Pod to test whether the data is lost
[[email protected] /]# redis-cli
127.0.0.1:6379> AUTH 123456
OK
127.0.0.1:6379> set key1 value1
OK
127.0.0.1:6379> keys *
1) "key1"
127.0.0.1:6379>
Confirm on the NFS server that the data file has been generated
[email protected]:~# ll /data/k8s/wework/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Aug 10 13:14 ./
drwxr-xr-x 7 root root 4096 Aug 10 12:54 ../
-rw-r--r-- 1 root root 111 Aug 10 13:14 dump.rdb
Test deleting the Pod
[email protected]:/opt/k8s-data/yaml/wework/redis# kubectl get pods -n wework
NAME READY STATUS RESTARTS AGE
deploy-devops-redis-7864f5d7dc-v9tq8 1/1 Running 0 15m
wework-nginx-deployment-cdbb4945f-7xgx5 1/1 Running 0 4h26m
wework-tomcat-app1-deployment-65d8d46957-s4666 1/1 Running 0 4h26m
zookeeper1-699d46468c-8jq4x 1/1 Running 0 173m
zookeeper2-7cc484778-gj45x 1/1 Running 0 173m
zookeeper3-cdf484f7c-jh6hz 1/1 Running 0 173m
[email protected]:/opt/k8s-data/yaml/wework/redis# kubectl delete pods deploy-devops-redis-7864f5d7dc-v9tq8 -n wework
pod "deploy-devops-redis-7864f5d7dc-v9tq8" deleted
[email protected]:/opt/k8s-data/yaml/wework/redis# kubectl get pods -n wework
NAME READY STATUS RESTARTS AGE
deploy-devops-redis-7864f5d7dc-lgx48 1/1 Running 0 35s
wework-nginx-deployment-cdbb4945f-7xgx5 1/1 Running 0 4h27m
wework-tomcat-app1-deployment-65d8d46957-s4666 1/1 Running 0 4h27m
zookeeper1-699d46468c-8jq4x 1/1 Running 0 174m
zookeeper2-7cc484778-gj45x 1/1 Running 0 174m
zookeeper3-cdf484f7c-jh6hz 1/1 Running 0 174m
Make sure the data is still there
[[email protected] /]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> keys *
1) "key1"
127.0.0.1:6379> get key1
"value1"
127.0.0.1:6379>
Bulk-write data to Redis with Python
import redis

# Connect through the NodePort Service defined above.
pool = redis.ConnectionPool(host="192.168.31.113", port=36379, password="123456", decode_responses=True)
r = redis.Redis(connection_pool=pool)
for i in range(100):
    r.set("key-m49_%s" % i, "value-m49_%s" % i)
    data = r.get("key-m49_%s" % i)
    print(data)
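Before querying interactively, a quick count from outside the cluster confirms the writes landed (assuming redis-cli is available on the host):
# Count all keys, then count just the keys the script wrote.
redis-cli -h 192.168.31.113 -p 36379 -a 123456 DBSIZE
redis-cli -h 192.168.31.113 -p 36379 -a 123456 --scan --pattern 'key-m49_*' | wc -l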
Query Redis again
82) "key-m49_36"
83) "key-m49_10"
84) "key-m49_15"
85) "key-m49_21"
86) "key-m49_74"
87) "key-m49_50"
88) "key-m49_42"
89) "key-m49_31"
90) "key-m49_79"
91) "key-m49_90"
92) "key-m49_16"
93) "key-m49_49"
94) "key-m49_81"
95) "key-m49_12"
96) "key-m49_59"
97) "key-m49_66"
98) "key-m49_65"
99) "key-m49_54"
100) "key-m49_96"
101) "key-m49_34"
127.0.0.1:6379> get "key-m49_96"
"value-m49_96"
127.0.0.1:6379>
4. Redis-cluster
Create the directories on the NFS server
mkdir /data/k8s/wework/redis{0..5}
PV YAML file
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis5
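The six manifests differ only in the name and path suffix, so they could equally be generated with a shell loop; a sketch equivalent to the file above:
# Generate and apply the six PVs in one pass.
for i in 0 1 2 3 4 5; do
cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv${i}
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.31.109
    path: /data/k8s/wework/redis${i}
EOF
done | kubectl apply -f -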
Create the PVs
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster/pv# kubectl apply -f redis-cluster-pv.yaml
persistentvolume/redis-cluster-pv0 created
persistentvolume/redis-cluster-pv1 created
persistentvolume/redis-cluster-pv2 created
persistentvolume/redis-cluster-pv3 created
persistentvolume/redis-cluster-pv4 created
persistentvolume/redis-cluster-pv5 created
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster/pv# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
redis-cluster-pv0 5Gi RWO Retain Available 3s
redis-cluster-pv1 5Gi RWO Retain Available 3s
redis-cluster-pv2 5Gi RWO Retain Available 3s
redis-cluster-pv3 5Gi RWO Retain Available 3s
redis-cluster-pv4 5Gi RWO Retain Available 3s
redis-cluster-pv5 5Gi RWO Retain Available 3s
redis-datadir-pv-1 10Gi RWO Retain Bound wework/redis-datadir-pvc-1 49m
test 1Gi RWX Retain Available nfs 57d
zookeeper-datadir-pv-1 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-1 16h
zookeeper-datadir-pv-2 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-2 16h
zookeeper-datadir-pv-3 20Gi RWO Retain Bound wework/zookeeper-datadir-pvc-3 16h
Cluster configuration file redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
Create the ConfigMap
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster# kubectl create configmap redis-conf --from-file=redis.conf -n wework
configmap/redis-conf created
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get configmaps -n wework
NAME DATA AGE
kube-root-ca.crt 1 2d
redis-conf 1 32s
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster# kubectl describe configmaps redis-conf -n wework
Name: redis-conf
Namespace: wework
Labels: <none>
Annotations: <none>
Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
Events: <none>
Redis StatefulSet YAML
redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: wework
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
    - name: redis
      port: 6379
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
  namespace: wework
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
    - name: redis-access
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: wework
spec:
  serviceName: redis
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - redis
                topologyKey: kubernetes.io/hostname
      containers:
        - name: redis
          image: redis:4.0.14
          command:
            - "redis-server"
          args:
            - "/etc/redis/redis.conf"
            - "--protected-mode"
            - "no"
          resources:
            requests:
              cpu: "500m"
              memory: "500Mi"
          ports:
            - containerPort: 6379
              name: redis
              protocol: TCP
            - containerPort: 16379
              name: cluster
              protocol: TCP
          volumeMounts:
            - name: conf
              mountPath: /etc/redis
            - name: data
              mountPath: /var/lib/redis
      volumes:
        - name: conf
          configMap:
            name: redis-conf
            items:
              - key: redis.conf
                path: redis.conf
  volumeClaimTemplates:
    - metadata:
        name: data
        namespace: wework
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
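Once this StatefulSet is applied, volumeClaimTemplates creates one PVC per replica, named data-redis-<ordinal>, and each should bind one of the six redis-cluster-pv* volumes. A quick check after creation:
# One PVC per replica, bound to the statically provisioned PVs.
kubectl get pvc -n wework | grep data-redis
kubectl get pv | grep redis-cluster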
Create the Redis StatefulSet
StatefulSet Pods are created one at a time: each Pod must come up successfully before the next one is created.
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster# kubectl apply -f redis.yaml
service/redis created
service/redis-access created
statefulset.apps/redis created
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get pods -n wework
NAME READY STATUS RESTARTS AGE
deploy-devops-redis-7864f5d7dc-lgx48 1/1 Running 0 46m
redis-0 0/1 ContainerCreating 0 44s
wework-nginx-deployment-cdbb4945f-7xgx5 1/1 Running 0 5h12m
wework-tomcat-app1-deployment-65d8d46957-s4666 1/1 Running 0 5h12m
zookeeper1-699d46468c-8jq4x 1/1 Running 0 3h40m
zookeeper2-7cc484778-gj45x 1/1 Running 0 3h40m
zookeeper3-cdf484f7c-jh6hz 1/1 Running 0 3h40m
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get pods -n wework
NAME READY STATUS RESTARTS AGE
deploy-devops-redis-7864f5d7dc-lgx48 1/1 Running 0 47m
redis-0 1/1 Running 0 98s
redis-1 0/1 ContainerCreating 0 24s
wework-nginx-deployment-cdbb4945f-7xgx5 1/1 Running 0 5h13m
wework-tomcat-app1-deployment-65d8d46957-s4666 1/1 Running 0 5h13m
zookeeper1-699d46468c-8jq4x 1/1 Running 0 3h41m
zookeeper2-7cc484778-gj45x 1/1 Running 0 3h41m
zookeeper3-cdf484f7c-jh6hz 1/1 Running 0 3h41m
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get pods -n wework
NAME READY STATUS RESTARTS AGE
deploy-devops-redis-7864f5d7dc-lgx48 1/1 Running 0 48m
redis-0 1/1 Running 0 3m36s
redis-1 1/1 Running 0 2m22s
redis-2 0/1 ContainerCreating 0 61s
wework-nginx-deployment-cdbb4945f-7xgx5 1/1 Running 0 5h15m
wework-tomcat-app1-deployment-65d8d46957-s4666 1/1 Running 0 5h15m
zookeeper1-699d46468c-8jq4x 1/1 Running 0 3h43m
zookeeper2-7cc484778-gj45x 1/1 Running 0 3h43m
zookeeper3-cdf484f7c-jh6hz 1/1 Running 0 3h43m
[email protected]:/opt/k8s-data/yaml/wework/redis-cluster# kubectl get pods -n wework
NAME READY STATUS RESTARTS AGE
deploy-devops-redis-7864f5d7dc-lgx48 1/1 Running 0 49m
redis-0 1/1 Running 0 4m18s
redis-1 1/1 Running 0 3m4s
redis-2 1/1 Running 0 103s
redis-3 1/1 Running 0 22s
redis-4 1/1 Running 0 18s
redis-5 1/1 Running 0 14s
wework-nginx-deployment-cdbb4945f-7xgx5 1/1 Running 0 5h16m
wework-tomcat-app1-deployment-65d8d46957-s4666 1/1 Running 0 5h16m
zookeeper1-699d46468c-8jq4x 1/1 Running 0 3h43m
zookeeper2-7cc484778-gj45x 1/1 Running 0 3h43m
zookeeper3-cdf484f7c-jh6hz 1/1 Running 0 3h43m
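With all six replicas Running, the headless service (clusterIP: None) gives each Pod a stable DNS name of the form <pod>.<service>.<namespace>.svc.<cluster-domain>; the cluster domain here is magedu.local, as the dig commands below assume. A quick resolution check from any Pod with dnsutils installed:
# Each StatefulSet replica resolves by its ordinal-based hostname.
nslookup redis-0.redis.wework.svc.magedu.local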
Start a temporary Pod to initialize the Redis cluster
[email protected]:~# kubectl run -it ubuntu1804 --image=ubuntu:18.04 --restart=Never -n wework -- bash
## Replace the apt sources
[email protected]:/# cat > /etc/apt/sources.list << EOF
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
EOF
[email protected]:/# apt update
[email protected]:/# apt install python2.7 python-pip redis-tools dnsutils iputils-ping net-tools -y
[email protected]:/# pip install --upgrade pip
Collecting pip
Downloading https://files.pythonhosted.org/packages/27/79/8a850fe3496446ff0d584327ae44e7500daf6764ca1a382d2d02789accf7/pip-20.3.4-py2.py3-none-any.whl (1.5MB)
100% |################################| 1.5MB 788kB/s
Installing collected packages: pip
Found existing installation: pip 9.0.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-20.3.4
[email protected]:/# pip install redis-trib==0.5.1
Collecting redis-trib==0.5.1
Downloading redis-trib-0.5.1.tar.gz (10 kB)
Collecting Werkzeug
Downloading Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
|################################| 298 kB 1.1 MB/s
Collecting click
Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
|################################| 82 kB 2.2 MB/s
Collecting hiredis
Downloading hiredis-1.1.0-cp27-cp27mu-manylinux2010_x86_64.whl (58 kB)
|################################| 58 kB 14.5 MB/s
Collecting retrying
Downloading retrying-1.3.3.tar.gz (10 kB)
Requirement already satisfied: six>=1.7.0 in /usr/lib/python2.7/dist-packages (from retrying->redis-trib==0.5.1) (1.11.0)
Building wheels for collected packages: redis-trib, retrying
Building wheel for redis-trib (setup.py) ... done
Created wheel for redis-trib: filename=redis_trib-0.5.1-py2-none-any.whl size=11341 sha256=6f2df4b780df481dabf61d859abb65f7ae73b1a517faa79c093fbf05633733c2
Stored in directory: /root/.cache/pip/wheels/fe/52/82/cf08baa7853197e3f591a295185666ec90f1e44b609d4456d4
Building wheel for retrying (setup.py) ... done
Created wheel for retrying: filename=retrying-1.3.3-py2-none-any.whl size=9532 sha256=86dcf1e1445fc7b140c402342d735ad7d7b172e73b8db141dbdd2d9b7eeee510
Stored in directory: /root/.cache/pip/wheels/fa/24/c3/9912f4c9363033bbd0eafbec1b27c65b04d7ea6acd312876b0
Successfully built redis-trib retrying
Installing collected packages: Werkzeug, click, hiredis, retrying, redis-trib
Successfully installed Werkzeug-1.0.1 click-7.1.2 hiredis-1.1.0 redis-trib-0.5.1 retrying-1.3.3
Create the Redis cluster
Redis cluster hash slots run from 0 to 16383, 16384 slots in total.
redis-trib.py create `dig +short redis-0.redis.wework.svc.magedu.local`:6379 \
`dig +short redis-1.redis.wework.svc.magedu.local`:6379 \
`dig +short redis-2.redis.wework.svc.magedu.local`:6379
Redis-trib 0.5.1 Copyright (c) HunanTV Platform developers
INFO:root:Instance at 172.100.140.82:6379 checked
INFO:root:Instance at 172.100.109.88:6379 checked
INFO:root:Instance at 172.100.76.159:6379 checked
INFO:root:Add 5462 slots to 172.100.140.82:6379
INFO:root:Add 5461 slots to 172.100.109.88:6379
INFO:root:Add 5461 slots to 172.100.76.159:6379
# Make redis-3 a replica of redis-0
[email protected]:/# redis-trib.py replicate --master-addr `dig +short redis-0.redis.wework.svc.magedu.local`:6379 \
--slave-addr `dig +short redis-3.redis.wework.svc.magedu.local`:6379
Redis-trib 0.5.1 Copyright (c) HunanTV Platform developers
INFO:root:Instance at 172.100.140.83:6379 has joined 172.100.140.82:6379; now set replica
INFO:root:Instance at 172.100.140.83:6379 set as replica to bbe92769df4e5164ec73542064220006d96bdc40
# Bind redis-4 to redis-1 as its replica
[email protected]:/# redis-trib.py replicate --master-addr `dig +short redis-1.redis.wework.svc.magedu.local`:6379 --slave-addr `dig +short redis-4.redis.wework.svc.magedu.local`:6379
Redis-trib 0.5.1 Copyright (c) HunanTV Platform developers
INFO:root:Instance at 172.100.76.160:6379 has joined 172.100.76.159:6379; now set replica
INFO:root:Instance at 172.100.76.160:6379 set as replica to 0c3ff3127c1cfcff63b96c51b727977cf619c9b3
# Bind redis-5 to redis-2 as its replica
[email protected]:/# redis-trib.py replicate --master-addr `dig +short redis-2.redis.wework.svc.magedu.local`:6379 --slave-addr `dig +short redis-5.redis.wework.svc.magedu.local`:6379
Redis-trib 0.5.1 Copyright (c) HunanTV Platform developers
INFO:root:Instance at 172.100.109.89:6379 has joined 172.100.109.88:6379; now set replica
INFO:root:Instance at 172.100.109.89:6379 set as replica to 98b86162f083a3f6269ed5abdfac9f3535729f90
Connect to any Pod in the Redis cluster
[email protected]:/data# redis-cli
127.0.0.1:6379> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_ping_sent:483
cluster_stats_messages_pong_sent:483
cluster_stats_messages_sent:966
cluster_stats_messages_ping_received:478
cluster_stats_messages_pong_received:483
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:966
127.0.0.1:6379> CLUSTER NODES
93fb7651914a1dad36190f55df96e167b92bc36b 172.100.76.160:6379@16379 slave 0c3ff3127c1cfcff63b96c51b727977cf619c9b3 0 1660118751907 2 connected
0c3ff3127c1cfcff63b96c51b727977cf619c9b3 172.100.76.159:6379@16379 master - 0 1660118751000 2 connected 10923-16383
bbe92769df4e5164ec73542064220006d96bdc40 172.100.140.82:6379@16379 master - 0 1660118750000 0 connected 0-5461
b22c872939d02f9b890f996d353b83ef7776644c 172.100.140.83:6379@16379 slave bbe92769df4e5164ec73542064220006d96bdc40 0 1660118750000 0 connected
143e2c972b90ba375269b8fafa64422e8b9635b0 172.100.109.89:6379@16379 myself,slave 98b86162f083a3f6269ed5abdfac9f3535729f90 0 1660118751000 5 connected
98b86162f083a3f6269ed5abdfac9f3535729f90 172.100.109.88:6379@16379 master - 0 1660118751000 1 connected 5462-10922
127.0.0.1:6379>
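Each instance records its cluster identity in the nodes.conf named by cluster-config-file, stored under /var/lib/redis and therefore on the per-replica PVC, so cluster membership survives Pod restarts. On the NFS server this can be spot-checked (paths from the PVs created earlier):
# One nodes.conf per replica, persisted on NFS.
ls -l /data/k8s/wework/redis*/nodes.conf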
Attempt to write data on different nodes
[email protected]:/data# redis-cli
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set key1 val1
(error) MOVED 9189 172.100.109.88:6379
127.0.0.1:6379>
[email protected]:/data# redis-cli
127.0.0.1:6379> set key2 val2
OK
127.0.0.1:6379> set key3 val3
OK
127.0.0.1:6379> set key4 val4
(error) MOVED 13120 172.100.76.159:6379
127.0.0.1:6379> keys *
1) "key3"
2) "key2"
[email protected]:/data# redis-cli
127.0.0.1:6379> set key1 val1
(error) MOVED 9189 172.100.109.88:6379
127.0.0.1:6379> set key4 val4
OK
127.0.0.1:6379> keys *
1) "key4"
[email protected]:/data# redis-cli
127.0.0.1:6379> set key1 val1
OK
127.0.0.1:6379>
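The MOVED errors above are the cluster redirecting the client to the node that owns the key's hash slot. Plain redis-cli does not follow these redirections, but cluster mode (-c) does; a short sketch:
# -c makes redis-cli follow MOVED/ASK redirections automatically.
redis-cli -c
# 127.0.0.1:6379> set key1 val1
# -> Redirected to slot [9189] located at 172.100.109.88:6379
# OK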
At this point the Redis-cluster StatefulSet setup is complete.