20. Redis Advanced: Building a Redis Cluster in Practice

Author: 温新

Category: Redis

Views: 2373

Date: 2020-09-06 08:06:45

Overview: we will build a Redis cluster from three master-replica pairs (i.e., 3 masters and 3 replicas).

Master ports: 9000, 9001, 9002

Replica ports: 9003, 9004, 9005

Step 1: Create configuration files for the 6 instances

The configuration files are named 9000.conf through 9005.conf. 9000.conf is shown below as the example; the other five are copies of it with the port (and the port-derived paths) changed, as in the sketch after the config.

# 9000.conf
port 9000
daemonize no
pidfile "/var/run/redis_9000.pid"
logfile "/usr/local/bin/redis_9000.log"
dir "/usr/local/bin/redis_data/9000"
dbfilename "9000_dump.rdb"

# Cluster settings
# Enable cluster mode
cluster-enabled yes
# Cluster config file (generated and maintained by Redis itself; never edit it by hand)
cluster-config-file "/usr/local/bin/redis_data/nodes_9000.conf"
# Node timeout, in milliseconds
cluster-node-timeout 15000
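
With 9000.conf in place, one way to generate the other five configs is a small sed loop. This is a minimal sketch; it assumes every port-derived path in 9000.conf contains the literal string 9000 (true of the file above) and also creates the per-instance data directories:

cd /usr/local/bin/redis_conf
mkdir -p /usr/local/bin/redis_data/9000          # data dir for the template instance
for port in 9001 9002 9003 9004 9005; do
    sed "s/9000/${port}/g" 9000.conf > ${port}.conf
    mkdir -p /usr/local/bin/redis_data/${port}   # data dir for this instance
done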

1) Since everything runs on a single local machine, the bind directive is omitted; on a real server, bind the server's IP address.

2) Since this is a local learning setup, the instances are not daemonized so their output stays visible; in a real server environment, start them as daemons.
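
For reference, the two production-oriented changes might look like the following in each instance config; the IP address here is a placeholder, not part of the original setup:

# hypothetical production additions
bind 192.168.1.10    # placeholder; use the server's real address
daemonize yes        # run in the background as a daemon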

Step 2: Start the six configured instances

[root@192 bin]# pwd
/usr/local/bin

[root@192 bin]# ./redis-server ./redis_conf/9000.conf 
[root@192 bin]# ./redis-server ./redis_conf/9001.conf 
[root@192 bin]# ./redis-server ./redis_conf/9002.conf 
[root@192 bin]# ./redis-server ./redis_conf/9003.conf 
[root@192 bin]# ./redis-server ./redis_conf/9004.conf
[root@192 bin]# ./redis-server ./redis_conf/9005.conf 
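
Because daemonize is off, each instance stays attached to its terminal (hence the six separate terminals above). As a sketch, all six could instead be launched from one shell by backgrounding them:

cd /usr/local/bin
for port in 9000 9001 9002 9003 9004 9005; do
    ./redis-server ./redis_conf/${port}.conf &   # & is required here since daemonize is no
done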

Once they are up, check the Redis processes:

[root@192 ~]# ps -ef | grep redis
root       1574   1323  0 22:37 pts/0    00:00:00 ./redis-server *:9000 [cluster]
root       1581   1414  0 22:38 pts/1    00:00:00 ./redis-server *:9001 [cluster]
root       1623   1591  0 22:40 pts/2    00:00:00 ./redis-server *:9002 [cluster]
root       1664   1633  0 22:41 pts/3    00:00:00 ./redis-server *:9003 [cluster]
root       1699   1674  0 22:42 pts/4    00:00:00 ./redis-server *:9004 [cluster]
root       1733   1709  0 22:42 pts/5    00:00:00 ./redis-server *:9005 [cluster]
root       1767   1743  0 22:42 pts/6    00:00:00 grep --color=auto redis

Notice the difference from a standalone instance: each process now carries a [cluster] flag.

Step 3: Create the cluster

Note: cluster creation changed in Redis 5. Versions before 5 used the redis-trib.rb script; from Redis 5 onward (including the Redis 6 used here), redis-cli has the --cluster subcommands built in.

1) Create the cluster. --cluster-replicas 1 requests one replica per master, so the six nodes are split automatically into 3 masters and 3 replicas:

./redis-cli --cluster create  127.0.0.1:9000 127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003 127.0.0.1:9004 127.0.0.1:9005 --cluster-replicas 1

2) Output like the following appears:

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:9004 to 127.0.0.1:9000
Adding replica 127.0.0.1:9005 to 127.0.0.1:9001
Adding replica 127.0.0.1:9003 to 127.0.0.1:9002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: cf2c6c30ce292036343d6b0422d0eb009b04baf8 127.0.0.1:9000
   slots:[0-5460] (5461 slots) master
M: 591a32f5077a38828ef27073750717678314be7e 127.0.0.1:9001
   slots:[5461-10922] (5462 slots) master
M: c16c9e3a6f7c262a15659d21fc01e3ff8ff160db 127.0.0.1:9002
   slots:[10923-16383] (5461 slots) master
S: 8ae9e888cd78d05f507a0854ca448bdec9a9c2db 127.0.0.1:9003
   replicates 591a32f5077a38828ef27073750717678314be7e
S: f155a57e3565835051456b97a3e3625552228e9c 127.0.0.1:9004
   replicates c16c9e3a6f7c262a15659d21fc01e3ff8ff160db
S: 8b0ba372622f2b668402f76fbafe382045992540 127.0.0.1:9005
   replicates cf2c6c30ce292036343d6b0422d0eb009b04baf8

After you type yes, it pauses briefly and then prints the following, showing how the nodes joined:

>>> Performing Cluster Check (using node 127.0.0.1:9000)
M: cf2c6c30ce292036343d6b0422d0eb009b04baf8 127.0.0.1:9000
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 8b0ba372622f2b668402f76fbafe382045992540 127.0.0.1:9005
   slots: (0 slots) slave
   replicates cf2c6c30ce292036343d6b0422d0eb009b04baf8
S: f155a57e3565835051456b97a3e3625552228e9c 127.0.0.1:9004
   slots: (0 slots) slave
   replicates c16c9e3a6f7c262a15659d21fc01e3ff8ff160db
M: 591a32f5077a38828ef27073750717678314be7e 127.0.0.1:9001
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: c16c9e3a6f7c262a15659d21fc01e3ff8ff160db 127.0.0.1:9002
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 8ae9e888cd78d05f507a0854ca448bdec9a9c2db 127.0.0.1:9003
   slots: (0 slots) slave
   replicates 591a32f5077a38828ef27073750717678314be7e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
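
The same health check can be re-run at any time, against any node:

./redis-cli --cluster check 127.0.0.1:9000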

3) Inspect the generated nodes_9000.conf cluster config file

[root@192 redis_data]# cat nodes_9000.conf 
8b0ba372622f2b668402f76fbafe382045992540 127.0.0.1:9005@19005 slave cf2c6c30ce292036343d6b0422d0eb009b04baf8 0 1599405602000 1 connected
f155a57e3565835051456b97a3e3625552228e9c 127.0.0.1:9004@19004 slave c16c9e3a6f7c262a15659d21fc01e3ff8ff160db 0 1599405601892 3 connected
591a32f5077a38828ef27073750717678314be7e 127.0.0.1:9001@19001 master - 0 1599405601000 2 connected 5461-10922
cf2c6c30ce292036343d6b0422d0eb009b04baf8 127.0.0.1:9000@19000 myself,master - 0 1599405601000 1 connected 0-5460
c16c9e3a6f7c262a15659d21fc01e3ff8ff160db 127.0.0.1:9002@19002 master - 0 1599405602910 3 connected 10923-16383
8ae9e888cd78d05f507a0854ca448bdec9a9c2db 127.0.0.1:9003@19003 slave 591a32f5077a38828ef27073750717678314be7e 0 1599405602000 2 connected
vars currentEpoch 6 lastVoteEpoch 0
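
The same topology can also be read live from any node; CLUSTER INFO additionally reports the overall state:

[root@192 bin]# ./redis-cli -p 9000 cluster info    # expect cluster_state:ok and cluster_known_nodes:6
[root@192 bin]# ./redis-cli -p 9000 cluster nodes   # prints the same node list as nodes_9000.conf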

Step 4: Test writing and reading data

1) Connect a client to master 9000. The -c flag puts redis-cli into cluster mode so it follows redirects:

[root@192 bin]# ./redis-cli -c -p 9000
127.0.0.1:9000> set name ziruchu.com
-> Redirected to slot [5798] located at 127.0.0.1:9001
OK

This differs from a standalone write: the key name hashes to slot 5798, which is owned by 127.0.0.1:9001, so the client is redirected there (a MOVED redirect, followed automatically thanks to -c).
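
You can also check which slot a key maps to without writing it:

127.0.0.1:9000> cluster keyslot name
(integer) 5798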

2) Connect a client to replica 9003 (note that, per the cluster output above, 9003 actually replicates master 9001, not 9000):

[root@192 bin]# ./redis-cli -c -p 9003
127.0.0.1:9003> get name
-> Redirected to slot [5798] located at 127.0.0.1:9001
"ziruchu.com"

Again, the read from 9003 is redirected to the node that owns slot 5798 (master 9001) to fetch the value; see the READONLY sketch below.
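
By default a cluster replica bounces even reads back to the slot owner. If you want the replica to serve reads locally for its master's slots, mark the connection read-only first; a quick sketch (9003 replicates 9001, which owns slot 5798):

127.0.0.1:9003> readonly
OK
127.0.0.1:9003> get name
"ziruchu.com"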

Step 5: Failure and recovery testing

Replica failure test

Stop the replica 9003 service:

[root@192 bin]# ./redis-cli -p 9003 shutdown

After it stops, the other masters are notified. The 9001 log file shows:

# Replica 9003 was reported as failed
1581:M 06 Sep 2020 23:42:03.894 * FAIL message received from c16c9e3a6f7c262a15659d21fc01e3ff8ff160db about 8ae9e888cd78d05f507a0854ca448bdec9a9c2db
# After replica 9003 was restarted, the FAIL state is cleared and it resyncs
1581:M 06 Sep 2020 23:49:37.869 * Clear FAIL state for node 8ae9e888cd78d05f507a0854ca448bdec9a9c2db: replica is reachable again.
1581:M 06 Sep 2020 23:49:38.843 * Replica 127.0.0.1:9003 asks for synchronization
1581:M 06 Sep 2020 23:49:38.843 * Partial resynchronization request from 127.0.0.1:9003 accepted. Sending 1842 bytes of backlog starting from offset 1.

Master failure test

Stop master 9000:

[root@192 bin]# ./redis-cli -p 9000 shutdown

Check the log of replica 9005 (9000's replica):

# Earlier entries (initial sync, plus the 9003 failure test above)
1733:S 06 Sep 2020 23:20:02.708 * MASTER <-> REPLICA sync: Finished with success
1733:S 06 Sep 2020 23:42:03.896 * FAIL message received from c16c9e3a6f7c262a15659d21fc01e3ff8ff160db about 8ae9e888cd78d05f507a0854ca448bdec9a9c2db
1733:S 06 Sep 2020 23:49:37.865 * Clear FAIL state for node 8ae9e888cd78d05f507a0854ca448bdec9a9c2db: replica is reachable again.
# Connection to master 9000 lost
1733:S 06 Sep 2020 23:54:42.675 # Connection with master lost.
1733:S 06 Sep 2020 23:54:42.675 * Caching the disconnected master state.
1733:S 06 Sep 2020 23:54:43.081 * Connecting to MASTER 127.0.0.1:9000

# 9005 is promoted to master after failover
1733:S 06 Sep 2020 23:55:01.771 # configEpoch set to 7 after successful failover
1733:M 06 Sep 2020 23:55:01.771 * Discarding previously cached master state.
1733:M 06 Sep 2020 23:55:01.771 # Setting secondary replication ID to 1a48af953d16323d51ee381cdcf4a1ea7abda728, valid up to offset: 2926. New replication ID is 5528571be70e43f118b8bec7024ebf989788a011
1733:M 06 Sep 2020 23:55:01.771 # Cluster state changed: ok

Connecting to another node (9001) and running cluster nodes confirms the change; the old master 9000 is now flagged as failed:

127.0.0.1:9001> cluster nodes
....
c16c9e3a6f7c262a15659d21fc01e3ff8ff160db 127.0.0.1:9002@19002 master - 0 1599408024817 3 connected 10923-16383
# the old master 9000 is marked as failed and disconnected
cf2c6c30ce292036343d6b0422d0eb009b04baf8 127.0.0.1:9000@19000 master,fail - 1599407683597 1599407681000 1 disconnected
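
If you now restart 9000, it rejoins the cluster as a replica of the newly promoted 9005 instead of reclaiming its master role, which you can confirm with cluster nodes:

[root@192 bin]# ./redis-server ./redis_conf/9000.conf
[root@192 bin]# ./redis-cli -p 9001 cluster nodes | grep 9000   # 9000 should now appear as a slave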