MongoDB: when one replica goes down but the mongo client connection is still pointed at it, what should you do?

Building a Highly Available MongoDB Cluster (Part 1): MongoDB Configuration and Replica Sets
Abstract: NoSQL databases have come a long way in recent years and have become the first choice of many organizations chasing performance. Among these stacks MongoDB is arguably the most popular, and this article walks through building a highly available MongoDB cluster.
This article will answer those questions. NoSQL came about to address large data volumes, easy scaling, high performance, flexible data models and high availability. A plain master-slave architecture falls well short of those goals, which is why MongoDB added replica sets and sharding. This article covers replica sets.
MongoDB officially no longer recommends master-slave replication; the recommended replacement is the replica set.
So what is a replica set? World of Warcraft players talk about running "instances", and the Chinese word for an instance (副本) is the same as the word for a replica, which is where the comparison comes from. At peak hours a zone would have far more players than monsters, so to keep the experience good the game spins up an identical copy of the zone, with the same monsters, for each batch of players. Each copy is an instance, and however many groups are playing, each group plays in its own instance without affecting the others.
MongoDB replication works along the same lines. Master-slave is essentially a single-copy deployment, with poor scalability and fault tolerance. A replica set keeps multiple copies of the data, so losing one copy still leaves the others, and it solves the first question above: when the primary dies, the cluster switches over automatically. No wonder MongoDB recommends this mode. Here is the architecture of a MongoDB replica set:
As the diagram shows, the client connects to the replica set as a whole and does not care whether any individual machine is down. The primary handles the set's reads and writes, and the secondaries keep replicated copies of the data; as soon as the primary goes down, the remaining members elect a new primary, all without the application servers having to do anything. Here is the architecture after the primary has failed:
When the secondaries detect via heartbeats that the primary is gone, they hold an election inside the set and automatically choose a new primary. Sounds impressive, so let's deploy one.
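Before walking through the deployment, here is a minimal sketch of what "the client connects to the set, not to a machine" means in practice. It assumes the set name (repset) and the three addresses configured in the steps below, and the connection-string form depends on the shell/driver version, so treat it as an illustration rather than part of the original walkthrough.
# give the client the set name plus a seed list; it then follows whichever member is currently primary
mongo "mongodb://192.168.1.136:27017,192.168.1.137:27017,192.168.1.138:27017/test?replicaSet=repset"
# from inside any member's shell, the members and their current roles can be listed with:
rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr); })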
The official recommendation is at least three members per replica set, so that is what we will configure for this test.
1. Prepare three machines: 192.168.1.136, 192.168.1.137 and 192.168.1.138. 192.168.1.136 will act as the replica set primary, and 192.168.1.137 and 192.168.1.138 as the secondaries.
2. On each machine create the test directories for the replica set:
# top-level directory for this MongoDB test
mkdir -p /data/mongodbtest/replset
# data directory for mongodb
mkdir -p /data/mongodbtest/replset/data
# change into the mongodb test directory
cd /data/mongodbtest
3. Download the MongoDB release tarball
wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.4.8.tgz
Note: never install the 32-bit build of MongoDB on a production Linux machine, because the 32-bit build is limited to roughly 2 GB of data.
# unpack the downloaded tarball
tar xvzf mongodb-linux-x86_64-2.4.8.tgz
4. Start mongod on each of the three machines
/data/mongodbtest/mongodb-linux-x86_64-2.4.8/bin/mongod --dbpath /data/mongodbtest/replset/data --replSet repset
The console shows that the replica set has no configuration yet:
Sun Dec 29 20:12:02.953 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sun Dec 29 20:12:02.953 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
5. Initialize the replica set
Log into MongoDB on any one of the three machines:
/data/mongodbtest/mongodb-linux-x86_64-2.4.8/bin/mongo
# switch to the admin database
use admin
# define the replica set config; the _id "repset" here must match the --replSet repset parameter used on the command line above
config = { _id:"repset", members:[
    {_id:0,host:"192.168.1.136:27017"},
    {_id:1,host:"192.168.1.137:27017"},
    {_id:2,host:"192.168.1.138:27017"}]
}
"_id" : "repset",
"members" : [
"_id" : 0,
"host" : "192.168.1.136:27017"
"_id" : 1,
"host" : "192.168.1.137:27017"
"_id" : 2,
"host" : "192.168.1.138:27017"
# initialize the replica set with this config
rs.initiate(config);
{
    "info" : "Config now saved locally.  Should come online in about a minute."
}
# watch the logs: once the replica set comes up, 138 is elected PRIMARY and 136 and 137 become SECONDARY.
Sun Dec 29 20:26:13.842 [conn3] replSet replSetInitiate admin command received from client
Sun Dec 29 20:26:13.842 [conn3] replSet replSetInitiate config object parses ok, 3 members specified
Sun Dec 29 20:26:13.847 [conn3] replSet replSetInitiate all members seem up
Sun Dec 29 20:26:13.848 [conn3] ******
Sun Dec 29 20:26:13.848 [conn3] creating replication oplog of size: 990MB...
Sun Dec 29 20:26:13.849 [FileAllocator] allocating new datafile /data/mongodbtest/replset/data/local.1, filling with zeroes...
Sun Dec 29 20:26:13.862 [FileAllocator] done allocating datafile /data/mongodbtest/replset/data/local.1, size: 1024MB, took 0.012 secs
Sun Dec 29 20:26:13.863 [conn3] ******
Sun Dec 29 20:26:13.863 [conn3] replSet info saving a newer config version to local.system.replset
Sun Dec 29 20:26:13.864 [conn3] replSet saveConfigLocally done
Sun Dec 29 20:26:13.864 [conn3] replSet replSetInitiate config now saved locally. Should come online in about a minute.
Sun Dec 29 20:26:23.047 [rsStart] replSet I am 192.168.1.138:27017
Sun Dec 29 20:26:23.048 [rsStart] replSet STARTUP2
Sun Dec 29 20:26:23.049 [rsHealthPoll] replSet member 192.168.1.137:27017 is up
Sun Dec 29 20:26:23.049 [rsHealthPoll] replSet member 192.168.1.136:27017 is up
Sun Dec 29 20:26:24.051 [rsSync] replSet SECONDARY
Sun Dec 29 20:26:25.053 [rsHealthPoll] replset info 192.168.1.136:27017 thinks that we are down
Sun Dec 29 20:26:25.053 [rsHealthPoll] replSet member 192.168.1.136:27017 is now in state STARTUP2
Sun Dec 29 20:26:25.056 [rsMgr] not electing self, 192.168.1.136:27017 would veto with 'I don't think 192.168.1.138:27017 is electable'
Sun Dec 29 20:26:31.059 [rsHealthPoll] replset info 192.168.1.137:27017 thinks that we are down
Sun Dec 29 20:26:31.059 [rsHealthPoll] replSet member 192.168.1.137:27017 is now in state STARTUP2
Sun Dec 29 20:26:31.062 [rsMgr] not electing self, 192.168.1.137:27017 would veto with 'I don't think 192.168.1.138:27017 is electable'
Sun Dec 29 20:26:37.074 [rsMgr] replSet info electSelf 2
Sun Dec 29 20:26:38.062 [rsMgr] replSet PRIMARY
Sun Dec 29 20:26:39.071 [rsHealthPoll] replSet member 192.168.1.137:27017 is now in state RECOVERING
Sun Dec 29 20:26:39.075 [rsHealthPoll] replSet member 192.168.1.136:27017 is now in state RECOVERING
Sun Dec 29 20:26:42.201 [slaveTracking] build index local.slaves { _id: 1 }
Sun Dec 29 20:26:42.207 [slaveTracking] build index done. scanned 0 total records. 0.005 secs
Sun Dec 29 20:26:43.079 [rsHealthPoll] replSet member 192.168.1.136:27017 is now in state SECONDARY
Sun Dec 29 20:26:49.080 [rsHealthPoll] replSet member 192.168.1.137:27017 is now in state SECONDARY
# check the state of the replica set members
rs.status();
"set" : "repset",
"date" : ISODate("T12:54:25Z"),
"myState" : 1,
"members" : [
"_id" : 0,
"name" : "192.168.1.136:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1682,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T12:26:13Z"),
"lastHeartbeat" : ISODate("T12:54:25Z"),
"lastHeartbeatRecv" : ISODate("T12:54:24Z"),
"pingMs" : 1,
"syncingTo" : "192.168.1.138:27017"
"_id" : 1,
"name" : "192.168.1.137:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1682,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T12:26:13Z"),
"lastHeartbeat" : ISODate("T12:54:25Z"),
"lastHeartbeatRecv" : ISODate("T12:54:24Z"),
"pingMs" : 1,
"syncingTo" : "192.168.1.138:27017"
"_id" : 2,
"name" : "192.168.1.138:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2543,
"optime" : Timestamp(, 1),
"optimeDate" : ISODate("T12:26:13Z"),
"self" : true
The replica set is now up and running.
6. Test data replication within the replica set
# connect to the mongo shell on the primary, 192.168.1.138:
mongo 127.0.0.1
# create the test database and insert a document into the testdb collection
use test
> db.testdb.insert({"test1":"testval1"})
# on the secondaries 192.168.1.136 and 192.168.1.137, connect and check whether the data has been replicated:
/data/mongodbtest/mongodb-linux-x86_64-2.4.8/bin/mongo 192.168.1.136:27017
# switch to the test database
repset:SECONDARY> use test
repset:SECONDARY> db.testdb.find();
Sun Dec 29 21:50:48.590 error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:128
# by default MongoDB reads and writes only on the primary; reads are refused on a secondary until you explicitly allow them:
repset:SECONDARY> db.getMongo().setSlaveOk();
# now the data can be seen on the secondary:
repset:SECONDARY> db.testdb.find();
{ "_id" : ObjectId("52cf"), "test1" : "testval1" }
7. Test replica set failover
Stop the primary, 138. The logs on 136 and 137 then show that, after a round of voting, 137 is elected the new primary and 136 starts syncing from 137.
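The original does not show the exact command used to stop 138; a hedged sketch of one way to do it is to connect to the primary and shut it down cleanly (killing the mongod process works just as well for this test):
/data/mongodbtest/mongodb-linux-x86_64-2.4.8/bin/mongo 192.168.1.138:27017
use admin
db.shutdownServer()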
Sun Dec 29 22:03:05.351 [rsBackgroundSync] replSet sync source problem: 10278 dbclient error communicating with server: 192.168.1.138:27017
Sun Dec 29 22:03:05.354 [rsBackgroundSync] replSet syncing to: 192.168.1.138:27017
Sun Dec 29 22:03:05.356 [rsBackgroundSync] repl: couldn't connect to server 192.168.1.138:27017
Sun Dec 29 22:03:05.356 [rsBackgroundSync] replSet not trying to sync from 192.168.1.138:27017, it is vetoed for 10 more seconds
Sun Dec 29 22:03:05.499 [rsHealthPoll] DBClientCursor::init call() failed
Sun Dec 29 22:03:05.499 [rsHealthPoll] replset info 192.168.1.138:27017 heartbeat failed, retrying
Sun Dec 29 22:03:05.501 [rsHealthPoll] replSet info 192.168.1.138:27017 is down (or slow to respond):
Sun Dec 29 22:03:05.501 [rsHealthPoll] replSet member 192.168.1.138:27017 is now in state DOWN
Sun Dec 29 22:03:05.511 [rsMgr] not electing self, 192.168.1.137:27017 would veto with '192.168.1.136:27017 is trying to elect itself but 192.168.1.138:27017 is already primary and more up-to-date'
Sun Dec 29 22:03:07.330 [conn393] replSet info voting yea for 192.168.1.137:27017 (1)
Sun Dec 29 22:03:07.503 [rsHealthPoll] replset info 192.168.1.138:27017 heartbeat failed, retrying
Sun Dec 29 22:03:08.462 [rsHealthPoll] replSet member 192.168.1.137:27017 is now in state PRIMARY
Sun Dec 29 22:03:09.359 [rsBackgroundSync] replSet syncing to: 192.168.1.137:27017
Sun Dec 29 22:03:09.507 [rsHealthPoll] replset info 192.168.1.138:27017 heartbeat failed, retrying
Check the status of the whole set: 138 now shows as unreachable.
/data/mongodbtest/mongodb-linux-x86_64-2.4.8/bin/mongo 192.168.1.136:27017
repset:SECONDARY> rs.status();
{
    "set" : "repset",
    "date" : ISODate("T14:28:35Z"),
    "myState" : 2,
    "syncingTo" : "192.168.1.137:27017",
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.1.136:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 9072,
            "optime" : Timestamp(, 1),
            "optimeDate" : ISODate("T13:48:54Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.1.137:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 7329,
            "optime" : Timestamp(, 1),
            "optimeDate" : ISODate("T13:48:54Z"),
            "lastHeartbeat" : ISODate("T14:28:34Z"),
            "lastHeartbeatRecv" : ISODate("T14:28:34Z"),
            "pingMs" : 1,
            "syncingTo" : "192.168.1.138:27017"
        },
        {
            "_id" : 2,
            "name" : "192.168.1.138:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(, 1),
            "optimeDate" : ISODate("T13:48:54Z"),
            "lastHeartbeat" : ISODate("T14:28:35Z"),
            "lastHeartbeatRecv" : ISODate("T14:28:23Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.1.137:27017"
        }
    ]
}
Now restart the old primary, 138. It rejoins as a SECONDARY while 137 remains the PRIMARY.
Sun Dec 29 22:21:06.619 [rsStart] replSet I am 192.168.1.138:27017
Sun Dec 29 22:21:06.619 [rsStart] replSet STARTUP2
Sun Dec 29 22:21:06.627 [rsHealthPoll] replset info 192.168.1.136:27017 thinks that we are down
Sun Dec 29 22:21:06.627 [rsHealthPoll] replSet member 192.168.1.136:27017 is up
Sun Dec 29 22:21:06.627 [rsHealthPoll] replSet member 192.168.1.136:27017 is now in state SECONDARY
Sun Dec 29 22:21:07.628 [rsSync] replSet SECONDARY
Sun Dec 29 22:21:08.623 [rsHealthPoll] replSet member 192.168.1.137:27017 is up
Sun Dec 29 22:21:08.624 [rsHealthPoll] replSet member 192.168.1.137:27017 is now in state PRIMARY
8. Test connecting to the replica set from a Java program. With one of the three nodes down, the application client can still read and write against the set.
import java.util.ArrayList;
import java.util.List;
import com.mongodb.*;   // MongoClient, ServerAddress, DB, DBCollection, BasicDBObject, DBCursor, DBObject (2.x driver)

public class TestMongoDBReplSet {
    public static void main(String[] args) {
        try {
            // seed list with all three members; the driver discovers the set and finds the primary
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            ServerAddress address1 = new ServerAddress("192.168.1.136", 27017);
            ServerAddress address2 = new ServerAddress("192.168.1.137", 27017);
            ServerAddress address3 = new ServerAddress("192.168.1.138", 27017);
            addresses.add(address1);
            addresses.add(address2);
            addresses.add(address3);
            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("test");
            DBCollection coll = db.getCollection("testdb");
            // insert
            BasicDBObject object = new BasicDBObject();
            object.append("test2", "testval2");
            coll.insert(object);
            DBCursor dbCursor = coll.find();
            while (dbCursor.hasNext()) {
                DBObject dbObject = dbCursor.next();
                System.out.println(dbObject.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Failover now looks solid, so is this architecture good enough? There is still plenty to optimize. Take the second question raised at the beginning: how do we relieve the read/write pressure on the primary? The usual answer is read/write splitting, so how is that done with a MongoDB replica set?
Look at the diagram:
For most workloads writes are far less frequent than reads, so the primary handles the writes while the two secondaries serve the reads.
1. Read/write splitting first requires setSlaveOk to be enabled on the SECONDARY nodes.
2. Then point the reads at the secondaries in the application code, as follows:
import java.util.ArrayList;
import java.util.List;
import com.mongodb.*;   // MongoClient, ServerAddress, DB, DBCollection, BasicDBObject, DBObject, ReadPreference (2.x driver)

public class TestMongoDBReplSetReadSplit {
    public static void main(String[] args) {
        try {
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            ServerAddress address1 = new ServerAddress("192.168.1.136", 27017);
            ServerAddress address2 = new ServerAddress("192.168.1.137", 27017);
            ServerAddress address3 = new ServerAddress("192.168.1.138", 27017);
            addresses.add(address1);
            addresses.add(address2);
            addresses.add(address3);
            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("test");
            DBCollection coll = db.getCollection("testdb");
            BasicDBObject object = new BasicDBObject();
            object.append("test2", "testval2");
            // route the read operation to a secondary node
            ReadPreference preference = ReadPreference.secondary();
            DBObject dbObject = coll.findOne(object, null, preference);
            System.out.println(dbObject);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Besides secondary there are five read preference modes in total: primary, primaryPreferred, secondary, secondaryPreferred and nearest (a shell sketch follows this list).
primary: the default mode; reads go only to the primary.
primaryPreferred: reads normally go to the primary and fall back to a secondary only when the primary is unavailable.
secondary: reads go only to secondaries; the trade-off is that a secondary's data can be staler than the primary's.
secondaryPreferred: reads prefer a secondary and fall back to the primary when no secondary is available.
nearest: reads go to whichever member, primary or secondary, has the lowest network latency.
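The Java constants above have counterparts in the mongo shell. A hedged sketch of trying the modes interactively (setReadPref is the connection-level call; the per-cursor readPref form is an assumption for older shells):
db.getMongo().setReadPref("secondaryPreferred")   // set the mode for this shell connection
db.testdb.find()                                  // reads now prefer a secondary when one is available
db.testdb.find().readPref("nearest")              // or choose a mode per query on the cursor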
With read/write splitting in place we can spread the load, which answers the question of how to relieve the primary's read/write pressure. But as the number of secondaries grows, so does the replication load on the primary. Is there a way around that? MongoDB has long had one.
The arbiter node stores no data at all; it only takes part in the votes during failover, which removes the replication load it would otherwise add. Thoughtful, isn't it? The MongoDB developers clearly know their large-scale architectures. And beyond primary, secondary and arbiter nodes there are also Secondary-Only, Hidden, Delayed and Non-Voting members (a configuration sketch follows this list).
Secondary-Only: can never become primary and only serves as a secondary; used to keep under-powered machines from being elected.
Hidden: cannot be addressed by clients and cannot become primary, but still votes; typically used to hold backups.
Delayed: replicates from the primary with a configurable delay; mainly used for backups, because with real-time replication an accidental delete reaches the secondaries immediately and cannot be recovered from them.
Non-Voting: a secondary with no vote in elections, purely a backup copy of the data.
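A hedged sketch of how these member roles are set, using rs.reconfig() on the primary; the member indexes and the extra arbiter host are illustrative, not part of the setup above:
cfg = rs.conf()
cfg.members[1].priority = 0        // secondary-only: priority 0 means the member can never become primary
cfg.members[2].priority = 0
cfg.members[2].hidden = true       // hidden: invisible to clients, still replicates and votes
cfg.members[2].slaveDelay = 3600   // delayed: stays one hour behind the primary (requires priority 0)
// cfg.members[2].votes = 0        // non-voting: a pure backup copy with no say in elections
rs.reconfig(cfg)
rs.addArb("192.168.1.139:27017")   // an arbiter is added as its own member; it stores no data and only votes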
At this point the replica set has dealt with two of the opening questions:
Can the connection switch over automatically when the primary dies? (Under master-slave this still had to be done by hand.)
How do we relieve the read/write pressure on the primary?
Two more questions are left for later articles:
Every secondary holds a full copy of the database; will the secondaries come under too much pressure?
When the data grows beyond what the machines can handle, can the system scale out automatically?
Running a replica set also raises some new questions:
How is the primary elected during failover, and can we manually demote a primary? (See the brief shell aside after this list.)
Why does the official documentation recommend an odd number of members?
How does replication within a MongoDB replica set work? What happens if it lags, and can the members become inconsistent?
Can failover be triggered spontaneously? Under what conditions? Frequent failovers would add load to the system.
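On the question of manually demoting a primary, the shell does expose this directly; a hedged pointer, since the original leaves the topic for a later article:
rs.stepDown(60)    // run on the current primary: it steps down and stays ineligible for roughly 60 seconds
rs.freeze(120)     // run on any member you do not want elected during the next 120 seconds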
A hands-on highly available database from one of our production projects: building a MongoDB replica set in detail
Our MongoDB replica set consists of three mongod servers: one primary and two secondaries. The primary takes the writes and the two secondaries take the reads (which secondary actually serves a given read is decided by the driver's routing), giving us read/write splitting. Better still, if the primary goes down, one of the two secondaries is promoted automatically with no operator intervention, which is what genuine high availability for the database means.
CPU cores: 4
Memory: 8 GB
Bandwidth: 100 Mb
Disks: 40 GB system disk, 180 GB data disk
OS: Ubuntu 14.04, 64-bit
1.2 System deployment diagram
2. Setting up the MongoDB replica set environment
2.1 Installation package
The latest MongoDB packages can be downloaded from:
https://www.mongodb.org/downloads#production
Pick the build that matches the operating system; the latest at the time was mongodb-linux-x86_64-ubuntu.1.tgz, about 74 MB.
2.2 Installing MongoDB and setting environment variables
Extract it in the current directory:
$ tar zxvf mongodb-linux-x86_64-ubuntu.1.tgz
This produces the mongodb-linux-x86_64-ubuntu.1 directory, which contains bin and the other files and directories.
MongoDB itself needs no installer, but we will treat it as installed software and move it under /usr/local:
$ mv mongodb-linux-x86_64-ubuntu.1 /usr/local/mongodb
Finally add MongoDB to the PATH and make it take effect immediately:
$ echo "export PATH=/usr/local/mongodb/bin:$PATH" >> ~/.bashrc
$ source ~/.bashrc
2.3 Prerequisites for the MongoDB instances
With this architecture the replica set needs four instances: one arbiter, one master and two slaves. Each instance needs its own directory for data and log files; we put them on the data disk, in directories named after the role and port (rs1-<port>, arb-<port>).
The nodes authenticate to each other, so they all share the same key file.
Create the data/log directory for each node:
$ cd /opt/
$ mkdir mongodb
$ cd mongodb/
$ mkdir arb-30000
$ mkdir rs1-27017
$ mkdir rs1-27018
$ mkdir rs1-27019
Create the key file and restrict its permissions:
$ openssl rand -base64 512 > rs1.key
$ chmod 600 rs1.key
2.4 Creating the instances and users
2.4.1 First start of the four replica set instances
Because no users exist yet, start the replica set without the auth options the first time, otherwise no users can be created.
Start the master node (port 27017):
$ mongod --port=27017 --fork --dbpath=/opt/mongodb/rs1-27017 --logpath=/opt/mongodb/rs1-27017/mongod.log --replSet=rs1
Start the first slave node (port 27018):
$ mongod --port=27018 --fork --dbpath=/opt/mongodb/rs1-27018 --logpath=/opt/mongodb/rs1-27018/mongod.log --replSet=rs1
Start the second slave node (port 27019):
$ mongod --port=27019 --fork --dbpath=/opt/mongodb/rs1-27019 --logpath=/opt/mongodb/rs1-27019/mongod.log --replSet=rs1
Output like the following means the start succeeded:
about to fork child process, waiting until server is ready for connections.
forked process: 21947
child process started successfully, parent exiting
Finally start the arbiter node (port 30000):
$ mongod --port=30000 --fork --dbpath=/opt/mongodb/arb-30000 --logpath=/opt/mongodb/arb-30000/mongod.log --replSet=rs1
2.4.2 Creating the replica set
With every node running, we now create the replica set and add the nodes to it. First log into the master node:
$ mongo -port 27017
Once in the shell, initialize the replica set:
rs.initiate()
After that, check whether this node is the master:
rs1:OTHER> rs.isMaster()
{
    "hosts" : [
        "somehost:27017"
    ],
    "setName" : "rs1",
    "setVersion" : 1,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "somehost:27017",
    "me" : "somehost:27017",
    "electionId" : ObjectId("56b"),
    "maxBsonObjectSize" : ,
    "maxMessageSizeBytes" : ,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("T03:35:26.532Z"),
    "maxWireVersion" : 4,
    "minWireVersion" : 0
}
Then add the two slave nodes and the arbiter, in turn:
rs1:PRIMARY> rs.add('somehost:27018')
{ "ok" : 1 }
rs1:PRIMARY> rs.add('somehost:27019')
{ "ok" : 1 }
rs1:PRIMARY> rs.addArb('somehost:30000')
{ "ok" : 1 }
Note that somehost stands for each node's hostname (check it with the hostname command).
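As an aside, the same set can be built in one step by handing rs.initiate() an explicit config, arbiter included; a hedged sketch using the hostnames and ports from above:
rs.initiate({
  _id: "rs1",
  members: [
    { _id: 0, host: "somehost:27017" },
    { _id: 1, host: "somehost:27018" },
    { _id: 2, host: "somehost:27019" },
    { _id: 3, host: "somehost:30000", arbiterOnly: true }
  ]
})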
Verify the replica set status:
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("T03:38:56.519Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "somehost:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 560,
            "optime" : {
                "ts" : Timestamp(, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("T03:38:27Z"),
            "electionTime" : Timestamp(, 2),
            "electionDate" : ISODate("T03:34:47Z"),
            "configVersion" : 4,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "somehost:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 102,
            "optime" : {
                "ts" : Timestamp(, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("T03:38:27Z"),
            "lastHeartbeat" : ISODate("T03:38:55.923Z"),
            "lastHeartbeatRecv" : ISODate("T03:38:55.927Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "somehost:27017",
            "configVersion" : 4
        },
        {
            "_id" : 2,
            "name" : "somehost:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 85,
            "optime" : {
                "ts" : Timestamp(, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("T03:38:27Z"),
            "lastHeartbeat" : ISODate("T03:38:55.923Z"),
            "lastHeartbeatRecv" : ISODate("T03:38:54.924Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "somehost:27018",
            "configVersion" : 4
        },
        {
            "_id" : 3,
            "name" : "somehost:30000",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 28,
            "lastHeartbeat" : ISODate("T03:38:55.923Z"),
            "lastHeartbeatRecv" : ISODate("T03:38:52.940Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 4
        }
    ]
}
2.4.3 Creating the administrator user
rs1:PRIMARY> use admin
switched to db admin
rs1:PRIMARY> db.createUser({user:"admin",pwd:"adminpwd",roles:["root"]})
Successfully added user: { "user" : "admin", "roles" : [ "root" ] }
rs1:PRIMARY> exit
The administrator user admin with password adminpwd now exists.
Verify that the new user has been replicated to the other nodes:
$ mongo -port 27018
rs1:SECONDARY> use admin
switched to db admin
rs1:SECONDARY> db.auth("admin","adminpwd")
2.4.4 Creating the production database and user
First shut down the running instances in order: the two slave nodes, then the master, then the arbiter.
rs1:SECONDARY> db.shutdownServer()
server should be down...
T11:46:36.234+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27018 (127.0.0.1) failed
T11:46:36.235+0800 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27018, reason: errno:111 Connection refused
T11:46:36.235+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27018 (127.0.0.1) failed failed
T11:46:36.237+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27018 (127.0.0.1) failed
T11:46:36.237+0800 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27018, reason: errno:111 Connection refused
T11:46:36.237+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27018 (127.0.0.1) failed failed
$ mongo -port 27019
rs1:SECONDARY> use admin
switched to db admin
rs1:SECONDARY> db.auth("admin","adminpwd")
rs1:SECONDARY> db.shutdownServer()
server should be down...
T11:48:01.191+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27019 (127.0.0.1) failed
T11:48:01.191+0800 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27019, reason: errno:111 Connection refused
T11:48:01.191+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27019 (127.0.0.1) failed failed
T11:48:01.193+0800 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27019 (127.0.0.1) failed
T11:48:01.193+0800 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27019, reason: errno:111 Connection refused
T11:48:01.193+0800 I NETWORK  [thread1] reconnect 127.0.0.1:27019 (127.0.0.1) failed failed
Shut down the master node and the arbiter in the same way (a sketch follows).
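A hedged sketch of the two shutdowns summarized above as "the same way" (ports as used in this setup; db.auth mirrors the 27019 session even though auth is not enforced yet at this point):
$ mongo -port 27017
> use admin
> db.auth("admin","adminpwd")
> db.shutdownServer()
$ mongo -port 30000
> use admin
> db.shutdownServer()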
Then start everything again in the same order, this time with the auth options:
$ mongod --port=27017 --fork --dbpath=/opt/mongodb/rs1-27017 --logpath=/opt/mongodb/rs1-27017/mongod.log --replSet=rs1 --auth --keyFile=/opt/mongodb/rs1.key
about to fork child process, waiting until server is ready for connections.
forked process: 22803
child process started successfully, parent exiting
$ mongod --port=27018 --fork --dbpath=/opt/mongodb/rs1-27018 --logpath=/opt/mongodb/rs1-27018/mongod.log --replSet=rs1 --auth --keyFile=/opt/mongodb/rs1.key
about to fork child process, waiting until server is ready for connections.
forked process: 22803
child process started successfully, parent exiting
$ mongod --port=27019 --fork --dbpath=/opt/mongodb/rs1-27019 --logpath=/opt/mongodb/rs1-27019/mongod.log --replSet=rs1 --auth --keyFile=/opt/mongodb/rs1.key
about to fork child process, waiting until server is ready for connections.
forked process: 23068
child process started successfully, parent exiting
$ mongod --port=30000 --fork --dbpath=/opt/mongodb/arb-30000 --logpath=/opt/mongodb/arb-30000/mongod.log --replSet=rs1 --keyFile=/opt/mongodb/rs1.key
about to fork child process, waiting until server is ready for connections.
forked process: 23068
child process started successfully, parent exiting
Once every node is up, insert a record into the test database on the master node to check that replication works:
$ mongo -port 27017
MongoDB shell version: 3.2.1
connecting to: 127.0.0.1:27017/test
rs1:PRIMARY> use admin
switched to db admin
rs1:PRIMARY> db.auth('admin','adminpwd')
rs1:PRIMARY> use test
switched to db test
rs1:PRIMARY> db.test.insert({test:"test"})
WriteResult({ "nInserted" : 1 })
rs1:PRIMARY> exit
Now check on a slave whether the record has been replicated:
$ mongo -port 27018
MongoDB shell version: 3.2.1
connecting to: 127.0.0.1:27018/test
rs1:SECONDARY> use test
switched to db test
rs1:SECONDARY> use admin
switched to db admin
rs1:SECONDARY> db.auth('admin','adminpwd')
rs1:SECONDARY> use test
switched to db test
rs1:SECONDARY> rs.slaveOk()
rs1:SECONDARY> db.test.find()
{ "_id" : ObjectId("56b17b7cafc9b"), "test" : "test" }
Replication works.
Next create the production database and its user. Log into the master node first:
$ mongo -port 27017
rs1:PRIMARY> use admin
switched to db admin
rs1:PRIMARY> db.auth("admin","adminpwd")
rs1:PRIMARY> use quicktest
switched to db quicktest
rs1:PRIMARY> db.createUser({user:"quicktest",pwd:"quicktest",roles:[{ role: "readWrite", db: "quicktest" }]})
Successfully added user: {
    "user" : "quicktest",
    "roles" : [
        {
            "role" : "readWrite",
            "db" : "quicktest"
        }
    ]
}
The quicktest production user has been created. Insert a document so that the quicktest database actually appears in the database list:
rs1:PRIMARY> db.justtest.insert({test:"test"})
WriteResult({ "nInserted" : 1 })
rs1:PRIMARY> show dbs
admin 0.000GB
local 0.000GB
quicktest 0.000GB
test 0.000GB
The quicktest production database now exists.
2.5 Creating the app's admin user
Some base collections, such as the user collection, need to be created up front; non-base data such as transaction records is created automatically when transactions occur.
$ mongo -port 27017
MongoDB shell version: 3.2.1
connecting to: 127.0.0.1:27017/test
rs1:PRIMARY> use quicktest
switched to db quicktest
rs1:PRIMARY> db.auth('quicktest','quicktest')
rs1:PRIMARY> show collections
rs1:PRIMARY> db.user.insert({ "_id" : ObjectId("55ef883acb91f093af390689"), "userName" : "admin", "nickName" : "admin", "password" : "ca39dc956feac05ba676ed", "mail" : "", "phoneNum" : "", "userType" : "admin", "agentCode" : "", "subAgentCode" : "", "groupCode" : "", "merId" : "", "areaCode" : "222", "updateTime" : " 17:26:19", "loginTime" : "", "lockTime" : "" })
WriteResult({ "nInserted" : 1 })
2.6 Verifying the replica set's high availability
Kill the master node by hand:
$ kill -9 22803
$ mongo -port 27018
MongoDB shell version: 3.2.1
rs1:SECONDARY> exit
$ mongo -port 27019
MongoDB shell version: 3.2.1
connecting to: 127.0.0.1:27019/test
rs1:PRIMARY>
The master role has been handed over to the node on port 27019. Check the replica set status:
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("T03:00:28.531Z"),
    "myState" : 1,
    "term" : NumberLong(3),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "somehost:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("T00:00:00Z"),
            "lastHeartbeat" : ISODate("T03:00:27.031Z"),
            "lastHeartbeatRecv" : ISODate("T02:43:57.294Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Connection refused",
            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "somehost:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2070279,
            "optime" : {
                "ts" : Timestamp(, 2),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("T03:00:07Z"),
            "lastHeartbeat" : ISODate("T03:00:26.860Z"),
            "lastHeartbeatRecv" : ISODate("T03:00:27.287Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "somehost:27019",
            "configVersion" : 4
        },
        {
            "_id" : 2,
            "name" : "somehost:27019",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2070280,
            "optime" : {
                "ts" : Timestamp(, 2),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("T03:00:07Z"),
            "electionTime" : Timestamp(, 1),
            "electionDate" : ISODate("T02:44:08Z"),
            "configVersion" : 4,
            "self" : true
        },
        {
            "_id" : 3,
            "name" : "somehost:30000",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 2070226,
            "lastHeartbeat" : ISODate("T03:00:26.859Z"),
            "lastHeartbeatRecv" : ISODate("T03:00:25.360Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 4
        }
    ]
}
The old primary on port 27017 now has its health flag set to 0 (unavailable). The arbiter's log shows:
T10:44:02.924+0800 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to somehost:27017
T10:44:02.924+0800 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
T10:44:02.924+0800 I REPL [ReplicationExecutor] Error in heartbeat request to somehost:27017; HostUnreachable Connection refused
So although the arbiter has dropped its pooled connection to 27017, it keeps sending heartbeats to 27017 so the node can be used again once it recovers. To verify that, restart 27017:
$ mongod --port=27017 --fork --dbpath=/opt/mongodb/rs1-27017 --logpath=/opt/mongodb/rs1-27017/mongod.log --replSet=rs1 --auth --keyFile=/opt/mongodb/rs1.key
about to fork child process, waiting until server is ready for connections.
forked process: 31880
child process started successfully, parent exiting
Back on the current primary, 27019:
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("T03:16:59.520Z"),
    "myState" : 1,
    "term" : NumberLong(3),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "somehost:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 57,
            "optime" : {
                "ts" : Timestamp(, 4),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("T03:15:07Z"),
            "lastHeartbeat" : ISODate("T03:16:57.609Z"),
            "lastHeartbeatRecv" : ISODate("T03:16:59.124Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "somehost:27019",
            "configVersion" : 4
        },
        {
            "_id" : 1,
            "name" : "somehost:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2071270,
            "optime" : {
                "ts" : Timestamp(, 4),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("T03:15:07Z"),
            "lastHeartbeat" : ISODate("T03:16:59.207Z"),
            "lastHeartbeatRecv" : ISODate("T03:16:57.656Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "somehost:27019",
            "configVersion" : 4
        },
        {
            "_id" : 2,
            "name" : "somehost:27019",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2071271,
            "optime" : {
                "ts" : Timestamp(, 4),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("T03:15:07Z"),
            "electionTime" : Timestamp(, 1),
            "electionDate" : ISODate("T02:44:08Z"),
            "configVersion" : 4,
            "self" : true
        },
        {
            "_id" : 3,
            "name" : "somehost:30000",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 2071217,
            "lastHeartbeat" : ISODate("T03:16:59.205Z"),
            "lastHeartbeatRecv" : ISODate("T03:16:55.506Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 4
        }
    ]
}
The 27017 node has recovered and is now serving as a slave, which the arbiter's log also confirms:
T11:16:03.721+0800 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to somehost:27017
T11:16:03.722+0800 I REPL [ReplicationExecutor] Member somehost:27017 is now in state SECONDARY
A slave node failing and recovering plays out in much the same way.
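For completeness, a hedged sketch of how that slave check would look (the pid placeholder is illustrative):
$ kill <pid of the 27018 mongod>
$ mongo -port 27019
> use admin
> db.auth("admin","adminpwd")
> rs.status().members.forEach(function (m) { print(m.name + "  health=" + m.health + "  " + m.stateStr); })
27018 should report health 0 until it is restarted with the same mongod command line as before, after which it rejoins as a SECONDARY within a few heartbeats.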