MongoDB Environment Deployment

System Optimization Configuration

Resource Limit Configuration

$ cat /etc/security/limits.conf
* soft nofile 64000
* hard nofile 64000
* soft nproc 64000
* hard nproc 64000
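
These limits only apply to new login sessions; a quick way to confirm them for the account that will run mongod is:

$ ulimit -n    # open files, expected 64000
$ ulimit -u    # max user processes, expected 64000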

Disable SELinux

$ setenforce 0
$ vi /etc/sysconfig/selinux
SELINUX=disabled
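
setenforce 0 only switches SELinux to permissive mode for the current boot; the config file edit makes the change persistent across reboots. The current mode can be checked with:

$ getenforce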

Disable transparent_hugepage

$ echo never >  /sys/kernel/mm/transparent_hugepage/enabled
$ echo never > /sys/kernel/mm/transparent_hugepage/defrag
$ echo "echo never >  /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
$ echo "echo never > /sys/kernel/mm/transparent_hugepage/defrag" >> /etc/rc.local

Disable NUMA When Starting MongoDB

$ sysctl -w vm.zone_reclaim_mode=0
$ numactl --interleave=all /usr/local/mongodb/bin/mongod .....
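
sysctl -w only changes the value in the running kernel; to make it persistent across reboots it can also be appended to /etc/sysctl.conf, mirroring the other kernel parameters set below:

$ echo "vm.zone_reclaim_mode=0" >> /etc/sysctl.conf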

Set readahead

$ blockdev --setra 0 /dev/mapper/dbvg-dblv    # dbvg-dblv is the LV name

Tips: For the MMAPv1 engine a readahead of 32 or 16 is recommended; for WiredTiger, 0 is recommended. WiredTiger is the default engine since version 3.2.
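
The readahead value currently in effect (in 512-byte sectors) can be checked with the same tool; the device path below is the same LV used in the command above:

$ blockdev --getra /dev/mapper/dbvg-dblv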

Set the Disk I/O Scheduler

$ grubby --update-kernel=ALL --args="elevator=noop"

Tips: noop is recommended for virtualized environments and SSDs.
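
The grubby change only takes effect after a reboot. On a running system using the legacy block layer, the scheduler can also be switched per device (sda below is a placeholder device name):

$ echo noop > /sys/block/sda/queue/scheduler
$ cat /sys/block/sda/queue/scheduler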

Kernel Parameter Settings

$ cat >> /etc/sysctl.conf << EOF
fs.file-max=98000
kernel.pid_max=64000
kernel.threads-max=64000
vm.max_map_count=128000
EOF
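
The values appended above are not applied until the file is reloaded:

$ sysctl -p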

Set TCP KEEPALIVE

$ sysctl -w net.ipv4.tcp_keepalive_time=300
$ echo "net.ipv4.tcp_keepalive_time=300" >> /etc/sysctl.conf

Disable tuned (Virtualized Environments)

$ service tuned stop
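
Stopping the service only affects the current boot; to keep tuned from starting again (an optional extra step, use whichever matches the init system):

$ chkconfig tuned off          # SysV init (RHEL/CentOS 6)
$ systemctl disable tuned      # systemd (RHEL/CentOS 7+)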

MongoDB Installation

Extract the Installation Package

$ tar -xvf mongodb-linux-x86_64-rhel62-4.0.0.tgz -C /usr/local/
$ mv /usr/local/mongodb-linux-x86_64-rhel62-4.0.0 /usr/local/mongodb

Create Data Directories

$ mkdir -p /service/mongodb/shard1/data
$ mkdir -p /service/mongodb/shard1/logs
$ mkdir -p /service/mongodb/shard1/conf

Configure the Parameter File

systemLog:
  destination: file
  path: "/service/mongodb/shard1/logs/mongod.log"
  logAppend: true
storage:
  dbPath: "/service/mongodb/shard1/data"
  journal:
    enabled: true
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  fork: true
  pidFilePath: "/service/mongodb/shard1/data/mongod.pid"
net:
  bindIpAll: true
  port: 20000
  maxIncomingConnections: 10000
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 200
setParameter:
  enableLocalhostAuthBypass: false
replication:
  oplogSizeMB: 2048
  replSetName: shard1

Configure Environment Variables

$ echo "export PATH=$PATH:/usr/local/mongodb/bin" >>/etc/profile

Start the Database

$ numactl --interleave=all /usr/local/mongodb/bin/mongod --config /service/mongodb/shard1/conf/shard1.conf
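
To confirm the instance came up and is listening on the port from the sample shard1.conf above (20000), a quick ping from the mongo shell works:

$ mongo --port 20000 --eval "db.runCommand({ping: 1})"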

Replica Set Configuration

Initialize the Replica Set

$ mongo -u "root" -p "abcd123#" --authenticationDatabase "admin" --host 10.240.204.157 --port 27017
> config = {_id: 'repl01', members: [{_id: 0, host: '10.240.204.157:27017'},{_id: 1, host: '10.240.204.149:27017'},{_id: 2, host: '10.240.204.165:27017', arbiterOnly: true}]}
> rs.initiate(config)

Check Replica Set Status

repl01:PRIMARY>rs.status()

Manual Failover

In some situations the primary needs to be switched over manually. Run the stepDown command on the primary; the node then steps down to a secondary and a new primary is elected.

repl01:PRIMARY> rs.stepDown()
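
rs.stepDown() also accepts an optional argument: the number of seconds during which the node will not try to become primary again (60 by default), for example:

repl01:PRIMARY> rs.stepDown(120)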

Enable Login Authentication

Create the root User on the Primary

$ mongo --port 27017
> use admin
> db.createUser(
  {
    user: "root",
    pwd: "abcd123#",
    roles: [ { role: "root", db: "admin" } ]
  }
)
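
Applications should not connect as root; a hedged example of creating a least-privilege account for the mytest database used later in the sharding section (the user name and password below are placeholders):

> use mytest
> db.createUser({user: "appuser", pwd: "app_pwd", roles: [{role: "readWrite", db: "mytest"}]})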

Create the Key File and Copy It to All Nodes

$ openssl rand -base64 756 > /service/mongodb/key/rsa_key
$ chmod 600 /service/mongodb/key/rsa_key
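
The key file must be identical on every replica set member; an example of distributing it with scp, reusing the other two node IPs from the replica set above (the target directory is assumed to already exist on each node):

$ scp /service/mongodb/key/rsa_key 10.240.204.149:/service/mongodb/key/rsa_key
$ scp /service/mongodb/key/rsa_key 10.240.204.165:/service/mongodb/key/rsa_key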

Enable Authentication and the Key File

Add the following security section to the mongod parameter file on every node:

security:
  keyFile: "/service/mongodb/key/rsa_key"
  authorization: enabled

Restart the Database

$ /usr/local/mongodb/bin/mongod --config /service/mongodb/shard1/conf/shard1.conf --shutdown
$ numactl --interleave=all /usr/local/mongodb/bin/mongod --config /service/mongodb/shard1/conf/shard1.conf

Shard Cluster Configuration

Shard Node Configuration

Shard nodes are configured in the same way as the replica set above; the only difference is that the following parameters must be added to the parameter file:

sharding:
  clusterRole: shardsvr
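
Combined with the replication block from the parameter file shown earlier, the relevant stanzas of a shard1 member would look roughly like the sketch below (port 28000 is taken from the addshard commands later in this section):

replication:
  oplogSizeMB: 2048
  replSetName: shard1
sharding:
  clusterRole: shardsvr
net:
  port: 28000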

Config Server Replica Set Deployment

The config servers store the cluster's metadata and state. They are also deployed as a replica set for high availability; add the following parameters to the parameter file:

sharding:
  clusterRole: configsvr
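
A minimal sketch of the matching stanzas for a config server member; the replica set name configSet and port 40000 are taken from the configDB setting in the mongos parameter file below:

replication:
  replSetName: configSet
sharding:
  clusterRole: configsvr
net:
  port: 40000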

Mongos Router Configuration

Configure the Parameter File

$ vi /service/mongodb/mongos/conf/mongos.conf
systemLog: 
  destination: file
  path: "/service/mongos/logs/mongos.log"
  logAppend: true
processManagement:
  fork: true
net:
  bindIpAll: true
  port: 50000
sharding:
  configDB: configSet/10.240.204.157:40000,10.240.204.165:40000,10.240.204.149:40000
security:
  keyFile: "/service/mongodb/key/rsa_key"

Configure Environment Variables

$ echo 'export PATH=$PATH:/usr/local/mongodb/bin' >> /etc/profile

Copy the Shard Key File

[root@t-luhxdb01-p-szzb ~]# scp /service/mongodb/key/rsa_key 10.240.204.157:/service/mongodb/key/rsa_key
[root@t-luhxdb01-p-szzb ~]# scp /service/mongodb/key/rsa_key 10.240.204.165:/service/mongodb/key/rsa_key
[root@t-luhxdb01-p-szzb ~]# scp /service/mongodb/key/rsa_key 10.240.204.149:/service/mongodb/key/rsa_key

Start Mongos

$ numactl --interleave=all /usr/local/mongodb/bin/mongos --config /service/mongodb/mongos/conf/mongos.conf

Add Shard Nodes

$ mongo -u "root" -p "abcd123#" --authenticationDatabase "admin" --port 50000
mongos>db.runCommand({addshard:"shard1/10.240.204.157:28000,10.240.204.149:28000,10.240.204.165:28000"})
mongos>db.runCommand({addshard:"shard2/10.240.204.165:29000,10.240.204.157:29000,10.240.204.149:29000"})
mongos>db.runCommand({addshard:"shard3/10.240.204.149:30000,10.240.204.165:30000,10.240.204.157:30000"})

View Shards

mongos> db.getSiblingDB("config").shards.find();
{ "_id" : "shard1", "host" : "shard1/10.240.204.157:28000,10.240.204.149:29000,10.240.204.165:30000", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.240.204.165:28000,10.240.204.157:29000,10.240.204.149:30000", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.240.204.149:28000,10.240.204.165:29000,10.240.204.157:30000", "state" : 1 }

Enable Sharding for a Database

mongos>use admin
mongos>db.runCommand({enablesharding:"mytest"})
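
The sh.enableSharding() helper is the equivalent shell wrapper:

mongos> sh.enableSharding("mytest")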

Shard a Collection

## Hashed sharding
mongos>sh.shardCollection("mytest.student",{_id:"hashed"});
## Ranged sharding
mongos>sh.shardCollection("mytest.student",{_id:1});

Tips: The shard key must have a supporting index; for a non-empty collection, create it before sharding the collection.
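
Continuing the mytest.student example, the supporting index for hashed sharding would be created like this before sh.shardCollection(); for ranged sharding on _id the default _id index already satisfies the requirement:

mongos> use mytest
mongos> db.student.createIndex({_id: "hashed"})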

Check Sharding Status

mongos> sh.status()

References

MongoDB Manual: Production Notes (production-notes)
MongoDB Manual: Operations Checklist (production-checklist-operations)
