How to deploy a MongoDB sharded cluster with docker compose

Updated: 2024-10-16 10:55:30   Author: 栀夏613
Sharding is MongoDB's data-distribution strategy for large datasets and high-load applications. By spreading data evenly across multiple servers, sharding improves an application's scalability and performance. This article walks through deploying a MongoDB sharded cluster with docker compose.

Sharding mechanism

Sharding concepts

Sharding is the process of splitting a database and distributing it across multiple machines, so that more data can be stored and heavier loads handled without one powerful server. The basic idea is to cut collections into chunks and spread those chunks across shards, each shard holding only part of the total data; a balancer then keeps the shards even by migrating data between them. Operations go through a routing process called mongos, which knows (via the config servers) which data lives on which shard. Most deployments shard to solve disk-space problems; writes may span shards, while queries should avoid crossing shards where possible.

The main use cases for MongoDB sharding:

  • The data set is too large for a single machine's disk;
  • A single mongod cannot keep up with the write load, so writes are spread across shards;
  • Large amounts of data need to be held in memory for performance, drawing on the combined resources of the shard servers.

Advantages of MongoDB sharding:

Sharding reduces the number of requests each shard must handle and raises the cluster's storage capacity and throughput: when inserting a document, the application only touches the shard that stores it. It also reduces the data each shard stores, which improves availability and the performance of queries against large databases. Use sharding when a single MongoDB server becomes a storage or performance bottleneck, or when a large deployment needs to make full use of memory.

Sharded cluster architecture

Components:

  • **Config Server:** stores the configuration of the entire sharded cluster, including chunk metadata.
  • **Shard:** stores the actual data chunks; each shard holds part of the cluster's data. For example, with 3 shards and a hashed sharding rule, the cluster's data is split across the 3 shards accordingly. If any shard goes down, the data it holds becomes unavailable, so in production each shard is normally a 3-node replica set to avoid a single point of failure.
  • **mongos:** the front-end router and entry point of the cluster. Client applications connect through mongos, which makes the cluster look like a single database that clients can use transparently.

What the sharded cluster does:

  • Request routing: the router forwards each request to the right shard and chunk.
  • Data distribution: a built-in balancer keeps data evenly distributed, the precondition for evenly distributed requests.
  • Chunk splitting: a chunk is capped at 64 MB by default (128 MB since MongoDB 6.0) or roughly 100,000 documents; reaching the threshold triggers a split into two chunks.
  • Chunk migration: to keep data evenly distributed across the shard servers, chunks migrate between them, traditionally once the chunk-count difference between shards reaches about 8.

Deploying the sharded cluster

Deployment plan

shard: 3 replica sets, 3 members each
config server: 3 replica sets, 3 members each
mongos: 3 routers (mongos is stateless and does not form a replica set)

Host preparation

shard

| IP | role | port | shard name |
| --- | --- | --- | --- |
| 192.168.142.157 | shard1 | 27181 | shard1 |
| 192.168.142.157 | shard2 | 27182 | shard1 |
| 192.168.142.157 | shard3 | 27183 | shard1 |
| 192.168.142.155 | shard1 | 27181 | shard2 |
| 192.168.142.155 | shard2 | 27182 | shard2 |
| 192.168.142.155 | shard3 | 27183 | shard2 |
| 192.168.142.156 | shard1 | 27181 | shard3 |
| 192.168.142.156 | shard2 | 27182 | shard3 |
| 192.168.142.156 | shard3 | 27183 | shard3 |

config server

| IP | role | port | config name |
| --- | --- | --- | --- |
| 192.168.142.157 | config server1 | 27281 | config1 |
| 192.168.142.157 | config server2 | 27282 | config1 |
| 192.168.142.157 | config server3 | 27283 | config1 |
| 192.168.142.155 | config server1 | 27281 | config2 |
| 192.168.142.155 | config server2 | 27282 | config2 |
| 192.168.142.155 | config server3 | 27283 | config2 |
| 192.168.142.156 | config server1 | 27281 | config3 |
| 192.168.142.156 | config server2 | 27282 | config3 |
| 192.168.142.156 | config server3 | 27283 | config3 |

mongos

| IP | role | port |
| --- | --- | --- |
| 192.168.142.155 | mongos | 27381 |
| 192.168.142.155 | mongos | 27382 |
| 192.168.142.155 | mongos | 27383 |

Starting the deployment

Create the directories for the sharded cluster:

mkdir /docker/mongo-zone/{configsvr,shard,mongos} -p

Enter the /docker/mongo-zone/ directory.

Prepare the configsvr replica-set folders:

mkdir configsvr/{configsvr1,configsvr2,configsvr3}/{data,logs} -p

Prepare the shard replica-set folders:

mkdir shard/{shard1,shard2,shard3}/{data,logs} -p

Prepare the mongos folders:

mkdir mongos/{mongos1,mongos2,mongos3}/{data,logs} -p
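A quick sanity check of the layout the mkdir commands above produce — a sketch that rebuilds the same tree under a scratch directory (BASE stands in for /docker/mongo-zone):

```shell
# Rebuild the deployment directory layout under a throwaway base dir.
BASE="$(mktemp -d)"   # stand-in for /docker/mongo-zone
for i in 1 2 3; do
  mkdir -p "$BASE/configsvr/configsvr$i/data" "$BASE/configsvr/configsvr$i/logs"
  mkdir -p "$BASE/shard/shard$i/data"         "$BASE/shard/shard$i/logs"
  mkdir -p "$BASE/mongos/mongos$i/data"       "$BASE/mongos/mongos$i/logs"
done
# 1 base + 3 role dirs + 9 instance dirs + 18 data/logs dirs = 31 directories
find "$BASE" -type d | wc -l
```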

Generate the keyfile:

openssl rand -base64 756 > mongo.key

Distribute it to the other hosts:

scp mongo.key slave@192.168.142.156:/home/slave
scp mongo.key slave02@192.168.142.155:/home/slave02
mv /home/slave02/mongo.key .
mv /home/slave/mongo.key .
chown root:root mongo.key
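The keyfile steps in one sketch. mongod requires the keyfile to be readable only by its owner (the compose files later chmod it to 400 and chown it to uid 999, the mongodb user inside the official image); the remote-host paths are the ones from the scp commands above:

```shell
# Generate a 756-byte random key (base64-encoded) and restrict its permissions.
KEYDIR="$(mktemp -d)"            # stand-in for /docker/mongo-zone
openssl rand -base64 756 > "$KEYDIR/mongo.key"
chmod 600 "$KEYDIR/mongo.key"    # mongod rejects group/world-readable keyfiles
ls -l "$KEYDIR/mongo.key"
# After scp, on each remote host:
#   mv /home/slave/mongo.key /docker/mongo-zone/   (or /home/slave02/mongo.key)
#   chown root:root /docker/mongo-zone/mongo.key
```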

Set up the shard replica set

cd /docker/mongo-zone/shard/shard1

docker-compose.yml

services:
  mongo-shard1:
    image: mongo:7.0
    container_name: mongo-shard1
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard1/data:/data/db
      - /docker/mongo-zone/shard/shard1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27181:27181"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27181
  mongo-shard2:
    image: mongo:7.0
    container_name: mongo-shard2
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard2/data:/data/db
      - /docker/mongo-zone/shard/shard2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27182:27182"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27182
  mongo-shard3:
    image: mongo:7.0
    container_name: mongo-shard3
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard3/data:/data/db
      - /docker/mongo-zone/shard/shard3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27183:27183"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27183

Repeat the same steps on the other two hosts, following the tables above.
Only the replica-set name needs to change in docker-compose.yml, in two places per service:

MONGO_INITDB_REPLICA_SET_NAME
--replSet
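A hypothetical sed pass that derives the shard2 compose file from shard1's, touching only the replica-set name occurrences (demonstrated on a minimal excerpt rather than the full file):

```shell
cd "$(mktemp -d)"
# Minimal excerpt of the shard1 compose file (illustrative, not the full file).
cat > compose-shard1.yml <<'EOF'
    environment:
      MONGO_INITDB_REPLICA_SET_NAME: shard1
    command:
      - mongod --shardsvr --replSet shard1 --port 27181
EOF
# Rename only the replica-set name, leaving ports and volume paths alone.
sed -e 's/REPLICA_SET_NAME: shard1/REPLICA_SET_NAME: shard2/' \
    -e 's/--replSet shard1/--replSet shard2/' \
    compose-shard1.yml > compose-shard2.yml
cat compose-shard2.yml
```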

Initialize the replica set

docker exec -it mongo-shard1 mongosh --port 27181
use admin
rs.initiate()
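A bare rs.initiate() registers the first member under the container's hostname, which causes trouble when adding shards later. A sketch (hypothetical file name; IPs and priorities from the tables above) that writes an explicit config instead, so every member is known by its host IP — feed it to mongosh via docker exec:

```shell
cd "$(mktemp -d)"
# Explicit replica-set config: members registered by host IP, not container hostname.
cat > init-shard1.js <<'EOF'
rs.initiate({
  _id: "shard1",
  members: [
    { _id: 0, host: "192.168.142.157:27181", priority: 1 },
    { _id: 1, host: "192.168.142.157:27182", priority: 2 },
    { _id: 2, host: "192.168.142.157:27183", priority: 3 }
  ]
})
EOF
# Usage (on the host running the containers):
#   docker exec -i mongo-shard1 mongosh --port 27181 < init-shard1.js
wc -l init-shard1.js
```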

Create the root user

db.createUser({user:"root",pwd:"123456",roles:[{role:"root",db:"admin"}]})

Authenticate as root

db.auth("root","123456")

Add the other members

rs.add({host:"192.168.142.157:27182",priority:2})
rs.add({host:"192.168.142.157:27183",priority:3})

Check the replica-set status

rs.status()
{
  set: 'shard1',
  date: ISODate('2024-10-15T03:25:48.706Z'),
  myState: 1,
  term: Long('2'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    lastCommittedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    appliedOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    durableOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
    lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1728962730, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'priorityTakeover',
    lastElectionDate: ISODate('2024-10-15T03:21:50.316Z'),
    electionTerm: Long('2'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1728962500, i: 1 }), t: Long('1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1728962500, i: 1 }), t: Long('1') },
    numVotesNeeded: 2,
    priorityAtElection: 2,
    electionTimeoutMillis: Long('10000'),
    priorPrimaryMemberId: 0,
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-10-15T03:21:50.320Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-10-15T03:21:50.327Z')
  },
  members: [
    {
      _id: 0,
      name: '4590140ce686:27181',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 250,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      optimeDurableDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastHeartbeat: ISODate('2024-10-15T03:25:47.403Z'),
      lastHeartbeatRecv: ISODate('2024-10-15T03:25:47.403Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.142.157:27182',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    },
    {
      _id: 1,
      name: '192.168.142.157:27182',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 435,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1728962510, i: 1 }),
      electionDate: ISODate('2024-10-15T03:21:50.000Z'),
      configVersion: 5,
      configTerm: 2,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: '192.168.142.157:27183',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 7,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      optimeDurableDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastHeartbeat: ISODate('2024-10-15T03:25:47.405Z'),
      lastHeartbeatRecv: ISODate('2024-10-15T03:25:47.906Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.142.157:27182',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    }
  ],
  ok: 1
}

Set up the config server replica set

The steps are much the same as above; only the docker-compose.yml is shown.

services:
  mongo-config1:
    image: mongo:7.0
    container_name: mongo-config1
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr1/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27281:27281"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr  --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27281
  mongo-config2:
    image: mongo:7.0
    container_name: mongo-config2
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr2/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27282:27282"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr  --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27282
  mongo-config3:
    image: mongo:7.0
    container_name: mongo-config3
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr3/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27283:27283"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr  --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27283

Set up the mongos routers

The steps are much the same as above; only the docker-compose.yml is shown. (Note that all mongos instances must point --configdb at the same config server replica set in order to join one and the same cluster.)

services:
  mongo-mongos1:
    image: mongo:7.0
    container_name: mongo-mongos1
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos1/data:/data/db
      - /docker/mongo-zone/mongos/mongos1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27381:27381"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config1/192.168.142.157:27281,192.168.142.157:27282,192.168.142.157:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27381
  mongo-mongos2:
    image: mongo:7.0
    container_name: mongo-mongos2
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos2/data:/data/db
      - /docker/mongo-zone/mongos/mongos2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27382:27382"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config2/192.168.142.155:27281,192.168.142.155:27282,192.168.142.155:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27382
  mongo-mongos3:
    image: mongo:7.0
    container_name: mongo-mongos3
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos3/data:/data/db
      - /docker/mongo-zone/mongos/mongos3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27383:27383"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config3/192.168.142.156:27281,192.168.142.156:27282,192.168.142.156:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27383

mongos does not need a separately generated keyfile; copy over the one used by the config servers. Be sure to use the config servers' keyfile, or you will not be able to log in.

docker exec -it mongo-mongos1 mongosh --port 27381 -u root -p 123456 --authenticationDatabase admin
use admin

If no user exists yet, create one the same way as above.

db.auth("root","123456")

Add the shards

sh.addShard("shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183")
sh.addShard("shard3/192.168.142.156:27181,192.168.142.156:27182,192.168.142.156:27183")
sh.addShard("shard2/192.168.142.155:27181,192.168.142.155:27182,192.168.142.155:27183")

At this point you may hit an error: host 192.168.142.157:27181 is reported as not belonging to replica set shard1,
even though it clearly is a member of shard1.

[direct: mongos] admin> sh.addShard("shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183")
MongoServerError[OperationFailed]: in seed list shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183, host 192.168.142.157:27181 does not belong to replica set shard1; found { compression: [ "snappy", "zstd", "zlib" ], topologyVersion: { processId: ObjectId('670e225373d36364f75d8336'), counter: 7 }, hosts: [ "b170b4e78bc6:27181", "192.168.142.157:27182", "192.168.142.157:27183" ], setName: "shard1", setVersion: 5, isWritablePrimary: true, secondary: false, primary: "192.168.142.157:27183", me: "192.168.142.157:27183", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1728984093, 1), t: 3 }, lastWriteDate: new Date(1728984093000), majorityOpTime: { ts: Timestamp(1728984093, 1), t: 3 }, majorityWriteDate: new Date(1728984093000) }, isImplicitDefaultMajorityWC: true, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1728984102377), logicalSessionTimeoutMinutes: 30, connectionId: 57, minWireVersion: 0, maxWireVersion: 21, readOnly: false, ok: 1.0, $clusterTime: { clusterTime: Timestamp(1728984093, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configTime: Timestamp(0, 1), $topologyTime: Timestamp(0, 1), operationTime: Timestamp(1728984093, 1) }

The root cause: the replica set was initialized with a bare rs.initiate(), so the first member registered itself under its container hostname (b170b4e78bc6:27181 in the hosts array above) instead of the host IP.

You can either use that opaque hostname in the seed list, or rename the member. Renaming is straightforward — remove the member and re-add it under the IP — but I'll skip it here for brevity.
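For reference, the rename route can also be done in place with rs.reconfig() instead of removing and re-adding the member — a sketch (hypothetical file name; run it against the shard's PRIMARY):

```shell
cd "$(mktemp -d)"
# Point the first member's address back at the host IP instead of the container hostname.
cat > rename-member.js <<'EOF'
cfg = rs.conf()
cfg.members[0].host = "192.168.142.157:27181"
rs.reconfig(cfg)
EOF
# Usage:
#   docker exec -i mongo-shard1 mongosh --port 27181 -u root -p 123456 < rename-member.js
cat rename-member.js
```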

Re-adding with the container hostnames:

sh.addShard("shard1/b170b4e78bc6:27181,192.168.142.157:27182,192.168.142.157:27183")
sh.addShard("shard3/cbfa7ed4415f:27181,192.168.142.156:27182,192.168.142.156:27183")
sh.addShard("shard2/444e6ad7d88c:27181,192.168.142.155:27182,192.168.142.155:27183")

Check the shard status

sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId('670e2ed1c3ccdfa3427b6b97') }
---
shards
[
  {
    _id: 'shard1',
    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728984938, i: 3 })
  },
  {
    _id: 'shard2',
    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985069, i: 1 })
  },
  {
    _id: 'shard3',
    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985021, i: 3 })
  }
]
---
active mongoses
[ { '7.0.14': 3 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
  'Currently enabled': 'yes',
  'Currently running': 'no',
  'Failed balancer rounds in last 5 attempts': 0,
  'Migration Results for the last 24 hours': 'No recent migrations'
}
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard1', nChunks: 1 } ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      }
    }
  }
]

Pay particular attention to the shards section:

shards
[
  {
    _id: 'shard1',
    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728984938, i: 3 })
  },
  {
    _id: 'shard2',
    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985069, i: 1 })
  },
  {
    _id: 'shard3',
    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985021, i: 3 })
  }
]

If all members are listed, the sharded cluster is complete.

Verification

Database sharding configuration

Note: run all of these commands on a mongos.

use test

Enable sharding for the database:

sh.enableSharding("test")

Result:

{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985516, i: 9 }),
    signature: {
      hash: Binary.createFromBase64('QWe6Dj8TwrM1aVVHmnOtihKsFm0=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985516, i: 3 })
}

Hash-shard the _id of the test collection in the test database. First create the hashed index:

db.test.createIndex({ "_id": "hashed" })

Result:

_id_hashed

Then shard the collection:

sh.shardCollection("test.test", {"_id": "hashed" })

{
  collectionsharded: 'test.test',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985594, i: 48 }),
    signature: {
      hash: Binary.createFromBase64('SqkMn9xNXjnsNfNd4WTFiHajLPc=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985594, i: 48 })
}

Enable balancing for the collection:

sh.enableBalancing("test.test")
{
  acknowledged: true,
  insertedId: null,
  matchedCount: 1,
  modifiedCount: 0,
  upsertedCount: 0
}

Start the balancer:

sh.startBalancer()
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985656, i: 4 }),
    signature: {
      hash: Binary.createFromBase64('jTVkQGDtAHtLTjhZkBc3CQx+tzM=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985656, i: 4 })
}

Create a user in the test database:

db.createUser({user:"shardtest",pwd:"shardtest",roles:[{role:'dbOwner',db:'test'}]})

Insert test data

for (i = 1; i <= 300; i=i+1){db.test.insertOne({'name': "test"})}

View detailed sharding information:

sh.status()

Result:

shardingVersion
{ _id: 1, clusterId: ObjectId('670e2ed1c3ccdfa3427b6b97') }
---
shards
[
  {
    _id: 'shard1',
    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728984938, i: 3 })
  },
  {
    _id: 'shard2',
    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985069, i: 1 })
  },
  {
    _id: 'shard3',
    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985021, i: 3 })
  }
]
---
active mongoses
[
  {
    _id: '3158a5543d69:27381',
    advisoryHostFQDNs: [],
    created: ISODate('2024-10-15T09:03:06.663Z'),
    mongoVersion: '7.0.14',
    ping: ISODate('2024-10-15T09:51:18.345Z'),
    up: Long('2891'),
    waiting: true
  },
  {
    _id: 'c5a08ca76189:27381',
    advisoryHostFQDNs: [],
    created: ISODate('2024-10-15T09:03:06.647Z'),
    mongoVersion: '7.0.14',
    ping: ISODate('2024-10-15T09:51:18.119Z'),
    up: Long('2891'),
    waiting: true
  },
  {
    _id: '5bb8b2925f52:27381',
    advisoryHostFQDNs: [],
    created: ISODate('2024-10-15T09:03:06.445Z'),
    mongoVersion: '7.0.14',
    ping: ISODate('2024-10-15T09:51:18.075Z'),
    up: Long('2891'),
    waiting: true
  }
]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
  'Currently enabled': 'yes',
  'Currently running': 'no',
  'Failed balancer rounds in last 5 attempts': 0,
  'Migration Results for the last 24 hours': 'No recent migrations'
}
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard1', nChunks: 1 } ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      }
    }
  },
  {
    database: {
      _id: 'test',
      primary: 'shard2',
      partitioned: false,
      version: {
        uuid: UUID('3b193276-e88e-42e1-b053-bcb61068a865'),
        timestamp: Timestamp({ t: 1728985516, i: 1 }),
        lastMod: 1
      }
    },
    collections: {
      'test.test': {
        shardKey: { _id: 'hashed' },
        unique: false,
        balancing: true,
        chunkMetadata: [
          { shard: 'shard1', nChunks: 2 },
          { shard: 'shard2', nChunks: 2 },
          { shard: 'shard3', nChunks: 2 }
        ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: Long('-6148914691236517204') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) },
          { min: { _id: Long('-6148914691236517204') }, max: { _id: Long('-3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 1 }) },
          { min: { _id: Long('-3074457345618258602') }, max: { _id: Long('0') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 2 }) },
          { min: { _id: Long('0') }, max: { _id: Long('3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 3 }) },
          { min: { _id: Long('3074457345618258602') }, max: { _id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },
          { min: { _id: Long('6148914691236517204') }, max: { _id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }
        ],
        tags: []
      }
    }
  }
]

Focus on the chunks:

chunks: [
          { min: { _id: MinKey() }, max: { _id: Long('-6148914691236517204') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) },
          { min: { _id: Long('-6148914691236517204') }, max: { _id: Long('-3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 1 }) },
          { min: { _id: Long('-3074457345618258602') }, max: { _id: Long('0') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 2 }) },
          { min: { _id: Long('0') }, max: { _id: Long('3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 3 }) },
          { min: { _id: Long('3074457345618258602') }, max: { _id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },
          { min: { _id: Long('6148914691236517204') }, max: { _id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }
        ],

Here you can clearly see the chunks laid out across shard1, shard2, and shard3.
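Those six boundaries are consistent with cutting the signed 64-bit hashed-key space into six equal ranges; a quick arithmetic check (the exact rounding is MongoDB-internal, but the numbers line up):

```shell
# floor(2^63 / 3) without overflowing signed 64-bit arithmetic:
# 2^63 = 6q + 2 where q = floor(2^62 / 3), so floor(2^63 / 3) = 2q.
step=$(( 2 * ((1 << 62) / 3) ))
echo "split points: $((-2*step)) $((-step)) 0 $step $((2*step))"
```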

View the collection's data distribution across shards:

db.test.getShardDistribution()
Shard shard2 at shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181
{
  data: '3KiB',
  docs: 108,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 54
}
---
Shard shard1 at shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181
{
  data: '3KiB',
  docs: 89,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 44
}
---
Shard shard3 at shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181
{
  data: '3KiB',
  docs: 103,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 51
}
---
Totals
{
  data: '10KiB',
  docs: 300,
  chunks: 6,
  'Shard shard2': [ '36 % data', '36 % docs in cluster', '37B avg obj size on shard' ],
  'Shard shard1': [
    '29.66 % data',
    '29.66 % docs in cluster',
    '37B avg obj size on shard'
  ],
  'Shard shard3': [
    '34.33 % data',
    '34.33 % docs in cluster',
    '37B avg obj size on shard'
  ]
}

The three shards hold roughly equal portions of the data.
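The percentages in Totals follow from the per-shard document counts, truncated to two decimal places as in the 29.66 % and 34.33 % figures above — a quick check:

```shell
# Percent of the 300 documents per shard, truncated like the output above.
for docs in 108 89 103; do
  awk -v d="$docs" 'BEGIN { printf "%d docs -> %.2f %% of 300\n", d, int(d / 300 * 10000) / 100 }'
done
```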

Check the sharding status

db.printShardingStatus()

Disable balancing for a collection

sh.disableBalancing("test.test")

Result:

{
  acknowledged: true,
  insertedId: null,
  matchedCount: 1,
  modifiedCount: 1,
  upsertedCount: 0
}

This concludes the walkthrough of deploying a MongoDB sharded cluster with docker compose.
