
This tutorial outlines the process for deploying a replica set with members in multiple locations. The tutorial addresses three-member sets, four-member sets, and sets with more than four members.

For appropriate background, see Replica Set Fundamental Concepts and Replica Set Architectures and Deployment Patterns. For related tutorials, see Deploy a Replica Set and Add Members to a Replica Set.

Overview

While replica sets provide basic protection against single-instance failure, when all of the members of a replica set reside in a single facility, the replica set is still susceptible to some classes of errors in that facility, including power outages, network interruptions, and natural disasters. To protect against these classes of failures, deploy a replica set with one or more members in a geographically distinct facility or data center.

Requirements

For a three-member replica set you need two instances in a primary facility (hereafter, “Site A”) and one member in a secondary facility (hereafter, “Site B”). Site A should be the same facility as, or very close to, your primary application infrastructure (i.e. application servers, caching layer, users, etc.).

For a four-member replica set you need two members in Site A, two members in Site B (or one member in Site B and one member in Site C), and a single arbiter in Site A.

For replica sets with additional members in the secondary facility or with multiple secondary facilities, the requirements are the same as above but with the following notes:

  • Ensure that a majority of the voting members are within Site A. This includes secondary-only members and arbiters. For more information on the need to keep the voting majority on one site, see Replica Set Elections and Network Partitions. (A snippet for checking each member's votes appears after this list.)
  • If you deploy a replica set with an even number of members, deploy an arbiter in Site A so that the set has an odd number of voting members. The arbiter must be in Site A to keep the majority there.
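
To verify where the voting majority lies, you can inspect each member's vote count from a mongo shell connected to the set. The following is a minimal sketch; it simply prints each host with its votes, and the mapping of hosts to sites is something you track yourself:

    cfg = rs.conf()
    cfg.members.forEach(function (m) {
        // votes defaults to 1 when the field is not set explicitly
        var votes = (m.votes === undefined) ? 1 : m.votes
        print(m.host + " votes: " + votes)
    })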

For all configurations in this tutorial, deploy each replica set member on a separate system. Although you may deploy more than one replica set member on a single system, doing so reduces the redundancy and capacity of the replica set. Such deployments are typically for testing purposes and beyond the scope of this tutorial.

Procedures

Deploy a Distributed Three-Member Replica Set

A geographically distributed three-member deployment has the following features:

  • Each member of the replica set resides on its own machine, and the MongoDB processes all bind to port 27017, which is the standard MongoDB port.

  • Each member of the replica set must be accessible by way of resolvable DNS or hostnames in the following scheme:

    • mongodb0.example.net
    • mongodb1.example.net
    • mongodb2.example.net

    Configure DNS names appropriately, or set up your systems’ /etc/hosts file to reflect this configuration. Ensure that one system (e.g. mongodb2.example.net) resides in Site B. Host all other systems in Site A.

  • Ensure that network traffic can pass between all members in the network securely and efficiently. Consider the following:

    • Establish a virtual private network between the systems in Site A and Site B so that all traffic between the sites is encrypted and remains private. Ensure that your network topology routes all traffic between members within a single site over the local area network.

    • Configure authentication using auth and keyFile, so that only servers and processes with the proper credentials can connect to the replica set. (A sample configuration appears after this list.)

    • Configure networking and firewall rules so that incoming and outgoing packets on the default MongoDB port (e.g. 27017) are permitted only from within your deployment.

      See also

      For more information on security and firewalls, see Security Considerations for Replica Sets.

  • Specify run-time configuration on each system in a configuration file stored in /etc/mongodb.conf or in a related location. Do not specify run-time configuration through command line options.

    For each MongoDB instance, use the following configuration, with values appropriate to your systems:

    port = 27017
    bind_ip = 10.8.0.10
    dbpath = /srv/mongodb/
    fork = true
    replSet = rs0/mongodb0.example.net,mongodb1.example.net,mongodb2.example.net
    

    Modify bind_ip to reflect a secure interface on your system that can reach, and be reached by, all other members of the replica set. The DNS or host names must resolve to this IP address. Configure network rules or a virtual private network (i.e. “VPN”) to permit this access.

    Note

    The portion of the replSet following the / provides a “seed list” of known members of the replica set. mongod uses this list to fetch configuration changes following restarts. It is acceptable to omit this section entirely, and have the replSet option resemble:

    replSet = rs0
    

    For more documentation on the above run-time configurations, as well as additional configuration options, see Configuration File Options.
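
For example, to enable the authentication described above, each member's configuration file might include lines like the following. This is a minimal sketch: the key file path is hypothetical, and the same key file content must be present on every member:

    auth = true
    keyFile = /srv/mongodb/keyfile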

To deploy a geographically distributed three-member set:

  1. On each system start the mongod process by issuing a command similar to the following:

    mongod --config /etc/mongodb.conf
    

    Note

    In production deployments you will likely want to configure a control script to manage this process based on this command. Control scripts are beyond the scope of this document.

  2. Open a mongo shell connected to one of the mongod instances:

    mongo
    
  3. Use the rs.initiate() method on one member to initiate a replica set consisting of the current member and using the default configuration:

    rs.initiate()
    
  4. Display the current replica configuration:

    rs.conf()
    
  5. Add the remaining members to the replica set by issuing a sequence of commands similar to the following. The example commands assume the current primary is mongodb0.example.net:

    rs.add("mongodb1.example.net")
    rs.add("mongodb2.example.net")
    
  6. Make sure that you have configured the member located in Site B (i.e. mongodb2.example.net) as a secondary-only member:

    1. Issue the following command to determine the _id value for mongodb2.example.net:

      rs.conf()
      
    2. In the members array, note the _id value of mongodb2.example.net. The example in the next step assumes this value is 2.

    3. In the mongo shell connected to the replica set’s primary, issue a command sequence similar to the following:

      cfg = rs.conf()
      cfg.members[2].priority = 0
      rs.reconfig(cfg)
      

      Note

      In some situations, the rs.reconfig() shell method can force the current primary to step down, which causes an election. When the primary steps down, all clients will disconnect. This is the intended behavior. While this typically takes only 10-20 seconds, attempt to make these changes during scheduled maintenance periods.

    After these commands return, you have a geographically distributed three-member replica set.

  7. To check the status of your replica set, issue rs.status().
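
For a quicker summary than the full rs.status() output, the following minimal sketch prints each member's host and replication state, using the name and stateStr fields of the rs.status() document:

    rs.status().members.forEach(function (m) {
        // stateStr is e.g. "PRIMARY", "SECONDARY", or "ARBITER"
        print(m.name + " " + m.stateStr)
    })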

See also

The documentation of the following shell functions for more information: rs.initiate(), rs.conf(), rs.reconfig(), and rs.add().

Deploy a Distributed Four-Member Replica Set

A geographically distributed four-member deployment has the following features:

  • Each member of the replica set, except for the arbiter (see below), resides on its own machine, and the MongoDB processes all bind to port 27017, which is the standard MongoDB port.

  • Each member of the replica set must be accessible by way of resolvable DNS or hostnames in the following scheme:

    • mongodb0.example.net
    • mongodb1.example.net
    • mongodb2.example.net
    • mongodb3.example.net

    Configure DNS names appropriately, or set up your systems’ /etc/hosts file to reflect this configuration. Ensure that the systems designated for Site B (e.g. mongodb2.example.net and mongodb3.example.net in the first architecture below) reside in Site B. Host all other systems in Site A.

  • One host (e.g. mongodb4.example.net) will be an arbiter and can run on a system that is also used for an application server or some other shared purpose.

  • There are three possible architectures for this replica set:

    • Two members in Site A, two secondary-only members in Site B, and an arbiter in Site A.
    • Three members in Site A and one secondary-only member in Site B.
    • Two members in Site A, one secondary-only member in Site B, one secondary-only member in Site C, and an arbiter in Site A.

    In most cases the first architecture is preferable because it is the least complex; a sketch of its configuration document appears after this list.

  • Ensure that network traffic can pass between all members in the network securely and efficiently. Consider the following:

    • Establish a virtual private network between the systems in Site A and Site B (and Site C if it exists) so that all traffic between the sites is encrypted and remains private. Ensure that your network topology routes all traffic between members within a single site over the local area network.

    • Configure authentication using auth and keyFile, so that only servers and processes with the proper credentials can connect to the replica set (a sample configuration appears in the three-member procedure above).

    • Configure networking and firewall rules so that incoming and outgoing packets on the default MongoDB port (e.g. 27017) are permitted only from within your deployment.

      See also

      For more information on security and firewalls, see Security Considerations for Replica Sets.

  • Specify run-time configuration on each system in a configuration file stored in /etc/mongodb.conf or in a related location. Do not specify run-time configuration through command line options.

    For each MongoDB instance, use the following configuration, with values appropriate to your systems:

    port = 27017
    bind_ip = 10.8.0.10
    dbpath = /srv/mongodb/
    fork = true
    replSet = rs0/mongodb0.example.net,mongodb1.example.net,mongodb2.example.net,mongodb3.example.net
    

    Modify bind_ip to reflect a secure interface on your system that can reach, and be reached by, all other members of the replica set. The DNS or host names must resolve to this IP address. Configure network rules or a virtual private network (i.e. “VPN”) to permit this access.

    Note

    The portion of the replSet following the / provides a “seed list” of known members of the replica set. mongod uses this list to fetch configuration changes following restarts. It is acceptable to omit this section entirely, and have the replSet option resemble:

    replSet = rs0
    

    For more documentation on the above run-time configurations, as well as additional configuration options, see Configuration File Options.
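
For reference, the first (preferred) architecture corresponds to a replica set configuration document like the following sketch. It assumes the hostnames used in this tutorial, with the arbiter on mongodb4.example.net; your _id values may differ, so confirm them with rs.conf():

    {
        "_id" : "rs0",
        "members" : [
            { "_id" : 0, "host" : "mongodb0.example.net" },                       // Site A
            { "_id" : 1, "host" : "mongodb1.example.net" },                       // Site A
            { "_id" : 2, "host" : "mongodb2.example.net", "priority" : 0 },       // Site B
            { "_id" : 3, "host" : "mongodb3.example.net", "priority" : 0 },       // Site B
            { "_id" : 4, "host" : "mongodb4.example.net", "arbiterOnly" : true }  // Site A
        ]
    }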

To deploy a geographically distributed four-member set:

  1. On each system start the mongod process by issuing a command similar to the following:

    mongod --config /etc/mongodb.conf
    

    Note

    In production deployments you will likely want to configure a control script to manage this process based on this command. Control scripts are beyond the scope of this document.

  2. Open a mongo shell connected to one of the mongod instances:

    mongo
    
  3. Use rs.initiate() to initiate a replica set consisting of the current member and using the default configuration:

    rs.initiate()
    
  4. Display the current replica configuration:

    rs.conf()
    
  5. Add the remaining members to the replica set by issuing a sequence of commands similar to the following. The example commands assume the current primary is mongodb0.example.net:

    rs.add("mongodb1.example.net")
    rs.add("mongodb2.example.net")
    rs.add("mongodb3.example.net")
    
  6. In the same shell session, issue the following command to add the arbiter (e.g. mongodb4.example.net):

    rs.addArb("mongodb4.example.net")
    
  7. Make sure that you have configured each member located in Site B (e.g. mongodb2.example.net and mongodb3.example.net) as a secondary-only member (a sketch for reconfiguring both members at once appears after this procedure):

    1. Issue the following command to determine the _id value for the member:

      rs.conf()
      
    2. In the members array, note the _id value of the member. The example in the next step assumes this value is 2.

    3. In the mongo shell connected to the replica set’s primary, issue a command sequence similar to the following:

      cfg = rs.conf()
      cfg.members[2].priority = 0
      rs.reconfig(cfg)
      

      Note

      In some situations, the rs.reconfig() shell method can force the current primary to step down, which causes an election. When the primary steps down, all clients will disconnect. This is the intended behavior. While this typically takes only 10-20 seconds, attempt to make these changes during scheduled maintenance periods.

    After these commands return, you have a geographically distributed four-member replica set.

  8. To check the status of your replica set, issue rs.status().
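
If both Site B members still have the default priority, you can set them to secondary-only in a single reconfiguration. A minimal sketch, assuming their positions in the members array are 2 and 3 (verify with rs.conf() first):

    cfg = rs.conf()
    cfg.members[2].priority = 0  // e.g. mongodb2.example.net in Site B
    cfg.members[3].priority = 0  // e.g. mongodb3.example.net in Site B
    rs.reconfig(cfg)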

See also

The documentation of the following shell functions for more information: rs.initiate(), rs.conf(), rs.reconfig(), rs.add(), and rs.addArb().

Deploy a Distributed Set with More than Four Members

The procedure for deploying a geographically distributed set with more than four members is similar to the above procedures, with the following differences:

  • Never deploy more than seven voting members.
  • Use the procedure for a four-member set if you have an even number of members (see Deploy a Distributed Four-Member Replica Set). Ensure that Site A always has a majority of the members by deploying the arbiter within Site A. For six-member sets, deploy at least three voting members in addition to the arbiter in Site A, and deploy the remaining members in alternate sites (see the sketch after this list).
  • Use the procedure for a three-member set if you have an odd number of members (see Deploy a Distributed Three-Member Replica Set). Ensure that Site A always has a majority of the members of the set. For example, if a set has five members, deploy three members within the primary facility and two members in other facilities.
  • If you have a majority of the members of the set outside of Site A and a network partition prevents communication between sites, the current primary in Site A will step down, even if none of the members outside of Site A are eligible to become primary.
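
As an illustration of the six-member case above, the following sketch shows a set with seven voting members, four of which (three data-bearing members plus the arbiter) are in Site A. The hostnames are hypothetical, and the members outside Site A are made secondary-only following the pattern used throughout this tutorial:

    {
        "_id" : "rs0",
        "members" : [
            { "_id" : 0, "host" : "mongodb0.example.net" },                       // Site A
            { "_id" : 1, "host" : "mongodb1.example.net" },                       // Site A
            { "_id" : 2, "host" : "mongodb2.example.net" },                       // Site A
            { "_id" : 3, "host" : "mongodb3.example.net", "priority" : 0 },       // Site B
            { "_id" : 4, "host" : "mongodb4.example.net", "priority" : 0 },       // Site B
            { "_id" : 5, "host" : "mongodb5.example.net", "priority" : 0 },       // Site C
            { "_id" : 6, "host" : "mongodb6.example.net", "arbiterOnly" : true }  // Site A
        ]
    }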