There is no single ideal replica set architecture for every deployment or environment. Indeed the flexibility of replica sets might be their greatest strength. This document describes the most commonly used deployment patterns for replica sets. The descriptions are necessarily not mutually exclusive, and you can combine features of each architecture in your own deployment.
For an overview of operational practices and background information, see the Architectures topic in the Replica Set Fundamental Concepts document.
The minimum recommended architecture for a replica set consists of:
One primary and
Two secondary members, either of which can become the primary at any time.
This makes failover possible and ensures that two full and independent copies of the data set exist at all times. If the primary fails, the replica set elects another member as primary and continues replication until the former primary recovers.
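A minimal sketch of initiating such a three-member set from the mongo shell follows; the replica set name and hostnames are placeholders for your own:

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongodb0.example.net:27017" },
        { _id: 1, host: "mongodb1.example.net:27017" },
        { _id: 2, host: "mongodb2.example.net:27017" }
      ]
    })

Run this against one of the three mongod instances; the set then elects a primary from among the members.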
Note
While not recommended, the minimum supported configuration for replica sets includes one primary, one secondary, and one arbiter. The arbiter requires fewer resources and lowers costs but sacrifices operational flexibility and redundancy.
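As a hedged sketch of that minimum supported configuration (hostnames again placeholders), the arbiterOnly flag marks the third member as a voting-only arbiter that holds no data:

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongodb0.example.net:27017" },
        { _id: 1, host: "mongodb1.example.net:27017" },
        { _id: 2, host: "arbiter.example.net:27017", arbiterOnly: true }
      ]
    })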
To increase redundancy or to provide additional resources for distributing secondary read operations, you can add additional members to a replica set.
When adding additional members, ensure the following architectural conditions are true:
The set has an odd number of voting members.
If you have an even number of voting members, deploy an arbiter to create an odd number.
The set has no more than 7 voting members at a time.
Members that cannot function as primary in a failover have their priority values set to 0.
If a member cannot function as primary because of resource or network latency constraints, a priority value of 0 prevents it from becoming primary; any member with a priority value greater than 0 is eligible to become primary. See the sketch after this list.
A majority of the set’s members operate in the main data center.
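As a sketch of the priority rule above (the member index 2 is a placeholder for whichever member should never become primary), set the value through a reconfiguration from the mongo shell, run on the current primary:

    cfg = rs.conf()              // fetch the current replica set configuration
    cfg.members[2].priority = 0  // this member can no longer be elected primary
    rs.reconfig(cfg)             // apply the new configuration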
A geographically distributed replica set provides data recovery should one data center fail. These sets include at least one member in a secondary data center. The member has its priority set to 0 to prevent the member from ever becoming primary.
In many circumstances, these deployments behave as follows:
If the primary is unavailable, the replica set will elect a new primary from the primary data center.
If the connection between the primary and secondary data centers fails, the member in the secondary center cannot independently become the primary.
If the primary data center fails, you can manually recover the data set from the secondary data center. With appropriate write concern there will be no data loss and downtime can be minimal.
When you add a secondary data center, keep an odd number of members overall to prevent ties during elections for primary. For example, if you have three members in the primary data center and add one member in a secondary center, the set has an even number of members; to restore an odd number and prevent ties, deploy an arbiter in your primary data center.
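A sketch of that example, assuming three data-bearing members and an arbiter in the primary data center (dc1) and one member in the secondary data center (dc2); all hostnames are placeholders:

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "dc1-a.example.net:27017" },
        { _id: 1, host: "dc1-b.example.net:27017" },
        { _id: 2, host: "dc1-c.example.net:27017" },
        { _id: 3, host: "dc2-a.example.net:27017", priority: 0 },        // never becomes primary
        { _id: 4, host: "dc1-arb.example.net:27017", arbiterOnly: true } // keeps the vote count odd
      ]
    })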
In some cases it may be useful to maintain a member that has an always up-to-date copy of the entire data set but that cannot become primary. You might create such a member to provide backups, to support reporting operations, or to act as a cold standby. Such members fall into one or more of the categories described below.
Note
All members of a replica set vote in elections except for non-voting members. Priority, hidden, or delayed status does not affect a member’s ability to vote in an election.
For some deployments, keeping a replica set member for dedicated backup purposes is operationally advantageous. Ensure this member is close, from a networking perspective, to the primary or likely primary, and that its replication lag is minimal or non-existent. For this purpose you can create a dedicated hidden member (see Hidden Replica Set Nodes).
If this member runs with journaling enabled, you can safely use standard block-level backup methods to create a backup of this member. Otherwise, or if your underlying system does not support snapshots, you can use mongodump to create a backup directly from the secondary member. In these cases, use the --oplog option to ensure a consistent point-in-time dump of the database state.
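As a sketch (host and output directory are placeholders), such a dump from the backup member might look like:

    mongodump --host mongodb2.example.net --port 27017 --oplog --out /backups/rs0

The --oplog option also captures oplog entries written during the dump, which makes a consistent point-in-time restore possible with mongorestore --oplogReplay.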
Delayed members are special mongod instances in a replica set that apply operations from the oplog on a delay to provide a running “historical” snapshot of the data set, or a rolling backup. Typically these members provide protection against human error, such as unintentionally deleted databases and collections or failed application upgrades or migrations.
Otherwise, delayed members function identically to secondary members, with the following operational differences: they are not eligible for election to primary and do not receive secondary queries. Delayed members do vote in elections for primary.
See Replica Set Delayed Nodes for more information about configuring delayed replica set members.
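A hedged sketch of configuring such a member (the member index and delay value are placeholders), run on the primary:

    cfg = rs.conf()
    cfg.members[3].priority = 0       // delayed members must not be electable as primary
    cfg.members[3].hidden = true      // keep the stale data invisible to clients
    cfg.members[3].slaveDelay = 3600  // stay one hour (3600 seconds) behind the primary
    rs.reconfig(cfg)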
Typically hidden members provide a substrate for reporting purposes, because the replica set segregates these instances from the cluster. Since no secondary reads reach hidden members, they receive no traffic beyond what replication requires. While hidden members are not electable as primary, they are still able to vote in elections for primary. If your operational requirements call for this kind of reporting functionality, see Hidden Replica Set Nodes and local.system.replset.members[n].hidden for more information.
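A sketch of hiding a member (the index is a placeholder); hidden members must also carry a priority of 0:

    cfg = rs.conf()
    cfg.members[4].priority = 0   // required for hidden members
    cfg.members[4].hidden = true  // clients and secondary reads never reach this member
    rs.reconfig(cfg)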
For some sets, it may not be possible to initialize a new member in a reasonable amount of time. In these situations, it may be useful to maintain a secondary member with an up-to-date copy for the purpose of replacing another member in the replica set. In most cases, these members can be ordinary members of the replica set, but in large sets, with varied hardware availability, or given some patterns of geographical distribution, you may want to use a member with a different priority, hidden, or voting status.
Cold standbys may be valuable when your primary and “hot standby” secondary members have different hardware specifications or connect via a different network than the main set. In these cases, deploy members with priority equal to 0 to ensure that they will never become primary. These members will vote in elections for primary but will never be eligible for election themselves. Consider likely failover scenarios, such as inter-site network partitions, and ensure there will be members eligible for election as primary and a quorum of voting members in the main facility.
Note
If your set already has 7 members, set the local.system.replset.members[n].votes value to 0 for these members, so that they won’t vote in elections.
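A sketch of removing a member's vote (the index is a placeholder):

    cfg = rs.conf()
    cfg.members[7].votes = 0  // keeps a full copy of the data but no longer votes
    rs.reconfig(cfg)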
See also Secondary Only and Hidden Nodes.
Deploy an arbiter to ensure that a replica set will have a sufficient number of members to elect a primary. While replica sets with two members are not recommended for production environments, if you have just two members, deploy an arbiter. Likewise, deploy an arbiter for any replica set with an even number of members.
To deploy an arbiter, see the Arbiters topic in the Replica Set Operation and Management document.
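As a minimal sketch (the hostname is a placeholder): start a mongod instance for the arbiter, then add it from the mongo shell on the primary:

    rs.addArb("arbiter.example.net:27017")  // adds a voting member that holds no data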