3.3. Configuring Clustering¶
3.3.1. Cluster Options¶
q
Sets the default number of shards for newly created databases. The default value, 2, splits a database into 2 separate partitions.
[cluster]
q = 2
For systems with only a few, heavily accessed, large databases, or for servers with many CPU cores, consider increasing this value to 4, 8 or 16.
The value of q can also be overridden on a per-DB basis, at DB creation time.
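The per-DB override is passed as a query parameter on the database-creation request. A minimal sketch of building such a request URL (the server address and the database name `invoices` are illustrative assumptions; the request is only constructed here, not sent):

```python
from urllib.parse import urlencode

def create_db_url(base, dbname, q=None, n=None):
    """Build the PUT URL for database creation with optional q/n overrides."""
    params = {}
    if q is not None:
        params["q"] = q  # shard count override for this database only
    if n is not None:
        params["n"] = n  # replica count override for this database only
    url = f"{base}/{dbname}"
    if params:
        url += "?" + urlencode(params)
    return url

print(create_db_url("http://localhost:5984", "invoices", q=8))
# http://localhost:5984/invoices?q=8
```

Sending a PUT to the resulting URL (with admin credentials) creates the database with 8 shards regardless of the configured default.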
n
Sets the number of replicas of each document in a cluster. CouchDB will only place one replica per node in a cluster. When set up through the Cluster Setup Wizard, a standalone single node will have n = 1, a two node cluster will have n = 2, and any larger cluster will have n = 3. It is recommended not to set n greater than 3.
[cluster]
n = 3
placement
Use of this option will override the n option for replica cardinality. Use with care.
Sets the cluster-wide replica placement policy when creating new databases. The value must be a comma-delimited list of strings of the format zone_name:#, where zone_name is a zone as specified in the nodes database and # is an integer indicating the number of replicas to place on nodes with a matching zone_name.
This parameter is not specified by default.
[cluster]
placement = metro-dc-a:2,metro-dc-b:1
seedlist
An optional, comma-delimited list of node names that this node should contact in order to join a cluster. If a seedlist is configured the _up endpoint will return a 404 until the node has successfully contacted at least one of the members of the seedlist and replicated an up-to-date copy of the _nodes, _dbs, and _users system databases.
[cluster]
seedlist = couchdb@node1.example.com,couchdb@node2.example.com
3.3.2. RPC Performance Tuning¶
CouchDB uses distributed Erlang to communicate between nodes in a cluster. The rexi library provides an optimized RPC mechanism over this communication channel. There are a few configuration knobs for this system, although in general the defaults work well.
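These knobs live in the [rexi] section of the configuration. Spelled out with their default values (as described below), a full section looks like:

```ini
[rexi]
buffer_count = 2000
server_per_node = true
stream_limit = 5
```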
buffer_count
The local RPC server will buffer messages if a remote node goes unavailable. This flag determines how many messages will be buffered before the local server starts dropping messages. Default value is 2000.
server_per_node
By default, rexi will spawn one local gen_server process for each node in the cluster. Disabling this flag will cause CouchDB to use a single process for all RPC communication, which is not recommended in high-throughput deployments.
stream_limit
New in version 3.0.
This flag comes into play during streaming operations like views and change feeds. It controls how many messages a remote worker process can send to a coordinator without waiting for an acknowledgement from the coordinator process. If this value is too large the coordinator can become overwhelmed by messages from the worker processes and actually deliver lower overall throughput to the client. In CouchDB 2.x this value was hard-coded to 10. In the 3.x series it is configurable and defaults to 5. Databases with a high q value are especially sensitive to this setting.
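To see why high-q databases are sensitive, a back-of-the-envelope sketch (assuming, for illustration only, one active streaming worker per shard range, each allowed stream_limit in-flight messages):

```python
# Rough upper bound on unacknowledged messages queued at a coordinator
# during a streaming operation (assumption: one worker per shard range).
def max_unacked(q, stream_limit):
    return q * stream_limit

# A q = 16 database under the 2.x hard-coded limit vs. the 3.x default:
print(max_unacked(16, 10))  # 160
print(max_unacked(16, 5))   # 80
```

The larger q is, the more workers stream concurrently, so a smaller per-worker limit keeps the coordinator's backlog bounded.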