11.1. Setup

Everything you need to know to prepare the cluster for the installation of CouchDB.

11.1.1. Firewall

If you do not have a firewall between your servers, then you can skip this.

CouchDB in cluster mode uses port 5984, just as in standalone mode, but it also uses port 5986 for node-local APIs.

Erlang uses TCP port 4369 (EPMD, the Erlang Port Mapper Daemon) to find other nodes, so all servers must be able to speak to each other on this port. In an Erlang cluster, all nodes are connected to all other nodes: a mesh.

Warning

If you expose the port 4369 to the Internet or any other untrusted network, then the only thing protecting you is the cookie.

Every Erlang application then uses other ports for talking to each other. Yes, this means randomly chosen ports. This will obviously not work with a firewall, but it is possible to force an Erlang application to use a specific port range.

This documentation will use the TCP port range 9100-9200. Open up those ports in your firewalls, and then it is time to test.

You need two servers with working hostnames. Let us call them server1 and server2.

On server1:

erl -sname bus -setcookie 'brumbrum' -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200

Then on server2:

erl -sname car -setcookie 'brumbrum' -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200
An explanation of the commands:
  • erl: the Erlang shell.
  • -sname bus: the name of the Erlang node.
  • -setcookie 'brumbrum': the “password” used when nodes connect to each other.
  • -kernel inet_dist_listen_min 9100: the lowest port in the range.
  • -kernel inet_dist_listen_max 9200: the highest port in the range.

This gives us two Erlang shells: shell1 on server1 and shell2 on server2. Time to connect them. Note the trailing full stop: the . is to Erlang what ; is to C.

In shell1:

net_kernel:connect_node(car@server2).

This will connect to the node called car on the server called server2.

If that returns true, then you have an Erlang cluster and the firewalls are open. If you get false, or nothing at all, then you have a problem with the firewall.
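If the connection fails, you can narrow down which port is blocked by probing the ports directly over TCP from server1. This is a minimal sketch, assuming bash (for its /dev/tcp pseudo-device) and the coreutils timeout command; the hostname is a placeholder for your own server:

```shell
#!/usr/bin/env bash
# Probe the Erlang ports directly over TCP. The hostname is a placeholder;
# run this from server1 against server2, and vice versa.
HOST="${1:-server2}"

probe() {
    # bash's /dev/tcp pseudo-device attempts a plain TCP connect
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 reachable"
    else
        echo "$1:$2 unreachable"
    fi
}

probe "$HOST" 4369   # EPMD
probe "$HOST" 9100   # lowest port of the distribution range
probe "$HOST" 9200   # highest port of the distribution range
```

Port 4369 should answer whenever EPMD is running on the target. A port in the 9100-9200 range only answers while an Erlang node is actually bound to it, so “unreachable” there may simply mean nothing is listening on that particular port yet.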

First time in Erlang? Time to play!

Run in both shells:

register(shell, self()).

shell1:

{shell, car@server2} ! {hello, from, self()}.

shell2:

flush().
{shell, bus@server1} ! {"It speaks!", from, self()}.

shell1:

flush().

To close the shells, run in both:

q().

Make CouchDB use the open ports.

Open sys.config, on all nodes, and add {inet_dist_listen_min, 9100} and {inet_dist_listen_max, 9200} in a kernel section, like below. Note that these are kernel application parameters, matching the -kernel flags used in the erl commands above, so they belong in their own kernel tuple rather than inside the lager configuration:

[
    {lager, [
        {error_logger_hwm, 1000},
        {error_logger_redirect, true},
        {handlers, [
            {lager_console_backend, [debug, {
                lager_default_formatter,
                [
                    date, " ", time,
                    " [", severity, "] ",
                    node, " ", pid, " ",
                    message,
                    "\n"
                ]
            }]}
        ]}
    ]},
    {kernel, [
        {inet_dist_listen_min, 9100},
        {inet_dist_listen_max, 9200}
    ]}
].
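Depending on your CouchDB release and packaging, the Erlang VM may read its flags from a vm.args file instead of (or in addition to) sys.config; this is an assumption about your particular install, so check which file your startup script actually passes to the VM. The equivalent vm.args lines, with the same values as above, would be:

```
-kernel inet_dist_listen_min 9100
-kernel inet_dist_listen_max 9200
```

These mirror the -kernel flags used in the erl test commands earlier.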

11.1.2. The Cluster Setup Wizard

Setting up a cluster of Erlang applications correctly can be a daunting task. Luckily, CouchDB 2.0 comes with a convenient Cluster Setup Wizard as part of the Fauxton web administration interface.

After installation and initial startup, visit Fauxton at http://127.0.0.1:5984/_utils#setup. You will be asked to set up CouchDB as a single-node instance or set up a cluster.

When you click “setup cluster” you are asked for admin credentials again, and then to add nodes by IP address. To get more nodes, go through the same install procedure on each of the other machines. Be sure to specify the total number of nodes you expect to add to the cluster before adding nodes.

Before you can add nodes to form a cluster, you have to make them listen on a public IP address and set up an admin user. Do this once per node:

curl -X PUT http://127.0.0.1:5984/_node/couchdb@<this-nodes-ip-address>/_config/admins/admin -d '"password"'
curl -X PUT http://127.0.0.1:5984/_node/couchdb@<this-nodes-ip-address>/_config/chttpd/bind_address -d '"0.0.0.0"'

Now you can enter their IP addresses in the setup screen on your first node. Make sure to put in the admin username and password, and use the same admin username and password on all nodes.

Once you have added all nodes, click “Setup” and Fauxton will finish the cluster configuration for you.

See http://127.0.0.1:5984/_membership to get a list of all the nodes in your cluster.

Now your cluster is ready and available. You can send requests to any one of the nodes and get to all the data.

For a proper production setup, you would now set up an HTTP proxy in front of the nodes that does load balancing. We recommend HAProxy; see our example configuration for HAProxy. All you need to do is adjust the IP addresses and ports.

11.1.3. The Cluster Setup API

If you would prefer to configure your CouchDB cluster manually, CouchDB exposes the _cluster_setup endpoint for that. After installation and initial setup, run the following command on each node to set it up:

curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "node_count":"3"}'

After that, we can join all the nodes together. Choose one node as the “setup coordination node” and run all of the following commands on it. The setup coordination node manages the setup, and it requires all other nodes to be able to see it, and vice versa; setup will not work with unavailable nodes. The coordination role exists only during setup: once setup is finished, the cluster no longer has a “setup coordination node”. To add a node, run these two commands:

curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "port": 15984, "node_count": "3", "remote_node": "<remote-node-ip>", "remote_current_user": "<remote-node-username>", "remote_current_password": "<remote-node-password>" }'
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "add_node", "host":"<remote-node-ip>", "port": "<remote-node-port>", "username": "admin", "password":"password"}'

This will join the two nodes together. Repeat the above commands for each node you want to add to the cluster. Once this is done, run the following command to complete the setup and create the missing system databases:
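The per-node join step lends itself to a small loop. This is a sketch, not part of the official setup procedure: the node IPs, credentials, and ports below are placeholders to substitute with your own values. It only generates the JSON payloads for the two calls shown above; pipe each payload into the corresponding curl command with -d @- to actually perform the join.

```shell
#!/usr/bin/env bash
# Sketch: generate the two _cluster_setup payloads for every node you want
# to join. IPs and credentials are placeholders -- substitute your own.

enable_payload() {
    printf '{"action": "enable_cluster", "bind_address": "0.0.0.0", "username": "admin", "password": "password", "port": 15984, "node_count": "3", "remote_node": "%s", "remote_current_user": "admin", "remote_current_password": "password"}\n' "$1"
}

add_payload() {
    printf '{"action": "add_node", "host": "%s", "port": "15984", "username": "admin", "password": "password"}\n' "$1"
}

# Print both payloads for each node to be joined.
for ip in 192.0.2.11 192.0.2.12 192.0.2.13; do
    enable_payload "$ip"
    add_payload "$ip"
done
```

For example, enable_payload 192.0.2.11 | curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d @- sends the first call for that node.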

curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "finish_cluster"}'

Your CouchDB cluster is now set up.