Troubleshooting with Crate-Node CLI

The crate-node CLI enables you to perform certain unsafe operations on a node that are only possible while it is shut down. With it, you can adjust the role of a node and, in some disaster scenarios, recover data.

If a CrateDB cluster unrecoverably loses the majority of its master-eligible nodes, the crate-node CLI provides a way to use the state of the remaining master-eligible nodes to form a new cluster.

Nodes may be stopped temporarily for maintenance or because of system failures. Once the issues are resolved, restarting the nodes lets them rejoin the cluster. Unfortunately, this is not always possible: a node may be damaged beyond repair and fail to start. If the cluster is still available, such a node can simply be replaced by a fresh node on a new host.

crate-node CLI

bin/crate-node repurpose|unsafe-bootstrap|detach-cluster
[--ordinal <Integer>] [-C <KeyValuePair>]
[-h, --help] ([-s, --silent] | [-v, --verbose])

The following commands provide various operations for modifying and recovering nodes:

  • crate-node repurpose can be used to delete unwanted data from a node if it used to be a data node or a master-eligible node but has been repurposed so that it no longer has one or both of these roles.
  • crate-node unsafe-bootstrap can be used to perform unsafe cluster bootstrapping. It forces one of the nodes to form a brand-new cluster on its own, using its local copy of the cluster metadata.
  • crate-node detach-cluster enables you to move nodes from one cluster to another. This can be used to move nodes into a new cluster created with the crate-node unsafe-bootstrap command. If unsafe cluster bootstrapping was not possible, it also enables you to move nodes into a brand-new cluster.

Changing the role of a node

Certain situations may require changing the role of a node. The crate-node repurpose command deletes any on-disk data that the node no longer needs in its new role.

The intended use is:

  • Stop the node
  • Update crate.yml by setting node.master and node.data as desired.
  • Run crate-node repurpose on the node
  • Start the node

If you run crate-node repurpose on a node with node.data: false and node.master: true, then it will delete any remaining shard data on that node, but it will leave the table and cluster metadata alone.

If you run crate-node repurpose on a node with node.data: false and node.master: false then it will delete any remaining shard data and table metadata, but it will leave the cluster metadata alone.

Changing a data node into a dedicated master node

To repurpose a data node as a dedicated master node, set node.master: true and node.data: false in the node's crate.yml, then follow the steps above.
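
For reference, the relevant crate.yml fragment for this change would look like the following (all other settings omitted):

```yaml
# crate.yml — repurpose this node as a dedicated master node
node.master: true
node.data: false
```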

Changing a node into a coordination-only node

To repurpose a data node as a coordination-only node, set node.master: false and node.data: false in the node's crate.yml, then follow the steps above.
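
For reference, the relevant crate.yml fragment for this change would look like the following (all other settings omitted):

```yaml
# crate.yml — repurpose this node as a coordination-only node
node.master: false
node.data: false
```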

Warning

Running this command can lead to data loss if the data stored locally on the node is not also persisted on other nodes in the cluster.

The tool provides a summary of the data to be deleted and asks for confirmation before making any changes. You can get detailed information about the affected tables and shards by passing the verbose (-v) option.

Recovering data after a disaster

Sometimes CrateDB nodes are temporarily stopped, perhaps because of the need to perform some maintenance activity or perhaps because of a hardware failure. After you resolve the temporary condition and restart the node, it will rejoin the cluster and continue normally. Depending on your configuration, your cluster may be able to remain completely available even while one or more of its nodes are stopped.

Each node stores its data in the data directories defined by the path.data setting. This means that in a disaster you can also restart a node by moving its data directories to another host, presuming that those data directories can be recovered from the faulty host.

CrateDB requires a response from a majority of the master-eligible nodes in order to elect a master and to update the cluster state. This means that if you have three master-eligible nodes then the cluster will remain available even if one of them has failed. However if two of the three master-eligible nodes fail then the cluster will be unavailable until at least one of them is restarted.
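
The majority requirement can be sketched as a small calculation (plain Python, for illustration only; not part of the crate-node tooling):

```python
def quorum(master_eligible: int) -> int:
    """Minimum number of master-eligible nodes that must respond
    to elect a master or commit a cluster state update."""
    return master_eligible // 2 + 1

print(quorum(3))  # 2: a three-node cluster survives one failed node
print(quorum(5))  # 3: a five-node cluster survives two failed nodes
```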

In some circumstances it is not possible to restart enough nodes to restore the cluster's availability. If such a disaster occurs, you should build a new cluster from a recent snapshot and re-import any data that was ingested since that snapshot was taken.

However, if it is not possible to recover your cluster from a recent snapshot, it may be possible to use the crate-node CLI to construct a new cluster that contains some of the data from the failed cluster.

Unsafe cluster bootstrapping

If there is at least one remaining master-eligible node, but it is not possible to restart a majority of them, then the crate-node unsafe-bootstrap command will unsafely override the cluster’s voting configuration as if performing another cluster bootstrapping process. The target node can then form a new cluster on its own by using the cluster metadata held locally on the target node.

Warning

These steps can lead to arbitrary data loss since the target node may not hold the latest cluster metadata, and this out-of-date metadata may make it impossible to use some or all of the tables in the cluster.

Since unsafe bootstrapping forms a new cluster containing a single node, once you have run it you must use the crate-node detach-cluster command to migrate any other surviving nodes from the failed cluster into this new cluster.

When you run the crate-node unsafe-bootstrap tool it will analyse the state of the node and ask for confirmation before taking any action. Before asking for confirmation it reports the term and version of the cluster state on the node on which it runs as follows:

Current node cluster state (term, version) pair is (4, 12)

If you have a choice of nodes on which to run this tool then you should choose one with a term that is as large as possible. If there is more than one node with the same term, pick the one with the largest version. This information identifies the node with the freshest cluster state, which minimizes the quantity of data that might be lost. For example, if the first node reports (4, 12) and a second node reports (5, 3), then the second node is preferred since its term is larger. However if the second node reports (3, 17) then the first node is preferred since its term is larger. If the second node reports (4, 10) then it has the same term as the first node, but has a smaller version, so the first node is preferred.
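
The selection rule above amounts to a lexicographic comparison of (term, version) pairs. The following Python sketch (for illustration only, not part of crate-node) picks the node with the freshest cluster state:

```python
def freshest(states):
    """Given the (term, version) pairs reported by the surviving
    master-eligible nodes, return the freshest one: the largest
    term wins, and version breaks ties between equal terms."""
    return max(states)  # tuples compare lexicographically

print(freshest([(4, 12), (5, 3)]))   # (5, 3): larger term wins
print(freshest([(4, 12), (3, 17)]))  # (4, 12): larger term wins
print(freshest([(4, 12), (4, 10)]))  # (4, 12): same term, larger version
```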

The sequence of operations for using this tool is as follows:

  • Make sure you have really lost access to at least half of the master-eligible nodes in the cluster, and they cannot be repaired or recovered by moving their data paths to healthy hardware.
  • Stop all remaining nodes.
  • Choose one of the remaining master-eligible nodes to become the new elected master as described above.
  • On this node, run the crate-node unsafe-bootstrap command as shown below. Verify that the tool reported Master node was successfully bootstrapped.
  • Start this node and verify that it is elected as the master node.
  • Run the crate-node detach-cluster tool, described below, on every other node in the cluster.
  • Start all other nodes and verify that each one joins the cluster.
  • Investigate the data in the cluster to discover if any was lost during this process.

When you run the tool, it will make sure that the node being used to bootstrap the cluster is not running. It is important that all other master-eligible nodes are also stopped while this tool is running, but the tool does not check this.

The message Master node was successfully bootstrapped does not mean that there has been no data loss; it just means that the tool was able to complete its job.

Detach a node from a cluster

When a node first joins a cluster, the unique cluster id is stored in the node's metadata. This prevents the node from joining a cluster with a different cluster id. In some situations it is necessary to reset the cluster id of a node: when a cluster is not recoverable, it may be worth moving its nodes to a new cluster. This is particularly useful after the voting configuration of a node has been reset and a new cluster has been bootstrapped.

It is unsafe for nodes to move between clusters, because different clusters have completely different cluster metadata. There is no way to safely merge the metadata from two clusters together.

To protect against inadvertently joining the wrong cluster, each cluster creates a unique identifier, known as the cluster UUID, when it first starts up. Every node records the UUID of its cluster and refuses to join a cluster with a different UUID.

However, if a node’s cluster has permanently failed then it may be desirable to try and move it into a new cluster. The crate-node detach-cluster command lets you detach a node from its cluster by resetting its cluster UUID. It can then join another cluster with a different UUID.

For example, after unsafe cluster bootstrapping you will need to detach all the other surviving nodes from their old cluster so they can join the new, unsafely-bootstrapped cluster.

Warning

Execution of this command can lead to arbitrary data loss. Only run this tool if you understand and accept the possible consequences and have exhausted all other possibilities for recovery of your cluster.

The sequence of operations for using this tool is as follows:

  • Make sure you have really lost access to every one of the master-eligible nodes in the cluster, and they cannot be repaired or recovered by moving their data paths to healthy hardware.
  • Start a new cluster and verify that it is healthy. This cluster may comprise one or more brand-new master-eligible nodes, or may be an unsafely-bootstrapped cluster formed as described above.
  • Stop all remaining data nodes.
  • On each data node, run the crate-node detach-cluster tool as shown below. Verify that the tool reported Node was successfully detached from the cluster.
  • If necessary, configure each data node to discover the new cluster.
  • Start each data node and verify that it has joined the new cluster.
  • Wait for all recoveries to have completed, and investigate the data in the cluster to discover if any was lost during this process.
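
As an illustration of the discovery step above, a detached data node's crate.yml might point at a master-eligible node of the new cluster. The address 10.0.0.5 is hypothetical, and discovery.seed_hosts is the setting used by recent CrateDB versions; check the documentation for your version:

```yaml
# crate.yml on a detached data node — hypothetical example.
# 10.0.0.5:4300 stands in for the transport address of a
# master-eligible node in the new cluster.
discovery.seed_hosts:
  - 10.0.0.5:4300
```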

The message Node was successfully detached from the cluster does not mean that there has been no data loss; it just means that the tool was able to complete its job.

Parameters

repurpose
Delete excess data when a node’s roles are changed.
unsafe-bootstrap
Specifies to unsafely bootstrap this node as a new one-node cluster.
detach-cluster
Specifies to unsafely detach this node from its cluster so it can join a different cluster.
--ordinal <Integer>
If there is more than one node sharing a data path then this specifies which node to target. Defaults to 0, meaning to use the first node in the data path.
-C <KeyValuePair>
Configures a setting.
-h, --help
Returns all of the command parameters.
-s, --silent
Shows minimal output.
-v, --verbose
Shows verbose output.

Examples

Repurposing a node as a dedicated master node (master: true, data: false)

In this example, a former data node is repurposed as a dedicated master node. First update the node’s settings to node.master: true and node.data: false in its crate.yml config file. Then run the crate-node repurpose command to find and remove excess shard data:

node$ ./bin/crate-node repurpose

  WARNING: CrateDB MUST be stopped before running this tool.

Found 2 shards in 2 tables to clean up
Use -v to see list of paths and tables affected
Node is being re-purposed as master and no-data. Clean-up of shard data will be performed.

Do you want to proceed?

Confirm [y/N] y
Node successfully repurposed to master and no-data.

Repurposing a node as a coordinating-only node (master: false, data: false)

In this example, a node that previously held data is repurposed as a coordinating-only node. First update the node’s settings to node.master: false and node.data: false in its crate.yml config file. Then run the crate-node repurpose command to find and remove excess shard data and table metadata:

node$ ./bin/crate-node repurpose

  WARNING: CrateDB MUST be stopped before running this tool.

Found 2 tables (2 shards and 2 table meta data) to clean up
Use -v to see list of paths and tables affected
Node is being re-purposed as no-master and no-data. Clean-up of table data will be performed.

Do you want to proceed?

Confirm [y/N] y
Node successfully repurposed to no-master and no-data.

Unsafe cluster bootstrapping

Suppose your cluster had five master-eligible nodes and you have permanently lost three of them, leaving two nodes remaining.

  • Run the tool on the first remaining node, but answer n at the confirmation step.
node_1$ ./bin/crate-node unsafe-bootstrap

  WARNING: CrateDB MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (4, 12)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
  • Run the tool on the second remaining node, and again answer n at the confirmation step.
node_2$ ./bin/crate-node unsafe-bootstrap

   WARNING: CrateDB MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
  • Since the second node has a greater term it has a fresher cluster state, so it is better to unsafely bootstrap the cluster using this node:
node_2$ ./bin/crate-node unsafe-bootstrap

  WARNING: CrateDB MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] y
Master node was successfully bootstrapped

Detaching nodes from their cluster

After unsafely bootstrapping a new cluster, run the crate-node detach-cluster command to detach all remaining nodes from the failed cluster so they can join the new cluster:

node_3$ ./bin/crate-node detach-cluster

    WARNING: CrateDB MUST be stopped before running this tool.

You should only run this tool if you have permanently lost all of the
master-eligible nodes in this cluster and you cannot restore the cluster
from a snapshot, or you have already unsafely bootstrapped a new cluster
by running ``crate-node unsafe-bootstrap`` on a master-eligible
node that belonged to the same cluster as this node. This tool can cause
arbitrary data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y
Node was successfully detached from the cluster