OpenSearch Low Disk Watermark

Opster Team

Last updated: Dec 13, 2022

2 min read

In addition to reading this guide, we recommend you run the Elasticsearch Health Check-Up. It will detect issues and improve your Elasticsearch performance by analyzing your shard sizes, threadpools, memory, snapshots, disk watermarks and more.

The Elasticsearch Check-Up is free and requires no installation.

To manage all aspects of your OpenSearch operation, you can use Opster’s Management Console (OMC). The OMC makes it easy to orchestrate and manage OpenSearch in any environment. Using the OMC you can deploy multiple clusters, configure node roles, scale cluster resources, manage certificates and more – all from a single interface, for free. Check it out here.

Overview

There are various “watermark” thresholds on your OpenSearch cluster. As the disk fills up on a node, the first threshold to be crossed is the “low disk watermark”. Once this threshold is crossed, the OpenSearch cluster will stop allocating new shards to that node. This means that your cluster may become yellow, because replica shards that cannot be allocated will remain unassigned.

The second threshold is the “high disk watermark”. Once it is crossed, the cluster will start relocating shards away from the node. Finally, the “disk flood stage” will be reached. Once this threshold is passed, the cluster will block writing to ALL indices that have at least one shard (primary or replica) on the node which has passed the watermark. Reads (searches) will still be possible.
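To see how full each node’s disk currently is, and how many shards are allocated to it, you can use the _cat allocation API (the v parameter simply adds column headers). The disk.percent column shows how close each node is to the watermarks:

GET _cat/allocation?v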

How to resolve it

Passing this threshold is a warning, and you should take action promptly, before the higher thresholds are reached. Here are possible actions you can take to resolve the issue:

– Delete old indices (see the example request after this list)
– Remove documents from existing indices
– Increase disk space on the node
– Add new nodes to the cluster
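For example, an old index can be removed with a single DELETE request. The index name below is only a placeholder for this sketch; substitute one of your own indices:

DELETE logs-2022.10.01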

You can see the settings you have applied with this command:

GET _cluster/settings
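If you also want to see the default values, rather than only the settings that have been set explicitly, you can add the include_defaults and flat_settings parameters:

GET _cluster/settings?include_defaults=true&flat_settings=true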

If they are not appropriate, you can modify them using a command such as the one below:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    "cluster.info.update.interval": "1m"
  }
}
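Note that transient settings do not survive a full cluster restart. If you want the change to persist, you can use the persistent block instead; the values below are only examples:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}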

How to avoid it

There are several mechanisms you can use to automatically delete stale data and keep disk usage under control:

  1. Apply ISM (Index State Management)

    Using ISM, you can get OpenSearch to automatically delete an index once it reaches a given age or size. A sketch of such a policy is shown after this list.

  2. Use date-based indices

    If your application uses date-based indices, then it is easy to delete old indices using a script (see the example after this list).

  3. Use snapshots to store data offline

    It may be appropriate to snapshot older indices and store the data offline, restoring it only if the archived data needs to be reviewed or studied.

  4. Automate / simplify the process of adding new data nodes

    Use automation tools such as Terraform to automate the addition of new nodes to the cluster. If this is not possible, at the very least ensure you have a clearly documented process to create new nodes, add TLS certificates and configuration, and bring them into the OpenSearch cluster in a short and predictable time frame.
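As an illustration of point 1 above, here is a sketch of an ISM policy that deletes indices 30 days after they are created. The policy name, index pattern and retention age are assumptions for this example; adjust them to your own requirements:

PUT _plugins/_ism/policies/delete_after_30d
{
  "policy": {
    "description": "Example policy: delete indices 30 days after creation",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "30d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          { "delete": {} }
        ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": ["logs-*"],
      "priority": 100
    }
  }
}

With the ism_template section in place, the policy is attached automatically to newly created indices that match the pattern; existing indices have to be attached to the policy explicitly.

For point 2, if your indices are named by date, a scheduled script can delete anything older than the retention period, for example with a wildcard DELETE request. Whether wildcards are allowed in DELETE requests depends on the action.destructive_requires_name setting, so treat the request below, and the index pattern in it, as an illustration only:

DELETE logs-2022.10.*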

