﻿---
title: Configuration properties
description: Define the configuration for the hdfs repository through the REST API, including the supported settings and a note on HDFS availability.
url: https://www.elastic.co/elastic/docs-builder/docs/3028/reference/elasticsearch/plugins/repository-hdfs-config
products:
  - Elasticsearch
---

# Configuration properties
Once installed, define the configuration for the `hdfs` repository through the [REST API](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/deploy-manage/tools/snapshot-and-restore):
```json
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "conf.dfs.client.read.shortcircuit": "true"
  }
}
```
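As a sketch of how such a body might be assembled programmatically, the Python helper below builds the same JSON document and prefixes inline Hadoop options with `conf.`, as the plugin expects. The helper name `hdfs_repository_body` is hypothetical, not part of any Elasticsearch client API:

```python
import json

def hdfs_repository_body(uri, path, conf=None):
    """Build the request body for registering an `hdfs` snapshot repository.

    `uri` and `path` are required settings; `conf` is an optional dict of
    inline Hadoop options, each emitted under a `conf.<key>` setting name.
    (Hypothetical helper for illustration only.)
    """
    if not uri or not path:
        raise ValueError("'uri' and 'path' are required settings")
    settings = {"uri": uri, "path": path}
    for key, value in (conf or {}).items():
        settings[f"conf.{key}"] = value
    return {"type": "hdfs", "settings": settings}

body = hdfs_repository_body(
    "hdfs://namenode:8020/",
    "elasticsearch/repositories/my_hdfs_repository",
    conf={"dfs.client.read.shortcircuit": "true"},
)
print(json.dumps(body, indent=2))
```

Sending this body in a `PUT _snapshot/<repository-name>` request registers the repository.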

The following settings are supported:
<definitions>
  <definition term="uri">
    The URI address for HDFS, e.g. "hdfs://<host>:<port>/". (Required)
  </definition>
  <definition term="path">
    The file path within the filesystem where data is stored and loaded, e.g. "path/to/file". (Required)
  </definition>
  <definition term="load_defaults">
    Whether to load the default Hadoop configuration or not. (Enabled by default)
  </definition>
  <definition term="conf.<key>">
    Inline configuration parameter to be added to the Hadoop configuration. (Optional) Only client-oriented properties from the Hadoop [core](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml) and [hdfs](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml) configuration files are recognized by the plugin.
  </definition>
  <definition term="compress">
    Whether to compress the metadata or not. (Enabled by default)
  </definition>
  <definition term="max_restore_bytes_per_sec">
    Throttles the per-node restore rate. Defaults to unlimited. Note that restores are also throttled through [recovery settings](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/elasticsearch/configuration-reference/index-recovery-settings).
  </definition>
  <definition term="max_snapshot_bytes_per_sec">
    Throttles the per-node snapshot rate. Defaults to `40mb` per second. Note that if the [recovery settings for managed services](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/elasticsearch/configuration-reference/index-recovery-settings) are set, then this defaults to unlimited, and the rate is additionally throttled through [recovery settings](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/elasticsearch/configuration-reference/index-recovery-settings).
  </definition>
  <definition term="readonly">
    Makes the repository read-only. Defaults to `false`.
  </definition>
  <definition term="chunk_size">
    Overrides the chunk size. (Disabled by default)
  </definition>
  <definition term="security.principal">
    Kerberos principal to use when connecting to a secured HDFS cluster. If you are using a service principal for your Elasticsearch node, you may use the `_HOST` pattern in the principal name; the plugin replaces the pattern with the hostname of the node at runtime (see [Creating the Secure Repository](/elastic/docs-builder/docs/3028/reference/elasticsearch/plugins/repository-hdfs-security#repository-hdfs-security-runtime)).
  </definition>
  <definition term="replication_factor">
    The replication factor for all new HDFS files created by this repository. Must be greater than or equal to the `dfs.replication.min` HDFS setting and less than or equal to the `dfs.replication.max` HDFS setting. Defaults to the HDFS cluster setting.
  </definition>
</definitions>
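The `_HOST` substitution described under `security.principal` can be illustrated with a short sketch. The function below models the replacement the plugin performs at runtime; the function name, realm, and hostname are assumptions made for the example:

```python
def resolve_principal(principal, hostname):
    """Replace the `_HOST` pattern in a Kerberos service principal with
    the node's hostname, mirroring the substitution the repository-hdfs
    plugin performs at runtime. (Illustrative helper, not plugin code.)
    """
    return principal.replace("_HOST", hostname)

# On a node named es-node-1.example.com, a templated principal resolves to
# a node-specific one:
resolved = resolve_principal("elasticsearch/_HOST@REALM", "es-node-1.example.com")
# -> "elasticsearch/es-node-1.example.com@REALM"
```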


## A note on HDFS availability

When you initialize a repository, its settings are persisted in the cluster state. When a node comes online, it attempts to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, all nodes in the cluster must be able to reach HDFS when they start. If a node cannot, it will fail to initialize the repository at startup and the repository will be unusable on that node. If this happens, you will need to remove and re-add the repository or restart the offending node.
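The remove-and-re-add recovery step amounts to two snapshot API calls: a `DELETE` of the repository registration followed by a `PUT` with the original settings. The sketch below only assembles those requests; the helper name and the tuple representation are assumptions, and how you send the requests depends on your client:

```python
def repair_repository_requests(name, settings):
    """Assemble the two REST calls needed to remove and re-register a
    repository that failed to initialize: DELETE the registration, then
    PUT it back with the same settings. (Illustrative helper only.)
    """
    return [
        ("DELETE", f"/_snapshot/{name}", None),
        ("PUT", f"/_snapshot/{name}", {"type": "hdfs", "settings": settings}),
    ]

requests_to_send = repair_repository_requests(
    "my_hdfs_repository",
    {"uri": "hdfs://namenode:8020/",
     "path": "elasticsearch/repositories/my_hdfs_repository"},
)
```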