﻿---
title: Failure store
description: A failure store is a secondary set of indices inside a data stream, dedicated to storing failed documents. A failed document is any document that, without...
url: https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/data-store/data-streams/failure-store
products:
  - Elastic Cloud Serverless
  - Elastic Stack
  - Elasticsearch
applies_to:
  - Elastic Cloud Serverless: Generally available
  - Elastic Stack: Generally available since 9.1
---

# Failure store
A failure store is a secondary set of indices inside a data stream, dedicated to storing failed documents. A failed document is any document that would cause an ingest pipeline exception, or whose structure conflicts with the data stream's mappings, if the failure store were not enabled. Without the failure store, a failed document causes the indexing operation to fail, and an error message is returned in the operation response.
When a data stream's failure store is enabled, these failures are instead captured in a separate index and persisted so they can be analyzed later. Clients receive a successful response with a flag indicating that the failure was redirected.
<important>
  Failure stores do not capture failures caused by backpressure or document version conflicts. These failures are always returned as-is since they warrant specific action by the client.
</important>

On this page, you'll learn how to set up, use, and manage a failure store, as well as the structure of failure store documents.
For examples of how to use failure stores to identify and fix errors in ingest pipelines and your data, refer to [Using failure stores to address ingestion issues](https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/data-store/data-streams/failure-store-recipes).

### Required permissions

To view and modify a failure store in Elastic Stack, you need the following data stream level privileges:
- `read_failure_store` to search and view a data stream's failure store data
- `manage_failure_store` to modify a data stream's failure store configuration

For more information, refer to [Granting privileges for data streams and aliases](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/users-roles/cluster-or-deployment-auth/granting-privileges-for-data-streams-aliases).
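
For example, you could grant a user read-only access to failure store data with a role like the following (the role name and index pattern are illustrative):

```json
POST _security/role/failure_store_reader
{
  "indices": [
    {
      "names": [ "my-datastream-*" ],
      "privileges": [ "read_failure_store" ]
    }
  ]
}
```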

## Set up a data stream failure store

Each data stream has its own failure store that can be enabled to accept failed documents. By default, this failure store is disabled and any ingestion problems are raised in the response to write operations.

### Set up for new data streams

You can specify in a data stream's [index template](https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/data-store/templates) whether the failure store should be enabled when the data stream is first created.
<note>
  Unlike the `settings` and `mappings` fields on an [index template](https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/data-store/templates), which are applied repeatedly to new data stream write indices on rollover, the `data_stream_options` section of a template is applied only once, when the data stream is first created. To configure existing data streams, use the [put data stream options API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options).
</note>

To enable the failure store on a new data stream, enable it in the `data_stream_options` of the template:
```json
{
  "index_patterns": ["my-datastream-*"],
  "data_stream": { },
  "template": {
    "data_stream_options": { <1>
      "failure_store": {
        "enabled": true <2>
      }
    }
  }
}
```

After a matching data stream is created, its failure store will be enabled.
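
For example, once the template above is in place, creating a data stream whose name matches the template's index pattern (the name here is illustrative) creates it with the failure store enabled:

```json
PUT _data_stream/my-datastream-test
```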

### Set up for existing data streams

Enabling the failure store using [index templates](https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/data-store/templates) can only affect data streams that are newly created. Existing data streams that use a template are not affected by changes to the template's `data_stream_options` field.
To modify an existing data stream's options, use the [put data stream options](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options) API:
```json
{
  "failure_store": {
    "enabled": true <1>
  }
}
```

You can also disable failure store redirection using this API. Disabling the failure store only halts the redirection of failed documents. Any existing failure data in the data stream remains until it is deleted manually or expires after reaching its configured retention period.
```json
{
  "failure_store": {
    "enabled": false <1>
  }
}
```

<tip applies-to="Elastic Cloud Serverless: Generally available, Elastic Stack: Generally available since 9.2, Elastic Stack: Preview in 9.1">
  You can also enable the data stream failure store in Kibana. Locate the data stream on the **Streams** page, where a stream maps directly to a data stream. Select a stream to view its details and go to the **Retention** tab where you can find the **Enable failure store** option.
</tip>


### Enable failure store using cluster setting

If you have a large number of existing data streams, you may want to enable their failure stores in one place. Instead of updating each data stream's options individually, set `data_streams.failure_store.enabled` to a list of index patterns in the [cluster settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Any data stream that matches one of these patterns operates with its failure store enabled.
```json
{
  "persistent" : {
    "data_streams.failure_store.enabled" : [ "my-datastream-*", "logs-*" ] <1>
  }
}
```

Matching data streams will ignore this configuration if the failure store is explicitly enabled or disabled in their [data stream options](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options).


## Using a failure store

The failure store is meant to ease the burden of detecting and handling failures when ingesting data to Elasticsearch. Clients are less likely to encounter unrecoverable failures when writing documents, and developers are more easily able to troubleshoot faulty pipelines and mappings.
For examples of how to use failure stores to identify and fix errors in ingest pipelines and your data, refer to [Using failure stores to address ingestion issues](https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/data-store/data-streams/failure-store-recipes).

### Failure redirection

Once a failure store is enabled for a data stream, it begins redirecting documents that fail due to common ingestion problems instead of returning errors from write operations. Clients are notified in a non-intrusive way when a document is redirected to the failure store.
Each data stream's failure store is made up of a list of indices dedicated to storing failed documents. These failure indices function much like a data stream's normal backing indices: there is a write index that accepts failed documents, the indices can be rolled over, and they are automatically cleaned up over time according to a lifecycle policy. Failure indices are created lazily, the first time they are needed to store a failed document.
When a document bound for a data stream encounters a problem during its ingestion, the response is annotated with the `failure_store` field which describes how Elasticsearch responded to that problem. The `failure_store` field is present on both the [bulk](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) and [index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) API responses when applicable. Clients can use this information to augment their behavior based on the response from Elasticsearch.
The following bulk operation sends two documents. Both write to the `id` field, which is mapped as a `long`. The first document is accepted, but the second fails because the value `invalid_text` cannot be parsed as a `long`, so it is redirected to the failure store:
```json
{"create":{}}
{"@timestamp": "2025-05-01T00:00:00Z", "id": 1234} <1>
{"create":{}}
{"@timestamp": "2025-05-01T00:00:00Z", "id": "invalid_text"} <2>
```

```json
{
  "errors": false, 
  "took": 400,
  "items": [
    {
      "create": {
        "_index": ".ds-my-datastream-new-2025.05.01-000001", 
        "_id": "YUvQipYB_ZAKuDfZRosB",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 1,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 3,
        "_primary_term": 1,
        "status": 201
      }
    },
    {
      "create": {
        "_index": ".fs-my-datastream-new-2025.05.01-000002", 
        "_id": "lEu8jZYB_ZAKuDfZNouU",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 1,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 10,
        "_primary_term": 1,
        "failure_store": "used", 
        "status": 201
      }
    }
  ]
}
```

If the document was redirected to a data stream's failure store due to a problem, then the `failure_store` field on the response will be `used`, and the response will not return any error information:
```json
{
  "_index": ".fs-my-datastream-new-2025.05.01-000002", 
  "_id": "lEu8jZYB_ZAKuDfZNouU",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 11,
  "_primary_term": 1,
  "failure_store": "used" 
}
```

If the document could have been redirected to a data stream's failure store but the failure store was disabled, then the `failure_store` field on the response will be `not_enabled`, and the response will display the error encountered as normal.
```json
{
  "error": {
    "root_cause": [ 
      {
        "type": "document_parsing_exception",
        "reason": "[1:53] failed to parse field [id] of type [long] in document with id 'Y0vQipYB_ZAKuDfZR4sR'. Preview of field's value: 'invalid_text'"
      }
    ],
    "type": "document_parsing_exception",
    "reason": "[1:53] failed to parse field [id] of type [long] in document with id 'Y0vQipYB_ZAKuDfZR4sR'. Preview of field's value: 'invalid_text'",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "For input string: \"invalid_text\""
    },
    "failure_store": "not_enabled" 
  },
  "status": 400 
}
```

If the document was redirected to a data stream's failure store but that failed document could not be stored (for example, due to shard unavailability or a similar problem), then the `failure_store` field on the response will be `failed`, and the response will display the error for the original failure, as well as a suppressed error detailing why the failure could not be stored:
```json
{
  "error": {
    "root_cause": [
      {
        "type": "document_parsing_exception", 
        "reason": "[1:53] failed to parse field [id] of type [long] in document with id 'Y0vQipYB_ZAKuDfZR4sR'. Preview of field's value: 'invalid_text'",
        "suppressed": [
          {
            "type": "cluster_block_exception", 
            "reason": "index [.fs-my-datastream-2025.05.01-000002] blocked by: [FORBIDDEN/5/index read-only (api)];"
          }
        ]
      }
    ],
    "type": "document_parsing_exception", 
    "reason": "[1:53] failed to parse field [id] of type [long] in document with id 'Y0vQipYB_ZAKuDfZR4sR'. Preview of field's value: 'invalid_text'",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "For input string: \"invalid_text\""
    },
    "suppressed": [
      {
        "type": "cluster_block_exception",
        "reason": "index [.fs-my-datastream-2025.05.01-000002] blocked by: [FORBIDDEN/5/index read-only (api)];"
      }
    ],
    "failure_store": "failed" 
  },
  "status": 400 
}
```


### Searching failures

Once you have accumulated some failures, the failure store can be searched much like a regular data stream.
<warning>
  Documents redirected to the failure store after a failed ingest pipeline are stored in their original, unprocessed form. If an ingest pipeline normally redacts sensitive information from a document, failed documents in their unprocessed form may still contain that sensitive information.

  Furthermore, failed documents are likely to be structured differently from normal data in a data stream, so take special care when using [document level security](/elastic/docs-builder/docs/3016/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level#document-level-security) or [field level security](/elastic/docs-builder/docs/3016/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level#field-level-security). Any security policies that apply these features to both regular documents and failure documents should account for differences in structure between the two document types.

  To limit visibility of potentially sensitive data, users require the [`read_failure_store`](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/security-privileges#privileges-list-indices) index privilege for a data stream in order to search that data stream's failure store data.
</warning>

You can search a data stream's failure store using the existing search APIs in Elasticsearch.
To search failure store data, use the [index component selector syntax](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/rest-apis/api-conventions#api-component-selectors) to specify which part of the data stream to target. Appending the `::failures` suffix to the data stream name runs the operation against the failure store instead of the regular backing indices.
<tab-set>
  <tab-item title="ES|QL">
    ```json
    {
        "query": """FROM my-datastream::failures | DROP error.stack_trace | LIMIT 1""" <1>
    }
    ```
    An example of a search result with the failed document present:
    ```json
           @timestamp       |    document.id     |document.index |document.routing|                                                            error.message                                                            |error.pipeline |error.pipeline_trace|error.processor_tag|error.processor_type|        error.type        
    ------------------------+--------------------+---------------+----------------+-------------------------------------------------------------------------------------------------------------------------------------+---------------+--------------------+-------------------+--------------------+--------------------------
    2025-05-01T12:00:00.000Z|Y0vQipYB_ZAKuDfZR4sR|my-datastream  |null            |[1:45] failed to parse field [id] of type [long] in document with id 'Y0vQipYB_ZAKuDfZR4sR'. Preview of field's value: 'invalid_text'|null           |null                |null               |null                |document_parsing_exception
    ```

    <note>
      Because the `document.source` field is unmapped, it is absent from the ES|QL results.
    </note>
  </tab-item>

  <tab-item title="_search API">
    ```json
    GET my-datastream::failures/_search
    ```
    An example of a search result with the failed document present:
    ```json
    {
      "took": 0,
      "timed_out": false,
      "_shards": {
        "total": 1,
        "successful": 1,
        "skipped": 0,
        "failed": 0
      },
      "hits": {
        "total": {
          "value": 1,
          "relation": "eq"
        },
        "max_score": 1,
        "hits": [
          {
            "_index": ".fs-my-datastream-2025.05.01-000002", 
            "_id": "lEu8jZYB_ZAKuDfZNouU",
            "_score": 1,
            "_source": {
              "@timestamp": "2025-05-01T12:00:00.000Z", 
              "document": { 
                "id": "Y0vQipYB_ZAKuDfZR4sR",
                "index": "my-datastream",
                "source": {
                  "@timestamp": "2025-05-01T00:00:00Z",
                  "id": "invalid_text"
                }
              },
              "error": { 
                "type": "document_parsing_exception",
                "message": "[1:53] failed to parse field [id] of type [long] in document with id 'Y0vQipYB_ZAKuDfZR4sR'. Preview of field's value: 'invalid_text'",
                "stack_trace": """o.e.i.m.DocumentParsingException: [1:53] failed to parse field [id] of type [long] in document with id 'Y0vQipYB_ZAKuDfZR4sR'. Preview of field's value: 'invalid_text'
    	at o.e.i.m.FieldMapper.rethrowAsDocumentParsingException(FieldMapper.java:241)
    	at o.e.i.m.FieldMapper.parse(FieldMapper.java:194)
    	... 24 more
    Caused by: j.l.IllegalArgumentException: For input string: "invalid_text"
    	at o.e.x.s.AbstractXContentParser.toLong(AbstractXContentParser.java:189)
    	at o.e.x.s.AbstractXContentParser.longValue(AbstractXContentParser.java:210)
    	... 31 more
    """
              }
            }
          }
        ]
      }
    }
    ```
  </tab-item>

  <tab-item title="SQL">
    ```json
    {
        "query": """SELECT * FROM "my-datastream::failures" LIMIT 1"""
    }
    ```
    An example of a search result with the failed document present:
    ```json
           @timestamp       |    document.id     |document.index |document.routing|                                                            error.message                                                            |error.pipeline |error.pipeline_trace|error.processor_tag|error.processor_type|                                                                                                                                                                                                                                                                            error.stack_trace                                                                                                                                                                                                                                                                            |        error.type        
    ------------------------+--------------------+---------------+----------------+-------------------------------------------------------------------------------------------------------------------------------------+---------------+--------------------+-------------------+--------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------
    2025-05-05T20:49:10.899Z|sXk1opYBL1dfU_1htCAE|my-datastream  |null            |[1:45] failed to parse field [id] of type [long] in document with id 'sXk1opYBL1dfU_1htCAE'. Preview of field's value: 'invalid_text'|null           |null                |null               |null                |o.e.i.m.DocumentParsingException: [1:45] failed to parse field [id] of type [long] in document with id 'sXk1opYBL1dfU_1htCAE'. Preview of field's value: 'invalid_text'
    	at o.e.i.m.FieldMapper.rethrowAsDocumentParsingException(FieldMapper.java:241)
    	at o.e.i.m.FieldMapper.parse(FieldMapper.java:194)
    	... 19 more
    Caused by: j.l.IllegalArgumentException: For input string: "invalid_text"
    	at o.e.x.s.AbstractXContentParser.toLong(AbstractXContentParser.java:189)
    	at o.e.x.s.AbstractXContentParser.longValue(AbstractXContentParser.java:210)
    	... 26 more
    |document_parsing_exception
    ```

    <note>
      Because the `document.source` field is unmapped, it is absent from the SQL results.
    </note>
  </tab-item>
</tab-set>


### Failure document structure

Failure documents have a uniform structure that is handled internally by Elasticsearch.
<definitions>
  <definition term="@timestamp">
    (`date`) The timestamp at which the document encountered a failure in Elasticsearch.
  </definition>
  <definition term="document">
    (`object`) The document at time of failure. If the document failed in an ingest pipeline, then the document will be the unprocessed version of the document as it arrived in the original indexing request. If the document failed due to a mapping issue, then the document will be as it was after any ingest pipelines were applied to it.
    <definitions>
      <definition term="document.id">
        (`keyword`) The ID of the original document at the time of failure.
      </definition>
      <definition term="document.routing">
        (`keyword`, optional) The routing of the original document at the time of failure if it was specified.
      </definition>
      <definition term="document.index">
        (`keyword`) The index that the document was being written to when it failed.
      </definition>
      <definition term="document.source">
        (unmapped object) The body of the original document. This field is unmapped and only present in the failure document's source. This prevents mapping conflicts in the failure store when redirecting failed documents. If you need to include fields from the original document's source in your queries, use [runtime fields](https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/data-store/mapping/define-runtime-fields-in-search-request) on the search request.
      </definition>
    </definitions>
  </definition>
  <definition term="error">
    (`object`) Information about the failure that prevented this document from being indexed.
    <definitions>
      <definition term="error.message">
        (`match_only_text`) The error message that describes the failure.
      </definition>
      <definition term="error.stack_trace">
        (`text`) A compressed stack trace from Elasticsearch for the failure.
      </definition>
      <definition term="error.type">
        (`keyword`) The type classification of the failure. Values match the error `type` returned in failed indexing API responses.
      </definition>
      <definition term="error.pipeline">
        (`keyword`, optional) If the failure occurred in an ingest pipeline, this will contain the name of the pipeline.
      </definition>
      <definition term="error.pipeline_trace">
        (`keyword`, optional array) If the failure occurred in an ingest pipeline, this will contain the list of pipelines that the document had visited up until the failure.
      </definition>
      <definition term="error.processor_tag">
        (`keyword`, optional) If the failure occurred in an ingest processor that is annotated with a tag, the tag contents will be present here.
      </definition>
      <definition term="error.processor_type">
        (`keyword`, optional) If the failure occurred in an ingest processor, this will contain the processor type. (e.g. `script`, `append`, `enrich`, etc.)
      </definition>
    </definitions>
  </definition>
</definitions>
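
Because `document.source` is unmapped, its fields cannot be queried directly. As a sketch (the `document.source.id` path follows the failure document examples on this page; adapt it to your data), a runtime field defined on the search request can surface a value from the original document body:

```json
GET my-datastream::failures/_search
{
  "runtime_mappings": {
    "original_id": {
      "type": "keyword",
      "script": {
        "source": "emit(params._source.document.source.id.toString());"
      }
    }
  },
  "fields": [ "original_id" ]
}
```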


#### Failure document source

The contents of a failure document's `document` field depend on when the failure occurred during ingestion. When sending data to a data stream, documents can fail in two different phases: during an ingest pipeline, and during indexing.
1. Documents that fail during an ingest pipeline will store the source of the document as it was originally sent to Elasticsearch. Changes from pipelines are discarded before redirecting the failure.
2. Documents that fail during indexing will store the source of the document as it was during the index operation. Any changes from pipelines will be reflected in the source of the document that is redirected.

To demonstrate the differences between these kinds of failures, we will use the following pipeline and template definitions.
```json
{
  "processors": [
    {
      "set": { <1>
        "override": false,
        "field": "@timestamp",
        "copy_from": "_ingest.timestamp"
      }
    },
    {
      "set": { <2>
        "field": "published",
        "copy_from": "data"
      }
    }
  ]
}
```

```json
{
    "index_patterns": ["my-datastream-ingest*"],
    "data_stream": {},
    "template": {
      "settings": {
        "index.default_pipeline": "my-datastream-example-pipeline" // Calling the pipeline by default.
      },
      "mappings": {
        "properties": {
          "published": { // A field of type long to hold our result.
            "type": "long"
          }
        }
      },
      "data_stream_options": {
        "failure_store": {
          "enabled": true // Failure store is enabled.
        }
      }
    }
}
```

During ingestion, documents are first processed by any applicable ingest pipelines. This process modifies a copy of the document and saves the changes to the original document only after all pipelines have completed. If a document is sent to the failure store because of a failure during an ingest pipeline, any changes made by the pipelines it passed through are discarded before the failure is redirected. The document is therefore stored in the same state in which the client originally sent it. This lets you see the document before any pipelines ran on it, and allows the original document to be replayed in simulate operations to troubleshoot problems in the ingest pipeline.
Using the pipeline and template defined above, we will send a document that is missing a field the pipeline requires. The document fails:
```json
{
  "random": 42 // Not the field we're looking for.
}
```

```json
{
  "_index": ".fs-my-datastream-ingest-2025.05.09-000002",
  "_id": "eXS-tpYBwrYNjPmat9Cx",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 0,
  "_primary_term": 1,
  "failure_store": "used"
}
```

Inspecting the corresponding failure document will show the document in its original form as it was sent to Elasticsearch.
```json
GET my-datastream-ingest::failures/_search
```

```json
{
  "took": 0,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 1,
    "hits": [
      {
        "_index": ".fs-my-datastream-ingest-2025.05.09-000002",
        "_id": "eXS-tpYBwrYNjPmat9Cx",
        "_score": 1,
        "_source": {
          "@timestamp": "2025-05-09T20:31:13.759Z",
          "document": { 
            "index": "my-datastream-ingest",
            "source": {
              "random": 42
            }
          },
          "error": {
            "type": "illegal_argument_exception",
            "message": "field [data] not present as part of path [data]", 
            "stack_trace": """j.l.IllegalArgumentException: field [data] not present as part of path [data]
	at o.e.i.IngestDocument.getFieldValue(IngestDocument.java:202)
	at o.e.i.c.SetProcessor.execute(SetProcessor.java:86)
	... 14 more
""",
            "pipeline_trace": [
              "my-datastream-example-pipeline"
            ],
            "pipeline": "my-datastream-example-pipeline",
            "processor_type": "set"
          }
        }
      }
    ]
  }
}
```

The document failed on the second processor in the pipeline. Although the first processor would have added a `@timestamp` field, the stored failure document does not contain one, because all pipeline changes were discarded when the pipeline failed.
The second phase in which failures can occur is indexing. After documents have been processed by any applicable pipelines, they are parsed against the index mappings before being indexed into the shard. If a document is sent to the failure store due to a failure at this stage, it is stored as it was after ingest processing, because by this point the original document has already been overwritten by the ingest pipeline changes. This lets you see what the document looked like during the mapping and indexing phase of the write operation.
Building on the example above, we send a document that has a text value where we expect a numeric value:
```json
{
  "data": "this field is invalid" <1>
}
```

```json
{
  "_index": ".fs-my-datastream-ingest-2025.05.09-000002",
  "_id": "sXTVtpYBwrYNjPmaFNAY",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 0,
  "_primary_term": 1,
  "failure_store": "used" 
}
```

If we obtain the corresponding failure document, we can see that the document stored has had the default pipeline applied to it.
```json
GET my-datastream-ingest::failures/_search
```

```json
{
  "took": 0,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 1,
    "hits": [
      {
        "_index": ".fs-my-datastream-ingest-2025.05.09-000002",
        "_id": "sXTVtpYBwrYNjPmaFNAY",
        "_score": 1,
        "_source": {
          "@timestamp": "2025-05-09T20:55:38.943Z",
          "document": { 
            "id": "sHTVtpYBwrYNjPmaEdB5",
            "index": "my-datastream-ingest",
            "source": {
              "@timestamp": "2025-05-09T20:55:38.362486755Z",
              "data": "this field is invalid",
              "published": "this field is invalid"
            }
          },
          "error": {
            "type": "document_parsing_exception", 
            "message": "[1:91] failed to parse field [published] of type [long] in document with id 'sHTVtpYBwrYNjPmaEdB5'. Preview of field's value: 'this field is invalid'",
            "stack_trace": """o.e.i.m.DocumentParsingException: [1:91] failed to parse field [published] of type [long] in document with id 'sHTVtpYBwrYNjPmaEdB5'. Preview of field's value: 'this field is invalid'
	at o.e.i.m.FieldMapper.rethrowAsDocumentParsingException(FieldMapper.java:241)
	at o.e.i.m.FieldMapper.parse(FieldMapper.java:194)
	... 24 more
Caused by: j.l.IllegalArgumentException: For input string: "this field is invalid"
	at o.e.x.s.AbstractXContentParser.toLong(AbstractXContentParser.java:189)
	at o.e.x.s.AbstractXContentParser.longValue(AbstractXContentParser.java:210)
	... 31 more
"""
          }
        }
      }
    ]
  }
}
```

The `document` field shows the effective input to whichever process caused the failure. This gives you the information you need to reproduce the problem.
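
For example, for the pipeline failure captured earlier, you could replay the unprocessed `document.source` through the same pipeline with the simulate pipeline API to reproduce the error:

```json
POST _ingest/pipeline/my-datastream-example-pipeline/_simulate
{
  "docs": [
    { "_source": { "random": 42 } }
  ]
}
```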

## Manage a data stream's failure store

Failure data can accumulate in a data stream over time. To help manage this accumulation, most administrative operations that can be done on a data stream can be applied to the data stream's failure store.

### Failure store rollover

A data stream treats its failure store much like a secondary set of [backing indices](/elastic/docs-builder/docs/3016/manage-data/data-store/data-streams#backing-indices). Multiple dedicated hidden indices serve search requests for the failure store, while one index acts as the current write index. You can use the [rollover](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover) API to roll over the failure store. As with the regular indices in a data stream, rolling over creates a new write index in the failure store to accept new failure documents.
```console
POST my-datastream::failures/_rollover
```

```json
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "old_index": ".fs-my-datastream-2025.05.01-000002",
  "new_index": ".fs-my-datastream-2025.05.01-000003",
  "rolled_over": true,
  "dry_run": false,
  "lazy": false,
  "conditions": {}
}
```
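You can also roll over the failure store only when it has grown past a threshold. This sketch assumes the rollover API's standard `conditions` body applies to the `::failures` selector the same way it does to a regular data stream rollover:

```console
POST my-datastream::failures/_rollover
{
  "conditions": {
    "max_docs": 10000,
    "max_age": "7d"
  }
}
```

If no condition is met, the response returns `"rolled_over": false` and no new write index is created.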


### Failure store lifecycle

Failure stores have their retention managed using an internal [data stream lifecycle](https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/lifecycle/data-stream). By default, a thirty-day (30d) retention period is applied to failure store data. You can view the active lifecycle for a failure store index by calling the [get data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream):
```console
GET _data_stream/my-datastream
```

```json
{
  "data_streams": [
    {
      "name": "my-datastream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-datastream-2025.05.01-000001",
          "index_uuid": "jUbUNf-8Re-Nca8vJkHnkA",
          "managed_by": "Data stream lifecycle",
          "prefer_ilm": true,
          "index_mode": "standard"
        }
      ],
      "generation": 2,
      "status": "GREEN",
      "template": "my-datastream-template",
      "lifecycle": {
        "enabled": true
      },
      "next_generation_managed_by": "Data stream lifecycle",
      "prefer_ilm": true,
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false,
      "index_mode": "standard",
      "failure_store": { 
        "enabled": true,
        "rollover_on_write": false,
        "indices": [
          {
            "index_name": ".fs-my-datastream-2025.05.05-000002",
            "index_uuid": "oYS2WsjkSKmdazWuS4RP9Q",
            "managed_by": "Data stream lifecycle"  
          }
        ],
        "lifecycle": {
          "enabled": true,
          "effective_retention": "30d",  <3> 
          "retention_determined_by": "default_failures_retention"  
        }
      }
    }
  ]
}
```

<note>
The default retention respects any maximum retention values. If [maximum retention](/elastic/docs-builder/docs/3016/manage-data/lifecycle/data-stream/tutorial-data-stream-retention#what-is-retention) is configured to be lower than thirty days, then the maximum retention is used as the default value.
</note>

You can update the default retention period for failure stores in your deployment by updating the `data_streams.lifecycle.retention.failures_default` cluster setting. New and existing data streams that have no retention configured on their failure stores will use this value to determine their retention period.
```console
PUT _cluster/settings
{
  "persistent": {
    "data_streams.lifecycle.retention.failures_default": "15d"
  }
}
```
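To confirm the new default took effect, you can read it back from the cluster settings. The `filter_path` parameter here is optional and only narrows the response:

```console
GET _cluster/settings?filter_path=persistent.data_streams*
```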

You can also specify the failure store retention period for an individual data stream in its data stream options. Set these in the index template for new data streams, or use the [put data stream options](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-stream-options) API for existing data streams.
```console
PUT _data_stream/my-datastream/_options
{
  "failure_store": {
    "enabled": true,
    "lifecycle": {
      "data_retention": "10d"
    }
  }
}
```
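For new data streams, the same options can be carried by the index template instead. This is a sketch only, assuming data stream options are accepted under the template's `data_stream_options` section (verify the exact key against the index template reference for your version); the template name reuses `my-datastream-template` from the example above:

```console
PUT _index_template/my-datastream-template
{
  "index_patterns": ["my-datastream*"],
  "data_stream": {},
  "template": {
    "data_stream_options": {
      "failure_store": {
        "enabled": true,
        "lifecycle": {
          "data_retention": "10d"
        }
      }
    }
  }
}
```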


### Add and remove from failure store

You can add indices to and remove indices from a failure store using the [modify data stream](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-modify-data-stream) API.
```console
POST _data_stream/_modify
{
  "actions": [
    {
      "remove_backing_index": {
        "data_stream": "my-datastream",
        "index": ".fs-my-datastream-2025.05.05-000002",
        "failure_store": true
      }
    },
    {
      "add_backing_index": {
        "data_stream": "my-datastream",
        "index": "restored-failure-index",
        "failure_store": true
      }
    }
  ]
}
```

This API gives you fine-grained control over the indices in your failure store, allowing you to manage backup and restoration operations as well as isolate failure data for later remediation.
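After modifying the failure store, you can verify its contents by searching it directly. The `::failures` selector directs the search at the failure indices instead of the data stream's regular backing indices:

```console
GET my-datastream::failures/_search
```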

## Cross-cluster search compatibility

<important>
  Accessing the failure store across clusters using `::failures` is not yet supported.
</important>