﻿---
title: Keyword tokenizer
description: The keyword tokenizer is a noop tokenizer that accepts whatever text it is given and outputs the exact same text as a single term. It can be combined...
url: https://www.elastic.co/elastic/docs-builder/docs/3016/reference/text-analysis/analysis-keyword-tokenizer
products:
  - Elasticsearch
---

# Keyword tokenizer
The `keyword` tokenizer is a noop tokenizer that accepts whatever text it is given and outputs the exact same text as a single term. It can be combined with token filters to normalise output, e.g. lower-casing email addresses.

## Example output

```json
{
  "tokenizer": "keyword",
  "text": "New York"
}
```

The above text would produce the following single term:
```text
[ New York ]
```


## Combine with token filters

You can combine the `keyword` tokenizer with token filters to normalise structured data, such as product IDs or email addresses.
For example, the following [analyze API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-analyze) request uses the `keyword` tokenizer and [`lowercase`](https://www.elastic.co/elastic/docs-builder/docs/3016/reference/text-analysis/analysis-lowercase-tokenfilter) filter to convert an email address to lowercase.
```json
{
  "tokenizer": "keyword",
  "filter": [ "lowercase" ],
  "text": "john.SMITH@example.COM"
}
```

The request produces the following token:
```text
[ john.smith@example.com ]
```
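Conceptually, the `keyword` tokenizer is just the identity function over its input, and token filters then transform that single token. The following Python sketch is purely illustrative (it is not Elasticsearch code) and mirrors the request above:

```python
def keyword_tokenizer(text):
    # A noop tokenizer: emit the entire input unchanged as one token.
    return [text]

def lowercase_filter(tokens):
    # Mirror the `lowercase` token filter: lowercase every token.
    return [token.lower() for token in tokens]

tokens = lowercase_filter(keyword_tokenizer("john.SMITH@example.COM"))
print(tokens)  # ['john.smith@example.com']
```

Because the tokenizer never splits the input, the email address survives as a single normalised token, which is what makes this combination useful for structured identifiers.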


## Configuration

The `keyword` tokenizer accepts the following parameters:
<definitions>
  <definition term="buffer_size">
    The number of characters read into the term buffer in a single pass. Defaults to `256`. The term buffer will grow by this size until all the text has been consumed. It is advisable not to change this setting.
  </definition>
</definitions>
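If you do need to change `buffer_size`, set it on a custom tokenizer in the index settings. The following is a sketch; the analyzer and tokenizer names (`my_analyzer`, `my_keyword_tokenizer`) are illustrative:

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_keyword_tokenizer",
          "filter": [ "lowercase" ]
        }
      },
      "tokenizer": {
        "my_keyword_tokenizer": {
          "type": "keyword",
          "buffer_size": 512
        }
      }
    }
  }
}
```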