---
title: CJK width token filter
description: Normalizes width differences in CJK (Chinese, Japanese, and Korean) characters as follows: Folds full-width ASCII character variants into the equivalent...
url: https://www.elastic.co/elastic/docs-builder/docs/3028/reference/text-analysis/analysis-cjk-width-tokenfilter
products:
  - Elasticsearch
---

# CJK width token filter
Normalizes width differences in CJK (Chinese, Japanese, and Korean) characters as follows:
- Folds full-width ASCII character variants into the equivalent basic Latin characters
- Folds half-width Katakana character variants into the equivalent Kana characters

This filter is included in Elasticsearch's built-in [CJK language analyzer](/elastic/docs-builder/docs/3028/reference/text-analysis/analysis-lang-analyzer#cjk-analyzer). It uses Lucene's [CJKWidthFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html).
<note>
  This token filter can be viewed as a subset of NFKC/NFKD Unicode normalization. See the [`analysis-icu` plugin](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/elasticsearch/plugins/analysis-icu-normalization-charfilter) for full normalization support.
</note>
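Because the filter behaves like a subset of NFKC normalization, its effect on width variants can be approximated with Python's standard `unicodedata` module. This is a sketch for illustration only: full NFKC also applies compatibility foldings (for example, circled numbers and squared abbreviations) that the `cjk_width` filter does not perform.

```python
import unicodedata

# NFKC folds full-width ASCII variants into basic Latin characters...
print(unicodedata.normalize("NFKC", "Ｔｅｓｔ"))  # → Test

# ...and half-width Katakana variants into the equivalent full-width Kana.
print(unicodedata.normalize("NFKC", "ｼｰｻｲﾄﾞﾗｲﾅｰ"))  # → シーサイドライナー
```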


## Example

The following `_analyze` API request uses the CJK width token filter to normalize half-width Katakana:

```json
{
  "tokenizer" : "standard",
  "filter" : ["cjk_width"],
  "text" : "ｼｰｻｲﾄﾞﾗｲﾅｰ"
}
```

The filter produces the following token:
```text
シーサイドライナー
```
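Note that width folding can change the number of code points in a token: the half-width sequence ﾄ followed by a half-width dakuten (two code points) composes into the single character ド. A quick illustration, again using Python's `unicodedata` NFKC normalization as a stand-in for the filter:

```python
import unicodedata

half = "ﾄﾞ"  # half-width ﾄ (U+FF84) + half-width dakuten (U+FF9E)
full = unicodedata.normalize("NFKC", half)

print(full, len(half), len(full))  # → ド 2 1
```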


## Add to an analyzer

The following [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) request uses the CJK width token filter to configure a new [custom analyzer](/elastic/docs-builder/docs/3028/manage-data/data-store/text-analysis/create-custom-analyzer).
```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "standard_cjk_width": {
          "tokenizer": "standard",
          "filter": [ "cjk_width" ]
        }
      }
    }
  }
}
```