Create a custom analyzer
When the built-in analyzers do not fulfill your needs, you can create a custom
analyzer which uses the appropriate combination of:
- zero or more character filters
- a tokenizer
- zero or more token filters.
Configuration
The custom analyzer accepts the following parameters:
- type: Analyzer type. Accepts built-in analyzer types. For custom analyzers, use custom or omit this parameter.
- tokenizer: A built-in or customised tokenizer. (Required)
- char_filter: An optional array of built-in or customised character filters.
- filter: An optional array of built-in or customised token filters.
- position_increment_gap: When indexing an array of text values, Elasticsearch inserts a fake "gap" between the last term of one value and the first term of the next value to ensure that a phrase query doesn’t match two terms from different array elements. Defaults to 100. See position_increment_gap for more.
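For example, position_increment_gap can be set directly in the analyzer definition. The sketch below is illustrative only; the index name my-index-000002 and analyzer name my_array_analyzer are assumptions, not part of the examples that follow. It removes the gap entirely so that a phrase query can match terms from adjacent array elements:

PUT my-index-000002
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_array_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": [
            "lowercase"
          ],
          "position_increment_gap": 0
        }
      }
    }
  }
}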
Example configuration
Here is an example that combines the following:
- Character Filter: HTML Strip Character Filter
- Tokenizer: Standard Tokenizer
- Token Filters: Lowercase Token Filter, ASCII Folding Token Filter
PUT my-index-000001
{
"settings": {
"analysis": {
"analyzer": {
"my_custom_analyzer": {
"type": "custom", 1
"tokenizer": "standard",
"char_filter": [
"html_strip"
],
"filter": [
"lowercase",
"asciifolding"
]
}
}
}
}
}
POST my-index-000001/_analyze
{
"analyzer": "my_custom_analyzer",
"text": "Is this <b>déjà vu</b>?"
}
1. For custom analyzers, use a type of custom or omit the type parameter.
The above example produces the following terms:
[ is, this, deja, vu ]
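The _analyze request above only tests the analyzer. To apply my_custom_analyzer at index and search time, reference it from a text field's mapping. The following is a brief sketch that repeats the analyzer from the example above; the index name my-index-000003 and the field name title are illustrative assumptions:

PUT my-index-000003
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_custom_analyzer"
      }
    }
  }
}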
The example above used a tokenizer, token filters, and character filters with their default configurations, but it is possible to create configured versions of each and to use them in a custom analyzer.
Here is a more complicated example that combines the following:
- Character Filter
  - Mapping Character Filter, configured to replace :) with _happy_ and :( with _sad_
- Tokenizer
  - Pattern Tokenizer, configured to split on punctuation characters
- Token Filters
  - Lowercase Token Filter
  - Stop Token Filter, configured to use the pre-defined list of English stop words
Here is an example:
PUT my-index-000001
{
"settings": {
"analysis": {
"analyzer": {
"my_custom_analyzer": { 1
"char_filter": [
"emoticons"
],
"tokenizer": "punctuation",
"filter": [
"lowercase",
"english_stop"
]
}
},
"tokenizer": {
"punctuation": { 2
"type": "pattern",
"pattern": "[ .,!?]"
}
},
"char_filter": {
"emoticons": { 3
"type": "mapping",
"mappings": [
":) => _happy_",
":( => _sad_"
]
}
},
"filter": {
"english_stop": { 4
"type": "stop",
"stopwords": "_english_"
}
}
}
}
}
POST my-index-000001/_analyze
{
"analyzer": "my_custom_analyzer",
"text": "I'm a :) person, and you?"
}
1. Assigns the index a default custom analyzer, my_custom_analyzer. This analyzer uses a custom tokenizer, character filter, and token filter that are defined later in the request. This analyzer also omits the type parameter.
2. Defines the custom punctuation tokenizer.
3. Defines the custom emoticons character filter.
4. Defines the custom english_stop token filter.
The above example produces the following terms:
[ i'm, _happy_, person, you ]
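To inspect how each stage of the chain (character filter, tokenizer, and token filters) transforms the text, the _analyze API also accepts an explain flag. Here is a minimal sketch reusing the request above:

POST my-index-000001/_analyze
{
  "analyzer": "my_custom_analyzer",
  "explain": true,
  "text": "I'm a :) person, and you?"
}

The response breaks the output down after the emoticons character filter, the punctuation tokenizer, and each token filter, which is useful when a custom chain does not produce the terms you expect.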