Compatible third party models

Note

The minimum dedicated ML node size for deploying and using the natural language processing models is 16 GB in Elasticsearch Service if deployment autoscaling is turned off. Turning on autoscaling is recommended because it allows your deployment to dynamically adjust resources based on demand. Better performance can be achieved by using more allocations or more threads per allocation, which requires bigger ML nodes. Autoscaling provides bigger nodes when required. If autoscaling is turned off, you must provide suitably sized nodes yourself.

The Elastic Stack machine learning features support transformer models that conform to the standard BERT model interface and use the WordPiece tokenization algorithm.

The current list of supported architectures is:

  • BERT
  • BART
  • DPR bi-encoders
  • DeBERTa
  • DistilBERT
  • ELECTRA
  • MobileBERT
  • RoBERTa
  • RetriBERT
  • MPNet
  • SentenceTransformers bi-encoders with the above transformer architectures
  • XLM-RoBERTa

In general, any trained model that has a supported architecture is deployable in Elasticsearch by using eland. However, it is not possible to test every third party model. The following lists are therefore provided for informational purposes only and may not be current. Elastic makes no warranty or assurance that the machine learning features will continue to interoperate with these third party models in the way described, or at all.
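
For example, such a model can be imported from the Hugging Face model hub with eland's Python API (the eland_import_hub_model command line script provides the same functionality). The following is a minimal sketch based on eland's documented example; the model ID, connection details, and local directory are placeholders you must adapt to your own deployment.

from pathlib import Path

from elasticsearch import Elasticsearch
from eland.ml.pytorch import PyTorchModel
from eland.ml.pytorch.transformers import TransformerModel

# Placeholder connection details for your cluster.
es = Elasticsearch("https://localhost:9200", api_key="<api-key>")

# Download a model with a supported architecture from the Hugging Face hub
# and convert it to the TorchScript representation that Elasticsearch expects.
tm = TransformerModel(model_id="elastic/distilbert-base-cased-finetuned-conll03-english", task_type="ner")

tmp_dir = "models"
Path(tmp_dir).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_dir)

# Import the converted model into Elasticsearch.
ptm = PyTorchModel(es, tm.elasticsearch_model_id())
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)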

These models are listed by NLP task; for more information about those tasks, refer to Overview.

Models highlighted in bold in the list below are recommended for evaluation purposes and to get started with the Elastic natural language processing features.

Third party fill-mask models

Third party named entity recognition models

Third party question answering models

Third party sparse embedding models

Sparse embedding models should be configured with the text_expansion task type.
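
For example, when converting such a model with eland's TransformerModel wrapper (as in the import sketch above), the task type is passed at conversion time; the model ID below is a placeholder for a supported sparse embedding model.

from eland.ml.pytorch.transformers import TransformerModel

# Convert a sparse embedding model for the text_expansion task type.
tm = TransformerModel(model_id="<sparse-embedding-model-id>", task_type="text_expansion")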

Third party text embedding models

Text embedding models are designed to work with specific scoring functions that calculate the similarity between the embeddings they produce. Examples of typical scoring functions are cosine, dot product, and Euclidean distance (also known as l2_norm).

The embeddings produced by these models should be indexed in Elasticsearch using the dense vector field type with an appropriate similarity function chosen for the model.
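
For illustration, the following sketch creates such an index with the Python Elasticsearch client. The index name is an assumption, and the dims and similarity values must match the embedding size and training objective of the model you deploy.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection details

es.indices.create(
    index="my-text-embeddings",          # assumed index name
    mappings={
        "properties": {
            "text": {"type": "text"},
            "text_embedding": {
                "type": "dense_vector",
                "dims": 384,             # must match the embedding size of the model
                "index": True,
                "similarity": "cosine",  # or "dot_product" / "l2_norm", as suits the model
            },
        }
    },
)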

To find similar embeddings in Elasticsearch, use the efficient approximate k-nearest neighbor (kNN) search API with a text embedding as the query vector. Approximate kNN search uses the similarity function defined in the dense vector field mapping to calculate the relevance. For the best results, the function must be one of the suitable similarity functions for the model.
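
A minimal sketch of such a query with the Python client, assuming the index from the mapping sketch above and a query vector produced by the same text embedding model:

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection details

# The query vector must come from the same model that produced the indexed
# embeddings; it is truncated here for brevity.
query_vector = [0.12, -0.03, 0.27]  # assumed to have the same dims as the mapping

response = es.search(
    index="my-text-embeddings",
    knn={
        "field": "text_embedding",
        "query_vector": query_vector,
        "k": 10,
        "num_candidates": 100,
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])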

Using SentenceTransformerWrapper:

Using DPREncoderWrapper:

Third party text classification models

Third party text similarity models

You can use these text similarity models for semantic re-ranking.
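
For example, once such a model is imported and deployed, it can be called through the infer trained model API to score candidate passages against a query. The model ID below is a placeholder for a text similarity model you have imported with eland.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection details

# Score each passage against the query text; higher scores indicate greater similarity.
response = es.ml.infer_trained_model(
    model_id="<text-similarity-model-id>",
    docs=[
        {"text_field": "Berlin has a population of 3.5 million."},
        {"text_field": "Paris is the capital of France."},
    ],
    inference_config={
        "text_similarity": {"text": "What is the capital of France?"}
    },
)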

Third party zero-shot text classification models

Expected model output

Models used for each NLP task type must output tensors of a specific format to be used in the Elasticsearch NLP pipelines.

Here are the expected outputs for each task type.

Fill mask expected model output

Fill mask is a specific kind of token classification; it is the base training task of many transformer models.

For the Elastic Stack's fill mask NLP task to understand the model output, it must have a specific format: a float tensor with shape (<number of sequences>, <number of tokens>, <vocab size>).

Here is an example with a single sequence "The capital of [MASK] is Paris" and with vocabulary ["The", "capital", "of", "is", "Paris", "France", "[MASK]"].

Should output:

[
   [
     [ 0, 0, 0, 0, 0, 0, 0 ], 1
     [ 0, 0, 0, 0, 0, 0, 0 ], 2
     [ 0, 0, 0, 0, 0, 0, 0 ], 3
     [ 0.01, 0.01, 0.3, 0.01, 0.2, 1.2, 0.1 ], 4
     [ 0, 0, 0, 0, 0, 0, 0 ], 5
     [ 0, 0, 0, 0, 0, 0, 0 ] 6
   ]
]

  1. The
  2. capital
  3. of
  4. [MASK]
  5. is
  6. Paris

The predicted value here for [MASK] is "France" with a score of 1.2.
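
As an illustration of how that prediction is read from the tensor, the following sketch uses the values from the example above:

import numpy as np

vocab = ["The", "capital", "of", "is", "Paris", "France", "[MASK]"]

# Model output for the single sequence, shape (1, 6, 7): one row of
# vocabulary scores per input token.
output = np.array([[
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [0.01, 0.01, 0.3, 0.01, 0.2, 1.2, 0.1],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]])

mask_position = 3                   # index of [MASK] in the token sequence
scores = output[0, mask_position]   # vocabulary scores for the masked token
print(vocab[int(np.argmax(scores))], scores.max())  # France 1.2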

Named entity recognition expected model output

Named entity recognition is a specific token classification task. Each token in the sequence is scored against a specific set of classification labels. For the Elastic Stack, we use Inside-Outside-Beginning (IOB) tagging. Elastic supports any NER entities as long as they are IOB tagged. The default values are: "O", "B_MISC", "I_MISC", "B_PER", "I_PER", "B_ORG", "I_ORG", "B_LOC", "I_LOC".

The "O" entity label indicates that the current token is outside any entity. "I" indicates that the token is inside an entity. "B" indicates the beginning of an entity. "MISC" is a miscellaneous entity. "LOC" is a location. "PER" is a person. "ORG" is an organization.

The response format must be a float tensor with shape (<number of sequences>, <number of tokens>, <number of classification labels>).

Here is an example with a single sequence "Waldo is in Paris":

[
   [
//    "O", "B_MISC", "I_MISC", "B_PER", "I_PER", "B_ORG", "I_ORG", "B_LOC", "I_LOC"
     [ 0,  0,         0,       0.4,     0.5,     0,       0.1,     0,       0 ], 1
     [ 1,  0,         0,       0,       0,       0,       0,       0,       0 ], 2
     [ 1,  0,         0,       0,       0,       0,       0,       0,       0 ], 3
     [ 0,  0,         0,       0,       0,       0,       0,       0,       1.0 ] 4
   ]
]

  1. Waldo
  2. is
  3. in
  4. Paris
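
As a sketch of how those per-token scores map back to IOB labels, using the values from the example above:

import numpy as np

labels = ["O", "B_MISC", "I_MISC", "B_PER", "I_PER", "B_ORG", "I_ORG", "B_LOC", "I_LOC"]
tokens = ["Waldo", "is", "in", "Paris"]

# Model output for the single sequence, shape (1, 4, 9): one row of label
# scores per input token.
output = np.array([[
    [0, 0, 0, 0.4, 0.5, 0, 0.1, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 1.0],
]])

# The highest-scoring label per token tags Waldo as a person and Paris as a location.
for token, scores in zip(tokens, output[0]):
    print(token, labels[int(np.argmax(scores))])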

Text embedding expected model output

Text embedding allows for semantic embedding of text for dense information retrieval.

The output of the model must be the embedding itself, without any additional pooling.

Eland applies the necessary wrapping for the models listed above. If you supply your own model, it must output the embedding for each inferred sequence.
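
As an illustrative sketch of what this means in practice (an assumption, not eland's actual implementation), the wrapper below applies mean pooling inside the model so that its forward pass returns one embedding per sequence directly; the model name is a placeholder for a supported architecture.

import torch
from transformers import AutoModel, AutoTokenizer

class EmbeddingWrapper(torch.nn.Module):
    """Wraps a transformer so its forward pass returns pooled embeddings directly."""

    def __init__(self, model_name: str):
        super().__init__()
        self.model = AutoModel.from_pretrained(model_name)

    def forward(self, input_ids, attention_mask):
        token_embeddings = self.model(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Mean pooling over the non-padding tokens; the result has shape
        # (<number of sequences>, <embedding size>).
        mask = attention_mask.unsqueeze(-1).float()
        return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

model_name = "sentence-transformers/all-MiniLM-L6-v2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EmbeddingWrapper(model_name)

inputs = tokenizer(["The capital of France is Paris"], return_tensors="pt")
embeddings = model(inputs["input_ids"], inputs["attention_mask"])  # shape (1, <embedding size>)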

Text classification expected model output

With text classification (for example, sentiment analysis), the entire sequence is classified. The output of the model must be a float tensor with shape (<number of sequences>, <number of classification labels>).

Here is an example with two sequences for a binary classification model of "happy" and "sad":

[
//   happy, sad
   [ 0,     1], 1
   [ 1,     0] 2
]

  1. first sequence
  2. second sequence

Zero-shot text classification expected model output

Zero-shot text classification allows text to be classified for arbitrary labels not necessarily part of the original training. Each sequence is combined with each label given some hypothesis template. The model then scores each of these combinations according to [entailment, neutral, contradiction]. The output of the model must be a float tensor with shape (<number of sequences>, <number of labels>, 3).

Here is an example with a single sequence classified against 4 labels:

[
   [
//     entailment, neutral, contradiction
     [ 0.5,        0.1,     0.4], 1
     [ 0,          0,       1], 2
     [ 1,          0,       0], 3
     [ 0.7,        0.2,     0.1] 4
   ]
]

  1. first label
  2. second label
  3. third label
  4. fourth label
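
As a sketch of how a single predicted label could be derived from such an output, one common approach is to compare the entailment scores across labels, for example with a softmax. The label names are placeholders and the values are taken from the example above.

import numpy as np

labels = ["label_1", "label_2", "label_3", "label_4"]  # placeholder label names

# Model output for one sequence, shape (1, 4, 3): [entailment, neutral,
# contradiction] scores for each sequence/label combination.
output = np.array([[
    [0.5, 0.1, 0.4],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
    [0.7, 0.2, 0.1],
]])

entailment = output[0, :, 0]                            # entailment score per label
probs = np.exp(entailment) / np.exp(entailment).sum()   # softmax across labels
print(labels[int(np.argmax(probs))])                    # label_3 has the highest score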