Create a Hugging Face inference endpoint
Generally available; Added in 8.12.0
Create an inference endpoint to perform an inference task with the hugging_face service.
Supported tasks include: text_embedding, completion, and chat_completion.
To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use.
For Elastic's text_embedding task:
The selected model must support the Sentence Embeddings task. On the new endpoint creation page, select the Sentence Embeddings task under the Advanced Configuration section.
After the endpoint has initialized, copy the generated endpoint URL.
Recommended models for the text_embedding task:
- all-MiniLM-L6-v2
- all-MiniLM-L12-v2
- all-mpnet-base-v2
- e5-base-v2
- e5-small-v2
- multilingual-e5-base
- multilingual-e5-small
For Elastic's chat_completion and completion tasks:
The selected model must support the Text Generation task and expose the OpenAI API. Hugging Face supports both serverless and dedicated endpoints for Text Generation; when creating a dedicated endpoint, select the Text Generation task.
After the endpoint is initialized (for dedicated) or ready (for serverless), confirm that it supports the OpenAI API and that its URL includes the /v1/chat/completions path. Then copy the full endpoint URL for use.
Recommended models for the chat_completion and completion tasks:
- Mistral-7B-Instruct-v0.2
- QwQ-32B
- Phi-3-mini-128k-instruct
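The chat_completion setup above can be sketched as follows. This is a minimal illustration, not the client library itself: the helper name, URL, and access token are placeholders, and the body mirrors the text_embedding example later on this page.

```python
# Sketch of building the PUT _inference/chat_completion/<inference_id>
# request body for the hugging_face service. URL and token are placeholders.

def build_chat_completion_endpoint_body(url: str, api_key: str) -> dict:
    """Build the request body, rejecting URLs that lack the
    /v1/chat/completions path expected for this task type."""
    if "/v1/chat/completions" not in url:
        raise ValueError("URL must include the /v1/chat/completions path")
    return {
        "service": "hugging_face",
        "service_settings": {"api_key": api_key, "url": url},
    }

body = build_chat_completion_endpoint_body(
    "https://example.endpoints.huggingface.cloud/v1/chat/completions",  # placeholder
    "hugging-face-access-token",  # placeholder
)
```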
For Elastic's rerank task:
The selected model must support the sentence-ranking task and expose the OpenAI API.
Hugging Face currently supports only dedicated (not serverless) endpoints for the rerank task.
After the endpoint is initialized, copy the full endpoint URL for use.
Tested models for the rerank task:
- bge-reranker-base
- jina-reranker-v1-turbo-en-GGUF
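For rerank, the request body can also carry task_settings, as in the rerank example at the end of this page. A minimal sketch, with placeholder URL and key and a hypothetical helper name:

```python
# Sketch of the PUT _inference/rerank/<inference_id> request body for the
# hugging_face service, including optional task_settings.

def build_rerank_endpoint_body(url, api_key, return_documents=True, top_n=3):
    return {
        "service": "hugging_face",
        "service_settings": {"api_key": api_key, "url": url},
        "task_settings": {
            "return_documents": return_documents,  # include document text in results
            "top_n": top_n,                        # number of top matches to return
        },
    }

body = build_rerank_endpoint_body("url-endpoint", "hugging-face-access-token")
```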
Required authorization
- Cluster privileges:
manage_inference
Path parameters
- task_type: The type of the inference task that the model will perform. Values are chat_completion, completion, rerank, or text_embedding.
- huggingface_inference_id: The unique identifier of the inference endpoint.

Query parameters
- timeout: Specifies the amount of time to wait for the inference endpoint to be created.
Body (required)
- chunking_settings: The chunking configuration object.
- service: The type of service supported for the specified task type. In this case, hugging_face.
- service_settings: Settings used to install the inference model. These settings are specific to the hugging_face service.
- task_settings: Settings to configure the inference task. These settings are specific to the task type you specified.
PUT _inference/text_embedding/hugging-face-embeddings
{
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
}
}
resp = client.inference.put(
task_type="text_embedding",
inference_id="hugging-face-embeddings",
inference_config={
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
}
},
)
const response = await client.inference.put({
task_type: "text_embedding",
inference_id: "hugging-face-embeddings",
inference_config: {
service: "hugging_face",
service_settings: {
api_key: "hugging-face-access-token",
url: "url-endpoint",
},
},
});
response = client.inference.put(
task_type: "text_embedding",
inference_id: "hugging-face-embeddings",
body: {
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
}
}
)
$resp = $client->inference()->put([
"task_type" => "text_embedding",
"inference_id" => "hugging-face-embeddings",
"body" => [
"service" => "hugging_face",
"service_settings" => [
"api_key" => "hugging-face-access-token",
"url" => "url-endpoint",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"hugging_face","service_settings":{"api_key":"hugging-face-access-token","url":"url-endpoint"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/hugging-face-embeddings"
An example request body for a text_embedding endpoint:
{
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
}
}
An example request body for a rerank endpoint with task_settings:
{
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
},
"task_settings": {
"return_documents": true,
"top_n": 3
}
}
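Once created, an endpoint is invoked with POST _inference/<task_type>/<inference_id>. A minimal sketch of assembling that request path and body; the inference_id and input text are illustrative:

```python
# Sketch: assemble the path and body for invoking a created inference
# endpoint via POST _inference/<task_type>/<inference_id>.

def build_inference_request(task_type: str, inference_id: str, inputs: list):
    path = f"_inference/{task_type}/{inference_id}"
    body = {"input": inputs}
    return path, body

path, body = build_inference_request(
    "text_embedding", "hugging-face-embeddings", ["sample text to embed"]
)
```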