Embeddings
POST /v1/embeddings
Generate embeddings for the given input text using the specified model. The `model` field in the request body specifies the model to use; check the /models route for a complete list of embedding models. You can specify the encoding format (`float` or `np`) and the dimensions (up to 1536) of the output embedding vector.
Request
- application/json
Body
Embedding generation query
model string (required)
Model to use for generating embeddings.
input string (required)
Input text to generate embeddings for.
encoding_format string
Possible values: [`float`, `np`]
Format of the output encoding. Only supported by OpenAI.
dimensions integer
Possible values: <= 1536
Number of dimensions for the embedding vector. The maximum ranges from 384 to 1536 depending on the model; check the /models route for each model's limit. Only supported by OpenAI.
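To illustrate the request body described above, here is a minimal Python sketch that assembles and validates the JSON payload client-side. The function name and the validation logic are illustrative assumptions; the field names and limits come from the parameter list above.

```python
def build_embeddings_request(model, text, encoding_format=None, dimensions=None):
    """Assemble the JSON body for POST /v1/embeddings.

    model and input are required; encoding_format and dimensions are
    optional and only supported by OpenAI models.
    """
    body = {"model": model, "input": text}
    if encoding_format is not None:
        # Possible values per the schema: "float" or "np".
        if encoding_format not in ("float", "np"):
            raise ValueError("encoding_format must be 'float' or 'np'")
        body["encoding_format"] = encoding_format
    if dimensions is not None:
        # The documented upper bound is 1536; the per-model maximum may
        # be lower (see the /models route).
        if not 1 <= dimensions <= 1536:
            raise ValueError("dimensions must be between 1 and 1536")
        body["dimensions"] = dimensions
    return body
```

The returned dict can be serialized with `json.dumps` and sent as the `application/json` request body.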
Responses
- 200
Embedding vector generated successfully
- application/json
Schema
- object string
- data object[]
  - object string
  - index integer
  - embedding object
    - object string
    - index integer
    - embedding number[]
- provider string
- model string
- usage object
  - prompt_tokens integer
  - total_tokens integer
  - prompt_characters integer
  - cost string
  - latency_ms integer
```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": {
        "object": "embedding",
        "index": 0,
        "embedding": [
          [
            0.015207428,
            -0.0031979256,
            -0.042056903,
            0.03562467,
            0.022221763,
            -0.0012624348,
            0.013301042
          ]
        ]
      }
    }
  ],
  "provider": "openai",
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 6,
    "total_tokens": 6,
    "prompt_characters": 20,
    "cost": "0.0000001260",
    "latency_ms": 349
  }
}
```
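Since the schema allows the `embedding` field of a data item to be either a nested object or a plain `number[]`, a small helper that normalizes both shapes can be useful. This is a sketch assuming the two shapes shown in the schema and example above; the function name is illustrative.

```python
def extract_vector(item):
    """Return the raw embedding vector from one entry of the `data` array.

    Handles both documented shapes: a nested embedding object (as in the
    example response above) and a plain number[].
    """
    emb = item["embedding"]
    if isinstance(emb, dict):
        # Nested variant: the object carries its own "embedding" field.
        emb = emb["embedding"]
    if emb and isinstance(emb[0], list):
        # The example wraps the vector in an extra list level.
        emb = emb[0]
    return emb

# Entry shaped like the example response above.
item = {
    "object": "embedding",
    "index": 0,
    "embedding": {
        "object": "embedding",
        "index": 0,
        "embedding": [[0.015207428, -0.0031979256, -0.042056903,
                       0.03562467, 0.022221763, -0.0012624348, 0.013301042]],
    },
}
vector = extract_vector(item)
```

The same helper returns a flat `number[]` untouched, so callers need not branch on the response variant.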