Authorizations
Headers
Query Parameters
Body
Document to be analyzed
Model used for chat completion
JSON schema format used to validate the output data.
Resolution of the image sent to the LLM
Required range: 96 <= x <= 300
Temperature for sampling. If not provided, the default temperature for the model will be used.
Default: 0
The effort level for the model to reason about the input data. If not provided, the default reasoning effort for the model will be used.
Options: none, minimal, low, medium, high
Number of consensus models to use for extraction. If greater than 1, the temperature cannot be 0.
If true, the extraction will be streamed to the user using the active WebSocket connection
Seed for the random number generator. If not provided, a random seed will be generated.
Default: null
If true, the extraction will be stored in the database
The modality of the document to be analyzed
Options: text, image, native
If set, keys to be used for the extraction of long lists of data using Parallel OCR
Example:
{
  "products": "identity.id",
  "properties": "ID"
}
User-defined metadata to associate with this extraction
Extraction ID to use for this extraction. If not provided, a new ID will be generated.
Additional chat messages to append after the document content messages. Useful for providing extra context or instructions.
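The parameter list above comes with two cross-field constraints worth checking before sending a request: the image resolution must fall in the documented 96–300 range, and the temperature cannot be 0 when more than one consensus model is used. The sketch below builds a request body and validates those rules. The field names (`document`, `model`, `json_schema`, `image_resolution_dpi`, `temperature`, `n_consensus`, `modality`) are assumptions — this reference lists descriptions without names, so consult the actual API spec for the exact keys.

```python
# Hypothetical extraction request body; field names are assumed, not confirmed
# by this reference, which lists descriptions only.

def validate_payload(payload: dict) -> list[str]:
    """Check the constraints documented in the parameter reference above."""
    errors = []
    dpi = payload.get("image_resolution_dpi")
    if dpi is not None and not (96 <= dpi <= 300):
        errors.append("image_resolution_dpi must satisfy 96 <= x <= 300")
    # Documented rule: with more than one consensus model, temperature cannot be 0.
    if payload.get("n_consensus", 1) > 1 and payload.get("temperature", 0) == 0:
        errors.append("temperature cannot be 0 when n_consensus > 1")
    if payload.get("modality") not in (None, "text", "image", "native"):
        errors.append("modality must be one of: text, image, native")
    return errors

payload = {
    "document": {"url": "https://example.com/invoice.pdf"},  # document to analyze
    "model": "gpt-4o",                  # model used for chat completion (example value)
    "json_schema": {"type": "object"},  # schema used to validate the output data
    "image_resolution_dpi": 150,
    "n_consensus": 3,
    "temperature": 0.5,                 # must be non-zero because n_consensus > 1
    "modality": "native",
}

print(validate_payload(payload))  # [] -> payload satisfies the documented rules
```

Running the same check with `temperature` set to 0 would return the consensus error, which is the case the reference calls out explicitly.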
Response
Successful Response
Allowed value: "chat.completion"
Options: auto, default, flex, scale, priority
Object defining the uncertainties of the fields extracted when using consensus. Follows the same structure as the extraction object.
Flag indicating if the extraction requires human review
Timestamp of the request
Timestamp of the first token of the document. If non-streaming, set to last_token_at
Timestamp of the last token of the document
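The three response timestamps above are enough to derive basic latency metrics, and the non-streaming rule (first token timestamp set to `last_token_at`) makes the streaming duration come out as zero in that case. A small sketch, assuming the request timestamp field is named `requested_at` (`first_token_at` and `last_token_at` are named in the text; the request field name is an assumption):

```python
from datetime import datetime, timezone

def token_latencies(requested_at: datetime, first_token_at: datetime,
                    last_token_at: datetime) -> dict:
    """Derive timing metrics from the response timestamps described above."""
    return {
        # Time from the request until the first token arrived.
        "time_to_first_token": (first_token_at - requested_at).total_seconds(),
        # Duration of the token stream; 0 for non-streaming responses,
        # where first_token_at is set to last_token_at.
        "streaming_duration": (last_token_at - first_token_at).total_seconds(),
    }

t0 = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)  # requested_at
t1 = datetime(2024, 1, 1, 12, 0, 2, tzinfo=timezone.utc)  # first_token_at
t2 = datetime(2024, 1, 1, 12, 0, 5, tzinfo=timezone.utc)  # last_token_at
print(token_latencies(t0, t1, t2))
# {'time_to_first_token': 2.0, 'streaming_duration': 3.0}
```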