client.evals.split is the SDK surface for evaluating document splitting. You define the target subdocuments, label datasets with expected split output, create iterations with split-specific overrides, and compare how changes affect split quality before publishing.
Core Resources
Eval
The top-level split eval resource manages the base subdocument config.
- create(name, split_config): create a split eval.
- get(eval_id), list(...), update(eval_id, ...), delete(eval_id): standard CRUD methods.
- publish(eval_id, origin=None): promote the draft config to the published config.
- process(eval_id, iteration_id=None, document=..., ...): run a document through the split eval.
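The create-process-publish lifecycle can be sketched as a single helper. This is a hedged sketch, not SDK source: the client is passed in already constructed, and the split_config shape, the sample document name, and the ev.id attribute are assumptions used to illustrate the call order.

```python
from typing import Any


def run_split_eval_lifecycle(client: Any, name: str, split_config: dict) -> Any:
    """Create a split eval, trial-run a document, then publish the draft.

    `client` is an already-constructed SDK client; the response attribute
    names used here are assumptions.
    """
    # Create a draft split eval from the subdocument config.
    ev = client.evals.split.create(name=name, split_config=split_config)

    # Run a sample document through the draft config.
    result = client.evals.split.process(ev.id, document="invoice_batch.pdf")

    # Promote the draft config once the output looks right.
    client.evals.split.publish(ev.id)
    return result
```

Passing the client in makes the flow easy to exercise against a stub before pointing it at a live account.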
process() responses are RetabParsedChatCompletion objects, so read the structured result from the parsed message content.
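A minimal sketch of pulling that structured result out of a response, assuming the OpenAI-style parsed-completion shape (choices[0].message.parsed); verify the exact attribute path against the SDK's own models.

```python
def parsed_splits(completion) -> list:
    """Return the structured split result from a process() response.

    Assumes the completion.choices[0].message.parsed path; whether the
    parsed payload exposes a splits attribute is also an assumption.
    """
    message = completion.choices[0].message
    parsed = message.parsed
    # Fall back to the whole parsed payload if there is no splits attribute.
    return getattr(parsed, "splits", parsed)
```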
Datasets
Datasets hold expected split outcomes for sample documents.
- datasets.create(..., base_split_config=..., base_inference_settings=...): create a dataset tied to a split config.
- datasets.add_document(...): add a document and optional expected split output in prediction_data.
- datasets.update_document(...): update validation_flags, prediction_data, or extraction_id.
- datasets.process_document(...): run one dataset document against the base eval.
Expected split output lives under prediction_data.prediction.splits.
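Labeling a dataset document can be sketched as follows. The prediction_data nesting mirrors the prediction.splits path noted above; the dataset_id positional argument, the document keyword, and the shape of each split entry are assumptions.

```python
from typing import Any


def label_dataset_document(
    client: Any, dataset_id: str, document: str, expected_splits: list
) -> Any:
    """Add a document with its expected split output to a dataset.

    Only the prediction -> splits nesting comes from the docs; the rest
    of the call shape is illustrative.
    """
    prediction_data = {"prediction": {"splits": expected_splits}}
    return client.evals.split.datasets.add_document(
        dataset_id,
        document=document,
        prediction_data=prediction_data,
    )
```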
Iterations
Iterations are where you refine descriptions, partition keys, and model settings.
- iterations.create(...): create a new iteration.
- iterations.update_draft(...): update draft inference_settings or split_config_overrides.
- iterations.get_schema(..., use_draft=True) in Python, or getSchema(..., { useDraft: true }) in Node: inspect the materialized split schema with pending draft changes.
- iterations.finalize(...): finalize the draft iteration.
- iterations.process_documents(...): queue one dataset document for iteration processing.
- iterations.get_metrics(...): fetch split accuracy metrics for the iteration.
Split-specific draft changes go through split_config_overrides, which includes fields like descriptions_override and partition_keys_override.
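The tune-then-measure loop above can be sketched in one helper. The override field names come from the text; the override values, the eval_id positional argument, and the it.id attribute are invented for illustration.

```python
from typing import Any


def tune_iteration(client: Any, eval_id: str) -> Any:
    """Create an iteration, apply split overrides, finalize, and score it."""
    it = client.evals.split.iterations.create(eval_id)

    # Both override field names appear in the docs; the values are examples.
    client.evals.split.iterations.update_draft(
        it.id,
        split_config_overrides={
            "descriptions_override": {"invoice": "A single customer invoice."},
            "partition_keys_override": ["invoice_number"],
        },
    )

    # Inspect the materialized schema with the pending draft changes applied.
    client.evals.split.iterations.get_schema(it.id, use_draft=True)

    client.evals.split.iterations.finalize(it.id)
    return client.evals.split.iterations.get_metrics(it.id)
```

Comparing get_metrics() output across iterations is how you judge whether a description or partition-key change actually improved split quality.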
Templates
Split templates expose the same helper methods as the other eval resources.
- templates.list(...): browse split templates.
- templates.get(template_id): fetch a split template.
- templates.clone(template_id, name=None): create a new split eval from that template.
- templates.list_builder_documents(template_id): fetch the sample builder documents.
- templates.list_builder_document_previews(template_ids): fetch previews for multiple templates.
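Cloning from a template is the fastest way to bootstrap a split eval. A hedged sketch, assuming the Python list() call returns an iterable of template models carrying an id attribute:

```python
from typing import Any


def clone_first_template(client: Any, name: str) -> Any:
    """Pick the first available split template and clone it into a new eval.

    Browsing by name or previewing builder documents first would be the
    more deliberate workflow; this just shows the call shapes.
    """
    templates = client.evals.split.templates.list()
    first = next(iter(templates))
    return client.evals.split.templates.clone(first.id, name=name)
```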
SDK Notes
- Python list() methods on evals, datasets, and iterations return model lists; Node returns paginated API payloads with data.
- Split evals do not expose process_stream() in the SDK.
- Split eval datasets and iterations both accept prediction_data, which is where expected split output is usually stored for scoring.
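If you share fixtures or scoring scripts across the two SDKs, a tiny normalizer smooths over the two list shapes. Purely illustrative; it only relies on the model-list vs data-payload difference stated above.

```python
def as_items(listing) -> list:
    """Return a plain list from either SDK list shape.

    Python list() methods return model lists directly; the Node SDK
    returns a paginated payload whose records live under data.
    """
    if isinstance(listing, dict) and "data" in listing:
        return list(listing["data"])
    return list(listing)
```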