POST /v1/workflows/{workflow_id}/experiments/{experiment_id}/run


Trigger an experiment run with the current draft block configuration. This is the call that produces metrics: it re-processes every experiment document through the block with n_consensus parallel passes per document. Use it after creating an experiment, after editing the block, or after changing the document set. The endpoint is async — it returns a job_id immediately. Poll the job with Get Job until it reaches a terminal status, then read the new metrics with Get Experiment Metrics. Optional fields:
  • n_consensus — override the experiment’s stored consensus count for this run only (3, 5, or 7).
  • retry_failed_only — re-run only the documents that failed in the latest run instead of starting a fresh run. Returns 409 if the latest run has nothing to retry.
from retab import Retab

client = Retab()

run = client.workflows.experiments.runs.create(
    workflow_id="wf_abc123",
    experiment_id="exp_abc",
)
print(run.job_id)

# Wait for the metrics to be ready, then read them.
client.jobs.wait_for_completion(run.job_id)
metrics = client.workflows.experiments.get_metrics(
    workflow_id="wf_abc123",
    experiment_id="exp_abc",
    view="summary",
)
The create call itself returns a run summary; the job is still pending at this point, and metrics become available once it completes:
{
  "experiment_id": "exp_abc",
  "run_id": "exprun_2",
  "job_id": "job_99",
  "status": "pending",
  "definition_fingerprint": "0ff93ddc7cefcb42",
  "document_count": 12,
  "n_consensus": 5,
  "previous_run": {
    "run_id": "exprun_1",
    "definition_fingerprint": "ddd95baadce6045f",
    "score": 0.81
  }
}
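The SDK's `client.jobs.wait_for_completion` handles the waiting for you; if you are polling Get Job yourself, the loop looks roughly like this. The terminal status names here ("completed", "failed") are assumptions, not confirmed API values:

```python
import time

TERMINAL_STATUSES = {"completed", "failed"}  # assumed terminal names

def poll_until_terminal(fetch_status, interval_s: float = 2.0,
                        timeout_s: float = 600.0) -> str:
    """Call fetch_status() until it returns a terminal status or we time out.

    fetch_status is any zero-argument callable returning the job's
    current status string, e.g. a wrapper around Get Job.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval_s)
    raise TimeoutError("job did not reach a terminal status in time")
```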

Authorizations

Api-Key (string, header, required)

Path Parameters

workflow_id (string, required)
experiment_id (string, required)

Query Parameters

access_token (string | null)

Body (application/json)

n_consensus (enum<integer> | null): available options 3, 5, 7
retry_failed_only (boolean, default false)

Response

Successful Response

experiment_id (string, required)
run_id (string, required)
job_id (string, required)
status (string, required)
definition_fingerprint (string, required)
document_count (integer, required)
n_consensus (enum<integer>, required): available options 3, 5, 7
previous_run (PreviousRunSummary object)
Slim summary of the previous completed run, attached to a RunExperimentResponse so the caller can compute deltas without an extra fetch.
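Since `previous_run` carries the prior score and fingerprint, a caller can compare a fresh metrics score against it with no extra fetch. A small sketch: the field names come from this page, but the delta logic and the idea of passing in a score obtained from Get Experiment Metrics are assumptions:

```python
def summarize_against_previous(run_response: dict, current_score: float) -> dict:
    """Compare a fresh metrics score against the previous_run summary.

    current_score would come from Get Experiment Metrics after the job
    completes (an assumption of this sketch). Returns the score delta
    and whether the block definition changed between runs.
    """
    prev = run_response.get("previous_run")
    if prev is None:
        # First run of the experiment: nothing to compare against.
        return {"score_delta": None, "definition_changed": None}
    return {
        "score_delta": round(current_score - prev["score"], 4),
        "definition_changed": (
            run_response["definition_fingerprint"]
            != prev["definition_fingerprint"]
        ),
    }

# Using the example response fields shown above:
example = {
    "definition_fingerprint": "0ff93ddc7cefcb42",
    "previous_run": {
        "run_id": "exprun_1",
        "definition_fingerprint": "ddd95baadce6045f",
        "score": 0.81,
    },
}
summary = summarize_against_previous(example, current_score=0.86)
# -> {'score_delta': 0.05, 'definition_changed': True}
```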