GET /v1/workflows/{workflow_id}/experiments/{experiment_id}/metrics
from retab import Retab

client = Retab()

# Summary view — overall score + block-specific aggregates.
summary = client.workflows.experiments.get_metrics(
    workflow_id="wf_abc123",
    experiment_id="exp_abc",
    view="summary",
)

# Per-target view — score for one field across every document.
target_view = client.workflows.experiments.get_metrics(
    workflow_id="wf_abc123",
    experiment_id="exp_abc",
    view="by_target",
    target_path="line_items.*.unit_price",
)

# Vote matrix for one document/target cell.
votes = client.workflows.experiments.get_metrics(
    workflow_id="wf_abc123",
    experiment_id="exp_abc",
    view="votes",
    document_id="expdoc_xyz",
    target_path="line_items.*.unit_price",
)
{
  "experiment_id": "exp_abc",
  "run_id": "exprun_1",
  "view": "summary",
  "definition_fingerprint": "deadbeef",
  "block_kind": "extract",
  "score": 0.83,
  "prior_score": 0.79,
  "prior_run_id": "exprun_0",
  "documents": [
    { "id": "expdoc_1", "filename": "a.pdf", "score": 0.91, "prior_score": 0.85 }
  ],
  "aggregate": {
    "likelihoods": { "total": 0.92, "vendor.name": 0.78 }
  }
}
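The summary payload can be post-processed directly. A minimal sketch, using the sample response above and assuming a plain dict (the SDK may return a typed object with the same fields), that surfaces the run-over-run delta and the weakest per-field likelihood:

```python
# Sample summary payload, copied from the response above.
summary = {
    "experiment_id": "exp_abc",
    "run_id": "exprun_1",
    "score": 0.83,
    "prior_score": 0.79,
    "documents": [
        {"id": "expdoc_1", "filename": "a.pdf", "score": 0.91, "prior_score": 0.85}
    ],
    "aggregate": {"likelihoods": {"total": 0.92, "vendor.name": 0.78}},
}

# Run-over-run delta; prior_score is null when no completed prior run exists.
delta = None
if summary.get("prior_score") is not None:
    delta = summary["score"] - summary["prior_score"]

# Field with the lowest consensus likelihood, ignoring the "total" aggregate.
per_field = {k: v for k, v in summary["aggregate"]["likelihoods"].items() if k != "total"}
weakest_field = min(per_field, key=per_field.get)

msg = f"score delta vs prior: {delta:+.3f}" if delta is not None else "no prior run"
print(msg, "| weakest field:", weakest_field)
```

The same pattern works for the by_document and by_target views, since every view keys likelihoods by target path.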


Get experiment metrics: consensus likelihoods on the [0.0, 1.0] scale, where 0.0 means low agreement and 1.0 means total agreement. The view query parameter selects one of four response shapes:

View         Use it to
summary      Read the overall score plus block-specific aggregates. Start here.
by_document  Drill into one document and see all of its targets, sorted ascending. Requires document_id.
by_target    Drill into one target and see its score across every document. Requires target_path.
votes        See the per-voter consensus rows for one document/target cell. Requires both document_id and target_path.
When the latest run is stale (the block config or document set changed after the run completed), the response is the stale_metrics error envelope; call Run Experiment to recompute, then retry the metrics call. Pass include_prior=false to omit the prior-run comparison fields, or prior_run_id=... to override which run is treated as the prior.
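The stale check can be handled mechanically on the client. A minimal sketch of the decision step; the envelope layout used here ({"error": {"code": ...}}) is an assumed shape for illustration, and only the stale_metrics code itself comes from the text above:

```python
def should_rerun(response: dict) -> bool:
    """Return True when a metrics response is the stale_metrics error
    envelope, i.e. the experiment must be re-run before metrics are fresh.

    The envelope layout ({"error": {"code": ...}}) is an assumption;
    adapt the lookup to the actual error body.
    """
    error = response.get("error")
    return bool(error) and error.get("code") == "stale_metrics"

# Fresh metrics pass through; a stale envelope signals a recompute.
fresh = {"view": "summary", "score": 0.83}
stale = {"error": {"code": "stale_metrics"}}
print(should_rerun(fresh), should_rerun(stale))
```

In practice you would call Run Experiment when should_rerun returns True, then retry the metrics call once the new run completes.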

Authorizations

Api-Key (string, header, required)

Path Parameters

workflow_id (string, required)
experiment_id (string, required)
Query Parameters

view (enum<string>, default: summary): summary, by_document, by_target, or votes
run_id (string | null)
document_id (string | null)
target_path (string | null)
include_prior (boolean, default: true)
prior_run_id (string | null)
access_token (string | null)
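These are plain query parameters on the GET path shown at the top of the page. A sketch of the raw request URL without the SDK; the base URL is an assumption (check your environment), and the IDs are the placeholder values from the examples:

```python
from urllib.parse import urlencode

BASE = "https://api.retab.com"  # assumed base URL, not confirmed by this page

def metrics_url(workflow_id: str, experiment_id: str, **params) -> str:
    """Build the GET URL for the experiment-metrics endpoint.

    Any documented query parameter (view, run_id, document_id, target_path,
    include_prior, prior_run_id, access_token) can be passed as a keyword;
    None values are dropped so defaults apply server-side.
    """
    path = f"/v1/workflows/{workflow_id}/experiments/{experiment_id}/metrics"
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE}{path}" + (f"?{query}" if query else "")

url = metrics_url("wf_abc123", "exp_abc", view="by_target",
                  target_path="line_items.*.unit_price", include_prior="false")
print(url)
```

Send it with an Api-Key header, per the Authorizations section above.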

Response

Successful Response

Run-level summary plus block-specific diagnostics.

prior_run_id and prior_score populate when the request opts into the prior comparison and a completed prior run exists.

experiment_id (string, required)
run_id (string, required)
block_kind (enum<string>, required): extract, classifier, split, or for_each
view (string, default: summary; allowed value: "summary")
definition_fingerprint (string | null)
score (number | null)
prior_score (number | null)
documents (ExperimentSummaryMetricDocument · object[])
aggregate (ExperimentExtractSummaryAggregate · object): extract-only diagnostics attached to the summary response
prior_run_id (string | null)