Get experiment metrics for REST/frontend consumers.
Prior comparison is inline by default: supported score-bearing rows carry
prior_score when a prior completed run exists. Pass
include_prior=false to strip those fields, or
prior_run_id=... to compare against a specific run.
Get experiment metrics — consensus likelihoods on the [0.0, 1.0] scale, where
0.0 is low agreement and 1.0 is total agreement.
The view query parameter selects one of four shapes:
| View | Use it to |
|---|---|
| summary | Read the overall score plus block-specific aggregates. Start here. |
| by_document | Drill into one document and see all its targets, sorted ascending. Requires document_id. |
| by_target | Drill into one target and see its score across every document. Requires target_path. |
| votes | See the per-voter consensus rows for one document/target cell. Requires both document_id and target_path. |
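The per-view parameter requirements in the table can be checked client-side before issuing a request. A minimal sketch, assuming only the view names and required parameters listed above; the helper itself is illustrative:

```python
# Required query parameters per view, taken from the table above.
REQUIRED = {
    "summary": set(),
    "by_document": {"document_id"},
    "by_target": {"target_path"},
    "votes": {"document_id", "target_path"},
}

def validate_view(view: str, **params) -> None:
    """Raise ValueError if a required parameter for this view is missing."""
    provided = {k for k, v in params.items() if v is not None}
    missing = REQUIRED[view] - provided
    if missing:
        raise ValueError(f"view={view!r} requires {sorted(missing)}")
```

For example, requesting votes with only a document_id fails fast instead of round-tripping an invalid request to the server.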
When metrics are out of date, the endpoint returns a stale_metrics error envelope — call Run Experiment to recompute, then retry the metrics call.
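The recompute-then-retry flow can be sketched with injected callables. The error-envelope shape ({"error": {"code": "stale_metrics"}}) and both callables are assumptions for illustration:

```python
def metrics_with_recompute(fetch_metrics, run_experiment):
    """Fetch metrics; on a stale_metrics envelope, recompute and retry once."""
    resp = fetch_metrics()
    if resp.get("error", {}).get("code") == "stale_metrics":
        run_experiment()        # trigger Run Experiment to recompute
        resp = fetch_metrics()  # then retry the metrics call
    return resp
```

Retrying once is deliberate: if the second fetch is still stale, the caller sees the error envelope rather than looping.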
Pass include_prior=false to omit prior-run comparison fields, or
prior_run_id=... to override which run is treated as the prior.
Allowed view values: summary, by_document, by_target, votes.

Successful Response
Run-level summary plus block-specific diagnostics.
prior_run_id and prior_score populate when the request opts into
prior comparison and a completed prior run exists.
Block types: extract, classifier, split, for_each.

For view = "summary", extract-only diagnostics are attached to the summary response.