POST /v1/workflows/runs/export_payload


Export run results for one block as structured CSV. Useful for pulling extract or classify outputs out of Retab in bulk for downstream analytics, BI tools, or spreadsheets. You pick:
  • Which workflow + which block to export (workflow_id, block_id).
  • What to export for that block (export_source):
    • "outputs" (default) — what the block produced.
    • "inputs" — what the block received.
  • Which runs to include — either a fixed list (selected_run_ids) or all runs that match the filters (status, exclude_status, from_date, to_date, trigger_types).
  • Column ordering via preferred_columns. Columns you don’t list still appear, in the order the block emits them.
The response carries the CSV as a string plus the row / column counts. Write it directly to a file or hand it to your CSV reader of choice.
from retab import Retab

client = Retab()

# Export every successful run's extract output
export = client.workflows.runs.export(
    workflow_id="wf_abc123xyz",
    block_id="extract-1",
    export_source="outputs",
    status="completed",
    from_date="2026-04-01",
    to_date="2026-04-30",
    preferred_columns=["invoice_number", "total", "vendor.name"],
)

print(f"{export.rows} rows, {export.columns} columns")
with open("invoices.csv", "w") as f:
    f.write(export.csv_data)

# Export a hand-picked set of runs
export = client.workflows.runs.export(
    workflow_id="wf_abc123xyz",
    block_id="extract-1",
    selected_run_ids=["run_abc123", "run_def456"],
)
Example response:
{
  "csv_data": "run_id,invoice_number,total,vendor.name\nrun_abc123,INV-2026-001,1234.56,Acme Corp\nrun_def456,INV-2026-002,987.00,Globex\n",
  "rows": 2,
  "columns": 4
}
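Rather than writing the CSV to disk, the csv_data string can be handed straight to a CSV reader. A minimal sketch using Python's standard csv module and the sample response above (the sample shown here uses a "," delimiter; configure your reader to match whatever delimiter you passed in the request):

```python
import csv
import io

# Sample csv_data as returned by the export endpoint (from the response above).
csv_data = (
    "run_id,invoice_number,total,vendor.name\n"
    "run_abc123,INV-2026-001,1234.56,Acme Corp\n"
    "run_def456,INV-2026-002,987.00,Globex\n"
)

# Feed the string straight into csv.DictReader -- no temporary file needed.
reader = csv.DictReader(io.StringIO(csv_data))
records = list(reader)

for rec in records:
    print(rec["run_id"], rec["invoice_number"], rec["total"])
```

The same records list can then be loaded into pandas, pushed to a BI tool, or written out with csv.DictWriter.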

Authorizations

Api-Key (string, header, required)

Query Parameters

access_token (string | null)

Body (application/json)

workflow_id (string, required)
Workflow ID for server-side row derivation

block_id (string, required)
Block ID for server-side row derivation

export_source (enum<string>, default: outputs)
Use block outputs or inputs. Available options: outputs, inputs

selected_run_ids (string[] | null)
Run IDs filter (null means all runs)

selected_doc_types (string[] | null)
Doc type filter (null/empty means all)

status (enum<string> | null)
Optional status filter (intersects with completed-only export scope). Available options: pending, running, completed, error, waiting_for_human, cancelled

exclude_status (enum<string> | null)
Optional status exclusion filter (intersects with completed-only export scope). Available options: pending, running, completed, error, waiting_for_human, cancelled
from_date (string | null)
Optional start date filter (YYYY-MM-DD)

to_date (string | null)
Optional end date filter (YYYY-MM-DD)

trigger_types (enum<string>[] | null)
Optional trigger type filters. Available options: manual, api, schedule, webhook, email, restart
preferred_columns (string[])
Preferred data column order

delimiter (string, default: ;)
CSV delimiter

line_delimiter (string, default:)
CSV line delimiter

quote (string, default: ")
CSV quote character
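Note that the delimiter defaults to ";" rather than ",", so a downstream parser has to be configured to match. A sketch with a hypothetical payload, assuming the default delimiter and quote character:

```python
import csv
import io

# Hypothetical export payload produced with the defaults (delimiter=";",
# quote='"'). A field that itself contains the delimiter gets quoted.
csv_data = 'run_id;total;vendor.name\r\nrun_abc123;1234.56;"Acme; Corp"\r\n'

# Configure the reader with the same delimiter and quote character.
reader = csv.reader(io.StringIO(csv_data), delimiter=";", quotechar='"')
rows = list(reader)
```

Pass delimiter="," in the request body instead if your downstream tooling expects comma-separated files.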

Response

Successful Response

csv_data (string, required)
CSV content

rows (integer, required)
Data row count

columns (integer, required)
Column count including fixed columns
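How the counts relate to csv_data: rows excludes the header line, while columns counts every header field, fixed columns (such as run_id in the example response) included. A sketch sanity-checking the sample response above:

```python
import csv
import io

# Sample response body from the example above.
response = {
    "csv_data": (
        "run_id,invoice_number,total,vendor.name\n"
        "run_abc123,INV-2026-001,1234.56,Acme Corp\n"
        "run_def456,INV-2026-002,987.00,Globex\n"
    ),
    "rows": 2,
    "columns": 4,
}

parsed = list(csv.reader(io.StringIO(response["csv_data"])))
header, data = parsed[0], parsed[1:]

# rows counts data rows only (header excluded); columns counts every
# header field, including fixed columns such as run_id.
assert len(data) == response["rows"]
assert len(header) == response["columns"]
```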