What are Workflows?
Workflows are visual, block-based pipelines that let you chain together multiple document processing operations. Instead of writing code for each step, you can drag and drop blocks onto a canvas, connect them, and create powerful document automation flows. A workflow typically consists of:
- Input blocks - Entry points for data:
- Document - Upload files (PDF, images, Word, Excel)
- JSON Input - Pass structured JSON data
- Processing blocks - Operations like Extract, Parse, Split, Classifier
- Logic blocks - Conditional flows like Human-in-the-Loop, Function, If/Else routing, and API Call
Creating a Workflow
- Navigate to the Workflows section in your dashboard
- Click Create Workflow to open a new canvas
- Drag blocks from the sidebar onto the canvas
- Connect blocks by dragging from output handles to input handles
- Configure each block by clicking on it
- Your workflow auto-saves as you build
Connecting Blocks
Blocks communicate through handles that define the type of data they accept or produce:
| Handle Type | Icon | Description |
|---|---|---|
| File | 📎 | Document files (PDF, images, Word, Excel) |
| JSON | { } | Structured data extracted from documents |
Connection Rules
- File → File: Pass documents between processing blocks
- JSON → JSON: Pass extracted data between logic blocks
- Each input handle accepts only one connection
- Connections validate automatically to prevent incompatible links
Edit Mode vs Run Mode
Workflows have two operational modes:
Edit Mode
- Add, remove, and configure blocks
- Create and delete connections
- Rename the workflow
- View generated Python code
Run Mode
- Upload documents to input blocks
- Execute the workflow step-by-step
- View results at each stage
- Download processed files and extracted data
Running a Workflow
A workflow is fundamentally an asynchronous job. When you start it, Retab creates a workflow run, executes each step on the server, and stores the results on that run. You can then poll the run until it finishes and inspect the stored step outputs. For the SDK and HTTP endpoint details, see the workflow API reference.
From the Dashboard
- Switch to Run Mode
- Upload a document to each Document input block
- Click Run Workflow
- Watch as each block processes (status indicators show progress)
- Click on output handles to view results
Using the SDK
The Python SDK exposes workflow metadata, graph authoring, run execution, and typed step inspection:
- client.workflows.* for list(), get(), create(), publish(), duplicate(), and get_entities()
- client.workflows.blocks.* and client.workflows.edges.* for programmatic graph changes
- client.workflows.runs.* and client.workflows.runs.steps.* for running flows and reading results
Discover input block IDs
Workflow run inputs are keyed by the IDs of your start and start_json blocks. get_entities() is the easiest way to discover them.
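As a sketch, assuming get_entities() returns a list of block descriptors with "id" and "type" fields (the response shape is an assumption), a small helper can pull out just the input block IDs:

```python
def input_block_ids(entities):
    """Return the IDs of entry-point blocks (Document and JSON Input).

    `entities` is assumed to be a list of dicts with "id" and "type" keys,
    as returned by client.workflows.get_entities(workflow_id).
    """
    return [e["id"] for e in entities if e["type"] in ("start", "start_json")]

# Hypothetical usage against the SDK:
# entities = client.workflows.get_entities("wf_123")
# ids = input_block_ids(entities)
```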
Run and wait for completion
Workflows support two input maps:
- documents for Document (start) blocks
- json_inputs for JSON Input (start_json) blocks
run.steps contains per-block status summaries. For typed inputs and outputs on each block, use the step helpers.
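The start-and-poll pattern can be sketched as a helper that takes the client as an argument. The runs.create / runs.get method names and their argument shapes are assumptions about the exact SDK spelling; the input-map keys and the terminal statuses come from this page:

```python
import time

def run_and_wait(client, workflow_id, documents=None, json_inputs=None,
                 interval=2.0, timeout=600.0):
    """Start a workflow run, then poll until it reaches a terminal status."""
    run = client.workflows.runs.create(
        workflow_id,
        documents=documents or {},    # keyed by start block IDs
        json_inputs=json_inputs or {},  # keyed by start_json block IDs
    )
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = client.workflows.runs.get(run.id)
        if run.status in ("completed", "error"):
            return run
        time.sleep(interval)
    raise TimeoutError(f"workflow run {run.id} did not finish in {timeout}s")
```

Using time.monotonic() for the deadline keeps the timeout correct even if the system clock is adjusted mid-run.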
Inspect step outputs
Start with steps.list(run.id): it returns every persisted step in a single HTTP call. Avoid looping over run.steps and calling steps.get() per block; that is an N+1 query pattern.
Step payloads are normalized into HandlePayload objects. For JSON-producing blocks, extracted_data is shorthand for the default output-json-0 handle.
Use steps.list(run.id, block_ids=[...]) when you only need a subset of steps. Use steps.get_many(run.id, [...]) when you want normalized handle payloads (the same shape as steps.get()) for a subset of blocks.
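For example, one steps.list(...) call can be indexed by block ID for constant-time lookups. Assuming each step record exposes a "block_id" field (adjust to attribute access if the SDK returns typed objects):

```python
def index_steps_by_block(steps):
    """Index a steps.list(run_id) response by block ID.

    Assumes each step is a dict with a "block_id" key; the hypothetical
    usage below also assumes the extracted_data shorthand from this page.
    """
    return {s["block_id"]: s for s in steps}

# Hypothetical usage:
# steps = client.workflows.runs.steps.list(run.id)
# by_block = index_steps_by_block(steps)
# invoice = by_block["extract-1"]["extracted_data"]  # default output-json-0
```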
Jump from a step to its typed resource
Inference blocks persist a resource; step.artifact is an {operation, id} pointer you use to fetch the full typed result:
| operation | block type | fetch with |
|---|---|---|
| extraction | extract | client.extractions.get(id) |
| split | split | client.splits.get(id) |
| classification | classifier | client.classifications.get(id) |
| parse | parse | client.parses.get(id) |
| edit | edit | client.edits.get(id) |
| partition | for_each_sentinel_start | client.partitions.get(id) |
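The table above can be mirrored as a dispatch map, so any artifact pointer resolves to the right fetch call. The SDK namespaces come from the table; treating the artifact as a dict with "operation" and "id" keys is an assumption:

```python
# Map artifact operation -> the SDK call that fetches the full typed result.
ARTIFACT_FETCHERS = {
    "extraction": lambda c, i: c.extractions.get(i),
    "split": lambda c, i: c.splits.get(i),
    "classification": lambda c, i: c.classifications.get(i),
    "parse": lambda c, i: c.parses.get(i),
    "edit": lambda c, i: c.edits.get(i),
    "partition": lambda c, i: c.partitions.get(i),
}

def fetch_artifact(client, artifact):
    """Resolve a step's {operation, id} artifact pointer to its resource."""
    fetch = ARTIFACT_FETCHERS[artifact["operation"]]
    return fetch(client, artifact["id"])
```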
Build workflows from code
The same SDK can create and publish workflow graphs. Use client.workflows.list() or client.workflows.get(workflow_id) when you need to browse existing workflows before launching a run, and client.workflows.duplicate(workflow_id) when you want a draft copy of an existing flow.
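A minimal authoring sketch: the client.workflows.*, .blocks.*, and .edges.* namespaces come from this page, but the argument names (name, type, source, target) are assumptions about the exact SDK signatures:

```python
def build_invoice_workflow(client):
    """Create, wire, and publish a minimal two-block workflow."""
    wf = client.workflows.create(name="Invoice pipeline")

    # One Document input feeding one Extract block.
    start = client.workflows.blocks.create(wf.id, type="start")
    extract = client.workflows.blocks.create(wf.id, type="extract")
    client.workflows.edges.create(wf.id, source=start.id, target=extract.id)

    client.workflows.publish(wf.id)
    return wf
```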
Reading Workflow Results
The standard production pattern is to run the workflow, keep the returned run.id, and poll the run until it reaches a terminal status such as completed or error.
- Start the workflow from the SDK or API
- Receive a run.id and an initial status immediately
- Poll the workflow run until it finishes
- Read the step results from the completed run
Workflow Execution Order
Workflows execute in topological order based on the block connections:
- Start from Document input blocks
- Process each block once all its inputs are ready
- Continue until all blocks are processed or an error occurs
- Read outputs from the completed run and its step results
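The ordering rule above is a standard topological sort. A minimal sketch of how "process a block once all its inputs are ready" plays out (this models the scheduling idea, not Retab's server-side implementation):

```python
from collections import deque

def execution_order(blocks, edges):
    """Return blocks in topological order: a block runs only after every
    block feeding into it has produced its output."""
    indegree = {b: 0 for b in blocks}
    downstream = {b: [] for b in blocks}
    for src, dst in edges:
        indegree[dst] += 1
        downstream[src].append(dst)

    ready = deque(b for b in blocks if indegree[b] == 0)  # input blocks first
    order = []
    while ready:
        block = ready.popleft()
        order.append(block)
        for nxt in downstream[block]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order
```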
Conditional Routing
When using Classifier or If/Else blocks, only the branches that receive data are executed. Blocks on skipped branches are marked as “skipped” rather than failed.
Viewing Generated Code
Every workflow can be exported as Python code. Click View Code in the sidebar to see the equivalent SDK calls for your workflow. This is useful for:
- Integrating workflows into your existing codebase
- Running workflows in production environments
- Understanding how the visual blocks translate to API calls
Best Practices
Start simple
Begin with a single Extract or Parse block, then gradually add complexity. Test each addition before moving on.
Use descriptive labels
Rename blocks to describe their purpose (e.g., “Invoice Data” instead of “Extract 1”). This makes complex workflows easier to understand.
Add notes for documentation
Use Note blocks to document sections of your workflow. They don’t affect execution but help explain the logic.
Validate with Human-in-the-Loop
For critical data, add a HIL block after extraction. This ensures a human reviews low-likelihood results before they proceed.
Use Classifier for document routing
When processing different document types, use a Classifier block to route each document to the appropriate extraction schema.
Test with sample documents
Before deploying, run your workflow with representative sample documents to catch edge cases.
Example: Invoice Processing Workflow
Here’s a common workflow pattern for processing invoices:
- Start block accepts the invoice PDF
- Extract block pulls out vendor, amount, date, line items
- HIL block flags low-likelihood extractions for human review
- Read the verified data from the completed workflow run
Example: Multi-Document Classification Workflow
For workflows that process mixed document bundles:
- Classifier routes documents by category (Invoice, Contract, Receipt)
- Each Extract block uses a document-specific schema
- Function blocks compute derived fields for each document type
- Merge JSON combines results from all branches into a single output