# Human fallback for AI agents
Fallback is insurance. When tools break, confidence drops, or policy forbids automation, agents should escalate to people with explicit tasks—not improvise or silently drop work.
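The "explicit tasks, not silent drops" rule can be sketched as a tiny wrapper. Everything here is illustrative: `HumanTask`, `run_step`, and the queue are hypothetical names, not a real library API.

```python
from dataclasses import dataclass

@dataclass
class HumanTask:
    """Explicit work item handed to a person when the agent falls back."""
    reason: str      # why the agent escalated (e.g. the error message)
    context: dict    # whatever a human needs to pick up the work

def run_step(action, fallback_queue):
    """Run one agent action; on failure, escalate instead of dropping the work."""
    try:
        return action()
    except Exception as exc:
        # Never swallow the failure: enqueue an explicit task for a person.
        fallback_queue.append(HumanTask(
            reason=str(exc),
            context={"action": getattr(action, "__name__", "unknown")},
        ))
        return None
```

The key design choice is that the failure path produces a durable artifact (a queued task) rather than a log line someone might never read.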
## Triggers worth encoding
- Repeated tool failures or unknown API states.
- Classifier scores below your risk threshold.
- Workflow branches that legal or policy marks as human-only.
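The three triggers above can be encoded as a single predicate. This is a minimal sketch: the thresholds and the `human_only_branches` set are placeholder values you would source from your own risk policy.

```python
def should_escalate(
    tool_failures: int,
    confidence: float,
    branch: str,
    *,
    max_failures: int = 3,                 # assumed retry budget
    min_confidence: float = 0.8,           # assumed risk threshold
    human_only_branches: frozenset = frozenset({"refunds", "legal_hold"}),
) -> bool:
    """Return True when any encoded fallback trigger fires."""
    return (
        tool_failures >= max_failures       # repeated tool failures
        or confidence < min_confidence      # classifier below risk threshold
        or branch in human_only_branches    # policy-marked human-only branch
    )
```

Keeping the triggers in one predicate makes the fallback policy testable and reviewable, instead of scattering ad-hoc checks through the agent loop.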
## Operational metrics
Track fallback rate by workflow. Spikes often reveal upstream data issues or brittle prompts—not lazy humans.
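Per-workflow fallback rate is a simple aggregation. A hypothetical sketch, assuming events arrive as `(workflow, fell_back)` pairs:

```python
from collections import Counter

def fallback_rates(events):
    """Compute fallback rate per workflow from (workflow, fell_back) pairs."""
    total, fell = Counter(), Counter()
    for workflow, fell_back in events:
        total[workflow] += 1
        if fell_back:
            fell[workflow] += 1
    # Rate = escalated runs / total runs, per workflow.
    return {w: fell[w] / total[w] for w in total}
```

Charting this per workflow over time is what surfaces the spikes: a sudden jump in one workflow's rate usually points at an upstream change, not at the humans handling the queue.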
## Implementation
Pair this page with the guide on adding a human escalation layer and the broader docs guide.