Should your Zendesk AI agent run autonomously or as a sidebar copilot?
AI agents in Zendesk have two execution modes: fully autonomous (the agent replies to customers without any human review) or sidebar copilot (the agent helps your team draft replies, but a human ships them). Most teams want both. Here's how to decide which workflow gets which.
The same agent — same instructions, same tools, same knowledge — can run either way.
Autonomous mode: the agent reads new tickets, reasons through them, and replies to customers directly. No human in the loop.
Sidebar copilot mode: the agent lives as a widget in the Zendesk agent workspace. Your human teammates ask it to summarize tickets, look things up, draft replies. The human still hits send.
Most teams end up using both. The question is which workflow gets which mode — and that decision matters more than which AI model you pick.
Autonomous mode: what it's good for
Autonomous mode is what people imagine when they hear "AI customer support." A ticket arrives, the agent handles it, the customer never knows there wasn't a human on the other side.
It works well for:
- High-volume repetitive categories — order status checks, password resets, shipping questions, refund eligibility, return label requests. These represent 30–50% of most teams' ticket volume.
- Low-risk decisions — anything where a wrong answer costs you a follow-up ticket, not a lost customer or legal liability.
- Time-sensitive responses — when first-response time directly affects CSAT. An AI agent replying in 8 seconds beats a human replying in 8 hours, every time.
You set it up with a trigger — most commonly Ticket Created — and the agent runs immediately on every new ticket. See the autonomous triggers guide for setup details.
The constraint: in autonomous mode, write operations execute immediately. The agent doesn't ask permission before posting a public reply, updating a status, or processing a refund (if you've given it that tool). That's the deal. You configure it carefully, you test it carefully, then you trust it.
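As a rough sketch, wiring a Ticket Created trigger to an agent's webhook looks like a standard Zendesk trigger payload. The webhook ID, tag value, and title below are placeholders for illustration — check the Zendesk Triggers API docs for the exact condition fields your plan supports:

```python
import json

# Illustrative payload for POST /api/v2/triggers.json (Zendesk Triggers API).
# The webhook ID and the "order-status" tag are placeholder values.
trigger = {
    "trigger": {
        "title": "Route new order-status tickets to the AI agent",
        "conditions": {
            "all": [
                # Fire only when a ticket is created...
                {"field": "update_type", "operator": "is", "value": "Create"},
                # ...and only for the scoped category
                {"field": "current_tags", "operator": "includes", "value": "order-status"},
            ],
            "any": [],
        },
        "actions": [
            # Notify the agent's webhook, passing the ticket ID via placeholder
            {"field": "notification_webhook",
             "value": ["01PLACEHOLDERWEBHOOKID",
                       json.dumps({"ticket_id": "{{ticket.id}}"})]},
        ],
    }
}

payload = json.dumps(trigger)
```

Scoping the conditions tightly is what keeps "runs on every new ticket" from meaning "runs on every ticket you didn't intend."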
Copilot mode: what it's good for
Copilot mode keeps a human in the loop. The agent surfaces inside Zendesk as a sidebar widget on the agent workspace. Your team chats with it: "@refundAgent what's the status on ticket #4821?" The agent reads the ticket, looks up the order in Stripe, checks the customer's history, and drafts a reply. Your teammate reviews, edits if needed, sends.
It works well for:
- Complex, ambiguous, or sensitive tickets — angry customers, legal-adjacent questions, cancellation requests, anything where tone matters.
- Low-volume but high-stakes work — enterprise account issues, B2B customer escalations, anything where the customer relationship is worth more than the time savings.
- Onboarding the agent — when you're still building trust in what the AI does well and where it slips, copilot mode lets you observe its reasoning without exposing customers to mistakes.
- Knowledge work for human agents — "summarize this thread," "pull the customer's last three orders," "draft a polite no." The agent acts as a research assistant for your team, not a customer-facing voice.
Write operations in copilot mode require explicit confirmation. The agent drafts a refund and pauses — your teammate clicks "Confirm" to process it. This is the safety net.
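The confirm-before-write behavior can be modeled roughly like this — an illustrative sketch of the pattern, not the actual widget implementation:

```python
from dataclasses import dataclass, field

@dataclass
class WriteAction:
    name: str      # e.g. "process_refund" (example tool name)
    params: dict
    confirmed: bool = False

@dataclass
class CopilotSession:
    """Queues write operations until a human confirms them."""
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def propose(self, action: WriteAction) -> str:
        # In copilot mode, writes never execute on proposal
        self.pending.append(action)
        return f"Drafted {action.name}; awaiting confirmation."

    def confirm(self, index: int = 0) -> WriteAction:
        # The human's "Confirm" click moves the action from pending to executed
        action = self.pending.pop(index)
        action.confirmed = True
        self.executed.append(action)  # a real system would call the tool here
        return action

session = CopilotSession()
session.propose(WriteAction("process_refund", {"ticket": 4821, "amount": 19.99}))
done = session.confirm()
```

The key property: nothing in `executed` ever got there without a human action in between.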
The hybrid play (which is what most teams actually run)
The honest answer for most support teams is both:
- Autonomous mode handles tier 1 — the high-volume, low-risk, repetitive categories
- Copilot mode handles tier 2 — the work humans still do, just faster and better-informed
You can run this two ways, and either works:
Option A: Separate agents per mode
You build one autonomous agent scoped narrowly to safe categories — "only handle order status, shipping questions, and refund-under-$50 requests" — with a trigger on Ticket Created. Anything that doesn't fit those categories, the agent escalates with a private note tagged "needs-human."
You build a separate copilot agent with broader tool access and looser instructions, available as a widget for your team to chat with on the hard tickets.
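Option A's escalation step — a private note plus a "needs-human" tag — maps to an ordinary Zendesk ticket update. A hedged sketch of the payload (the note wording is an example; note that `tags` replaces the ticket's full tag list, so production code should merge in the existing tags first):

```python
import json

# Illustrative payload for PUT /api/v2/tickets/{id}.json (Zendesk Tickets API).
escalation = {
    "ticket": {
        "comment": {
            # "public": False makes this an internal note, invisible to the customer
            "body": "AI agent: outside my scoped categories. Escalating to a human.",
            "public": False,
        },
        # Caution: this REPLACES the tag list; fetch and merge existing tags first
        "tags": ["needs-human"],
    }
}

body = json.dumps(escalation)
```

The "needs-human" tag then becomes the hook for a view or routing rule that puts the ticket in front of your team.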
Option B: One agent, both modes
You configure one agent with full tool access and instructions that say "only auto-reply if you're highly confident in the answer; otherwise escalate." The trigger fires the agent autonomously; the same agent is also available in the sidebar. Same brain, two interfaces.
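The gating logic in Option B reduces to a threshold check over a category allowlist. A minimal sketch — the threshold value and category names are arbitrary examples, not recommendations:

```python
CONFIDENCE_THRESHOLD = 0.85  # example value; tune per category

def decide(confidence: float, category: str, safe_categories: set) -> str:
    """What an Option B agent does with a new ticket: auto-reply only
    when the category is allowlisted AND confidence clears the bar."""
    if category in safe_categories and confidence >= CONFIDENCE_THRESHOLD:
        return "auto_reply"
    return "escalate"

safe = {"order_status", "shipping", "refund_under_50"}
```

Everything that fails either test falls through to escalation, which is what makes "same brain, two interfaces" safe to run.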
Option A is cleaner if you want strict separation of concerns. Option B is simpler to manage if you trust your agent's instructions to handle the gating logic. Most teams start with A and migrate to B as confidence builds.
How to decide which mode for which workflow
Three factors:
1. Volume
If a category has fewer than ~50 tickets/month, autonomous mode probably isn't worth setting up — your team handles them faster than you can build the trigger. Copilot mode is a better fit for low-volume work because the setup cost is lower (no trigger, no confidence threshold).
If a category has 500+ tickets/month, autonomous is a no-brainer. The math wins.
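The volume math can be made concrete with a back-of-the-envelope break-even. All numbers below are illustrative assumptions, not benchmarks:

```python
def monthly_minutes_saved(tickets_per_month: int,
                          minutes_per_ticket: float = 4.0,
                          automation_rate: float = 0.8) -> float:
    """Human minutes recovered per month if the agent autonomously
    resolves `automation_rate` of the category's tickets."""
    return tickets_per_month * minutes_per_ticket * automation_rate

SETUP_MINUTES = 240  # assume ~4 hours to scope, build, and test a trigger

def months_to_break_even(tickets_per_month: int) -> float:
    return SETUP_MINUTES / monthly_minutes_saved(tickets_per_month)
```

Under these assumptions a 500-ticket/month category pays back its setup cost within the first month, while a 50-ticket/month category takes over a month — which is the intuition behind the thresholds above.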
2. Risk tolerance
For each category, ask: what's the cost of the agent getting it wrong?
- "Wrong tracking info given" → low cost. Customer asks again, agent corrects, no harm done. → autonomous.
- "Processed a refund that shouldn't have been processed" → moderate cost. Reversible but messy. → copilot, or autonomous with a strict $-cap.
- "Told an enterprise customer the wrong contract terms" → high cost. Damages a multi-thousand-dollar relationship. → copilot, always.
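The three examples above collapse into a simple mapping. A sketch — the cost tiers and the $50 cap mirror the bullets, but both are illustrative:

```python
from typing import Optional

def pick_mode(cost_of_error: str, refund_amount: Optional[float] = None) -> str:
    """Map a category's worst-case error cost to an execution mode.
    Tiers follow the examples above; adjust the cap to your own risk tolerance."""
    if cost_of_error == "low":          # e.g. wrong tracking info
        return "autonomous"
    if cost_of_error == "moderate":     # e.g. reversible-but-messy refund
        if refund_amount is not None and refund_amount <= 50:
            return "autonomous"         # the strict $-cap case
        return "copilot"
    return "copilot"                    # high cost: always a human
```

The point of writing it down this explicitly is that the decision becomes auditable — you can defend each category's mode to your team.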
3. Team trust
This is more important than people admit. If your support team doesn't yet trust the AI, autonomous mode breeds resentment — they'll feel like the AI is making them obsolete, and they'll lose context when handoffs happen mid-conversation.
Start in copilot mode for the first month. Let the team use the AI, see what it does well, see what it gets wrong. Build that trust. Then graduate the safe categories to autonomous mode one at a time. The team will pull for it once they see the AI handling the boring tickets they didn't want to do anyway.
The setup difference
Mechanically, the modes differ only in the trigger:
- Autonomous: Add a trigger (Ticket Created / Comment Added / Custom Webhook). Agent runs without human input. See the trigger setup guide.
- Copilot: No trigger needed. Install the Macha widget in your Zendesk agent workspace from the marketplace. Team members invoke the agent on demand.
Same agent. Same tools. Same knowledge. Same model. You can flip between modes by adding or removing the trigger.
A starting recipe
If you're new to AI agents in Zendesk and not sure where to start:
- Build one agent with access to your Help Center, Zendesk tools, and one external system (e.g., your e-commerce platform or payment processor).
- Run it in copilot mode for two weeks. Let your team chat with it on real tickets. Watch where it shines and where it slips.
- Identify one safe, high-volume category — usually order status or refund-eligibility questions.
- Add an autonomous trigger scoped narrowly to that category. Use Test Run on 10 historical tickets before going live.
- Monitor for a week. Measure resolution rate, CSAT, and "reopened ticket" rate.
- Expand to the next category if metrics look good, or tighten instructions if not.
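The monitoring step can be a few lines over exported ticket data. A sketch, assuming each ticket record carries `resolved_by_ai`, `reopened`, and an optional `csat` score — these field names are hypothetical, not a real export schema:

```python
def weekly_metrics(tickets: list) -> dict:
    """Compute the three checklist metrics from ticket records.
    Expects dicts with hypothetical keys: resolved_by_ai (bool),
    reopened (bool), csat (int 1-5, or None if unrated)."""
    total = len(tickets)
    resolved = sum(t["resolved_by_ai"] for t in tickets)
    reopened = sum(t["reopened"] for t in tickets)
    scores = [t["csat"] for t in tickets if t["csat"] is not None]
    return {
        "resolution_rate": resolved / total,
        "reopen_rate": reopened / resolved if resolved else 0.0,
        "avg_csat": sum(scores) / len(scores) if scores else None,
    }

sample = [
    {"resolved_by_ai": True,  "reopened": False, "csat": 5},
    {"resolved_by_ai": True,  "reopened": True,  "csat": 3},
    {"resolved_by_ai": False, "reopened": False, "csat": None},
    {"resolved_by_ai": True,  "reopened": False, "csat": 4},
]
metrics = weekly_metrics(sample)
```

A rising reopen rate is the earliest warning sign that the agent's scope is too broad, even while resolution rate still looks healthy.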
By month three you'll have a clear picture of which categories want autonomous and which want copilot. The split is rarely 100/0 — usually somewhere around 60/40.
Ready to start? The complete setup walkthrough takes you through both modes step by step. Or compare to Zendesk's native AI Agents to see which platform fits your team.