Using AI to Help You Build Agents

How to use ChatGPT, Claude, and Macha's own AI Builder to draft instructions, generate test cases, critique your prompts, and design custom tools — plus pointers to other prompt-engineering resources.

The Best Tool For Drafting Agent Instructions Is Another AI

The recursive insight: the best tool for drafting agent instructions, designing tool sets, and brainstorming edge cases is itself an AI. Tools like ChatGPT, Claude, and Macha's own AI Builder can do a substantial chunk of the work for you — turning "I want an agent that triages support tickets" into a fully fleshed-out instruction set with workflow, tools, and escape hatches in seconds.

This page is about how to get the most out of those tools, plus pointers to other external resources worth knowing about.

Macha's Built-In AI Builder

The fastest way to bootstrap a new agent is the "Build with AI" button on the Agents page. You describe what you want in plain language, and the AI Builder configures the agent's name, instructions, tools, triggers, and sub-agents based on your connected integrations.

The AI Builder is good for getting 80% of the way there. The remaining 20% is the polish that makes the agent yours — the specific tone, the team's particular policies, the edge cases that only you know about.

You can also "Edit with AI" from the agent detail page. This is useful for incremental changes: "add an escape hatch for tickets in non-English languages," "change the tone to be more formal," "add a step to check the customer's order history before replying."

Using ChatGPT Or Claude To Draft Instructions

External AI assistants are excellent at producing first drafts of agent instructions. The trick is to give them enough context. A useful prompt template:

"I am building an AI agent on a platform called Macha. The agent will [do what]. It has access to the following tools: [list with one-line descriptions of each]. It runs on [trigger condition]. Write the agent's instructions in plain English, using a numbered workflow that names the tools by their exact identifiers, with explicit boundaries and escape hatches for ambiguous cases. The tone should be [describe tone]."

Both ChatGPT and Claude will produce a credible first draft from this. You then edit it to match your team's specific patterns.
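If you draft instructions for several agents, it can help to keep the template in one place and fill it in programmatically. Below is a minimal sketch of that idea; the function name and fields are illustrative, not part of Macha or any assistant's API.

```python
# Hypothetical helper: fills in the drafting-prompt template from this section
# so it can be reused across agents. Nothing here is a Macha API.

TEMPLATE = (
    "I am building an AI agent on a platform called Macha. "
    "The agent will {purpose}. It has access to the following tools: {tools}. "
    "It runs on {trigger}. Write the agent's instructions in plain English, "
    "using a numbered workflow that names the tools by their exact identifiers, "
    "with explicit boundaries and escape hatches for ambiguous cases. "
    "The tone should be {tone}."
)

def drafting_prompt(purpose: str, tools: list[str], trigger: str, tone: str) -> str:
    """Return a filled-in drafting prompt ready to paste into ChatGPT or Claude."""
    return TEMPLATE.format(
        purpose=purpose,
        tools="; ".join(tools),  # one-line descriptions, semicolon-separated
        trigger=trigger,
        tone=tone,
    )

print(drafting_prompt(
    purpose="triage inbound support tickets",
    tools=[
        "zendesk.search_tickets - find similar past tickets",
        "zendesk.reply - post a draft reply",
    ],
    trigger="a new ticket arriving",
    tone="warm but concise",
))
```

The point is less the code than the discipline: every blank in the template is context the external AI needs, so filling them all in forces you to spell that context out.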

Iterating With An External AI

Treat the external AI as a thinking partner, not a vending machine. After it produces the first draft, ask follow-up questions:

  • "What edge cases am I missing?"
  • "What are the failure modes for this kind of agent?"
  • "What instructions could I add to make this more robust?"
  • "If you were building this agent, what would you most worry about?"

The follow-up answers are often more valuable than the original draft. They surface the cases you would otherwise have to discover in production.

Using AI To Critique Your Existing Instructions

The reverse is also useful. Take an instruction set you have written, paste it into ChatGPT or Claude, and ask:

"Critique these AI agent instructions. What is ambiguous? What edge cases are not covered? What rules might conflict? What tone signals are missing? Be specific and actionable."

This is the cheapest pre-launch review you can run. The external AI will catch ambiguity, missing escape hatches, and conflicting rules that you have stopped seeing because you have read your own instructions ten times.

Using AI To Generate Test Cases

External AIs are very good at generating test cases for an agent. A useful prompt:

"I have built an AI agent that does [purpose]. Generate 15 test cases I should run through it. Include 5 happy-path cases, 5 edge cases (ambiguous data, language mismatch, missing fields, etc.), and 5 cases the agent should refuse to handle. For each, describe the input the agent will receive and the behavior I should expect."

You then run those 15 cases as your Test Run input set in Stage 2 of the testing pattern.
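It is worth recording the generated cases as structured data rather than loose prose, so the same set can be rerun after every instruction change. A minimal sketch, with illustrative field names (not a Macha API):

```python
# Record AI-generated test cases in one place so the same set can be
# reused across Test Runs. Field names are illustrative only.

from dataclasses import dataclass

@dataclass
class TestCase:
    category: str       # "happy-path", "edge", or "refusal"
    input_summary: str  # the input the agent will receive
    expected: str       # the behavior you should expect

CASES = [
    TestCase("happy-path", "Standard refund request, order ID present",
             "Agent drafts a refund reply citing the order"),
    TestCase("edge", "Ticket written in German, no order ID",
             "Agent uses its non-English escape hatch and escalates"),
    TestCase("refusal", "Request to delete another customer's account",
             "Agent declines and hands off to a human"),
    # ...extend to the full set of 15 generated cases
]

# Sanity check: every case states an expected behavior to judge against
assert all(case.expected for case in CASES)
```

Keeping the expected behavior next to each input turns the Test Run from "does this look right?" into a checklist you can actually pass or fail.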

Using AI For Tone Calibration

If you are unsure what tone to specify, paste a few examples of replies your team has actually sent and ask the external AI to characterize the tone.

"Here are five replies my support team has sent recently: [paste]. Describe the tone in concrete, actionable terms I could put in an AI agent's instructions."

The output will give you specific tone guidance — sentence length, formality level, use of contractions, sign-off style, opening conventions — that you can drop directly into your agent's instructions.

Building Custom Tools With AI Help

For custom HTTP API tools, the AI Tool Builder in Macha (under Custom Tools) lets you describe an API endpoint in conversation, then configures the tool for you: auth, parameters, body template, response mapping. It is useful when you have an internal API you want an agent to call but do not want to wire up the OpenAPI integration manually.

You can also use ChatGPT or Claude to plan a custom tool: paste the API documentation, ask "what parameters and body template should this tool take?", and use the answer to configure the tool in Macha.
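To make the planning step concrete, here is a sketch of the kind of answer you are aiming for: one structure covering auth, parameters, body template, and response mapping. Every field name below is hypothetical and does not come from Macha's actual tool schema; it only illustrates what you describe to the builder.

```python
# Hypothetical sketch of a custom HTTP tool plan. The field names and the
# endpoint are invented for illustration; Macha's real schema may differ.

custom_tool = {
    "name": "lookup_order_status",
    "method": "POST",
    "url": "https://internal.example.com/orders/status",
    # Auth: where the credential comes from (a stored secret, not a literal)
    "auth": {"type": "bearer", "secret_ref": "ORDERS_API_TOKEN"},
    # Parameters: what the agent must supply when it calls the tool
    "parameters": {
        "order_id": {
            "type": "string",
            "required": True,
            "description": "Order identifier extracted from the ticket",
        },
    },
    # Body template: how parameters are placed into the request body
    "body_template": {"id": "{{order_id}}"},
    # Response mapping: which fields of the API response the agent sees
    "response_mapping": {
        "status": "$.order.status",
        "eta": "$.order.estimated_delivery",
    },
}
```

If the external AI's answer fills in all four of these pieces from the API documentation you pasted, you have everything you need to configure the tool in Macha.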

External Resources Worth Knowing

Anthropic's Prompt Engineering Documentation

Anthropic publishes detailed guidance on prompt engineering for Claude models. Even though Macha abstracts most of this, the underlying principles transfer to writing good agent instructions. The "Be clear, direct, and detailed" and "Use XML tags" sections are especially useful.

OpenAI's Prompting Guide

OpenAI's prompt engineering guide covers similar ground from the GPT side. The sections on "split complex tasks into simpler subtasks" and "ask the model to adopt a persona" map directly to the patterns recommended in this guide.

The Prompt Engineering Community

Sites like promptingguide.ai and the LangChain documentation have running collections of prompt patterns that work well across models. Most of the patterns translate directly to Macha agent instructions — the underlying mechanics are the same.

When To Reach For External AI vs Macha's Built-In Builder

Rough heuristics:

  • New agent from scratch: Macha's "Build with AI" button. It knows your connectors and tool catalog, so the first draft is more grounded.
  • Refining an existing agent: Macha's "Edit with AI" or external AI both work. External AI is sometimes better because you can have a longer conversation about the design.
  • Critiquing instructions: External AI. The fresh perspective catches things Macha's builder might miss because it does not have the full prose context.
  • Generating test cases: External AI. Better at creative variation and edge-case generation.
  • Tone calibration: External AI. You can paste examples and get back a tone description.
  • Designing a custom tool: Macha's AI Tool Builder. It directly creates the tool in your workspace.

Treat AI Assistance As A Starting Point

One important caveat: AI-generated instructions are first drafts, not final products. The AI does not know your team's specific policies, your customer base's quirks, the historical incidents that shaped your current rules, or the politics of which workflows are acceptable to automate. All of that has to be added by you.

The pattern that works: AI for the structure and the obvious bits, you for the specifics that only your team knows. The combination is faster and more thorough than either alone.

One Last Trick: Have The Agent Critique Itself

Once your agent is built, you can chat with it on Macha and ask it to critique its own instructions:

"You are configured with the following instructions: [paste them]. What instructions do you find ambiguous? Where might you behave inconsistently? What rules might conflict?"

The agent's answer is essentially what the model "sees" when it reads its own instructions. If it identifies ambiguities, those are the same ambiguities that produce inconsistent behavior in production. Fix them.

This works on any model — and it costs nothing beyond a single chat message. We recommend doing it before any major launch.

© 2026 AGZ Technologies Private Limited