Test your agent before publishing


Before your agent goes live, test it. Testing lets you run your agent against a real (or simulated) event, watch it work in real time, and catch any issues before they affect your patients or data.

We strongly recommend testing every agent at least once before publishing.

What happens when you test an agent

When you run a test, Keragon starts your agent with a sample event — the same kind of data it would receive when triggered in production. You can watch the agent's thinking and actions unfold live in a chat-style view.

The agent will:

  1. Read your instructions
  2. Use the sample event as its starting point
  3. Call tools as needed, step by step
  4. Return a result or ask you a question if it gets stuck
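The loop described above can be sketched in code. This is a simplified, hypothetical model of a test run, not Keragon's actual implementation; every name here (Action, run_agent_test, the tool names) is invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical model of an agent test run -- not Keragon's real code.

@dataclass
class Action:
    kind: str                  # "tool", "ask_user", or "finish"
    name: str = ""             # tool name, when kind == "tool"
    args: dict = field(default_factory=dict)
    text: str = ""             # summary or question text

def run_agent_test(plan, tools, max_steps=10):
    """Walk through a plan step by step, mimicking how a test run
    streams thinking, tool calls, and a final result."""
    transcript = []
    for step, action in enumerate(plan):
        if step >= max_steps:
            # Mirrors the "limit reached" outcome described later in this article.
            return {"status": "limit_reached", "transcript": transcript}
        if action.kind == "finish":
            return {"status": "done", "summary": action.text, "transcript": transcript}
        if action.kind == "ask_user":
            # Mirrors the "needs action" outcome: the agent is stuck and asks you.
            return {"status": "needs_action", "question": action.text, "transcript": transcript}
        # Tool calls are real: in production this would hit your connected apps.
        result = tools[action.name](**action.args)
        transcript.append((action.name, result))
    return {"status": "limit_reached", "transcript": transcript}

# A toy tool and plan, standing in for "create a patient record":
tools = {"create_patient": lambda name: f"created record for {name}"}
plan = [
    Action(kind="tool", name="create_patient", args={"name": "Test Patient"}),
    Action(kind="finish", text="Created 1 patient record"),
]
outcome = run_agent_test(plan, tools)
```

The key point the sketch captures is that the run ends in one of three states (done, needs action, or limit reached), and that tool calls along the way have real side effects.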

You can send follow-up messages during the test to give the agent additional context, just like you would in a chat.

Important: Test runs are real. If your agent is configured to create a patient record or send an SMS, it will do those things during the test using your connected accounts. Use a test patient or a staging environment if you don't want live data to be affected.


How to run a test

  1. In the agent builder, click Test Agent — the button sits right below the Tools section.
  2. The test panel opens. You'll see a sample event picker — this shows the type of event your trigger is expecting.
  3. Select a sample event:
    • If your trigger is an Event trigger, you can use a recent event that was already received
    • If your trigger is a Schedule trigger, the agent will run as if the scheduled time just arrived
  4. Click Test to start the test.
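For an Event trigger, a sample event is just a structured payload of the kind your trigger would receive in production. A hypothetical example, shown here as a Python dict; the actual shape and field names depend entirely on your connected app:

```python
# Hypothetical sample event for a "new form submission" trigger.
# Every field name here is invented for illustration.
sample_event = {
    "event_type": "form.submitted",
    "received_at": "2024-05-01T09:30:00Z",
    "data": {
        "patient_name": "Test Patient",
        "phone": "+1 555 0100",
        "reason": "New patient intake",
    },
}
```

Using a sample event with a clearly fake patient (as above) is one way to keep a test run from touching live records.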

Reading the test output

As the agent runs, you'll see a live stream of its activity. This includes:

  • Thinking steps — the agent working out what to do next based on your instructions
  • Tool calls — each time the agent uses one of your tools, you'll see what it sent and what it got back
  • Final result — a brief summary of what the agent did and what the outcome was

What to look for

  • The agent followed your instructions in the right order. Each step should match what you wrote. If the sequence is wrong, revisit your instructions and make the order clearer.
  • The right tools were called. If the agent called an unexpected tool or skipped one it should have used, check that your instructions name the apps and actions explicitly.
  • The data was handled correctly. Check the actual records in your apps (e.g., Athena Health, DrChrono) to confirm that what was created or updated looks right.
  • Edge cases were handled. If you described what to do when a patient already exists, or when a field is missing — test those scenarios too, not just the happy path.

Common issues and what to do

The agent stopped unexpectedly

Usually a tool error — check the account credentials and the data being sent. See A tool failed or returned an error.

The agent did something I didn't expect

Instructions are likely ambiguous. Re-read them and tighten any step that could be read more than one way. See My agent returned an unexpected result.

The test run shows "needs action"

The agent hit a decision point and needs input. Reply in the test chat to continue, or update your instructions to handle the decision automatically.

The test run shows "limit reached"

The agent hit its max steps — the task may be too complex or stuck in a loop. Simplify, or split into two agents.


Iterating before publish

After each test run, adjust your trigger, instructions, or tools as needed, then run another test. Repeat until you're confident the agent handles your expected scenarios correctly.

There's no limit on how many times you can test.
