Coded agents are TypeScript functions you write yourself. They execute deterministically from start to finish, with no LLM driving the control flow — though you can call LLMs as needed within your code.

When an agent makes a tool call or invokes a subagent, the runtime may suspend it while waiting for the result — sometimes for seconds, sometimes much longer (e.g., waiting for user input). The agent’s local variables, control flow position, and any accumulated data must survive that suspension. This is the “state” that needs to be managed: everything the agent needs to pick up where it left off.

The two coded agent types differ in how they handle this. AutomaticallyManagedStateAgent lets the runtime serialize and restore your function’s state transparently — you write a normal async function and don’t think about it. SelfManagedStateAgent gives you explicit control: you decide what to save, when to save it, and how to resume. Use coded agents when you need precise control, predictable costs, or algorithmic logic.

AutomaticallyManagedStateAgent

The recommended starting point. Write a normal async run function with the "use agent" directive, and the runtime serializes and restores state for you automatically.
"use agent"

import {
  type Task,
  agent,
  pick,
  progressLogNotifyEvent,
  userInterfaceTools,
} from "@guildai/agents-sdk"
import { gitHubTools } from "@guildai-services/guildai~github"
import { z } from "zod"

const inputSchema = z.object({
  repo: z.string().describe("The GitHub repository in 'owner/name' format"),
  issue_number: z.number().describe("The issue number to summarize"),
})
type Input = z.infer<typeof inputSchema>

const outputSchema = z.object({
  summary: z.string(),
  labels: z.array(z.string()),
})
type Output = z.infer<typeof outputSchema>

const tools = {
  ...userInterfaceTools,
  ...pick(gitHubTools, [
    "github_issues_get",
    "github_issues_list_comments",
  ]),
}
type Tools = typeof tools

async function run(input: Input, task: Task<Tools>): Promise<Output> {
  const [owner, repo] = input.repo.split("/")

  await task.ui?.notify(progressLogNotifyEvent("Fetching issue..."))

  const issue = await task.tools.github_issues_get({
    owner,
    repo,
    issue_number: input.issue_number,
  })

  // Use LLM to summarize
  const result = await task.llm.generateText({
    prompt: `Summarize this GitHub issue:\n\n${issue?.body}`,
  })

  return {
    summary: result.text,
    labels: issue?.labels?.map((l) => l.name) ?? [],
  }
}

export default agent({
  description: "Summarizes a GitHub issue and extracts its labels.",
  inputSchema,
  outputSchema,
  tools,
  run,
})
Agents run in a sandboxed environment. You can only import @guildai/agents-sdk, zod, and @guildai-services/* packages — no other npm packages or Node.js built-ins are available. See the SDK introduction for details.

Input and output schemas

Define your schemas using Zod. The runtime uses them to validate input and to expose the agent as a typed tool to orchestrating agents.
const inputSchema = z.object({
  message: z.string().describe("The message to process"),
})

const outputSchema = z.object({
  response: z.string(),
})

Error handling

Any exception thrown from your run function is returned to the calling agent or user. Use standard TypeScript error handling:
async function run(input: Input, task: Task<Tools>): Promise<Output> {
  try {
    // ...
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error)
    throw new Error(`Failed to process: ${message}`)
  }
}
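Because TypeScript types a catch-clause value as `unknown`, the pattern above normalizes it before rethrowing. That normalization can be factored into a small helper — illustrative only, not part of the SDK:

```typescript
// Normalizes the `unknown` value from a catch clause into a proper Error,
// preserving an existing Error and stringifying anything else.
function toError(value: unknown): Error {
  return value instanceof Error ? value : new Error(String(value))
}

// Hypothetical usage inside a run function's catch block:
//   throw new Error(`Failed to process: ${toError(error).message}`)
```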

SelfManagedStateAgent

An event-driven state machine where you control what gets saved and when. It is harder to implement than AutomaticallyManagedStateAgent, but it works without the "use agent" directive and is free of that transform’s runtime constraints. Use it when:
  • You need to make parallel tool calls (the babel plugin forbids Promise.all across await points)
  • You want explicit control over what gets saved and when
  • You’re building a state machine that doesn’t map cleanly to a procedural run function

Anatomy

A self-managed agent declares callbacks instead of a run function:
  • stateSchema (required): Zod schema for the state the agent persists via task.save()
  • init (optional): runs immediately before start and before every onToolResults call. Use it to mutate the tool set (e.g., add workspace agents)
  • start(input, task) (required): the initial entry point. Returns an AgentResult — either output(...) to finish, or callTools(...) / ask(...) to request tool execution
  • onToolResults(results, task) (required if start may call tools): resumes the agent after tool results arrive. Returns another AgentResult
Every callback returns an AgentResult, built with one of three helpers:
  • output(value): returns an OutputResult. The agent is done — value is the final output
  • callTools(calls): returns a ToolCallsResult. Requests that the runtime execute one or more tool calls in parallel, then invoke onToolResults with the results
  • ask(prompt): returns a ToolCallsResult (via ui_prompt). Shortcut for a single callTools([...]) that asks the user for text input
Tool calls returned from the same start or onToolResults invocation run in parallel; the runtime resolves all of them before calling onToolResults.
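The control flow above can be pictured as a discriminated union. The following is a deliberately simplified model — these type definitions and the `ask` helper body are illustrative, not the SDK’s actual declarations:

```typescript
// Simplified model of the results a callback can return. The real SDK
// types carry more information; this captures only the control flow.
type ToolCall = { toolName: string; input: unknown }

type AgentResult<Output> =
  | { kind: "output"; value: Output }          // agent is done
  | { kind: "callTools"; calls: ToolCall[] }   // run in parallel, then resume via onToolResults

// `ask` is sugar for a single ui_prompt tool call:
function ask<Output>(prompt: string): AgentResult<Output> {
  return { kind: "callTools", calls: [{ toolName: "ui_prompt", input: { prompt } }] }
}
```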

Example: marco-polo

A classic state-machine agent that pings back and forth with the user until they stop saying “marco”:
import {
  agent,
  ask,
  output,
  userInterfaceTools,
  type AgentResult,
  type InferToolOutput,
  type Task,
  type TypedToolError,
  type TypedToolResult,
} from "@guildai/agents-sdk"
import { z } from "zod"

const inputSchema = z.object({
  message: z.string().describe("Say 'marco' to start a game"),
})
const outputSchema = z.object({
  count: z.number().describe("Number of back-and-forth exchanges"),
})
const stateSchema = z.object({
  count: z.number().describe("Running exchange count"),
})

const tools = { ...userInterfaceTools }
type Tools = typeof tools
type Input = z.infer<typeof inputSchema>
type Output = z.infer<typeof outputSchema>
type State = z.infer<typeof stateSchema>

async function start(
  input: Input,
  task: Task<Tools, State>,
): Promise<AgentResult<Output, Tools>> {
  if (input.message !== "marco") return output({ count: 0 })
  await task.save({ count: 1 })
  return ask("polo!")
}

async function onToolResults(
  results: Array<TypedToolResult<Tools> | TypedToolError<Tools>>,
  task: Task<Tools, State>,
): Promise<AgentResult<Output, Tools>> {
  const result = results[0]
  if (result.type !== "tool-result" || result.toolName !== "ui_prompt") {
    throw new Error("unexpected tool result")
  }
  const { text } = result.output as InferToolOutput<Tools["ui_prompt"]>

  const state = await task.restore()
  const count = state?.count ?? 0
  if (text === "marco") {
    await task.save({ count: count + 1 })
    return ask("polo!")
  }
  return output({ count })
}

export default agent<Input, Output, State, Tools>({
  description: "Plays marco-polo until you stop saying 'marco'.",
  inputSchema,
  outputSchema,
  stateSchema,
  tools,
  start,
  onToolResults,
})
Walkthrough:
  1. start saves { count: 1 } and returns ask("polo!"). The runtime asks the user “polo!” and suspends the agent.
  2. When the user replies, the runtime calls onToolResults with the ui_prompt result.
  3. If the user said “marco” again, onToolResults restores the state, increments count, saves, and asks again via ask("polo!"). Otherwise it returns output({ count }) and the agent finishes.
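The walkthrough boils down to a small pure transition function. The `step` helper and its result shape below are an illustrative model of the agent’s state machine, independent of the SDK:

```typescript
// Marco-polo as a pure state transition: given the saved count and the
// user's reply, decide whether to save-and-ask again or finish.
type Transition =
  | { kind: "ask"; prompt: string; nextCount: number } // save nextCount, then ask
  | { kind: "output"; count: number }                  // final output

function step(savedCount: number, reply: string): Transition {
  if (reply === "marco") {
    return { kind: "ask", prompt: "polo!", nextCount: savedCount + 1 }
  }
  return { kind: "output", count: savedCount }
}
```

Modeling the transition as a pure function like this also makes the persistence boundary obvious: everything `step` needs must come from `task.restore()`, and everything it changes must go through `task.save()`.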

The init callback

init runs immediately before start and before every resumption via onToolResults. It’s the right place to mutate the tool set — for example, to add workspace agents as tools, the way llmAgent does internally:
import { agent, withWorkspaceAgentTools, guildTools, type Task } from "@guildai/agents-sdk"

const tools = { ...guildTools }
type Tools = typeof tools

export default agent({
  description: "...",
  inputSchema,
  outputSchema,
  stateSchema,
  tools,
  init: async (task: Task<Tools>): Promise<void> => {
    // Mutates `tools` in place so the runtime can dispatch workspace agents.
    await withWorkspaceAgentTools(task, tools)
  },
  start,
  onToolResults,
})

When to use each type

  • Task can be described as a prompt + tools: llmAgent
  • Algorithmic, deterministic logic: AutomaticallyManagedStateAgent
  • Need parallel tool calls or complex state: SelfManagedStateAgent
  • Want to minimize LLM costs: AutomaticallyManagedStateAgent or SelfManagedStateAgent

Performance tips

  • LLM calls are expensive. Store the result of task.llm.generateText() in a variable if you need it more than once.
  • Batch operations. Group related API calls to reduce round-trips.
  • Use progress logs. Keep users informed during long-running operations with task.ui?.notify(progressLogNotifyEvent(...)).
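The first tip (reuse expensive results) can be sketched as a tiny memoizing wrapper. `once` is an illustrative helper, not part of the SDK:

```typescript
// Wraps an expensive async call so repeated invocations within one run
// reuse the first result instead of paying for the call again.
function once<T>(fn: () => Promise<T>): () => Promise<T> {
  let memo: Promise<T> | undefined
  return () => (memo ??= fn())
}

// Hypothetical usage inside a run function:
//   const summarize = once(() => task.llm.generateText({ prompt }))
//   const a = await summarize() // pays for the LLM call
//   const b = await summarize() // reuses the cached promise
```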