An llmAgent pairs a system prompt with a tool set. You define what the agent knows and what it can do — the LLM figures out how to do it.

Basic example

import { guildTools, llmAgent } from "@guildai/agents-sdk"
import { gitHubTools } from "@guildai-services/guildai~github"

const systemPrompt = `
You are a code review assistant.

When given a pull request, retrieve the latest changes
using the GitHub tools and provide helpful feedback.
`

export default llmAgent({
  description: "Reviews pull requests and provides feedback.",
  tools: { ...gitHubTools, ...guildTools },
  systemPrompt,
})

Input and output

Every llmAgent uses the same fixed schemas:
// Input: the initial prompt with task details
type Input = { type: "text"; text: string }

// Output: the agent's final response
type Output = { type: "text"; text: string }
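Both schemas are plain text wrappers, so constructing a valid input is a one-liner. A minimal sketch (the `toInput` helper is hypothetical, not part of the SDK):

```typescript
// The fixed schemas every llmAgent uses, restated from above.
type Input = { type: "text"; text: string }
type Output = { type: "text"; text: string }

// Hypothetical helper: wrap a raw string as an llmAgent input.
function toInput(text: string): Input {
  return { type: "text", text }
}

const input = toInput("Review the latest pull request")
console.log(input)
```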

Execution modes

llmAgent supports two modes, controlled by the mode parameter ("one-shot" | "multi-turn", default "one-shot"):
  • "one-shot" — The agent processes the input, returns a single response, then terminates.
  • "multi-turn" — The agent continues interacting with the user until it calls the auto-injected __submit__ tool to signal task completion. The argument it passes to __submit__ becomes the agent’s final output (the rest of the conversation is not visible to whoever invoked the agent).

For example:
export default llmAgent({
  description: "An agent that can have back-and-forth conversations.",
  tools: {},
  systemPrompt:
    "Help users interactively. Continue asking questions until you have all the information you need.",
  mode: "multi-turn",
})
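The multi-turn contract can be modeled in a few lines. This is an illustrative sketch of the runtime behavior, not SDK internals: the `Turn` type and `finalOutput` function are assumptions made for the example.

```typescript
// Illustrative model of the multi-turn loop (not actual SDK code).
// The agent keeps exchanging messages until it calls __submit__;
// only the argument passed to __submit__ becomes the final output.
type Turn =
  | { kind: "message"; text: string } // stays inside the conversation
  | { kind: "submit"; result: string } // call to the auto-injected __submit__ tool

function finalOutput(turns: Turn[]): { type: "text"; text: string } | null {
  for (const turn of turns) {
    if (turn.kind === "submit") {
      // Earlier messages are not visible to whoever invoked the agent.
      return { type: "text", text: turn.result }
    }
  }
  return null // no __submit__ yet: the conversation is still in progress
}

const turns: Turn[] = [
  { kind: "message", text: "Which repo should I review?" },
  { kind: "message", text: "The one from the task description." },
  { kind: "submit", result: "Review complete; feedback posted." },
]
console.log(finalOutput(turns))
```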

Auto-injected tools

When you create an llmAgent, the runtime merges additional tools into your tools set that you don’t need to declare yourself:
| Tool / Set | When | What it does |
| --- | --- | --- |
| userInterfaceTools | Always | Prompt the user, send notifications |
| guild_get_task_workspace_agents | Always | Lets the LLM discover workspace agents |
| Workspace agents | When useWorkspaceAgents is true (default) | Every agent installed in the workspace is exposed as a callable tool |
| __submit__ | When mode is "multi-turn" | The tool the LLM calls to end the conversation and return its final result |
Because userInterfaceTools is already injected, you only need to declare agent-specific tools in tools. Include guildTools explicitly if your agent calls platform endpoints (workspace context, credential requests, etc.) beyond workspace agent discovery.
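The merge behavior can be pictured with plain object spreads. A sketch under the assumption that "merged on top" means auto-injected names win on collision; the tool sets below are stand-ins, not the real ones:

```typescript
// Sketch of tool merging: user-declared tools first, auto-injected
// tools spread on top, so injected names win if there is a collision.
type ToolSet = Record<string, { description: string }>

// Stand-in for the tools you declare yourself.
const userTools: ToolSet = {
  github_issues_get: { description: "Read a GitHub issue" },
}

// Stand-in for what the runtime injects.
const autoInjected: ToolSet = {
  guild_get_task_workspace_agents: { description: "Discover workspace agents" },
  __submit__: { description: "End a multi-turn conversation" },
}

const effectiveTools: ToolSet = { ...userTools, ...autoInjected }
console.log(Object.keys(effectiveTools))
```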

Parameters

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| description | string | — | Shown to humans and to other LLMs when choosing which agent to invoke |
| tools | ToolSet | — | Your agent-specific tool set. Auto-injected tools are merged on top |
| systemPrompt | string | — | The system prompt that defines the agent’s behavior |
| mode | "one-shot" \| "multi-turn" | "one-shot" | Whether the agent continues interacting with the user after its first reply |
| useWorkspaceAgents | boolean | true | When true, fetches every workspace agent at start-up and exposes them as callable tools |

Selecting specific tools

Every tool you include is described in the LLM’s prompt, which costs tokens and gives the model more options to choose from. A smaller, focused tool set means lower cost per turn and less chance of the LLM calling a tool you didn’t intend — for example, an agent that only reads GitHub issues doesn’t need write tools like github_issues_create_comment. Use pick to include only the tools you need from a tool set. See Selecting specific tools in the Tool sets reference.
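As a sketch of the idea, here is a generic pick written for illustration (see the Tool sets reference for the SDK's actual helper; the GitHub tool set below is a stand-in):

```typescript
// Minimal pick: keep only the named keys from a tool set.
function pick<T extends Record<string, unknown>, K extends keyof T>(
  toolSet: T,
  names: K[],
): Pick<T, K> {
  const result = {} as Pick<T, K>
  for (const name of names) {
    if (name in toolSet) result[name] = toolSet[name]
  }
  return result
}

// Stand-in GitHub tool set with read and write tools.
const gitHubTools = {
  github_issues_get: { description: "Read an issue" },
  github_issues_list: { description: "List issues" },
  github_issues_create_comment: { description: "Write a comment" },
}

// Keep only the read tools; the write tool is excluded.
const readOnly = pick(gitHubTools, ["github_issues_get", "github_issues_list"])
console.log(Object.keys(readOnly))
```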

Next steps

  • Coded agents — Build deterministic TypeScript agents.
  • Task object — Access platform services from inside an agent.
  • Tool sets — See all available tool sets.