An llmAgent pairs a system prompt with a tool set. You define what the agent knows and what it can do — the LLM figures out how to do it. This is the simplest way to build a Guild agent. No TypeScript logic required — just a prompt and tools.

Example

import { guildTools, llmAgent } from "@guildai/agents-sdk"
import { gitHubTools } from "@guildai-services/guildai~github"

const systemPrompt = `
You are a code review assistant.

When given a pull request, retrieve the latest changes
using the GitHub tools and provide helpful feedback.
`

export default llmAgent({
  description: "Reviews pull requests and provides feedback.",
  tools: { ...gitHubTools, ...guildTools },
  systemPrompt,
})

Input and output

Every llmAgent uses the same fixed schemas:
// Input: the initial prompt with task details
type Input = { type: "text"; text: string }

// Output: the agent's final response
type Output = { type: "text"; text: string }
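
To make the shapes concrete, here is a minimal sketch that declares these two fixed schemas as TypeScript types and builds an input message; the `makeInput` helper is hypothetical and not part of the SDK.

```typescript
// The fixed message shapes used by every llmAgent, per the schemas above.
type Input = { type: "text"; text: string };
type Output = { type: "text"; text: string };

// Hypothetical helper: wrap a task description in the Input shape.
function makeInput(text: string): Input {
  return { type: "text", text };
}
```

The agent's final response arrives in the same `{ type: "text", text }` shape.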

Execution modes

llmAgent supports two modes, set via the mode option:

mode: "one-shot" | "multi-turn" (default: "one-shot")
  • "one-shot" — The agent processes the input, returns a single response, then terminates.
  • "multi-turn" — The agent continues interacting with the user until it calls the __submit__ tool to signal task completion.
For example, a multi-turn agent:
export default llmAgent({
  description: "An agent that can have back-and-forth conversations.",
  tools: {},
  systemPrompt:
    "Help users interactively. Continue asking questions until you have all the information you need.",
  mode: "multi-turn",
})
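
To illustrate the multi-turn termination rule, here is a simplified sketch of the loop semantics. This is not the SDK's actual runtime, and the `Turn` type and `runMultiTurn` function are invented for illustration: the point is only that the conversation continues until a __submit__ tool call appears.

```typescript
// Illustrative only: a multi-turn agent keeps exchanging messages until
// the model calls the __submit__ tool to signal task completion.
type Turn = { toolCall?: string; text: string };

function runMultiTurn(turns: Turn[]): string {
  for (const turn of turns) {
    if (turn.toolCall === "__submit__") {
      return turn.text; // task complete: this is the final response
    }
    // otherwise the conversation continues with the next user turn
  }
  return turns[turns.length - 1]?.text ?? "";
}
```

In one-shot mode, by contrast, the first response is always the final one.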

Tool recommendations

  • userInterfaceTools is included automatically — no need to add it.
  • Include guildTools if the agent uses tools that require authorization (e.g., GitHub access), so it can request credentials when needed.

Selecting specific tools

Use pick to include only the tools you need. See Selecting specific tools in the SDK reference.
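
As a sketch of what tool selection looks like, here is a generic pick-style helper that keeps only named keys from a tool set. The SDK's actual `pick` signature may differ; see the SDK reference for the real API.

```typescript
// Hypothetical pick-style helper: select a subset of a tool set by name.
function pick<T extends object, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const k of keys) {
    out[k] = obj[k]; // copy only the requested entries
  }
  return out;
}

// Usage sketch: include only two GitHub tools in an agent's tool set.
// const tools = pick(gitHubTools, ["getPullRequest", "createComment"]);
```

Narrowing the tool set keeps the model's choices focused and reduces prompt size.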

When to use LLM agents

Situation                                   | Use llmAgent?
--------------------------------------------|----------------------
Task is expressible as a prompt + tools     | Yes
You need deterministic, repeatable behavior | No — use coded agents
You want to minimize LLM token costs        | No — use coded agents