llmAgent pairs a system prompt with a tool set. You define what the agent knows and what it can do — the LLM figures out how to do it.
## Basic example
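A minimal sketch of an agent definition. The field names come from the Parameters table below; the `llmAgent` factory itself is platform-provided, so the `LlmAgentConfig` interface here is an illustrative stand-in, not the real type.

```typescript
// Illustrative config shape; field names mirror the Parameters table.
// The actual llmAgent factory and its types are provided by the platform.
interface LlmAgentConfig {
  description: string;
  tools: Record<string, unknown>;
  systemPrompt: string;
  mode?: "one-shot" | "multi-turn";
  useWorkspaceAgents?: boolean;
}

// A read-only GitHub triage agent: one declared tool, one-shot mode.
const triageAgent: LlmAgentConfig = {
  description: "Reads new GitHub issues and suggests labels",
  systemPrompt: "You triage GitHub issues. Suggest labels; never post comments.",
  tools: { github_issues_list: () => [] },
  mode: "one-shot",
};
```

Note that `userInterfaceTools` is not declared here: as described under "Auto-injected tools", the runtime merges it in for you.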
## Input and output
Every llmAgent uses the same fixed schemas:
## Execution modes
llmAgent supports two modes:
"one-shot"— The agent processes the input, returns a single response, then terminates."multi-turn"— The agent continues interacting with the user until it calls the auto-injected__submit__tool to signal task completion. The argument it passes to__submit__becomes the agent’s final output (the rest of the conversation is not visible to whoever invoked the agent).
## Auto-injected tools
When you create an llmAgent, the runtime merges additional tools into your tool set, so you don't need to declare them yourself:
| Tool / Set | When | What it does |
|---|---|---|
| `userInterfaceTools` | Always | Prompt the user, send notifications |
| `guild_get_task_workspace_agents` | Always | Lets the LLM discover workspace agents |
| Workspace agents | When `useWorkspaceAgents` is `true` (default) | Every agent installed in the workspace is exposed as a callable tool |
| `__submit__` | When `mode` is `"multi-turn"` | The tool the LLM calls to end the conversation and return its final result |
Because `userInterfaceTools` is already injected, you only need to declare agent-specific tools in `tools`.
Include guildTools explicitly if your agent calls platform endpoints (workspace context, credential requests, etc.) beyond workspace agent discovery.
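Since auto-injected tools are merged on top of your declared set, a name collision resolves in favor of the runtime's tool. A sketch of that merge order, assuming it behaves like an object spread with the auto-injected set last:

```typescript
// Assumption: merge semantics resemble an object spread with the
// auto-injected tools last, so they win on name collisions.
const agentTools = {
  summarize: () => "yours",
  prompt_user: () => "yours", // accidentally shadows a runtime tool name
};
const autoInjected = {
  prompt_user: () => "runtime",
};

const merged = { ...agentTools, ...autoInjected };
```

Here `merged.prompt_user()` resolves to the runtime's version while `merged.summarize()` stays yours, which is why shadowing an injected tool name has no effect.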
## Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| `description` | `string` | — | Shown to humans and to other LLMs when choosing which agent to invoke |
| `tools` | `ToolSet` | — | Your agent-specific tool set; auto-injected tools are merged on top |
| `systemPrompt` | `string` | — | The system prompt that defines the agent's behavior |
| `mode` | `"one-shot"` \| `"multi-turn"` | `"one-shot"` | Whether the agent continues interacting with the user after its first reply |
| `useWorkspaceAgents` | `boolean` | `true` | When `true`, fetches every workspace agent at start-up and exposes each as a callable tool |
## Selecting specific tools
Every tool you include is described in the LLM's prompt, which costs tokens and gives the model more options to choose from. A smaller, focused tool set means lower cost per turn and less chance of the LLM calling a tool you didn't intend — for example, an agent that only reads GitHub issues doesn't need write tools like `github_issues_create_comment`.
Use `pick` to include only the tools you need from a tool set. See Selecting specific tools in the Tool sets reference.
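A sketch of what narrowing a tool set looks like. The real `pick` helper's signature may differ; the `pick` implementation and the `githubTools` set below are illustrative stand-ins.

```typescript
// Pick-style helper (illustrative): keep only the named keys of a tool set.
function pick<T extends object, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const k of keys) out[k] = obj[k];
  return out;
}

// Hypothetical tool set with both read and write tools.
const githubTools = {
  github_issues_list: () => "...",
  github_issues_get: () => "...",
  github_issues_create_comment: () => "...",
};

// Read-only agent: expose only the read tools, dropping the write tool.
const readOnlyTools = pick(githubTools, ["github_issues_list", "github_issues_get"]);
```

The narrowed set costs fewer prompt tokens per turn and makes an unintended `github_issues_create_comment` call impossible rather than merely unlikely.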
## Next steps
- Coded agents — Build deterministic TypeScript agents.
- Task object — Access platform services from inside an agent.
- Tool sets — See all available tool sets.