
Every agent has access to task.llm for making language model calls. The provider and model are configured at the workspace level — your agent code doesn’t need to specify them.

Basic usage

const result = await task.llm.generateText({
  prompt: "Summarize this text...",
})

// result.text contains the model's response
console.log(result.text)

Structured generation

Pass a Zod schema to get typed, validated output:

import { z } from "zod"

const result = await task.llm.generateText({
  prompt: "Extract the key details from this issue...",
  schema: z.object({
    severity: z.enum(["low", "medium", "high"]),
    summary: z.string(),
    affectedFiles: z.array(z.string()),
  }),
})

// result is typed according to your schema
console.log(result.severity) // "high"
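Once validated, the result can be handled like any plain object of that shape. A minimal sketch of downstream handling, where the routeIssue function and its routing rule are illustrative and not part of the task.llm API:

```typescript
// The shape produced by the schema above, written out as a plain type.
type IssueDetails = {
  severity: "low" | "medium" | "high"
  summary: string
  affectedFiles: string[]
}

// Illustrative routing based on the validated fields; because the
// schema guarantees the shape, no defensive parsing is needed here.
function routeIssue(issue: IssueDetails): string {
  if (issue.severity === "high") {
    return `escalate: ${issue.summary} (${issue.affectedFiles.length} files)`
  }
  return `backlog: ${issue.summary}`
}
```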

Best practices

  • Cache results. Store the return value of generateText() in a variable if you need it more than once. Each call costs tokens.
  • Be specific in prompts. Clear, detailed prompts produce better results and reduce the need for follow-up calls.
  • Use schemas for structured data. When you need specific fields, pass a schema rather than parsing free-form text.
  • Keep prompts focused. One clear task per call is better than a complex multi-part prompt.
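The first practice above, storing the return value rather than calling twice, can be sketched as follows. The generateText stub and callCount counter are illustrative stand-ins for the real task.llm call:

```typescript
// Illustrative stub standing in for task.llm.generateText; in a real
// agent, every call here would spend tokens.
let callCount = 0
const generateText = async (opts: { prompt: string }): Promise<{ text: string }> => {
  callCount += 1
  return { text: `summary of ${opts.prompt.length} chars` }
}

async function summarizeOnce(input: string): Promise<[string, string]> {
  // Store the result once...
  const result = await generateText({ prompt: input })
  // ...then reuse result.text wherever it is needed, instead of
  // calling generateText() a second time with the same prompt.
  return [result.text, result.text.toUpperCase()]
}
```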

Configuration

The model and provider are configured at the workspace level, not in agent code. This means:
  • Your agent works with any model the workspace is configured to use
  • Model changes don’t require agent code changes
  • Different workspaces can use different models with the same agent
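One way to picture this separation: agent code only ever references task.llm, so the same function runs unchanged whichever model the workspace selects. A sketch with a hypothetical triageIssue agent function (the function name, parameter shape, and prompt are illustrative, not part of the API):

```typescript
// Hypothetical agent function. Note there is no model or provider
// name anywhere in the code: only task.llm, which the workspace
// wires to whatever model it is configured with.
type TaskLike = {
  llm: { generateText(opts: { prompt: string }): Promise<{ text: string }> }
}

async function triageIssue(task: TaskLike, issueBody: string): Promise<string> {
  const result = await task.llm.generateText({
    prompt: `Summarize this issue in one sentence:\n${issueBody}`,
  })
  return result.text
}
```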