Every agent has access to task.llm for making language model calls. The provider and model are configured at the workspace level — your agent code doesn’t need to specify them.
Basic usage
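A minimal sketch of a call. The exact signature of `generateText()` is an assumption here (a prompt string in, a promise of the model's text out), and the `task` object is assumed to be injected by the agent runtime:

```typescript
// Sketch only: `task` is assumed to be provided by the agent runtime,
// and this signature is illustrative, not the confirmed API.
declare const task: {
  llm: {
    generateText(prompt: string): Promise<string>;
  };
};

export async function summarize(ticket: string): Promise<string> {
  // One focused prompt per call; the workspace decides which model runs it.
  return task.llm.generateText(
    `Summarize this support ticket in one sentence:\n${ticket}`
  );
}
```

Note that nothing in the agent code names a provider or model; that stays in workspace configuration.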
Structured generation
Pass a Zod schema to get typed, validated output:
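A hypothetical example. It assumes `generateText()` accepts a `schema` option and resolves to data parsed against that schema; the option name and return shape are illustrative, not the confirmed API:

```typescript
import { z } from "zod";

// Assumed runtime-injected context; the real call and option names may differ.
declare const task: {
  llm: {
    generateText<T>(opts: {
      prompt: string;
      schema: z.ZodType<T>;
    }): Promise<T>;
  };
};

// Describe the fields you want back.
const Ticket = z.object({
  title: z.string(),
  priority: z.enum(["low", "medium", "high"]),
  tags: z.array(z.string()),
});

export async function classify(text: string) {
  // The result is typed and validated against the schema,
  // so no free-form text parsing is needed.
  return task.llm.generateText({
    prompt: `Extract a title, priority, and tags from:\n${text}`,
    schema: Ticket,
  });
}
```
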
Best practices
- Cache results. Store the return value of generateText() in a variable if you need it more than once. Each call costs tokens.
- Be specific in prompts. Clear, detailed prompts produce better results and reduce the need for follow-up calls.
- Use schemas for structured data. When you need specific fields, pass a schema rather than parsing free-form text.
- Keep prompts focused. One clear task per call is better than a complex multi-part prompt.
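The caching advice above can be sketched as follows (again assuming a runtime-provided `task` context with an illustrative `generateText()` signature):

```typescript
// Sketch only: `task` and this signature are assumptions for illustration.
declare const task: {
  llm: {
    generateText(prompt: string): Promise<string>;
  };
};

export async function buildReport(changelog: string) {
  // Call the model once and store the result;
  // each generateText() call costs tokens.
  const summary = await task.llm.generateText(
    `Summarize this changelog in one sentence:\n${changelog}`
  );

  // Reuse the cached value instead of calling the model again.
  return {
    headline: summary,
    body: `Summary: ${summary}\n\nFull changelog:\n${changelog}`,
  };
}
```
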
Configuration
The model and provider are configured at the workspace level, not in agent code. This means:
- Your agent works with any model the workspace is configured to use
- Model changes don’t require agent code changes
- Different workspaces can use different models with the same agent