Agents

Agents are the core building blocks of Gen8. Each agent is an AI assistant tailored to specific tasks, equipped with custom instructions, tools, and access to knowledge.

What is an Agent?

An agent brings together several elements that shape how it interacts with users. Custom instructions tell the AI how to behave and respond. Tools give it capabilities like searching documents, creating files, or connecting to external services. Knowledge provides access to your uploaded documents and data. And context supplies information about projects, users, and your organization that helps the AI give relevant answers.

When these elements come together, you get an AI assistant that understands your specific needs and can take meaningful actions—not just a generic chatbot.

Types of Conversations

Agents support different conversation styles depending on how you want to interact with them.

Regular Chat is the standard back-and-forth conversation most people expect. It works well for customer support, general assistance, and interactive workflows where context builds over multiple exchanges.

Transcribe Mode processes each uploaded file separately rather than as part of a continuous conversation. This is ideal for batch processing tasks like analyzing multiple images, extracting data from a stack of documents, or reviewing content at scale.
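The per-file isolation described above can be sketched as a simple batch loop. This is only an illustration of the semantics, not Gen8's API: `extract_data` is a hypothetical stand-in for whatever work the agent performs on each file.

```python
def extract_data(file_text: str) -> dict:
    # Hypothetical per-file task: take the first line as a title and count words.
    lines = file_text.strip().splitlines()
    return {"title": lines[0] if lines else "", "words": len(file_text.split())}

def transcribe_batch(files: dict[str, str]) -> dict[str, dict]:
    # Each file is processed independently -- no conversation state carries
    # over from one file to the next, unlike a regular chat.
    return {name: extract_data(text) for name, text in files.items()}

results = transcribe_batch({
    "a.txt": "Invoice 42\ntotal: 19.99",
    "b.txt": "Receipt\nitem coffee",
})
```

Because every file starts from a clean slate, the order of the files never changes any individual result, which is what makes this mode suitable for processing at scale.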

Webhook-Only agents respond exclusively to external triggers rather than direct user input. They're designed for automated workflows, system integrations, and scheduled tasks where another system initiates the conversation.
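To make the trigger-driven flow concrete, here is a minimal handler sketch. The payload shape (`event`, `data`) and the event name are assumptions for illustration, not Gen8's actual webhook schema.

```python
def handle_webhook(payload: dict) -> dict:
    # A webhook-only agent never sees direct user messages; every run starts
    # from a payload like this, delivered by an external system or a schedule.
    event = payload.get("event")
    if event == "ticket.created":
        # Hypothetical reaction: kick off triage for the new ticket.
        return {"action": "triage", "ticket_id": payload["data"]["id"]}
    return {"action": "ignore"}
```

The key property is that the conversation is always initiated by the payload; the agent's instructions decide what to do with it.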

Inline Agents provide quick, single-response interactions. They work well for simple questions, quick lookups, and embedded widgets where you need a fast answer without an ongoing conversation.

Agent Components

Every agent is built from several configurable components that determine its behavior.

The System Prompt forms the foundation of how your agent behaves. This is where you define the AI's role and personality, set response guidelines, and establish boundaries and constraints. Think of it as the instruction manual the AI follows for every interaction.
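As a rough sketch of how those three concerns might read in practice, here is an illustrative system prompt. The company name and the wording are invented for the example; only the structure (role, guidelines, boundaries) comes from the description above.

```python
# Hypothetical system prompt; field contents are illustrative, not Gen8's schema.
system_prompt = "\n".join([
    "You are a billing support assistant for Acme Inc.",       # role and personality
    "Answer in at most three short paragraphs.",               # response guidelines
    "Never quote prices that are not in the knowledge base.",  # boundaries and constraints
])
```

Keeping each concern on its own line makes the prompt easy to audit and edit as the agent evolves.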

The User Prompt provides initial context when a conversation starts. It can include default questions or instructions, dynamic snippets that personalize the experience, and project context that gives the AI relevant background.

Tools extend what your agent can do beyond conversation. They enable the AI to search knowledge bases, create and edit documents, access integrations with external services, and perform many other actions.

Building Blocks are reusable prompt components you can share across multiple agents. When you update a building block, every agent using it gets the updated content—making it easy to maintain consistency and roll out changes organization-wide.
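The update-once, propagate-everywhere behavior can be sketched with a tiny in-memory registry. Gen8 manages blocks for you; the `{{block:...}}` marker syntax and `render_prompt` helper here are hypothetical, chosen only to show the sharing semantics.

```python
# Hypothetical registry of reusable prompt components.
blocks = {"disclaimer": "Responses are not legal advice."}

def render_prompt(parts: list[str]) -> str:
    # A part like "{{block:disclaimer}}" is resolved at render time, so an
    # update to the block is picked up by every agent that references it.
    out = []
    for part in parts:
        if part.startswith("{{block:") and part.endswith("}}"):
            out.append(blocks[part[len("{{block:"):-2]])
        else:
            out.append(part)
    return "\n".join(out)

support_agent = ["You are a support assistant.", "{{block:disclaimer}}"]
# Updating the block once changes what every referencing agent renders.
blocks["disclaimer"] = "Responses are not legal advice. See docs for details."
```

The point is that agents store a reference, not a copy, so consistency is maintained centrally.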

Snippets inject dynamic content into prompts at runtime. They can include user information like names and preferences, project context, and organization settings—making each conversation feel personalized and relevant.
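Runtime substitution of this kind can be illustrated with Python's standard `string.Template`. The `$`-placeholder syntax and field names below are assumptions for the sketch; Gen8's actual snippet markers may differ.

```python
from string import Template

# Hypothetical snippet placeholders resolved at conversation start.
prompt = Template("Hello $user_name, you are working on $project.")

context = {"user_name": "Ada", "project": "the Q3 launch plan"}
rendered = prompt.substitute(context)
```

Each conversation gets its own `context`, so the same agent definition produces personalized prompts for every user and project.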

Getting Started

Ready to build your first agent? Start with Creating Agents to learn the fundamentals. Once you're comfortable with the basics, explore Building Blocks to create reusable components and Snippets to add dynamic content. When you're ready to fine-tune performance, Model Presets lets you configure AI options for different use cases.

Best Practices

The most successful agents start simple and grow more sophisticated over time. Begin with basic instructions that cover the core use case, then add complexity as you learn what works and what doesn't. Test frequently—both the expected scenarios and the edge cases.

Building blocks help you maintain consistency across agents. Extract common content like company policies, response formats, or standard disclaimers into blocks, then reference them wherever needed. When something changes, you update it once and every agent stays current.

Snippets make your agents feel personal and contextual. Use them to greet users by name, reference the current project, or pull in organization-specific settings. The more relevant the context, the more useful the responses.

Finally, test thoroughly before deploying. Try edge cases that might confuse the AI, verify that tools work as expected, and confirm that snippets substitute correctly. A few minutes of testing prevents a lot of user frustration.