Our recent post on AI workflow automation illustrated how AI can transform business processes, handle unstructured data, and free up teams to focus on high-value tasks. Today, we're taking this concept further by diving into AI agentic workflows.

Imagine workflows that don't just follow the rules, but think, adapt, and make context-aware decisions. This is the power of AI agents in automation.

In this guide, we'll explore:

  • What makes AI agentic workflows different from traditional automation;
  • Key components and design patterns for building effective AI agents;
  • Practical examples using n8n, a powerful AI-native workflow tool.

Let's explore the next frontier in automation!

What is an AI agentic workflow?

An AI agentic workflow combines AI agents with traditional workflow automation. Unlike standard workflows that follow predefined steps, AI agentic workflows employ intelligent agents to make decisions, adapt to new situations, and autonomously achieve goals.

Key characteristics of AI agentic workflows:

  • Autonomy: Agents can operate independently, making decisions without constant human input.
  • Adaptability: They adjust their actions in response to changes in the environment or new information.
  • Goal-oriented: Agents work toward specific objectives rather than simply following a set of rules.
  • Learning capability: Many AI agents can improve their performance over time.
  • Scalability: As agents learn, they can tackle increasingly complex tasks without major reprogramming.

AI agentic workflows leverage large language models (LLMs) as their "brain power", allowing them to understand complex instructions, reason about tasks, and generate appropriate responses or actions.

Traditional workflow automation vs AI-enhanced vs AI agentic

To understand the power of AI agentic workflows, let's compare them to traditional approaches:

Traditional workflow automation:

  • Follows predefined, rigid steps
  • Handles structured data well
  • Limited ability to adapt to new situations
  • Requires manual updates to make changes

AI-enhanced workflows:

  • Use AI for specific tasks within a predefined workflow
  • Can handle unstructured data
  • Limited decision-making capabilities
  • Still follow a largely linear process

AI agentic workflows:

  • Dynamically adapt the workflow depending on context and goals
  • Handle both structured and unstructured data
  • Manage multi-step, non-linear processes
  • Make complex decisions autonomously
  • Can learn and improve over time
💡
In n8n, a workflow automation tool, you can create all three types of workflows. AI agentic workflows combine traditional nodes, AI-powered nodes, and LangChain Agent nodes.

In the following sections, we’ll explore exactly what these components are and how to create complex AI-driven processes that go beyond traditional automation capabilities.

Key components of AI agentic workflows

AI agentic workflows are built on top of AI agents, combining them with workflow automation tools to create intelligent, adaptive systems. The core components of these workflows stem from the structure of AI agents themselves.

In our comprehensive guide to AI agents for developers, we've covered the fundamental elements of AI agents in detail, including:

  • sensors to gather information
  • actuators to perform actions
  • a reasoning engine (often an LLM) for decision-making
  • memory systems for storing information
💡
In tools like n8n, these components are seamlessly integrated into a workflow automation environment, forming the foundation of AI agentic workflows. The workflow tool provides the structure and execution environment, while the AI agent brings intelligence and adaptability to the process.

To get a good grip on AI agentic workflows, we need to understand the technological stack that powers them. This tech is rapidly changing as AI agents become more advanced.

A recent article by a group of researchers from the University of California, Berkeley provides an insightful breakdown of the current AI agents landscape.

Here’s a summary of the key layers:

  1. Model serving: LLM inference, whether through cloud APIs or self-hosted engines
  2. Storage:
  • Vector databases: Chroma, Weaviate, Pinecone, Qdrant, Milvus
  • Traditional databases with vector support: Postgres (with pgvector) or Zep
  3. Tools & libraries:
  • Function calling frameworks: OpenAI’s JSON schema
  • Tool providers: Composio, Browserbase, Exa
  4. Agent frameworks:
  • State management
  • Context window structuring
  • Cross-agent communication
  • Memory approaches
  5. Agent hosting and serving:
  • Emerging field for deploying agents as services
Source: https://www.letta.com/blog/ai-agents-stack
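To make the storage layer concrete: the core operation of a vector database is nearest-neighbor search over embeddings. Here's a deliberately simplified sketch in plain Python — the documents and vectors are made up, and real systems like Chroma or pgvector use approximate indexes rather than brute-force scoring:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "index": document -> embedding (vectors are illustrative, not real).
documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "api reference": [0.1, 0.9, 0.2],
    "office address": [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    # Rank all documents by similarity to the query and return the top k.
    ranked = sorted(documents,
                    key=lambda d: cosine(query_embedding, documents[d]),
                    reverse=True)
    return ranked[:k]

# A query embedded near the "refund policy" vector retrieves that document.
print(retrieve([0.85, 0.15, 0.05]))  # -> ['refund policy']
```

An agent's memory or RAG step is, at its heart, exactly this lookup, just with real embedding models and an indexed store instead of a dict.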
💡
In n8n, many of these components are abstracted or simplified through LangChain integrations. This allows you to focus on designing the workflow logic while having access to powerful, advanced AI agent technologies.


AI agentic workflow examples

Developers are familiar with design principles like SOLID or KISS, but these have limited applicability to AI agentic workflows. That's why we’ll show agentic design patterns using the n8n workflow automation tool.

💡
If you're new to AI workflow automation, n8n's graphical UI and extensive example library make it easy to get started. For more advanced users, n8n allows custom code and functionality extensions in self-hosted environments.

Below are four key design patterns for AI agentic workflows in n8n that we’ve identified during our extensive use of the tool. These patterns are tailored to the unique capabilities of AI agents in a low-code environment:

  1. Chained requests for executing a series of predefined commands to various models in a specific order.
  2. A single agent that maintains state and makes decisions throughout the entire workflow.
  3. Multi-agent with gatekeeper to delegate specialized tasks while maintaining centralized control.
  4. Multi-agent teams that collaborate on complex, multi-step processes with distributed decision-making.

Let's take a closer look at each of these patterns.

Example 1. Chained requests

Chained requests design pattern

The chained requests pattern is the simplest. Essentially, it’s a series of predefined commands to various models (LLMs and others). This pattern is especially useful when you need to process data across multiple stages, each potentially using a different AI model or service.

Key benefits:

  • Rigid logic with flexible components
  • Each step can be refined independently
  • Ideal for well-defined processes, especially for multi-modal content creation

How it works:

In n8n, chained requests are implemented by connecting multiple AI-powered nodes in sequence. Each node performs a specific task and passes its output to the next node in the chain. Standard nodes can appear at any stage of the workflow for data import, simple processing, uploads, etc. This allows for complex workflows that can handle various types of data and tasks.

Characteristics:

  1. Multi-stage processing: Chained requests excel at breaking down complex tasks into a series of smaller, manageable steps. For example, you can chain together audio transcription, text summarization, and data storage steps in a single workflow.
  2. AI model diversity: This pattern allows you to use different AI models for specific tasks within the same workflow. You might employ one model for text generation, another for image creation, and a third to analyze data.
  3. Data transformation: As data moves through the chain, it can be transformed at each step. This is useful for scenarios where the output of one AI model needs to be formatted or processed before input into the next model.
  4. Scalability: Chained requests can handle both simple and complex workflows. You can start with a basic chain and gradually add more steps as your needs grow.
  5. Customization: Each step in the chain can be fine-tuned independently. This allows for precise control over individual processes while maintaining the overall flow of the workflow. Once new models become available, you can swap them and upgrade individual steps.
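Outside n8n, the chained requests pattern boils down to ordinary function composition. The sketch below mirrors the audio-transcription example above; transcribe_audio, summarize_text, and store_record are hypothetical stand-ins for the AI-powered nodes, not real API calls:

```python
# Chained-requests pattern: each step's output feeds the next, mirroring
# AI-powered nodes connected in sequence in an n8n workflow.

def transcribe_audio(audio_path: str) -> str:
    # Stand-in for a speech-to-text model call (e.g. Whisper).
    return f"transcript of {audio_path}"

def summarize_text(text: str) -> str:
    # Stand-in for an LLM summarization call.
    return f"summary: {text[:40]}"

def store_record(summary: str) -> dict:
    # Stand-in for a database insert or spreadsheet append.
    return {"status": "stored", "summary": summary}

def chained_workflow(audio_path: str) -> dict:
    # The "chain": transcription -> summarization -> storage.
    transcript = transcribe_audio(audio_path)
    summary = summarize_text(transcript)
    return store_record(summary)

result = chained_workflow("meeting.mp3")
```

Because each step is an independent function, any one of them can be refined or swapped for a newer model without touching the rest of the chain — the same property the pattern gives you in n8n.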

When to use:

Choose the Chained Requests pattern when your workflow involves multiple, distinct steps that need to be executed in a specific order. This pattern is especially useful when each step requires different AI capabilities or when you need to process data through various stages of transformation and analysis.

Considerations:

  • Ensure each step in the chain is well-defined and outputs data in a format that the next step can use.
  • Implement error handling between steps to manage potential failures in the chain.
  • Be mindful of performance implications, especially for chains involving multiple AI model calls.

Consider using n8n’s built-in tools for data manipulation between AI steps to ensure smooth data flow through the chain.

💡
Here’s a quick overview of the LangChain concepts in n8n. It is useful for understanding the next sections of this article.

Example 2. Single agent

Single agent design pattern

The single agent pattern involves one AI agent that maintains state and makes decisions throughout the entire workflow. This agent typically has a special “scratchpad” memory to retain intermediate information while processing a single request, allowing for more context-aware and consistent decision-making. We will show how to build such agents later in this guide.

Key benefits:

  • Can query various tools as needed
  • Simpler to implement and debug compared to multi-agent systems
  • Able to maintain context across multiple interactions without complex setup

How it works:

In n8n, a single agent is typically implemented using a built-in LangChain node or a custom LangChain Code node that interacts with an LLM. The agent maintains its state either through built-in memory mechanisms or by storing information in a database between interactions. The workflow is designed so that all decisions and actions pass through this central agent.

💡
In certain cases, you may want to limit the information available to an agent. Take a look at this workflow template, where an AI agent has access only to an SQL schema, but not to the actual data.

Characteristics:

  1. Stateful interactions: A single agent can remember previous interactions and use that context to make future decisions. This is especially useful for chatbots or assistants that need to maintain conversation history.
  2. Tool integration: While the agent is singular, it can use multiple tools or APIs to gather information or perform actions. This allows for versatility in handling various tasks without complicating the agent’s core logic.
  3. Adaptability: A single agent can be designed to handle a wide range of tasks or queries, adapting its behavior based on the input it receives and the tools at its disposal.
  4. Centralized logic: All decision-making is centralized within the single agent, which can simplify workflow design and debugging.
  5. Scalable functionality: The capabilities of the agent can be expanded by adding new tools or integrations which it can access.
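In code, a single agent is essentially a loop: consult the reasoning engine, pick a tool, record the observation in the scratchpad, and repeat until a final answer is reached. This sketch fakes the LLM decision with a trivial decide() function so it stays self-contained; in a real agent that call would go to an LLM:

```python
# Single-agent pattern: one agent, a scratchpad, and a tool registry.

def decide(query: str, scratchpad: list) -> tuple:
    # Stand-in for the LLM reasoning step: pick the next action based on
    # the query and everything observed so far.
    if not scratchpad:
        return ("search", query)
    return ("finish", f"answer based on {scratchpad[-1]}")

# Tools the agent can call; real tools would hit APIs or databases.
TOOLS = {"search": lambda q: f"search results for '{q}'"}

def run_agent(query: str) -> str:
    scratchpad = []  # intermediate observations for this single request
    while True:
        action, arg = decide(query, scratchpad)
        if action == "finish":
            return arg
        # Execute the chosen tool and remember its output.
        scratchpad.append(TOOLS[action](arg))

answer = run_agent("weather in Berlin")
```

n8n's Tools Agent node runs this same loop for you: the LangChain node plays the role of decide(), and connected tool nodes populate the TOOLS registry.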

When to use:

Choose the single agent pattern when you need a consistent interface for various tasks or when maintaining context in interactions is crucial. It’s particularly well suited for chatbots, personal assistants, or systems that need to provide coherent responses across diverse queries.

Considerations:

  • Ensure the agent has access to all necessary tools and data sources to handle its responsibilities.
  • Implement robust error handling and fallback mechanisms, as the entire workflow depends on this single agent.
  • Be mindful of the general memory limitations of LLMs and implement strategies to manage long-term context if needed.
  • Consider using n8n memory nodes or external storage solutions to persist agent state for long-running or stateful applications.
  • When integrating with external services (like Telegram or Google Search Console), ensure proper authentication and data handling practices are followed.

Example 3. Multi-agent with gatekeeper

Multi-agent design pattern

The multi-agent with gatekeeper pattern is an evolution of the single agent approach, designed to handle more complex, multi-faceted tasks while maintaining centralized control. In this pattern, a primary AI agent acts as a “gatekeeper,” coordinating and delegating tasks to specialized subordinate agents.

💡
By employing this pattern in a customer support system, you can create sophisticated and scalable AI-driven workflows in n8n. The gatekeeper agent receives incoming user messages and delegates queries to specialized agents for technical support, billing, sentiment analysis, and knowledge retrieval, ultimately providing comprehensive and consistent customer responses.

Key benefits:

  • Centralized control with distributed expertise
  • Ability to handle complex, multi-step processes
  • Can use smaller fine-tuned LLMs for specialized tasks
  • Scalable architecture to add new capabilities

How it works:

In n8n, the gatekeeper is implemented similarly to a single agent, typically using a LangChain node. However, instead of directly performing all tasks, the gatekeeper agent has access to a set of “tools”, which are actually other AI agents, each specialized for specific tasks.

  1. The gatekeeper receives the first input or query.
  2. It analyzes the task and decides which specialist agent(s) to involve.
  3. In certain cases, the gatekeeper can refuse to perform the task, e.g. if the request is irrelevant.
  4. If the request is valid, the gatekeeper formulates instructions for its agent(s).
  5. The specialist agents perform their tasks and return results to the gatekeeper.
  6. The gatekeeper integrates the results and formulates the final response or action.
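The six steps above can be sketched as follows. Here classify() stands in for the gatekeeper's LLM routing decision, and the specialist agents are simple placeholder functions:

```python
# Multi-agent with gatekeeper: a primary agent routes requests to
# specialist agents and integrates their results.

SPECIALISTS = {
    "billing": lambda msg: f"billing agent handled: {msg}",
    "technical": lambda msg: f"technical agent handled: {msg}",
}

def classify(message: str) -> str:
    # Stand-in for the gatekeeper's LLM call; keyword matching fakes
    # the routing decision here.
    if "invoice" in message or "charge" in message:
        return "billing"
    if "error" in message or "bug" in message:
        return "technical"
    return "reject"

def gatekeeper(message: str) -> str:
    topic = classify(message)               # steps 1-2: analyze and route
    if topic == "reject":
        return "Sorry, this request is out of scope."  # step 3: refusal
    result = SPECIALISTS[topic](message)    # steps 4-5: delegate and collect
    return f"Final reply: {result}"         # step 6: integrate the result
```

In n8n, each SPECIALISTS entry would be a separate AI agent exposed to the gatekeeper as a tool, possibly backed by a smaller fine-tuned model.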

Characteristics:

  1. Hierarchical structure: The workflow has a clear hierarchy with the gatekeeper at the top, who manages and coordinates specialist agents.
  2. Specialized expertise: Each subordinate agent can be optimized for a specific task, using fine-tuned models or specialized algorithms.
  3. Flexible scaling: New capabilities can be added by introducing new specialist agents without significantly changing the core workflow.
  4. Optimized resource use: By delegating to specialized agents, the system can use less powerful (and often less expensive) models for specific tasks while reserving more powerful models for complex decision-making.
  5. Improved context management: The gatekeeper can maintain the overall context while the specialist agents can focus on the tasks at hand.

When to use:

Choose the multi-agent with gatekeeper pattern if you need to manage complex workflows that require diverse expertise or when you need to integrate multiple AI capabilities while maintaining a consistent interface and decision-making process.

Considerations:

  • Design clear interfaces between the gatekeeper and specialist agents to ensure smooth communication.
  • Implement error handling at both the gatekeeper and specialist agent levels.
  • Consider using different LLMs or AI models for different agents based on their specific requirements.
  • Be mindful of the increased complexity in debugging and monitoring multi-agent systems.

Use n8n’s flow control nodes (Filter, IF, Loop, Merge) to effectively manage routing between the gatekeeper and specialist agents.

💡
We’ve also written an article on LangChain alternatives to give a better overview of the agentic landscape.

Example 4. Multi-agent teams

Multi-agent teams design pattern

Multi-agent teams represent the most complex and flexible AI agentic workflow pattern. In this approach, several AI agents work together on sophisticated tasks, with various possible interaction structures.

💡
The current n8n workflow gallery doesn't yet offer ready-made templates for this pattern, so this section is a bit more theoretical. If you are already familiar with the previous agentic design patterns, the next logical step should be quite straightforward.

Key benefits:

  • Highly flexible and scalable architecture
  • Can handle complex, multi-step processes
  • Allows the integration of different LLMs and prompting styles
  • Enables different problem-solving approaches

How it works:

In n8n, multi-agent teams are implemented with several LangChain nodes or custom AI integrations. The key lies in the connections and communication between agents. Common structures are:

  1. Mesh network: Agents communicate freely with every other agent.
  2. Hierarchical tree: Multiple layers of gatekeeper agents, each managing specialist agents.
  3. Hybrid approaches: Combination of mesh and hierarchical structures for optimal flexibility.
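A hierarchical tree, for instance, can be sketched as gatekeepers nested under a top-level coordinator. All agents here are placeholder functions standing in for LLM-backed nodes, and the keyword routing stands in for LLM decisions:

```python
# Two-layer hierarchical team: a coordinator delegates to two gatekeepers,
# each of which manages its own specialist agents.

def make_gatekeeper(name: str, specialists: dict):
    def gatekeeper(task: str) -> str:
        # Pick the first specialist whose keyword matches; a real
        # gatekeeper would ask an LLM to route.
        for keyword, agent in specialists.items():
            if keyword in task:
                return agent(task)
        return f"{name}: no specialist matched"
    return gatekeeper

support = make_gatekeeper(
    "support", {"refund": lambda t: f"refund processed for: {t}"})
content = make_gatekeeper(
    "content", {"draft": lambda t: f"draft written for: {t}"})

def coordinator(task: str) -> str:
    # Top of the tree: choose which sub-team owns the task.
    team = support if "refund" in task else content
    return team(task)
```

A mesh network would instead let any agent call any other directly; the hybrid approaches combine both, e.g. free communication within a sub-team but hierarchical hand-offs between teams.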

Characteristics:

  1. Distributed decision making: Decision-making spread across multiple agents.
  2. Complex interactions: Agents can pass tasks, request information, or collaborate.
  3. Model diversity: Different agents can use various LLMs or AI models.
  4. Adaptive task allocation: The team can reconfigure based on the task at hand.

When to use:

Choose multi-agent teams for the most complex workflows that require varying levels of expertise and adaptability. They are ideal for large-scale systems that integrate multiple departments or functions.

Considerations:

  • Implement robust communication protocols between agents.
  • Design clear mechanisms for task assignment and conflict resolution.
  • Prepare for increased complexity in monitoring and debugging.

Utilize n8n’s advanced flow control (e.g. the Merge and Compare Datasets nodes) and custom function nodes to manage complex interactions.

💡
Multi-agent systems can emerge organically, even without deliberate planning. This happens when API interfaces abstract other agentic systems. For example, a simple customer engagement workflow might unknowingly interact with AI-powered services from different departments (marketing, sales, IT), creating a de facto multi-agent system.

How to build an AI agentic workflow with n8n?

In our previous video tutorial, we looked at how to create a Telegram bot with a custom workflow tool.

Old workflow example with a custom workflow tool

Since then, n8n has introduced two powerful kinds of tools for LangChain AI agents: the HTTP Request tool and nodes as tools. We’ll revisit a familiar example and demonstrate how different tool nodes affect agentic workflows.

A similar bot with different types of tool nodes
💡
We specifically use Telegram because it has a developer-friendly ecosystem. You can jump-start in just a few minutes and share tangible results with your colleagues or clients. However, a similar approach applies even if you are building agentic workflows for other services, be it a CRM, a helpdesk platform, or another IM app.

So, let’s set up and test the workflow:

Step 1. Telegram trigger to listen for incoming events

This node listens for incoming Telegram messages, serving as the entry point for our workflow. If you’ve never created Telegram bots before, check out our step-by-step guide.

Step 2. AI agent to process incoming messages and interact with users

Connect the output of the Trigger node to the AI Agent node. This is the core of our workflow: this node processes incoming messages and decides on appropriate actions.

  • Select Tools Agent as the Agent type (some other agent types may not be capable of working with tools).
  • For the Prompt drop-down, select Define below and enter the expression: {{ $json.message.text }}. This sends the user's text from Telegram to the agent.
  • To improve the agent's responses, add a System message option and provide the following prompt: You are a helpful assistant. You are communicating with a user named {{ $json.message.from.first_name }}. Address the user by name every time. If the user asks for an image, always send the link to the image in the final reply. You may need to adjust this prompt for your specific case.

Now, let’s make the agent functional.

Step 3. Chat model to provide the ‘brain’ for the agent

Connect the OpenAI Chat Model node to the Agent. This provides the language model – the “brain” of the whole agent. Just pick the exact model (e.g. gpt-4o). Leave most settings at their defaults, except these:

  • Set Sampling Temperature to 0.7
  • Set Frequency Penalty to 0.2

Step 4. Window buffer memory to store the conversation history

Now, let’s add memory to store the conversation history of a particular user. Window buffer memory stores the last few interactions, allowing the agent to maintain context. To make this node user-specific, open the configuration screen and provide the following settings:

  • Set the Session Key as an expression and type in the value: chat_with_{{ $('Listen for incoming events').first().json.message.chat.id }}. This way, the bot’s conversations with different users will remain separate.
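Conceptually, a window buffer memory keyed by session behaves like this sketch: the last N exchanges are kept per session key, mirroring the chat_with_<chat_id> convention above. Names and the window size are illustrative, not n8n internals:

```python
from collections import defaultdict, deque

WINDOW = 3  # number of recent exchanges to keep per session

# session key -> bounded queue of (user message, bot reply) pairs;
# old exchanges fall off automatically once the window is full.
memory = defaultdict(lambda: deque(maxlen=WINDOW))

def remember(chat_id: int, user_msg: str, bot_msg: str):
    memory[f"chat_with_{chat_id}"].append((user_msg, bot_msg))

def context(chat_id: int):
    # What the agent sees as conversation history for this user.
    return list(memory[f"chat_with_{chat_id}"])

remember(111, "hi", "Hello, Alice!")
remember(222, "hi", "Hello, Bob!")
# Conversations with different users stay separate: context(111)
# contains only Alice's exchange, context(222) only Bob's.
```

Because the session key includes the chat ID, two users talking to the bot at the same time never see each other's history.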

Now, let’s configure the tool nodes.

Step 5. HTTP Request tool to generate Dall-e 3 images

With the HTTP Request tool node, n8n users can connect to thousands of services in one go. We use this node to generate images with the Dall-E 3 model.

Here are the configuration tips:

  • Make a POST request to the URL: https://api.openai.com/v1/images/generations
  • Provide a Description: Call this tool to request the Dall-E 3 model when the user asks to draw something. If you get a response from this tool, forward it to the Telegram tool.
  • Re-use your predefined OpenAI credentials for the Authentication.
  • Activate the Send Body toggle and provide the following parameters:
    • Add a model parameter with the dall-e-3 value
    • Add a prompt parameter and select By Model (and is required)

This way, we indicate that the prompt should be populated by the LLM, whereas the model parameter always has a predefined value.
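For reference, here is roughly the request this tool ends up making, expressed in Python. The endpoint and body fields follow OpenAI's image generation API; the prompt value is what the agent fills in at run time, and OPENAI_API_KEY is assumed to be set in the environment:

```python
import json
import os

def build_dalle_request(prompt: str) -> dict:
    # Assemble the same POST the HTTP Request tool sends: a fixed model
    # parameter plus the LLM-populated prompt.
    return {
        "url": "https://api.openai.com/v1/images/generations",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": "dall-e-3", "prompt": prompt}),
    }

req = build_dalle_request("a watercolor fox")
# To actually send it:
# requests.post(req["url"], headers=req["headers"], data=req["body"])
```

Seeing the raw request makes it clear what the tool's Description and parameters map to: the agent only ever decides the prompt, while everything else is pinned down in the node configuration.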

Step 6. Telegram tool node to send back an uncompressed image

Now, we need to add a tool to send back the image. n8n has recently introduced the nodes-as-tools feature: you can now connect about 20 of the most popular n8n nodes directly to the agent. This approach requires a slightly different configuration.

A new $fromAI() expression simplifies the agent tool configuration

In this example, do the following:

  • Keep the Tool Description at its default Set Automatically value.
  • Select the Message Resource and the Send Document Operation.
  • Make the Chat ID value an expression and provide {{ $('Listen for incoming events').first().json.message.from.id }}. This way, we always reply to the same user who sent the incoming message.

As for the Document value, Telegram allows direct file URLs. Make this value an expression and enter the text: {{ $fromAI("url", "a valid url of an image", "string", " ") }}.

💡
The $fromAI() function is a new way of defining tool parameters. It’s a compact form of specifying which JSON key is expected and giving the LLM cues about what it should contain. In our case, we expect the `url` JSON key with the image URL as its value. The exact implementation is hidden under the hood, so you can focus on building custom agentic workflows instead of grappling with JSON schemas and other LangChain nuances.

Step 7. Send a final reply to provide the last output from the agent node

LangChain agents always provide a final reply. You can pass this output to a new agent (for example, if you are building a multi-agent workflow). For now, let’s just add a simple Telegram node and send the AI agent’s response back to the user.

The AI agent sends back the final reply via a second Telegram node
  • Select the Send Message Operation
  • Set the Chat ID expression to {{ $('Listen for incoming events').first().json.message.from.id }}
  • Set the Text expression to {{ $json.output }}. This is the standard JSON key for AI agent outputs in n8n

Step 8. Test the workflow

  • Activate the workflow
💡
Remember! Telegram allows you to register only one webhook per bot. This means you cannot have two active Telegram triggers connected to the same bot.
  • Start a conversation with your Telegram bot
  • Try sending text messages and image requests
  • Observe how the AI agent handles different types of requests
The agent performed 5 steps, but may have a different sequence of actions depending on the user request

In this screenshot, you can see a typical agentic workflow. The agent always loads a past conversation and sends a request to the LLM. Depending on the output, it will iterate back and forth between tools and LLM until it reaches a final answer.

Below is the chat output that the user receives after submitting a request:

Telegram chat output
Telegram chat output

If you compare this workflow to our previous example with the custom workflow node, you may notice that the new workflow is more flexible.

This is because an Agent makes more LLM requests, decides on the next step several times, and uses tools independently. If you need to improve the Agent’s response time or ensure an exact sequence of actions, you may want to keep the workflow more rigid.

When creating new AI-agentic workflows, make sure you understand the implications of using various patterns described earlier.

On the one hand, using custom workflow tools may work better to streamline the Agent behavior. The Agent calls a single workflow tool and several actions are completed in the predefined way.

On the other hand, connecting several independent tools directly to the Agent may be a better approach when the Agent needs to decide every time which one to use.

💡
This simplified workflow demonstrates the power of combining AI agents with HTTP Request tools and nodes as tools in n8n. You can expand this basic structure to create more complex AI agentic workflows, integrating various services and capabilities.

Wrap up

In conclusion, AI agentic workflows offer a powerful evolution over traditional automation by introducing intelligent, adaptable systems that can make decisions and learn from interactions.

Unlike standard automation, which relies on predefined rules, AI agentic workflows leverage AI models and flexible agent stacks to handle more complex, dynamic tasks. Through the use of design patterns like chained requests, single agents, multi-agent systems with gatekeepers, and multi-agent teams, organizations can tailor their workflows to meet specific needs, whether it’s automating simple processes or managing more intricate, collaborative tasks.

Building AI agentic workflows with n8n enables rapid, practical implementation of AI that integrates seamlessly with existing processes, offering real business value by using multiple agents, custom tools, and RAG capabilities.

What’s next?

To deepen your understanding of AI agentic workflows and expand your automation skills, check out these resources:

  1. Discover alternatives to LangChain to diversify your AI development toolkit.
  2. Explore the top AI tools for business to complement your n8n workflows and boost productivity.
  3. Learn how to run LLMs locally for enhanced privacy and control in your AI workflows.

Ready to start building your own AI agentic workflows? Sign up for n8n cloud and jump-start right away or explore advanced features of the Enterprise edition.