Build Your First Multi-Agent Workflow with agno: Supporting Deep Reasoning and Web Search
In the current wave of artificial intelligence, large language models (LLMs) have demonstrated impressive capabilities. However, a single LLM call often struggles with complex tasks that require planning, information retrieval, and deep analysis. This is where the concepts of Agents and Workflows come into play.
Today, we will use a concrete Python code example to show you how to use the emerging `agno` framework to easily build an automated workflow where multiple agents collaborate, supporting deep reasoning and web search.
Our goal is to create a workflow in which, when a user poses a complex question (e.g., "How to write a good thesis?"), the system automatically does the following:
- Plan: Break down the question into multiple executable steps.
- Research: Use web search tools to collect relevant information for each step.
- Execute: Synthesize all planning and research results, perform logical reasoning, and finally provide a comprehensive and detailed answer.
Let's dive into the code and see how `agno` makes all of this simple.
Core Concept: A Team of Specialized Agents
The core idea of our workflow, `MyFirstWorkflow`, is to simulate the working mode of a human expert team. We define three distinct agents, each with its own responsibility:
- Planner: Responsible for the overall layout and action planning.
- Researcher: Responsible for information gathering and providing factual data.
- Executor: Responsible for synthesis and analysis, drawing final conclusions.
This "division of responsibility" design is key to building advanced AI applications. It makes the process clearer, more controllable, and the output more reliable and in-depth.
Code Deep Dive
Let's analyze the code that implements this workflow step by step.
1. Environment Setup and Model Definition
from agno.workflow.workflow import Workflow
from agno.models.ollama.chat import Ollama
from agno.agent import Agent, RunResponse
from agno.tools.duckduckgo import DuckDuckGoTools
# ... other imports ...
# Define the LLM we use, here using a locally deployed open-source model via Ollama
OllamaLLM = Ollama(
    id="qwen3:14b",
    timeout=60,
    host="http://141.223.121.59:11434",
)
- `agno` framework: We import core components such as `Workflow` and `Agent` from the `agno` library.
- `Ollama` model: The code connects to a locally or privately deployed open-source model (`qwen3:14b`) via `Ollama`, rather than a commercial API. This gives you strong data privacy and flexibility, and `agno` makes integrating local models extremely simple.
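Before wiring the model into a workflow, it can help to confirm that the Ollama endpoint actually responds. The snippet below is a minimal sketch, assuming a local default Ollama install; point the `host` at your own server and adjust the model tag as needed.

```python
# Minimal sanity check: confirm the Ollama endpoint responds before building the workflow.
# The host below is an assumption (a local default install); point it at your own server.
from agno.agent import Agent
from agno.models.ollama.chat import Ollama

llm = Ollama(id="qwen3:14b", timeout=60, host="http://localhost:11434")

# A bare agent with no tools: if this prints a reply, the model connection works.
Agent(name="sanity_check", model=llm).print_response("Reply with one short sentence.")
```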
2. Define the Workflow and Three Main Agents
Our business logic is encapsulated in the `MyFirstWorkflow` class.
class MyFirstWorkflow(Workflow):
    description: str = (
        "Deep reasoning, web search for user needs"
    )
    # ... Agent definitions below ...
a. The Planner
planner = Agent(
name="planner",
role="Planner",
goal="Break down the user's question into multiple task steps",
system_message="Please break down this question into several clear task steps:\n\n{input}.",
model=OllamaLLM,
storage=get_agent_storage("planner"),
add_history_to_messages=True, # Enable history memory
)
- `role` and `goal`: Clearly define the agent's identity and objective.
- `system_message`: This is the key part, essentially the instruction (prompt) given to the LLM. We tell it that whatever the input (`{input}`) is, its job is to break it down into clear task steps.
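One detail the snippets gloss over: `get_agent_storage(...)` is a helper the article never shows. The sketch below is one plausible implementation, assuming agno's SQLite-backed agent storage; the class name and module path differ between agno versions, so treat it as an illustration rather than the author's actual code.

```python
# Hypothetical implementation of the get_agent_storage() helper used in the agent definitions.
# SqliteAgentStorage is assumed here; check the storage classes shipped with your agno version.
from agno.storage.agent.sqlite import SqliteAgentStorage

def get_agent_storage(agent_name: str) -> SqliteAgentStorage:
    # One table per agent in a shared local SQLite file, so each agent
    # keeps its own session history (used together with add_history_to_messages=True).
    return SqliteAgentStorage(
        table_name=f"{agent_name}_sessions",
        db_file="tmp/agents.db",
    )
```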
b. The Researcher
researcher = Agent(
name="researcher",
role="Researcher",
goal="Use web search tools to find key information",
system_message="For the task: {input}\nUse tools to search for relevant materials and summarize the most relevant information.",
model=OllamaLLM,
tools=[DuckDuckGoTools(timeout=10, proxy="socks5://127.0.0.1:7890")],
storage=get_agent_storage("researcher"),
add_history_to_messages=True,
)
- `tools=[DuckDuckGoTools(...)]`: This is what empowers the agent. We equip the researcher with `DuckDuckGoTools`, giving it web search capabilities, so it is no longer limited to the model's training data and can access real-time information.
- `proxy` setting: The code also shows how to set a proxy for the tool, which is very practical in certain network environments; omit it if you have a direct connection, as in the sketch below.
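If you want to verify the search tool in isolation first, you can run a single tool-equipped agent on its own. A minimal sketch, reusing the `OllamaLLM` model defined earlier and simply leaving out the `proxy` argument:

```python
# Hypothetical standalone test of a search-capable agent, without a proxy.
from agno.agent import Agent
from agno.tools.duckduckgo import DuckDuckGoTools

scout = Agent(
    name="scout",
    role="Researcher",
    model=OllamaLLM,                      # the Ollama model defined earlier in the article
    tools=[DuckDuckGoTools(timeout=10)],  # direct connection, no proxy
)
scout.print_response("Find two practical tips for conducting a literature review.")
```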
c. The Executor
executor = Agent(
name="executor",
role="Executor",
goal="Summarize all materials and perform logical reasoning to draw final conclusions",
system_message="Below are the task information and search results:\n{input}\nPlease analyze deeply based on logical relationships and draw detailed conclusions.",
model=OllamaLLM,
storage=get_agent_storage("executor"),
add_history_to_messages=True,
)
- This agent has no tools. Its only job is to "think".
- Its `system_message` instructs it to take the incoming information (`{input}`, the planning and research results), analyze and reason over it deeply, and finally form a detailed conclusion.
3. Orchestrate the Workflow: The `run` Method
If the agents are the actors, the `run` method is the script. It defines how data flows between the agents.
def run(self, user_input: str) -> RunResponse:
    logger.info("Getting input from user.")
    # Step 1: User input goes to the planner
    planner_contents: RunResponse = self.planner.run(user_input)
    # Step 2: Planner's output (task steps) goes to the researcher
    researcher_contents: RunResponse = self.researcher.run(planner_contents.content)
    # Step 3: Researcher's output (search summary) goes to the executor
    executor_contents: RunResponse = self.executor.run(researcher_contents.content)
    # Return the final conclusion
    return executor_contents
This process clearly demonstrates a "chain of thought" implementation:
User input -> Planning -> Research -> Reasoning -> Final output
Each step builds on the solid foundation produced by the previous step, ensuring the quality and depth of the final result.
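One thing worth noticing: as written, `run` forwards only the researcher's output to the executor, even though the executor's prompt speaks of both "task information and search results". If you want the executor to see the plan as well as the research, one option is to concatenate the two before the final step. A possible variant (a sketch, not the article's code):

```python
# Variant of run() that hands the executor both the plan and the research summary
# (a drop-in replacement for MyFirstWorkflow.run).
def run(self, user_input: str) -> RunResponse:
    planner_contents: RunResponse = self.planner.run(user_input)
    researcher_contents: RunResponse = self.researcher.run(planner_contents.content)

    # Concatenate plan and research so the executor sees the full context.
    combined_input = (
        f"Task plan:\n{planner_contents.content}\n\n"
        f"Search results:\n{researcher_contents.content}"
    )
    return self.executor.run(combined_input)
```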
4. Running and Output
if __name__ == "__main__":
    # Run the workflow
    report: RunResponse = MyFirstWorkflow(debug_mode=True).run(
        user_input="How to write a good thesis?"
    )
    # Print the final report
    pprint_run_response(report, markdown=True, show_time=True)
When you run it with an input like "How to write a good thesis?", `agno` automatically completes the following tasks:
- The `planner` outputs a plan like:
  - Determine the research topic and scope
  - Conduct a literature review
  - Propose core arguments and hypotheses
  - Design the thesis structure outline
  - Draft the initial manuscript
  - Handle citations and formatting
  - Revise and proofread
- The `researcher` receives these steps, uses DuckDuckGo to search for things like "how to conduct a literature review" and "thesis outline design tips", and summarizes the results.
- The `executor` takes the planning steps and each research summary and merges them into a complete, well-structured, and detailed guide on how to write a good thesis.
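If you want to keep that final guide around rather than only print it, note that the returned `RunResponse` exposes the generated text via its `content` attribute (the same attribute the `run` method already uses), so you can write it to a file. A small sketch:

```python
# Save the workflow's final answer to a Markdown file for later reading.
from pathlib import Path

report = MyFirstWorkflow(debug_mode=True).run(user_input="How to write a good thesis?")
Path("thesis_guide.md").write_text(report.content or "", encoding="utf-8")
```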
Conclusion
We just used a simple script and `agno` to build a powerful AI workflow. It is no longer a simple Q&A bot, but a system that knows how to plan, research, and reason.
This is just the beginning. You can build on this pattern, adding more specialized tools (such as ArXiv search, code execution, or database queries) and more fine-grained agents to create more complex automated solutions tailored to your needs.
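For example, an academic-search specialist could look roughly like the sketch below. It assumes agno ships an ArXiv toolkit (`ArxivTools` here is an assumption; check the tools available in your agno version) and reuses the `OllamaLLM` model from earlier.

```python
# Hypothetical extra agent: an academic researcher backed by an ArXiv search tool.
# ArxivTools is assumed; verify the toolkit name in your agno installation.
from agno.agent import Agent
from agno.tools.arxiv import ArxivTools

academic_researcher = Agent(
    name="academic_researcher",
    role="Academic Researcher",
    goal="Find relevant papers for each task step",
    system_message="For the task: {input}\nSearch ArXiv and summarize the most relevant papers.",
    model=OllamaLLM,          # the Ollama model defined earlier in the article
    tools=[ArxivTools()],
)
```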
The `agno` framework opens the door to advanced AI applications. Now it's time to try it yourself and build your own agent system!