In our last piece, we explored the magic of multi-agent AI using LangGraph, focusing on how agentic frameworks enable multiple specialized AI agents to collaborate effectively on complex tasks. Today, we’ll dive deeper into another powerful framework: CrewAI. While LangGraph offers deep graph-based orchestration, CrewAI shines with simplicity, speed, and practicality. Let’s unpack why CrewAI is capturing attention.
Why Another Framework?
Multi-agent frameworks have proliferated rapidly—each promising new efficiencies. CrewAI distinguishes itself by embracing intentional constraints and simplicity:
- Rapid Prototyping and MVP Development: Most CrewAI projects require only 50-100 lines of code, significantly cutting down the time from idea conception to a working minimum viable product. This speed is invaluable for rapidly iterating on projects, testing new concepts, and quickly delivering demonstrations to stakeholders.
- Learning Curve: CrewAI has a shorter and shallower learning curve compared to LangGraph. Its straightforward design allows developers unfamiliar with graph-based frameworks to quickly get productive.
- Simple Workflow Models: CrewAI offers two streamlined modes—Pipeline (sequential tasks) and Planner/Executor (hierarchical workflows). Pipeline mode is excellent for clearly defined tasks, executed in a straightforward sequence—perfect for content creation or structured data processing. Planner/Executor mode adds an intelligent planner that dynamically breaks complex tasks into subtasks, enabling parallel execution and reducing overall execution time.
- Built-in Schema Validation: CrewAI prioritizes reliability and consistency by integrating schema validation directly into the workflow. This ensures data mismatches and input errors are caught early, significantly reducing debugging efforts and enhancing the robustness of multi-agent interactions.
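To see the kind of error this catches, here is a minimal Pydantic sketch (Pydantic is the validation library CrewAI's structured outputs build on); the BlogPost model is a hypothetical schema for illustration, not a CrewAI built-in:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical schema that an agent's output must conform to.
class BlogPost(BaseModel):
    title: str
    body: str
    tags: list[str] = []

# A well-formed output parses cleanly into typed fields...
post = BlogPost(title="Vector Databases", body="An intro.")

# ...while a malformed one fails immediately, before reaching downstream tasks.
try:
    BlogPost(title="No body here")
except ValidationError:
    print("caught schema mismatch early")
```

In CrewAI itself you would attach such a model to a task's output, so mismatches surface at the task boundary rather than somewhere mid-pipeline.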
CrewAI as IKEA Furniture
If LangGraph represents an open-world LEGO set, CrewAI is more like IKEA furniture—easy to assemble, practical, and robust.
Working with Crews
Let's go through some code snippets to get a feel for how crews work in CrewAI.
First Steps: A Minimal Single Agent Crew
Let’s start with a basic example. Suppose we want to write a short blog post on a technical topic. We’ll use just one agent—a writer—and a single task.
from crewai import Agent, Task, Crew, Process

# topic is received via the "topic" key in the inputs when run
writer = Agent(
    role="{topic} Blog Writer",
    goal="Write a blog post about {topic}",
    backstory="An experienced technical writer specializing in {topic}.",
)
write_task = Task(
    name="write_blog",
    description="Write a short blog post about {topic}.",
    expected_output="A complete blog post on {topic}.",
    agent=writer,
)
crew = Crew(name="BlogCrew", agents=[writer], tasks=[write_task], process=Process.sequential)
result = crew.kickoff(inputs={"topic": "Vector Databases"})
print(result)
This runs a single-agent crew and returns the LLM-generated blog post.
Adding an Agent with Tools
Suppose we’d like to add another agent to perform research and find the most relevant information on the web. We can give this new agent access to the web through tools, using the built-in SerperDevTool and ScrapeWebsiteTool provided by CrewAI.
from crewai_tools import SerperDevTool, ScrapeWebsiteTool

researcher = Agent(
    role="{topic} Researcher",
    goal="Find information about {topic} on the web",
    backstory="A meticulous researcher who tracks down accurate, current sources.",
    tools=[SerperDevTool(), ScrapeWebsiteTool()],
)
research_task = Task(
    name="web_research",
    description="Research {topic} on the web and gather the most relevant facts.",
    expected_output="A summary of key facts and sources about {topic}.",
    agent=researcher,
)
# run the research task before the write task, sequentially
crew = Crew(name="BlogCrew", agents=[researcher, writer], tasks=[research_task, write_task], process=Process.sequential)
CrewAI will take care of orchestration and passing the output from one task to the next task.
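As a rough mental model of that hand-off (an illustrative pure-Python sketch, not CrewAI internals; run_sequential is a hypothetical helper), sequential orchestration is just a fold over the task list:

```python
# Conceptual sketch: each task receives the previous task's output as context.
# CrewAI handles this wiring for you; this only illustrates the data flow.

def run_sequential(tasks, inputs):
    context = dict(inputs)
    output = None
    for task in tasks:
        output = task(context)
        context["previous_output"] = output  # fed to the next task
    return output

research = lambda ctx: f"facts about {ctx['topic']}"
write = lambda ctx: f"Blog post based on: {ctx['previous_output']}"

print(run_sequential([research, write], {"topic": "Vector Databases"}))
# -> Blog post based on: facts about Vector Databases
```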
External Tools Compatibility
While CrewAI provides its own set of tools and it is quite easy to create your own, it also provides compatibility with external tools from LangChain, LlamaIndex and MCP.
# Example of using a stdio-based MCP server
import os

from mcp import StdioServerParameters
from crewai_tools import MCPServerAdapter

serverparams = StdioServerParameters(
    command="uvx",
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with MCPServerAdapter(serverparams) as tools:
    # tools is now a list of CrewAI Tools matching 1:1 with the MCP server's tools
    agent = Agent(..., tools=tools)
    task = Task(...)
    crew = Crew(..., agents=[agent], tasks=[task])
    crew.kickoff(...)
You can find the latest information in the crewai-tools GitHub repository.
Workflow Styles: Sequential vs. Hierarchical
Sequential (Pipeline Mode): Tasks happen one after another—ideal for linear, straightforward processes where the output of each step directly informs the next. This method simplifies debugging since errors can be traced sequentially through logs.
Hierarchical (Planner Mode): Adds a Planner agent that intelligently splits tasks into parallelizable subtasks, allowing multiple tasks to run concurrently. While this boosts efficiency, it introduces additional complexity due to the coordination required between concurrent tasks and potential dependencies.
The only change needed in the code to switch from sequential to hierarchical mode is:
crew = Crew(name="BlogCrew", agents=agents, tasks=tasks, process=Process.hierarchical, manager_llm="gpt-4o")
Here, manager_llm is the LLM used by the planner to break the work into tasks and coordinate their execution.
(Pro tip: Begin with sequential mode to establish stability, then shift to hierarchical mode as project complexity and task concurrency requirements grow.)
Memory: Enhancing Agent Capabilities
CrewAI’s sophisticated memory system significantly enhances the capabilities of AI agents by enabling them to remember, reason, and learn from past interactions. The memory system comprises several components, each serving a unique purpose:
Short-Term Memory
Temporarily stores recent interactions and outcomes using Retrieval-Augmented Generation (RAG), allowing agents to recall and utilize information relevant to their current context during ongoing tasks.
Long-Term Memory
Preserves valuable insights and learnings from past executions, enabling agents to build and refine their knowledge over time.
Entity Memory
Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping.
Contextual Memory
Integrates short-term, long-term, and entity memory to maintain the context of interactions, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation.
Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew’s objectives and the nature of tasks it will perform. By default, the memory system is disabled; you can activate it by setting memory=True in the crew configuration. The memory will use OpenAI embeddings by default, but you can change it by setting the embedder to a different model. It’s also possible to initialize the memory instance with your own configuration.
Here’s an example of how to configure a crew with customized memory components:
from crewai import Crew
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage.rag_storage import RAGStorage

crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path="./long_term_memory.db")
    ),
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            embedder_config={"provider": "openai", "config": {"model": "text-embedding-3-small"}},
            type="short_term",
            path="./short_term_memory",
        )
    ),
    entity_memory=EntityMemory(
        storage=RAGStorage(
            embedder_config={"provider": "openai", "config": {"model": "text-embedding-3-small"}},
            type="entities",  # note: "entities", not "short_term", for entity memory
            path="./entity_memory",
        )
    ),
    verbose=True,
)
In this configuration:
LongTermMemory uses a SQLite storage backend to persist valuable insights across sessions.
ShortTermMemory and EntityMemory utilize RAG storage with OpenAI embeddings to manage recent interactions and entity information, respectively.
By integrating these memory components, your agents can maintain context, learn from past experiences, and deliver more coherent and relevant responses.
Memories can also be reset, selectively if required, to clear out what has been learned and stored.
my_crew.reset_memories(command_type='all')    # resets all memories
my_crew.reset_memories(command_type='short')  # resets short-term memory only
Using an External Memory Provider
CrewAI also provides support for using external memory systems such as mem0. Here's an example:
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory

os.environ["MEM0_API_KEY"] = "YOUR-API-KEY"

agent = Agent(
    role="Helpful Assistant",
    goal="Plan a vacation for the user",
    backstory="You are a helpful assistant that can plan a vacation for the user",
    verbose=True,
)
task = Task(
    description="Suggest things related to the user's vacation",
    expected_output="A plan for the vacation",
    agent=agent,
)
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    external_memory=ExternalMemory(
        embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}}  # an entire Mem0 configuration can go here
    ),
)
crew.kickoff(
    inputs={"question": "which destination is better for a beach vacation?"}
)
Flows: Structured Orchestration for Complex Workflows
While Crews excel at autonomous collaboration among agents, Flows in CrewAI provide a structured, event-driven framework for orchestrating complex AI workflows. They offer precise control over task execution, state management, and integration with external systems, making them ideal for applications requiring deterministic outcomes and fine-grained control.
Key Features of Flows
Event-Driven Architecture: Define workflows that respond dynamically to events and data changes.
State Management: Maintain and share state across different tasks and components within the workflow.
Flexible Control Flow: Implement conditional logic, loops, and branching to handle complex execution paths.
Integration with Crews: Seamlessly combine Flows with Crews to leverage both structured orchestration and autonomous agent collaboration.
Implementing a Flow: Personalized Greeting System
Let’s consider a practical example where we build a personalized greeting system. This system generates tailored greetings based on the recipient’s relationship type, such as Family, Friend, Colleague, Client, or General. Each relationship type has a dedicated Crew responsible for crafting appropriate messages.
Step 1: Define the State
We’ll start by defining a structured state to hold the recipient’s details and the generated greeting:
from pydantic import BaseModel

class GreetingState(BaseModel):
    recipient_name: str
    occasion: str
    relationship_type: str
    greeting: str = ""
Step 2: Create the Flow
Next, we’ll implement the Flow that routes execution to the correct Crew based on the recipient’s relationship type:
from crewai.flow.flow import Flow, start, router, listen

class GreetingFlow(Flow[GreetingState]):
    @start()
    def get_recipient_details(self):
        # In a real application, these details might come from user input or an external source
        self.state.recipient_name = "John Doe"
        self.state.occasion = "Promotion"
        self.state.relationship_type = "Client"
        return self.state

    @router(get_recipient_details)
    def route_based_on_relationship(self):
        relationship = self.state.relationship_type.lower()
        if relationship == "family":
            return "family_greeting"
        elif relationship == "friend":
            return "friend_greeting"
        elif relationship == "colleague":
            return "colleague_greeting"
        elif relationship == "client":
            return "client_greeting"
        else:
            return "general_greeting"

    @listen("family_greeting")
    def generate_family_greeting(self):
        self.state.greeting = f"Dear {self.state.recipient_name}, congratulations on your {self.state.occasion}! Your family is proud of you."
        return self.state.greeting

    @listen("friend_greeting")
    def generate_friend_greeting(self):
        self.state.greeting = f"Hey {self.state.recipient_name}, cheers to your {self.state.occasion}! Let's celebrate soon."
        return self.state.greeting

    @listen("colleague_greeting")
    def generate_colleague_greeting(self):
        self.state.greeting = f"Dear {self.state.recipient_name}, congratulations on your {self.state.occasion}. Wishing you continued success."
        return self.state.greeting

    @listen("client_greeting")
    def generate_client_greeting(self):
        self.state.greeting = f"Dear {self.state.recipient_name}, congratulations on your {self.state.occasion}. We appreciate your partnership."
        return self.state.greeting

    @listen("general_greeting")
    def generate_general_greeting(self):
        self.state.greeting = f"Dear {self.state.recipient_name}, best wishes on your {self.state.occasion}!"
        return self.state.greeting
In this Flow:
The get_recipient_details method initializes the state with recipient information.
The route_based_on_relationship method determines which greeting method to invoke based on the relationship type.
The appropriate generate_*_greeting method crafts a personalized message.
Step 3: Execute the Flow
To run the Flow and obtain the personalized greeting:
flow = GreetingFlow()
final_greeting = flow.kickoff()
print(f"Generated Greeting: {final_greeting}")
This should output something like this:
Generated Greeting: Dear John Doe, congratulations on your Promotion. We appreciate your partnership.
When to Use Flows
Consider using Flows when your application requires:
Deterministic Execution: Precise control over the order and conditions under which tasks are executed.
Complex Logic: Implementation of conditional branches, loops, or other complex control structures.
State Persistence: Maintaining context or data across multiple steps or sessions.
Integration with External Systems: Orchestrating interactions with APIs, databases, or other external services.
By leveraging Flows, you can build robust, maintainable, and scalable AI workflows that meet the demands of complex applications.
Training: Enhancing Agent Performance Through Iterative Learning
CrewAI’s training feature allows you to iteratively improve your AI agents by providing feedback and refining their behavior over multiple sessions. This process ensures that agents become more accurate, efficient, and aligned with your specific requirements.
Why Train Your Agents?
Training enables agents to:
- Adapt to specific tasks and domains.
- Improve decision-making and problem-solving abilities.
- Incorporate human feedback for better alignment.
- Achieve more consistent and reliable outputs.
Training
You can train your crew either via the CLI or programmatically; the end result is the same. Via the CLI it looks like:
crewai train -n <n_iterations> -f <filename>
Programmatically, the equivalent is roughly crew.train(n_iterations=..., filename=..., inputs={...}).
On each iteration, the program pauses to collect feedback from a human; that feedback is saved and used to refine the agents' behavior at run-time.
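Conceptually, each training run follows a loop like this illustrative sketch (not CrewAI's actual implementation; train_crew and its callbacks are hypothetical stand-ins):

```python
def train_crew(run_task, get_human_feedback, save, n_iterations):
    """Illustrative sketch of an iterative training loop."""
    feedback_log = []
    for _ in range(n_iterations):
        output = run_task(feedback_log)        # run with accumulated guidance
        feedback = get_human_feedback(output)  # the program pauses here for a human
        feedback_log.append(feedback)          # stored for reuse at run-time
    save(feedback_log)
    return feedback_log

# Stub callbacks stand in for a real crew run and a human reviewer.
notes = train_crew(
    run_task=lambda fb: f"draft informed by {len(fb)} notes",
    get_human_feedback=lambda out: f"feedback on '{out}'",
    save=lambda log: None,
    n_iterations=2,
)
print(notes)
```

The key design point is that feedback accumulates: each subsequent run sees everything the human said before, which is what nudges outputs toward consistency.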
CrewAI vs LangGraph: Quick Comparison
CrewAI is ideal when simplicity and rapid deployment matter most. LangGraph, by contrast, excels in complex, flexible scenarios requiring detailed workflow orchestration and advanced graph handling.
That said, it isn't an either-or situation; you can actually combine them. You can find example code for LangGraph integration here, and the site itself has several other CrewAI code examples.
Deep‑Dive: Why CrewAI’s Simplicity Is a Feature, Not a Bug
During the LangGraph write‑up, some readers asked why LangGraph’s explicit graphs aren’t always the better path. Here’s the nuance:
Cognitive Load – Engineers think in sequences more naturally than in full graphs. CrewAI’s two modes match that mental model.
Micro‑optimization vs. Shipping – Graph tinkering can optimize a pipeline by 5%. If your bottleneck is uncertain market fit, that 5% won't move the needle.
Maintenance Over Time – A graph with dozens of conditional edges feels elegant on day 1 and becomes “spaghetti‑on‑whiteboard” by month 6 unless you invest in graph linting tooling. CrewAI’s opinionated path limits blast radius.
That said, we still spin up LangGraph for:
Streaming chains that need immediate token output while upstream nodes fetch data.
Long‑running research loops where a single plan might revisit previous branches based on partial insight.
Final Thoughts and Next Steps
CrewAI is the pragmatic cousin in the agent‑framework family. It won’t play 4‑D chess, but it will ship product faster than you can say “Stand‑up is in five minutes.” (Ok, not really—but it might feel like it!)
Take it for a spin, maybe combined with LangGraph where granularity is king. Then tell us which hybrid tricks worked for you—stories beat benchmarks every time.
No code was harmed in the making of this article, though several API keys needed refreshments. See you in the SmolAgents showdown next!