CrewAI

AI Workflow & Agents

Framework for orchestrating role-playing autonomous AI agents to work together on complex tasks

Deployment Info

Deployment time: 2-5 min
Category: AI Workflow & Agents
Support: 24/7

Overview

CrewAI is a cutting-edge framework for orchestrating role-playing autonomous AI agents that work together to accomplish complex tasks. By enabling multiple AI agents to collaborate, delegate, and communicate, CrewAI transforms how we approach automation, research, content creation, and decision-making processes.

At its core, CrewAI allows you to create specialized AI agents, each with distinct roles, goals, and capabilities. These agents can work together in crews, much like a real team, where each member contributes their expertise to achieve a common objective. This multi-agent approach is far more powerful than single-agent systems, as it enables task decomposition, parallel processing, and specialized problem-solving.

CrewAI supports two primary task execution modes: sequential and hierarchical. In sequential mode, tasks are executed one after another in a defined order, with each agent building upon the previous agent's output. In hierarchical mode, a manager agent coordinates and delegates tasks to worker agents, making decisions about task priority and workflow optimization. This flexibility allows you to design workflows that match your specific use case requirements.

One of CrewAI's most powerful features is its tool integration system. Agents can be equipped with custom tools and functions, enabling them to search the web, query databases, call APIs, analyze files, generate images, or perform any Python-based operation. This extensibility makes CrewAI suitable for virtually any automation scenario, from data analysis to content generation to complex research tasks.

CrewAI provides built-in memory and context management, allowing agents to remember previous interactions, learn from past executions, and maintain conversation context. This makes the framework ideal for building sophisticated AI applications that improve over time and can handle multi-step workflows with dependencies between tasks.

The framework is LLM-agnostic, supporting OpenAI, Anthropic Claude, Google Gemini, open-source models via Ollama, and any OpenAI-compatible API. You can mix and match different models for different agents, optimizing for cost, speed, or capability based on each agent's role. This flexibility ensures you're not locked into a single AI provider.

With production-ready features like error handling, retry logic, task callbacks, output validation, and comprehensive logging, CrewAI is built for real-world applications. Whether you're automating content creation, building AI research assistants, or orchestrating complex business workflows, CrewAI provides the tools and architecture to build reliable, scalable multi-agent systems.
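Production features like retry logic can also be layered on at the application level. A minimal stdlib-only sketch of retrying a flaky call with exponential backoff (the `with_retries` helper and `flaky` function are hypothetical illustrations, not CrewAI APIs):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(); on exception, retry with exponential backoff, re-raising on final failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    # Simulates a transient failure (e.g. a rate-limited LLM call) that succeeds on the third try
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```

The same pattern can wrap `crew.kickoff()` or individual tool calls when an upstream API is unreliable.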

Key Features

Role-Based Autonomous Agents

Create specialized AI agents with distinct roles, goals, backstories, and capabilities that work autonomously

Sequential & Hierarchical Execution

Run tasks sequentially for ordered workflows or hierarchically with manager agents delegating to workers

Extensive Tool Integration

Equip agents with custom tools for web search, API calls, database queries, file operations, and any Python function

Memory & Context Management

Built-in memory systems allow agents to remember previous interactions and maintain context across executions

Multi-LLM Support

Use OpenAI, Claude, Gemini, Ollama, or any OpenAI-compatible API with different models for different agents

Production-Ready Architecture

Error handling, retry logic, callbacks, output validation, and comprehensive logging for reliable deployments

Common Use Cases

• **Content Creation Automation**: Generate blog posts, social media content, and marketing copy with specialized writer, editor, and SEO agents
• **AI Research Assistants**: Build research crews that search the web, analyze papers, synthesize findings, and produce comprehensive reports
• **Data Analysis Workflows**: Orchestrate agents that collect data, perform analysis, generate visualizations, and create insights
• **Customer Support Automation**: Deploy multi-agent systems that handle inquiries, search knowledge bases, and escalate complex issues
• **Code Generation & Review**: Create developer crews with agents for requirements analysis, code writing, testing, and documentation
• **Business Intelligence**: Build analyst crews that gather market data, perform competitive analysis, and generate strategic recommendations
• **Workflow Automation**: Automate complex multi-step business processes with agents handling different stages of the workflow

Installation Guide

**Installation on Ubuntu VPS:**

1. **Install Python 3.11+ (if not already installed):**
```bash
sudo apt update && sudo apt upgrade -y
# On older Ubuntu releases, python3.11 may first require the deadsnakes PPA:
# sudo add-apt-repository ppa:deadsnakes/ppa -y && sudo apt update
sudo apt install python3.11 python3.11-venv python3-pip -y
```

2. **Create Virtual Environment:**
```bash
mkdir ~/crewai-projects
cd ~/crewai-projects
python3.11 -m venv venv
source venv/bin/activate
```

3. **Install CrewAI:**
```bash
# Install CrewAI and extra tools
pip install crewai crewai-tools

# Or install with specific LLM support
pip install 'crewai[anthropic]' # For Claude
pip install 'crewai[google]' # For Gemini
```

4. **Set Up API Keys:**
```bash
# Create .env file for API credentials
nano .env
```
Add your API keys:
```
OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here
GOOGLE_API_KEY=your-google-api-key-here
```
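CrewAI's LLM integrations read these credentials from environment variables, and a plain `.env` file is not exported automatically by the shell — source it, or load it with a library such as python-dotenv, before launching a crew. A quick stdlib-only sketch to confirm the keys above are actually set (`check_keys` is a hypothetical helper for illustration):

```python
import os

# The provider key names used in the .env file above
REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"]

def check_keys(env=os.environ):
    """Map each required key name to True if it is set and non-empty."""
    return {key: bool(env.get(key)) for key in REQUIRED_KEYS}

if __name__ == "__main__":
    for key, present in check_keys().items():
        print(f"{key}: {'set' if present else 'MISSING'}")
```

Running this before `crew.kickoff()` catches missing credentials early, with a clearer error than a failed LLM call.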

5. **Create Your First Research Crew:**
```bash
nano research_crew.py
```
Add this code:
```python
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role='Senior Research Analyst',
    goal='Conduct thorough research on given topics',
    backstory='Expert researcher with 10 years of experience',
    verbose=True
)

writer = Agent(
    role='Content Writer',
    goal='Create comprehensive reports',
    backstory='Professional writer specializing in research',
    verbose=True
)

research_task = Task(
    description='Research the latest trends in AI',
    agent=researcher,
    expected_output='Detailed research findings'
)

writing_task = Task(
    description='Write a comprehensive report from the research',
    agent=writer,
    expected_output='Professional research report'
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    verbose=True  # recent CrewAI releases expect a boolean here, not an integer
)

result = crew.kickoff()
print(result)
```

6. **Run Your First Crew:**
```bash
python research_crew.py
```

7. **Verify Installation:**
```bash
python -c "import crewai; print(f'CrewAI version: {crewai.__version__}')"
```

Configuration Tips

**Essential Configuration:**

**1. Agent Configuration:**
```python
from crewai import Agent

agent = Agent(
    role='Data Analyst',
    goal='Analyze data and provide insights',
    backstory='Expert analyst with strong statistical background',
    llm='gpt-4-turbo',
    verbose=True,
    allow_delegation=False,
    max_iter=15,
    memory=True,
    tools=[search_tool, calculator_tool]  # tool instances defined elsewhere
)
```

**2. Task Configuration:**
```python
from crewai import Task

task = Task(
    description='Analyze sales data for Q4 2024',
    agent=data_analyst,
    expected_output='Detailed analysis report',
    async_execution=False,
    context=[previous_task]
)
```

**3. Sequential Process:**
```python
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential
)

result = crew.kickoff()
```

**4. Hierarchical Process:**
```python
crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.hierarchical,
    manager_llm='gpt-4'
)
```

**5. Memory Configuration:**
```python
crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    memory=True,
    long_term_memory={
        'provider': 'mem0',
        'config': {'api_key': 'your-mem0-api-key'}
    }
)
```

**6. Different LLM Providers:**
```python
# OpenAI (goal and backstory omitted in these snippets for brevity;
# Agent requires them in practice)
openai_agent = Agent(role='Researcher', llm='gpt-4-turbo')

# Claude
from langchain_anthropic import ChatAnthropic
claude_llm = ChatAnthropic(model='claude-3-opus-20240229')
claude_agent = Agent(role='Writer', llm=claude_llm)

# Ollama (local)
from langchain_community.llms import Ollama
local_llm = Ollama(model='llama2')
local_agent = Agent(role='Analyst', llm=local_llm)
```

**Best Practices:**
- Design agents with clear, specific roles and goals
- Use descriptive backstories to influence agent behavior
- Chain tasks logically with proper context dependencies
- Enable memory for agents needing conversation history
- Mix different LLMs based on requirements (cost vs capability)
- Implement proper error handling and logging
- Monitor API usage and costs across providers
- Cache results for repeated queries
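The last point above — caching repeated queries — can be sketched with nothing but the standard library. Here `fetch_research` is a hypothetical stand-in for an expensive LLM or tool call; `functools.lru_cache` ensures identical topics are computed only once per process:

```python
from functools import lru_cache

CALL_COUNT = {"n": 0}  # track how often the expensive call actually runs

@lru_cache(maxsize=128)
def fetch_research(topic: str) -> str:
    """Hypothetical expensive operation (e.g. an LLM or web-search tool call)."""
    CALL_COUNT["n"] += 1
    return f"findings for {topic}"

# Repeated queries for the same topic hit the cache, not the model
first = fetch_research("AI trends")
second = fetch_research("AI trends")
assert first == second
print(f"expensive calls made: {CALL_COUNT['n']}")  # 1, not 2
```

For results that must survive process restarts, the same idea extends to a disk- or Redis-backed cache keyed on the task description.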

