Discovering AI Agents with CrewAI: Automating Workflows with Smart Delegation

Why AI Agents?

Artificial Intelligence is changing the way we work, from automating mundane tasks to handling complex decision-making processes. Whether it’s managing customer service complaints, optimizing resumes for job descriptions, or streamlining business operations, AI-powered agents are making everyday tasks more efficient. Among the solutions worth exploring, CrewAI stands out as an AI-powered task delegation framework that turns AI into your personal workforce.

What is CrewAI?

CrewAI is a multi-agent framework that enables large language models (LLMs) to work together in harmony—kind of like The Avengers, if each superhero had a laptop, a love for automation, and zero tolerance for typos. This framework is available as a Python package, making it easy to deploy and manage AI agents for different tasks. It also supports API integrations, allowing seamless connectivity with various tools and platforms.

Key Features of CrewAI:

  • 🤖 Multi-Agent Collaboration – AI agents work together like a dream team.
  • 🧠 LLM-Powered Decision Making – Leverages large language models for smarter automation.
  • 🔌 Flexible Integration – Supports Python and API connectivity for maximum customization.
  • 🏗 Task Delegation & Hierarchy – Assign tasks step-by-step, concurrently, or even establish a manager-agent hierarchy (see the sketch just below this list).
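
For example, here is a minimal sketch of that manager-agent hierarchy. The agents, task, and model name are placeholders I made up for illustration; Process.hierarchical and manager_llm come from CrewAI's documented API, so check your installed version for details:

from crewai import Agent, Task, Crew, Process

# Two worker agents; in hierarchical mode a manager LLM decides who does what
writer = Agent(role="Writer", goal="Draft short copy", backstory="A concise copywriter.")
editor = Agent(role="Editor", goal="Polish drafts", backstory="A meticulous editor.")

blurb_task = Task(
    description="Write and refine a two-sentence product blurb.",
    expected_output="A polished two-sentence blurb.",
    # no agent assigned here: the manager delegates the work
)

crew = Crew(
    agents=[writer, editor],
    tasks=[blurb_task],
    process=Process.hierarchical,  # manager-agent hierarchy instead of sequential execution
    manager_llm="gpt-4-turbo"      # the model that plays the manager role
)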

Setting Up CrewAI

Getting started with CrewAI is fairly straightforward, and the project has excellent documentation; you can find the details at https://docs.crewai.com.

First, you need to install the Python package:

pip install crewai
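
The sample code later in this post also imports from the crewai_tools package, so you will likely want the optional tools extra as well (this is the install command from CrewAI's docs; double-check it against the current documentation):

pip install 'crewai[tools]'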

Local LLM vs. API

You can install a large language model (LLM) locally using tools like Ollama, which allows you to run CrewAI or similar agent-based frameworks without relying on cloud services. However, if your system doesn’t have enough RAM, the model may time out or run very slowly. The main advantage of this setup is that you’re not charged per token, so you can use it as much as you want for free. I have 16 GB of RAM, which limited the size and complexity of the LLMs I could run locally. Think of it like trying to stream Netflix on dial-up—technically possible, emotionally devastating.

How to install a model locally

Step 1: Install Ollama by downloading it from https://ollama.com.

Step 2: Pull in the model you want

Once you’ve got Ollama installed, you’ll need to pull the AI models you plan to use for your project. Just open your command prompt and use the ollama command to pull the model. For example, to pull deepseek-r1:1.5b, run the following:

ollama pull deepseek-r1:1.5b
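
Once the model is pulled, you can point CrewAI at it. Here is a minimal sketch, assuming Ollama is serving on its default local port (11434); the "ollama/" model prefix follows the LiteLLM naming convention that CrewAI uses under the hood:

from crewai import LLM

# Route CrewAI to the locally served Ollama model instead of a cloud API
local_llm = LLM(
    model="ollama/deepseek-r1:1.5b",   # the model pulled above
    base_url="http://localhost:11434"  # Ollama's default local endpoint
)

You can then pass local_llm to any Agent via its llm parameter, the same way the gpt-4-turbo configuration is wired up later in this post.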

How to set up an API connection to OpenAI ChatGPT

The other option is to use a hosted service like OpenAI’s ChatGPT API, which offers much faster response times and access to more capable models, for example gpt-4-turbo.

Step 1: Go to platform.openai.com and sign up or log in.

Step 2: Navigate to API Keys in your account settings.

Step 3: Click “Create new secret key”, copy it, and store it securely.

The Project

The project I chose addresses a common challenge that many new job applicants face. When applying for jobs, it’s helpful to tailor your resume to the job description and include keywords that align more closely with the role.

The Agents and Tools

To assist with this, I am using two separate agents.

Agent 1: Researcher

This agent’s goal is to analyze the job posting and extract the critical information from it.

The agent will use a search tool and a web-scraping tool to assist with the task.

Agent 2: Resume Strategist

This agent’s goal is to find all the best ways to make the resume stand out in the job market and get noticed by recruiters.

The agent will use a search tool, a web-scraping tool, a file-reading tool, and a semantic search tool.

Tool Setup

For searching the web, I used Serper.dev. To set it up, follow these steps:

Step 1: Sign up for a free Serper account at https://serper.dev

Step 2: After logging in, navigate to your dashboard and click on “API Keys.”

Step 3: Create a new API key and copy it for use.

Serper.dev offers 2,500 free queries so you can try out their Google Search API at no cost. It’s a great way to test and integrate search features into your app or project.
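
With the key in hand, you can sanity-check the search tool on its own before wiring it into an agent. A minimal sketch, assuming the key is set in your environment; the search_query argument name matches the crewai_tools tool schema, but verify it against your installed version:

import os
from crewai_tools import SerperDevTool

# The tool reads SERPER_API_KEY from the environment
os.environ["SERPER_API_KEY"] = "your-serper-key-here"  # placeholder, not a real key

search_tool = SerperDevTool()
results = search_tool.run(search_query="data analyst resume keywords")
print(results)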

The Code

For this example, I used gpt-4-turbo. I stored the API keys in a .env file in the project directory.
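
The .env file itself is just two key-value lines; the values below are placeholders, not real keys:

OPENAI_API_KEY=sk-your-openai-key-here
SERPER_API_KEY=your-serper-key-here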

I created a separate utils.py file that had two functions to pull the environment variables.

import os
from dotenv import load_dotenv

# Load keys from .env into environment variables
load_dotenv()

def get_openai_api_key():
    return os.getenv("OPENAI_API_KEY")

def get_serper_api_key():
    return os.getenv("SERPER_API_KEY")

The final code is below.

# ========================
# AI Resume Tailoring Crew
# ========================

# Suppress warnings
import warnings
warnings.filterwarnings('ignore')

# Import standard libraries
import os
from crewai import Agent, Task, Crew, LLM
from crewai_tools import (
    FileReadTool,
    ScrapeWebsiteTool,
    MDXSearchTool,
    SerperDevTool
)
from IPython.display import Markdown, display

# Import utility functions
from utils import get_openai_api_key, get_serper_api_key

# -------------------------
# API Key Setup
# -------------------------

os.environ["OPENAI_MODEL_NAME"] = 'gpt-4-turbo'
os.environ["OPENAI_API_KEY"] = get_openai_api_key()
os.environ["SERPER_API_KEY"] = get_serper_api_key()

# -------------------------
# LLM Configuration
# -------------------------
llm = LLM(
    model="openai/gpt-4-turbo",
    temperature=0.8,
    max_tokens=4000,  # generous cap so long outputs like a full tailored resume are not cut off
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42
)

# -------------------------
# Tool Setup
# -------------------------
search_tool = SerperDevTool()
scrape_tool = ScrapeWebsiteTool()
read_resume = FileReadTool(file_path='./analyst_Resume.md')        # your resume in Markdown
semantic_search_resume = MDXSearchTool(mdx='./analyst_Resume.md')  # semantic search over the same file

# -------------------------
# Agent Definitions
# -------------------------
researcher = Agent(
    role="Researcher",
    goal="Perform a high-quality analysis of job postings to support job applicants in tailoring their applications effectively.",
    tools=[scrape_tool, search_tool],
    llm=llm,  # use the LLM configured above
    verbose=True,
    backstory=(
        "You are a highly skilled Job Researcher with expertise in analyzing tech job postings. "
        "Your strength lies in identifying key qualifications, skills, and expectations that employers are seeking. "
        "Your insights serve as the foundation for creating well-aligned resumes and application materials."
    )
)

resume_strategist = Agent(
    role="Resume Strategist for Analysts",
    goal="Identify and apply the most effective strategies to make an analyst's resume stand out in the job market.",
    tools=[scrape_tool, search_tool, read_resume, semantic_search_resume],
    llm=llm,  # use the LLM configured above
    verbose=True,
    backstory=(
        "You are a detail-oriented Resume Strategist with expertise in optimizing resumes for analyst roles. "
        "Your role is to refine and tailor resumes by highlighting the most relevant skills, achievements, and experiences. "
        "You ensure that each resume aligns closely with the specific requirements and language of the target job description, "
        "maximizing its chances of getting noticed by recruiters and applicant tracking systems (ATS)."
    )
)

# -------------------------
# Task Definitions
# -------------------------
research_task = Task(
    description=(
        "You are assigned to analyze the job posting available at the following URL: {job_posting_url}. "
        "Your goal is to extract key information from the job listing, specifically focusing on required skills, relevant experiences, and listed qualifications. "
        "Use the available tools to scrape and search content from the URL, then identify and categorize the extracted requirements."
    ),
    expected_output=(
        "Provide a structured list summarizing the job requirements. "
        "This should include clearly labeled sections for:\n"
        "- Required Skills\n"
        "- Relevant Experiences\n"
        "- Educational or Professional Qualifications"
    ),
    agent=researcher,
    async_execution=True
)

resume_strategy_task = Task(
    description=(
        "You are provided with a candidate’s resume and a list of job requirements extracted from the job posting. "
        "Your task is to revise and tailor the resume to align closely with the job description. "
        "Use the available tools to enhance and adjust the content, focusing on relevance and clarity. "
        "Update all sections of the resume—including the summary, work experience, skills, and education—to reflect the candidate’s strengths as they relate to the job. "
        "Do not fabricate any details. Only enhance what is already provided to ensure it matches the job requirements effectively and professionally."
    ),
    expected_output=(
        "An updated resume that effectively highlights the candidate's "
        "qualifications and experiences relevant to the job."
    ),
    output_file="tailored_resume.md",
    context=[research_task],
    agent=resume_strategist
)

# -------------------------
# Crew Setup
# -------------------------
job_application_crew = Crew(
    agents=[researcher, resume_strategist],
    tasks=[research_task, resume_strategy_task],
    verbose=True
)

# -------------------------
# Execution
# -------------------------
job_application_inputs = {
    'job_posting_url': 'job-link'  # replace with the URL of the job posting you are targeting
}

print("🚀 Running job application crew... this may take a few minutes.")
result = job_application_crew.kickoff(inputs=job_application_inputs)

# -------------------------
# Display Output
# -------------------------
display(Markdown(filename="./tailored_resume.md"))  # pass filename= so IPython reads the file; renders in Jupyter

Takeaway

AI agents like those built with CrewAI can simplify time-consuming tasks, and they don’t complain about coffee breaks, office temperatures, or passive-aggressive emails. At least not yet!

By leveraging tools like local or API-based LLMs, and integrating with web search and semantic analysis, you can create a personalized, automated job-seeking assistant.

While this example focused on helping job applicants, the use cases for multi-agent AI frameworks are virtually endless—from customer support bots to market research assistants, internal process automation, content generation, and beyond. With the right configuration, AI agents can become a powerful digital workforce.

Disclaimer: This project is intended for educational and demonstration purposes only. While AI tools can assist in generating application materials and interview preparation, they should not replace human judgment. Always review and edit AI-generated content to ensure accuracy, authenticity, and alignment with your personal experience and qualifications.

