
In the rapidly evolving world of AI, agentic AI is emerging as a game-changer. These systems go beyond simple chatbots or predictive models: they’re designed to act autonomously, make decisions, and interact with the real world to accomplish goals. In this blog post, we’ll dive into what agents are, explore agentic applications and orchestration, and walk through how to build your own agentic applications with open-source examples.

What Are Agents?

AI agents are autonomous entities that can perceive their environment, reason about it, and take actions to achieve specific objectives. Think of an AI agent as more than just a chatbot: it's a digital teammate that can look around, make decisions, and act on its own. Unlike traditional AI models that respond passively to inputs, agents are proactive: they can break goals down into sub-tasks, use tools (such as APIs or databases), maintain memory across interactions, and adapt based on feedback. This makes them well suited to applications that require independence, such as automation, research, or problem-solving.

At their core, agents typically integrate large language models (LLMs) like GPT-4 or Llama for reasoning, combined with mechanisms for tool invocation and state management. They can operate in loops, iteratively refining their approach until the goal is met.
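
To make that loop concrete, here is a minimal, framework-free sketch of the perceive-reason-act cycle. The call_llm and run_tool helpers are hypothetical placeholders (not from any specific library) standing in for a real model client and real tools; the structure is what matters: decide on an action, execute it, record the observation, and repeat until the goal is met or an iteration limit is reached.

def call_llm(prompt: str) -> dict:
    """Hypothetical LLM call: returns e.g. {'action': 'search', 'input': '...', 'done': False}."""
    raise NotImplementedError("plug in your model client here")

def run_tool(action: str, tool_input: str) -> str:
    """Hypothetical tool dispatcher: call an API, query a database, and so on."""
    raise NotImplementedError("plug in your tools here")

def run_agent(goal: str, max_steps: int = 10) -> list:
    memory = []  # observations and decisions carried across iterations
    for _ in range(max_steps):
        decision = call_llm(f"Goal: {goal}\nMemory so far: {memory}\nChoose the next action.")
        if decision.get("done"):  # the model judges the goal to be met
            break
        observation = run_tool(decision["action"], decision["input"])
        memory.append({"decision": decision, "observation": observation})
    return memory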

Open-Source Examples of AI Agents

If you want to get hands-on with agents, there are already plenty of open-source projects you can experiment with and build on. Here are a couple of the most notable:

For a longer list of projects, check out this open-source AI agents directory on Hugging Face.

What Are Agentic Applications and Agent Orchestration?

An agentic application is simply an app that uses one or more AI agents to take on complex tasks on its own. Instead of coding every step by hand, you plug agents together like modules to tackle real-world problems, whether that's analyzing data, generating content, or building software. The word "agentic" really highlights one thing: these systems can act with little human input, following the prompts and goals they are given.

When you bring more than one agent into the mix, you need a way to coordinate them. This is where orchestration comes in—getting multiple agents to work together smoothly. That might mean giving them specific roles (say, a researcher and a writer), setting up how they communicate, keeping track of what they’ve learned, and making sure they stay on task.

To make this manageable, orchestration frameworks provide ready-made structures, like graphs, crews, or conversation flows, that organize the collaboration. With these in place, it becomes much easier to grow from a single helpful agent to an entire team working together.
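
As a rough illustration of what these frameworks abstract away, the sketch below wires two role-specific agents, a researcher and a writer, into a sequential flow with shared state. The research_agent and writer_agent functions are stubs assumed for illustration; a real framework adds memory, tool handling, retries, and more flexible topologies on top of the same idea.

def research_agent(topic: str) -> list:
    # Stub: in a real system this would be an LLM with search tools.
    return [f"key finding about {topic}"]

def writer_agent(findings: list) -> str:
    # Stub: in a real system this would be an LLM that drafts the report.
    return "\n".join(f"- {finding}" for finding in findings)

def orchestrate(topic: str) -> str:
    shared_state = {"topic": topic}                                   # shared memory
    shared_state["findings"] = research_agent(topic)                  # role 1: researcher
    shared_state["report"] = writer_agent(shared_state["findings"])   # role 2: writer
    return shared_state["report"]

print(orchestrate("agentic AI"))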

Open-Source Examples of Agentic Applications and Orchestration

If you want to start building agentic applications yourself, there are already a number of open-source frameworks that can help. Here are some of the most popular:

Other frameworks worth exploring include OpenAI's Agents SDK (which builds on and evolves its earlier Swarm project) for multi-agent workflows, and MetaGPT, which simulates role-based teams of agents.

Writing Agentic Applications

To build an agentic application, the first step is choosing the right framework for your needs: for instance, CrewAI if you want simplicity and lightweight orchestration, or AutoGen if you need extensibility and layered APIs.

Once you’ve selected a framework, you’ll need to:

With those elements in place, you can start assembling workflows. Below, we’ll look at how to get started with one of the most widely used frameworks (CrewAI).

Building with CrewAI

First, install CrewAI and ddgs (the duckduckgo-search package), a free search tool used in this example:

pip install crewai 'crewai[tools]' ddgs

CrewAI uses decorators for agents and tasks. Below is a simple example of a crew for researching and reporting on a given topic. It defines two agents (a researcher and a reporting analyst) and one task for each agent (a research task and a reporting task), wired into a sequential workflow:

You can extend it with more agents and tasks, or with parallel processes; a sketch of one such extension follows the code below.

To run this script, you also need to set your LLM API key, for example the OPENAI_API_KEY environment variable if you use OpenAI models. See the CrewAI docs for details.

from typing import Type, Optional
import json

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

from ddgs import DDGS

class DDGSearchInput(BaseModel):
    query: str = Field(..., description="Search query")

class DDGSearchTool(BaseTool):
    name: str = "DuckDuckGo Search"
    description: str = "Web search via DuckDuckGo. Returns a JSON list of results."
    args_schema: Type[BaseModel] = DDGSearchInput

    # Declared as a Pydantic field so it can be configured per tool instance.
    max_results: int = Field(8, description="Maximum number of results to return")

    def _run(self, query: Optional[str] = None, **kwargs) -> str:
        if query is None:
            query = kwargs.get("query")

        if not query:
            return json.dumps([], ensure_ascii=False)

        results = []
        with DDGS() as ddgs:
            for r in ddgs.text(query, max_results=self.max_results):
                results.append({
                    "title": r.get("title"),
                    "url": r.get("href"),
                    "snippet": r.get("body"),
                })
        return json.dumps(results, ensure_ascii=False)

@CrewBase
class ResearchCrew():
    """
    Crew of agents for researching and reporting on given topic.
    """

    def __init__(self, topic):
        self.topic = topic

    @agent
    def researcher(self) -> Agent:
        return Agent(
            role='Senior Data Researcher',
            goal=f'Uncover cutting-edge developments in {self.topic}',
            backstory=f'Seasoned researcher skilled in finding relevant information about {self.topic}.',
            verbose=True,
            tools=[DDGSearchTool(max_results=10)]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            role='Reporting Analyst',
            goal=f'Create detailed reports from {self.topic} research findings',
            backstory=f'Meticulous analyst who turns {self.topic} data into clear reports.',
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            description=f'Conduct thorough research on {self.topic}.',
            expected_output='A list of 10 bullet points with key findings.',
            agent=self.researcher()
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            description=f'Expand the research on {self.topic} into a full report.',
            expected_output='A markdown-formatted report with detailed sections.',
            agent=self.reporting_analyst(),
            output_file=f'ai_report-{self.topic}.md'
        )

    @crew
    def crew(self) -> Crew:
        """Assembles the crew."""
        return Crew(
            agents=[self.researcher(), self.reporting_analyst()],
            tasks=[self.research_task(), self.reporting_task()],
            process=Process.sequential,
            verbose=True
        )

## run the crew of agents
if __name__ == '__main__':
    topic = input('Topic: ')
    crew = ResearchCrew(topic).crew()
    result = crew.kickoff(inputs={'topic': topic})
    print(result)
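
As a sketch of the "more agents and tasks, or parallel processes" extension mentioned above, you could add a fact-checking agent to the class and let its task run alongside the research task. The async_execution and context parameters on Task exist in recent CrewAI versions, but treat the exact signatures as an assumption and check the docs for the version you install; you would also add the new agent and task to the lists in crew().

    # Hypothetical additions to the ResearchCrew class above (not a standalone script).
    @agent
    def fact_checker(self) -> Agent:
        return Agent(
            role='Fact Checker',
            goal=f'Verify key claims about {self.topic}',
            backstory=f'Careful reviewer who cross-checks {self.topic} claims against sources.',
            verbose=True,
            tools=[DDGSearchTool(max_results=5)]
        )

    @task
    def fact_check_task(self) -> Task:
        return Task(
            description=f'Cross-check the most important claims about {self.topic}.',
            expected_output='A list of verified and disputed claims with sources.',
            agent=self.fact_checker(),
            async_execution=True  # run in parallel with other pending tasks
        )

The reporting task could then list both earlier tasks in its context parameter so the report is only written once the research and fact-check results are available.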

Building a Dynamic Agent

We can also take a dynamic approach to building an agentic AI application. Instead of relying on a third-party orchestration framework, we create tasks and agentic processes on the fly. Below is a simple dynamic agent that can work on any given goal until the goal is met; I used OpenAI as the LLM in this example. The agent first extracts sub-goals (a plan) from the main goal, then works through the steps and decides whether each success condition, and ultimately the goal itself, has been met.

import os
import json
from openai import OpenAI

class DynamicAgent:
    def __init__(self, goal: str, model: str = "gpt-4o"):
        self.goal = goal
        self.model = model
        self.llm = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.memory = []  # store context and past decisions
        self.max_iterations = 5  # prevent infinite loops

    def get_llm_response(self, prompt: str) -> str:
        resp = self.llm.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": "You are an autonomous agent that creates plans and conditions to achieve goals."},
                {"role": "user", "content": prompt}
            ],
            temperature=0.2,
        )
        result = None
        try:
            result = resp.choices[0].message.content
        except Exception:
            # fallback
            result = str(resp)
        print(f"llm result: {result}")
        return result

    def generate_plan(self) -> list:
        prompt = (
            f"Given the goal '{self.goal}', generate a pure JSON list of actionable steps with conditions for success."
            "Don't use any formatting like markdown outside of the JSON content.\n"
            "Each step should include: {'step': 'description', 'condition': 'success criteria'}.\n"
            "Do not rely on predefined rules; infer the steps and conditions from the goal."
        )
        plan = json.loads(self.get_llm_response(prompt))
        self.memory.append({"action": "planning", "output": plan})
        return plan

    def execute_step(self, step: dict) -> str:
        prompt = (
            f"Overall goal: {self.goal}.\n"
            f"Execute this step: {step['step']}.\n"
            f"Success condition: {step['condition']}.\n"
            "Provide the result and state whether the condition was met. "
            "If the overall goal is now fully achieved, include the exact phrase 'Goal achieved'."
        )
        result = self.get_llm_response(prompt)
        self.memory.append({"step": step['step'], "result": result})
        return result

    def run(self):
        print(f"Starting agent with goal: {self.goal}")
        plan = self.generate_plan()
        results = []
        for i, step in enumerate(plan[:self.max_iterations]):
            print(f"Executing step {i+1}: {step['step']}")
            result = self.execute_step(step)
            results.append(result)
            print(f"Result: {result}")
            # check if goal met
            if "Goal achieved" in result:
                break
        self.save_results_to_markdown(results)

    def save_results_to_markdown(self, results: list):
        # Sanitize the goal so it is safe to use as part of a filename.
        safe_goal = "".join(c if c.isalnum() or c in " -_" else "_" for c in self.goal)[:60]
        filename = f"dynamic_agent_report-{safe_goal}.md"
        with open(filename, "w", encoding="utf-8") as f:
            f.write("\n".join(results))
        print(f"Results saved to {filename}")

if __name__ == "__main__":
    goal = input("Goal: ")
    agent = DynamicAgent(goal=goal)
    agent.run()
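
One refinement worth considering, sketched below under the assumption that it subclasses the DynamicAgent above, is to feed the accumulated memory back into each step so later steps can build on earlier results rather than being executed in isolation.

class ContextAwareAgent(DynamicAgent):
    """Variant of DynamicAgent that passes earlier results into each step's prompt."""

    def execute_step(self, step: dict) -> str:
        history = json.dumps(self.memory, ensure_ascii=False)  # prior plan and step results
        prompt = (
            f"Overall goal: {self.goal}.\n"
            f"Previous steps and results: {history}\n"
            f"Execute this step: {step['step']}.\n"
            f"Success condition: {step['condition']}.\n"
            "Provide the result and state whether the condition was met. "
            "If the overall goal is now fully achieved, include the exact phrase 'Goal achieved'."
        )
        result = self.get_llm_response(prompt)
        self.memory.append({"step": step["step"], "result": result})
        return result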

Use Cases of Agentic AI Applications

Agentic AI applications are already finding their way into practical scenarios. By combining autonomy with orchestration, these systems go beyond demos and research projects to deliver measurable impact.

At End Point, we use agents to speed up development, with human review to uphold our long-standing commitment to high-quality products. We also build agentic AI solutions for clients, modernizing and speeding up legacy workflows.

Challenges Ahead

While agentic AI holds promise, several challenges remain before widespread adoption becomes seamless:

Wrapping up

Agentic AI applications represent the future of intelligent software, blending autonomy with collaboration. By leveraging open-source tools like CrewAI and LangGraph, or by building specialized custom agents, you can create powerful solutions tailored to your needs. As AI evolves, we can expect even more sophisticated orchestration capabilities.
