March 5, 2025 • 4 min read

Don’t use LangChain anymore: Atomic Agents is the new paradigm!

Written by Timothé Bernard

Introduction

Since the rise of LLMs, numerous libraries have emerged to simplify their integration into applications. Among them, LangChain quickly established itself as the go-to reference. Thanks to its modular approach, it allows chaining model calls, interacting with vector databases, and structuring intelligent agents capable of reasoning and acting. For a long time, it was seen as a must-have for anyone looking to build generative AI solutions.

Over time, however, several limitations became apparent: too many abstraction layers, insufficient optimization for certain projects, hidden costs, lack of native input/output validation, and more. To keep it short and simple: customization is hard. In response to these limitations, new alternatives are emerging with a more efficient and pragmatic approach. One of the most promising solutions is AtomicAgents, a modern framework better suited to the current needs of developers.

AtomicAgents takes a modular and flexible approach to designing autonomous agents, avoiding LangChain’s complexity while enabling better task orchestration. LangChain paved the way, but these new solutions may well surpass it by offering better answers to the real needs of today’s developers.

LangChain’s limitations as an LLM agent library

When it was first introduced, LangChain impressed developers with how incredibly easy it made building applications powered by LLMs. At Theodo Data&AI, LangChain was quickly adopted. Even today, we still use it to rapidly build Proof of Concept projects, test ideas, or validate technical feasibility. LangChain is also used at Theodo for its convenient wrappers and its seamless integration with LangFuse, which provides enhanced monitoring capabilities.

While LangChain became a de facto standard for LLM-based applications, it now comes with significant limitations, driving many developers to look for alternatives.

1. Lack of Control Over Autonomous Agents

One of the main issues is the lack of control when working with agents. When using LangChain agents, the framework makes hidden calls to LLMs, chaining requests together without giving the developer full visibility into the process. The result? Unpredictable costs and inefficient execution, sometimes leading to unnecessarily long workflows.

2. Excessive Abstraction and Rigid Structures

The excessive abstraction and rigid architecture also make optimization difficult. Tweaking a processing flow or customizing an agent becomes frustratingly complex because LangChain enforces its own internal structures. To make matters worse, the documentation is incomplete and confusing, packed with outdated examples, making the learning curve unnecessarily steep.

As a library, with its wrappers and its LangFuse integration for example, LangChain can be useful. As a framework, LangChain is too restrictive. It imposes an opaque structure that makes it harder to debug, to diagnose LLM behavior, and to maintain a project over time. Furthermore, this slows down the learning process for junior developers and can even mask gaps in their understanding. These limitations have pushed us to explore other libraries and alternative paradigms for developing LLM agents, such as Atomic Agents, PydanticAI or Marvin.

Atomic Agents as the new paradigm

What if the real game-changer was AtomicAgents and its modularity?

AtomicAgents is a library designed to create and orchestrate autonomous AI agents in a modular and optimized way. Unlike LangChain, it avoids heavy abstractions and gives developers greater control over agent workflows. Its simplified approach allows for better efficiency, transparency, and scalability. AtomicAgents was launched in June 2024 by Kenny Vaneetvelde, a highly active contributor on Reddit, and its popularity has been steadily growing ever since.

Atomic Agents GitHub stars history

AtomicAgents introduces several major improvements compared to LangChain, CrewAI, and other similar frameworks:

  • Reduced Complexity: No more excessive abstractions. Just simple components that can be combined and arranged however you want.
  • Control is Power: AtomicAgents is designed to give developers full control over every essential part of the agent (agent, memory, RAG, etc.). This allows for customization, fine-tuning, and optimization without having to guess what’s happening behind the scenes.
  • A Proven Approach, IPO (Input-Process-Output): by adopting the IPO model and emphasizing atomicity, AtomicAgents promotes modularity, maintainability, and scalability.

1. IPO: Input, Process, Output — and that’s it

AtomicAgents ensures clarity and simplicity in development by following the IPO model:

  • Input: Data structure validation using Pydantic
  • Process: All operations are handled via agents and tools (memory, context providers, etc.)
  • Output: Output data structure validation using Pydantic

2. Atomicity and the Single Responsibility Principle

The core idea behind AtomicAgents is to structure simple, specialized objects — agents, memory, context providers, etc. — where each component has a single responsibility and can be reused across different pipelines. Designed to be interconnected without rigid dependencies, these modules can be added or removed without disrupting the entire system, ensuring optimal modularity. This approach avoids the opacity of LangChain.

With simple objects, developers can then build more complex pipelines step by step.

Atomic Agents example architecture (source: https://github.com/BrainBlend-AI/atomic-agents?tab=readme-ov-file)

AtomicAgents integrates seamlessly with Pydantic and, more importantly, with Instructor. This addresses a major weakness of libraries with few maintainers: the ecosystem.

Thanks to its integration with Instructor, AtomicAgents gives developers access to a wide range of LLM providers and makes it easy to migrate any existing project to AtomicAgents!
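As a rough illustration of that point (a minimal sketch, not from the article's codebase): with Instructor, the provider is just a patched client object, so switching providers mostly comes down to constructing a different client and handing it to the agent configuration shown later in this article.

```python
import instructor
import openai

# Patch the provider's native SDK client so it returns validated Pydantic objects.
client = instructor.from_openai(openai.OpenAI())

# Switching to another provider supported by Instructor is a similar one-liner, e.g.:
#   import anthropic
#   client = instructor.from_anthropic(anthropic.Anthropic())
```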

How we use it

1. Clear Inputs and Outputs

We start by creating the schemas that will define our input and output. Unlike LangChain, we have validation schemas for both inputs and outputs.

Input/Output validation schema with Atomic Agents
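The screenshot above shows our actual schemas; here is a minimal sketch of the same idea, assuming the BaseIOSchema base class from atomic-agents (the class and field names are illustrative):

```python
from pydantic import Field
from atomic_agents.lib.base.base_io_schema import BaseIOSchema


class QuestionInput(BaseIOSchema):
    """Input schema: the question the user wants answered."""

    question: str = Field(..., description="The question asked by the user")


class AnswerOutput(BaseIOSchema):
    """Output schema: the validated answer produced by the agent."""

    answer: str = Field(..., description="The answer to the user's question")
    sources: list[str] = Field(default_factory=list, description="Identifiers of the documents used")
```

Both schemas are plain Pydantic models, so malformed LLM output is caught at validation time instead of surfacing deep inside the pipeline.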

2. Creation of a clear prompt system

We then define the system prompt as a clear, easily adjustable set of instructions.

Prompt constraints with Atomic Agents
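For reference, a sketch of what such a prompt definition can look like with the SystemPromptGenerator from atomic-agents (the instructions themselves are illustrative):

```python
from atomic_agents.lib.components.system_prompt_generator import SystemPromptGenerator

system_prompt_generator = SystemPromptGenerator(
    background=[
        "You are an assistant that answers questions about our internal documentation.",
    ],
    steps=[
        "Read the retrieved context carefully.",
        "Answer the question using only that context.",
    ],
    output_instructions=[
        "Answer in the user's language.",
        "List the documents you relied on.",
    ],
)
```

Each constraint lives in its own list entry, which makes the prompt easy to review and to modify without touching the rest of the agent.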

3. Building the Agent

Agent instantiation with Atomic Agents
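A hedged sketch of the instantiation step, reusing the schemas and prompt generator sketched above and assuming the BaseAgent / BaseAgentConfig API described in the atomic-agents README:

```python
import instructor
import openai
from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig
from atomic_agents.lib.components.agent_memory import AgentMemory

agent = BaseAgent(
    config=BaseAgentConfig(
        client=instructor.from_openai(openai.OpenAI()),  # any Instructor-compatible client
        model="gpt-4o-mini",
        memory=AgentMemory(),
        system_prompt_generator=system_prompt_generator,
        input_schema=QuestionInput,   # illustrative schemas from step 1
        output_schema=AnswerOutput,
    )
)

# One explicit, validated call; no hidden chain of LLM requests.
response = agent.run(QuestionInput(question="How do we configure the staging environment?"))
print(response.answer)
```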

4. Example of Modularity: RAG

The ContextProvider itself is a module independent of the agent. A RAG can be shared across multiple agents — just like memory. Two agents could even share the same memory.

Context Provider with Atomic Agents
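As a sketch of that modularity (the provider class and its contents are illustrative; the base class and registration call follow the atomic-agents examples):

```python
from atomic_agents.lib.components.system_prompt_generator import SystemPromptContextProviderBase


class RetrievedDocsProvider(SystemPromptContextProviderBase):
    """Injects chunks retrieved from a vector store into the system prompt."""

    def __init__(self, title: str):
        super().__init__(title=title)
        self.chunks: list[str] = []

    def get_info(self) -> str:
        # Whatever this returns is rendered inside the agent's system prompt.
        return "\n\n".join(self.chunks)


docs_provider = RetrievedDocsProvider(title="Retrieved documents")
docs_provider.chunks = ["<chunk returned by the retriever>"]

# The same provider instance can be registered on several agents,
# exactly like a shared AgentMemory.
agent.register_context_provider("retrieved_docs", docs_provider)
```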

Conclusion

AtomicAgents is a highly promising library in the LLM ecosystem, directly challenging LangChain. It offers three key advantages over LangChain:

  • Transparency: Unlike LangChain, everything is understandable when using AtomicAgents.
  • Simplicity: No black box — just clear code.
  • Control: Full ability to debug, swap components, and have total control over agent behavior.

That being said, LangChain still remains a solid option and a great entry point:

  • It benefits from a large community ready to help.
  • The internet is full of tutorials and examples.
  • It’s an excellent tool for quickly experimenting with LLMs.
  • Easy handling of asynchronous calls

LangChain also comes with its own ecosystem (LangSmith, LangServe), allowing you to build end-to-end projects easily.

On the flip side, AtomicAgents does not yet integrate smoothly with LangFuse and still has some gaps, such as the lack of official documentation. Atomic Agents also requires you to be comfortable with LLM concepts and interactions, so it might not be the best fit for newcomers to the LLM world.

However, it brings a fresh paradigm for building LLM agents and introduces a new way of thinking about LLM-based development — making it absolutely worth exploring.

Are you looking for GenAI experts? Feel free to contact us!

This article was written by

Timothé Bernard