The Rise of Agentic AI
Written by Rob Stevenson, Field CTO

Agentic AI follows the progression of AI. We’ve had around 40 years of AI activity, but over the last 8 to 10 years we’ve seen incredible acceleration, especially once transformer models showed how to train large neural networks effectively with backpropagation at scale.
That allowed generative AI technology to emerge. Before that, we had what was called Perceptron AI, which goes back decades. Back then, people had difficulty training multi-layered neural networks. With the breakthroughs in transformer architectures and backpropagation-based training, training became more effective and models retained information better. That’s when generative AI started gaining traction.
Now, in the past 18 months or so, generative AI has evolved into agentic AI. Essentially, agentic AI takes generative AI capabilities and ties them into what we call knowledge-intensive language tasks: activities that require a deep understanding of language and domain knowledge and that can be automated.
Agentic AI acts as an assistant for those tasks. It’s exciting. We’re now seeing companies build AI factories that manage pipelines for different agentic AI workflows. For example, routing the first line of help desk activity to the right person, or even automating second-line responses.
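As a toy illustration of that kind of pipeline, here is a minimal sketch. The classify() function is a stand-in for the agent’s language model, and the categories, queue names, and canned responses are all hypothetical rather than from any real deployment.

```python
# Minimal sketch of an agentic help-desk workflow: route first-line tickets
# to the right queue, and auto-respond where a second-line reply is allowed.
# classify() is a placeholder for an LLM call; all names are illustrative.

def classify(ticket_text: str) -> str:
    """Placeholder for a language-model call that labels the ticket's intent."""
    text = ticket_text.lower()
    if "password" in text or "login" in text:
        return "account_access"
    if "refund" in text or "invoice" in text:
        return "billing"
    return "general"

# First line: which team each kind of ticket is routed to.
ROUTES = {
    "account_access": "identity-team",
    "billing": "finance-team",
    "general": "frontline-support",
}

# Second line: responses the agent is allowed to send automatically.
AUTO_RESPONSES = {
    "account_access": "We've sent a password reset link to your registered email.",
}

def handle(ticket_text: str) -> dict:
    """Route a ticket and, where permitted, attach an automated reply."""
    label = classify(ticket_text)
    return {
        "queue": ROUTES[label],
        "auto_reply": AUTO_RESPONSES.get(label),  # None means a human responds
    }

print(handle("I forgot my password and can't get into my account"))
```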
In the federal space, for instance, NASDAQ is doing fascinating work using agentic AI to assist in fraud detection investigations. These agents help flag potential fraud in stock trading, like identifying questionable puts and calls or insider trading.
So, that’s what we’re seeing: agents that assist with real tasks using knowledge-rich dialogue and data. It’s a big step forward.
Agentic AI vs. traditional AI
Chatbots and co-pilots based on generative AI help with things like browsing and searching, sort of like a better version of Google. They’re powerful at tying search results together with explanation.
Now, it becomes more agentic when that chatbot is grounded in data not already included in the model. GPT stands for Generative Pre-trained Transformer, so you start with a base foundation model. To go beyond its training, you need to augment it with your own data, which is often done through something called RAG (retrieval-augmented generation), the industry term for bringing your business-specific data into the generative AI solution. That turns the AI into an agent for your work, similar in concept to a sports agent for a professional athlete.
At that point, the agentic AI system can start talking about your business specifically. It’s not just a generic assistant; it’s an agent for your business needs.
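To make the RAG idea above concrete, here is a minimal sketch. It assumes a toy bag-of-words embedding and a placeholder generate() function standing in for whatever foundation model and retrieval stack you actually use; the documents, function names, and prompt format are illustrative, not from any specific product.

```python
# Minimal retrieval-augmented generation (RAG) sketch: fetch the most relevant
# business documents for a question, then hand them to the model as context.

import math
from collections import Counter

# Business-specific documents the base model was never trained on.
DOCUMENTS = [
    "Our help desk escalates outages to the network team within 15 minutes.",
    "Refund requests over $500 require approval from a regional manager.",
    "Trading alerts are reviewed by the fraud team before market open.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to your foundation model (e.g. an LLM API)."""
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    """RAG: retrieve relevant business context, then let the model answer with it."""
    context = "\n".join(retrieve(question))
    prompt = f"Use only this company context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("Who approves large refunds?"))
```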
So, traditional chatbots are more like general-purpose search interfaces. Agentic AI incorporates expert-level, domain-specific knowledge and maintains it across interactions. That’s a significant leap, and it comes with its own challenges.
A real-world glimpse into agentic AI
An intense example is people falling in love with their agentic AI.
There was a recent article in The New Yorker about someone who was lonely and had been having long conversations with their AI. It wasn’t just a chatbot; it was an agentic therapist, maintaining context over time. But the person got upset when the AI had to reset, because there’s only so much persistent memory. After a few months, that memory layer fills up and resets, and all the personal context disappears. The person described it as going through a breakup.
So that’s one aspect: agentic AI can maintain continuity and context across sessions. That knowledge could be business-related or, in cases like this, very personal.
It’s a reminder that agentic AI can act as a real agent in someone’s life, not just a browser search assistant giving search results.
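To illustrate why that kind of reset happens, here is a rough sketch assuming a simple bounded memory store; real agentic systems manage context windows and long-term memory far more carefully, and every name below is illustrative rather than drawn from any actual product.

```python
# Sketch of persistent agent memory with a hard capacity limit: once the
# buffer fills, accumulated context is wiped, as in the reset described above.

class AgentMemory:
    """Persistent conversational memory with a fixed capacity."""

    def __init__(self, capacity_tokens: int = 8000):
        self.capacity_tokens = capacity_tokens
        self.used_tokens = 0
        self.facts: list[str] = []  # personal or business context learned over time

    def remember(self, fact: str) -> None:
        tokens = len(fact.split())
        if self.used_tokens + tokens > self.capacity_tokens:
            # The memory layer is full: everything accumulated so far is lost.
            self.reset()
        self.facts.append(fact)
        self.used_tokens += tokens

    def reset(self) -> None:
        """Wipe all accumulated context."""
        self.facts.clear()
        self.used_tokens = 0

memory = AgentMemory(capacity_tokens=50)
for month, note in enumerate(["prefers evening check-ins"] * 20, start=1):
    memory.remember(f"month {month}: user {note}")
print(len(memory.facts), "facts retained after the last reset")
```

Only what was added after the most recent reset survives; everything learned before it is gone, which is the kind of loss the article describes.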
What’s next?
We’re moving through the evolution from generative AI to agentic AI, and next come physical AI, reasoning models, and cognitive models. All of these trends are developing in parallel.
Cognitive modeling-based AI is probably 18 months away. Reasoning models, like DeepSeek-R1, have just entered the landscape. Agentic AI is what we’re running now in AI factories.
At Cerio, what excites us is applying just-in-time manufacturing to AI factories, and that’s what composable infrastructure enables. Today’s hardware is being used for generative AI, agentic AI, and reasoning models all at once. Composability lets us allocate resources on demand, which is crucial for this multi-model future.