Byline by Chris Stephenson, alliant Managing Director of Intelligent Automation & AI, and
Dhaval Jadav, Chief Executive Officer of alliant
AGI is far off, but companies shouldn’t overlook AI agents
While AI agents, or agentic AIs, are being touted as the next leap in human productivity, the unadvertised reality is that, like other AI technologies, they are only as good as the humans who design and use them.
They still require a human to know what the problem is and give the right command. But what if you were to combine the horsepower of an agent with an AI that can think like a human? How much more could be achieved?
This question is why many in the AI industry are already looking past agents and focusing solely on AGI, or artificial general intelligence. Some leaders in the tech industry believe AGI is as little as two years away. Google co-founder Sergey Brin recently sent a memo to employees recommending they work 60 hours a week, saying, “Competition has accelerated immensely and the final race to A.G.I. is afoot.”
True AGI Is Still a Long Way Off
Much of the conversation around AGI has started with AI reasoning models. Reasoning models like OpenAI’s o1 and xAI’s Grok 3 are designed to “think through” problems before they respond. The human mind, however, is far more sophisticated than a reasoning machine, and reasoning models don’t solve the three biggest obstacles to achieving AGI.
The first problem is that AI models, even reasoning models, need to be continually updated and trained. That requires ongoing human oversight to reinforce the AI’s logic. Without a human telling an AI that it is thinking about things in the right way or the wrong way, it will remain in a perpetual embryonic state.
The second problem is that current AI models cannot assimilate information or adapt to novel situations. If an AI model can think through a math problem, that does not mean it can think through a legal problem.
Finally, the biggest problem is that AIs are unable to form unique ideas. AI “thought” is confined to the data a model was trained on. Put another way, AI can only restate concepts that already exist; we are still struggling to find a way for AIs to come up with anything original.
Dr. Robert Ambrose, former Chief of NASA’s Software, Robotics and Simulation Division, has posited that a possible solution is to induce AIs to dream. The idea is that if you can make an AI think abstractly and spontaneously by suspending its normal reasoning for extended periods, and then have it incorporate those abstract thoughts into its training data, you could artificially create original thought.
But scientists don’t even know why or how humans dream, making the prospect of having AIs dream of electric sheep that much more remote.
Don’t Overlook Agents – They Can Help You Right Now
Believe it or not, you don’t need AGI to direct your agents; plain human general intelligence still works. If you are intentional about how you set up your AI agent, you can use it to substantially multiply your productivity. Instead of performing tasks manually or prompting discrete AIs and automations to do work, you can rely on an AI agent to project-manage for you.
Don’t mistake AI agents as some sort of magic “Easy Button,” though. No matter what tech companies are promising you, there is still work that needs to be done on the front end to really make AI agents effective.
Think of it this way: if you hired a person, presumably with general intelligence, and handed them a work manual, would you expect them to do your job on day one?
Like a new employee, your agent needs to be trained on your processes to know how to get the job done. It needs to be given tools and trained on how to use them to accomplish each task in the process chain. That means you need to create underlying automations for your AI to execute in pursuit of its objective. It also needs to be trained on how to handle the different exceptions, roadblocks, and variations that may interrupt the nominal process flow.
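To make the idea concrete, the setup described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration in Python (every name here is invented for the example, not drawn from any real agent framework): the agent holds a registry of tool automations, an ordered process chain it was "trained" on, and a fallback path for when a step hits an exception.

```python
# Hypothetical sketch: an agent as (1) registered tool automations,
# (2) an ordered process chain, and (3) exception handling with fallbacks.

class Agent:
    def __init__(self):
        self.tools = {}     # tool name -> callable automation
        self.playbook = []  # ordered (tool, fallback) steps

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def add_step(self, tool_name, fallback=None):
        self.playbook.append((tool_name, fallback))

    def run(self, task):
        """Execute each step in order, switching to the fallback tool
        when a step raises an exception (a 'roadblock' in the process)."""
        result = task
        log = []
        for tool_name, fallback in self.playbook:
            try:
                result = self.tools[tool_name](result)
                log.append(f"{tool_name}: ok")
            except Exception:
                if fallback and fallback in self.tools:
                    result = self.tools[fallback](result)
                    log.append(f"{tool_name}: failed, used {fallback}")
                else:
                    raise
        return result, log

# Example: a two-step document pipeline with a fallback parser.
agent = Agent()
agent.register_tool("parse", lambda d: d["text"])        # happy path
agent.register_tool("ocr", lambda d: d.get("scan", ""))  # fallback automation
agent.register_tool("summarize", lambda t: t[:10])
agent.add_step("parse", fallback="ocr")
agent.add_step("summarize")

result, log = agent.run({"scan": "scanned contract text"})
```

The point of the sketch is that none of this structure appears on its own: someone has to register the tools, order the steps, and decide what the fallbacks are — exactly the front-end work the article describes.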
The difference with an agent is that, once built, you are no longer at risk of losing an employee. Recruiting and training people is not only expensive but continuous, no matter the role. The investment of time and money in an agent, on the other hand, is mostly a one-time cost; you don’t need to recruit or train a replacement ever again.
It may seem daunting, but we’re doing it today. And even AGI wouldn’t be able to work without that groundwork being laid.
The payoff is that, once the initial work is done, you have an agent that can not only work at a speed no human could match but can also perform far more tasks simultaneously.
Don’t Pin Your Hopes on Sci-Fi Timelines
According to Back to the Future, we’d have flying cars and time travel by 2015. Blade Runner said we’d have fully sentient androids by 2019. In Terminator 2, Skynet’s artificial superintelligence comes online in 1997. AGI is still in the realm of science fiction; if we wait around for it, we’ll miss out on the progress available now.
The path forward isn’t about choosing between agents and AGI, but rather about leveraging the concrete benefits of current AI technology while maintaining a pragmatic perspective on future developments. By focusing on thoughtful implementation of AI agents today – establishing clear processes, creating necessary automations, and providing proper training – organizations can realize significant productivity gains without waiting for the uncertain timeline of AGI.
Rather than fixating on when machines might think exactly like humans, we should concentrate on how they can help us think and work better right now.
Featured Leadership
Dhaval Jadav is Chief Executive Officer of alliant, America’s leading consulting and management engineering firm, which helps American businesses overcome the challenges of today to prepare them for the world of the 22nd Century and beyond. Jadav co-founded the firm in 2002 to be unlike any other consultancy, with an emphasis on partnerships with clients to not only identify but also implement quantifiable solutions to their most critical concerns.
Chris Stephenson is Managing Director of Intelligent Automation & AI at alliant and was previously a Managing Principal at Grant Thornton.