In the tech world, nobody likes to follow the rules, but everyone enjoys inventing imaginary laws. The best known is Moore’s Law, which observes that the number of transistors on a chip, and with it computing capability, roughly doubles every two years. A lesser-known but arguably more critical tech law is Amara’s Law, which states that we tend to overestimate the short-term impact of a new technology while underestimating its long-term effects. The internet of the 1990s is a good example: short-term hype and expectation ran so high that they helped inflate the infamous dot-com bubble and bust. Since then, however, the internet has proved genuinely revolutionary, with a far-reaching impact on entire industries, commerce, and global communications. The generative AI frenzy of the past couple of years, and the more recent hype around agentic AI, suggest that Amara’s Law is back in full effect.
The substantial technical advances of recent years have come partly from clever thinking and in equal measure from enormous financial investment. Goldman Sachs puts the price tag of building generative AI at roughly $1 trillion, and even by tech standards, that is a considerable outlay (goldmansachs.com/insights/articles/will-the-1-trillion-of-generative-ai-investment-pay-off).
Yet, as an industry analyst, I have insight into what tech buyers are spending their money on, and at the time of this writing, I can tell you that they are not spending as much on AI and agents as the hype might suggest. The tech and investment communities are currently overestimating AI’s short-term impact. If Amara’s Law tells us that we tend to underestimate the long-term impact of new tech, how should we assess the impact of AI agents?
Early Successes and Failures
In the short to medium term, the challenges of building agents and agentic AI will mean a rush of early (albeit basic) successes, followed by a period when things hit the wall and the wheels temporarily come off. That happens when the sheer complexity of the agentic technology challenge meets a widespread lack of enterprise preparedness. Expect an endless stream of stories about notable successes and spectacular failures. The good news is that the short-term impact on jobs (replacing human roles with AI equivalents) will likely be much smaller than many are predicting.
The longer term is a different proposition altogether. Within the next 5–10 years, agentic AI will significantly affect jobs as roles are systematically automated and replaced. That impact will come through the direct loss of individual roles and, in some cases, entire departments. Even jobs that agents do not replace will feel the effect, as the value of the human in the role is eroded by the automation that surrounds or augments it. In short: wages fall, and fewer job openings arise.
Environmental Impact
Agentic AI will also affect the environment. Whether you are a passionate environmental campaigner or a climate-change skeptic, the fact remains that training large language models (LLMs) requires enormous amounts of computational power, and the energy demands continue right through deploying and running LLMs at scale.