For more than three decades, I have watched organizations try to make intelligence executable. From early Unix systems and expert systems in the 1990s, through large-scale telecom platforms and public-sector digital transformation, to today’s debates on AI governance and agentic systems, the question has remained remarkably stable: can we embed reasoning into the fabric of organizations?
In the 1990s, we attempted this with expert systems, decision-support tools, and knowledge bases. They promised consistency, speed, and rationality — a way to turn human expertise into software. What they lacked was language, adaptability, and scale. Today, with large language models and agentic architectures, we are revisiting the same ambition under a new technical regime: systems that can interpret goals, reason over unstructured information, and act across digital processes.
This article explores that continuity. Agentic AI is often presented as a rupture, but in reality it is better understood as a second attempt at the same organizational dream: making parts of management, coordination, and decision-making computable. The difference is not in the objective, but in the medium — from brittle rules to probabilistic models, from encoded procedures to interpreted intentions. Understanding this lineage is essential if we want to avoid repeating the same mistakes, only at a much larger scale.
The 1990s: “AI as corporate brain”
In the 90s, companies invested in:
- Expert Systems (rule-based engines)
- Decision Support Systems (DSS)
- Knowledge Management Systems
- Workflow automation
The corporate vision was:
Encode human expertise into software so the organization can act more rationally and consistently.
These systems aimed to:
- Capture procedural knowledge ("if X then Y")
- Automate structured decisions (credit approval, diagnostics, planning)
- Reduce dependence on individual experts
But they failed to scale because:
- Knowledge had to be manually encoded
- They were brittle (couldn't generalize)
- They required stable environments
- They had no real language or perception
So AI remained:
an internal tool, not an organizational actor.
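To make that pattern concrete, here is a minimal sketch of a 90s-style rule engine: expertise hand-encoded as condition/action pairs, with anything outside those cases falling through. The rules and facts are purely illustrative, not drawn from any real system.

```python
# A 90s-style "expert system" in miniature: hand-encoded "if X then Y" rules.
# Rules and facts are illustrative only.

rules = [
    # (condition on the facts, action to take)
    (lambda f: f.get("income", 0) > 30_000 and f.get("defaults", 1) == 0,
     "approve credit"),
    (lambda f: f.get("defaults", 0) > 0,
     "reject credit"),
]

def decide(facts: dict) -> str:
    """Fire the first matching rule."""
    for condition, action in rules:
        if condition(facts):
            return action
    # Brittleness: any case nobody thought to encode falls through.
    return "no rule applies: escalate to a human expert"

print(decide({"income": 45_000, "defaults": 0}))           # approve credit
print(decide({"notes": "self-employed, income unclear"}))  # falls through
```

Every new case means another hand-written rule, which is exactly why these systems could not scale.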
Today: Agentic AI as “corporate actor”
Agentic AI systems now:
- Perceive via unstructured data (text, images, logs)
- Reason probabilistically
- Act via APIs and workflows
- Learn from interaction
This changes the role:
| 1990s AI | Agentic AI today |
|---|---|
| Tool | Semi-autonomous actor |
| Rule-based | Model-based |
| Encoded knowledge | Learned knowledge |
| Static | Adaptive |
| Back-office | Front-line + back-office |
| Narrow task | Multi-step goal pursuit |
The ambition is the same:
Make part of the organization itself executable.
But now we can:
- Represent policies in natural language
- Translate goals into plans
- Execute actions in digital systems
- Monitor outcomes
- Iterate
This is exactly what management theory wanted in the 90s (BPR, TQM) but could not technically implement.
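As a rough sketch of that loop (`llm_plan`, `run_action`, and `check_outcome` are hypothetical stand-ins, not the API of any specific framework):

```python
# Sketch of the plan / act / monitor / iterate loop.
# llm_plan, run_action and check_outcome are illustrative stubs: a real
# system would call a language model, external APIs, and monitoring checks.

def llm_plan(goal: str, feedback: list[str]) -> list[str]:
    """Translate a natural-language goal (plus prior feedback) into steps."""
    if not feedback:
        return ["verify eligibility (Y)", "notify the citizen (Z)"]
    return ["re-verify eligibility", "escalate to a human reviewer"]

def run_action(step: str) -> str:
    """Execute one step in a digital system (stubbed)."""
    return f"done: {step}"

def check_outcome(results: list[str]) -> bool:
    """Monitor: did every step complete?"""
    return all(r.startswith("done") for r in results)

def pursue(goal: str, max_rounds: int = 3) -> None:
    feedback: list[str] = []
    for round_no in range(1, max_rounds + 1):             # iterate
        plan = llm_plan(goal, feedback)                   # goal -> plan
        results = [run_action(step) for step in plan]     # plan -> actions
        if check_outcome(results):                        # monitor outcomes
            print(f"round {round_no}: goal satisfied")
            return
        feedback.extend(results)
    print("bounded agency: handing the goal back to a human")

pursue("If a citizen requests X, verify Y, then notify Z.")
```

The important design choice is the bounded loop: the system gets a fixed number of rounds before it hands the goal back to a person, which is the "bounded agency" discussed below.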
What is actually returning from the 90s?
Three ideas are coming back:
1. AI as organizational memory
Then: knowledge bases
Now: vector databases + LLMs
Same function:
preserve and reuse institutional knowledge
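A toy sketch of that memory pattern: store entries, embed a question, retrieve the closest entries, and hand them to a model. The bag-of-words "embedding" and the stored policy lines are placeholders; a real system would use an embedding model and a proper vector store.

```python
# Toy version of "vector database + LLM" organizational memory.
# embed() is a bag-of-words placeholder and the stored entries are invented.

import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Placeholder embedding: a bag-of-words vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memory = [
    "Citizen notifications must be sent within 5 working days.",
    "Credit requests above the threshold need a second approval.",
    "Incident reports are archived after 24 months.",
]

def recall(question: str, k: int = 1) -> list[str]:
    """Retrieve the k entries most similar to the question."""
    q = embed(question)
    return sorted(memory, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

# The retrieved entries would then go into the model's context window.
print(recall("When must we notify a citizen?"))
```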
2. AI as decision layer
Then: expert systems
Now: agentic planners + copilots
Same goal:
reduce variance in decisions and speed execution
3. AI as coordination engine
Then: workflow engines
Now: multi-agent systems
Same dream:
automate coordination between roles and systems
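And a similarly small sketch of the coordination idea: in a 90s workflow engine the hand-offs between roles were hard-wired transitions; in a multi-agent setup each role becomes an agent. The two roles below are illustrative stubs, not real model calls.

```python
# Two role-agents and a tiny coordinator routing work between them.
# In practice each role would wrap a model and real systems; these are stubs.

from dataclasses import dataclass, field

@dataclass
class WorkItem:
    request: str
    draft: str = ""
    approved: bool = False
    history: list = field(default_factory=list)

def case_worker(item: WorkItem) -> WorkItem:
    """Role 1: draft a decision for the request."""
    item.draft = f"proposed decision for: {item.request}"
    item.history.append("drafted")
    return item

def reviewer(item: WorkItem) -> WorkItem:
    """Role 2: check the draft against policy."""
    item.approved = item.draft.startswith("proposed decision")
    item.history.append("reviewed")
    return item

def coordinate(request: str) -> WorkItem:
    """The coordination engine: route the item between roles, retry once."""
    item = reviewer(case_worker(WorkItem(request)))
    if not item.approved:
        item = reviewer(case_worker(item))
    return item

print(coordinate("citizen requests a parking permit"))
```

The shape is the point: roles, hand-offs, and a bounded retry. That is the same coordination problem the workflow engines of the 90s tried to solve with hard-wired transitions.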
The real difference: language
AI in the 90s failed because:
- It could not handle natural language
- It required formal logic
- It could not absorb informal know-how
Agentic AI succeeds because:
- Language becomes the control interface
- Policies, goals, and constraints can be written like:
“If a citizen requests X, verify Y, then notify Z.”
So we are shifting from software that encodes procedures to systems that interpret intentions.
That is the true discontinuity.
Organizational meaning
In the 90s:
AI was automation inside the firm.
Now:
AI becomes a quasi-role inside the firm
(an intern, analyst, planner, dispatcher, controller)
This is why “agentic” matters:
It’s not just smarter automation.
It is:
delegation of bounded agency
This reopens the same governance problem from the 90s:
- Who is accountable?
- Who validates decisions?
- Who controls drift?
But now at scale.
Strategic insight for companies
You can frame this as:
Agentic AI = the second attempt to computerize management.
First attempt (90s):
❌ failed due to symbolic rigidity
Second attempt (now):
⚠️ risks failing due to:
- hallucination
- misalignment
- over-delegation
- organizational naïveté
So the real competitive advantage is not the model.
It is:
designing AI as an organizational system, not an IT feature.
Exactly the same lesson as ERP in the 90s.
One-liner you can reuse
If we want a sharp formulation:
Agentic AI is not a new idea. It is the 1990s vision of the “intelligent organization”, finally equipped with language, learning, and action.
or:
In the 1990s we tried to encode expertise into software. Today we let software absorb it from language and act on it.
Looking back, what strikes me most is not how much the technology has changed, but how persistent the underlying question has been. From the first expert systems I encountered in the early days of Unix and corporate automation, to today’s agentic architectures discussed in boardrooms and public institutions, we have always been trying to make organizations think more clearly, act more coherently, and depend less on chance and individual heroics.
What is different now is scale and speed — but also responsibility. When reasoning is no longer just embedded in procedures but delegated to systems that can interpret language and act across digital infrastructures, the problem is no longer technical alone. It becomes organizational, institutional, and ultimately political. We are not just designing software anymore; we are shaping how decisions are made.
For someone who has lived through several cycles of technological promise and disappointment, this feels less like a revolution and more like a moment of continuity with consequences. Agentic AI is not a miracle solution, nor a threat in itself. It is a new attempt to formalize judgment. Whether it becomes an instrument of clarity or a source of new opacity will depend less on models and more on how we embed them in rules, roles, and accountability.
Perhaps this is the real lesson of the long arc from expert systems to agentic AI: intelligence has never been just a property of machines. It has always been a property of institutions. And today, more than ever, we are responsible for deciding what kind of institutions we want our machines to serve.