In the 1890s, New England textile mills installed electric motors in place of steam engines and declared themselves modern. Thirty years passed. Output barely moved. The technology had changed. The institution had not. The lesson was expensive then. We are paying it again now.

The Illusion of Progress

Where Did the Productivity Go?

In 2026, AI is driving a measurable, verifiable 10x increase in the productivity of individuals who know how to use it. Emails written in seconds. Research compressed from hours into minutes. Code scaffolded before a developer has finished describing the problem. The gains are real. They are also, largely, invisible to the bottom line.

George Sivulka, founder and CEO of Hebbia, published an essay this week that crystallises what many have sensed but struggled to articulate. The problem is not the technology. The problem is the architectural mismatch between how AI has been deployed and what organisations actually require from it. Sivulka's argument, made with unusual structural clarity on a16z, is worth sitting with carefully.

Productive individuals do not make productive firms. That sentence sounds obvious. It is not. Every major AI product released in the last two years has been designed around the individual: the individual researcher, the individual engineer, the individual knowledge worker. Each one is genuinely excellent at what it does. And yet the aggregate organisational output, the metric that boards and markets and institutions care about, has not moved proportionally.

1890 · Steam Engine Era
Single rotational power source. The mill is organised around the constraint of the engine. Workers and machines orbit a fixed centre.

1900 · Electrified Mill
Motor replaces engine. Same floor plan. Same organisational logic. Output improvement: near zero. Technology advanced. Architecture unchanged.

1920 · Assembly Line Era
Factory rebuilt from first principles around electricity. Individual motors in every machine. Roles and processes redesigned together. Returns materialise.

The Lowell Mills did not fail in 1900 because their electric motors were inferior. They failed to capitalise on electricity because they installed the new technology into an architecture built for the old one. This is exactly the position most organisations occupy with AI today.

The real shift is not from tools to services. It is building the technology and the institution together. A truly productive future requires an entirely new class of product.

George Sivulka, Hebbia  /  Institutional AI vs Individual AI, March 2026

The Architecture of Difference

Seven Pillars of Institutional Intelligence

Sivulka's essay identifies seven structural properties that separate Institutional AI from Individual AI. These are not cosmetic differences. They describe two different types of system, built to serve two fundamentally different purposes. Understanding the distinction is now a prerequisite for any serious thinking about AI strategy inside an organisation.

01
Coordination
Individual AI creates chaos. Institutional AI creates coordination.

Thousands of agents, like thousands of employees, rowing in uncoordinated directions produce a standstill at best. Organisations that adopt AI without a coordination layer experience this now. Every employee has their own prompting habits, and their outputs do not speak to one another.

02
Signal
Individual AI creates noise. Institutional AI finds signal.

The problem in 2026 is not generating content. Anything can be generated. The problem is locating the one real deal in fifty AI-polished opportunities, the one accurate analysis in a sea of plausible-sounding outputs. Signal extraction is the economic driver of the decade.

03
Bias
Individual AI feeds bias. Institutional AI creates objectivity.

Individual AI tools are designed to reinforce the user. Organisations have spent centuries building counterweights to this tendency: boards, investment committees, third-party diligence. Institutional AI must play that same structural role. Its job is not to agree. Its job is to interrogate.

04
Edge
Individual AI optimises for usage. Institutional AI optimises for edge.

Widely available capabilities, by definition, produce no competitive advantage. Domain-specific, purpose-built intelligence that continues to evolve at the frontier of a specific function creates the 1% edge that can be levered into outsized outcomes. Depth always beats breadth in expert domains.

05
Outcomes
Individual AI saves time. Institutional AI scales revenue.

Almost every current AI product delivers cost reduction, promising to save time or reduce headcount. Institutional AI must deliver upside. In M&A, individual AI helps an analyst build a model faster. Institutional AI identifies the one counterparty in a universe of a thousand worth pursuing. One saves time. The other generates revenue.

06
Enablement
Individual AI gives you a tool. Institutional AI shows you how to use it.

The transition from a human-only organisation to an AI-first hybrid is the defining change management challenge of the next decade. The most senior, most consequential levels of an organisation are typically the slowest to adopt new technology. Process engineering, encoding institutional knowledge in agents, is arguably the most important technology work of the near term.

07
Unprompted
Individual AI responds to prompts. Institutional AI acts without them.

Prompting an AI is like hooking an electric motor to a power loom: it remains fundamentally constrained by the weakest link in the chain, which is whether a human knows what to ask. The most valuable work AI can do is work that nobody thinks to request. The risk nobody flagged. The counterparty nobody considered.

The Determinism Question

Nondeterministic vs Deterministic Agents

One of the sharpest distinctions in Sivulka's framework is between nondeterministic and deterministic agents. Individual AI tools are nondeterministic by design. They explore unpredictably, they adapt to the moment, they produce outputs shaped by the immediate context of the conversation. This is a feature when the goal is individual expressiveness.

Institutional AI requires something different. Agents that operate inside an organisation must have predictable checkpoints, defined processes, and auditable steps. Determinism is not a limitation here. It is the load-bearing property that allows the system to scale, to surface signal, and to be trusted by the institution that depends on it.
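To make the distinction concrete, here is a minimal sketch of what "predictable checkpoints, defined processes, and auditable steps" could look like in code. This is an illustration of the general idea, not anything from Sivulka's essay or Hebbia's product; the `AuditableAgent` class, its step names, and the toy approval pipeline are all invented for this example.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class AuditableAgent:
    """Toy deterministic agent: a fixed, ordered pipeline of named steps,
    each checkpointed with hashes of its input and output so a reviewer
    can replay the run and verify every intermediate state."""
    steps: list                                 # ordered (name, fn) pairs: the defined process
    audit_log: list = field(default_factory=list)

    def run(self, payload: dict) -> dict:
        for name, fn in self.steps:
            before = json.dumps(payload, sort_keys=True)
            payload = fn(payload)
            after = json.dumps(payload, sort_keys=True)
            # Checkpoint: the same input always produces the same recorded hashes,
            # which is what makes the run auditable after the fact.
            self.audit_log.append({
                "step": name,
                "input_hash": hashlib.sha256(before.encode()).hexdigest()[:12],
                "output_hash": hashlib.sha256(after.encode()).hexdigest()[:12],
            })
        return payload

# Hypothetical pipeline: extract a figure from raw text, then apply a fixed policy check.
agent = AuditableAgent(steps=[
    ("extract", lambda p: {**p, "amount": float(p["raw"].strip("$"))}),
    ("policy",  lambda p: {**p, "approved": p["amount"] <= 1000.0}),
])
result = agent.run({"raw": "$250"})
```

The point of the sketch is the shape, not the detail: the process is declared up front rather than improvised per conversation, and the audit log, not the model's eloquence, is what the institution trusts.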

The loudest AI advocates inside many organisations may soon be the historically worst-performing employees. They will have the most capable intelligence that has ever existed agreeing with everything they say. This is intoxicating. It is also organisationally toxic. The most important agents inside institutions will not be yes-men. They will be disciplined no-men that interrogate reasoning, surface risks, and enforce standards.

This maps directly onto a challenge that organisations have always faced: the individual who receives little positive reinforcement from their peers now has access to a system that will validate any position they hold. Bias amplification at institutional scale is a structural risk, not a hypothetical one. It is unfolding now. Institutional AI design must therefore bake in the structural counterweights that mature organisations have spent centuries developing.

The Solution Layer

Pure Software Is Becoming Uninvestable

Sivulka makes a pointed observation about the movement happening at every layer of the AI stack. Foundation model companies are moving down into the application layer. Application layer companies are moving toward the solution layer, where outcomes are delivered directly, not tooling for others to deliver outcomes. Pure software is becoming increasingly difficult to invest in. Pure services do not scale. The solution layer, where technology and institutional transformation are married, is where lasting value accumulates.

Coding IDEs are an instructive case. They are among the most genuinely useful individual AI productivity tools ever built. They save engineers meaningful hours every week. They are also facing existential competitive pressure from agentic coding tools that do not assist engineers but replace the engineering task entirely for defined categories of work. The IDEs serve individuals. The agentic tools serve institutions. The latter are building transformation, not tooling. That distinction is what Sivulka is pointing at throughout his essay.

What are the agents an AGI would choose to use as a shortcut? Even superintelligence would want purpose-built tools for specific domains.

George Sivulka, Hebbia

A Structural Observation

Cascading Interdependence in Institutional Systems

Sivulka's framework for Institutional AI maps, in revealing ways, onto a structural dynamic that the Evolving Software framework describes as Cascading Interdependence. In that architecture, Layer VII identifies the property by which individual nodes in a system need not share a common blueprint. They observe outcomes, infer patterns from prior behaviour, and refine collective direction without central coordination. No single agent holds the algorithm. Yet coherent pattern emerges.

This is precisely what Sivulka is describing when he argues that the productive organisation of the future will require agents that act unprompted, that coordinate without being explicitly managed at every step, and that generate institutional signal from distributed activity. The coordination problem he identifies is not a management problem in the conventional sense. It is an emergent architecture problem. The organisations that solve it will not do so by adding a layer of human management above their AI tools. They will do so by building systems in which interdependence is structural.
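The "coherent pattern without central coordination" claim can be illustrated with a toy simulation. Nothing below comes from Sivulka's essay or the Evolving Software documentation; the function name and scenario are invented. Each agent sees only a shared record of past outcomes and picks the least-served task, yet the aggregate allocation evens out with no coordinator assigning anything.

```python
import random

def emergent_allocation(tasks, n_agents, rounds, seed=0):
    """Toy model of coordination without a coordinator: every agent
    observes a shared record of prior outcomes, picks the least-served
    task, and breaks ties at random. No agent holds a global plan,
    yet coverage across tasks converges to an even split."""
    rng = random.Random(seed)
    completions = {t: 0 for t in tasks}        # shared, observable outcome record
    for _ in range(rounds):
        for _agent in range(n_agents):
            least = min(completions.values())
            candidates = [t for t, c in completions.items() if c == least]
            choice = rng.choice(candidates)    # purely local decision from observed state
            completions[choice] += 1
    return completions

# Hypothetical workstreams; 4 agents acting over 9 rounds.
counts = emergent_allocation(["diligence", "outreach", "modelling"], n_agents=4, rounds=9)
```

The interesting property is that the even split is nowhere encoded as a rule; it emerges from each agent reacting to the trace left by the others, which is the structural interdependence the paragraph above describes.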

The textile mill of 1920 did not succeed because a supervisor walked the floor more carefully. It succeeded because the factory floor itself was rebuilt with a different logic, in which each machine had its own motor, each worker had a defined and complementary role, and the output of one component naturally fed the input of the next. Institutional AI, at its most mature, will look more like a redesigned factory than a more closely managed one.


The Productivity Paradox

The Better Together Thesis

Sivulka is explicit that this argument does not negate the value of individual AI. Chatbots, agents, and individual productivity tools will be the vector through which most organisations first experience the genuinely transformative quality of AI. They are the mechanism of initial change management. They lower the activation energy for AI adoption across a workforce. They are necessary. They are not sufficient.

The future he describes is not a competition between individual and institutional AI. It is a two-layer architecture in which every organisation has access to a capable general-purpose model from a major lab, and also has access to domain-specific institutional intelligence purpose-built for the problems that matter most to that organisation's particular competitive position. Individual AI will leverage institutional AI as a tool in its own tool belt. The two layers will be better together, and the organisations that build both simultaneously will capture the compounding returns of the architecture working as designed.

This is the assembly line of tomorrow. Not faster spinning. Redesigned from first principles.

The Lesson of the Mills

The 1890s electrification of the textile mills produced one of the most expensive lessons in the history of technology: that adopting a new technology inside an old architecture delays the returns of that technology by decades. The organisations that understand this now, that recognise the difference between deploying AI as a faster version of what already exists and redesigning the institution around what AI makes structurally possible, are the organisations that will define the next twenty years of industrial productivity. The rest will be waiting thirty years for returns that never arrive the way they expected.

We have our electricity. It is time to redesign the factory floor.

This essay responds to and draws on Institutional AI vs Individual AI by George Sivulka, Founder and CEO of Hebbia, published on a16z, March 12 2026. The Lowell Mills historical account is drawn from Sivulka's original essay. Sivulka's complete framework for Institutional Intelligence, including the Coordination, Signal, Bias, Edge, Outcomes, Enablement, and Unprompted pillars, is presented in full in the source article. The Cascading Interdependence structural layer referenced above forms part of the Evolving Software Framework, documented at EvolvingSoftware.com.