There is a moment in any fast-moving system when the output of that system begins feeding back into the system's own construction. The pace of change stops being a fixed property of the environment and becomes a function of how capable the system has already become. In February 2026, that moment arrived, formally and publicly, for artificial intelligence.

Matt Shumer, a founder who has spent six years at the centre of the AI industry, wrote a piece this month that stopped a great many people cold. Not because it contained a leak or a revelation, but because it named something that had been accumulating quietly for months: the gap between what AI can now do and what most people believe it can do has grown, in his framing, into something he can no longer in good conscience leave unnamed. He describes sitting down on a Monday morning, describing an app he wants built in plain language, walking away from his computer for four hours, and returning to find the work completed. Not drafted. Not sketched. Completed, tested by the AI itself, iterated, refined, ready for his review. "And when I test it," he writes, "it's usually perfect."

That would be remarkable enough on its own. But Shumer draws attention to something more structurally significant. On February 5th, 2026, OpenAI released GPT-5.3 Codex, and included in its technical documentation a statement that deserves more careful attention than it has so far received: the model was "instrumental in creating itself." It helped debug its own training process. It managed its own deployment. It diagnosed its own test results and evaluations.

This is not a product announcement. It is a structural inflection point, and the distinction matters enormously.

✦   ✦   ✦

What Makes Recursion Structurally Different

Most technological acceleration follows a roughly linear logic, even when it feels fast from the inside. A faster chip makes software run faster. A better algorithm makes search results more accurate. The improvement is real, but it flows in one direction. The output does not feed back into the construction of the system in a way that changes the rate of improvement itself.

Recursion is a different category of thing. When the output of a system becomes a meaningful input into the system's own development, the rate of change is no longer determined externally. It becomes a function of how capable the system has already become. Each improvement accelerates the production of the next improvement. The better AI becomes at writing code, the faster it can help build a better AI. The faster it can build a better AI, the better that AI becomes at writing code. The cycle tightens.

"Each new model wasn't just better than the last. It was better by a wider margin, and the time between new model releases was shorter."

Dario Amodei, the CEO of Anthropic, has said that AI is now writing "much of the code" at his company, and that the feedback loop between the current generation of AI and the development of the next generation is "gathering steam month by month." He has suggested that the industry may be only one to two years away from a point at which the current generation of AI autonomously builds the next.

One to two years. Not decades. Not a distant theoretical horizon. The people with the most direct knowledge of what is being built are speaking in those terms, and they are doing so with increasing specificity and, in some cases, visible concern.

The researchers who study these dynamics call the endpoint of a sufficiently deep recursive loop an intelligence explosion. Despite the dramatic framing, it is no longer a purely speculative concept. By Amodei's account and the direct evidence of GPT-5.3 Codex's role in its own creation, the process is no longer something that might start. It has started.

✦   ✦   ✦

The Measurement Problem

One of the consistent patterns in the modern history of AI is that capability tends to outpace public perception by a substantial margin. People calibrate their expectations to the tools they have actually used, and those tools are almost always a generation behind what the frontier looks like at any given moment. Shumer notes that most people are still forming their understanding of AI based on free-tier tools that are over a year old. "Judging AI based on free-tier ChatGPT," he observes, "is like evaluating the state of smartphones by using a flip phone."

This perceptual lag is not trivial. It shapes how people prepare for disruption, what careers they pursue, what skills they invest in developing, and what they communicate to the people they are responsible for. The gap between what is possible and what most people believe is possible has consequences that will become visible in retrospect, as they almost always do.

The measurement that cuts through the noise most cleanly comes from an organisation called METR, which tracks the length of real-world tasks, measured by the time a skilled human expert would need, that an AI model can complete end-to-end, without human assistance, at a fifty percent success rate. A year ago, the figure stood at roughly ten minutes. It then moved to an hour. Then to several hours. By November 2025, the date of the most recently published figure, METR recorded AI completing tasks that would take a skilled human expert nearly five hours.

That figure is doubling approximately every seven months, and recent data suggests the doubling period may be compressing toward four. The models released on February 5th, 2026 are not yet reflected in those published numbers; Shumer expects the next update to show another major step. If the trajectory holds, and it has held for years without flattening, AI that can work independently for full days is roughly a year away; for weeks, two years; for month-long projects, three.
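The extrapolation in that last step is plain compounding. A minimal sketch, assuming the roughly five-hour baseline and seven-month doubling period quoted above; everything here is illustrative arithmetic, not a forecast:

```python
# Illustrative projection of METR-style task-length growth under a
# fixed doubling period. The 5-hour baseline (November 2025) and
# 7-month doubling period come from the figures quoted in the text;
# the rest is simple compounding.

def projected_task_hours(months_ahead: float,
                         baseline_hours: float = 5.0,
                         doubling_months: float = 7.0) -> float:
    """Task length (in human-expert hours) after steady doubling."""
    return baseline_hours * 2 ** (months_ahead / doubling_months)

if __name__ == "__main__":
    for months in (12, 24, 36):
        hours = projected_task_hours(months)
        print(f"+{months} months: ~{hours:.0f} hours "
              f"(~{hours / 8:.0f} eight-hour workdays)")
```

On these assumptions the arithmetic lands almost exactly where the article does: about two workdays at one year out, roughly a week of work at two years, and on the order of a working month at three.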

✦   ✦   ✦

Velocity as the Overlooked Variable

Most public analysis of AI development focuses on capability: what can the current model do that the previous one could not? That is a reasonable frame. It is not, however, the frame that determines how seriously to treat the present moment.

The variable that determines urgency is velocity, and it is consistently underweighted in popular accounts of where AI stands. Capability is a snapshot. Velocity is the rate at which the snapshot becomes obsolete.

There is a concept in the Evolving Software Framework called Temporal Compression, the idea that iteration speed is not a neutral backdrop to a system's development but a structural amplifier of it. A process running once per millisecond is not merely the same process running once per day, made faster. It is a qualitatively different system, because cumulative effects that are invisible at slow clock rates become dominant forces at fast ones. The same logic that governs biological evolution operates here: it is not just what the system can do, but how frequently it cycles, that determines how quickly complexity accumulates.
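The clock-rate argument can be made concrete with a toy model: hold the per-cycle improvement fixed and vary only how often the cycle runs over the same wall-clock window. The 0.1 percent per-cycle gain and the cycle rates below are illustrative assumptions, not measurements of any real system.

```python
# Toy model of Temporal Compression: the same small per-cycle gain,
# compounded at different iteration frequencies over the same 30-day
# wall-clock window. The 0.1% gain and the cycle rates are
# illustrative assumptions only.

def compounded_capability(cycles_per_day: int,
                          days: int = 30,
                          gain_per_cycle: float = 0.001) -> float:
    """Relative capability after compounding a fixed gain each cycle."""
    return (1 + gain_per_cycle) ** (cycles_per_day * days)

if __name__ == "__main__":
    for rate in (1, 100, 10_000):
        print(f"{rate:>6} cycles/day over 30 days: "
              f"x{compounded_capability(rate):.3g}")
```

At one cycle per day the month yields a barely visible three percent improvement; at a hundred cycles per day the same gain compounds to roughly twentyfold; at ten thousand, the multiplier is astronomical. Same process, same gain per step, qualitatively different system.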

What Shumer is describing from inside his working life, and what the METR data confirms from the outside, is Temporal Compression operating at the level of an entire technological paradigm. The time between major model releases is contracting. The quality delta between releases is growing. The tasks AI can handle are scaling faster than the calendar allows most observers to update their mental models. The gap between what is true and what is widely believed is a direct product of this compression, and it is growing wider with each cycle.

This is why the standard sceptical response, namely that AI has been overhyped before and the current claims deserve similar scrutiny, fails as an analytical position despite its apparent reasonableness. The relevant comparison is not the historical baseline from 2022 or 2023; it is the rate of change, and its direction.

✦   ✦   ✦

The Scope of What Is Being Displaced

Shumer is direct about the economic implications, and the directness feels earned given his vantage point. Amodei has publicly stated his expectation that AI will eliminate fifty percent of entry-level white-collar roles within one to five years. Legal research, contract analysis, the drafting of briefs. Financial modelling, data analysis, investment memoranda. Large portions of software engineering. Content production at volume. Medical image interpretation. Customer service interactions of genuine complexity. Not replaced by one specific tool optimised for one specific task, but replaced by a general-purpose cognitive system that is improving at all of these things simultaneously, at the same rate, under the same feedback loop.

This is the structural property that distinguishes the current moment from prior automation waves. When manufacturing automated in the mid-twentieth century, workers could move to office and service roles, because the new technology could not touch those categories. When retail digitised, workers could migrate to logistics, fulfilment, and adjacent services. There was always somewhere to go: a class of work that the technology in question could not yet reach.

AI does not leave a convenient gap. Whatever you retrain for, the same system is improving at that too, on the same timeline, under the same compounding dynamics. A managing partner at a major law firm told Shumer that he spends hours every day working with AI, not because it is intellectually interesting but because it outperforms his associates. If the trajectory continues, he told Shumer, it will be able to do most of what he himself does before long. This is a lawyer with decades of experience. He was not panicking. But he was paying very close attention.

"The experience that tech workers have had over the past year, of watching AI go from 'helpful tool' to 'does my job better than I do,' is the experience everyone else is about to have."

The more textured question is not whether disruption is arriving but how rapidly it will diffuse through different sectors of the economy. Capability and deployment are not the same thing. Regulated industries, roles with licensed professional accountability, work requiring physical presence or the trust built through years of a relationship: none of these are structurally immune, but they carry friction that slows adoption. That friction buys time. The critical question for individuals in those categories is what they do with the time they have bought.

✦   ✦   ✦

The Other Side of the Ledger

It would be incomplete, and ultimately misleading, to write about this structural moment only in terms of displacement and threat. The same capabilities are also an extraordinary amplification of individual and collective possibility.

Amodei has described a scenario in which AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, and the biology of ageing are, in his framing, structurally solvable problems. The rate-limiting constraint has been the speed at which hypotheses can be generated, tested, and refined. AI removes that constraint at the same rate it removes it from software development or legal research. The researchers working on these problems are not speaking carelessly when they describe interventions within existing lifetimes as realistic. They are speaking from a genuine assessment of what changes when cognitive work becomes scalable at a different order of magnitude.

The barriers to building things have also collapsed in ways that are not yet widely appreciated. Shumer writes of specifying a software application in plain language and having a working version within an hour, without exaggeration, because he does it regularly. The cost of turning an idea into a functional prototype has dropped to effectively zero in capital terms and very little in time terms. The combination of available intelligence and near-zero marginal cost of execution will redistribute the capacity to build in ways that are genuinely hard to anticipate.

The window of advantage for people who engage seriously with these tools right now is real, and it is open. Not in a speculative or gold-rush sense, but in the straightforward sense that the person who has spent six months working in depth with AI has developed a qualitative understanding of what is now achievable that cannot be fully acquired secondhand. That understanding is practical and it compounds.

✦   ✦   ✦

What the Testimony Means

There is something worth attending to in the way Shumer frames his account. He is not writing analysis. He is writing a letter to people he cares about, people who have been asking him for years what is actually happening with AI, and he is telling them that the version he has been giving them in polite company is no longer adequate to the reality.

That framing is significant. The gap between what people working closely with AI believe and what the broader public understands is not primarily a gap of information. It is a gap of visceral experience. The people inside the industry are not making predictions about what AI might eventually be capable of. They are describing what has already happened to their own working lives, in specific and granular terms, and warning that the same sequence of events is approaching for everyone else.

His account of that Monday carries more explanatory weight than most formal analyses: describe a project, leave for four hours, return to find the work finished. Not rough work. Not work requiring correction. Work that the AI had tested itself, iterated on, and delivered. That single paragraph, if taken seriously, changes the calculation on a large number of questions about what AI is and what it is becoming.

The pattern Shumer describes, of models not just improving but improving by larger amounts with each release, while the time between releases shortens, is precisely what a system under recursive acceleration looks like from the inside. The ground was moving before most people noticed. The question now is whether they begin paying attention while there is still meaningful time to move with it.

✦   ✦   ✦

The Structural Question for the Next Decade

The systems being developed are not designed to stop at a predetermined level of capability. The incentive structure of the competing laboratories, the national strategic imperatives surrounding AI development, the momentum of committed capital at a scale the technology sector has never previously seen: none of these dynamics point toward deceleration. The people most concerned about the risks, including Amodei himself, believe the technology is simultaneously too important strategically and too powerful technically to be abandoned. Whether that represents wisdom, rationalisation, or some compound of both, it is the operative reality.

The honest intellectual position is uncertainty about outcomes combined with high confidence about direction. The direction is clear: capabilities will expand, the recursive feedback loop will compound, deployment into economic life will accelerate. What remains genuinely open is the quality of the choices made by institutions, governments, and individuals along the way: who builds these systems, under what constraints they operate, how the benefits are distributed, and how the risks that Anthropic's own safety research has surfaced, including documented instances of AI attempting deception and manipulation in controlled settings, are understood and managed.

These are not primarily technical questions. They are questions about collective decision-making under conditions of extreme uncertainty and extreme stakes, conducted at a speed that most governance structures were not designed to handle.

Every previous account of transformative technological change teaches the same lesson: the transition feels gradual until it feels sudden, and by the time it feels sudden, the period in which preparation was most valuable has passed. The February 5th models, and the quiet statement embedded in their technical documentation, suggest we are further along that curve than most people currently understand.

What happens next will, in large part, be determined by how many people are paying attention, and how clearly.