The Accidental Infrastructure
How the Most Tedious Software Ever Written Became the Substrate of Artificial Minds
Among the more delicious ironies in the recent history of technology is this: The software category that engineers have spent two decades complaining about, ridiculing in conference talks, and conspiring to abolish has quietly become indispensable to the artificial agents now being deployed to do their work. I refer, of course, to the issue tracker, that bureaucratic apparatus of tickets, statuses, assignees, and audit logs that constitutes, depending on one’s mood, either the connective tissue of modern software development or its largest single source of human misery.
The story is worth telling not because issue trackers are intrinsically interesting (they are not) but because it illuminates a general principle about the relationship between tools designed for human cognition and tools required by machine cognition. The principle, stated baldly, is that we have spent thirty years building elaborate prosthetic devices to compensate for the limitations of the human mind, its forgetfulness, its parochialism, its tendency to mistake the contents of its inbox for the state of the world. These same prosthetics turn out to compensate, with eerie precision, for the limitations of large language models. What we built to scaffold ourselves now scaffolds our successors.
A Productive Contradiction
Consider two events from the past several months, which appear at first to contradict each other and, on closer inspection, do not.
In March, Kari Saarinen, the chief executive of Linear, published a manifesto titled “Issue Tracking is Dead.” The argument was reasonable enough. Issue trackers, he observed, were artifacts of a particular mode of software development, one in which a product manager translates a customer’s complaint into a ticket, an engineer translates the ticket into code, and a reviewer translates the code back into something resembling the original intent. Each translation is lossy, each consumes time, and the cumulative tax of all this translation is enormous. Agents, having been trained on roughly the entire textual output of the human species, can in principle skip several of these steps. They can ingest the customer complaint directly, consult the codebase, and produce the change. The elaborate ceremony of ticket grooming becomes vestigial.
A month later, OpenAI released Symfony, an open-source framework for orchestrating autonomous coding agents. Its central architectural decision was to use Linear, that very issue tracker whose obituary had just been published, as the control plane for coordinating those agents. Tasks would be read from the board. Each issue would spawn a workspace. Agents would run continuously, polling the tracker, claiming work, returning results for human review. OpenAI reported that internal teams using this arrangement saw a fivefold increase in merged pull requests.
The two pronouncements look incompatible. They are not. They concern different layers of the same artifact. Saarinen was writing the obituary of the user interface, the part where humans laboriously type descriptions into text fields and drag cards across columns. OpenAI was endorsing the substrate, the underlying data structure of records, states, owners, dependencies, and histories. The interface is dying. The substrate is being promoted from a tool of coordination to a piece of infrastructure on which artificial intelligence will run.
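The substrate role is easier to see in miniature. What follows is a toy sketch of the poll-claim-review loop: the tracker class, the claim semantics, and the agent stub are all invented for illustration, and none of this is a real Linear or Symfony API. The point is the shape of the control plane, not the calls.

```python
# A toy control plane: agents poll a tracker, claim issues, and return
# results for human review. All names here are illustrative inventions.

POLL_INTERVAL_S = 30  # how often a long-running agent would re-poll

class InMemoryTracker:
    """A stand-in tracker: issues are dicts with a status and an assignee."""

    def __init__(self, issues):
        self.issues = issues

    def list_ready(self):
        return [i for i in self.issues if i["status"] == "todo"]

    def claim(self, issue, agent_id):
        # Claiming is a status transition plus an owner assignment; in a
        # real tracker this must be atomic, which is what stops two agents
        # from taking the same work.
        if issue["status"] != "todo":
            return False
        issue["status"] = "in_progress"
        issue["assignee"] = agent_id
        return True

    def submit_for_review(self, issue, result):
        issue["status"] = "in_review"  # a human approves from this state
        issue["result"] = result

def run_agent(issue):
    # Stand-in for the actual coding agent: read the issue, consult the
    # codebase, produce a change.
    return f"patch for: {issue['title']}"

def agent_loop(tracker, agent_id, max_cycles=1):
    for _ in range(max_cycles):
        for issue in tracker.list_ready():
            if tracker.claim(issue, agent_id):
                tracker.submit_for_review(issue, run_agent(issue))
        # time.sleep(POLL_INTERVAL_S)  # a real deployment polls forever
```

Notice that the agent never holds state of its own between cycles; everything it needs to resume is in the tracker. That is the promotion from interface to infrastructure in a single design decision.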
The Original Problem and Its Strange Persistence
To see why this should be so, it helps to recall what the issue tracker was designed to solve in the first place. In 1998, Terry Weissman wrote Bugzilla, initially in Tcl, later in Perl, to replace Netscape’s internal defect tracking system. The problem was not technical but cognitive and social. A population of programmers, distributed across continents and time zones, was producing software faster than any individual mind could remember what was broken in it. Bugs reported in hallway conversations evaporated. Promises made in email vanished into archives. The collective enterprise required a memory external to any one nervous system.
What Weissman built was, in effect, a cognitive prosthesis, a device that maintained, on behalf of the group, the things the group could not maintain in its own heads. Each bug acquired a durable existence outside any particular human’s awareness. It had a state (new, assigned, resolved, verified, closed, and the magnificently candid “won’t fix”). It had an owner. It had a history of who changed what, when, and from what to what. It had relationships to other bugs, this one blocks that one, this one duplicates that one. None of this was designed for artificial intelligence. The very phrase, in 1998, would have suggested chess programs and expert systems, not the descendants of statistical language models.
And yet examine the list of properties an autonomous coding agent requires in order to do useful work over a span of time longer than its context window. It needs durable state, because its working memory is volatile and frequently truncated. It needs explicit ownership, because in any nontrivial system multiple agents must coordinate without trampling one another. It needs a state machine, because work has phases and not all transitions are legal. It needs an audit trail, because when something goes wrong, and something will go wrong, someone must be able to ask what the agent saw, what it decided, and why. It needs a permission system, because granting unbounded authority to a probabilistic system is a category of mistake that one ought to make at most once. The issue tracker, designed two and a half decades ago to compensate for the cognitive limitations of biological programmers, supplies exactly these properties. The convergence is not entirely accidental, since we did, after all, build agents in part by training them on the textual exhaust of human coordination, but the precision of the fit is still arresting.
A Brief Detour Through Bad Interfaces
There is a sub-lesson here about user interface design that deserves its own paragraph, because it inverts an assumption many people in the field hold.
When Atlassian’s Jira appeared in 2002 and brought the Bugzilla model into the enterprise, it added something Bugzilla had deliberately avoided: configurability. A Jira deployment could be molded to fit any organization’s idiosyncratic workflow, with custom fields, custom states, custom approval chains, custom everything. This was a commercial triumph and a humanitarian catastrophe. Engineers came to despise Jira not because the underlying primitives were wrong but because the configuration surface was so vast that every deployment became a unique labyrinth, faithfully reproducing every dysfunction of the surrounding organization.
Linear, arriving later, took the opposite approach: a strong opinion, a small configuration surface, a fast interface, and an aesthetic that engineers found pleasant rather than punitive. The result was that engineers used it voluntarily, consistently, and with reasonable hygiene. They filled in the fields. They updated the statuses. They wrote real descriptions. The data inside Linear was, on average, cleaner than the data inside Jira, not because Linear’s schema was cleverer but because its users were not in active rebellion against it.
This turns out to matter enormously for agents, who do not care whether the interface is elegant but care intensely whether the underlying state is reliable. A pleasant tool gets used. A used tool accumulates honest data. Honest data is what an agent can act on. There is a moral in this for anyone designing software in 2026: the user experience you build for humans is, increasingly, the data quality you offer to machines. Aesthetics has become a strategic input to artificial intelligence, by the somewhat indirect route of inducing humans to behave themselves.
The Substrate Hypothesis
Once you see the pattern in issue trackers, you start to see it everywhere, and a general theory comes into view. Call it the substrate hypothesis: software systems that maintain durable records, defined verbs, explicit ownership, and queryable history are agent-usable almost by accident, while systems that lack these properties require expensive scaffolding to make them so.
The customer relationship management system is an issue tracker for revenue. Salesforce and HubSpot maintain accounts, contacts, opportunities, owners, stages, next steps, and histories. A sales agent, the artificial kind, can research an account, draft a follow-up, update a field, flag a risk, and request human approval before sending anything to an actual customer. The state machine already exists. The agent need only inhabit it.
The service desk, whether Zendesk, ServiceNow, or Intercom, is an issue tracker for customer problems. Tickets, assignees, escalation paths, service level agreements, customer histories. A support agent built from scratch would have to invent most of this. An agent built on top of an existing service desk inherits it.
Enterprise resource planning systems, including SAP, Oracle, Workday, and NetSuite, are the issue trackers of money, inventory, and headcount. They are nobody’s favorite software. They have records, permissions, approval chains, and audit trails, which is to say they have everything an agent needs to move resources around an organization without committing fraud or starting a fire.
The pattern continues. Calendars are issue trackers for time. Version control systems are issue trackers for code changes. Procurement systems are issue trackers for spending. Payroll is an issue tracker for compensation. Whenever a piece of software was built to coordinate human beings asynchronously around something consequential, it tends to have grown the same vertebrae of records, states, owners, verbs, history, and permissions, and these vertebrae are exactly what an agent needs to grasp.
The contrast cases are equally instructive. Email has state and history but its verbs are anemic. Reply, forward, and archive offer no native vocabulary for assignment, resolution, or approval. Slack and Teams contain enormous quantities of context but encode it as transcript rather than structure. The state of the work is implied by the conversation rather than represented in fields. Documents like Google Docs, Notion, and Confluence sit somewhere in the middle, with permissions and version histories but fuzzy ownership and impoverished verbs. Spreadsheets are the most variable case of all, capable of remarkable structure when designed by a disciplined human, capable of complete opacity when designed by anyone else.
This yields a diagnostic one can apply to any tool in one’s organization. Does it have records or only content? A state machine or only labels? Explicit ownership or implicit convention? Structural verbs or merely conversational ones? Queryable history or just visible history? The tools that score well are candidates to become agent infrastructure. The tools that score poorly will, at best, serve as context that more structured tools query, and at worst will be displaced by something built around them.
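The diagnostic can be written down as a checklist. The five questions are the ones just posed; the scoring thresholds are arbitrary cutoffs for illustration, not an industry standard.

```python
# The substrate diagnostic as a checklist. A tool is described by the set
# of properties it actually has; the thresholds below are illustrative.

SUBSTRATE_QUESTIONS = [
    "records, not only content",
    "a state machine, not only labels",
    "explicit ownership, not implicit convention",
    "structural verbs, not merely conversational ones",
    "queryable history, not just visible history",
]

def substrate_score(tool_properties: set) -> int:
    return sum(q in tool_properties for q in SUBSTRATE_QUESTIONS)

def diagnosis(tool_properties: set) -> str:
    score = substrate_score(tool_properties)
    if score >= 4:
        return "candidate agent infrastructure"
    if score >= 2:
        return "context for more structured tools"
    return "likely to be displaced or wrapped"
```

By this rubric an issue tracker scores five out of five, email perhaps two, and a Slack workspace one on a generous day, which is roughly the ordering the contrast cases above suggest.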
The Repricing of Boredom
If the substrate hypothesis is right, a great deal of unsexy enterprise software has been quietly accumulating strategic value while the attention of the industry was directed elsewhere. Atlassian, which owns Jira and Confluence, sits atop one of the world’s largest collections of agent-readable work state. In May 2025, it launched a remote MCP server in beta. By February 2026, the offering was generally available, capable of searching, summarizing, creating, and updating across its product line, with permissions and admin controls intact. This is not an integration in the old sense. It is the deliberate exposure of an installed base as machine-consumable infrastructure.
Whether Anthropic will purchase Atlassian, as rumors have lately suggested, is a question I have no inside information about and would not predict in either direction. What is significant is that the rumor is no longer absurd. A few years ago, the proposition that a frontier AI laboratory would acquire the company that makes the issue tracker would have read as a category error, rather like a pharmaceutical company buying a stationery firm. Today the logic is sufficiently obvious that one can argue about the price. The model knows how to reason. The issue tracker knows what the work is. Combining the two is no longer eccentric.
The same revaluation applies, with appropriate modifications, to Salesforce, ServiceNow, Microsoft, Oracle, SAP, and Workday, companies whose products have for years been treated as legacy obligations rather than strategic assets. These companies own systems of record. The systems of record turn out to be the maps on which artificial agents will build their understanding of the enterprise. Maps are difficult to displace, particularly when they are continuously updated by the activity of the people they describe.
This is not an argument that incumbents will win every battle. It is an argument that the substrate they own has become more valuable, not less, in the agentic era, and that the popular thesis of a few years ago, in which an AI-native upstart would render all this infrastructure obsolete, badly underestimated how much of the upstart’s job was already done by the infrastructure.
Practical Consequences
For builders of new software, the implication is a quiet inversion of recent fashion. The question to ask of one’s product is no longer whether it has a chatbot in the corner, which is approximately the 2024 question and was never very deep. The question is whether an agent can safely understand and modify the state of work inside the product. This requires that records be exposed, verbs be defined, ownership be explicit, history be preserved, permissions be honored, and the important actions be reachable through a real interface, an API or an MCP server, rather than through brittle scraping of a user interface designed for eyes and fingers. A product whose internal state is opaque will force the agent to guess. A product whose state is legible will let the agent operate. The difference will, in time, separate the platforms from the wrappers.
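What “verbs defined, permissions honored, history preserved” might look like at the interface layer can be sketched as a small dispatch table. Every name here is hypothetical; the point is that each verb carries its permission requirement and its legal source states, so an agent cannot reach an action the schema does not sanction.

```python
# A hypothetical verb registry for an agent-facing API: each verb declares
# the permission it requires and the states it may be invoked from.

VERBS = {
    # verb -> (required permission, legal source states)
    "assign":  ("write", {"new"}),
    "resolve": ("write", {"assigned"}),
    "comment": ("read",  {"new", "assigned", "resolved"}),
}

def invoke(verb, record, caller_permissions):
    required, legal_states = VERBS[verb]
    if required not in caller_permissions:
        # Permissions are honored before any state is touched.
        raise PermissionError(f"{verb!r} requires the {required!r} permission")
    if record["state"] not in legal_states:
        raise ValueError(f"{verb!r} is not legal from state {record['state']!r}")
    record.setdefault("history", []).append(verb)  # history is preserved
    if verb == "assign":
        record["state"] = "assigned"
    elif verb == "resolve":
        record["state"] = "resolved"
    return record
```

An agent calling this interface never guesses at the state of the work, because the legal moves are enumerated rather than inferred from a user interface designed for eyes and fingers.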
For organizations adopting agents, the implication is more uncomfortable. The hygiene of one’s work data, whether tickets get filled in, statuses mean what they say, ownership reflects reality, and decisions live in fields rather than in Slack threads and oral tradition, has graduated from a matter of mild operational virtue to a determinant of how much value agents can create. Messy operations were a tax humans could partly absorb through meetings, memory, and heroic last-minute saves. Agents have neither memory nor heroism. They require legibility. The boring work of cleaning up workflows and consolidating systems and enforcing fields is no longer just hygiene. It is preparation for a workforce that cannot read between the lines because there are no lines, only fields.
For executives and investors, the implication is strategic. A great deal of the unglamorous infrastructure on which the modern firm runs, the trackers, the records, the workflows, the approvals, and the dependency graphs, is being repriced. It is not merely overhead. It is the surface on which the next layer of automation will operate. Companies that own such substrates and expose them thoughtfully will be in a different position than companies that treat them as cost centers to be wrapped or replaced.
A Small Closing Observation
There is one final irony worth dwelling on. Kari Saarinen wrote that issue tracking is dead, and OpenAI then made an issue tracker the centerpiece of its agent orchestration framework. Both were correct. The ritual of human translation, in which a human being spends half a day converting messy reality into a well-formed ticket, is indeed dying, and good riddance. But the structure those tickets encoded is being inherited, almost intact, by a new population of users who happen to be made of statistics rather than carbon.
Issue trackers have won. They have won in the most undignified possible way: not by being beloved, not by being beautiful, not by being designed for the future, but by having quietly encoded, over a period of decades, the basic grammar of asynchronous coordination. It turns out that the problem of coordinating human beings across time and the problem of coordinating artificial agents across time share more structure than anyone expected. The cognitive prostheses we built for ourselves are now, with minor modifications, the cognitive prostheses for our software. The boring infrastructure was infrastructure all along. We just had to wait for the right tenants to move in.
The next time you encounter a piece of enterprise software that seems too dull to merit attention, do not ask whether it has an AI assistant in the corner. Ask whether it has records, states, owners, verbs, permissions, and history, and ask whether anyone is willing to expose them. If yes, the tool is more important than it looks. If no, someone is about to build a more important tool around it. Either way, the boredom is a clue, not a verdict.