Parsa's Blog

Dependency in the Age of AI

The modern software project is a house of cards built on a foundation of strangers' code. A typical web application imports hundreds, sometimes thousands, of packages—each one a black box maintained by someone you've never met, governed by incentives you don't understand, and subject to abandonment without notice. I accepted this bargain for years because the alternative seemed worse: writing everything yourself was simply too expensive.

That calculus has changed.

The Economics of Reuse

Dependencies exist because developer time is expensive and scarce. When it takes a senior engineer a week to write a proper date parsing library, and a battle-tested one already exists on npm, the math is trivial. You npm install and move on. The cognitive overhead of understanding someone else's API is almost always cheaper than building from scratch.

This economic logic drove the explosion of package ecosystems. npm now hosts over two million packages. PyPI has half a million. The JavaScript ecosystem, in particular, became notorious for its granularity—packages for padding strings, packages for checking if a number is odd, packages that are literally eleven lines of code. The transaction cost of importing was so low, and the cost of writing was comparatively so high, that even trivial functionality got externalized.
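The string-padding case makes the point concrete. A sketch of such a function, in TypeScript, is only a few lines; note that modern JavaScript runtimes also ship `String.prototype.padStart`, which makes even this much unnecessary today:

```typescript
// A minimal left-pad: prepends `fill` until `input` reaches `width`.
// Roughly the functionality of the famous eleven-line npm package.
function leftPad(input: string, width: number, fill: string = " "): string {
  let out = input;
  while (out.length < width) {
    out = fill + out;
  }
  return out;
}

leftPad("5", 3, "0"); // "005"
```

That this was ever worth externalizing says more about the near-zero transaction cost of `npm install` than about the difficulty of the code.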

The result is software that resembles a Rube Goldberg machine. Your application doesn't run on your code; it runs on a towering dependency graph where a single maintainer mass-deleting their packages can break thousands of builds worldwide. I normalized this fragility because I had no alternative.

The AI Discontinuity

Large language models have introduced a discontinuity in the economics of software development. The relevant metric isn't that AI makes developers "faster" in some incremental sense—it's that AI has fundamentally altered the ratio between thinking about code and producing code.

A competent developer working with a modern AI assistant can now produce, review, and maintain perhaps ten times the volume of code they could before. Tasks that once took days—implementing a parser, writing a test suite, building out CRUD operations—now take hours or minutes. The bottleneck has shifted from "how do I write this?" to "what should I build?"

This changes the dependency calculus entirely. If writing a date parsing function took a week before and takes fifteen minutes now, the case for importing a dependency becomes much weaker. The costs of the dependency—the security surface, the API constraints, the update treadmill, the risk of abandonment—start to outweigh the benefits.
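To make the date-parsing example tangible: a hypothetical in-house parser for the one format an application actually receives (say, ISO `YYYY-MM-DD`) is the kind of thing that now takes minutes, not days. A sketch:

```typescript
// Hypothetical in-house parser for ISO "YYYY-MM-DD" strings only --
// the single format this app needs, rather than a general library.
function parseIsoDate(s: string): Date | null {
  const m = /^(\d{4})-(\d{2})-(\d{2})$/.exec(s);
  if (m === null) return null;
  const year = Number(m[1]);
  const month = Number(m[2]);
  const day = Number(m[3]);
  const date = new Date(Date.UTC(year, month - 1, day));
  // Reject out-of-range values like "2024-02-31", which the Date
  // constructor would otherwise silently roll over into March.
  if (
    date.getUTCFullYear() !== year ||
    date.getUTCMonth() !== month - 1 ||
    date.getUTCDate() !== day
  ) {
    return null;
  }
  return date;
}
```

Twenty lines, no transitive dependencies, and every behavior is one you chose.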

The Hidden Costs I Stopped Counting

The dependency model has always had costs that I systematically undercounted because I had no alternative.

Security surface area. Every dependency is an attack vector. The average npm package has 79 transitive dependencies. Each one can be compromised, typosquatted, or taken over by a malicious actor. Supply chain security tooling exists to manage this risk, but the tooling is a patch on a fundamentally leaky abstraction.

Cognitive overhead. Dependencies aren't free to use. You must learn their APIs, understand their conventions, debug their interactions, and work around their limitations. When a library doesn't quite do what you need, you face a choice between ugly workarounds and painful migrations. This friction compounds across hundreds of packages.

Update treadmill. Dependencies evolve on their own schedules. Security patches demand immediate updates. Major versions introduce breaking changes. The maintenance burden of keeping dependencies current is a constant tax on development time—time that produces no user-visible value.

Implicit constraints. A dependency's design decisions become your design decisions. If your ORM handles connection pooling in a particular way, you're bound by that choice. Dependencies create an invisible architecture that constrains your system in ways you often don't discover until it's too late.

Runtime bloat. Packages optimize for generality, not your specific use case. You import a utility library for one function and ship kilobytes of code you'll never call. In the browser, this bloat directly impacts user experience. On the server, it increases memory footprint and cold start times.

The Case for Vendoring In

If AI has reduced the cost of writing code by an order of magnitude, the strategic response is to vendor in—to own your code rather than rent it.

Vendoring means taking a dependency's source code and incorporating it directly into your project, or, more radically, writing your own implementation of the functionality you need. The advantages compound:

Consistency. Your codebase has one style, one set of conventions, one error handling philosophy. You don't context-switch between your patterns and a dozen library authors' patterns. New developers can understand the entire system by learning one approach.

Minimalism. You write exactly what you need. No dead code, no unused features, no configuration options for use cases you'll never have. The result is software that's smaller, faster, and easier to understand.

Adaptability. When requirements change, you change your code. No more fighting library abstractions or waiting for upstream maintainers to accept your patch. You control the entire stack.

Debuggability. When something breaks, you can read the code. There's no boundary between "your code" and "library code"—it's all your code. Stack traces make sense. Debugging is straightforward.

Stability. Your code changes when you change it. There are no surprise updates, no deprecation warnings, no scrambling to migrate before a version goes end-of-life. You control the pace of change.

What This Looks Like in Practice

This isn't an argument for writing everything from scratch. That would be absurd—no one should implement their own cryptographic primitives or database engines. The argument is for a dramatic shift in where to draw the line.

The new heuristic might be: if you can understand it, own it.

Cryptography? Use a dependency. It requires specialized expertise and the consequences of mistakes are catastrophic. Database drivers? Use a dependency. They handle protocol-level details that aren't worth reinventing. HTTP servers? Maybe. The core is well-understood, and owning it gives you control over a critical layer.

But the utility libraries, the string helpers, the data transformation pipelines, the validation logic, the API clients for third-party services—all of this is code you can understand, code you can write, code you can own. With AI assistance, the cost of writing it is low enough that the benefits of ownership outweigh the savings of reuse.
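Validation logic is a representative case. A hypothetical signup-form check, written directly rather than through a schema library, might look like this; the field names and rules here are illustrative, not from any real system:

```typescript
// Hypothetical in-house validation for a signup payload: exactly the
// rules this app needs, with no schema DSL and no unused config surface.
interface SignupForm {
  email: string;
  age: number;
}

function validateSignup(input: SignupForm): string[] {
  const errors: string[] = [];
  // Deliberately loose email check: something@something.something.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) {
    errors.push("email: must be a valid address");
  }
  if (!Number.isInteger(input.age) || input.age < 13 || input.age > 120) {
    errors.push("age: must be an integer between 13 and 120");
  }
  return errors; // an empty array means the input is valid
}
```

A schema library would express the same rules more declaratively, but this version has one style, one error-handling philosophy, and nothing you didn't write.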

The Objections

Some will argue that this approach doesn't scale, that organizations can't maintain all that code. But the maintenance burden of owned code is often overstated. Code you understand is code you can fix. Dependencies introduce a different kind of maintenance burden—staying current with external changes—that's harder to predict and control.

There's also the collective efficiency argument: if everyone implements their own date parser, society wastes effort on solved problems. This is true in theory but ignores the reality that most "shared" code isn't truly shared—it's cargo-culted, wrapped in compatibility layers, and customized beyond recognition. The mythical reuse benefits rarely materialize.

The strongest objection is that AI-written code introduces its own risks. If you can't fully review ten times more code, aren't you trading known problems for unknown ones? This is a legitimate concern, but it's a training and process problem, not a fundamental flaw. Teams can adapt their review practices, invest in testing, and develop intuitions for AI-assisted development. The risks are manageable in ways that dependency risks often aren't.

A Leaner Future

The dependency explosion was a rational response to scarcity. Developer time was expensive, so the industry economized on it by sharing code. AI has broken the scarcity constraint. The response should be equally dramatic.

Imagine software that ships with a fraction of its current dependencies. Software where you can grep the codebase and find every relevant line. Software that doesn't break because a maintainer in another timezone decided to make a breaking change. Software that's lean, consistent, and fully understood by the team that builds it.

This isn't a return to the past. I'm not suggesting developers write everything in assembly or reject all external code. The argument is subtler: the threshold for what's worth depending on has risen dramatically. Code that once justified a dependency—because writing it was too expensive—no longer does.

The AI age demands reconsidering assumptions baked in during an era of scarcity. Dependencies were a hack around the high cost of development. That cost has collapsed. It's time to own your code again.