The “Shared Library” Trap: Why AI Is the Future of Distributed Knowledge

We’ve all been there.    

A few years ago, we created a “Core” or “Shared” library to prevent code duplication across teams. It was the classic DRY (Don’t Repeat Yourself) dream: reusable code, consistency, and faster delivery.

But over time, many shared libraries become a burden rather than a benefit.

Fast forward to today. That library hasn’t had a dedicated owner in 18 months. The documentation is a time capsule. Every time a team wants to update it, they risk breaking a downstream service they didn’t even know existed. Evolutions that require breaking changes become nearly impossible, unless a team decides to fork the library and diverge forever.

Often, there’s no real ownership anymore. And as the saying goes: if everyone owns it, no one owns it.  

In large companies, shared libraries frequently turn into bottlenecks rather than accelerators. As team roadmaps diverge, simple changes become long discussions, and the “shared” code becomes a compromise that serves no one particularly well.

What if we shifted our focus from sharing code to sharing knowledge? How can teams still avoid reinventing the wheel, without creating hard runtime dependencies on each other?

A Different Model: From Shared Libraries to Shared Intelligence

We cling to shared libraries because we don’t want teams reinventing the wheel every time. Historically, writing new code was expensive, and reuse was the safest path.

Generative AI is changing that cost.

Instead of forcing teams to depend on a rigid, often unmaintained library with 50 functions when they only need three, we can use AI to distribute patterns and blueprints. AI can help generate exactly what a team needs, in their own repository, while still following agreed architectural principles, coding standards, and best practices.

The abstraction shifts from “use this library” to “apply this way of thinking”.

Share Knowledge and Patterns, Not Libraries

Rather than maintaining shared runtime libraries, teams can focus on documenting what actually matters: architectural patterns, trade-offs, and decision history; coding standards and conventions; and a small number of high-quality reference implementations for common problems, along with the reasoning behind them.

With this approach, AI becomes the interface to that knowledge. It can explain patterns to new teams, generate code that follows established conventions, and adapt solutions to different tech stacks or constraints.

The result is that teams get “best practice” code directly in their own repositories. They fully own it, can evolve it independently, and don’t risk breaking other teams when their needs change.

Making Internal Knowledge Available to AI

This raises the next obvious question:

How do we give AI access to our internal knowledge in a safe, useful, and maintainable way?

This is where Retrieval-Augmented Generation (RAG) comes in.

At a high level, RAG means we don’t ask the AI to guess based on general Internet knowledge. Instead, we allow it to retrieve relevant internal context, such as Architecture Decision Records (ADRs), coding standards, reference implementations, or postmortems, and then generate answers grounded in that material.

When a developer asks, “How should we implement X in this service?”, the AI first looks for how similar problems have been solved internally, what trade-offs were made, and what standards apply. Only then does it generate code and explanations aligned with the company’s way of working.

The source of truth remains human-owned. AI doesn’t replace documentation; it makes it usable.
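To make the retrieve-then-generate flow concrete, here is a minimal sketch in Python. The document names, their contents, and the word-overlap scoring are all hypothetical stand-ins: a real system would use embeddings and a vector store for retrieval, and would pass the assembled prompt to an actual model rather than printing it.

```python
from collections import Counter

# Hypothetical internal knowledge base: ADRs, standards, postmortems.
DOCS = {
    "adr-012-retries": "Use exponential backoff with jitter for http retries, max 3 attempts.",
    "standard-logging": "All services log JSON to stdout, correlation IDs are mandatory.",
    "postmortem-2023-q4": "Outage caused by unbounded retries overwhelming the payments service.",
}

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of words the query and document share."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the names of the k most relevant internal documents."""
    ranked = sorted(DOCS, key=lambda name: score(query, DOCS[name]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the question in retrieved internal context before generation."""
    context = "\n".join(f"[{name}] {DOCS[name]}" for name in retrieve(query))
    return (
        f"Internal context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )

print(build_prompt("How should we implement retries for http calls?"))
```

The key property is that the human-owned documents stay the source of truth: improving an ADR immediately improves every answer grounded in it, without touching the retrieval or generation code.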

Start Small, Not Perfect

In practice, this doesn’t require a massive platform or a dedicated AI team.

Many organizations can start with a curated set of high-quality documents, clear ownership of architectural knowledge (rather than shared code), and an internal AI assistant connected to that content.

From there, the system improves naturally. As teams add new examples, document new decisions, and refine patterns based on real usage, the shared knowledge base grows stronger. The AI simply becomes better at surfacing and applying it.

The result is shared understanding without shared runtime dependencies.

The Goal: Autonomy Over Centralization

In today’s world, we need to move fast. That often means removing blockers created by a “write once, use everywhere” mindset and moving toward a “generate once, adapt everywhere” philosophy.

AI doesn’t just help us write code. It helps us encode institutional knowledge in a way that scales without the traditional maintenance burden of shared libraries.

The goal isn’t to eliminate all code reuse. It’s to make reuse intelligent, contextual, and maintainable by default.

I’m curious how other teams and companies are approaching this. Are you reducing shared libraries? Replacing them? Or evolving them with AI?
