LLM Framework Selection: Why React Wins by Default

When LLMs Pick the Framework: The Coming Ossification of Software Ecosystems

As AI-generated code overtakes human-written code on GitHub, a quieter shift is happening underneath the obvious one. LLM framework selection — the model quietly picking React over Vue, Postgres over SQLite, Next.js over whatever — is increasingly where architectural decisions actually get made. It’s not just that code is being written differently. It’s that the choices behind the code are being made by systems with very different selection pressures than the humans they’re replacing. Architectural decisions, framework selections, library picks, language idioms: all of these used to be the output of a messy, taste-driven human process. Increasingly, they’re the output of whatever falls out of a model’s weights.

That change has implications for how software ecosystems evolve, and specifically for whether a Vue-shaped upstart has a chance against an entrenched React in 2026 and beyond. I think the honest answer is: probably not, and the reason is structural.

The old selection process had productive noise

When a developer in 2018 picked between React and Vue, the decision was shaped by blog posts, conference talks, tutorials, coworker recommendations, and the hard-to-quantify feeling of whether a framework was pleasant to use. That process had noise in it, but the noise was exploratory. People tried Vue because they were curious, or contrarian, or because Evan You wrote compelling docs and the framework felt good in ways that were hard to articulate but easy to experience.

Vue got traction on qualitative merits. That’s a selection process where an upstart can win by being taste-forward and community-driven, even without corporate backing or overwhelming technical differentiation.

How LLM framework selection changes the game

When an LLM makes that choice, the pressure is fundamentally different. The model suggests whatever was best-represented in its training data, weighted by whatever signals got reinforced during RLHF. React has something like 5-10x the GitHub presence of Vue, vastly more Stack Overflow answers, more tutorials, more everything. Every model trained on public code will have a strong React prior.

And here’s the flywheel: developers increasingly ask the model “what should I use?” The model’s prior becomes the choice. The choice becomes more React code on GitHub. The next model’s prior is even stronger. This loop doesn’t exist in the human selection process. Humans get bored of things, crave novelty, defect to alternatives out of sheer curiosity. Models don’t.

Two scenarios, and the worse one is winning

There’s a version of this where the developer asks explicitly: “React or Vue for this dashboard?” The model hedges, lists tradeoffs, leans React-ward but surfaces the alternatives. Mild homogenization pressure, human still in the loop.

Then there’s the version where the developer doesn’t ask. They say “build me a dashboard” and the model picks React silently because React is what falls out of the weights. The choice never surfaces. This is where the flywheel really bites, and it’s increasingly the default as agentic coding tools take over more scaffolding work.

Upstarts don’t die in this world because someone decided against them. They die because they were never considered.

What survives LLM-driven framework selection

Not everything is doomed. A few paths remain viable.

Legible technical differentiation. If your framework wins on something a model can evaluate — bundle size, benchmark performance, type safety, measurable metrics — you have a shot. Svelte benefits from this because “compiles away the framework” is a technical claim that shows up in numbers. “Feels nicer than React” does not survive the transition.

Corporate backing. Next.js and Remix — recent frameworks that broke through with distribution behind them (Vercel and Shopify, respectively). A major player can guarantee enough documentation, example code, and training-data presence to overcome the cold-start problem. The organic grassroots path is narrowing; the “someone with distribution decided this should exist” path is widening.

Deliberate counter-weighting by model providers. Anthropic, OpenAI, and Google could, in principle, over-represent newer frameworks during training to prevent ossification. Whether they will is a business question. The incentive is weak unless users push for it.

Writing for the models. This is already emerging as a discipline. Framework authors are producing “LLM-friendly docs,” publishing structured guides optimized for ingestion, getting their work into popular eval sets. It’s a strange new form of developer relations where part of your target audience is a training pipeline. Ten years from now, optimizing for model adoption may matter as much as optimizing for developer adoption did in the 2010s.

What actually happens, probably

The top tier ossifies harder than it used to. React becomes more difficult to dislodge than jQuery ever was. jQuery got displaced by a generation of developers who wanted something new, and that generational hunger doesn’t translate into model weights.

The second tier stays surprisingly dynamic, because the corporate-backing path still works. Frameworks with distribution or legible technical wins can still break in.

What dies is the middle: the Vue-shaped path of independent projects winning on taste and community. That path required humans making qualitative judgments at scale, and that’s precisely what’s being automated away. A Vue launching today probably doesn’t reach Vue’s actual market share. A Svelte might. A Vercel-backed framework definitely does.

The broader pattern affecting all design choices

Framework choice is just the visible case. The same dynamic applies to architectural patterns, ORM choices, testing approaches, database selection, language idioms. Every layer where “there’s a dominant choice and some interesting alternatives” gets compressed toward “there’s a dominant choice.” LLM framework selection is the most legible symptom of a much broader ossification.

The alternatives being squeezed out aren’t worse by any measurable criterion. They’re just less represented. And the filters people sometimes invoke to argue against code-model collapse — “the compiler will catch it, the tests will catch it, production will catch it” — do nothing to prevent this kind of loss. These choices aren’t wrong. They’re just gone.

The deeper worry is that software culture has historically relied on a long tail of weird, opinionated, taste-driven alternatives as the reservoir from which future mainstream choices emerge. Compress the tail and you don’t just lose present-day diversity. You lose the most valuable input that drives the next generation of mainstream tools. The selection pressure has shifted from “does this feel good to developers” to “can this get into training data with positive associations,” and those are different games with different winners.

We’ll find out over the next five years which ecosystems adapt to the new game and which ones quietly stop producing challengers at all.

April 17, 2026