Published on 2026-03-28

When Code Cost Approaches Zero

The Spinning Jenny didn't eliminate textile workers—it expanded the entire industry tenfold. AI won't make programmers disappear either, but it will redefine where their true value lies.

1. Lessons from the Spinning Jenny

In 1764, James Hargreaves invented the Spinning Jenny. The initial version could spin 8 threads simultaneously; later improvements reached 80. The textile workers panicked—the skills they had spent over a decade perfecting had become worthless overnight.

But history reveals a counterintuitive result:

| Year  | Cotton Imports (UK) | Textile Employment      | Production Efficiency  |
|-------|---------------------|-------------------------|------------------------|
| 1764  | 3.87 million lbs    | Baseline                | 1 person, 1 thread     |
| 1780s | Rapid growth        | Significantly increased | 1 person, 8-80 threads |
| 1800s | Multiplied many times over | Continued expansion | Factory production |

Data source: Wikipedia - Spinning Jenny

The Spinning Jenny didn't eliminate textile workers. Production efficiency increased dozens of times, demand exploded, and the entire industry expanded. Workers who learned to operate the new machines actually earned more than before. Those who refused to learn the new technology were the ones truly eliminated.

The impact of AI on programmers is essentially the same as the Spinning Jenny's impact on textile workers. But there's a key difference: the Spinning Jenny changed production efficiency, while AI changes the definition of value itself.

2. The Truth Revealed by AI: Cognition Is the Moat

Comparison 1: Why do some programmers fall further behind after using AI?

You may have noticed a phenomenon: everyone uses Copilot and ChatGPT, yet some programmers' productivity increases 2-3x while others only see 20-30% improvement.

What's the difference?

Those who achieve 2-3x improvements aren't using AI to "write code"—they're using AI to "validate ideas." They have AI quickly generate three different architectural approaches, compare pros and cons, then make decisions. They use AI to identify potential problems in their designs before launch, not after.

Those who only see 20-30% improvement are still using AI as a "code completion tool." Their mindset is still "what code should I write," not "what problem am I solving."

AI is an amplifier—it amplifies your cognitive ability. If your cognition is stuck at the "writing code" level, AI will just make you write code faster. If your cognition upgrades to "understanding problems and evaluating solutions," AI will multiply your influence exponentially.

Comparison 2: Why are you hesitant to use code written by a product manager?

In the past, product managers needed programmers to turn their ideas into products. Now?

I've seen a product manager use Cursor to build a prototype of an internal tool by themselves. The code ran, the features were implemented.

But would you dare deploy this code to a production environment?

Probably not. Because:

  • No error handling (crashes on abnormal user input)
  • No performance optimization (slows down significantly with large datasets)
  • No security considerations (SQL injection, XSS vulnerabilities)
  • Hard to maintain (unreadable three months later)
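To make the SQL-injection bullet concrete, here is a minimal sketch in Python's built-in sqlite3 module. The table and lookup function are hypothetical, not taken from the prototype described above; the point is the one-line difference between "runs" and "good":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Prototype-style code: user input interpolated straight into SQL.
    # A crafted name like "' OR '1'='1" makes the WHERE clause always
    # true and returns every row in the table.
    return conn.execute(
        f"SELECT name, role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input as data, so the
    # injection string matches nothing instead of matching everything.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks all rows
print(find_user_safe(payload))    # []
```

Both functions "work" on normal input; only the experienced eye asks what happens on abnormal input.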

AI lowers the barrier to "writing code," but it doesn't lower the barrier to "writing good code."

A product manager can use AI to write code that "runs." A programmer can use AI to write code that is "good." What's the difference?

It lies in the ability to judge what is "good":

  • What kind of architecture can support future scalability?
  • Which edge cases need to be handled?
  • Will this approach have performance problems?
  • Will this code be readable three months from now?

These judgments come from experience, from the pitfalls you've witnessed, from your understanding of "why not do it that way."

Comparison 3: How top developers use AI differently

How average programmers use AI:

  • "Write a user login function for me"
  • "This code has a bug, help me fix it"
  • "Optimize the performance of this query"

How top developers use AI:

  • "Generate three different authentication architecture approaches and compare their performance under high-concurrency scenarios"
  • "Analyze potential bottlenecks this design might encounter in the future"
  • "Validate my hypothesis: this caching strategy fails under edge cases"

See the difference?

Average programmers use AI to accelerate execution—making "writing code" faster.
Top developers use AI to explore possibilities—making "decision-making" better.

In their hands, AI is a "cognitive amplifier"—amplifying understanding of problems, not typing speed.

This reveals a harsh truth: AI amplifies your cognitive ability, not your execution ability.

If your cognition is "how to implement this feature," AI will help you implement it faster. But if your cognition is "why design it this way" and "why not design it that way," AI will help you explore more possibilities and make better decisions.

3. The Difficulty of Accumulation: Why "How Others Failed" Is Hardest to Learn

Transitioning from executor to decision-maker requires accumulating decision-making ability. But accumulating decision-making ability has three levels:

Level 1: Accumulating "how I did it right"
Easiest, because you experienced it firsthand. Example: I refactored a component using React Hooks, and performance improved by 30%.

Level 2: Accumulating "how others did it right"
Moderate difficulty, requires active learning. Example: Studying Vue's source code and learning Evan You's reactive design.

Level 3: Accumulating "how others failed"
Hardest, but most valuable. Example: Why was Angular 1.x's dirty checking mechanism abandoned? Why is Redux overengineering for small projects?

Why is Level 3 the hardest?

Because successful cases are widely publicized, while failed cases are often hidden. More importantly:

In mature companies, your leaders have already helped you avoid most pitfalls. You see "how things should be done," but not "why not do it that way." In startups, you encounter pitfalls, but often repeat ones others have already encountered—no new cognition is accumulated.

This is why many programmers with 5 years of experience still have decision-making ability at a junior level—they've accumulated only "how to do it right," not "why not do it that way."

4. Two Actionable Paths

The good news: The AI era provides unprecedented opportunities for accumulation.

Path 1: Track frontier exploration in emerging fields (fastest)

Why is this the fastest path?

In emerging fields like AI applications, Web3, and edge computing, nobody knows the "correct answer" yet. Every company is experimenting in the open.

OpenAI releases new APIs and best practices every month—you can see their exploration direction. Vercel's AI SDK evolution from v1 to v3 is completely open. Anthropic's Claude prompt engineering guide documents numerous "don't do this" cases.

How to do it specifically?

  1. Choose an emerging field (AI Agents, RAG applications, real-time collaboration, etc.)
  2. Follow 3-5 leading companies' tech blogs, GitHub, and Discord
  3. Record their directional changes—not just "what they did," but more importantly "why they changed direction"
  4. Validate these "pitfalls" with small projects

My case study:

I tracked Anthropic's engineering practice evolution and discovered a clear pattern:

February 2025 → August 2025 (6 months):

  • Claude Code task complexity: up from 3.2 to 3.8 (on a 1-5 scale)
  • Consecutive autonomous operations: up from 9.8 to 21.2 (+116%)
  • Frequency of human intervention: down 33%

What does this indicate?

Early on, engineers treated AI as a "code completion tool," checking its output every few lines. Six months later, AI could autonomously complete more complex tasks, and the engineer's role had shifted from "writing code" to "managing AI."

Deeper changes:

  • December 2024: "Building effective agents" - emphasizing explicit workflows
  • March 2025: "The 'think' tool" - discovering the need to "stop and think"
  • September 2025: "Effective context engineering" - realizing context management is key
  • December 2025: "How AI is transforming work" - engineers becoming "AI managers"

What does this evolution reveal? At first, the core of AI was thought to be "tool calling"; then it turned out to be "reasoning ability"; finally, the real bottleneck proved to be "how to manage AI."

This cognition changed how I design my own workflow—I focus not on "having AI write more code" but on "how to better manage and validate AI's output."

How do you know if you're doing it right?

After 3 months, ask yourself: Can I predict the next directional shift in this field? If yes, you've understood the underlying logic of this field.

Time cost: 2-3 hours per week, 3 months to establish a cognitive framework.

Path 2: Rapid iteration through independent projects (moderate speed)

Why are independent projects effective?

In company projects, your decision-making space is constrained: tech stack is set, architecture is set, processes are set. But in independent projects, all decisions are yours—which means you'll hit all the pitfalls and accumulate all the cognition.

The key: Use AI to reduce trial-and-error cost by 10x.

Previously, trying an architectural approach required writing thousands of lines of code over 1-2 weeks. Now, you can have AI generate a prototype in a few hours and validate quickly.

How to do it specifically?

  1. Choose a real problem (build something you'd actually use yourself, not a Todo App)
  2. Iterate rapidly, but write a "decision log" after each iteration:
    • What decision did you make?
    • Why did you make it that way?
    • What was the result?
    • What did you learn?
  3. Compare with competitors' choices and think: Why did they choose that way? How is my choice different from theirs?
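A decision log doesn't need tooling, but keeping every entry to the same four questions is easier if the shape is fixed. A minimal sketch in Python (the field names and the sample entry are my own, not prescribed by any method):

```python
from dataclasses import dataclass

@dataclass
class DecisionLogEntry:
    # Illustrative field names; the point is that every iteration
    # answers the same four questions in writing.
    decision: str   # what you decided
    rationale: str  # why you decided that way
    outcome: str    # what actually happened
    lesson: str     # what you learned

    def render(self) -> str:
        return (
            f"Decision: {self.decision}\n"
            f"Why: {self.rationale}\n"
            f"Result: {self.outcome}\n"
            f"Lesson: {self.lesson}"
        )

entry = DecisionLogEntry(
    decision="Draw the game directly on Canvas",
    rationale="The game looked simple enough",
    outcome="Code became messy as the logic grew",
    lesson="Simple problems still need clear architecture",
)
print(entry.render())
```

A plain text file works just as well; what matters is that the four questions get answered every time, not the format.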

Case 1: A small game (Tetris)

  • Version 1: Used Canvas to draw directly; code quickly became messy
  • Decision log: Why write directly? Thought it was simple. Why did it get messy? Underestimated the complexity of game logic
  • Version 2: Switched to a game engine (Phaser.js), but loading was too slow
  • Decision log: Why use an engine? Thought "professional tools" were better. Why was it slow? The engine was too heavy for a small game that doesn't need all those features
  • Version 3: Returned to native Canvas but refactored with a state machine
  • Decision log: What did I learn? Simple problems don't need complex tools, but they do need clear architecture
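To illustrate what "refactored with a state machine" can mean for a falling-block game, here is a sketch in Python. The states and transition table are my own guesses at a Tetris-like loop, not the article's actual version 3:

```python
from enum import Enum, auto

class State(Enum):
    SPAWN = auto()      # a new piece appears at the top
    FALLING = auto()    # the piece descends and responds to input
    LOCKING = auto()    # the piece settles into the board
    CLEARING = auto()   # completed lines are removed
    GAME_OVER = auto()

# Explicit table of legal transitions. Any move not listed here is a
# bug, which is exactly what the explicit table buys you over ad-hoc
# flags scattered through drawing code.
TRANSITIONS = {
    State.SPAWN: {State.FALLING, State.GAME_OVER},
    State.FALLING: {State.LOCKING},
    State.LOCKING: {State.CLEARING, State.SPAWN},
    State.CLEARING: {State.SPAWN},
    State.GAME_OVER: set(),
}

class Game:
    def __init__(self):
        self.state = State.SPAWN

    def transition(self, new_state: State) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

game = Game()
game.transition(State.FALLING)
game.transition(State.LOCKING)
game.transition(State.CLEARING)
game.transition(State.SPAWN)   # back to the top of the loop
```

The rendering code stays native Canvas (or anything else); the state machine only decides what the game is allowed to do next, which is where the version 1 mess came from.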

Case 2: Early education software (children's literacy App)

  • Version 1: React Native cross-platform; animations were choppy and kids didn't like it
  • Decision log: Why choose cross-platform? Wanted to save development time. Why did it fail? Underestimated how important smooth experience is for children
  • Version 2: Switched to Flutter; performance improved and kids were willing to use it
  • Decision log: Why Flutter? Balances performance and cross-platform. What did I learn? Technology selection should prioritize user experience over development efficiency

After 3 months, I had clear cognition in both fields: games need "simple architecture + clear state," and children's apps need "smooth experience > everything."

How do you know if you're doing it right?

After 6 months, ask yourself: Can I explain the key decision points in this field to someone in 1 hour? If yes, you've made tacit knowledge explicit.

Time cost: 5-10 hours per week, 6 months to complete 2-3 projects.

Key: Making Tacit Knowledge Explicit

The common thread in both paths: making tacit knowledge explicit.

Don't just "know"—be able to "articulate":

  • Why is this approach better?
  • Why won't that approach work?
  • Under what conditions would this judgment change?

This process of explicitness is the process of turning experience into cognitive assets. Code can be generated by AI, but cognition can only be accumulated by you.

5. How Long Is the Transformation Window?

Returning to the original question: Will programmers be replaced by AI?

The answer depends on what you consider your core ability.

If your core ability is "writing code," you're already on the path to being replaced. Product managers can also write code with AI—just with slightly lower quality. But this gap is closing rapidly.

If your core ability is "understanding problems, evaluating solutions, and accumulating cognition," you won't be replaced. Because AI is an amplifier—it will amplify these abilities of yours.

The transformation window is 2-3 years.

  • Now: AI is still not reliable enough; people are needed to check and optimize
  • In 3 years: AI reliability will improve significantly; the value of "writing code" will shrink further
  • In 5 years: Only "cognitive assets" will retain irreplaceable value

Three things you can do immediately:

  1. Stop optimizing "code writing" speed, start optimizing "problem understanding" depth
    Next time you encounter a requirement, spend 30 minutes thinking about "why do this" and "what approaches are available" before writing any code.

  2. Choose an emerging field and follow 3-5 leading companies
    Subscribe to their tech blogs and record their directional changes. After 3 months, you'll see the patterns.

  3. Start writing decision logs
    After each technical decision, spend 10 minutes recording: what decision was made, why, what was the result, and what was learned.

The Spinning Jenny didn't eliminate textile workers, but it eliminated those who refused to learn. AI will be no different.

The difference is that the core of this transformation is not learning new tools but redefining your value: from "person who writes code" to "person who accumulates cognition."

There's still time. Start now.


This article is based on one year of hands-on experience with AI applications. If you want to learn more about using AI to strengthen decision-making, feel free to follow my blog.

