Will AI Replace Your Job? A Four-Dimension Self-Assessment Framework (Not Another List)
Last updated: April 2026
Most articles about AI and job security give you a list. "These 20 jobs are safe." "These 30 roles won't be automated." What they don't tell you is that the list is almost useless — because AI doesn't replace job titles, it replaces specific types of value delivery. A "safe" job title can contain deeply vulnerable roles, and an "endangered" occupation can include professionals who've successfully rebuilt their position around AI augmentation.
This article gives you something different: a diagnostic framework you can apply to your specific role, with specific decisions you make, with specific contexts you operate in. By the time you're done reading, you'll have a vulnerability score — and more importantly, a clear set of actions tailored to where you actually sit.
The Fundamental Mistake Everyone Makes
Before we get into the framework, let's name the mistake that renders most career advice on this topic useless.
People ask: "Is my industry safe?"
Wrong question. AI doesn't replace industries. It doesn't look at "healthcare" or "finance" or "marketing" and decide to deploy automation there. It replaces specific value types — and those value types can exist inside any industry.
Consider: a radiologist and a stock trader seem to work in completely different worlds. But they share something critical — their core value delivery is pattern recognition on well-structured inputs (medical images vs. market data), and their outputs can be verified against ground truth (diagnosis correct/not correct; trade profit/loss). Both are more vulnerable than they think.
Meanwhile, a customer success manager at a SaaS company and a procurement officer at a manufacturing firm seem like generic "corporate" roles. But the CS manager who builds genuine client relationships, navigates organizational politics, and makes judgment calls about when to waive terms to save a relationship — that's much harder to automate than the procurement officer who follows a vendor selection matrix.
The question isn't "what industry am I in?" The question is: "What type of value do I actually produce, and can that value be codified, reversed, verified, and contextualized by an AI system?"
That's what the four dimensions measure.
The Four-Dimension Diagnostic Matrix
Each dimension captures a fundamental property of your work. Score yourself 1-5 on each. Lower scores mean more vulnerable; higher scores mean more resilient.
Dimension 1: Decision Codifiability
The question: Can your core decisions be expressed as rules, constraints, or decision trees?
This isn't about whether you've written down your decision process. It's about whether the process is, in principle, codifiable — whether an intelligent system could learn to make the same decisions you make, given enough examples.
A junior accountant making journal entries operates with highly codifiable decisions. The rules are GAAP. The edge cases are documented in the accounting standards. Given enough historical entries, an AI could learn to code these correctly.
A senior tax strategist operating in gray areas — navigating ambiguous regulations, client relationships, and strategic business context to recommend a tax structure that survives audit — is making decisions that are NOT codifiable. The rules exist, but the judgment about how to apply them in context, when to push, when to retreat, is tacit knowledge accumulated over decades.
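To make "codifiable" concrete, here's a deliberately toy sketch (the accounting logic is illustrative, not real GAAP): when your core decisions can be written in this shape — explicit inputs, explicit branches, explicit outputs — they sit at the vulnerable end of this dimension.

```python
def classify_journal_entry(amount: float, account_type: str) -> str:
    """Toy rules-as-code: every branch is explicit and auditable.

    Illustrative double-entry convention only, not real GAAP logic.
    """
    if account_type == "asset":
        return "debit" if amount > 0 else "credit"   # asset increase -> debit
    if account_type == "liability":
        return "credit" if amount > 0 else "debit"   # liability increase -> credit
    raise ValueError(f"unhandled account type: {account_type}")

print(classify_journal_entry(1200.00, "asset"))  # -> debit
```

The tax strategist's judgment doesn't fit this shape — there's no finite set of branches that captures "when to push, when to retreat."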
Scoring guide for Dimension 1:
- 1: My decisions follow explicit rules/regulations with minimal judgment required
- 2: There are rules, but I apply them with moderate contextual judgment
- 3: I follow some documented processes but regularly make judgment calls
- 4: My decisions require significant contextual judgment that isn't easily codified
- 5: My decisions depend on tacit knowledge, intuition, and relationship dynamics that can't be written down
Dimension 2: Decision Reversibility
The question: If you make a wrong decision, how easily can it be corrected?
This dimension captures something subtle but critical: AI systems can iterate. They can try something, evaluate the result, and adjust. This makes them powerful for reversible processes but limited for irreversible ones.
Surgery is the classic example. A wrong cut can't be uncut. But here's where it gets interesting: the execution of surgery is irreversible, but the planning of surgery is increasingly reversible. A surgical plan can be simulated, revised, simulated again. The AI doesn't need to do the cutting to provide value — it can provide value in the reversible planning phase, compressing the human surgeon's cognitive work.
This creates a split within many roles: the automatable part is the reversible execution, the safe part is the irreversible judgment.
Code deployment follows the same pattern. A bad deployment can be rolled back. That's why AI coding tools are so effective — the reversibility of code changes means errors can be corrected quickly. But system design — deciding what to build, what dependencies matter, what failure modes exist — that's not easily reversible. You can't unbuild a system architecture.
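Here's a minimal sketch of that iterate-evaluate-adjust loop, with stubbed stand-ins for a real toolchain (every function name here is hypothetical): the loop is only viable because undoing a bad attempt costs almost nothing.

```python
import random

# Hypothetical stand-ins for an AI coding toolchain.
def generate_patch(task: str) -> str:      # an AI proposes a change
    return f"patch for {task} (v{random.randint(1, 100)})"

def apply_patch(patch: str) -> None:       # cheap to apply...
    print(f"applied: {patch}")

def rollback(patch: str) -> None:          # ...and cheap to undo (Dimension 2)
    print(f"rolled back: {patch}")

def run_tests() -> bool:                   # automatic verification (Dimension 3)
    return random.random() < 0.4           # pretend 40% of patches pass

def iterate_until_green(task: str, max_attempts: int = 5):
    """Try, verify, undo, retry: viable only when undo is near-free."""
    for _ in range(max_attempts):
        patch = generate_patch(task)
        apply_patch(patch)
        if run_tests():
            return patch                   # keep the change
        rollback(patch)                    # reversibility makes failure cheap
    return None                            # escalate to a human

if __name__ == "__main__":
    print(iterate_until_green("fix flaky login test"))
```

Remove either the cheap rollback or the automatic test, and the loop stops working — which is exactly why irreversible, hard-to-verify decisions resist this pattern.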
Scoring guide for Dimension 2:
- 1: My decisions are easily reversible — I can undo/roll back with no significant cost
- 2: My decisions can be reversed with moderate effort and some cost
- 3: Reversing my decisions requires significant effort but is possible
- 4: My decisions have lasting effects but can be corrected over time
- 5: My decisions are largely irreversible — errors have permanent consequences
Dimension 3: Quality Verifiability
The question: Can the quality of your output be automatically checked?
This is perhaps the most operationally important dimension. When AI produces something, how do you know if it's good?
For code, you run a linter. You run tests. You have CI/CD pipelines that automatically verify correctness. This means the execution layer of software development is highly vulnerable — the AI can generate code, and the system can automatically verify whether that code works.
For marketing copy, you run A/B tests. You measure click-through rates, conversion rates, bounce rates. Highly verifiable. Which means AI-generated copy can be automatically evaluated against performance metrics.
For strategic recommendations — "should we enter this market?" — there's no automatic verifier. The quality is judged by human stakeholders with competing interests, by outcomes that won't be known for years, by contextual factors that can't be captured in a metric.
The verification gap is where humans remain necessary. Not because they're better (often they aren't), but because the process of verification requires judgment that can't be automated.
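Here's what "automatically checkable" looks like at its simplest — a hypothetical spec where the assertions themselves are the verifier, so no human judgment enters the loop:

```python
def slugify(title: str) -> str:
    """Convert a title to a URL slug (hypothetical spec, for illustration)."""
    return "-".join(title.lower().split())

def test_slugify():
    # These assertions ARE the verifier: an AI-written slugify
    # either passes them or it doesn't. No judgment call required.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  AI  and  Jobs ") == "ai-and-jobs"

if __name__ == "__main__":
    test_slugify()
    print("all checks passed")
```

When a verifier like this exists, an AI can propose implementations until one passes. When it doesn't — as with the market-entry recommendation — a human has to judge.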
Scoring guide for Dimension 3:
- 1: Output quality can be automatically verified with clear metrics
- 2: Quality can be verified automatically but metrics have some limitations
- 3: Quality requires human review but criteria are relatively objective
- 4: Quality requires expert human judgment with some subjective elements
- 5: Quality assessment is highly subjective and context-dependent
Dimension 4: Context Dependency
The question: Is the information you need to do your job in documents, or in people's heads and relationships?
AI systems are extraordinarily good at processing documents. They can read every patent ever filed, every legal ruling, every research paper. What they can't do is access the context that lives in relationships, organizational politics, industry intuition, and tacit knowledge.
A lawyer preparing for a negotiation has access to documents — contracts, correspondence, legal precedents. But the lawyer also has context that isn't in any document: how the opposing counsel has behaved in previous negotiations, what the client's real priorities are (which may differ from what's stated), the political dynamics within the client's organization that affect what they can actually agree to. This contextual knowledge is largely uncapturable by AI.
Compare this to a compliance officer checking whether a transaction meets regulatory requirements. The regulations are documents. The transaction details are documents. The context needed is almost entirely documentary. This is why compliance work is more vulnerable than it appears — the "judgment" involved is often the application of documented rules to documented facts.
Scoring guide for Dimension 4:
- 1: Most critical context is in documents that AI can access and process
- 2: Some context is in documents, but important elements exist in organizational knowledge
- 3: Context is mixed — documents provide foundation but relationships/organizational context matter
- 4: Important context lives in relationships, organizational politics, and tacit knowledge
- 5: Critical context is almost entirely relationship-based and uncapturable by AI
Your Vulnerability Score
Add up your scores across all four dimensions. The minimum is 4 (most vulnerable); the maximum is 20 (most resilient).
Total score 15-20 (Low Vulnerability): Your role delivers value through tacit knowledge, makes irreversible decisions with lasting consequences, operates in low-verifiability contexts, and depends heavily on relationship-based context. Examples: senior consultants, trial lawyers, creative directors, founders, therapists, surgeons.
Total score 9-14 (Medium Vulnerability): Your role mixes automatable and non-automatable elements. Some of your value is codifiable and reversible; some depends on judgment, relationships, and context. Most professionals sit here. Examples: product managers, financial analysts, marketing managers, project managers, engineers.
Total score 4-8 (High Vulnerability): Your role primarily delivers value through information processing, standardized decisions, reversible outcomes, and document-based context. This doesn't mean you'll lose your job — but it means the nature of your role will change rapidly. Examples: junior developers, data entry, basic content production, first-line customer service, standardized compliance checks.
| Dimension | 1 (Most Vulnerable) | 2 | 3 | 4 | 5 (Most Resilient) |
|---|---|---|---|---|---|
| Decision Codifiability | Follows explicit rules | Rules with moderate judgment | Some processes, regular judgment calls | Significant judgment required | Tacit knowledge, intuition |
| Decision Reversibility | Easily reversible | Reversible with moderate cost | Reversible with significant effort | Lasting effects, correctable | Largely irreversible |
| Quality Verifiability | Automatically verifiable | Metrics with limitations | Human review, objective criteria | Expert judgment, subjective | Highly subjective |
| Context Dependency | Primarily documents | Mixed documents/relationships | Foundation documents, relationships matter | Relationships and politics | Almost entirely relationship-based |
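If you want the scoring to be mechanical, here's a minimal sketch of the framework as just described (band boundaries copied from the ranges above):

```python
DIMENSIONS = (
    "decision_codifiability",
    "decision_reversibility",
    "quality_verifiability",
    "context_dependency",
)

def vulnerability_band(scores: dict[str, int]) -> tuple[int, str]:
    """Sum four 1-5 dimension scores and map the total to a band."""
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    if not all(1 <= scores[d] <= 5 for d in DIMENSIONS):
        raise ValueError("each dimension must be scored 1-5")
    total = sum(scores[d] for d in DIMENSIONS)
    if total <= 8:
        return total, "High Vulnerability"    # low scores = codifiable, reversible, verifiable
    if total <= 14:
        return total, "Medium Vulnerability"
    return total, "Low Vulnerability"

if __name__ == "__main__":
    # Example: codified decisions, but strong relationship context.
    print(vulnerability_band({
        "decision_codifiability": 2,
        "decision_reversibility": 2,
        "quality_verifiability": 3,
        "context_dependency": 4,
    }))  # -> (11, 'Medium Vulnerability')
```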
The Five Layers of AI Replacement: A Timeline
Understanding when different vulnerability levels face disruption helps you prioritize your response.
Layer 1: Information Transport (2024-2026, Already Happening)
This is AI doing what databases and the internet already did — moving information from where it exists to where it's needed, in processed form. Code writing. Content drafting. Data cleaning. First-line support that matches problems to solutions.
The data is striking. OpenAI's Symphony team — 3 people managing AI agents — merged 1,500 pull requests in 5 months. That's roughly 3.3 PRs per person per day, every day, for five months. The comparison point isn't other 3-person teams; it's entire engineering organizations.
If your role is primarily information transport — taking requirements, processing them, producing structured output — you're in Layer 1. The replacement is not coming; it's here.
Layer 2: Standardized Decision Roles (2026-2027)
Once AI can handle the execution, it can increasingly handle the decision-making — when the decisions follow predictable patterns.
Basic audit. Compliance screening against documented criteria. Standard research synthesis. Routine legal review where the issues are well-established. Testing and QA where pass/fail criteria are explicit.
This is where it gets uncomfortable for middle-skill knowledge workers. The person doing standardized compliance checks isn't just at risk — they're in a role that makes less economic sense with each passing month. The company paying $80K/year for compliance screening could pay $20K/year for AI that does it faster and more consistently.
But note: this is about standardized decisions. A compliance officer who can navigate ambiguous regulatory gray areas, make judgment calls about novel situations, and advise on regulatory strategy — that's Layer 4 work, not Layer 2.
Layer 3: Coordination Middle Management (2027-2028)
Project coordination. Status reporting. Vendor management for standard categories. The work of making sure work happens according to plan.
Gartner projects that 20% of organizations will use AI to flatten their structure by 2026. When AI can generate the status report, track the dependencies, identify the bottlenecks, and flag the risks — the project coordinator's value proposition shrinks.
This doesn't mean project managers disappear. It means the coordination function compresses. The project manager who adds value through strategic judgment, stakeholder management, and navigating organizational complexity — they're still necessary. The one who primarily tracks tasks and updates spreadsheets — that's Layer 3.
Layer 4: Professional Services, Restructured (2028-2030)
Here's where the narrative gets complicated. Lawyers, doctors, consultants, strategic product managers — these roles are not going away, but they're restructuring around AI.
The automatable execution compresses. A lawyer who used to spend 60% of time on document review now spends 10% — because AI handles the document review. That sounds like liberation. It is — but it also means there's less of that work to do, which means fewer lawyers are needed to do it.
The judgment and responsibility value rises. When execution is cheap and abundant, the premium moves to accountability. The senior partner who signs off on the brief, who makes the judgment call about the risky strategy, who takes the career risk of the aggressive position — that's where the value concentrates.
For professionals in this layer: you have more time than Layer 1-3 workers, but it is running out. Use that window to build your judgment muscle, not just your execution skill.
Layer 5: Meaning-Confirmation Roles (Long-Term)
Teachers. Therapists. Creative directors. Founders. Caregivers. Ministers.
What these roles share is that they're not really about information transfer or decision execution — they're about human meaning-making. A teacher who inspires isn't conveying information faster than a video; they're confirming meaning, building identity, creating belonging. A therapist isn't providing information about mental health; they're providing a human witness to suffering.
AI can't confirm meaning for humans. It can provide information, it can provide interaction, but the existential dimension — the "you are seen, you matter, your life has meaning" — that's essentially human.
If you're in this layer, your job is safe. But "safe" doesn't mean unchanged. It means your role becomes more valuable, and potentially more demanding, as the rest of the world restructures around AI.
What the Data Actually Shows
Let's be honest about what we know — and what we don't.
The productivity paradox: NBER surveys show 80% of companies feel zero impact from AI despite individual productivity gains. This seems counterintuitive, but the explanation is probably organizational: individual productivity gains don't automatically translate to firm-level impact when the surrounding processes, incentives, and structures haven't changed.
Individual gains are real: Goldman Sachs data shows AI boosts individual productivity 30% in specific use cases. This is consistent with what we see in the Layer 1 roles — individuals with AI augmentation can produce dramatically more output.
The quality gap is widening: Harness reports that heavy users of AI coding tools have 69% higher deployment failure rates. This tells us something important: the speed of AI-assisted execution is outpacing quality assurance. The people shipping 3.3 PRs per day are also shipping bugs. The bottleneck is moving from "writing code" to "verifying code works in context."
Token economics are restructuring knowledge work: Jensen Huang noted that a $250,000/year token budget represents roughly 50% of an engineer's salary. Power users are consuming 200-300 million tokens per day. This isn't just a cost metric — it's a signal about where value is flowing. The person who can effectively direct massive AI resources is worth multiples of the person who does the work manually.
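A back-of-envelope check on those figures — the 365-day usage assumption and the implied per-token price below are my inference, not Huang's:

```python
# Figures quoted above; everything derived from them is an estimate.
ANNUAL_TOKEN_BUDGET_USD = 250_000
TOKENS_PER_DAY = 250_000_000  # midpoint of the quoted 200-300M range

daily_spend = ANNUAL_TOKEN_BUDGET_USD / 365               # assumes year-round usage
usd_per_million = daily_spend / (TOKENS_PER_DAY / 1_000_000)

print(f"~${daily_spend:,.0f}/day, ~${usd_per_million:.2f} per 1M tokens")
# -> ~$685/day, ~$2.74 per 1M tokens
```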
The Chinese AI Builder pattern: in China's AI builder community, a single person working from a 75,000-line skill document produces team-level output. This isn't about AI replacing jobs — it's about one person with AI doing what previously required a team. The unit economics are changing at the individual level.
Real People, Real Transitions: Three Case Studies
Numbers and frameworks are abstract. Let's make this concrete with three professionals who scored differently on the vulnerability matrix — and what they did about it.
Case Study 1: The High-Vulnerability Developer
Alex is a backend developer at a mid-size SaaS company. Three years ago, his day looked like: write CRUD endpoints, write tests, fix bugs, repeat. Highly codified decisions, highly reversible (git revert exists), highly verifiable (tests pass or fail), context that's mostly in code/docs.
Score: 7/20. High vulnerability.
What happened: When AI coding tools became mainstream in 2024, Alex watched his output multiply. Where he once wrote 2-3 endpoints per week, he was now shipping 8-10. But he noticed something: the bugs he shipped also multiplied. And his manager started asking why, with AI helping, the team wasn't delivering significantly more value.
Alex's realization: The execution was being commoditized faster than his judgment was being valued. He had a choice: keep racing to produce more code (a race he'd eventually lose to AI), or move up the stack.
What he did: He started spending his AI-generated time on system design discussions, architecture reviews, and mentoring the junior developers who were still learning the fundamentals. He positioned himself as the person who could be trusted with the decisions that mattered — not because he coded faster, but because he understood the system better.
18 months later: Alex leads a 4-person team that includes AI agents. His individual contribution as a coder has been replaced by AI, but his contribution as a system thinker and decision-maker has been amplified. He didn't escape the vulnerability — he traded execution vulnerability for judgment value.
Case Study 2: The Medium-Vulnerability Product Manager
Jordan is a PM at a consumer app company. Day-to-day: write PRDs, coordinate with engineering, track metrics, manage stakeholder communications. Some codifiable (requirements templates, roadmap processes), some judgment-based (prioritization decisions, stakeholder management).
Score: 12/20. Medium vulnerability.
What happened: Jordan watched the project coordination piece get automated first. AI-generated status reports, AI-synthesized stakeholder feedback, AI-monitored project health. The "project coordinator" aspect of the role was clearly compressible.
But the judgment aspects — deciding what to build, navigating conflicting stakeholder priorities, making trade-off calls under uncertainty — those remained firmly human territory.
What Jordan did: Instead of fighting the automation of coordination, Jordan leaned into it. Used AI to handle status updates and meeting summaries, then used the time saved to go deeper on strategic questions: "Are we building the right things? Are our priorities aligned with business outcomes?" Jordan became the PM who brought strategic clarity, not just project management.
18 months later: Jordan is now a Director of Product, with a scope that would have required 3 PMs before AI. The execution layer compressed, but the strategic layer expanded. Jordan used AI as a lever to move up, not as a threat to resist.
Case Study 3: The Low-Vulnerability Consultant
Sam is a strategy consultant specializing in operational transformation for manufacturing clients. Core work: understand client's complex organizational dynamics, diagnose systemic issues, recommend changes that require navigating political resistance and behavioral change. Highly tacit knowledge, largely irreversible recommendations, low verifiability (outcomes depend on implementation, not just recommendations), deeply relationship and context-dependent.
Score: 18/20. Low vulnerability.
What happened: Nothing immediate. The consulting industry moved slower than software. AI could help with research and analysis, but the judgment required for client work — understanding what the client actually needs vs. what they're asking for, navigating the political dynamics to get recommendations implemented — that remained human territory.
But Sam noticed something subtler: clients were becoming more sophisticated. They had seen AI presentations. They knew what AI could do. Some started asking whether they needed consultants at all, or just AI tools.
Sam's response: Doubled down on the human elements that AI couldn't replicate. Made the implicit explicit — documented the tacit knowledge that had been in his head, built frameworks that could be taught to clients, positioned himself not as a source of answers but as a thinking partner who could navigate complexity with clients. The value wasn't in the recommendations; it was in the judgment process and the relationship trust.
18 months later: Sam's clients value him more, not less. But the nature of engagements changed. Shorter, more focused, more advisory. The "body shopping" model of consultants doing analysis work is dying; the "trusted advisor" model is thriving.
The pattern across all three: None of them survived by resisting AI. They survived by identifying which of their skills would be amplified and which would be commoditized, then positioning themselves accordingly. The answer was never "fight AI" or "surrender to AI" — it was always "figure out where the judgment premium is and move there."
Your Career Strategy by Vulnerability Level
If You Score High Vulnerability (4-8): Move Now
You're not going to be replaced overnight. But the economic logic is against you — the value you're delivering is being commoditized, and the trend will continue.
The strategy: Shift from execution to judgment. Not "learn AI tools" — that's necessary but not sufficient. The deeper shift is to become the person who can audit AI output, not the person who produces it.
The junior developer writing boilerplate code: AI can write the boilerplate. The value moves to the senior engineer who can review that code, catch the subtle bugs, understand the system implications, make the architectural calls. The person who audits AI output is safer than the person the AI replaces.
Concrete actions:
- Start reviewing AI-generated code, content, and analysis in your domain
- Build judgment about quality, not just ability to produce
- Find the edge cases where AI fails and become the expert on those
- Position yourself as the "human in the loop" for AI-generated work
If You Score Medium Vulnerability (9-14): You Have Time, But Not Unlimited Time
You're in the most common position — mixed value delivery, some automatable and some not. The automatable portion will compress; the judgment portion will appreciate.
The strategy: Deepen your non-codifiable skills while using AI to amplify your codifiable ones. The goal is to become the person who uses AI to do 10x — not the person AI replaces.
The project manager who automates status reporting but brings strategic judgment to stakeholder management. The analyst who automates data gathering but brings market intuition to investment decisions. The marketing manager who uses AI for execution but brings brand intuition to strategy.
The risk is staying in the middle — doing enough execution to be threatened by automation, but not building enough judgment to move up the value chain.
Concrete actions:
- Identify what proportion of your work is execution (likely automatable) vs. judgment (not automatable)
- Systematically use AI to compress the execution work
- Invest the time saved in judgment-heavy activities: strategy, stakeholder management, complex problem-solving
- Build relationships that create context AI can't access
- Get comfortable with "I use AI to do X" as a core professional competency
If You Score Low Vulnerability (15-20): Don't Get Complacent
Your role is genuinely hard to automate. But the world around you is changing, and the implications are subtler than "my job is safe."
The strategy: Your organization may restructure around you. Prepare to manage AI agents instead of managing junior staff. The economics that make junior roles vulnerable will eventually make junior staff structures inefficient.
The senior consultant who previously managed a team of analysts now manages AI agents. The creative director who previously directed a design team now directs AI-generated options. The surgeon who previously had a large surgical team now operates with a more compact, AI-augmented team.
Your value is safe. Your team's structure is not.
Concrete actions:
- Learn to direct AI systems, not just use them
- Develop skills in prompting, evaluating, and iterating on AI outputs
- Build expertise in areas where human judgment remains essential
- Consider how AI changes the economics of your practice
- Lead the integration of AI in your domain rather than being passively affected by it
The Question Nobody Asks: What About the 80%?
NBER found that 80% of companies feel zero impact from AI despite all the media coverage. This is a critical data point that should shape your thinking.
Individual productivity gains don't automatically translate to organizational impact. A developer using Copilot who ships 3x more code isn't necessarily creating 3x more value for their company — if the bottleneck is code review, or requirements gathering, or deployment, or stakeholder alignment.
This means the AI transformation of work won't be as smooth or as complete as the most optimistic predictions. It also means the organizational barriers to AI adoption are as important as the technical ones.
For your career: the question isn't just "can AI do my job?" It's "will my organization create the conditions for AI to do my job?" In many cases, the answer is "no, not yet" — which means you have more runway than the vulnerability score suggests.
But don't mistake organizational inertia for genuine resilience. The capabilities are coming. The question is timing.
FAQ: What People Actually Ask
Q: Will there be mass layoffs because of AI in 2026?
The honest answer: mass layoffs are a management decision, not a technological inevitability. AI makes individual workers more productive, but organizational restructuring requires managerial will, economic pressure, and institutional change.
We should expect workforce composition changes, not necessarily headcount reductions. The company that uses AI to do more with fewer people is one outcome. The company that uses AI to serve more customers with the same people is another. Both are happening.
The companies most likely to pursue aggressive headcount reduction are those with: (1) highly codified work, (2) clear output metrics, (3) management incentive to cut costs, and (4) limited need for human judgment in service delivery. Layer 1-2 roles at companies with these characteristics may see displacement. Layer 4-5 roles, less so.
Q: What skills are truly AI-proof?
There are no AI-proof skills — only currently hard-to-automate competencies. But some categories of skill are more durable:
- Tacit knowledge: Intuition developed through years of experience in a specific domain, especially under ambiguous conditions
- Social intelligence: Reading people, understanding motivations, navigating complex interpersonal dynamics
- Creative judgment: Not generating creative options (AI can do that) but evaluating and selecting among options in context
- Responsibility acceptance: Willingness to be accountable for decisions, including their irreversible consequences
- Meaning-making: Helping others find purpose, significance, and connection
These aren't skills you can learn from a course. They're developed through experience, reflection, and sustained engagement with difficult problems.
Q: Is my specific job [developer/designer/PM/analyst] safe?
The question is unanswerable at the job title level. A "developer" who writes boilerplate code is Layer 1 vulnerable. A developer who makes architectural decisions, navigates ambiguous requirements, and takes responsibility for system outcomes is Layer 4. Same job title, very different vulnerability.
What matters is which layer of your specific role you're operating in. If 70% of your time is Layer 1 execution, you're on the Layer 1 timeline regardless of your title. If 70% is Layer 4 judgment, you're in a very different position.
Use the four-dimension framework to score your actual work, not your job title.
Q: Should I learn AI tools or change careers?
Both/and, not either/or. Everyone should learn to direct AI systems effectively — that's becoming a baseline professional competency, like using spreadsheets or email. But "learn AI tools" isn't a career strategy; it's table stakes.
The more important question: does your current career trajectory lead toward more or less codifiable work? If you're moving toward judgment-heavy, context-dependent work — stay and invest in that trajectory. If you're moving toward execution-heavy, standardized work — the direction matters more than the tools.
Q: What's the one thing I should do right now?
If you're in a Layer 1-2 vulnerable role: identify the AI system that's disrupting your domain, and become excellent at auditing its output. The gap between "using AI" and "evaluating AI output critically" is where professional value is moving.
If you're in a Layer 3-4 role: find the one judgment-heavy activity you do that AI currently can't do, and invest in getting dramatically better at it. Deepen the judgment muscle, don't just maintain execution skills.
The common mistake: spending energy worrying about AI instead of building the specific capabilities that remain valuable in an AI-augmented world.
The Skills That Appreciate vs. The Skills That Depreciate
If the four-dimension framework tells you where you stand, this distinction tells you which direction to run.
Skills that depreciate in an AI world:
- Information recall: Knowing facts. AI has all the facts. This was never really a high-value skill anyway — the value was in knowing where to find facts and how to apply them, not in the knowing itself.
- Execution consistency: Doing the same task the same way repeatedly. AI does this better, faster, and without fatigue.
- Pattern matching on structured data: Finding anomalies when the data is well-structured and the patterns are known. AI sees these patterns faster.
- Information synthesis at surface level: Summarizing documents, aggregating reports, creating standard analysis templates. AI can do this at scale.
Skills that appreciate in an AI world:
- Systems judgment: Understanding how complex interdependent systems work, where the failure points are, what happens when you change one part. This requires mental models AI doesn't have.
- Contextual decision-making: Making decisions where the relevant context isn't in any document — it's in relationships, organizational dynamics, and tacit understanding of how things actually work.
- Stakeholder navigation: Understanding what different people need, what they can actually say yes to, how to build consensus without explicit authority.
- Accountability bearing: Taking responsibility for irreversible decisions. When something goes wrong, being the person who signed off, who stands behind the call.
- Creative integration: Not generating creative options (AI is good at this), but recognizing which creative options fit the situation, which ideas will resonate with stakeholders, which direction has momentum.
The pattern: skills that operate on explicitly codifiable inputs depreciate. Skills that require tacit knowledge, contextual judgment, and accountability appreciate.
This isn't about being "creative" vs. "analytical" — both categories have appreciating and depreciating sub-skills. It's about whether your skill operates on information (which AI can process) or on context (which AI can't access).
The Self-Assessment Checklist
Before you close this article, do the actual work. Don't just read the framework — apply it.
Dimension 1: Decision Codifiability
- [ ] Can I describe my core decisions as a set of rules or decision trees?
- [ ] Would an AI system need to learn "tacit knowledge" to make these decisions, or just patterns?
- [ ] Do my colleagues describe my value as "judgment" or "following process"?

Dimension 2: Decision Reversibility
- [ ] If I make a wrong decision, what's the recovery process?
- [ ] Are my decisions reversible through standard operational procedures?
- [ ] Do my decisions create irreversible commitments or change states?

Dimension 3: Quality Verifiability
- [ ] Can I run automated tests or checks on my work?
- [ ] Would an AI-generated version of my work be hard to distinguish from mine?
- [ ] Is there a clear, objective metric for whether I'm doing a good job?

Dimension 4: Context Dependency
- [ ] Is the information I need to do my job available in documents/databases?
- [ ] Do my most important decisions depend on relationships or organizational context?
- [ ] Would an AI system be missing critical information if it only had access to documented knowledge?
Scoring: rate each dimension 1-5, add them up, and identify which vulnerability band you land in. Lower totals mean more vulnerable.
The Only Conclusion That Matters
AI is not coming for your industry. It's coming for specific types of value delivery — and those types exist in every industry.
The question isn't "will AI replace my job?" It's "which parts of my work are AI-replaceable, and where do I provide value that AI can't replicate?"
The four dimensions — codifiability, reversibility, verifiability, context-dependency — give you a language for answering that question precisely. The scoring gives you a starting point for action.
What you do with that information depends on where you sit. High vulnerability means move deliberately toward judgment. Medium vulnerability means use AI to amplify yourself while building judgment muscle. Low vulnerability means don't mistake role safety for organizational stability.
The common thread: in an AI-augmented world, judgment is the premium skill. Everything that can be codified, reversed, verified, and automated will be. The residual value of human work is in the decisions that can't be.
That's not optimistic or pessimistic. It's just the direction of travel.
What dimension did you score highest on? What's your current strategy for building judgment-heavy value? The comments are open — but use the framework to ground the discussion in specifics, not generalities.
Tags: AI, Career, Future of Work, Job Replacement, Skills, Self-Assessment