Last week, an intern joined our team.

On the first day, while familiarizing himself with the environment, he sat beside me watching me write code. After observing for a while, he asked me a question that would haunt me for days:

"Teacher, how fast can you write code without using AI?"

I responded naturally, almost reflexively: "Of course. I've been writing code for twelve years."

Then he asked a second question—a question that would stop me cold:

"When was the last time you wrote code by hand without using AI?"

I paused.

I genuinely thought about it, searching my memory. And I couldn't remember.

That evening, I made a decision: I would turn off Cursor, disable Copilot, close ChatGPT, and use the most basic IDEA installation to write code by hand for two hours. I wanted to discover what would actually happen.

Part One: Test Rules

Practicing with arbitrary requirements holds little meaning. I wanted a genuine comparison—a true before-and-after snapshot of my capabilities.

I retrieved a feature I had written one year earlier: a user tagging system. The logic wasn't complex but covered substantial ground: a user could hold multiple tags, the system supported tag-based user queries and batch tag imports, and tags were sorted by priority. When I originally wrote it purely by hand, it took approximately four hours.
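For context, the feature's surface area can be sketched roughly like this. This is a hypothetical reconstruction: `UserTagService` and its method names are my illustration, not the original code.

```java
import java.util.List;

// Hypothetical sketch of the tagging feature's scope; the names below are
// illustrative reconstructions, not the original service.
public class TagFeatureSketch {
    public record UserTag(long userId, String tagCode, int priority) {}

    public interface UserTagService {
        // A user can hold multiple tags.
        void addTags(long userId, List<String> tagCodes);
        // Tag-based user query.
        List<Long> findUserIdsByTag(String tagCode);
        // Batch import; returns the number of rows written.
        int batchImport(List<UserTag> rows);
        // One user's tags, sorted by priority.
        List<UserTag> tagsOf(long userId);
    }
}
```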

This time, I established strict rules:

  • Complete AI Disablement: Turn off all AI tools, including IDEA's built-in AI plugins
  • No External References: No Stack Overflow, no documentation lookup (if I couldn't remember APIs, I couldn't remember them)
  • Basic IDE Features Only: IDEA's fundamental code completion was permissible, but I wouldn't depend on it
  • Continuous Timing: Record every instance of being stuck, noting duration and cause

Two hours. Let's discover what percentage I could complete.

Part Two: The First Hour—Degraded Muscle Memory, But Not Alarming

The first twenty minutes felt acceptable.

Core database table structures, entity classes, Mapper interfaces—these represented muscle memory still intact in my brain.

import java.time.LocalDateTime;

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

@Data
@TableName("user_tag")
public class UserTag {
    @TableId(type = IdType.AUTO)
    private Long id;
    private Long userId;
    private String tagCode;
    private Integer priority;
    private LocalDateTime createTime;
}

This kind of code—I wrote it a year ago, and I could still write it now. These patterns had been burned into neural pathways through thousands of repetitions.

But reaching the Service layer, I encountered my first stall.

I wanted to use Stream for grouping aggregation. I knew it was Collectors.groupingBy, but how the downstream parameters fit together escaped me; it took about thirty seconds before it came back.

// Stuck here for 30 seconds
Map<Long, List<UserTag>> tagMap = userTags.stream()
    .collect(Collectors.groupingBy(UserTag::getUserId));

Thirty seconds seems insignificant. But previously, this line of code required no pause. My hands would type it automatically, without conscious thought.
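Those "subsequent parameters" are groupingBy's downstream collectors. A minimal sketch of both forms, with an illustrative record and data of my own (not the original entity):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingDemo {
    public record UserTag(long userId, String tagCode, int priority) {}

    // Two-argument groupingBy: userId -> that user's tag codes only,
    // via a mapping(...) downstream collector.
    public static Map<Long, List<String>> codesByUser(List<UserTag> tags) {
        return tags.stream()
            .collect(Collectors.groupingBy(
                UserTag::userId,
                Collectors.mapping(UserTag::tagCode, Collectors.toList())));
    }

    public static void main(String[] args) {
        List<UserTag> tags = List.of(
            new UserTag(1L, "vip", 1),
            new UserTag(1L, "new_user", 2),
            new UserTag(2L, "vip", 1));
        // One-argument form: userId -> full UserTag list.
        Map<Long, List<UserTag>> byUser = tags.stream()
            .collect(Collectors.groupingBy(UserTag::userId));
        System.out.println(byUser.keySet());
        System.out.println(codesByUser(tags));
    }
}
```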

Similar pauses occurred five times during that first hour.

Each time, the cause wasn't inability; it was that the "feel" had disappeared.

Like a fast typist suddenly using a keyboard untouched for a month: you know where the letters are, but hand speed and muscle reflexes cannot keep pace.

This layer of degradation, I could accept.

Use it or lose it. Completely normal. After a year without writing by hand, some rust naturally forms. This matched expectations.

Part Three: The Second Hour—When Truly Unsettling Things Began Appearing

The first hour concluded. I had completed approximately 40%—a normal pace.

Entering the second hour, while writing batch import logic, something unexpected occurred.

I wrote several lines of logic inside a method body. After finishing, I suddenly stopped.

Not because I was stuck.

Because I felt uncertain whether what I wrote was correct. I wanted to—

Have AI take a look.

The moment this thought appeared, I recognized something significant.

I suppressed this impulse, personally reviewed the logic again, confirmed no problems, and continued writing.

But subsequently, this thought appeared two more times.

When the two hours concluded, I compiled statistics:

Completion Rate: Approximately 55%
Stuck Instances: 11 times
Stuck Causes:

  • Couldn't remember API/method names: 5 times
  • Uncertain about logic starting point, didn't know where to begin: 3 times
  • After writing, wanted to "let AI verify": 3 times

The first two categories I could accept. The final three instances made me stop and think deeply.

It wasn't about being unable to write. It was about not trusting my own judgment after writing.

This differed fundamentally from muscle memory degradation.

Muscle memory degradation represents a skill issue—practice can restore it.

Not trusting one's own judgment represents psychological dependency.

Part Four: What Exactly Has Degraded?

After the test concluded, I sat there reviewing the experience and realized the degradation wasn't one thing but three, in increasing order of severity.

First Layer: Muscle Memory Degradation (Minor)

API names, method signatures, syntax details—these I couldn't remember clearly.

This is a normal phenomenon, no cause for concern. One week of deliberate practice could restore it.

Like becoming slower at mental arithmetic after long-term calculator usage—it doesn't mean you've forgotten how to calculate.

Recovery Strategy: Regular handwriting practice sessions, perhaps 30 minutes weekly, focusing on commonly-used APIs and patterns. Flash cards for method signatures. Deliberate recall exercises.

Second Layer: Disappeared Starting Gesture (Worth Noticing)

This layer is harder to spot than simple loss of feel.

Previously, my coding habit followed: think through logic clearly → begin writing.

After one year of AI usage, the habit transformed into: write a comment describing intent → wait for AI completion → I make adjustments.

These represent two completely different thinking modes.

After turning off AI, sitting before a blank method body, I discovered I didn't know where to start. Not because I couldn't write—but the "actively constructing logic" starting gesture had been replaced by "describe intent, wait for generation."

Once AI disappeared, this starting gesture needed rediscovering.

The Psychological Shift: This represents a fundamental change in how I approach problems. Previously, I owned the solution from conception to implementation. Now, I own the problem description, but AI owns the initial solution draft. The mental muscles for solution generation have atrophied from disuse.

Recovery Strategy: Before any AI assistance, force myself to sketch the solution approach first. Write pseudocode. Outline the algorithm. Only then consult AI for refinement. Maintain ownership of the intellectual work.

Third Layer: Self-Verification Capability Transfer (Requires Vigilance)

This represents the most serious layer, also the most difficult to detect.

After completing a code section, my previous first reaction was: run unit tests, or personally review the logic again.

Now, the first reaction is: let AI take a look.

This transcends habit—it represents trust transfer.

I began transferring the judgment "is this code correct?" from myself to AI.

This point, accumulated long-term, would expose itself in two scenarios: whiteboard interviews and genuine urgent production failures.

These two scenarios share one characteristic: no AI available. You can only depend on yourself.

The Danger: In critical moments—production outages, security vulnerabilities, time-sensitive fixes—the ability to independently verify correctness becomes essential. Outsourcing this judgment to AI creates dangerous dependency.

Recovery Strategy: Implement a personal verification checklist before consulting AI. Run through edge cases mentally. Trace execution paths. Only after personal verification should AI review become an additional safety net, not the primary validation mechanism.

Part Five: An Eye-Opening Comparison

After the test concluded, I did something revealing: I opened Cursor and repeated the identical task, timing myself again.

Results:

Condition     Time                 Completion   Quality
With AI       1 hour 10 minutes    95%          High
Without AI    2 hours              55%          Medium

The gap exceeded my expectations.

But what concerned me more wasn't this number—it was the mental experience in both states.

With AI, I was thinking: How do I describe this requirement clearly? Where does the generated code need adjustment? Have I missed any boundary conditions?

Without AI, I was thinking: How is this method name spelled? Can Stream be used this way? Is the logic I wrote actually correct?

One represents a designer's state. The other represents an executor's state.

AI kept me in the designer's state long-term—this is good. Higher efficiency, better output quality.

But the cost: the executor state's muscles hadn't been exercised for a year.

The Designer-Executor Balance: Effective development requires both capabilities. Designers architect solutions; executors implement them. AI has shifted my balance heavily toward design, leaving execution capabilities underdeveloped. The ideal: maintain fluency in both states, transitioning seamlessly as needed.

Part Six: I Don't Plan to "Quit AI," But I've Changed Three Habits

Reading this, you might expect me to say "AI is harmful, write more code by hand."

No.

I still use AI tools, and I will continue using them. The efficiency sits right there—no reason to abandon it.

But those three instances of "wanting AI to verify" sent me a signal: if I don't deliberately do certain things, this dependency will deepen until one day, when truly needing to solve problems independently, I'll discover I've become incapable.

Therefore, I've changed three habits.

Habit One: Weekly "No AI Periods"

Every week, select a fixed two-hour period, completely turn off all AI tools, write code by hand.

Don't seek speed, don't worry about completion percentage—just maintain that "building logic from blank slate" capability without rusting.

Like running: you don't need to run marathons daily, but if you don't run for a year, you can't run anymore.

Implementation Details:

  • Schedule it like any important meeting—non-negotiable
  • Choose moderately complex tasks that challenge without overwhelming
  • Track progress over time to ensure capability maintenance
  • Use this time for learning new technologies where AI dependency would prevent genuine understanding

Habit Two: Key Logic—Write First, AI Validates After

Reverse the sequence.

Previously: AI generates → I adjust.

Now: I write first → AI reviews, finds what I missed.

This sequence adjustment doesn't affect final efficiency but maintains "I am the leader" state rather than "AI is the leader."

Practical Application:

  • For algorithm implementation: write the core logic myself first
  • For API integration: design the interface contract before AI fills implementation
  • For data modeling: define the schema based on my understanding, then have AI suggest optimizations
  • For refactoring: identify the problems and propose solutions, let AI execute the mechanical changes
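As an illustration of the "write first" step, this is the kind of batch-import core I now force myself to draft by hand before any AI review. It is a sketch: the dedup rule (keep the lowest priority number per user-tag pair) and all names are my assumptions, not the original system's.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchImport {
    public record TagRow(long userId, String tagCode, int priority) {}

    // Hand-written first draft: deduplicate rows by (userId, tagCode),
    // keeping the row with the lowest priority number (highest priority).
    // LinkedHashMap preserves the input's encounter order.
    public static List<TagRow> dedupe(List<TagRow> rows) {
        Map<String, TagRow> best = new LinkedHashMap<>();
        for (TagRow row : rows) {
            String key = row.userId() + ":" + row.tagCode();
            best.merge(key, row,
                (a, b) -> a.priority() <= b.priority() ? a : b);
        }
        return new ArrayList<>(best.values());
    }

    public static void main(String[] args) {
        List<TagRow> rows = List.of(
            new TagRow(1L, "vip", 3),
            new TagRow(1L, "vip", 1),      // duplicate pair, higher priority
            new TagRow(2L, "new_user", 2));
        System.out.println(dedupe(rows));
    }
}
```

Only after this draft exists does AI get to review it and point out what I missed.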

Habit Three: After Writing Code, Review Personally Before Asking AI

This specifically addresses those three "wanting AI to verify" moments.

It's not about never letting AI look—it's about first forcing myself to re-examine the logic personally, then letting AI find supplements.

This rebuilds self-verification muscles rather than permanently outsourcing this judgment.

The Personal Review Checklist:

  1. Does this handle all edge cases I can identify?
  2. Are there any null pointer possibilities?
  3. Is the error handling appropriate?
  4. Does this match the existing codebase patterns?
  5. Have I tested the happy path and at least one failure mode mentally?

Only after answering these questions should AI review begin.
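Part of the checklist can even be mechanized as a quick self-check run before any AI review. A minimal sketch, assuming an illustrative grouping helper of my own; the null-handling policy (checklist item 2) and the happy-path assertion (item 5) are the parts being exercised:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SelfCheck {
    public record UserTag(long userId, String tagCode, int priority) {}

    // Illustrative helper under review: group a user's tags by userId.
    // Defensive choice: treat null input as empty rather than throwing.
    public static Map<Long, List<UserTag>> byUser(List<UserTag> tags) {
        if (tags == null) return Map.of();
        return tags.stream().collect(Collectors.groupingBy(UserTag::userId));
    }

    public static void main(String[] args) {
        // Checklist item 5: happy path.
        var tags = List.of(
            new UserTag(1L, "vip", 1),
            new UserTag(1L, "new_user", 2));
        if (byUser(tags).get(1L).size() != 2) throw new AssertionError("grouping");
        // Checklist item 2: one failure mode, null input must not blow up.
        if (!byUser(null).isEmpty()) throw new AssertionError("null handling");
        System.out.println("self-check passed");
    }
}
```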

Part Seven: The Broader Implications

This personal experiment reveals broader truths about AI-assisted development that extend beyond individual habit formation.

The Skill Atrophy Problem

As AI tools become more capable, certain human skills inevitably atrophy from disuse. This isn't unique to programming—calculators affected mental arithmetic, GPS affected navigation skills, spell checkers affected spelling ability.

The question isn't whether this happens—it's whether we care, and what we do about it.

For Individual Developers: Conscious skill maintenance becomes essential. Identify which capabilities matter most for your career and life, then deliberately practice them even when AI could do them faster.

For Teams and Organizations: Balance efficiency gains against capability preservation. Teams that completely depend on AI may struggle when AI fails or when situations require human judgment.

The Interview Reality

Whiteboard interviews aren't disappearing. Technical assessments still evaluate fundamental understanding. Developers who've completely outsourced thinking to AI will struggle in these scenarios—not because they're incapable, but because they're out of practice.

Preparation Strategy: Regular no-AI practice sessions double as interview preparation. The skills maintained are exactly those assessed in technical interviews.

The Production Emergency Scenario

When production systems fail at 3 AM, you need engineers who can think clearly without assistance. AI tools may be unavailable, too slow, or simply unreliable under pressure.

Team Resilience: Organizations should ensure multiple team members maintain strong independent capabilities. Complete AI dependency creates single points of failure.

Part Eight: A Conversation with the Intern

That intern now uses Cursor every day.

I didn't stop him.

But I told him:

"The smoother the tool becomes to use, the more you must occasionally turn it off, confirming you remain someone who knows how to use tools—not someone who cannot work without tools."

He thought for a moment and said: "So it's like getting used to navigation—you still need to occasionally recognize roads yourself?"

I said yes.

Final Reflections: Finding Balance in the AI Era

I now have an answer to the intern's second question: last week.

The result: 55% completion in two hours, stuck 11 times, first reaction after writing code was wanting AI to take a look.

This isn't a disaster.

But it is a signal—a signal saying: certain capabilities, without your awareness, are quietly weakening.

AI has made coding faster. But "being able to write code" and "being able to write code without AI" are quietly separating.

I don't want to wait until the day I truly need it, only to discover I've become incapable.

The Path Forward

Moving forward, the goal isn't rejecting AI assistance—it's maintaining balanced capability:

  1. Embrace AI for what it does best: Rapid prototyping, boilerplate generation, routine refactoring, comprehensive reviews
  2. Reserve human effort for what matters most: Architecture decisions, complex algorithm design, edge case identification, final quality judgment
  3. Practice deliberately: Regular no-AI sessions maintain fundamental skills
  4. Stay aware: Monitor your own dependency levels, adjust habits before problems emerge

A Question for Every Developer

Have you tried turning off AI to see how fast you can still write?

The answer might surprise you. It might concern you. Or it might reassure you.

Whatever the result, knowing where you stand enables intentional choices about where you're going.

The AI era doesn't require abandoning tools. It requires using them wisely—knowing when to leverage their power and when to depend on your own. The developers who thrive will be those who master this balance, remaining capable with or without assistance.

After all, the best tool users aren't those who cannot function without their tools. They're those who choose to use their tools—not because they must, but because they enhance already-solid capabilities.

That's the goal. That's the path. And that's the commitment I'm making, one two-hour session at a time.