Last week, an intern joined our team.

On his first day, while getting familiar with the environment, he sat beside me and watched me write code.

After watching for a while, he asked me:

"Teacher, can you still write code without AI?"

Without thinking, I answered: "Of course I can. I've been writing code for 12 years."

Then he asked a second question:

"When was the last time you wrote code manually without AI?"

I paused.

After thinking carefully, I couldn't remember.

That evening, I made a decision: turn off Cursor, turn off Copilot, turn off ChatGPT, open a bare IDEA, and write code by hand for two hours to see what would actually happen.

Test Rules

Practicing on a made-up requirement wouldn't tell me much. I wanted a real comparison.

So I dug out a feature I had written one year ago—a user tag system.

The logic wasn't complex: a user could have multiple tags, with support for querying users by tag, batch tag import, and priority sorting. When I originally wrote it by hand, it took about 4 hours.

The rules this time:

  • Turn off all AI tools, including IDEA's built-in AI plugins
  • No Stack Overflow, no documentation lookups (if I couldn't remember an API, too bad)
  • IDEA's basic completion allowed, but not leaned on
  • Time the whole session, recording the duration and reason for every stall

Two hours to see how far I could get.

First Hour: Skill Degradation, But Not Alarming

The first 20 minutes felt okay.

Core database table structures, entity classes, Mapper interfaces—these were muscle memory, still present in my brain.

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

import java.time.LocalDateTime;

@Data
@TableName("user_tag")
public class UserTag {
    @TableId(type = IdType.AUTO)
    private Long id;
    private Long userId;
    private String tagCode;
    private Integer priority;
    private LocalDateTime createTime;
}

Code like this I could write a year ago, and I can still write it now.

But when writing the Service layer, I stalled for the first time.

I wanted to use a Stream for grouping and aggregation. I knew it was Collectors.groupingBy, but how to chain the parameters after it took me about 30 seconds to recall.

// Stuck here for 30 seconds
Map<Long, List<UserTag>> tagMap = userTags.stream()
    .collect(Collectors.groupingBy(UserTag::getUserId));

Thirty seconds isn't much, but this line used to require no pause at all; my hands would just type it.
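The part that slipped was the second parameter of groupingBy: the downstream collector. A minimal, runnable sketch of both forms, where the UserTag record is a stripped-down stand-in for the entity above, not the real class:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingDemo {
    // Stripped-down stand-in for the UserTag entity (illustration only).
    record UserTag(long userId, String tagCode, int priority) {}

    // One-argument form: group tags by userId.
    static Map<Long, List<UserTag>> byUser(List<UserTag> tags) {
        return tags.stream().collect(Collectors.groupingBy(UserTag::userId));
    }

    // Two-argument form: the second ("downstream") collector is the part
    // that is easy to forget. Here it counts tags per user.
    static Map<Long, Long> countByUser(List<UserTag> tags) {
        return tags.stream()
                .collect(Collectors.groupingBy(UserTag::userId, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<UserTag> tags = List.of(
                new UserTag(1L, "vip", 2),
                new UserTag(1L, "new_user", 1),
                new UserTag(2L, "vip", 1));
        System.out.println(byUser(tags).get(1L).size());  // 2
        System.out.println(countByUser(tags).get(2L));    // 1
    }
}
```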

Similar pauses occurred 5 times during the first hour.

Each time wasn't because I didn't know how, but because that "feel" had disappeared.

It's like a fast typist suddenly using a keyboard they haven't touched for a month—they know where the letters are, but hand speed and muscle reactions can't keep up.

This layer of degradation I can accept.

Use it or lose it—perfectly normal. A year without writing by hand and the skills got rusty; that matched expectations.

Second Hour: Truly Unsettling Things Started Appearing

The first hour ended with about 40% complete—a normal pace.

Entering the second hour, while writing batch import logic, something unexpected occurred.

I wrote several lines of logic inside a method body, then after finishing, suddenly stopped.

Not because I was stuck.

Because I wasn't sure if what I wrote was correct, and wanted to—

Have AI take a look.

The moment this thought appeared, I realized something.

I held back this impulse, went through the logic again myself, confirmed no problems, and continued.
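For reference, the logic in question was roughly this shape—a reconstruction from memory, not the original code: before inserting, drop incoming tags whose (userId, tagCode) pair already exists, including duplicates within the batch itself.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class BatchImport {
    // Stripped-down stand-in for the UserTag entity (illustration only).
    record UserTag(long userId, String tagCode, int priority) {}

    // Keep only tags whose (userId, tagCode) pair is not already present,
    // and drop duplicates inside the incoming batch as well.
    static List<UserTag> filterNew(List<UserTag> incoming, List<UserTag> existing) {
        Set<String> seen = existing.stream()
                .map(t -> t.userId() + ":" + t.tagCode())
                .collect(Collectors.toCollection(HashSet::new));
        return incoming.stream()
                // Set.add returns false when the key is already present.
                .filter(t -> seen.add(t.userId() + ":" + t.tagCode()))
                .toList();
    }

    public static void main(String[] args) {
        List<UserTag> existing = List.of(new UserTag(1L, "vip", 1));
        List<UserTag> incoming = List.of(
                new UserTag(1L, "vip", 1),      // already exists -> dropped
                new UserTag(1L, "new_user", 2), // new -> kept
                new UserTag(1L, "new_user", 2)  // duplicate in batch -> dropped
        );
        System.out.println(filterNew(incoming, existing).size()); // 1
    }
}
```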

But subsequently, this thought appeared two more times.

After two hours ended, I tallied the data:

  • Completion: approximately 55%
  • Stalls: 11

Stall Reasons:

  • Couldn't remember an API or method name: 5 times
  • Unsure where to start the logic: 3 times
  • Wanted to "have AI verify" after writing: 3 times

The first two categories I can accept. The last three made me stop and think for a long time.

It wasn't that I couldn't write—it was not trusting my own judgment after writing.

This is different from skill degradation.

Skill degradation is a capability problem—practice brings it back.

Not trusting your own judgment is psychological dependency.

What Exactly Degraded?

After the two-hour test, I sat there reviewing it and realized that not one thing had degraded, but three—in increasing order of severity.

First Layer: Muscle Memory Degradation (Minor)

API names, method signatures, syntax details—I can't remember them anymore.

This is a normal phenomenon and nothing to worry about. A week of deliberate practice brings it back.

Like becoming slower at mental arithmetic after long-term calculator use, but it doesn't mean you can't calculate anymore.

Second Layer: Disappearing Starting Moves (Worth Noticing)

This layer is more hidden than the loss of feel.

Previously when writing code, the habit was: think through the logic → start writing.

After using AI for a year, the habit became: write a comment describing intent → wait for AI completion → I modify.

These are two completely different thinking modes.

After turning off AI, sitting in front of a blank method body, I found myself not knowing where to start. Not that I couldn't write—it's that the "actively constructing logic" starting move was replaced by "describe intent, wait for generation."

Once AI is gone, this starting move must be found again.

Third Layer: Self-Verification Capability Transfer (Requires Vigilance)

This is the most serious layer, and also the hardest to detect.

After writing a chunk of logic, my previous first reaction was: run the unit tests, or walk through the logic again myself.

Now the first reaction is: have AI take a look.

This isn't just a habit problem—it's trust transfer.

I began transferring the judgment of "is this code correct" from myself to AI.

Accumulated over the long term, this will expose itself in two scenarios: whiteboard interviews and genuinely urgent production failures.

These two scenarios share one characteristic: No AI, you can only rely on yourself.

An Eye-Opening Comparison

After the test ended, I did one thing: opened Cursor, redid the same task, and timed it.

Results:

Condition  | Time              | Completion
-----------|-------------------|-----------
With AI    | 1 hour 10 minutes | 95%
Without AI | 2 hours           | 55%

The gap was larger than I expected.

But what concerned me more wasn't this number—it was the feeling in my brain during both states.

With AI, I was thinking: how do I describe this requirement clearly, where does the generated code need adjustment, are there any missed edge cases.

Without AI, I was thinking: how is this method name spelled, can a Stream be used this way, is the logic I just wrote actually correct.

One is a designer state, the other is an executor state.

AI kept me in the designer state long-term—that's good: higher efficiency and better output quality.

But the cost is: the executor state's muscles haven't been exercised for a year.

I Don't Plan to "Quit AI," But I Changed 3 Habits

Reading this, you might think I'm about to say "AI is harmful, write more manual code."

No.

I still use AI tools, and will continue using them. The efficiency is there—no reason to abandon it.

But those 3 moments of "wanting AI to verify" sent me a signal: if I don't deliberately push back, this dependency will deepen, until one day, when I truly need to solve a problem on my own, I'll discover I no longer can.

So I changed 3 habits.

Habit 1: Weekly "No AI Period"

Every week, select a fixed 2-hour period, completely turn off all AI tools, write code manually.

Not chasing speed or volume—just keeping the ability to build logic from a blank page from rusting.

Like running: you don't need to run a marathon every day, but if you don't run at all for a year, you can't anymore.

Habit 2: Critical Logic, Write First, AI Verify After

Reverse the order.

Previously: AI generates → I modify.

Now: I write first → AI reviews, finds what I missed.

This change of order doesn't hurt overall efficiency, but it keeps me in the "I lead, AI assists" state rather than the reverse.

Habit 3: After Writing Code, Review Myself First Before Asking AI

This targets those 3 times of "wanting AI to verify."

It's not that AI can't look at it—it's that I first force myself to walk through the logic, and only then let AI fill in what I missed.

This rebuilds the self-verification muscle instead of outsourcing that judgment permanently.

Final Thoughts

Returning to that intern's question: When was the last time you wrote code manually without AI?

Now I have an answer—just last week.

The result: 55% completed in two hours, 11 stalls, and a first reaction after writing code of wanting AI to take a look.

This isn't a disaster.

But it is a signal. A signal saying: Some capabilities, while you weren't noticing, are quietly weakening.

AI made writing code faster, but "can write code" and "can write code without AI" are quietly separating.

I don't want to wait until that truly needed day to discover I can't anymore.

That intern now uses Cursor every day.

I didn't stop him.

But I told him:

"The smoother a tool feels, the more you must occasionally turn it off, to confirm you're still someone who uses tools, not someone who can't work without them."

He thought for a moment and said: "So it's like relying on navigation, but occasionally finding the way yourself?"

I said yes.

Have you tried turning off AI to see how fast you can still write code?

Welcome to share in the comments.


Backend AI Laboratory — No concepts, only practical combat. Code open source, weekly updates.