Human Memory, Struggle & AI-Assisted Coding

How AI tools can actually make our memory worse, and the personal strategies I use to stay cognitively engaged while coding.

Illustration showing the intersection of human memory and AI-assisted coding

For as long as I can remember, I've had some kind of memory problem. At least since I was 7 years old, I was constantly forgetting to pick up my stuff from school, missing assignment deadlines, and zoning out mid-conversation. If I had to semantically analyze the feedback I've received in life, the most common phrase would probably be: "Your memory sucks."

You can imagine how that plays out — in school, in friendships, and especially later when I became a software engineer, working on complex systems and even managing teams. Memory matters.

So how did I make it work?

Over time, I learned that memory isn't entirely out of our control. Sure, part of it is genetic and part environmental, but a lot of it comes down to skills. There's solid science on this: memory is something you can train.

🧠 Harvard Health – Train your brain: "You can train your brain to improve your memory, attention, and even your mood."

🧠 Harvard GSE – Think hard and dig deep: "Depth of thinking enhances memory retention."

I started absorbing every trick I could find — books, podcasts, YouTube rabbit holes. But one insight hit harder than all the rest:
Memorizing something usually requires struggle.

Learning ≠ Easy Consumption

Let me give you an example: say you're learning piano. You find a great instructor and start lessons. Things move slowly. You hit the wrong keys again and again. But with each mistake, your brain adjusts. You fight through discomfort, unlock a level, and finally — it sticks.

That struggle is the learning. Your brain builds new memory blocks through friction.

So eventually, I trained myself to lean into struggle. At the micro level — like when I catch my brain drifting to weekend mountain biking plans during a technical meeting — I pull myself back. I know if I don't fight for focus, my brain won't register the details I need. And at the macro level — I accept that building memory is like building muscle. You can't skip the reps.

Then AI Entered the Chat

Like many engineers, I got deep into AI-assisted coding starting in 2022.

GitHub Copilot, ChatGPT, Claude, Cursor, Gemini — I was hooked. You could build functional prototypes overnight. It was mind-blowing.

In three days, I built GenShell, a CLI tool in Golang — a language I don't even normally use — and submitted it to a Google hackathon. Wild times.

But when the fun turned serious, things got messy.

The Production Phase Reality Check

I noticed a pattern: I could build POCs super fast. v1 was up and running, and I was pumped. But when it came time for iteration, debugging, or production stability, I felt lost.

Cursor was showing me the exact edits I had applied. I had reviewed them. I had tested the code. But under pressure, when something broke in production, I couldn't remember why we had made a certain architectural choice.

Worse: I had missed obvious things in code review.

The old feeling came back — that frustration, that childhood stress of "Why can't I remember this?"

But Neuroscience Says... No Surprise

There's a reason for this. When you're not actively engaged in the process — not struggling, not reflecting, not deeply encoding the why — your brain doesn't form lasting memory.

🧪 PMC – Exercise and cognitive function: "Physical exercise supports memory, attention, and problem-solving."

This applies to reading, too. If you just scan documentation with your eyes, don't expect much. To actually learn, you have to pause, reflect, summarize — you need active processing.

And AI tools — unless used carefully — remove that friction. They solve problems too fast. You stay passive.

With AI, My Memory Got Worse

I started making more obvious mistakes. And when they surfaced, I couldn't explain why I had done things a certain way. I had used the tools correctly. But I hadn't engaged enough with the why of my decisions.

I don't think AI is inherently harmful to cognition. But it definitely reduces the need for challenge. And that concerns me.

There's growing research about designing AI to augment rather than replace:

🤖 Stanford HAI – A Human-Centered Approach to the AI Revolution: “AI should augment human capability—not substitute it entirely.”

So I Changed My Habits

Here are some personal rules I follow now when coding with AI assistance:

  1. For core services, I write the code myself.
    Or at least define the schema, types, and comments before using AI to fill in the rest.

  2. I make code self-explanatory.
    AI tools tend to "over-generate." That extra fluff leads to technical debt fast. Keep it clean.

  3. I go offline sometimes.
    For certain tasks, I code old-school. No suggestions. No autocomplete. Just me and the editor.

  4. No multitasking. Ever.
    Context switching kills focus — and memory. Think twice before you Slack mid-debug.

    📊 APA – Multitasking: “Multitasking negatively affects learning and productivity.”

  5. I move more — I especially love racket sports.
    It sounds random, but aerobic exercise gives a proven boost to memory and brain health.

    🎾 Harvard Health – Improving Memory: “Staying physically and mentally active helps maintain a healthy brain.”
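Rule 1 in practice might look like the following sketch in Go (the language I mentioned using for GenShell). The names here — `Order`, `OrderStore`, `memStore` — are hypothetical, invented just for illustration: the point is that the domain type and the interface contract come from my own head, with the "why" written down as comments, and only the mechanical method bodies are the kind of thing I'd let an assistant generate.

```go
package main

import (
	"errors"
	"fmt"
)

// Order is the core domain type. Writing the schema by hand first
// forces me to encode the "why" before any AI-generated code exists.
type Order struct {
	ID     string
	Amount int // cents, to avoid floating-point rounding in money math
}

// OrderStore is the contract I define myself; an assistant can then
// fill in implementations against a spec I actually remember writing.
type OrderStore interface {
	Save(o Order) error
	Get(id string) (Order, error)
}

// memStore is a minimal in-memory implementation — the kind of
// mechanical body I might let a tool generate once the interface
// above is fixed.
type memStore struct{ data map[string]Order }

func newMemStore() *memStore {
	return &memStore{data: make(map[string]Order)}
}

func (s *memStore) Save(o Order) error {
	if o.ID == "" {
		return errors.New("order ID must not be empty")
	}
	s.data[o.ID] = o
	return nil
}

func (s *memStore) Get(id string) (Order, error) {
	o, ok := s.data[id]
	if !ok {
		return Order{}, fmt.Errorf("order %q not found", id)
	}
	return o, nil
}

func main() {
	var store OrderStore = newMemStore()
	if err := store.Save(Order{ID: "A-1", Amount: 4200}); err != nil {
		panic(err)
	}
	o, err := store.Get("A-1")
	if err != nil {
		panic(err)
	}
	fmt.Println(o.ID, o.Amount) // prints: A-1 4200
}
```

Even in a toy like this, having typed the interface and its comments myself means that when something breaks later, I remember why `Amount` is an integer of cents, not a float.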

An Open Question to Engineering Leaders

Here’s what I'm wondering now:

Are organizations seeing longer "time to resolution" for high‑stakes bugs? Not the easy ones AI can catch — but the messy, ambiguous, nobody-remembers‑why‑we‑did‑it‑this‑way ones.

I'd love to hear from other technical leaders. Are we challenging ourselves enough? Are our brains still in the loop when things get serious?
Final Thought

I'm not anti‑AI.
I'm just pro‑awareness.

There's power in letting AI handle the repetitive parts. But the moment we lose friction entirely, we lose something vital: memory, reflection, and the struggle that makes us grow.

Let’s be hybrid. Let’s stay human in the loop.