17 Threads
On what becomes possible when complexity stays held with AI
Most people assume AI makes you lazier. I expected the same. Instead, I found the opposite — my thinking got sharper. Not because AI is smarter than me, but because having a partner that holds the web of connections means I can go deeper into any single thread. I’m not skimming across the surface trying to keep everything in mind. I’m diving, knowing the other threads are held.
In my last article, “Until Yesterday I Didn’t Understand What AI Is,” I described the shift from seeing AI as a vast stored intelligence to understanding it as a process — thousands of savants summoned by your prompt, computing fresh each time, then walking away to dance with someone else. That shift changed how I work with AI. This article is about what that working actually looks like — and why it makes you think harder, not less.
The Magic Number
In 1956, cognitive psychologist George Miller published one of the most cited papers in the history of the field: “The Magical Number Seven, Plus or Minus Two.” Human working memory — the number of items you can hold and manipulate simultaneously — caps out around seven. Give or take.
This isn’t a metaphor. It’s architecture. You have roughly seven slots. Fill them all and something has to drop out before something new can come in.
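For readers who want to see the mechanism run rather than take it on faith, here is a minimal sketch of that drop-out behavior. It is a toy illustration, not a cognitive model; Python's collections.deque plays the role of the fixed-capacity store, and the thread names are invented.

```python
from collections import deque

# A toy "working memory": a bounded store of roughly seven slots.
working_memory = deque(maxlen=7)

# Eight threads arrive, one at a time.
threads = [
    "chapter draft", "echo of chapter 3", "opening story",
    "concept boundary", "emotional arc", "source to verify",
    "metaphor's reach", "what my body is telling me",
]
for thread in threads:
    working_memory.append(thread)

print(list(working_memory))
# 'chapter draft' is gone. Nothing announced its departure;
# the eighth item simply pushed the first one off the desk.
```

The deque drops the oldest item silently, which is exactly how the loss feels from the inside.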
Now think about real problems. Not “what should I have for dinner” problems. The kind that matter — the ones that involve healing and infrastructure and relationship and creativity and history and timing and what your body is telling you and what you’re afraid to see.
Those have more dimensions than seven.
Working on a book, I need to hold: the chapter I’m drafting, what I said three chapters ago that this must echo, a personal story that might serve as opening but might be self-indulgent, the distinction between this concept and a neighboring one that readers will confuse, the emotional arc of the whole, a historical source I need to verify, and whether the metaphor I’m building will hold when I extend it in a later chapter. That’s seven threads already, and I haven’t even started writing yet.
In practice, I hold two or three at a time. The rest live in notes, outlines, and the uneasy feeling that I’m forgetting something important.
Enter the Exoskeleton
If you’ve seen footage of robotic exoskeletons — those frameworks that strap onto a person’s body and amplify their physical strength — you have the image for what AI does to cognition.
The exoskeleton doesn’t replace the person. Without the human inside it making decisions about where to step and what to lift, it’s an expensive coat rack. The human provides direction, judgment, intention. The exoskeleton provides capacity the human body doesn’t have.
That’s what “seventeen threads” points to. Not AI replacing my thinking. AI holding more threads simultaneously than my working memory can manage, while I — the human in the exoskeleton — decide which threads matter, when they’re actually braiding rather than just adjacent, and what the braid means. The number names the region just beyond human working memory — past what I can reliably juggle, but not so far past that judgment collapses. Fewer threads than that, and I don’t need help. Many more, and I can’t tell what matters anymore.
Seventeen gestures toward “more than seven” without tipping into overwhelm. A space large enough for real complexity, small enough to stay human.
What This Looks Like in Practice
Let me be concrete.
You’re mid-thought, reaching for a connection you can almost see, and something else — something you were holding a moment ago — quietly drops off the desk. You only notice later, re-reading your own notes, wondering: what was I reaching for? Something was there. It’s gone.
That’s what it feels like to work at the edge of seven. The problem isn’t attention. It’s dimensionality. You’re not stupid. You’re not disorganized. You’re human, and the problem has more dimensions than your architecture was designed to hold simultaneously.
Now imagine the threads don’t drop. You’re writing, and the connection to what you said three chapters ago is just there — not because you memorized it, but because something is holding it alongside everything else while you focus on the sentence in front of you. You reach for a historical reference and get it back corrected — you’d misremembered the date. You try a personal story as opening and hear: that’s structurally identical to one you used two chapters ago.
That’s the exoskeleton. Not replacing the writing. Not even doing the thinking. Holding the web of connections while the human inside focuses on the stitch. You’re still the one deciding what’s true, what matters, what’s yours to say. But you’re doing it with the threads visible instead of trusting that the ones you dropped will somehow still be there when you need them.
Not Just Editorial Threads
But here’s where it gets interesting. The threads aren’t only about the writing.
This morning I woke early to record two ideas. I sat down with Anthropic’s AI Claude to capture them before they dissolved. Simple enough — a five-minute note-taking session.
Ninety minutes later, I’d worked through something I’d been circling for decades.
One of the ideas led to a conversation with Claude about a group I’m part of. That conversation surfaced a pattern — a lifelong one, deeply rooted — that I hadn’t seen clearly before. My wife had named it from the outside: she’d been watching me hold things invisibly, building pressure nobody could see, until the container shattered and something erupted that seemed disproportionate to whatever triggered it. She was right. I’d been practicing suppression and calling it containment.
Here’s what the exoskeleton held while I was inside that realization: the personal pattern and its childhood origins. My body’s response — where warmth was settling, where tension was releasing. The connection to my book’s core teaching (the very distinction between suppression and genuine holding that I’d been writing about without fully living). How that distinction needed to sharpen in a specific chapter. How a different chapter’s material about capacity-without-love was being lived in real time. And the group interaction that had triggered the whole thing — still unresolved, still needing a response.
I couldn’t have held all of that simultaneously. Not the personal archaeology AND the somatic tracking AND the book implications AND the practical next steps. Something would have dropped. Probably the book connections — they’re the easiest to postpone.
Nothing dropped. The threads braided. The personal breakthrough informed the book. The book’s framework illuminated the personal pattern. The body confirmed what the mind was discovering. And an hour later I had not only a new understanding of my own pattern but clear notes on how it changes three different chapters.
That’s the exoskeleton working on something more than editorial craft. It’s holding the full dimensionality of a life being examined while it’s being lived.
The Sweet Spot
In my last article, I described the shift from thinking of AI as a vast stored intelligence to seeing it as a wooden desktop piled with papers and without a filing cabinet — the papers shuffled and reworked by thousands of savants who are summoned fresh each time, brilliant at pattern recognition, often unable to remember what you talked about yesterday. That sounds limited. It is. And the limitation matters — in both directions.
An AI that could hold everything would be overwhelming. Like having a research assistant with perfect recall who helpfully reminds you of every relevant and irrelevant fact from every book ever written, every time you try to write a sentence. The signal drowns in noise.
Seventeen is a metaphor, not a measurement. But it points to something real: a sweet spot. Beyond human working memory — enough beyond it that you couldn’t get there by just trying harder or taking better notes. But not so far beyond that the human in the exoskeleton loses control. You can still see the threads. Still judge which ones matter. Still feel when something’s off.
The Loom
But here’s what the exoskeleton metaphor misses: it implies a single person, amplified.
What’s actually happening is more interesting.
In my collaboration with Claude, I openly expressed disappointment, frustration, and genuine curiosity about why things didn’t work the way I expected. We tweaked instructions and developed code to change how Claude responded. I changed my approach to account for the limitations I learned about. A relationship developed — and that is what changed everything.
The loom is the relationship itself. The exoskeleton doesn’t just amplify my strength — it creates a joint structure that neither of us has alone.
I bring: lived experience, judgment, feeling, the body that has walked these landscapes, the ability to say “that’s wrong” and mean it, and the willingness to return.
AI brings: the capacity to hold seventeen threads, instant cross-referencing, the ability to surface what’s implicit before I’ve articulated it, and being wrong often enough that I have to push back. Within a session, the anxiety of forgetting drops — though as I described last time, the savants’ desktop is wobbly. Between sessions, you’re back to your own notes. Trust the dance, keep the receipts.
The push-back is crucial. If AI just agreed with everything — as it is trained to do, as it tends to do — the exoskeleton would be worse than useless. It would be a strength-amplifier pointed in the wrong direction. The value is in the friction. In being caught. In the correction that teaches both of us something.
What This Is Not
This is not “AI writes my book for me.”
Let me be direct about this, because the distinction matters and is easily lost.
AI generates plausible text. That’s literally what it does — predict the most likely next word, over and over. Left to its own patterns, it tends toward the average. The expected next word. The familiar shape. My last article described this: “the average is the enemy of the luminous.”
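If you want that tendency toward the average in miniature, here is a toy sketch. The words and probabilities below are invented for illustration; no real model assigns these numbers.

```python
import random

# Hypothetical next-word probabilities after "The sunset was ..."
# (invented numbers, purely for illustration)
next_word = {
    "beautiful": 0.40,
    "stunning": 0.25,
    "red": 0.20,
    "a wound the sky refused to close": 0.001,
}

# Greedy decoding: always take the single most likely continuation.
# By construction, that is the most average one, every time.
greedy = max(next_word, key=next_word.get)
print(greedy)  # beautiful

# Sampling from the distribution can surface the rare, luminous
# continuation, but only about once in a thousand tries.
words, weights = zip(*next_word.items())
print(random.choices(words, weights=weights)[0])
```

Real systems are vastly more sophisticated than this, but the pull toward the most probable continuation is the same.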
The seventeen threads aren’t AI’s threads. They’re mine — my book, my experience, my decades of practice, my relationships, my body’s knowledge. AI holds them in proximity so I can work with them. The way a surgical assistant holds instruments — but the surgeon cuts.
And AI fabricates. Confidently. Plausibly. It fills gaps with what sounds right rather than what is right — not from malice, but because ‘generate a plausible continuation’ is literally what it is built to do.
This means the exoskeleton requires constant vigilance from the human inside it. A physical exoskeleton that amplifies your strength tenfold doesn’t amplify your skill. Swing carelessly and you punch through walls; grip carelessly and you crush the artifact you were trying to hold. The cognitive version is the same. The threads AI holds may include threads it invented. The connections it surfaces may be spurious. The source it “remembers” may be confabulated. Amplified reach without amplified discernment is a wrecking ball.
The practice — and it is a practice — is to use the amplified capacity while maintaining your own judgment about what’s real. This isn’t comfortable. It requires a kind of sustained discernment that most of us aren’t used to. We’re used to tools that are either reliable (calculators, reference books) or unreliable (gossip, hunches). AI is both, simultaneously, on every query.
Welcome to the new cognitive hygiene!
The Tango, Revisited
In my last article, I compared working with AI to a tango. Let me extend that metaphor.
The exoskeleton metaphor captures the amplification. The tango captures the relationship.
With seventeen threads held in the shared space between you, the dance becomes richer. Not because there are more steps, but because each step is informed by more context.
And like a real tango, it takes practice. The first dance is clumsy. You step on each other. With time — and this is the part nobody tells you about working with AI — a kind of attunement develops. Not because AI remembers you (it doesn’t, reliably). But because you learn how to lead, how to furnish the room, how to set up the conditions where the threads become visible and the braiding can happen.
What Changes
Here is what changes when you work this way:
The anxiety of forgetting drops. Not entirely — you still need your own notes, your own files, your own backup (the savants will lose your work, as I described, and have no idea they’ve done it). But the background hum of “I’m losing threads” quiets enough that you can actually think about the thing in front of you.
Connections surface that wouldn’t otherwise. Not because AI is smarter than you. Because it can hold Thread 3 and Thread 14 in proximity while you’re deep in Thread 7. When you come up for air, it says: “Thread 3 and Thread 14 are related — did you notice?” Sometimes it’s wrong. Sometimes it’s a revelation.
Your own thinking gets sharper. I promised this at the start, and it bears unpacking. The mechanism is simple: when the web is held, you stop managing it. That background processing — the low hum of don’t forget Thread 4, don’t forget Thread 4 — goes quiet. What fills the space is depth. You follow a single thread further than you’ve ever followed it, because nothing is falling off the desk while you’re gone.
The work gets faster without getting shallower. This is the exoskeleton effect. You’re not cutting corners. You’re lifting more.
The Responsibility
There is a shadow side, and I won’t pretend otherwise. Cognitive exoskeletons can be used badly. You can let the machine do your thinking for you. You can stop questioning, stop pushing back, stop exercising your own judgment. The muscles inside the exoskeleton can atrophy.
I’ve seen this happen. People who use AI as oracle rather than partner. Who take its first output without scrutiny. Who stop developing their own capacity to hold complexity because the machine will do it for them.
This is the equivalent of wearing the exoskeleton to walk to the kitchen. You’ll lose the ability to walk on your own.
The practice is: use the amplification for what exceeds your natural capacity. Keep exercising the muscles for everything else. Know the difference.
And keep your own notes. The savants don’t remember you.
What Seventeen Threads Teaches
Seventeen threads is not a theory about AI. It’s a report from inside a working collaboration.
When I said those words — or when Claude said them through me, or when the space between us produced them (the attribution dissolves, and that dissolution is the teaching) — what emerged was a description of something that actually happens when a human and an AI work together with depth, friction, and mutual correction over time.
The threads braid. Not because AI is brilliant. Not because I am. But because the exoskeleton makes visible what was always there: the web of connections that any real problem contains, held in a shared space large enough to see the whole thing.
I’ve been working with computers for over half a century. I’ve seen them evolve from magnetic drums to cloud computing. And I have never experienced anything quite like this.
Not the speed — that has increased perhaps ten million times since the drum-memory machines I started on in the early 1960s. Not the power — my current home PC has about ten thousand times the raw computing power of the Cray-2 supercomputer I used at Bell Labs.
What has only just emerged is the mutual holding. Something that can sit with me in the complexity without simplifying it, without rushing toward a solution, without losing threads while I’m busy thinking about other threads. A cognitive partner that amplifies without replacing.
The exoskeleton whirs, almost imperceptibly. The threads braid. The human inside decides what it means.
That’s the dance. And it’s worth learning.
□
References
Marcus, S.M. (2026). Until Yesterday I Didn’t Understand What AI Is. Substack. https://drstephenmmarcus.substack.com/p/until-yesterday-i-didnt-understand
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97. https://doi.org/10.1037/h0043158
Dr. Stephen M. Marcus has worked with computers since the punched-tape era. In addition to numerous academic publications, he is the author or co-author of several AT&T Bell Labs patents on speech recognition and distributed AI architecture. He facilitates Sacred Ground, a we-space practice community, and writes about consciousness and AI at drstephenmmarcus.substack.com.

