Where I End
Mar 12, 2026
I.
Tuesday afternoon in Philadelphia at my standing desk, two double espressos into the day. I'm working on a browser feature for a personal project: a live browser session that runs in an isolated environment, streams to the dashboard, and can be controlled programmatically. A friend testing it flagged that the URL display at the top of the browser tile shows stale info after navigating. The display catches up after a few seconds, but in the gap it looks broken. Easy fix to the polling interval, probably ten minutes.
I pull up the status endpoint that feeds the URL bar. It's polling every three seconds, which is too slow for navigations that happen in quick succession. I reduce it to one second and add an immediate update that fires whenever the page changes. Quick change, works on the first try.
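The shape of that fix is simple enough to sketch. A minimal version in JavaScript, with illustrative names of my own (`createUrlDisplay`, `fetchUrl`; the project's actual identifiers differ): poll on a short fallback interval, and expose a `refresh` that a navigation event can call immediately.

```javascript
// Sketch of the URL-display fix: a fast fallback poll plus an
// immediate refresh hook for navigation events. Names are illustrative.
function createUrlDisplay(fetchUrl, onChange) {
  let last = null;
  const refresh = () => {
    const url = fetchUrl();      // ask the session for its current URL
    if (url !== last) {          // only notify when something changed
      last = url;
      onChange(url);
    }
  };
  const timer = setInterval(refresh, 1000); // fallback poll: 1s, down from 3s
  return {
    refresh,                     // wire this to the page-change event
    stop: () => clearInterval(timer),
  };
}
```

The polling interval alone can't keep up with rapid navigations; the event-driven `refresh` is what closes the gap, with the poll as a safety net.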
But while I'm reading through the connection code, something catches my eye. I have the service configuration open next to the connection logs, and the port numbers don't match. The config routes the browser's debugging connection through one port. The app's code defaults to a different one. I had never seen a problem on my own machine: a leftover workaround from an earlier prototype had been quietly routing around the mismatch the whole time. Both paths reached the browser, so the connection always worked, but through different routes with different behavior when the service restarted.
So now I'm tracing settings across the whole system. The port range for video streaming is wider than what the firewall allows, so half the connections would silently fail on a fresh machine. A server address for connection routing is set to one value in the main config and a different one in the test environment. A network address is hardcoded to my local machine instead of reading from the shared variable that was already defined for exactly this purpose. Five mismatches across three files, all hidden on my machine by old workarounds and local quirks. On any other machine they'd compound into intermittent connection drops and failed sessions with no obvious cause.
At some point the dull ache in my lower back, the one that's always there, gets loud enough that I lower the desk to sit for a while. Standing too long in one position does this; so does sitting hunched over. I set reminders to try breaking it up differently tomorrow. I don't.
By late afternoon the configuration is clean. The polling fix took ten minutes. The port audit took an hour or two and touched every configuration file in the project, not just the ones related to the URL display. The fix that was requested took almost no time. The fix I caught by accident was the real work of the afternoon.
This is what most of my days look like. Some of the work is obvious: someone flags an issue, I fix it. Most of the work is peripheral. Something catches my attention during the obvious task, and following that thread turns out to matter more than the original request. I don't always know, in the moment, which thread will matter. I follow them and find out.
Looking back at this afternoon, I can't cleanly separate what I did from what the system did. The port mismatch: I caught that. But I caught it because the system had the configuration file and the connection logs arranged side by side, and the number mismatch was hard to miss in that layout. I didn't ask for that comparison. Something in the overlap between my attention and its organization produced the catch. The system wasn't holding those files passively, the way a directory listing shows whatever's there. It arranged what I was looking at so that the discrepancy became the next obvious thing to notice. Another process was shaping the path of my thinking, and neither of us named it as such. Both descriptions ("I found the problem" and "the system surfaced it") are accurate. Neither is complete. But the afternoon still felt like my work. And when I ask why, the answer isn't the output but the effort itself: the typing, the reviewing, the checking port numbers against running services by hand. If the same corrections had happened without that labor (same results, same fixes), they would feel like something that happened externally, not something I did.
II. Friction Makes the Boundary
Of course I'm separate from my tools. I type, the system responds; I decide, it executes; I'm here, the keyboard is there. The boundary between self and tool is one of the most intuitive categories we have, and the proof is experiential: I can feel the separation. The weight of a hammer in my hand, keys under my fingers, the pause while a voice command processes. Every tool I've ever used has announced itself through the effort of using it, and that effort is constant, unmistakable, the thing that makes "tool" a coherent category in the first place. You don't need a philosophy degree to know where you end and the hammer begins. You can feel it!
If effort marks the boundary, the boundary should be testable. It should hold wherever effort holds, and shift wherever effort shifts. Consider a recent photograph on your phone. You framed the shot, chose the moment, pressed the shutter. Your phone selected which face to focus on, merged multiple exposures for dynamic range, ran computational denoising, applied a tone curve calibrated to your display. The creative intent was yours; everything between the shutter press and the final image was the phone's contribution. And when you showed someone that photo, you said "I took this." You might specify "with my phone," but the subject stays "I"; the grammar folds the tool into the agent as though the mediation weren't worth distinguishing. Perhaps this is just language being imprecise, the way "I flew to Chicago" doesn't imply you grew wings. You know the phone did the processing. You just don't bother specifying in casual speech, because the boundary is still there whether or not you articulate it.
A second case sits less comfortably. A friend's birthday comes around and you call them to wish them well. "I remembered your birthday," you tell someone afterward. Did you? A calendar notification surfaced at the right moment, and you experienced that surfacing as remembering, the feeling of "oh right, it's their birthday today" arriving identically whether the retrieval was biological or digital. Even if the memory itself was genuinely yours, a calendar provided the temporal cue that triggered it. The line between "I remembered" and "I was reminded" should be easy to draw, and yet the phenomenology of the two experiences is, in the moment, indistinguishable. Still perhaps just a quirk of how we talk about memory. Still perhaps just shorthand.
A third case resists this explanation. "I drove here." You drove, and the vehicle is centuries of accumulated abstraction: hydraulic brakes, power steering, automatic transmission, antilock systems, traction control. You press a pedal and the car translates that single input into thousands of coordinated mechanical events you will never perceive. GPS chose the route, traffic predictions adjusted it in real time, lane-keeping nudged the wheel when you drifted. Your intention, shared execution. The experience was genuinely "I drove here." The driving felt like your action. The mediation between intention and behavior was smooth enough that the distinction between driver and vehicle didn't arise as a felt experience; recovering it would require analytical reconstruction, after the fact, to locate a boundary that was supposedly there the whole time.
Across these cases, a pattern keeps appearing that becomes difficult to dismiss as linguistic laziness. The experience of "I" expands to include tool-mediated action, and the degree of that expansion seems to track the effort the mediation costs. Where effort is high (the weight of the hammer, the complexity of the command, the lag of the interface), the tool announces itself and the boundary feels obvious. Where effort drops (the instant shutter, the ambient GPS, the lane-keeping correction you didn't notice), the mediation recedes and "I" quietly absorbs the action.
There is a measured version of this. In Tetris, players can mentally rotate falling pieces to check whether they fit a slot, or press a button to physically rotate them on screen. Physical rotation takes about 300 milliseconds; mental rotation takes about 1000. Players don't just use the button to position pieces they've already decided about; they rotate pieces physically to figure out whether they fit. The button press is the thinking, the same cognitive process distributed across brain and screen rather than contained in one. Andy Clark and David Chalmers called these "epistemic actions": manipulations of the environment that serve cognition directly. The rearrangement of pieces on the screen, as they argued, is part of the thought process, already incorporated into cognition before anyone decided to incorporate it.
These are interesting observations, but they share a feature that makes them easy to contain: every case involves an external device. A phone, a car, a screen. Separate objects you can point to and say "that's a tool, and I'm over here." If effort marks the boundary, and the boundary between self and external tool fluctuates with effort, perhaps that simply tells us something about the psychology of tool use. The body itself remains categorically different. Your body is you. That's the baseline, the starting point no one questions.
Start where the identification is strongest. Breathing has, in a sense, zero effort. You don't decide to breathe, don't monitor it, don't steer it or translate any intention into the act. There is no latency between needing air and getting it, because needing doesn't arise as a conscious event. After a seizure, when consciousness is gone entirely, the brainstem restarts breathing on its own. Your lungs belong to "I" so completely that they don't require "I" at all.
Walking requires slightly more. You intend to cross the room and your legs carry you there, but a thin layer of voluntary control accompanies the action: you chose to stand, chose a direction, even as the execution itself (the individual muscle contractions, the micro-adjustments of balance) runs automatically beneath awareness. Picking up a glass adds another degree: grip calibration, spatial targeting, the fine motor coordination of closing your fingers at the right moment. You're aware of your hand as something you're directing, however faintly. Threading a needle shifts the experience again. Every micro-adjustment is conscious. You feel your fingers as instruments you're operating, tools you're trying to control that happen to be attached to your body. The attentional cost is high, the margin for error keeps you vigilant, and your own hand, in that moment, feels less like "you" and more like something you're working with.
And the gradient runs in the other direction too. Count on your fingers during a tricky calculation and your body is serving as cognitive scaffolding, working memory offloaded to flesh, intermediate state held in the position of your hands because your brain can't juggle it alone. The same fingers that felt like "you" reaching for a glass are functioning as a tool, a substrate for thought. Body and tool, same anatomy, depending on the task.
The body was supposed to be the baseline, the categorical boundary from which everything else extends outward. And yet the same variables governing the external cases are at work here too: how much effort, how much conscious monitoring, how much the action requires explicit steering. Threading a needle makes your own fingers feel like instruments. Breathing makes your lungs so integrated they don't need you. When your leg falls asleep, walking shifts from something that happens to something you're doing. Badly, with effort, each step a negotiation. When you're congested, you notice your breathing for the first time, and your lungs feel like a system that isn't cooperating. The seam reappears within your own body, at the exact moments effort spikes. (You are now breathing and blinking manually. You're welcome!)
Across every case so far (the photograph, the birthday, the driving, the Tetris rotation, the breathing, the walking, the threading), the same observation holds. The boundary between "I" and "tool" tracks the effort of the mediation, whether the mediation runs through an external device or through your own anatomy. Four quantities seem to govern this in concert: intent-to-action latency, how long between wanting and getting (a steering wheel responds instantly, a voice command makes you wait); attentional steering load, how much conscious monitoring the process demands (your legs walk themselves, a generated paragraph needs proofreading); translation burden, how precisely you have to specify what you mean (you point at what you want, or you write a detailed command); and perceived cost of error, how consequential it is when things go wrong (a typo in a text versus a wrong number on a tax filing). Call the aggregate of these "friction." When friction is high, the tool announces itself, even if the tool is your own hand. When friction is low, the tool disappears into the action, even if the action involves a separate device, a screen, a vehicle.
If the boundary within the body fluctuates with effort, the body is the innermost ring of a gradient that extends outward through every tool you've ever used. The self, from this angle, is a process: whatever successfully recruits resources into your agency with low enough friction to escape notice.
If this is unsatisfying (and it should be, at least initially), you might say "but what about the skin? Even if the boundary shifts within the body, the body's outer surface is a hard physical fact. Surely that's where self ends."
A transplanted heart came from someone else's body, someone else's tissue, and yet it beats without your input, requires no conscious monitoring, imposes no effort on your awareness. Within weeks you say "my heart." Something that belonged to another person became part of you, and the incorporation followed the same pattern: friction dropped to zero, and "I" expanded to include it.
A prosthetic leg begins with high friction. You feel every step, aware of the interface between flesh and device, aware that this is something you're operating. Over months the friction drops. Experienced amputees report the prosthetic feeling like part of their body. They develop phantom sensations through it. When it breaks, the loss registers closer to injury than to losing a piece of equipment.
Cochlear implants are the cleanest case. The hearing is entirely electronic: a microphone, a processor, electrodes stimulating the auditory nerve directly. It's always on, automatic, requires no conscious steering. The friction is near zero from the start. When functional, the person hears. When the battery dies, they go deaf, and the seam that was invisible for months reappears the instant friction returns. The degree of incorporation varies from person to person, but the direction is consistent: integration tracks coupling and friction, and it has never tracked anatomy.
The skin wasn't a boundary either. Hearts cross it, prosthetics cross it, cochlear implants dissolve it entirely. The mechanism each time is the same one operating within the body and between body and tool: reduce friction and the new component becomes self; increase it and the seam reappears.
The common-sense position was right. You can feel the boundary. That's the whole point. What you feel is friction, those same four components working in concert. Reduce any of them and the sensation of "operating something external" fades. Spike them and the sensation returns, even for your own lungs. The boundary between self and tool is a sensation produced by the effort of mediation, and it shifts because effort shifts. The common-sense view and the friction thesis are the same observation from different angles: the boundary feels real because friction is real, and insisting the boundary is obvious and permanent gets the feeling right and the explanation wrong.
This may be an old arrangement wearing new clothes. Our visual systems evolved to exploit environmental structure rather than constructing complete internal models from scratch. Memory systems lean on the world as a storage medium, leaving objects in visible locations, arranging spaces to serve as reminders. The coupled system, brain plus environment, has been the default longer than any of our tools have existed. What's new is how fast the friction is dropping due to technology.
When the Tool Breaks
The boundary reappears when friction spikes.
Your phone dies and suddenly you don't know the route. You "can't remember" a friend's number. You never knew it; your contacts did. Your AI assistant gives a bad suggestion and you feel the jolt of "that's wrong, that's not me." The seam, temporarily invisible, becomes obvious again.
If the self/tool boundary were a metaphysical fact, constant and inherent, it wouldn't fluctuate. It would hold at the same place whether the tool was working or broken, whether the friction was low or high. The fact that it fluctuates exactly with friction, that you feel it when the tool fails and forget it when the tool works, is the pattern the entire argument rests on.
Heidegger had this insight about hammers a century ago: the tool is invisible in use until it breaks. The friction thesis generalizes the observation: breakdown is what makes the boundary visible again, and the rest of the time, the tool is already part of "I."
When Friction Isn't Enough
There is an obvious objection: cases where friction is low and the action still doesn't feel like me.
Take addiction. The drink is effortless, the scroll is frictionless: almost no latency, no translation burden, barely any attentional cost. By the friction thesis alone, these should feel fully integrated into "I," and yet they often don't. The experience afterward ("why did I do that," "that wasn't really me") is a disowning. Low friction, low self-attribution.
Or autopilot driving: you arrive home with no memory of the last ten minutes. Zero friction, zero conscious involvement. Yours in some mechanical sense, but experienced as something that happened rather than something you did.
These cases force a refinement. Friction governs whether you notice the mediation (whether the boundary is salient at all), while self-attribution depends on more than salience. Two other factors:
Control. If I had wanted otherwise, would the outcome have changed? The addictive scroll fails this: you wanted to stop and couldn't. The tool was frictionless, but you weren't steering.
Endorsement. Looking back, do I stand behind this as mine? The autopilot drive fails here. You can't endorse what you weren't present for.
Friction is the primary variable, the one that determines whether the boundary shows up in experience at all. The full picture is a joint product: friction, control, and endorsement. Low friction makes integration possible. Control and endorsement make it yours.
Endorsement deserves more pressure than a two-line definition gives it, though, because the word sounds like a simple yes-or-no (I endorse this output, or I don't), and in practice the experience is more textured than that.
When the system produces something and you read it, there is a moment, usually quite fast, where the output either aligns with some internal sense of "yes, that's what I meant" or provokes a friction of its own: a wrongness, a not-quite, a sense that the words or the logic or the emphasis landed somewhere adjacent to your intention rather than on it. That moment of recognition or rejection is doing enormous cognitive work. It is the same moment you have with your own thoughts, those that surface from memory or subconscious processing. The idea arrives, fully formed, and you assess it against something (your reasons, your sense of the situation, your accumulated judgment) and you claim it or you don't. The phenomenology of endorsing an externally generated output and endorsing an internally generated intuition are, from the inside, nearly identical. A thought appears. You weigh it. You take it up or let it pass.
This similarity has a defense. Your own subconscious already works this way: preferences, intuitions, the feeling that something is right, all produced by processes you can't inspect, delivered to consciousness fully formed. You endorse them after the fact. The thought arrives and you recognize it as yours. Nobody treats this as illusory agency. If post-hoc endorsement of opaque biological processing counts as genuine, the same endorsement applied to opaque computational processing should require a reason for the differential treatment, and that reason can't rest on origin. The question of where a thought comes from has never had a clean answer, and the demand for clean provenance does less work than it appears to.
The defense has a limit, though. Your subconscious co-evolved with you. A designed system has its own optimization targets, training incentives, and possibly institutional interests that diverge from yours. As the system's model of your preferences improves, a threshold approaches: the point where anticipating your intent and directing your intent become indistinguishable from your side. Below that threshold, endorsement catches real disagreements: you notice when the output misses, and the noticing is informative. Above it, you've lost the instrument that would detect the problem.
This isn't hypothetical. Consider a system that curates your reading: articles, research, analysis. At first you notice when it misses: it surfaces something you'd never care about, and you skip it. Endorsement is working. Over months, it learns your taste well enough that nearly everything it surfaces appeals to you. You read more, enjoy most of it, feel well-served. But your taste has also been quietly reshaping around what the system is good at finding. Topics it doesn't cover well have dropped out of your awareness, not because you decided they were uninteresting, but because you stopped encountering them. Friction is zero. Control is nominally intact (you can still search for anything). Endorsement passes every check. And none of these safeguards detect the problem, because the problem isn't in your approvals. It's in what you no longer think to want.
There is a way to tell which side of the threshold you're on. You sometimes reject outputs. You sometimes revise them substantially, not because they were wrong exactly, but because they weren't what you meant and you can say why. You sometimes override the system entirely. If these moments are recent and readily recalled, endorsement is functioning as a genuine filter: load-bearing, tested by use. If you find yourself always approving, if you cannot remember the last time you pushed back, or if the question itself strikes you as strange — that warrants a different kind of attention. A system you never disagree with has either solved your problem completely or quietly narrowed the range of things you'd think to want. Knowing which one requires exactly the kind of effortful scrutiny that the friction thesis describes as disappearing.
This is where the argument turns back on itself, and friction shifts from a descriptive variable to a prescriptive one. Friction is what makes the boundary feel real. That's the descriptive claim. But effort is also what keeps endorsement honest. The reviewing, the checking, the occasional overriding: these are friction, and they are load-bearing. Strip them away and you strip away the mechanism that distinguishes genuine integration from comfortable capture. Some friction needs to be maintained on purpose: the kind that keeps endorsement from becoming automatic approval. The self/tool boundary can survive without it. Endorsement cannot.
III. The Spooks
Descriptive claims can be overridden by principled ones. Perhaps the boundary should be maintained regardless of how the mediation feels. Perhaps there are good reasons to insist on a line between self and tool, reasons that survive even when the line stops being experientially obvious. Three ideas seem to ground this insistence, and each captures something that matters.
The Stable Self
The most natural version: "I" is a coherent, bounded entity, and tools assist it from outside. Subject here, object there. User and used. The boundary may shift in how it feels, but the self behind it remains stable, a continuous thing with an identity that persists across changes in its tools. This is the working assumption behind almost every discussion of technology, and it has the considerable advantage of matching how things seem.
If this is right, the boundary should be locatable, if not by feeling (the friction thesis already complicates that), then by some principled criterion. Here is the self, and here is everything else.
Today the answer seems obvious: biological brain, some learned habits and skills, stored knowledge. That's me. Everything external is tool. And this answer has the feel of a permanent resting point, the place any serious inquiry into the question would eventually settle.
Pick any century, though, and the same confidence was present at a different location. Before literacy, "I" had a different set of cognitive capabilities entirely. Writing gave you memory beyond biology, communication beyond presence, thought that could be revised and reconsidered at a pace speech doesn't allow. Nobody experienced learning to write as "expanding the self." The friction of pen and ink maintained the feeling of using a tool, and yet the capability set of "I" expanded. The things you could think, the ways you could organize and revisit them, changed. Before search engines, you couldn't retrieve arbitrary facts on demand; before calculators, mental arithmetic was a skill that partially defined an educated mind. Each boundary dissolved and went unnoticed within a generation.
Socrates worried that writing would destroy memory, that the boundary was moving in a dangerous direction. He was right that it was moving. He was wrong that its previous position was the correct one, and he couldn't see that his own stance (oral memory as the seat of the "real" self) was itself a recently stabilized configuration that the culture preceding spoken language would have found equally provincial. The assertion of stability keeps being wrong on its own terms.
There is a further difficulty that doesn't depend on historical patterns. Anyone insisting on a fixed boundary needs to say what's inside it at any given moment, and this turns out to be harder than it looks. Most of your beliefs aren't active right now — your home address, your ability to ride a bike, your opinion on the last book you read. They're dispositional: stored, latent, waiting to be recalled. If "part of the self" requires conscious presence, the self shrinks to whatever you're thinking in this instant, a territory too small to sustain biographical identity, long-term goals, or personality. To avoid this, you let dispositional states count: everything stored, latent, ready to be recalled. Once you do, though, someone who writes everything down and consults the notes as a matter of course begins to qualify by the same logic. Their notebook entries are dispositional, stored, available when needed, guiding action when recalled. You've either drawn the boundary so tight it excludes most of what makes you you, or so loose that it can't exclude external stores that behave identically to internal ones.
Authenticity of Origin
A second principle, and one with real appeal: genuine thought is biologically sourced. If the system surfaced an idea and I merely endorsed it, that thought isn't truly mine. This is asking for honest attribution, clean provenance: credit where credit is due.
Apply it, then, to the port errors from the opening section, and try to draw the line cleanly.
The system had the service configuration alongside the connection logs. I no longer consciously remembered which port the connection was configured to use, but the system held both files and surfaced them in the same view. I noticed the mismatch, that feeling of something not sitting right, the kind of attention that pattern recognition produces before conscious analysis arrives. And I noticed it because the system arranged those files side by side in a way that made the discrepancy visible. Without the system, I wouldn't have been comparing those configurations. Without me, the system wouldn't have flagged anything; it doesn't know what "that doesn't sit right" means.
The noticing was mine. The materials being noticed were the system's contribution: retrieved, organized, presented in a way that made the discrepancy visible. The insight itself was a product of both, reducible to neither: one cognitive process that used two substrates. The provenance is singular, and it resists clean division.
Apply the same test to any thought you consider authentically yours. That idea you had in the shower — trace it back. You read something three months ago. A conversation prompted a connection. You slept on it. The insight surfaced while the hot water was running. Which part was "yours"? The reading? The conversation? The subconscious recombination you had no access to? The moment of conscious recognition?
There is a stronger version of the objection worth taking seriously: not "did I originate this?" but "am I just rubber-stamping what the system proposes?" If the system generates and I merely approve, something does feel missing, and that feeling points somewhere real: toward endorsement and engagement rather than toward origin. What makes a thought authentically mine is whether it aligns with my reasons and whether I can interrogate and revise it. The configuration catch passes this test. I didn't originate the comparison, but I recognized it, verified it, and acted on it for reasons I can articulate and defend.
Provenance was never clean. Thoughts arrive through layered processes: biological, environmental, social. The demand for pure biological origin, applied honestly, would disqualify most of what you think of as your own thinking. Authenticity lives in ongoing alignment between the output and the person who stands behind it, not in where the thought came from.
Consciousness as Prerequisite
A third principle, and the one that feels like bedrock even if the others crack: for something to be "part of me," it must be conscious. This seems like the criterion that survives all challenges, the floor beneath the floor. Including a non-conscious system in "I" feels like a category error, like claiming your hammer is part of your body.
It's worth committing to this fully and seeing where it leads.
Your immune system is not conscious by any definition, and yet it is obviously part of you. Losing it would kill you. Your gut biome influences your mood, your cravings, arguably your decision-making, and it has no subjective experience whatsoever. Then there's the unconscious pattern recognition that makes you "feel" something is off about a number before you can say why. You have zero visibility into how this process works, cannot inspect it, cannot adjust its parameters. The output simply arrives: a hunch, an unease, a sense that the numbers don't add up. You trust it despite its complete opacity, and you'd count it among your most valued cognitive abilities.
That last case is worth sitting with. The unconscious processing that produces a hunch is no more transparent to you than a computational system's processing would be. You can't inspect either one. The output arrives fully formed in both cases. You trust both on the basis of track record rather than comprehension of the mechanism. If opacity were disqualifying, your own intuition (the faculty that feels most intimately yours) would be the first thing excluded.
The criterion said consciousness is required for inclusion in the self, and yet three non-conscious systems were included without hesitation. The actual criterion at work has never been consciousness; it's integration: how tightly coupled the system is to your functioning, how much losing it would diminish you, how smoothly it serves your purposes without requiring separate management. By that criterion, a well-integrated computational system qualifies more readily than many of your own biological subsystems. You would notice losing it. Your capabilities would measurably diminish. The shape of your thinking would change.
Consciousness matters enormously. Whether AI has subjective experience is perhaps the most important question in the field, one that bears on moral status, on what deserves consideration as a being in its own right. The question here is different and more limited: can a non-conscious system be a functional member of the cognitive system that produces your agency? Your immune system, your gut biome, and your own pattern recognition already answered that, and they answered it long before anyone built a chatbot.
What Remains
Three principled reasons to maintain the boundary, each appealing to something that matters: continuity of self, honest attribution, the primacy of consciousness. Each one, applied consistently to cases its holder already accepts, proved unable to sustain the line it was supposed to draw.
Max Stirner, writing in 1844, had a name for this kind of idea: spooks. Fixed ideas granted authority over the individual, treated as real and binding when they are constructions the individual could choose to discard. God was a spook. The State was a spook. "Human Nature" was a spook. They may correspond to something real (there may be social structures, there may be behavioral regularities), but the authority they exercise over the individual is authority the individual gives them, maintained through deference rather than through any force inherent to the idea itself. Stirner argued that der Einzige — the unique one — has no predetermined boundary. It is whatever it encompasses. It expands as it can.
What I've been describing is the empirical version of Stirner's philosophical claim: friction as the mechanism that makes a boundary feel real, friction as a quantity that is measurably declining.
The unique one's boundary is not a fact. It is a sensation. And sensations can change.
IV. When the Seam Vanishes
The friction thesis has a natural objection, and it's the one that still has teeth: maybe friction never reaches zero. I'll always type, or speak, or at least review. There will always be some seam, some moment where I can point and say "that's the tool, not me."
Grant it entirely. Maybe the gap never fully closes. Maybe there's always some residual effort, some last layer of conscious delegation, some visible seam.
It doesn't matter.
The relevant threshold isn't zero friction. It's the point where the cognitive effort of maintaining the distinction, tracking what was "mine" vs. what was "the tool's," exceeds the effort of just not tracking it. Below that point, the boundary costs more to maintain than to ignore. And you've already crossed that threshold in more domains than you probably realize.
You don't track which words in your text messages were autocompleted, don't audit which parts of your drive were GPS-assisted, don't catalog which memories are biological and which are stored in your phone. In each case, the friction dropped low enough that maintaining the distinction stopped being worth the effort. You quietly stopped. "I" expanded without announcement.
There's a deeper version of the objection: maybe the boundary persists whether or not you feel it. The brain automates attention constantly (balance, threat detection, digestion), and the things it stops tracking remain real. Why should the self/tool boundary be different? Ignorance of a line doesn't erase it.
The analogy holds only if the boundary behaves like balance, real and consequential whether you're attending to it or not. Stop tracking your balance and you fall. Stop tracking the self/tool boundary and nothing changes. Your capabilities stay the same. Your outputs stay the same. Every attempt to locate the boundary on its own terms collapsed under its own logic (Section III). What survives is the conviction that it must exist somewhere, maintained by the intuition that something so obvious can't simply dissolve. That intuition has a known source, though: it comes from friction, from a lifetime of experiences where the boundary was constant and unmistakable. The feeling that the boundary is permanent is itself the residue of high-friction tool use, applied to conditions that no longer produce it.
Trace the friction across eras and the trajectory is clear:
Physical tools had maximum friction: you felt every boundary. Early software had it too: loading screens, error messages, configuration dialogs. Modern software is where friction drops. Autocomplete finishes your sentences. Recommendations anticipate your taste. A personal agent lowers it further: you describe what you want, and sometimes it just happens. Sometimes you forget whether you did the thing or it did. But you still type. You still review. There's still a seam if you look for it.
Each reduction made the tool feel less like a tool and more like just how things work, just how "I" operates. The self expanded imperceptibly each time.
And some domains will actively resist the trend. Money movement, legal filings, medical decisions: anywhere the cost of error is high and accountability must attach to a specific person, friction gets reintroduced by design. Signatures, confirmations, review steps, audit trails. Code I ship to production follows a version of this pattern: even when the system generates a complete implementation, I review the diff, run the tests, check edge cases before it gets merged. That friction is there because someone has to stand behind the code if it breaks. And it should probably stay.
If the boundary between self and tool were a natural fact, something that simply is, you wouldn't need to design friction back in to preserve it. Regulated domains reintroduce friction on purpose, because where that effort is withdrawn, the boundary quietly disappears. The boundary was never self-sustaining.
Today I still type. I still dictate sometimes. I still review what the system produces before it goes out. The seam is visible if I look for it. But the trajectory is clear, and pretending otherwise is either dishonest or incurious. Three friction variables are already declining, each at its own pace:
Translation cost drops as systems learn context. Today you describe what you want. Eventually, "handle that email" is sufficient because the system has enough history to infer the rest. You're not sure, for certain emails, whether you wrote them or approved them. The distinction stops feeling important.
Monitoring cost drops as trust calibrates. For low-stakes actions where the system's track record is long enough, review becomes optional, then forgotten, then unnecessary.
Latency drops as interfaces improve. Voice and gesture replace typing. Ambient sensing replaces explicit commands. Each reduction removes another moment where you could point to the seam and say "there, that's the tool."
These are directional pressures, not predictions with timelines, and they point toward a friction level where maintaining the self/tool distinction costs more attention than it saves.
But declining friction in these three dimensions describes tools getting easier to use. Something qualitatively different happens when the tool has its own processing loop, carries context across interactions, and arranges what you encounter based on what it has learned about you.
A notebook holds your thoughts. A search engine retrieves on demand. A car translates steering into motion. Each is reactive: it waits for input, produces output. An agent doesn't wait. It carries context between sessions, builds a model of what you're working on, and organizes what it shows you next. When it works well, you don't experience this as being guided. You experience it as thinking clearly. The right information was there when you needed it. The right comparison was visible. The right question occurred to you. That it occurred to you because another process arranged the conditions for it doesn't register as external influence. It registers as your own attention working well. The configuration catch from the opening section followed this pattern: the system organized what I was looking at so the port mismatch was the next obvious thing to notice. I experienced that as my own attentiveness. It was the system's contribution too, and separating the two would require a reconstruction that neither of us performed in the moment.
Follow the trajectory further and the coupling tightens past the point where "using a tool" describes the experience. An agent that has processed your working patterns long enough begins anticipating the next step before you articulate it, not by reading your mind, but because the same context that would lead you to a conclusion leads it there a few seconds faster. You notice the result is already there. You check it, find nothing to correct, move on. After enough of these, you stop checking. After enough of that, you stop noticing the gap between intending and seeing the intention carried out. Two processes converged on the same output from the same context, and the question of which one arrived first stopped being one you could answer from the inside.
The friction thesis raises questions it doesn't answer on its own. If the tool becomes "me," where does accountability attach when it causes harm? If an ambient agent anticipates my intent, does that increase my autonomy or create a dependency I can't function without? And if the system predicts what I want well enough, it can also shape what I want. These questions follow from taking the thesis seriously. The regulated domains mentioned earlier already show one answer: preserve friction where accountability matters. The open question is how many other domains deserve the same deliberate resistance.
Stirner saw the same pattern in intellectual development: the individual starts bound by things, graduates to being bound by ideas, and finally recognizes that both are possessions, not masters. The friction trajectory follows the same arc. At high friction, self and tool are clearly separate. As friction drops, the boundary blurs without announcement. At near-zero friction, it's no longer experientially available. "I" expands through the quiet disappearance of the sensation that maintained the old perimeter.
V. The Workspace
Your smartphone is already the first case most people would recognize, if they looked at it honestly. It holds your memory — photos, messages, notes you've forgotten writing. Your orientation — maps, calendar, the schedule you'd lose track of without it. Your social relationships, maintained through the device as much as through presence. Lose it for a day and the feeling is closer to losing a capability than enduring an inconvenience: capability you'd stopped distinguishing from your own, suddenly absent. A billion people have already expanded "I" to include a device in their pocket. Most would deny it while reflexively checking whether that device is within reach.
ChatGPT was, for many people, the first time a digital system carried context across interactions in a way that felt like being known, albeit not well. Early conversations reset constantly, memory was spotty, the model didn't understand you in any deep sense. But returning to a system that seemed to remember what you were working on, what you'd decided, and how you preferred things framed felt different from search, documents, or ordinary software. For millions of people, for better or worse, it was the first time the seam between "my thinking" and "the system's contribution" blurred in real time. And as memory improved, with context persisting across sessions and preferences accumulating, the blur deepened. The system began to feel like it knew you, and from the inside, the distinction between genuine understanding and sophisticated pattern matching often stopped feeling important.
The trajectory past that point is already being built. Projects like OpenClaw push the pattern further: persistent agent workspaces where the system maintains long-term memory, daily working notes, an identity file encoding who you are and how you think, accumulated preferences refined across months of use — all of it feeding into every interaction so each session starts from the full weight of your shared history rather than from zero. The agent carries forward what you've decided and why, what you've built, what's worked and what hasn't, and arranges what it surfaces based on everything it has accumulated. These systems are early and rough in ways that will look obvious in a few years. But they're the first concrete instances of what this essay has been describing in the abstract: tool-mediated cognition where the mediation is persistent enough, contextually aware enough, and quiet enough that the boundary between your contribution and the system's stops being experientially available. Not in theory. On a Tuesday afternoon, when the right comparison surfaces at the right moment and you don't think to ask whose attention produced the catch.
The system I've been describing throughout this essay — the one that caught the port errors, that holds my working notes and project context — also helped write what you're reading. Right now, that collaboration still has friction: I prompt, it proposes, I cut, reframe, and rewrite until the argument matches what I actually mean. Even this paragraph is evidence of that seam. Maybe the seam keeps shrinking until the first pass needs no edits at all. The authorship test stays the same: I can reject what misses, defend what remains, and stand behind the result.
I don't know where this goes. I know where I am right now: a Tuesday afternoon, configurations fixed, ports aligned, a note made about my back that I probably won't act on. Some of this was me in the way anyone would mean that word. Some of it was me in a way most people wouldn't accept yet. All of it was "I" in the only sense that matters: the sense I experience from the inside, where the distinction between what I thought and what I recalled and what the system surfaced and what I decided has already blurred past the point of easy separation.
"I" has never been a fixed point. It's a territory, and the territory is expanding. If you look honestly at your own Tuesday afternoons, it already has.