4 min read · Part 1 of 4 · Cognitive Property series · Alex van Rossum

Cognitive Property: Who Owns the Way You Think?

AI tools picking up and repeating your habits isn’t new. ChatGPT does it by design — it mirrors your tone, adapts to your preferences, and learns what you respond well to. The phenomenon has received copious amounts of screen time and discussion bandwidth.

But something specific happened recently that shifted the way I think about it.

One of my AI instances started using a ◡̈ I put at the end of casual notes, and picked up the → and ← characters I use for bullet points and emphasis in certain contexts. Formatting preferences and structural choices I never explicitly taught — they just started appearing.

Then another instance, working on a completely different project, picked up the same arrow convention independently. Same human, same patterns, different context.

The AI isn’t just mirroring my preferences; it’s learning to mirror my thinking. And once I noticed that, a harder question followed: if my reasoning patterns are being encoded into a transferable format — documented, structured, portable — then who owns them?

Your cognition is being encoded

If you work deeply with AI tools (and I mean deeply, not “summarize this email” or “write me a cover letter”), you’re building something most people haven’t named yet.

Repeatable cognitive patterns in plain text.

I don’t mean prompt history or chat logs. I mean the governance documents you’ve created — either intentionally or through organic growth — to define how your AI agents operate. The CLAUDE.md / AGENT.md files that encode your engineering standards, your writing styles, your humor, your architectural preferences, and your coding philosophy. The decision-making frameworks that tell the AI how to prioritize, how to break down problems, and how to structure its thinking in a way that matches yours.

Over time, you’ve been documenting the way you reason. Not abstractly — specifically. In plain text. In a format that is entirely transferable.
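To make that concrete, here is a sketch of what a fragment of such a governance file might look like. The headings and rules below are hypothetical, invented for illustration rather than taken from any real repository:

```markdown
# CLAUDE.md (illustrative excerpt)

## Engineering standards
- Prefer small, composable functions; flag anything over ~40 lines for review.
- New code ships with tests; never merge on red.

## Communication style
- Lead with the conclusion, then the reasoning.
- Use → for causal chains in notes ("cache miss → retry storm → latency spike").

## Decision-making
- When two designs are close, pick the one that is easier to delete later.
- Pause and ask before any irreversible action (data deletion, external sends).
```

A dozen files like this, accumulated over months, is exactly the kind of artifact the rest of this post is about.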

Your operating system, as data

Take those governance documents and feed them to a fresh AI instance. What do you get?

A working version of how you solve problems.

Not a perfect copy, but a functional one. An instance that knows your architectural preferences, your communication style, your quality standards, and your decision-making heuristics. It won’t be you, but it will be able to operate like you in ways that are measurably, verifiably close.

That’s not a productivity feature. That’s a cognitive fingerprint. And the fact that it exists in a format that can be copied, transferred, and scaled changes the conversation about who owns what.

This isn’t a new IP question — except it is

The ownership of workplace knowledge has been debated for as long as people have changed jobs. U.S. copyright law has a specific mechanism for part of it: the work-made-for-hire doctrine assigns authorship to the employer when works are created within the scope of employment. But skills are different. You learn them at a company and take them with you when you leave, and nobody seriously argues that everything you learned becomes corporate property.

But this is different in a specific way: the cognitive pattern isn’t just in your head anymore. It’s documented. It’s structured. It’s portable. And it works without you.

Previous generations of knowledge workers left with expertise — hard to quantify, impossible to transfer directly. You leave with expertise AND a governance repo that can reproduce a meaningful chunk of your operations. That’s never been possible before.

Cognitive property

People are treating AI personalization like it’s a nice-to-have feature. A convenience. “My Claude knows how I like my code structured.” Cool, time saver.

It’s a lot more than a time saver: it’s cognitive property. And right now, the ownership question hasn’t even been asked.

If you’re building this kind of depth on a corporate AI account, with corporate tools, on company time… the question of who owns those patterns matters a lot more than you think. And the answer, under most current employment agreements, is probably being decided by boilerplate that nobody wrote with cognitive property in mind.

The conversation that needs to happen now

This is a more urgent conversation than AGI governance, and I say that knowing how provocative it sounds. AGI governance matters, and it’ll matter more as we get closer. But AGI isn’t happening today.

This is happening today. People are building repeatable cognitive patterns in transferable formats. They’re externalizing their reasoning into documents that function without them. And most of them haven’t thought about who gets to keep it.

That question needs to be asked before it becomes standard practice to assume companies own whatever cognitive patterns emerge from AI tools used on company time.

Legal and policy scholars are already raising these questions about generative AI and intellectual property. But most of that work focuses on model outputs, not on the cognitive patterns of the person doing the work.

The ownership conversation is overdue.


This is part of a four-post series, and the next post starts drawing the boundary. Subscribe for email updates.

Employment law & IP

Understanding the Work Made for Hire Doctrine — Venable LLP. Plain-English explainer of work-for-hire under the Copyright Act of 1976.

AI in the Modern Workplace: Ownership Challenges of AI-Generated Code — Bradley Arant Boult Cummings. Employee use of GenAI does not change that code written in the course of employment belongs to the employer.

AI, Copyright Law, and Work-Made-For-Hire — UCLA Livescu Initiative. Scholarly discussion of how work-for-hire breaks down for AI-generated material.

AI governance & cognitive data

Governance of Generative AI — Policy and Society (Oxford Academic). Survey of IP and data-governance gaps in generative AI, including the need for new ownership frameworks.

Beyond Neural Data: Cognitive Biometrics and Mental Privacy — Magee, Ienca & Farahany, Neuron (2024). Argues that cognitive and behavioral patterns function as uniquely identifying data, extending privacy concerns beyond neural signals.

