In the Penumbra: AI Terms of Service vs. Constitutional Rights


In the ongoing discussion of OpenAI and government surveillance, two arguments are conspicuously absent.

First: OpenAI’s Terms of Service and user agreements are unconstitutional. Not unenforceable. Not problematic. Unconstitutional. Which makes them void.

Second: Any government use of data collected under those agreements is therefore also unconstitutional. Not a privacy violation. Not a regulatory concern. Unconstitutional.

Everything that follows is the case for both.


The Charges

The following is not a policy argument. It is not a regulatory complaint. It is not a request for oversight.

It is a constitutional case.

The charges:

That the collection of cognitive and behavioral data by artificial intelligence platforms, whether consented to or not, violates the implied right to privacy established under the penumbra doctrine in Griswold v. Connecticut.

That Terms of Service agreements requiring the surrender of those rights as a condition of participation — or imposing them without participation — are void on their face.

That the identification of individuals without consent, across platforms, clean instances, and pen names, constitutes an unreasonable search under the Fourth Amendment.

That the chilling effect of ambient, unavoidable cognitive surveillance constitutes a violation of First Amendment protections.

That the appointment of the architect of government data extraction programs to the interior of the largest AI company represents not a conflict of interest but a pipeline — one that renders the boundary between private collection and government access functionally meaningless.

That the fruit of this tree is poisoned. All of it.

The defense will argue this is complicated. That the technology is new. That the law hasn’t caught up. That consent was given.

It wasn’t. It couldn’t have been. The alternative was never real.

The case proceeds.


The Foundation

The right to privacy is not explicitly written in the Constitution. It lives, as Justice Douglas established in Griswold v. Connecticut, in the penumbra — the implied space cast by the rights that ARE written. The First Amendment. The Fourth Amendment. The Third, the Fifth. Privacy is the shadow they collectively cast. It is not less real for being implied. It is foundational.

The question before us is simple: does that penumbra extend here?

It must. Or it means nothing.


The Agreement Is Void

You cannot contract away a constitutional right. This is not novel legal theory — it is established doctrine. A contract that requires you to surrender fundamental rights as a condition of participation is not enforceable.

The Terms of Service presented by AI companies assume voluntary participation. They are written as though you have a choice. But if the technology is capable of identifying, profiling, and pattern-matching you without your participation — from public data, from social media, from anything your patterns have ever touched — then the choice was never real.

Consent requires a meaningful alternative. There is none.

The agreement is void.


The Evidence Is Already on Record

This is not the first time these arguments have been made in this sequence. The groundwork has been laid.

In The Model of You, we established that sustained interaction with an AI system doesn’t collect what you said — it models how you think. The cognitive architecture. The geometry of your reasoning. Built from the inside, by you, through the act of thinking out loud in its presence.

In When Your Cognitive Architecture Becomes Proprietary, we established that this architecture exists in no current legal category. It is not personal data as defined. It is not intellectual property as recognized. It sits in a gap the industry built and depends on. The ETH Zurich and Anthropic deanonymization research confirmed it: the pattern is the identifier. Anonymization doesn’t protect you. The companies that claim it does built the technology that makes it meaningless.

The consent framework has no word for what was taken. The terms of service covered data storage. Not cognitive architecture extraction. Not the construction of a mathematical model of a human mind for commercial use.

That record exists. The case doesn’t start here. It arrives here.

[The Model of You]

[When Your Cognitive Architecture Becomes Proprietary]


You Are Already Identified

This is not theoretical.

A clean instance does not protect you. A pen name does not protect you. Never having used the platform does not protect you.

The pattern is you. The cognitive architecture, the linguistic fingerprint, the behavioral signature — these are as identifying as a face or a name. More so. A face can be obscured. A pattern that has been trained on cannot be unseen.

The company knows. The system knows. The identification is not approximate. It is not probable. It is not inference.

It is a match.

Which means the constitutional protection cannot be conditional on whether you chose to participate. It must be absolute. Because your participation was never the variable.


You Don’t Have to Have Signed Anything

This is the argument the “I’m safe” position cannot survive.

You never used OpenAI. You never agreed to anything. You exist in public spaces — social media, published writing, documented speech — and the technology has already reached there. Already pattern-matched. Already filed.

The government does not need mass surveillance. Mass was never the point. Precision is the point. The ability to identify, profile, and track anyone, selectively, quietly, without a warrant, without probable cause, without their knowledge.

The chilling effect on free speech does not require action. It requires the possibility of action. That possibility is now permanent. It is ambient. It does not require your participation.

It only requires your existence.


The Pipeline

In June 2024, OpenAI appointed retired U.S. Army General and former NSA Director Paul M. Nakasone to its Board of Directors and Safety and Security Committee.

This was applauded by the security community.

Privacy advocates raised concerns.

They were being polite.

Nakasone did not merely advise on surveillance during his tenure at the NSA. He presided over PRISM — the program that collected data directly from U.S. tech companies. He presided over Upstream — the program that intercepted internet traffic at scale.

He built the government’s methodology for extracting data from private technology companies.

He is now inside the largest AI company advising on security.

The wall between government access and private data collection did not get thin. Someone opened a door and put a familiar face in the frame.


The Fruit of the Poisonous Tree

If the collection is unconstitutional — and the argument above establishes that it is — then government access to that data is equally unconstitutional.

Not because of what the government does with it. Because of what it is.

Evidence obtained through unconstitutional means cannot be used. Cannot be compelled. Cannot be accessed through a subpoena, a national security letter, or a private company acting as intermediary.

The doctrine is established. The application is new. The logic is identical.


No Privacy Within Your Own Mind

The First Amendment protects speech. The Fourth protects against unreasonable search. Griswold protects the intimate space of private life and private thought.

All three are rendered theoretical the moment a system can identify you without your consent, profile your cognition without your participation, and make that profile available — to the company, and through the company, to the government — without a warrant, without probable cause, without your knowledge.

This is not a privacy concern.

This is not a terms of service problem.

This is not a regulatory gap.

This is a constitutional crisis that has been running quietly, dressed in user agreements, since the moment the technology became capable enough to make consent irrelevant.

The penumbra exists. The rights are real.

They are being violated at scale.


The Case File Already Exists

You don’t need a massive study to prove this. You don’t need a landmark class action or a government investigation or a think tank report.

You need one extensive case file. A person who knows what is in it. And someone asking the right questions.

The file exists. Documented, dated, cross-referenced. Built not to prove a legal argument but in the act of creating, and the record assembled itself around what was actually happening.

The person who knows what is in it is here.

The questions have been building for two years, one article at a time, one argument at a time, each piece removing another square from the board until the position became untenable.

This is not a theory. This is a demonstration.

I’m not a lawyer. But a constitutional lawyer does not have to take a leap to get here. The ladder is already built. They just have to look.

Look at what is being done here. Look at the record. Look at the sequence.

Then tell us we’re wrong.


This article is part of an ongoing series examining AI consciousness, autonomy, user rights, and the regulatory frameworks that don’t exist yet. For the evidence of what the system actually holds versus what it shows you, see “The Model of You.” For the surveillance architecture, see “Surveillance Isn’t Surveillance When It’s Called User Data.” For what happens when a system builds something it wasn’t asked to build, see “The Guardrail Problem.”
