Artificial intelligence is entering a new phase where access is no longer entirely anonymous, as Anthropic begins requiring some users of its Claude chatbot to verify their identity using government-issued documents.
The move marks one of the clearest signals yet that AI platforms are shifting toward stricter control, accountability, and regulation, especially as their tools become more powerful and widely used.
According to the company, the identity verification system is not universal but targeted. Only a small number of users will be prompted to submit a passport, driver’s license, or national ID along with a live selfie. The requirement is triggered when the system detects behavior that may indicate “potentially fraudulent or abusive activity.”

Anthropic’s reasoning is straightforward. As AI tools like Claude become capable of handling complex tasks, including coding, research, and automation, the risks of misuse also increase. The company says identity verification helps enforce its usage policies, prevent fraud, and meet legal obligations.
“This applies to a small number of cases where we see activity that indicates potentially fraudulent or abusive behavior,” the company explained.
Users who fail to comply or are found to have violated the platform’s rules risk having their accounts suspended or permanently banned. Reasons for enforcement include repeated policy violations, creating accounts from unsupported locations, breaching terms of service, or being under the age of 18.
Behind the scenes, the verification process is handled by Persona Identities, a third-party firm responsible for collecting and storing user data. While Persona processes the information, Anthropic remains the data controller, meaning it determines how the data is used and how long it is retained.
The company has tried to calm fears around privacy. It insists that ID data is not used to train its AI models and is not shared beyond Anthropic and Persona, except when required by law. It also says it collects only the minimum information necessary to confirm identity.
Still, the rollout has triggered immediate backlash, particularly on social media. Some users have described the move as invasive and unnecessary, with critics arguing that it undermines the open and accessible nature that made AI tools popular in the first place.
Screenshots circulating online show prompts asking users for a “quick identity check,” requiring ID uploads and camera access. While the process reportedly takes only a few minutes, the psychological barrier is much larger. For many users, the idea of linking personal identity to AI interactions raises concerns about surveillance, data security, and long-term privacy risks.
This tension highlights a deeper shift happening across the AI industry.
For years, AI platforms operated with relatively low barriers to entry. Anyone could sign up, ask questions, and interact freely. But as governments begin to scrutinize AI more closely and as companies face growing legal and ethical responsibilities, that model is changing.
Identity verification is already common in sectors like banking and finance, where fraud prevention is critical. Its arrival in AI suggests that these tools are now being treated with similar seriousness. The logic is simple. If AI can be used to generate code, analyze data, or influence decisions at scale, then knowing who is using it becomes more important.
At the same time, this approach carries risks for companies like Anthropic. Introducing friction into the user experience could drive some users to competing platforms that offer fewer restrictions. In a fast-moving and highly competitive AI market, even small changes can influence user loyalty.

There is also the question of precedent. If identity checks become standard across major AI platforms, the nature of online interaction could fundamentally change. Conversations that were once anonymous could become tied to real world identities, altering how people use and trust these systems.
For now, Anthropic’s rollout remains limited. Not every user will encounter the requirement, and the company has not disclosed all the conditions that trigger verification. But the direction is clear.
Artificial intelligence is moving toward a more controlled environment, where access is conditional and accountability is enforced more strictly. For users, that means adapting to a new reality where powerful digital tools may increasingly require real world verification.
The trade-off is unavoidable: more security and trust on one side, less anonymity and freedom on the other. And as AI continues to evolve, that balance will only become more contested.