Sony is requiring UK and Ireland PlayStation users to verify their age – or lose access to communication features including voice chat, messaging, parties, and game broadcasting – with enforcement beginning June 2026. Users who choose not to comply won’t be locked out of their consoles entirely: they can still play games, access the PlayStation Store, and earn trophies. But voice chat, messaging, livestreaming to YouTube and Twitch, and third-party integrations like Discord will all be blocked until identity is confirmed.
There are three verification methods: complete a facial scan, provide credit card information, or allow a check against your mobile carrier’s records. Sony’s chosen partner for all three is a company called Yoti.
Meet Yoti: Sony’s “Privacy-First” Partner With a €950,000 Fine
Sony uses Yoti, a company that markets itself with a “privacy-first approach to age checks.” Sony is not alone in choosing them – Microsoft adopted Yoti when Xbox rolled out its own age checks in July 2025 to comply with the same law.
But Yoti’s privacy-first branding has a problem: regulators disagree with it.
Spain’s data protection authority, the AEPD, imposed a total fine of €950,000 on Yoti Ltd for three distinct violations of the GDPR in the operation of its Digital ID application. The breakdown is instructive: €500,000 for unlawful processing of biometric data, €200,000 for processing data without valid consent, and €250,000 for excessive data retention.
What exactly did Yoti do wrong? Users could click through the privacy policy screen without actually opening it. The app defaulted to consent for using biometric data in research and development, no affirmative opt-in required, just a pre-checked box. Geolocation data was retained for five years. Video recordings from liveness detection were stored for 30 days. Fraudulent ID documents submitted during failed verifications were retained indefinitely to train Yoti’s algorithms.
Read that last point again: if you tried to verify with a fake ID and failed, your document became permanent training data for Yoti’s AI, without your knowledge or consent.
Behind the scenes, Yoti doesn’t just verify identities; it repurposes the data to train and refine its systems, sweeping users into research and algorithmic improvement they never actively agreed to. Even after names and addresses are stripped out, the remaining faces, videos, and ID-derived attributes are deeply personal and effectively permanent.
The AEPD concluded that Yoti violated GDPR provisions on excessive data retention, valid consent, and unlawful processing. Yoti disputes the ruling and has appealed to Spain’s High Court. But the findings stand unless overturned, and this is the company now holding the biometric data of PlayStation’s UK player base.
Sony Has Global Plans
If you think this stays in the UK, think again. Globally, concerns have been raised about age verification being used to harvest player data that could be leaked, used to track consumer behavior across apps, or even end up in the hands of the government, without the user’s knowledge or consent.
Sony’s own communications make the scope explicit. The email sent to players mentions compliance with “global regulations”, not UK regulations. In the United States, California’s Digital Age Assurance Act (AB 1043), signed by Governor Gavin Newsom in late 2025, will require any internet-connected device with an operating system to run an age check at account creation, taking effect January 1, 2027.
The US House of Representatives is also currently considering the “Parents Decide Act,” which, if passed, could require age verification before users can access a computer’s operating system.
Today it’s PlayStation voice chat in the UK. Next year, every US PlayStation account. After that, potentially every device with an OS.
The Language of Normalization
The words being used here are not accidental. “Age verification” sounds protective, almost parental. “Identity verification” sounds like what it is: a government-linked biometric check on a consumer entertainment device. The industry, and the legislators behind these laws, chose “age” deliberately.
Rossmann’s video titled “stop calling it age verification” says it plainly. The Consumer Rights Wiki entry he linked defines the problem in structural terms: the UK’s Online Safety Act is already exerting extraterritorial control through age-verification changes being implemented in the US, even for companies whose customer bases are not subject to UK law. Because the law effectively ignores national borders, non-UK companies face only two options: geo-block affected content for UK users, or apply the same verification measures globally, and most choose the latter.
Safety is so often used as a rhetorical accelerant for systems that would otherwise face much harder questioning. Once framed as protection, a design can become strangely insulated from critique, even when it moves the line of what ordinary users are expected to surrender.
A game console should not quietly train people to accept inspection as the price of belonging.
This Is a Ratchet, Not a Policy
Laws like this have a well-documented dynamic: they are easy to pass and nearly impossible to reverse. The infrastructure, once built, does not disappear when political winds shift.
Many believe the real motive is to connect online and real-world identities, with fears this could lead to greater government and corporate surveillance of individuals.
Consider the cascade already underway. Xbox implemented Yoti-based checks in July 2025. PlayStation follows in June 2026. Discord faced immense backlash after announcing plans for age verification and pushed its rollout to the end of 2026. Roblox, Steam… the list grows. Half of the United States now mandates or is considering mandating age verification for accessing adult content or social media platforms, with nine states seeing their laws take effect in 2025 alone.
The pattern is not coincidental. It is coordinated legislative pressure, and gaming is the entry point, precisely because gaming is socially acceptable, the player base is broad, and child safety arguments land easily. Once identity verification is standard on consoles, the argument for extending it to PC operating systems, mobile devices, and eventually any networked service becomes structurally easier to make. The precedent is being set right now, in the form of a June 2026 PlayStation deadline in the UK.
Platforms that implement age checks requiring sensitive information, such as a government-issued ID, become attractive targets for cybercriminals. As more platforms adopt these checks, the likelihood grows that a breach at any one of them exposes extremely sensitive information, prime material for identity theft. This is not hypothetical. Last year, Sony’s PlayStation Network suffered a data breach exposing users’ personal details.
What You Can Do
Make noise now. Legislation is reversible before it becomes entrenched infrastructure. Once the verification layer is embedded in every major platform, rolling it back requires overturning laws, unwinding corporate agreements, and rebuilding political will: a far higher bar than blocking it today.
Watch Rossmann’s video. Watch GN’s breakdown. Share both. Contact your representatives. Support digital rights organizations: the Electronic Frontier Foundation (EFF), the Open Rights Group (UK), and equivalent bodies in your region. The EFF has published analysis arguing that OS-level age verification bills are unconstitutional.
The question is not whether Sony is complying with the law. It is. The question is whether this law should exist, and whether the infrastructure being built in its name, powered by a company with an active regulatory fine for mishandling biometric data, is something players should accept without a word.