When “Know It All” Turns Into “Know You Too Much”: Privacy Group Files GDPR Complaint Against AI Surveillance Service
A Lithuania‑based startup promises to track everything about you online, then charges you to see the report. Now a European privacy watchdog is pushing back with a formal GDPR complaint. The case opens a fresh front in the struggle over personal data, AI, and who really controls our digital identities.
A Reputation Report You Didn’t Ask For
Whitebridge.ai offers “reputation reports” — dossiers compiled from social media, news articles, images, and AI‑generated inferences. Want to know what it says about you? That’ll cost you. Meanwhile, anyone else can buy the same package.
The service markets itself as a comprehensive tool: photos, personality traits, “background checks,” alerts to possible political or adult content, and suggestions for how to interact with the person profiled.
By August 2025, Whitebridge claimed to have generated roughly 560,000 reports, with around 80,000 registered users and 2.6 million people searched.
What triggered the complaint: Two individuals discovered their own dossiers in the system without ever having been notified. When they exercised their GDPR rights to access their data, correct it, and understand how it was collected, they faced refusals, demands for payment, and even requests for a “qualified electronic signature,” a formal hurdle few ordinary users can clear.
The Legal Challenge: GDPR Violations Alleged
The nonprofit noyb (European Center for Digital Rights) filed the complaint on 29 September 2025 with Lithuania’s data protection authority.
Whitebridge claims its data processing is justified under “freedom to conduct a business” and that it draws from “publicly available sources.” But noyb argues that neither suffices as a legal basis under the GDPR.
Crucially, much of the data comes from social media accounts that aren’t indexed by search engines or that are restricted by privacy settings such as friends-only visibility. Courts have already ruled that sharing within a social network does not make that data “manifestly public.”
The complaint also highlights that some of the purchased reports flagged “sexual nudity” or “dangerous political content” about the complainants. Data about a person’s sex life and political opinions fall under the special categories protected by Article 9 of the GDPR. Whitebridge allegedly refused to correct these inaccurate claims even when asked.
Under the GDPR, individuals have the right to access their data free of charge (Article 15), to rectify inaccuracies (Article 16), and to be informed when their data is collected from third‑party sources (Article 14). Whitebridge allegedly demanded payment for access requests and withheld responses unless the person provided a qualified electronic signature. Per the complaint, it also never notified the individuals that their data was being collected and shared.
Furthermore, Whitebridge’s invocation of a “disproportionate effort” exemption for not notifying data subjects runs contrary to regulators’ interpretations, especially since the company is demonstrably able to identify the very social media accounts and contact information it profiles.
In sum, noyb argues that Whitebridge may have breached a wide array of GDPR provisions, including Articles 5, 6, 9, 12, 14, 15, and 16. It asks the authority for a declaratory ruling, an order to stop the unlawful processing, compliance with the access and rectification requests, and sanctions.
What This Case Reveals About AI, Privacy and Reputation
This dispute sits at the intersection of three broad trends in digital life.
First, data brokering is being enhanced by AI inference. Traditional data brokers collect and resell personal information. What Whitebridge adds is a layer of AI-generated personality traits, risk scores, and behavioral warnings. That amplifies both the privacy risks and the potential for misinformation.
Second, the company’s model capitalizes on fear. Its marketing urges users to check what information is “out there” about them, essentially monetizing curiosity and anxiety about one’s own data. The complaint contends this is exploitative, especially since the GDPR entitles people to see their own data for free, yet Whitebridge’s reports sit behind a paywall.
Third, GDPR enforcement is emerging as a frontier for AI oversight. As AI systems increasingly touch personal data, GDPR is becoming one of the primary legal frameworks through which privacy advocates and regulators challenge potentially harmful business models. Whitebridge may be a bellwether case for how the law adapts to AI-driven surveillance.
What Might Come Next
If Lithuania’s data protection authority finds merit in the complaint, it could order Whitebridge to comply with access and rectification requests, stop processing unlawfully gathered or inferred data, and notify affected individuals about the processing, and it could impose fines or processing bans.
But beyond Whitebridge, the case may send ripples across the AI surveillance industry. Companies building user profiles from scraped data, especially when layered with AI inferences, may face more scrutiny under GDPR and similar privacy regulations.
For users concerned about their digital reputation, this case underscores the value of knowing what data exists about you, and of asserting your rights under the GDPR when something feels off.