Lawyers for a group of UK residents whose Facebook data was harvested by Cambridge Analytica are now threatening to sue for damages. In a 27-page letter served to the company Tuesday, they accuse Facebook of violating British data privacy regulations. The letter before claim, as it’s called, is the first step in the UK’s legal process for filing a class action suit. It warns Facebook that if it does not adequately respond to a list of questions regarding user privacy within 14 days, the claimants may take legal action against the company in the United States, the United Kingdom, and Ireland. Nearly 1.1 million British citizens could be eligible to join such a suit if it goes forward.
The warning comes from the UK-based law firm Irvine Thanvi Natas Solicitors, which is representing dozens of people who argue that Facebook misused their personal data in violation of UK law. It follows an announcement Monday by a separate group, the Fair Vote Project, which is also launching a class action suit against Facebook in the UK.
The UK Information Commissioner’s Office said earlier this month that it intends to fine Facebook more than $600,000 under the country’s Data Protection Act for allowing Cambridge Analytica, the now-defunct political consulting firm, to collect information on tens of millions of users without their knowledge. But Ravi Naik, a lawyer with Irvine Thanvi Natas Solicitors, says the individuals affected also have a right to answers, and may have a right to damages if those answers aren’t satisfactory.
“People should realize that data rights are real rights, and we have a mechanism to enforce them,” Naik says.
WIRED reviewed a copy of the letter Naik submitted to Facebook Tuesday. It reads like a checklist of Facebook’s failures, dating back to its decision in 2009 to no longer allow users to keep their friend lists private. The letter also notes Facebook’s decision to let app developers scoop up data not just on their apps’ own users but on those users’ friends. Facebook didn’t put a stop to this practice until 2015. By then, Cambridge Analytica had already gained access to as many as 87 million Facebook users’ data through a personality quiz app designed by a University of Cambridge researcher named Aleksandr Kogan.
These original sins, the letter argues, opened Facebook up to legal liability from several angles. For starters, the UK’s Data Protection Act requires companies to get user consent to process their data. Naik says that Facebook essentially misled users about what they were consenting to. In Facebook’s privacy settings at the time, users could determine whether to share their posts with “only friends” or “friends of friends,” but this only applied to what other Facebook users could see. App developers had their own permissions, which, in some cases, gave them the ability to tap into their users’ friends’ data.
This broad allowance for app developers also triggered regulators in the United States to investigate Facebook for deceptive practices in 2011. At the time, the US Federal Trade Commission wrote that while Facebook had given users the impression they could share information only with their friends, those privacy restrictions “would be ineffective as to certain third parties.” The FTC argued Facebook failed to make that clear to users. It also considered Facebook’s 2009 decision to stop letting users make their friend lists private to be an “unfair and deceptive” practice. Facebook settled the matter by signing a consent decree with the FTC in 2011, and vowed not to mislead users about their privacy. After reports about Cambridge Analytica surfaced this spring, the FTC announced it was investigating whether Facebook broke that promise, a violation that could come with hefty fines.
Facebook CEO Mark Zuckerberg, for his part, has told Congress he doesn’t believe the actions of a rogue app developer working for Cambridge Analytica constitute a breach of the consent decree. Still, in the letter, Naik uses the FTC’s own arguments to paint a picture of a company that failed to protect its users years after it was first informed of the risks. “We’re not talking about something passive here,” Naik says. “This was spelled out by the FTC.”
The claimants are seeking detailed answers to questions about how their data was shared, who exactly had access to it, and how Facebook has tried to protect them. They argue that the language Facebook used to alert people their data had been given to Cambridge Analytica was “vague and caveated.” It explained at a high level the kind of information that might have been shared, but the notification wasn’t tailored to each individual. That means, for instance, users had no way of knowing whether they were among the 1,500 people who had agreed to share their private messages with the Cambridge Analytica-linked app.
Naik’s clients want to know what other third parties may have had similar access. Facebook has shared some of that information with users already. Anyone can see the advertisers they’ve interacted with on Facebook and the apps they’ve logged into through Facebook under their account settings, but the company offers users limited details about the specific data those third parties can see.
In addition to answers, Naik’s clients are seeking unspecified damages, alleging Facebook misused private information, committed a breach of confidence, and violated the Data Protection Act, which would entitle the claimants to compensation under British law. Naik acknowledges this is a relatively untested argument as it pertains to social media companies, even in a country with relatively strict data privacy laws. “We’re watching data rights evolve in practice. This is going to be a really important test,” he says.
Naik is also representing David Carroll, an American professor at The New School, in a separate claim against Cambridge Analytica. Carroll is seeking additional information about the sources of data the company used to develop predictions about his political leanings. That case is still ongoing. But while the two issues overlap, Carroll is not a party in the potential claim against Facebook.
Facebook faced several lawsuits in the United States after the Cambridge Analytica scandal broke this spring. In its earnings report last week, Facebook mentioned multiple lawsuits that have been filed since March, but the company wrote that it believes they are “without merit” and is “vigorously defending them.”
But Naik says his clients’ claim is fortified not only by the strength of UK data protection law but by timing. Key players including Zuckerberg, Kogan, and former Cambridge Analytica CEO Alexander Nix have repeatedly testified before international regulators. The Information Commissioner’s Office announced its intention to fine Facebook the maximum penalty allowed under British law earlier this month. And over the weekend, following an 18-month investigation, a Parliamentary committee in the UK issued a scathing report condemning Facebook’s actions and suggesting the government go even further in establishing “clear legal liability for the tech companies to act against harmful and illegal content on their platforms.”
Naik isn’t the only person looking to capitalize on this heightened scrutiny. The Fair Vote Project, which is also preparing a suit against Facebook, is an activist group working to uncover alleged misdeeds in the Brexit Leave campaign. In a statement on the group’s website Monday, director Kyle Taylor wrote, “It is now abundantly clear the status quo with regard to how we hold internet giants to account does not work. The solution is urgent reform to properly regulate and oversee companies like Facebook.”
The timing couldn’t be worse for Facebook. Last week, following a quarterly earnings report that predicted slowed revenue growth ahead, the company lost more than $123 billion in value overnight. Facebook will undoubtedly have to answer to investors in the months ahead. The company is already facing one shareholder lawsuit over the stock market dive.
But Naik says the company has to answer to the individuals whose data was misused, too. “There’s one economic consequence of what they’ve done,” he says, “but there’s a human consequence as well.”
This is a developing story, and WIRED may update with more information as it becomes available.