Meta will finally give researchers access to targeting data for political ads — information that academics have been clamoring for and using legally risky workarounds to collect on their own for years.
The company had previously argued that sharing targeting information would risk violating user privacy. Last year, it went so far as to permanently shut down a widely used ad transparency project run out of New York University after serving the researchers there with a cease-and-desist letter.
In a blog post published Monday, Meta said it would share targeting data on individual ads with pre-vetted researchers who are part of its Facebook Open Research and Transparency project. The company piloted this type of data-sharing last year with a subset of 2020 election ads. Now, it's expanding that work for researchers in its network.
The company is also offering more information about political ads in its Ad Library, which anyone can access. Rather than sharing targeting data on individual ads, the Ad Library will show aggregate data about the number of ads a page has run targeting a given demographic and how much that page has spent targeting that demographic. "For example, the Ad Library could show that over the last 30 days, a Page ran 2,000 ads about social issues, elections or politics, and that 40% of their spend on these ads was targeted to 'people who live in Pennsylvania' or 'people who are interested in politics,'" the blog post reads.
One reason Meta has been reluctant to widely share more granular targeting data is that the company believes it would be too easy to reverse engineer who saw which ads and infer certain characteristics about individual users. "If you combine those two data sets, you could potentially learn things about the people who engaged with the ad," Steve Satterfield, Facebook's director of privacy and public policy, told Protocol last year.
In the absence of that information, researchers at NYU developed a browser extension through which Facebook users could opt in to share political ads they saw in their own feeds with researchers. The extension also scraped the information Facebook shared with those users about why they were seeing the ad — information that, collected en masse, amounts to targeting data. The researchers then published that information in a public archive. That work came to an end last summer, however, when Facebook suspended the researchers' accounts following a months-long legal standoff.
Targeting data is critical to understanding a political ad's underlying motivation. By sharing that data with more researchers, beginning later this month, Meta could significantly contribute to the public understanding of how political campaigns and groups operate. It will also undoubtedly invite more scrutiny of Meta and the ways it enables political actors to manipulate and microtarget users.
Meta's FORT program hasn't been without issues, either. Last year, the company admitted that it had sent researchers in the program a flawed dataset, potentially tainting the results of the academics who had been relying on it.
Meta also opened up additional data to an even more selective group of outside researchers, who studied the platform's impact on the 2020 election through the Jan. 6 riot. But the results of that research, originally due last year, have been postponed and have yet to be released.