Meta opens brand safety tools in Facebook and Instagram feeds

After initial tests, more advertisers are getting their hands on adjacency tools that rely on AI and apply industry suitability standards.


Facebook and Instagram are rolling out ad controls to more brands, expanding a pilot program from last year that relies partly on AI to make in-feed environments safer for marketing. 

On Thursday, Meta, Facebook’s parent company, announced that it was ready to offer “brand suitability” filters to more advertisers. Since last year, Meta has taken a new approach to brand safety, introducing filters that prevent ads from appearing in feed placements deemed risky.


Meta partnered with Zefr, a verification service, to report independently to brands on how the filters performed, specifically whether they kept ads away from unsuitable content. Artificial intelligence works behind the scenes to apply filters that correspond to suitability standards hashed out through the industry group Global Alliance for Responsible Media (GARM).

Meta has tested the program with more than 25 advertisers since last year, when it first announced the tool, according to Samantha Stetson, Meta’s VP of client council and industry trades. Meta will bring on more third-party verification partners, DoubleVerify and Integral Ad Science, Stetson said. Meta also intends to develop the same filters for Reels videos and Stories.

Reports to brands will show “the content that appeared, one above and one below, each ad,” Stetson said in a phone call with Ad Age. “They’ll show you what was ‘low,’ ‘medium,’ and ‘high’ risk, and which category in the GARM suitability framework it fell into. So, the advertiser will have total transparency.”

The ad controls could affect the ad auctions if more brands adopt them, because that means more competition for valuable slots in the feed. “So far, we see that we are able to maintain performance across reach, CPM [cost per thousand ad impressions], and cost per acquisition,” Stetson said.

Meta has been working on this level of brand safety controls since 2020, after hearing from advertisers concerned about ads running alongside objectionable material. There are 2 billion daily active users on Facebook, according to Meta’s fourth-quarter financial report, which means 2 billion personalized feeds to account for. That volume made it very complex for Meta to steer ads so they never appeared next to a post that would make a major marketer uncomfortable. But brands have been calling for more accountability from social media and internet services. GARM has been working with the major platforms to set standards for what content is, and isn’t, appropriate for marketing.

With GARM’s suitability standards, brands can dial their threshold for certain subject matter up or down. Platforms are supposed to remove the worst offenses outright, content that falls into categories considered totally unsafe for any brand: racism, explicit violence, hardcore pornography and egregious misinformation.

In Meta’s early tests, ads avoided appearing next to posts that fell under the brand safety floor 99.9% of the time, Stetson said.

As for “suitability,” brands can set the level at which they are OK being next to hot topics. “That is content that is allowed on the platform,” Stetson said. “It doesn’t violate our standards, but it may be content that an advertiser isn’t comfortable being next to.”
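To illustrate the idea, here is a minimal, purely hypothetical sketch of a tiered suitability check in Python; the Risk tiers and the acceptable function are inventions for this article that only loosely follow the low/medium/high framing Stetson describes, not Meta’s or GARM’s actual schema:

```python
from enum import IntEnum

class Risk(IntEnum):
    # Hypothetical tiers loosely modeled on the "low"/"medium"/"high"
    # framing described above. FLOOR marks content that violates the
    # brand safety floor and is off-limits to every advertiser.
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    FLOOR = 4

def acceptable(content_risk: Risk, brand_threshold: Risk) -> bool:
    """A brand tolerates content at or below its chosen tier;
    floor-violating content is excluded regardless of settings."""
    if content_risk is Risk.FLOOR:
        return False
    return content_risk <= brand_threshold
```

Under this toy model, a cautious advertiser would set its threshold to Risk.LOW, while one comfortable near hot-topic content might allow Risk.MEDIUM; FLOOR content is excluded for everyone.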

Meta’s AI takes a brand’s safety settings and looks for an appropriate slot in the feed to place an ad. There have been general concerns about overreliance on AI in moderating social media, and questions about how well AI can pick up the nuances that separate harmful content from permissible content. Meta’s suitability program uses the third-party reporting to add a layer of human review, Stetson said.
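To make the adjacency check concrete, here is a minimal sketch building on the hypothetical Risk tiers above. It applies the one-above, one-below rule from the reports Stetson describes; the classify callable stands in for Meta’s content-classification model, and none of these names are real Meta APIs:

```python
def can_place_ad(feed: list, slot: int, brand_threshold: Risk, classify) -> bool:
    """Return True if an ad may be inserted at index `slot`: both the
    post directly above and the post directly below must fall within
    the brand's tolerance. `classify` maps a post to a Risk tier."""
    neighbors = []
    if slot > 0:
        neighbors.append(feed[slot - 1])  # post that appears above the ad
    if slot < len(feed):
        neighbors.append(feed[slot])      # post that appears below the ad
    return all(acceptable(classify(post), brand_threshold)
               for post in neighbors)
```

In practice a system like this would run at placement time, skipping the slot (or moving the ad elsewhere in the feed) whenever a neighboring post exceeds the brand’s threshold.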

The in-feed adjacency question is a pressing one for marketers, especially the top advertisers at many Fortune 500 companies that have been emphasizing “brand values.” Marketers want to insulate themselves from getting caught monetizing extreme content that contradicts their stated values.

The brand safety movement has only heated up in the months since Elon Musk bought Twitter, which sent some advertisers packing over fears that the conversation there would get too wild. There is also more scrutiny on TikTok, given the platform’s growing influence. Last week, TikTok CEO Shou Zi Chew testified before Congress as lawmakers considered banning the app, and Chew faced questions about harmful content. TikTok has introduced in-feed ad filters, and it has been working with verification partners such as Zefr.

Meta’s brand safety controls will start appearing as an option in its Ads Manager platform today. Not all advertisers will see them right away, however, as there will be a gradual build-up to full deployment, Stetson said.