What generative AI means for data clean rooms in a post-cookie world
Amazon Web Services’ Jon Williams talks about how artificial intelligence is coming to the cloud.
Third-party cookies are well on their way to being replaced by first-party data solutions, a transition that is being led by giants such as Amazon Web Services, whose data clean room ecosystem offers advertisers a safer, more advanced way to analyze troves of customer data.
But clean rooms aren’t the only emerging technology that AWS is developing. The cloud service company in April launched Amazon Bedrock, a tool that combines numerous AI models so that customers can build their own generative AI applications. Last week, AWS announced a partnership with Omnicom that will incorporate Bedrock, among other AI products, into the ad holding company’s Omni marketing platform. The announcement was accompanied by the launch of the AWS Generative AI Innovation Center, which will seek to help advertisers and other partners develop generative AI strategies for their businesses.
First-party data and generative AI may seem like two unrelated spaces, but AWS is already envisioning ways that they can come together. Jon Williams, global head of agency business development at AWS, spoke with Ad Age about this budding relationship, as well as how the company is working with agencies, what responsible AI development looks like and how to avoid an AI arms race.
This interview has been lightly edited and condensed for clarity.
Could you give a quick rundown of the main generative AI capabilities that AWS is currently offering advertisers?
Amazon Bedrock, which is built on top of AWS, allows customers to build and scale generative AI applications. We give you access to pre-trained foundation models from a variety of providers, including AI21 Labs for text generation and Stability AI for text-to-image generation, all via an application programming interface (API).
We’re also providing two out-of-the-box models under Amazon Titan. One is for text generation, such as creating blog posts. The other translates text into numerical representations that enable semantic understanding of text.
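For context on what that API access looks like in practice, here is a minimal sketch of calling a Titan text model and a Titan embeddings model through the AWS SDK for Python (boto3). The model identifiers, request fields and region below are assumptions drawn from public Bedrock documentation rather than details from the interview, and may differ from what a given account has enabled.

```python
# Minimal sketch: invoking Bedrock-hosted Titan models via boto3.
# Model IDs, request schemas and region are assumptions and may vary by account/version.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# 1) Text generation with a Titan text model (e.g., drafting blog copy).
text_request = {
    "inputText": "Write a two-sentence product description for a reusable water bottle.",
    "textGenerationConfig": {"maxTokenCount": 200, "temperature": 0.7},
}
text_response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID
    body=json.dumps(text_request),
    contentType="application/json",
    accept="application/json",
)
print(json.loads(text_response["body"].read())["results"][0]["outputText"])

# 2) Embeddings with a Titan embeddings model: text is translated into a
#    numerical vector that supports semantic search and similarity comparisons.
embed_response = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1",  # assumed model ID
    body=json.dumps({"inputText": "running shoes for trail marathons"}),
    contentType="application/json",
    accept="application/json",
)
embedding = json.loads(embed_response["body"].read())["embedding"]
print(len(embedding))  # dimensionality of the numerical representation
```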
AWS announced an AI-focused partnership with Omnicom last week, while also unveiling a generative AI Innovation Center—what’s the purpose of this center and what does it mean for partnerships with more advertising companies?
The Innovation Center provides agency customers, as well as customers from other verticals, with machine learning science and strategy expertise, which will help them experiment with and use generative AI across their businesses. It will help them understand and select which use cases to experiment with, improve the accuracy of the AI models we just talked about, and fine-tune and customize those models for their use cases, which is going to be key for advertising.
Agencies have a different process within each holding company, so we'll bring our strategy and science experts to the table to sit down and listen to the challenges they have, work backward from those challenges to identify the best use cases for generative AI, and then provide a variety of expert services.
AWS works a lot with data clean rooms to allow for post-cookie advertising—how might these ecosystems fit alongside generative AI capabilities?
The ability to combine generative AI with the understanding of a customer profile that comes from a data clean room is still early. With a solid data foundation, which affords you that understanding, you can merge it with the creative personalization that generative AI is particularly capable of enabling. But unless you have that data foundation, it's going to be really difficult to put that understanding of a customer profile to work in the most effective way possible, particularly with the explosion in the number of creative assets that can now be produced to hit all of those new user touch points.
So generative AI wouldn’t help in the actual data analysis within a clean room, but rather pick up the creative work once the information gleaned from that data is ready to be deployed in a marketing activation?
Right. You need a vast amount of data to work with generative AI. Once you've got those creative assets oriented, it’s about how you apply the data to create the most accurate format. So these things aren't necessarily coupled together right now; they live in different places.
A lot of agencies are already leveraging clean room capabilities in general. What they want to be able to do is leverage generative AI to more easily create the number of formats required for personalization, in order to make the media more effective for advertising.
How is AWS approaching the issue of developing AI in a responsible way?
As we develop our AI models, we’ll detect and remove harmful content from the data that customers may provide for customization. We're going to reject inappropriate content in the user input. And we're going to filter outputs containing inappropriate content, which would include hate speech, profanity, violence, etc.
The other thing that we think is really important for customers to consider is making sure that the customization enabled by the models is secure. So within Amazon Bedrock, none of our customers’ data is used to train the underlying models. All of the data is encrypted, and it doesn't leave the customer’s virtual private cloud (VPC). It’s important to make sure that customers can trust that their data will remain private and confidential, not just from anybody else but particularly from their competitors.
There are also wider-reaching concerns that the current space is out of control, with companies blowing past important considerations in order to be first-to-market. How does AWS square this tension with promoting responsible development?
The only way to deliver responsible and lasting AI innovation, we think, is to focus on customers and work backward from their problems, not to approach AI as an arms race, which could potentially happen in this situation.