The rapid evolution of generative AI technology has created uncertainty between brands and their agencies.
But finding answers can be difficult when advertisers don’t even know what questions to ask.
To help facilitate these conversations, the Association of National Advertisers (ANA) launched a standardized AI contract rider earlier this month that can be added to service agreements between advertisers and agencies.
The AI rider outlines the practical and legal aspects of using generative AI, including the need to disclose AI usage, the ownership of AI-generated work and ethical considerations related to misinformation and data privacy.
With this new template in hand, buyers can “ensure they’re covering all their legal bases for incorporating AI tools,” said Greg Wright, the ANA’s SVP of brand and media.
After all, “nobody wants their agency doing something that wasn’t approved by the advertiser,” said Wright.
Early development
As with so many resources offered by the ANA, the new AI contract rider was created as a reaction to growing concern among its members.
The ANA Advertising Financial Management Committee, which handles marketing procurement and agency/advertiser relationship management, started seeing a pronounced influx of AI-related queries from member organizations roughly seven months ago.
For example, some brands began noticing new AI-related clauses in their agency contracts and were unsure how best to proceed, said Wright, who runs the committee. Others simply wanted to know whether their agencies were using AI in the first place and how they should best protect themselves against potential risk.
Given the complexity of the topic, the ANA enlisted the help of its outside legal counsel, Venable LLP (which, fun fact, also represents Taylor Swift), to develop the rider. The ANA also created an informal checklist that distills the most important takeaways into plain, non-legalese language.
Although it specifically addresses generative AI concerns, the rider was designed to be as broad as possible so it can adapt to individual advertiser and agency needs and to developing artificial intelligence laws. (As written, it already touches on specific existing AI legislation in California, Colorado and the European Union.)
Inside the ANA’s legal generative AI template
So, what does the rider include?
One of the most important issues it addresses, at least in Wright’s view, is disclosure.
According to the rider as written, agencies are required to tell their clients when they use AI tools for any work output and cannot pass off AI-generated work as being done by humans. There must also be some degree of human oversight on any AI-generated work, and an acknowledgement that AI-generated work “may not be completely error-free or up to date” – meaning, essentially, that advertisers have to accept the possibility that the AI might get things wrong.
Speaking of wrong information, there are ethical implications to consider as well. The rider stipulates that agencies cannot use AI tools to do anything that “has reasonable potential to inflict harm,” including impersonating individuals (as is commonly the case with deepfakes) and engaging in discriminatory scoring or profiling practices.
Agencies must also never intentionally distort images to spread misinformation or send unsolicited marketing messages that promote harassment, violence or hate speech.
AI tools also cannot be used in areas related to employment opportunities, education enrollment, housing, finance, insurance, health care or legal services, among other “high-risk uses.”
However, the rider doesn’t provide a concrete legal definition for what constitutes high-risk use. That’s up to agencies and advertisers to decide for themselves, said Wright.
Who owns the final product can get a bit murky, too. The rider grants the client copyright for any AI-generated work that is made on behalf of that client – unless, that is, the work can’t be protected under copyright laws to begin with.
The rider goes on to clarify that work typically isn’t protected if it was generated using third-party training data that was not properly licensed (more on that below) or if a government body, like the US Copyright Office, decides against it.
(For a real-life example, that’s more or less what happened with 2023’s “Zarya of the Dawn,” a graphic novel that was written and composited by a human author, but illustrated with Midjourney-generated images.)
Furthermore, because the use of AI tools “may produce similar output/responses for other clients,” the rider notes that copyrights for AI-generated work “may not be enforceable against certain third parties.”
In other words, if an advertiser receives AI-generated assets that it feels look too similar to work produced for another of the agency’s clients – say, if two clothing retailers end up with the same font treatment or the same type of garment highlighted – then it might not be able to do anything about it.
And going back to the question of licensing data: Agencies have to be transparent with their clients about how their AI models are trained, the rider says. All of the data that goes into those models must also be secure and legally and ethically procured. If an agency uses third-party AI tools, it has to be transparent about that as well.
Adapting advertising for the future
Because the rider is so new, the ANA couldn’t share any details about adoption. Still, the response from ANA members has been very positive, said Wright, particularly where customizability is concerned.
The ANA itself also plans to continue updating the rider as needed, although Wright said that it’s more likely any updates will be spurred by “significant legal changes” rather than following a regular cadence.
Ultimately, said Wright, the rider is intended to serve as a jumping-off point for a larger conversation between clients and agencies – and their legal teams, of course – about where AI fits into the business on both sides of the relationship.
“It’s about transparency and it’s about communication,” added Wright.