Adobe debuts new icon as a ‘nutrition label’ for generative AI content

As AI-generated content becomes more ubiquitous, Adobe has developed a new “nutrition label” with the goal of improving transparency and trust across photos and videos created or edited with AI.

During its Adobe Max conference yesterday, the company debuted a new “Content Credentials” icon that will be embedded into the metadata of all content created using Adobe software. When someone hovers over the tamper-proof “CR” icon, they’ll see information about when the image was created, who created it, the AI software used and additional information about the content and its history.
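For readers who want a concrete picture of what that embedded record might hold, here is a minimal, purely illustrative sketch. It is not the actual Content Credentials (C2PA) format or any Adobe API; it simply bundles the fields the article describes (creation time, creator, tool used, edit history) and uses a plain SHA-256 digest as a stand-in for the cryptographic signing that makes the real credentials tamper-evident.

```python
# Illustrative sketch only: NOT the real Content Credentials / C2PA format,
# just a stand-in showing the kind of provenance fields the "CR" hover card
# surfaces and how a digest can flag later tampering with the record.
import hashlib
import json


def make_provenance_record(creator, created_at, tool, edits):
    """Bundle the facts the hover card would display."""
    return {
        "creator": creator,
        "created_at": created_at,   # when the image was created
        "tool": tool,               # e.g. the AI software used
        "edit_history": edits,      # prior edits applied to the asset
    }


def seal(record):
    """Attach a SHA-256 digest so later changes to the record are detectable.
    (The real standard relies on cryptographic signatures, not a bare hash.)"""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return {"record": record, "digest": hashlib.sha256(payload).hexdigest()}


def verify(sealed):
    """Recompute the digest and compare; False means the record was altered."""
    payload = json.dumps(sealed["record"], sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest() == sealed["digest"]


if __name__ == "__main__":
    sealed = seal(make_provenance_record(
        creator="Jane Doe",
        created_at="2023-10-11T09:30:00Z",
        tool="Generative AI image model",
        edits=["generated", "cropped"],
    ))
    print(verify(sealed))                    # True: record untouched
    sealed["record"]["creator"] = "Someone"  # simulate tampering
    print(verify(sealed))                    # False: digest no longer matches
```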

Along with Adobe, other companies already adopting the icon include Microsoft, which will use it for AI-generated images made with Bing Image Creator. Publicis Groupe will use the icon on all AI-generated content for its clients. Hardware companies like Nikon and Leica will start adding it to future cameras to show which cameras are able to produce verifiable images. The updates are part of the ongoing Content Authenticity Initiative, an organization created by Adobe and various partners.

The “Content Credentials” icon has been in development for the past year, said Andy Parsons, senior director of the Content Authenticity Initiative. According to Parsons, the end result aims to “balance the vision of showing these nutrition facts in the moment you need them, but also the importance of understanding these.”

“We know that a nutrition label that’s five miles long is not helpful to anyone,” Parsons said. “Yes, it’s transparent. But it’s not — to extend the metaphor — digestible. It [has to be] really intelligible. And at the end of the day, if this doesn’t work for consumers in a visceral way at the velocity of social media, then we haven’t succeeded in our mission.”

From cereal boxes to large language models

Adobe isn’t the first company to add “nutrition labels” or other equivalents to AI services. In August, Google debuted a new watermark for GenAI images that can’t be removed. The same month, Twilio announced its own label showing companies how it will use their data. Meanwhile, companies like Bria AI have begun providing a way to show which images were used to create an AI-generated image. However, some experts argue that kind of attribution is still impossible to prove with certainty.

Beyond Silicon Valley, the potential benefits of AI “nutrition labels” have also been discussed in Congress. (Food nutrition labels first became mandatory in 1990 with passage of the Nutrition Labeling and Education Act, which required companies to provide consistent and accurate claims in a standardized format.) During the U.S. Senate Judiciary Committee’s first AI hearing in May, OpenAI CEO Sam Altman said the labels were “a great idea,” adding that companies should release the results of their models and also undergo independent audits. However, at the time, there were concerns about the technical challenges of creating nutrition labels, especially for large language models. Also testifying at the same hearing as Altman was Gary Marcus, an AI expert and professor at NYU, who said scientists first need to understand what goes into AI models before it’s possible to provide an accurate label.

“I think we absolutely need to do that,” Marcus said. “I think that there are some technical challenges in that building proper nutrition labels goes hand in hand with transparency.”

Marketers that use AI and non-AI tools from Adobe also see the benefits of labeling generative content. For Klarna CMO David Sandstrom, the new icon sounds like another “amazing feature” following various others Adobe has released this past year. However, he said he’s still trying to figure out if consumers will care if content is generated by AI or not.

“This sounds fantastic on a B2B level,” Sandstrom said. “I’m not sure consumers care. When we talk to our consumers — at least the younger the consumers get — they do not care if it’s AI-generated…So although I do appreciate some sort of nutritional label on all of these images, I think they are more for professional use or for companies to use rather than the consumer caring about it.”

Beyond verifying images and videos, Parsons mentioned other examples of why the icon could be useful. Creators might use the icon as a digital signature. Freelance designers might use it to win new business. And in war zones or countries without a free press, the “CR” could help verify photos and videos without compromising the identity of the person who took the photo or video.

There’s an economic incentive for authenticating AI content, said Adam Rose, chief operating officer of The Starling Lab. (The lab works with Stanford University and other organizations to help authenticate content using decentralized tools and has also worked with the Content Authenticity Initiative.) Although authenticated content is still rare, Rose gave the analogy of the lock icon that signals a secure website connection: it wasn’t as common a decade ago, but over time it has become the default.

“There is now effectively, or there is soon to be, an infinite supply of photorealistic content,” Rose said. “And so basic supply and demand says the value of that content will drop, but there’s still a scarcity — and there’s a need and I would say a demand — for authenticated content. And that’s really what’s different here.”
