Google Introduces Watermarks For Photographs Produced By AI

Google on Tuesday unveiled an invisible, permanent watermark for photographs that identifies them as digitally created, a move intended to help stop the spread of disinformation.

The watermark is embedded within images made by Imagen, one of Google’s latest text-to-image generators, using a technology called SynthID. The AI-generated label stays no matter how the image is edited, such as by adding filters or changing the colours.

Additionally, the SynthID tool can scan incoming photos for the watermark to determine whether it is likely that Imagen created them, reporting three levels of certainty: detected, not detected, and probably detected.
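As an illustration only, here is a minimal sketch of how a three-level verdict like the one described above might be surfaced to a developer. The `WatermarkVerdict` type, the `classify_confidence` function, and the thresholds are hypothetical, written for this article; they are not part of any published Google or Vertex AI interface.

```python
from enum import Enum

class WatermarkVerdict(Enum):
    """Hypothetical stand-in for the three certainty levels
    Google describes: detected, not detected, probably detected."""
    DETECTED = "detected"
    NOT_DETECTED = "not_detected"
    PROBABLY_DETECTED = "probably_detected"

def classify_confidence(score: float) -> WatermarkVerdict:
    """Map an assumed detector confidence score in [0, 1] to a
    three-level verdict. The 0.9 / 0.1 cutoffs are invented for
    illustration; Google has not published its thresholds."""
    if score >= 0.9:
        return WatermarkVerdict.DETECTED
    if score <= 0.1:
        return WatermarkVerdict.NOT_DETECTED
    return WatermarkVerdict.PROBABLY_DETECTED

# Example: a middling score yields the cautious middle verdict.
print(classify_confidence(0.55))  # WatermarkVerdict.PROBABLY_DETECTED
```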

In a blog post on Tuesday, Google stated, “While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations.”

A test version of SynthID is now available to some customers of Vertex AI, Google’s generative AI platform for developers. SynthID was developed by Google’s DeepMind division in collaboration with Google Cloud, and the company says it will continue to evolve and may expand to other Google products or to third parties.

False Photos And Edited Images

Tech companies are scrambling to devise a reliable method to recognize and flag manipulated content as deepfakes and edited photos and videos become more convincing. In recent months, AI-generated images of Pope Francis wearing a puffer jacket and of the former president of the United States being detained were widely shared before he was charged.

In June, Vera Jourova, vice president of the European Commission, urged signatories to the EU Code of Practice on Disinformation, a group that includes Google, Meta, Microsoft, and TikTok, to “put in place technology to recognize such content and clearly label this to users.”

With the introduction of SynthID, Google joins a growing list of Big Tech firms and startups searching for solutions. Some of these businesses have names like Truepic and Reality Defender, underscoring the stakes in the effort to safeguard our sense of what is genuine and what is false.

Tracking The Source Of Content

While Google has mostly adopted its own strategy, the Coalition for Content Provenance and Authenticity (C2PA), an Adobe-backed group, has been at the forefront of digital watermark efforts.

Google introduced a tool called About This Image in May, giving users the option to view information about an image’s origin, where it first appeared, and where else it has been seen online.

Additionally, the tech corporation disclosed that each AI-generated image produced by Google will include markup in the original file to “give context” if the image is seen on another website or platform.

But it is unclear whether these technical fixes can fully address the problem, because AI technology is developing faster than humans can keep up with it. OpenAI, the creator of Dall-E and ChatGPT, admitted earlier this year that its own attempt to detect AI-generated text, rather than images, is “imperfect,” and warned that it should be “taken with a grain of salt.”
