Google DeepMind, Google’s artificial intelligence subsidiary, is testing a new tool for identifying AI-generated images. It is the company’s latest effort to build safeguards into generative AI and curb the spread of misinformation.
In a blog post published on the company’s website, DeepMind states, “Today, in partnership with Google Cloud, we’re launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images.”
The technology works by embedding a digital watermark into the pixels of an image. Unlike a traditional watermark, this one is invisible to the naked eye but remains “detectable for identification”, the company claims.
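DeepMind has not disclosed how SynthID actually embeds its watermark, so the sketch below is purely illustrative: a classic least-significant-bit (LSB) scheme in Python showing, in the simplest terms, how information can be hidden in pixel values without visibly altering an image. The function names and payload are hypothetical, and SynthID’s real method is a learned technique far more sophisticated than this.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, payload_bits: list[int]) -> np.ndarray:
    """Hide payload bits in the least significant bit of each pixel value.

    Changing only the lowest bit shifts a channel by at most 1/255, which is
    invisible to the naked eye -- loosely analogous to an imperceptible
    pixel-level watermark (SynthID's actual method is learned, not LSB-based).
    """
    out = pixels.copy()
    flat = out.reshape(-1)  # view into the copy, so writes modify `out`
    for i, bit in enumerate(payload_bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, then set it to the payload bit
    return out

def detect_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Read the payload back out of the least significant bits."""
    flat = pixels.reshape(-1)
    return [int(v & 1) for v in flat[:n_bits]]

# Toy demo: an 8x8 grayscale "image" and an 8-bit payload.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_watermark(image, payload)
assert detect_watermark(marked, len(payload)) == payload
assert np.max(np.abs(marked.astype(int) - image.astype(int))) <= 1  # visually identical
```

A naive LSB watermark like this is easily destroyed by cropping, resizing, or recompression, which is one reason DeepMind pursued a learned approach designed to survive common image modifications.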
One of the most prominent uses of generative AI tools is creating highly detailed, realistic images that are difficult to identify as fake. This has raised concerns in some sectors about the potential spread of misinformation on the internet.
Addressing the issue of information authenticity, the company states, “While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally.”
The company admits the technology is not “foolproof”, but it hopes SynthID can evolve to become more capable and reliable. For now, the tool remains in beta.