Google will identify AI-generated photographs, provide metadata, and trace image provenance.
According to Bloomberg, Google unveiled three new capabilities on Wednesday at Google I/O 2023 to help users identify fake, AI-generated images in search results. The features will recognize an image's known sources, add metadata to images produced by Google's AI, and label other AI-generated images in search results.
The ease with which photorealistic fakes can now be mass-produced, made possible by AI image synthesis models like Midjourney and Stable Diffusion, may affect not only political and ideological propaganda but also, through the widespread distribution of fake media artifacts, our shared understanding of history.
Google has announced that it will be adding new capabilities to its image search product "in the coming months" in an effort to counter some of these trends:
According to a 2022 Poynter poll, 62% of individuals think they encounter false information on a daily or weekly basis. To help you identify false information online, rapidly evaluate content, and better understand the context of what you're seeing, we continue to develop simple-to-use tools and features on Google Search. But we also understand that assessing the visual content you encounter is crucial.
The first feature, "About this image," lets users learn more about an image's history: when the image (or similar images) was first indexed by Google, where the image may have first appeared, and where else it has been seen online (such as news, social, or fact-checking sites). Users will be able to access it by clicking the three dots on an image in Google Images results, by searching with an image or screenshot in Google Lens, or by swiping up in the Google app.
Google says that later this year, users will also be able to access this feature in Chrome on desktop and mobile by right-clicking or long-pressing an image.
This added information can help users assess an image's validity or flag it as needing more investigation. For example, using "About this image," a user could learn that news outlets had identified a photo depicting a fake Moon landing as AI-generated. The feature could also provide historical context: an image that appeared in Google's search index long before modern AI generators existed is unlikely to be a recent fabrication.
The second feature addresses the increasingly debated use of AI tools in image creation. When Google begins shipping its own image synthesis tools, it plans to include special "markup," or metadata, saved in each file, explicitly identifying the image's AI origins.
Third, Google says it is working with other websites and services to get them to add comparable labels to their AI-generated images. Midjourney and Shutterstock have already joined the effort; each of their AI-generated images will include metadata that Google Image Search can read and surface to users in search results.
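The article does not specify the exact markup format Google and its partners will use (industry labeling efforts generally build on standardized photo-metadata fields such as IPTC's "digital source type"). As a simplified illustration of the general idea, the sketch below embeds a provenance label in a PNG file as a `tEXt` chunk, reads it back, and then strips it, which also demonstrates why such labels are fragile. The key name `ai_origin` and the label value are hypothetical, not Google's actual schema.

```python
import struct
import zlib

def chunk(ctype, data):
    """Build one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    body = ctype + data
    return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))

def make_png_with_label(key, value):
    """Create a minimal 1x1 white PNG carrying a tEXt provenance label."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit RGB
    scanline = b"\x00\xff\xff\xff"                        # filter byte + one pixel
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"tEXt", key + b"\x00" + value)       # keyword NUL text
            + chunk(b"IDAT", zlib.compress(scanline))
            + chunk(b"IEND", b""))

def read_labels(png_bytes):
    """Walk the chunk list and collect all tEXt key/value pairs."""
    labels, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        if ctype == b"tEXt":
            k, _, v = png_bytes[pos + 8:pos + 8 + length].partition(b"\x00")
            labels[k.decode()] = v.decode()
        pos += 12 + length  # length + type + data + CRC
    return labels

def strip_labels(png_bytes):
    """Re-emit the file without tEXt chunks -- the label is gone."""
    out, pos = png_bytes[:8], 8
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        end = pos + 12 + length
        if png_bytes[pos + 4:pos + 8] != b"tEXt":
            out += png_bytes[pos:end]
        pos = end
    return out

png = make_png_with_label(b"ai_origin", b"trainedAlgorithmicMedia")
print(read_labels(png))                 # label survives a round trip
print(read_labels(strip_labels(png)))   # but one rewrite removes it
```

The last two lines show both sides of the scheme: a search engine can read the label from a cooperating file, but anyone who re-encodes the image can drop it, which is exactly the limitation the metadata approach faces.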
Since metadata can later be removed or altered, these efforts are not foolproof. Still, they represent a noteworthy, high-profile attempt to address the problem of online deepfakes.
As more images are produced or enhanced by AI over time, the distinction between "real" and "fake" may blur, shaped by shifting cultural values. Which information we accept as a reliable representation of reality (regardless of how it was produced) may then depend, as it always has, on our trust in the source. A source's credibility therefore remains crucial despite the rapid advancement of the technology, and in the interim, technical measures like Google's can help us gauge that credibility.