Content Authentication – Photographs vs AI-Generative Images
By Sheldon Boles
Artificial Intelligence generated image in Midjourney by Dr. Robert Ito.
Introduction
Generative AI is creating highly realistic synthetic images, presenting challenges for photographic organizations to distinguish these from real photographs.
This article offers guidance on detecting AI-generated images through two methods: Subjective Image Assessment and Objective Technical Analysis.
Key Generative AI Terms
This article introduces the following terms, defined below:
Text-to-Image – creating images algorithmically from text prompts without original photographs.
Image In-painting – a system feature that realistically fills in missing image elements, or adds a requested element drawn from the AI system’s dataset of images scraped from the internet.
Image Out-painting – expanding the visual contents of images beyond their original dimensions.
Subjective Image Assessment
This type of analysis relies on human judgement to visually identify inconsistencies in AI images, such as:
- The image appears excessively spectacular and lacks natural imperfections.
- The image appears unrealistic.
“Unrealistic and Distorted Face” – Dream Studio AI generated by Sheldon Boles
- Inconsistencies in the direction of lighting, shadows, or reflections, such as mismatched reflections in the eyes.
“Inconsistent Lighting” – NightCafe AI generated by Sheldon Boles.
- A landscape that is not representative of an actual geographical location.
“Unrealistic Location” – generated by Dream Studio AI – Sheldon Boles
- A close-up examination may reveal human physical oddities, such as distorted faces, the absence of blood vessels in the eyes, irises and pupils that are not circular, excessively smooth skin, or unnatural renderings of fingers, toes, or legs.
“No blood veins in Eyes” – AI-generated image by Sheldon Boles
“Unrealistic human features” – AI generated image by Sheldon Boles
- Closely examine all the details within an image to identify unrealistic discrepancies and strange artifacts.
“Oddities contained in image” – AI generated image by Sheldon Boles.
- Motor vehicles may seem unrealistic, with additional clues being nonsensical license plates and vehicle lettering.
“Unrealistic vehicle and nonsensical letter on license plate” – AI generated image by Sheldon Boles.
Many of these imperfections can be corrected by refining the generative AI prompt, employing negative prompts, and then regenerating the image.
One major challenge for generative AI systems is rendering text within an image that accurately reflects the intended content. Current methods to address this challenge include:
Using AI techniques such as in-painting to replace the generated text.
Specifying the exact text in double-quotations within the AI prompt.
Using a post-processing application to insert the required text into the generated image.
AI image of a movie marquee generated in Dream Studio AI by Sheldon Boles.
Humans usually excel at identifying errors or logical inconsistencies in images because of their innate ability to recognize visual patterns. However, human performance is highly variable, depends on personal experience and diligence, and cannot be relied upon in many instances to recognize AI images. As the technology progresses, subjective image analysis alone will make it progressively harder to differentiate AI-generated images from photographs. More reliable, objective methods are needed; such methods are described in the next section.
Objective Technical Analysis
These technical assessments evaluate images using impartial, quantitative metrics to identify patterns, attributes, and anomalies, examining distinctive features, color values, resolution, and noise. Other tools include image metadata analysis and reverse image searches. All of these processes use standardized techniques for evaluating images.
Two categories of AI generative images were analyzed:
Generative AI text-to-image
Photograph enhanced with Adobe Photoshop 2024’s generative AI features (Generative Fill to add or remove elements, and Expand to extend an image beyond its original boundaries)
Specific impartial metrics were used to analyze each of these two categories.
Generative AI text-to-image
For these images, two AI classifiers were selected: AI or Not and Hive AI Detection websites.
Both of these websites function as machine learning services, analyzing the surface content of images to determine whether they originated from a photograph or were generated by an AI system.
To evaluate the accuracy of these systems in detecting AI-generated text-to-image creations, an image dataset was compiled consisting of 900 images rendered by nine different AI text-to-image systems and 100 camera-captured images from our CAPA 2020–2021 competitions:
- Adobe Firefly 2 text-to-image (100)
- Dream Studio (100)
- Meta AI (100)
- NightCafe (100)
- Stable Diffusion (100)
- Bing AI (100)
- Leonardo AI (100)
- Midjourney (100)
- SeaArt (100)
- Photographs (100) from a wide range of genres
With the exception of Adobe Firefly text-to-image creations, both detection systems exhibited a high degree of accuracy in correctly identifying photographs and generative AI text-to-image creations:
AI or Not – average reliability 97%
Hive AI Detection – average reliability 99%
Hive AI Detection’s analysis of a Midjourney AI generated image.
However, these systems faced challenges in identifying Adobe Firefly text-to-image rendered images:
AI or Not – reliability 36%
Hive AI Detection – reliability 63%
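For readers who wish to reproduce this kind of tally, the following is a minimal sketch of how per-source and overall reliability could be computed from a spreadsheet of classifier verdicts. The CSV file name and column names are hypothetical illustrations, not part of the testing described above.

```python
# Hedged sketch: tally per-source and overall reliability from a hypothetical
# CSV with columns "source", "truth" and "verdict" (e.g. "ai" or "photo").
import csv
from collections import defaultdict

correct = defaultdict(int)
total = defaultdict(int)

with open("detector_results.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        total[row["source"]] += 1
        if row["verdict"] == row["truth"]:
            correct[row["source"]] += 1

for source in sorted(total):
    print(f"{source}: {100 * correct[source] / total[source]:.0f}% reliability")

overall = 100 * sum(correct.values()) / sum(total.values())
print(f"overall: {overall:.0f}%")
```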
Both detection systems likely encountered challenges in identifying Adobe Firefly AI images because of Firefly’s unique approach:
- The primary dataset comprises genuine Adobe Stock photographs, distinguishing it from other AI generative systems that scrape elements from billions of internet photographs.
- The dataset also includes publicly available images that are no longer under copyright restrictions, along with images from other external sources.
Combining diverse photograph sources into a unified dataset, along with leveraging Adobe’s machine learning algorithms, allows Firefly to “automatically enhance image quality, remove imperfections, and even transform ordinary images into extraordinary works of art.”¹
Despite their photorealism, all Firefly images lacked camera data and contained minimal metadata tag details, and some showed AI inconsistencies upon closer inspection. A combination of subjective and objective analysis may be required to identify such AI-generative images.
¹Usmani, Hisham. Unlocking Enhanced Experiences with Adobe Sensei AI and Machine Learning, 11 Aug. 2023, medium.com/@hishamusmani/unlocking-enhanced-experiences-with-adobe-sensei-ai-and-machine-learning-4871625b430.
Photograph enhanced with Adobe’s Generative AI features
The most recent release of the 2024 version of Photoshop contains two AI-generative features:
Generative Fill (in-painting) – a generative AI tool for non-destructive image editing that populates or removes part of an image with realistic AI-generated elements, with or without a text prompt.
Expand (out-painting) – uses generative AI to seamlessly augment photographs beyond their original dimensions, intelligently generating additional visual content that naturalistically matches the existing scenery.
In parallel with this introduction, Adobe also introduced “Content Credentials in Photoshop”², which states:
Adobe automatically applies Content Credentials to assets generated with Adobe Firefly features, such as Text to Image, Text Effects, Generative Fill, Generative Recolor, and Text to Vector Graphic. This is done as part of Adobe’s commitment to supporting transparency around the use of generative AI.
The following non-personal identifiable information is always included in Content Credentials automatically applied to assets generated with Adobe Firefly features:
- Output thumbnail: visual thumbnail of the output; displays only for outputs generated using the Text to Image feature in the Firefly web app
- Issuer: Adobe Inc., the organization responsible for issuing the Content Credential
- Content summary: a notice that Adobe generative AI was used in the creation of the asset
- App or device used: the Adobe software application or hardware device used to produce the asset
- AI tool used: the Adobe generative AI tool used
- Actions: the general editing and processing actions taken to produce the asset. Only “Created” or “Other edits” actions will be listed for assets generated with Adobe Firefly features.
² Content Credentials in Adobe Firefly, Adobe Corporation, 10 Oct. 2023, helpx.adobe.com/firefly/using/content-credentials.html.
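Content Credentials can also be inspected locally rather than through Adobe’s website. Below is a hedged sketch that calls the open-source c2patool command-line utility from Python to dump an image’s C2PA manifest; it assumes c2patool is installed and on the PATH, the file name is hypothetical, and the exact output layout should be verified against the current c2patool documentation.

```python
# Hedged sketch: dump an image's Content Credentials (C2PA manifest) locally
# using the open-source c2patool CLI (assumed to be installed and on PATH).
import json
import subprocess

def read_manifest(path):
    """Return the C2PA manifest report for `path` as a dict, or None if absent."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None  # no Content Credentials found, or c2patool reported an error
    return json.loads(result.stdout)

manifest = read_manifest("enhanced_photo.jpg")  # hypothetical file name
print("Content Credentials present" if manifest else "No Content Credentials")
```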
For the purpose of this testing, a series of photographs were imported into Photoshop 2024. Each photograph was enhanced using one of the Firefly AI features (Generative Fill/remove & Expand). Various saving and exporting options were systematically tested.
Each original photograph and its enhanced version were analyzed by:
Using the File Info feature in the dropdown menu of Photoshop
Employing a metadata reader (the metadata2go website) to scrutinize the detailed metadata of each image (a scripted local alternative is sketched after this list)
Verifying the authenticity of each photograph and image using Adobe’s Content Credentials website
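As a scripted counterpart to the manual metadata check above, here is a minimal sketch (assuming the Pillow library) that reports whether an image file still carries basic camera-related EXIF tags; the file name is a hypothetical placeholder.

```python
# Minimal sketch (assuming Pillow is installed): report camera-related EXIF
# tags from an image's primary IFD. AI-generated or metadata-stripped files
# typically return an empty result here.
from PIL import Image
from PIL.ExifTags import TAGS

def camera_tags(path):
    exif = Image.open(path).getexif()  # primary (0th) IFD
    names = {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}
    wanted = ("Make", "Model", "Software", "DateTime")
    return {name: names[name] for name in wanted if name in names}

tags = camera_tags("example.jpg")  # hypothetical file name
print("Camera data present:" if tags else "No camera data found:", tags)
```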
The analysis of the original and enhanced photographs revealed the following:
All original photographs’ camera data (camera information & shot information) was displayed in the File Info details (found in the File dropdown menu of Photoshop 2024)
All original photographs’ metadata contained camera data (camera & shot information) and a listing of other details.
Adobe’s Content Credentials website analyzed these photographs and displayed “No Content Credentials” because no Copyright and Contact Info was established in the Photoshop 2024 application for this testing.
All enhanced photographs’ (Generative Fill, removal of element(s) & Expand) camera data (camera information & shot information) metadata tags were blank
None of the enhanced photographs’ metadata contained camera data, but each did include:
– Software Agent: Adobe Firefly
– Title: Generated Image
– Claim Generator: Adobe_Photoshop/25.1.0 (build 20231016.r.120 ca99df2; mac) adobe_c2pa/0.7.6 c2pa-rs/0.25.2
– Claim Generator Info Name: Adobe_Photoshop
– Claim Generator Info Version: 5.1.0 (build 20231016.r.120 ca99df2; mac)
- The Title tag “Generated Image” and its associated tags were added to all enhanced photographs’ metadata, regardless of whether the “Export to” metadata setting was set to “None” or “Copyright and Contact Info.”
- Adobe’s Content Credentials webpage analyzed all the enhanced photographs and displayed “This image combines multiple pieces of content. At least one was generated with an AI tool.”
The above analysis shows that when Photoshop’s new AI-generative features are used, the metadata is altered. Adobe has implemented a Content Credentials system that strips camera details and adds tags such as “Generated Image” to indicate AI editing.
These AI tags can be viewed using metadata tools such as metadata2go.com. Uploading an image to Adobe’s Content Credentials website will indicate whether AI-generative features have been applied.
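For a local alternative to metadata2go.com, the sketch below runs the exiftool command-line program (assumed to be installed) and searches its output for the AI-related values observed in this testing. Tag names can vary between tools and versions, so matching on the values is a pragmatic check rather than a definitive one, and the file name is hypothetical.

```python
# Hedged sketch: dump metadata with exiftool (-a shows duplicate tags, -G1
# shows group names) and look for the AI-related values reported above.
import subprocess

AI_MARKERS = ("Adobe Firefly", "Generated Image", "adobe_c2pa")

def ai_markers_in_metadata(path):
    dump = subprocess.run(["exiftool", "-a", "-G1", path],
                          capture_output=True, text=True).stdout
    return [marker for marker in AI_MARKERS if marker in dump]

hits = ai_markers_in_metadata("enhanced_photo.jpg")  # hypothetical file name
print("AI indicators found:" if hits else "No AI indicators found:", hits)
```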
AI image generated using Dream Studio AI by Sheldon Boles.
Possible Area Of Confusion For Photographers
A key concern in photo post-processing is photographers removing dust spots or unwanted objects. As this article outlines, using Photoshop’s Generative Fill tool to remove an object or element will result in the Adobe Content Credentials algorithm processing the image as a “Generated Image” and modifying its metadata tags.
To address this issue, the following guidelines may be necessary for inclusion in photo competition specifications:
- Images that have been created or modified using specific AI generative techniques – in-painting (where AI fills in generated elements not captured by the photographer or removes elements from a photograph) or out-painting (where AI extends the image beyond its original boundaries using generated elements) – whether with or without text prompts, are not permitted for submission into our photo competitions. For example, these provisions apply to Photoshop 2024’s AI Generative Fill, removal of element(s), and AI Expand.
- For all competitions, participants must retain the original image (un-retouched JPG or RAW file) with metadata intact and carry out any editing on a duplicate of the original. This ensures the preservation of the original photograph’s metadata details and the integrity of the original image content. When requested by the competition coordinator, the participant must provide a copy of the original image. Failure to provide the original image will result in having the submitted image removed from the competition.
Photographic organizations could consider advising members to refrain from using Photoshop 2024’s Generative Fill for removing unwanted objects or elements. Entrants should be encouraged to use Photoshop’s Content-Aware Fill feature as an alternative.
For potential winning images, photography organizations could require the respective photographers to submit their original, unedited photographs. Both the original and the submitted photos would then be analyzed to determine what elements have been added or removed. The original photo’s metadata should contain camera details and other relevant information.
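As a rough first pass at the comparison described above, the following sketch (assuming Pillow, hypothetical file names, and images of identical dimensions) highlights where a submitted image’s pixels differ from the original. Real adjudication would still require visual review, since legitimate edits such as exposure adjustments also register as differences.

```python
# Hedged sketch: locate pixel-level differences between an original photo and
# the submitted version, and save a difference map for visual review.
from PIL import Image, ImageChops

original = Image.open("original.jpg").convert("RGB")    # hypothetical file names
submitted = Image.open("submitted.jpg").convert("RGB")

diff = ImageChops.difference(original, submitted)
bbox = diff.getbbox()  # bounding box of all non-zero (changed) pixels, or None

if bbox is None:
    print("No pixel-level differences detected")
else:
    print("Changed region (left, upper, right, lower):", bbox)
    diff.save("difference_map.png")  # brighter areas mark larger changes
```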
These suggestions aim to strike a balance between encouraging creative editing and maintaining the authenticity and transparency of images submitted to photo competitions.
Conclusion
The article highlights practical techniques for identifying whether an image was created or altered using generative AI. Adobe has taken the lead in developing the Content Authenticity Initiative, an open standard for certifying the source and history of media content. Adobe’s implementation of this standard was demonstrated in this article.
The Coalition for Content Provenance and Authenticity supports Adobe’s open standards, which will likely be adopted by the US government. As AI detection improves, new techniques will supplement rather than replace the techniques outlined in this article.
In early 2024, we plan to apply this methodology to other applications with AI image generation and editing capabilities, including Amazon’s Titan, Canva, Imagn, Luminar Neo 2023, Pixlr, and more. Rigorous testing of AI abilities remains important as the technology advances.
Sheldon Boles – FCAPA
CAPA Director of Competitions