A wildlife photographer submits a competition entry. The image is sharp, well-composed, perfectly lit. Within hours, commenters on social media declare it fake. The contest organizers ask for proof of authenticity. The photographer has the original file on a hard drive somewhere, but no formal evidence, no documentation, no chain of custody. This scenario plays out with increasing frequency across contests, stock libraries, newsrooms, and client deliveries. The question "is this AI?" has become routine. Photographers need a concrete answer.
The Burden of Proof Has Shifted
Not long ago, photographs were assumed real unless someone demonstrated otherwise. That default no longer holds. Generative models from Midjourney, DALL-E, Stable Diffusion, and Flux now produce images that fool casual viewers and, in documented cases, expert judges. Boris Eldagsen's AI-generated image won the Creative category at the 2023 Sony World Photography Awards before he revealed the deception. The judges, professionals with decades of experience, could not tell.
The consequence for working photographers is straightforward: your word is no longer enough. Stock agencies now require transparency around AI: Adobe Stock asks contributors to declare whether submissions involve AI, and Shutterstock prohibits AI-generated contributor uploads entirely. Photo contests from the Pulitzer Prize to Wildlife Photographer of the Year require original camera files for verification. Editorial clients want assurance before publication. Insurance companies want proof before accepting photographic evidence of damage.
This is not a commentary on trust or a philosophical debate about the nature of photography. It is a practical problem. When someone questions whether your image is real, you need to produce evidence, not arguments. The rest of this guide explains what that evidence looks like and how to create it.
The RAW File as Primary Evidence
The strongest evidence a photographer possesses is the RAW file. A RAW file contains unprocessed sensor data: the Bayer pattern mosaic from the color filter array, sensor-specific noise characteristics, and device metadata embedded in the file structure at the moment of capture. This data is a direct record of photons hitting silicon. It is to a digital photograph what a negative was to a film image.
AI generators do not produce RAW files. They output rasters: PNGs, JPEGs, WebPs. The generation process works by iteratively denoising random data through a neural network. There is no sensor, no lens, no Bayer pattern, no optical path. The output is a flat pixel grid with none of the internal structure that a genuine camera file contains.
Could someone create a fake RAW wrapper around synthetic content? In theory, yes. RAW formats like Adobe DNG are documented, and it is possible to construct a file that opens in Lightroom. But the internal data will not exhibit the characteristics of genuine sensor output. There will be no authentic Bayer CFA interpolation artifacts, no Photo-Response Non-Uniformity (PRNU) noise fingerprint matching a real sensor, no plausible shot noise distribution. A forensic comparison will expose the fabrication.
The practical advice is simple. Always shoot RAW+JPEG. The JPEG is your deliverable. The RAW is your receipt. Archive your RAW files with their original timestamps intact. Do not rename them in ways that strip creation dates. Do not delete them after export. Store them on redundant media. Treat them the way a business treats financial records, because in a dispute over authenticity, the RAW file is your primary defense.
What RAW Verification Actually Checks
Holding onto a RAW file is necessary but not sufficient. The file must be matched against the finished JPEG through a series of forensic comparisons, each examining a different property of both images. The checks are independent of one another, and an image must pass all of them to be considered verified.
Sensor authenticity analysis examines whether the RAW file exhibits characteristics consistent with genuine camera hardware. This includes checking for Bayer CFA patterns that result from real demosaicing, PRNU noise fingerprints unique to a specific sensor, and noise profiles that match the claimed camera model and ISO setting. AI-generated content and computationally fabricated RAW files fail these checks because they lack the physical signatures of light capture.
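To illustrate the idea behind PRNU matching, here is a deliberately simplified sketch in Python. It uses an additive noise model and a Gaussian filter as the denoiser; real PRNU pipelines model the pattern multiplicatively and use wavelet denoising, and every name and constant here is hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """High-frequency residual: the image minus a smoothed estimate.
    (A production pipeline would use a wavelet denoiser instead.)"""
    return img - gaussian_filter(img, sigma=1.5)

def prnu_correlation(img, fingerprint):
    """Normalized correlation between an image's residual and a sensor fingerprint."""
    r = noise_residual(img)
    r = (r - r.mean()) / (r.std() + 1e-12)
    f = (fingerprint - fingerprint.mean()) / (fingerprint.std() + 1e-12)
    return float((r * f).mean())

rng = np.random.default_rng(0)
sensor_a = rng.normal(0, 1, (64, 64))  # fixed per-pixel pattern of "sensor A"
sensor_b = rng.normal(0, 1, (64, 64))  # a different sensor

def shoot(pattern):
    """Simulate a capture: smooth scene + sensor pattern + shot noise."""
    scene = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))
    return scene + 0.1 * pattern + rng.normal(0, 0.1, (64, 64))

# Fingerprint of sensor A: average residual over several of its captures.
fingerprint_a = np.mean([noise_residual(shoot(sensor_a)) for _ in range(8)], axis=0)

same = prnu_correlation(shoot(sensor_a), fingerprint_a)   # clearly positive
other = prnu_correlation(shoot(sensor_b), fingerprint_a)  # near zero
```

The point of the sketch: an image from the claimed sensor correlates with that sensor's fingerprint, while an image from any other source, including a generator that never had a sensor, does not.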
Structural similarity measurement compares the visual content of the JPEG to a normalized rendering of the RAW. The system accounts for legitimate edits (exposure adjustments, color grading, cropping) while checking that the underlying image content corresponds between the two files. This is measured through perceptual metrics that quantify how closely the JPEG matches what the RAW data would produce.
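A minimal sketch of the kind of perceptual metric involved, computing SSIM over the whole image at once. Real implementations average SSIM over local windows, and the images and variable names here are purely illustrative:

```python
import numpy as np

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM for images normalized to [0, 1]. Production metrics
    compute this in sliding local windows and average the result."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
raw_render = rng.random((128, 128))               # stand-in for a normalized RAW rendering
edited = np.clip(raw_render * 0.9 + 0.05, 0, 1)   # a mild global tone edit
unrelated = rng.random((128, 128))                # content that never came from this RAW

s_same = global_ssim(raw_render, raw_render)   # exactly 1.0
s_edit = global_ssim(raw_render, edited)       # high: a legitimate edit
s_unrel = global_ssim(raw_render, unrelated)   # near zero: no shared structure
```

A legitimate edit keeps the score high because the underlying structure survives the tone change; substituted content does not.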
Histogram analysis compares the statistical distributions of color and luminance values between the RAW and JPEG. A genuine edit creates a plausible mathematical relationship between the two histograms. If the JPEG's color distribution cannot be explained as a transformation of the RAW's distribution, something is wrong.
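One way to sketch this idea: estimate a global tone curve by quantile matching, then measure how much of the JPEG's distribution that curve fails to explain. This is a simplified illustration under stated assumptions, not the actual verification algorithm:

```python
import numpy as np

def tone_curve_residual(src, dst, n=256):
    """Fit a monotone tone curve from src to dst by matching quantiles, apply
    it to src, and return the mean absolute error of the prediction. A genuine
    global edit leaves a small residual; unrelated content does not."""
    qs = np.linspace(0, 1, n)
    xs, ys = np.quantile(src, qs), np.quantile(dst, qs)
    predicted = np.interp(src, xs, ys)
    return float(np.mean(np.abs(predicted - dst)))

rng = np.random.default_rng(2)
raw_lum = rng.random(10_000)          # stand-in for RAW luminance values
edited = raw_lum ** 0.45              # a gamma adjustment, as an editor might apply
unrelated = rng.random(10_000) ** 0.45

r_edit = tone_curve_residual(raw_lum, edited)      # tiny: edit is explainable
r_unrel = tone_curve_residual(raw_lum, unrelated)  # large: no plausible curve
```

The gamma-adjusted histogram is fully explained by a single monotone curve; an image with no pixel-level relationship to the RAW is not, even when its overall distribution looks similar.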
Metadata consistency checks compare EXIF fields across both files: camera model, lens identifier, ISO, shutter speed, aperture, focal length, and timestamp. These values should align. A JPEG claiming to come from a Canon EOS R5 paired with a RAW file from a Nikon Z9 is an obvious mismatch, but subtler inconsistencies (implausible lens and body combinations, timestamps that don't align) also raise flags.
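A toy version of such a consistency check, with hypothetical field names and plain dictionaries standing in for parsed EXIF:

```python
from datetime import datetime, timedelta

# Illustrative field list; real code would read these with an EXIF library.
MUST_MATCH = ("Make", "Model", "LensModel", "ISO", "ExposureTime", "FNumber")

def metadata_mismatches(raw_exif, jpeg_exif, max_skew=timedelta(minutes=5)):
    """Return the fields whose values disagree between RAW and JPEG,
    treating timestamps within max_skew as consistent."""
    issues = [k for k in MUST_MATCH if raw_exif.get(k) != jpeg_exif.get(k)]
    fmt = "%Y:%m:%d %H:%M:%S"  # EXIF DateTimeOriginal format
    t_raw = datetime.strptime(raw_exif["DateTimeOriginal"], fmt)
    t_jpg = datetime.strptime(jpeg_exif["DateTimeOriginal"], fmt)
    if abs(t_raw - t_jpg) > max_skew:
        issues.append("DateTimeOriginal")
    return issues

raw = {"Make": "Canon", "Model": "Canon EOS R5", "LensModel": "RF24-70mm",
       "ISO": 400, "ExposureTime": "1/250", "FNumber": 2.8,
       "DateTimeOriginal": "2024:06:01 10:15:00"}
jpeg = dict(raw, Model="NIKON Z 9")  # the mismatched-body case from the text

flagged = metadata_mismatches(raw, jpeg)  # ['Model']
```

A real check would also validate plausibility (does this lens mount on this body, is this ISO available on this model), which requires a database of camera specifications rather than simple equality.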
Recapture detection looks for signs that the image was photographed from a screen rather than captured from a real scene. This includes moiré patterns from the interference between screen pixel grids and camera sensor pixels, doubled tone curves from the image passing through two display pipelines, and a flat focal plane inconsistent with the supposed three-dimensional depth of the scene.
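The moiré component of this can be sketched with a Fourier transform: the periodic pixel grid of a photographed screen shows up as sharp peaks in the image's spectrum. A simplified, assumption-laden illustration:

```python
import numpy as np

def spectral_peak_ratio(img):
    """Strongest non-DC Fourier peak relative to the median spectral magnitude.
    A photographed screen's periodic pixel grid produces sharp spectral peaks."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = f.shape
    f[h//2-2:h//2+3, w//2-2:w//2+3] = 0  # suppress the low-frequency region
    return float(f.max() / (np.median(f) + 1e-12))

rng = np.random.default_rng(3)
natural = rng.normal(0.5, 0.05, (128, 128))        # stand-in for ordinary sensor noise
x = np.arange(128)
grid = 0.05 * np.cos(2 * np.pi * x / 4)[None, :]   # a screen-pixel-like periodic pattern
recaptured = natural + grid

r_nat = spectral_peak_ratio(natural)      # modest: no dominant frequency
r_recap = spectral_peak_ratio(recaptured) # large: sharp grid-frequency peak
```

Real detectors are far more involved (the grid frequency is unknown and varies with distance and angle), but the underlying signal is this kind of spectral periodicity.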
Face region integrity analysis checks whether faces in the JPEG match faces in the RAW. If the image contains people, this comparison ensures that faces have not been swapped, composited, or synthetically generated between the RAW capture and the final output.
The strength of this approach lies in its multiplicity. Each check examines a different dimension of the image. Fooling one is possible. Fooling all of them simultaneously, while also producing a RAW file that passes sensor authenticity checks, is a far more difficult proposition than defeating a single AI classifier.
C2PA Content Credentials
Verification establishes that a JPEG derives from a genuine camera file. The next step is recording that verification in a way that travels with the image and can be inspected by anyone who encounters it downstream. This is what C2PA content credentials provide.
C2PA (Coalition for Content Provenance and Authenticity) is an open technical standard developed by Adobe, Microsoft, Intel, and others for embedding provenance information into digital files. A C2PA manifest is a tamper-evident digital certificate attached to the image. It records what verification was performed, what the results were, who performed the signing, and when. Unlike EXIF metadata, which can be edited with freely available tools like ExifTool, a C2PA manifest is cryptographically protected. Any modification to the image or the manifest after signing breaks the signature.
For a photographer, a C2PA credential transforms "I say this photo is real" into "here is signed evidence that this photo passed forensic verification against its original camera source file." The credential does not merely assert authenticity. It documents the evidence for that assertion and locks it to the file with cryptography.
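The tamper-evidence property can be illustrated with a short sketch. Note that the real C2PA standard uses public-key signatures (COSE structures with X.509 certificates), not the symmetric HMAC used here; this toy version only demonstrates that binding a signature to the image hash makes any post-signing change detectable:

```python
import hashlib, hmac, json

KEY = b"signer-secret"  # stand-in; C2PA uses public-key (X.509 / COSE) signing

def sign(image_bytes: bytes, manifest: dict) -> str:
    """Bind the manifest to the exact image bytes: the signed payload includes
    the image hash, so changing either one changes the signature."""
    payload = json.dumps({"image_sha256": hashlib.sha256(image_bytes).hexdigest(),
                          "manifest": manifest}, sort_keys=True)
    return hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, manifest: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(image_bytes, manifest), signature)

image = b"\xff\xd8...jpeg bytes..."
manifest = {"verified_by": "example-verifier", "checks_passed": 6}
sig = sign(image, manifest)

ok = verify(image, manifest, sig)                    # True
tampered = verify(image + b"\x00", manifest, sig)    # False: one byte changed
```

Editing a single pixel, or a single manifest field, invalidates the signature, which is exactly the property that makes the credential trustworthy without trusting whoever handed you the file.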
The credential is inspectable. Anyone receiving the image can check the manifest using tools like Content Credentials Verify or the inspection features built into platforms that support C2PA. Google now surfaces C2PA data in its "About this image" feature across Google Images. As adoption grows, these credentials will become the standard way to communicate photographic provenance.
This matters because photographs move through many hands. An image might travel from photographer to editor to publisher to social media to archive. At each step, someone might ask whether it is genuine. A C2PA credential provides the answer without requiring the photographer to be present to vouch for it.
Building a Verification Workflow
The best time to verify an image is before anyone questions it. Pre-emptive verification, performed as part of the export process, is faster and more convincing than scrambling to assemble proof after an accusation.
For photographers who work in Adobe Lightroom, the Lumethic Lightroom plugin integrates verification into the export workflow. When you export a JPEG, the plugin submits both the JPEG and the corresponding RAW to the verification pipeline. If the image passes, it receives a C2PA credential before it leaves your system. The verified file is ready for submission to contests, stock libraries, or clients without additional steps.
For photographers who prefer a web-based workflow, Lumethic's verification platform accepts paired JPEG and RAW uploads directly. The process runs the same forensic checks and returns a verification report with a signed JPEG.
For mobile photographers, the Lumethic Capture iOS app creates verified images at the point of capture. The app records provenance data at the moment the photo is taken, establishing a chain of custody that begins at the sensor.
Not every image needs verification. The process is most valuable for high-stakes images: contest entries, editorial submissions to publications, stock photography uploads, and client deliveries in contexts where authenticity matters (legal documentation, insurance claims, real estate). For casual social media posts, verification is optional but increasingly useful as platforms begin displaying provenance information.
When you verify an image, keep three files together: the original RAW, the signed JPEG with its C2PA credential, and the verification report. Store them in the same folder or project archive. If a question arises months or years later, your evidence is organized and accessible.
The difference between verifying proactively and verifying reactively is significant. A photographer who submits a contest entry with a pre-existing C2PA credential demonstrates forethought and professionalism. A photographer who is asked to prove authenticity after the fact, and must then locate a RAW file, upload it, wait for verification, and send the results, is operating from a defensive position. The evidence may be identical, but the impression is different.
What Does Not Work
Several methods that seem like they should prove authenticity do not hold up under scrutiny.
Showing your Lightroom catalog is not proof. Catalogs can be reconstructed. They record editing history, but they do not independently verify that the underlying image originated from a camera sensor. A Lightroom catalog containing an AI-generated image looks identical to one containing a genuine photograph.
Pointing to EXIF metadata is not proof. EXIF data is trivially editable. ExifTool, a free command-line utility, can set any EXIF field to any value. Camera model, GPS coordinates, timestamp, lens information: all of it can be fabricated in seconds. EXIF data is useful as one input to a larger verification process, but alone it proves nothing.
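To see just how trivial this is, here is a sketch using the Pillow imaging library (assuming it is installed) that stamps a blank, camera-free image with a fabricated camera make and model:

```python
import os, tempfile
from PIL import Image

# Build a blank image, then claim it came from a professional camera body.
img = Image.new("RGB", (64, 64), "gray")
exif = Image.Exif()
exif[0x010F] = "Canon"           # EXIF Make tag
exif[0x0110] = "Canon EOS R5"    # EXIF Model tag

path = os.path.join(tempfile.mkdtemp(), "fabricated.jpg")
img.save(path, exif=exif)

stamped = Image.open(path).getexif()
claimed_model = stamped[0x0110]  # "Canon EOS R5"
```

A few lines of code, no camera involved. This is why EXIF on its own can only ever be one input to verification, never the verdict.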
Showing a GPS location on Google Maps is not proof. The GPS coordinates in EXIF data are just as editable as any other field. An AI-generated image of the Eiffel Tower can carry EXIF data claiming it was captured at 48.8584 N, 2.2945 E. The coordinates and the image have no verifiable connection.
Running the image through an AI detector is not proof. As documented in detail in How to Tell If a Photo Is AI-Generated, AI detection tools return probability scores, not evidence. Different detectors give different results for the same image. A "95% real" score from one tool means nothing if another tool returns "60% AI." Detectors are classifiers trained on specific datasets, and they are locked in a permanent arms race with improving generative models. A probability score is not admissible evidence in any meaningful sense.
Arguing that the photo "looks real" is not proof. Visual inspection is precisely the method that failed when Boris Eldagsen's AI image won at Sony World Photography Awards. Human judgment is unreliable for distinguishing high-quality synthetic images from photographs. If expert judges cannot do it consistently, asserting that an image is "obviously real" based on appearance carries no weight.
The common thread across all of these methods is that they are assertions or opinions. None of them produce verifiable, independently auditable evidence. Verification requires comparing the finished image to its source material through forensic analysis. Everything else is commentary.
Frequently Asked Questions
What if I only shoot JPEG, not RAW? Without a RAW file, the strongest form of verification is unavailable. A JPEG-only image cannot be compared against unprocessed sensor data because that data was never preserved. If you shoot JPEG only, consider switching to RAW+JPEG. The storage cost is modest relative to the evidential value. For images already captured as JPEG only, C2PA signing at the point of capture (using cameras with built-in C2PA support, like the Leica M11-P or recent Sony Alpha bodies) provides an alternative chain of custody, but it requires hardware that supports the standard.
Can someone fake a RAW file? It is technically possible to construct a file in a documented RAW format like Adobe DNG. Opening such a file in editing software would not immediately reveal the fabrication. But forensic verification examines the internal data, not just the file wrapper. A fabricated RAW will lack genuine Bayer CFA interpolation artifacts, authentic PRNU noise patterns, and plausible sensor noise distributions. These characteristics are byproducts of physical light capture and are extremely difficult to simulate convincingly enough to pass multi-factor forensic analysis.
Does verification work with smartphone photos? Yes. Smartphones produce files with sensor data, EXIF metadata, and noise characteristics just as traditional cameras do. Some smartphones shoot in RAW formats (Apple ProRAW, Samsung Expert RAW), which enables full RAW-to-JPEG verification. For smartphones that output only HEIC or JPEG, verification options are more limited, but metadata analysis and recapture detection still apply.
How long does verification take? Automated verification through platforms like Lumethic typically completes in under a minute. The process is computationally intensive (it runs multiple independent forensic checks in parallel) but is designed for practical use within an export or submission workflow. Batch processing through an API takes proportionally longer depending on volume.
What happens to my RAW file after verification? On Lumethic's platform, the RAW file is used for analysis and then deleted. It is never stored permanently on Lumethic's servers. This is a deliberate design decision. The RAW file is the photographer's most sensitive asset, containing the full unprocessed capture, and any system that handles it must do so responsibly. The verification results and cryptographic hashes are preserved in the C2PA manifest, but the RAW data itself is not retained.
Do I need to verify every photo I take? No. Verification is most valuable for images where authenticity may be questioned or where proof of authenticity adds tangible value. Contest entries, editorial submissions, stock uploads, legal and insurance documentation, and client deliveries in sensitive contexts are all strong candidates. For personal or casual use, verification is optional. The free tier on Lumethic includes five verifications per month, which is typically enough to cover a photographer's highest-value outputs.