Technical Documentation

Version 1.0

Photo Verification: Detection Techniques for Identifying Fake and Manipulated Images

Abstract

This document describes the technical approaches used to verify the authenticity of photographs. Lumethic verification employs multiple independent techniques that exploit fundamental properties of digital image sensors, image processing pipelines, and physical phenomena to detect fake, manipulated, or recaptured images. By combining sensor physics analysis, perceptual comparison, and artifact detection, it builds a defense-in-depth approach that is robust against sophisticated manipulation attempts.


1. Introduction

1.1 The Verification Challenge

Digital image manipulation has become increasingly sophisticated. Modern tools can alter photographs in ways that are imperceptible to human observers, while generative AI can create entirely synthetic images. For applications requiring proof that a photograph authentically represents a real scene captured at a specific time and place, robust verification is essential.

The fundamental question the verification system answers is: "Was this image genuinely captured by a camera sensor from a real scene, and has it been preserved without substantive manipulation?"

1.2 The RAW File Advantage

The verification system leverages RAW image files as a ground-truth reference. Unlike processed JPEGs, RAW files contain minimally processed data directly from the camera sensor. This data has distinct physical and statistical properties that synthetic or manipulated sources fail to reproduce. By comparing a submitted JPEG against its purported RAW source, Lumethic verification can detect manipulations that would be invisible when examining the JPEG alone.

Scope Note: RAW-based verification requires access to the original RAW file, which limits applicability to workflows where RAW files are preserved. Lumethic verification is optimized for conventional camera sensors that produce standard Bayer mosaic RAW files.

Mobile Device Support: Most smartphone "RAW" formats (including Apple ProRAW) are not true mosaic RAW; they are computationally processed and do not contain genuine single-exposure Bayer data. For mobile workflows requiring verification, Lumethic provides a dedicated capture application that bypasses computational photography pipelines to produce compatible RAW files directly from the sensor.

1.3 Multi-Layer Defense

No single verification technique is foolproof. Lumethic verification therefore employs multiple independent techniques, each exploiting different physical or mathematical principles. An attacker would need to simultaneously defeat all techniques, a significantly harder challenge than bypassing any single check.

1.4 Threat Model and Scope

Lumethic verification assumes:

  • Access to an original RAW file paired with a processed JPEG

  • Consumer or prosumer camera sensors with standard CFA patterns

  • Conventional image processing pipelines (Adobe, Capture One, camera-native processors, etc.)


The system is designed to detect manipulation, synthesis, and recapture attacks using widely available tools and techniques. Attacks involving custom sensor simulators, bespoke generative models trained specifically on target sensor noise characteristics, or direct sensor-level hardware compromise represent significantly higher-cost adversarial scenarios that fall outside the primary threat model.


2. RAW File Authenticity Verification

These techniques verify that a RAW file contains genuine camera sensor data rather than fabricated or heavily manipulated content.

2.1 Bayer Pattern Validation

Principle: Digital camera sensors use a Color Filter Array (CFA), typically in the Bayer pattern, where each pixel captures only one color channel (red, green, or blue). The sensor outputs a single-channel mosaic image that must be "demosaiced" to produce a full-color image.

Detection Method: Genuine RAW files contain single-channel Bayer mosaic data. Lumethic verification checks that:

  • The data structure is a 2D single-channel array, not 3-channel RGB

  • The characteristic 2×2 Bayer pattern (RGGB, BGGR, etc.) is present

  • Frequency domain analysis reveals the distinctive Nyquist frequency components created by the CFA sampling


Note: Some sensors use alternative CFA patterns (e.g., Fujifilm X-Trans, Quad Bayer). Lumethic verification's primary threat model targets standard Bayer sensors, which represent the vast majority of cameras used in verification workflows.

Why It Works: If someone attempts to create a fake RAW file by "remosaicing" an RGB image (sampling each pixel to a single color channel), the result lacks the true Bayer structure. Demosaiced data has high correlation between color channels at each pixel location, while genuine Bayer data does not.
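The CFA frequency signature described above can be illustrated with a short numpy sketch. The RGGB layout, synthetic flat-field data, and the 10x decision margin here are illustrative choices for demonstration, not the production implementation:

```python
import numpy as np

def cfa_peak_strength(mosaic):
    """Ratio of spectral energy at the Bayer Nyquist frequencies to the
    spectrum's mean energy; 2x2 CFA sampling creates strong peaks there."""
    f = np.abs(np.fft.fft2(mosaic - mosaic.mean()))
    h, w = f.shape
    peaks = f[h // 2, 0] + f[0, w // 2] + f[h // 2, w // 2]
    return float(peaks / f.mean())

rng = np.random.default_rng(0)
h, w = 256, 256
# Toy "capture": R/G/B planes with different mean levels plus sensor noise,
# sampled into an RGGB mosaic, one channel per photosite
r = 0.8 + 0.02 * rng.standard_normal((h, w))
g = 0.5 + 0.02 * rng.standard_normal((h, w))
b = 0.2 + 0.02 * rng.standard_normal((h, w))
mosaic = np.empty((h, w))
mosaic[0::2, 0::2] = r[0::2, 0::2]   # R sites
mosaic[0::2, 1::2] = g[0::2, 1::2]   # Gr sites
mosaic[1::2, 0::2] = g[1::2, 0::2]   # Gb sites
mosaic[1::2, 1::2] = b[1::2, 1::2]   # B sites

# A single-channel image with no CFA structure (grayscale passed off as RAW)
flat = 0.5 + 0.02 * rng.standard_normal((h, w))
```

Because the per-channel means differ, the 2x2 CFA sampling injects strong components at the horizontal and vertical Nyquist frequencies of the mosaic; the structureless image has no such peaks.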

2.2 Shot Noise Analysis (Poisson Statistics)

Principle: Photon arrival at a sensor is a counting process governed by Poisson statistics, a direct consequence of the quantum nature of light. When photons strike a sensor, the number of photoelectrons generated follows a Poisson distribution whose variance equals its mean. This creates characteristic "shot noise" in which brighter regions have higher noise variance.

Detection Method: Lumethic verification analyzes the relationship between signal intensity and noise variance across the image. In genuine sensor data, noise variance increases linearly with signal brightness. Lumethic verification fits this photon transfer curve and checks for the expected Poisson relationship.

Why It Works: Synthetic images and heavily processed photos typically have Gaussian noise (constant variance) or noise patterns that don't follow the Poisson relationship. While generative models can approximate noise textures, they typically fail to maintain correct signal-dependent variance across the full dynamic range, a constraint that emerges from physics rather than learned appearance.
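The photon transfer curve fit can be sketched as follows; the electron-count levels, unit sensor gain, and fixed Gaussian sigma are illustrative assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.linspace(100, 4000, 20)              # mean photoelectron counts

# Genuine sensor patches: Poisson shot noise, variance tracks the mean
poisson_patches = [rng.poisson(mu, 4096) for mu in levels]
# Fake noise: Gaussian with constant sigma, independent of brightness
gauss_patches = [mu + 5.0 * rng.standard_normal(4096) for mu in levels]

def ptc_slope(patches):
    """Slope of the photon transfer curve (noise variance vs. mean signal)."""
    means = np.array([p.mean() for p in patches])
    variances = np.array([p.var() for p in patches])
    return float(np.polyfit(means, variances, 1)[0])
```

For unit-gain Poisson data the fitted slope is close to 1, while constant-variance Gaussian noise produces a slope near 0, regardless of how visually plausible its texture is.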

2.3 Photo Response Non-Uniformity (PRNU)

Principle: Every image sensor has unique manufacturing imperfections that cause slight sensitivity variations between pixels. This "Photo Response Non-Uniformity" acts like a fingerprint: it's multiplicative (scales with signal intensity) and consistent across all images from that sensor.

Detection Method: Lumethic verification detects the presence of PRNU by analyzing the multiplicative noise component. It extracts high-frequency residuals and checks for the characteristic spatial structure of PRNU patterns.

Why It Works: PRNU is an intrinsic property of each sensor's silicon wafer. While it is theoretically possible to simulate PRNU, doing so requires precise sensor-specific calibration that is impractical with standard tools. Synthetic images typically lack sensor-consistent PRNU, while composite images exhibit inconsistent patterns across spliced regions.

Note: Lumethic verification does not require a reference PRNU fingerprint for a specific camera; instead, it validates the physical presence and statistical behavior of PRNU patterns consistent with real sensor physics.
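For intuition, the classical reference-based formulation of PRNU matching can be sketched with numpy; the fingerprint amplitude, exposure level, and simple four-neighbour high-pass filter are toy assumptions (as noted above, the production system validates PRNU behavior without a per-camera reference):

```python
import numpy as np

def noise_residual(img):
    # High-pass residual: each pixel minus the mean of its 4 neighbours
    neigh = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
             + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return img - neigh

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(2)
shape = (128, 128)
K = 1 + 0.02 * rng.standard_normal(shape)        # sensor A's PRNU fingerprint
K2 = 1 + 0.02 * rng.standard_normal(shape)       # a different sensor

img_a = rng.poisson(1000.0 * K)                  # two exposures on sensor A
img_b = rng.poisson(1000.0 * K)
img_c = rng.poisson(1000.0 * K2)                 # exposure on the other sensor

same_sensor = ncc(noise_residual(img_a), noise_residual(img_b))
other_sensor = ncc(noise_residual(img_a), noise_residual(img_c))
```

Residuals from the same sensor correlate well above chance because they share the fixed multiplicative fingerprint; residuals from different sensors do not.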

2.4 Fixed Pattern Noise (FPN)

Principle: Camera sensor readout electronics introduce systematic noise patterns, particularly column-wise and row-wise artifacts from the readout amplifiers. This "Fixed Pattern Noise" is additive and consistent for a given sensor.

Detection Method: Lumethic verification analyzes column and row statistics to detect the characteristic readout artifacts. These appear as slight systematic variations that are consistent along columns or rows.

Why It Works: FPN reflects the specific electronic readout structure of real sensors. While synthetic noise can be added to fake images, replicating the precise columnar and row-wise statistical structure of genuine sensor readout is rarely achieved with standard forgery tools.
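A minimal columnar-FPN statistic can be sketched as follows; the offset and read-noise magnitudes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 512, 512
read_noise = 4.0 * rng.standard_normal((h, w))
col_fpn = 2.0 * rng.standard_normal(w)            # per-column readout offsets

real = 100.0 + col_fpn[None, :] + read_noise      # genuine sensor readout
fake = 100.0 + 4.5 * rng.standard_normal((h, w))  # synthetic noise, no columns

def column_fpn_score(img):
    """Variance of column means relative to what spatially white noise
    would predict (a score near 1 means no columnar structure)."""
    white_prediction = img.var() / img.shape[0]
    return float(np.var(img.mean(axis=0)) / white_prediction)
```

Averaging down each column suppresses white noise by the row count but leaves the systematic column offsets intact, so genuine readout structure stands far above the white-noise baseline.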

2.5 Green Channel Imbalance

Principle: In the Bayer pattern there are two classes of green pixels (Gr, in rows shared with red, and Gb, in rows shared with blue). Because adjacent Gr and Gb pixels view nearly the same scene content their values are strongly correlated, yet optical crosstalk from the neighboring red and blue photosites introduces a small, systematic imbalance between them.

Detection Method: Lumethic verification analyzes the correlation and ratio between Gr and Gb channels. In genuine RAW data, these channels show characteristic relationships. Lumethic verification checks that both the correlation and the variance ratio match expected physical behavior.

Why It Works: Synthetic noise is typically added uniformly without respecting the physical relationship between Gr and Gb pixels. Maintaining the correct Gr/Gb correlation while also satisfying other noise constraints is rarely achieved in practice.

2.6 Demosaic Artifact Detection

Principle: When an RGB image is converted back to Bayer format (remosaicing), the demosaic-then-remosaic process leaves characteristic artifacts. The color reconstruction algorithms create interpolation patterns that persist even after remosaicing.

Detection Method: Lumethic verification detects "checkerboard" artifacts in the Bayer data that result from the demosaic-remosaic round trip. It analyzes the spatial frequency content and local statistics for patterns that shouldn't exist in camera-original data.

Why It Works: Genuine RAW files have never been demosaiced, so they lack these artifacts. Any attempt to pass processed RGB data as RAW will carry telltale signs of the demosaic interpolation.
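A simplified version of the checkerboard statistic can be demonstrated on a single sample plane (a stand-in for one CFA channel; the flat Poisson scene and bilinear interpolation are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
h, w = 256, 256
raw_plane = rng.poisson(500, (h, w)).astype(float)   # camera-original samples

# Bilinear demosaic: keep values on one diagonal site class, interpolate the other
y, x = np.mgrid[0:h, 0:w]
kept_sites = (x + y) % 2 == 0
interp = (np.roll(raw_plane, 1, 0) + np.roll(raw_plane, -1, 0)
          + np.roll(raw_plane, 1, 1) + np.roll(raw_plane, -1, 1)) / 4.0
demosaiced = np.where(kept_sites, raw_plane, interp)

def checkerboard_variance_ratio(img):
    """Ratio of noise variance between the two diagonal site classes.
    Averaging four neighbours cuts the variance of interpolated sites
    roughly fourfold, so a ratio well above 1 betrays a demosaic round trip."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    even = img[(x + y) % 2 == 0].var()
    odd = img[(x + y) % 2 == 1].var()
    return float(max(even, odd) / min(even, odd))
```

Camera-original data has statistically identical site classes; data that passed through an interpolation step does not.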

2.7 Bit Depth Validation

Principle: Camera sensors capture at specific bit depths (10-bit, 12-bit, 14-bit, etc.). The actual data structure reveals the true bit depth through the distribution of values and least significant bit (LSB) patterns.

Detection Method: Lumethic verification analyzes LSB patterns and value distributions to determine if the data genuinely uses the claimed bit depth. It checks that file size is consistent with the stated resolution and bit depth.

Why It Works: Upsampled or synthetic data won't have the correct LSB characteristics. For example, 8-bit data upsampled to 14-bit will have repetitive patterns in the lower bits.
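The LSB check is straightforward to sketch; the uniform value distributions are an idealisation of well-exposed sensor data:

```python
import numpy as np

rng = np.random.default_rng(5)
true_14bit = rng.integers(0, 2**14, 100_000)          # genuine 14-bit samples
upsampled = rng.integers(0, 2**8, 100_000) << 6        # 8-bit data shifted to 14 bits

def lsb_entropy(values, bits=6):
    """Shannon entropy (in bits) of the lowest `bits` bits: genuine
    high-bit-depth data fills them, upsampled data leaves them degenerate."""
    _, counts = np.unique(values & ((1 << bits) - 1), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

Genuine 14-bit data yields close to the maximum 6 bits of entropy in its low bits; left-shifted 8-bit data leaves them constant, collapsing the entropy toward zero.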

2.8 PRNU Multiplicative Scaling Verification

Principle: PRNU (Photo Response Non-Uniformity) is fundamentally multiplicative: it scales linearly with signal intensity because it represents sensitivity variations in the silicon. When someone attempts to add synthetic PRNU to a fake image, they typically add it as a fixed pattern rather than scaling it with brightness.

Detection Method: Lumethic verification extracts the high-frequency noise residual and analyzes how its magnitude correlates with local signal intensity. For genuine PRNU:

  • The standard deviation of the noise residual in bright regions should be proportionally higher than in dark regions

  • A linear regression of noise magnitude vs. signal should show strong positive correlation

  • The multiplicative coefficient should fall within physically plausible ranges for real sensors


Why It Works: Properly implementing multiplicative PRNU scaling requires understanding the physics of sensor response. Most forgery attempts add uniform noise or scale it incorrectly. Even sophisticated attempts often fail to match the precise slope and linearity of real sensor PRNU response.
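The magnitude-versus-intensity regression above can be sketched on idealised residuals (the PRNU coefficient, intensity range, and additive-pattern amplitude are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
signal = rng.uniform(100, 4000, n)               # local scene intensity per pixel
prnu = 0.01 * rng.standard_normal(n)             # per-pixel sensitivity deviation

genuine = signal * prnu                          # multiplicative: grows with signal
fake = 20.0 * rng.standard_normal(n)             # fixed additive pattern

def noise_magnitude_slope(signal, residual, nbins=20):
    """Regress per-bin residual std against mean intensity; genuine PRNU
    shows a clear positive slope, an additive pattern a flat one."""
    edges = np.quantile(signal, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.searchsorted(edges, signal) - 1, 0, nbins - 1)
    mu = np.array([signal[idx == i].mean() for i in range(nbins)])
    sd = np.array([residual[idx == i].std() for i in range(nbins)])
    return float(np.polyfit(mu, sd, 1)[0])
```

The recovered slope for the multiplicative residual approximates the PRNU coefficient itself, while a fixed additive pattern regresses to a slope near zero.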

2.9 Poisson-PRNU Cross-Correlation Analysis

Principle: In genuine sensor data, shot noise (Poisson) and PRNU are generated by independent physical processes: shot noise from photon counting statistics, PRNU from silicon manufacturing variations. These should show near-zero correlation. When fake noise is added to synthetic images, both components are often generated from the same random process, creating detectable correlation.

Detection Method: Lumethic verification:

  1. Separates the noise into Poisson-like component (variance-stabilized residual) and PRNU-like component (multiplicative residual)

  2. Computes the correlation coefficient between these components across multiple image regions

  3. Flags images where correlation exceeds expected thresholds for independent physical processes


Why It Works: Independent noise processes produce uncorrelated residuals. When an attacker adds synthetic "sensor noise" to an image, they typically use a single noise generation method that fails to properly separate shot noise and PRNU physics. This creates correlation between components that should be independent, revealing the forgery.
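The independence test reduces, in its simplest form, to a correlation check between the separated components. This toy sketch assumes the components have already been separated and exaggerates the forger's shared-randomness shortcut:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
mu = 1000.0

# Genuine: independent physical processes generate shot noise and PRNU
shot = np.sqrt(mu) * rng.standard_normal(n)
prnu = 0.01 * mu * rng.standard_normal(n)
genuine_corr = float(np.corrcoef(shot, prnu)[0, 1])

# Forgery shortcut: one random field reused to synthesise both components
shared = rng.standard_normal(n)
fake_corr = float(np.corrcoef(np.sqrt(mu) * shared, 0.01 * mu * shared)[0, 1])
```

Independent draws correlate only at the sampling-noise level, while components derived from a common random field correlate strongly, which is exactly what the threshold in step 3 flags.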


3. Image Derivation Verification

These techniques verify that a submitted JPEG was genuinely derived from the claimed RAW file and hasn't been substantially altered.

3.1 EXIF Metadata Comparison

Principle: Camera EXIF metadata is embedded at capture time and contains detailed information about the capture settings. Both RAW and JPEG files from the same capture should have consistent metadata.

Detection Method: Lumethic verification compares key EXIF fields between the RAW and JPEG:

  • Camera make and model

  • Capture date and time

  • Exposure settings (shutter speed, aperture, ISO)

  • Focal length and lens information

  • GPS coordinates (if present)

  • Image orientation


Why It Works: While metadata can be edited, perfectly forging all metadata fields is laborious and error-prone. Inconsistencies in technical parameters (like an exposure time that doesn't match the visible noise level) indicate manipulation.
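A field-by-field comparison can be sketched as follows; the tag names follow common EXIF conventions, and the specific field set and two-second timestamp tolerance are illustrative choices, not the production rule set:

```python
from datetime import datetime

KEY_FIELDS = ["Make", "Model", "ExposureTime", "FNumber", "ISO", "FocalLength"]

def compare_exif(raw_exif, jpeg_exif, time_tolerance_s=2.0):
    """Return the list of EXIF fields that disagree between RAW and JPEG."""
    mismatches = [f for f in KEY_FIELDS if raw_exif.get(f) != jpeg_exif.get(f)]
    fmt = "%Y:%m:%d %H:%M:%S"
    t_raw = datetime.strptime(raw_exif["DateTimeOriginal"], fmt)
    t_jpg = datetime.strptime(jpeg_exif["DateTimeOriginal"], fmt)
    if abs((t_raw - t_jpg).total_seconds()) > time_tolerance_s:
        mismatches.append("DateTimeOriginal")
    return mismatches

raw_meta = {"Make": "Canon", "Model": "EOS R5", "ExposureTime": "1/250",
            "FNumber": 2.8, "ISO": 400, "FocalLength": 50,
            "DateTimeOriginal": "2024:06:01 12:30:45"}
jpeg_meta = dict(raw_meta, ISO=800)               # a tampered ISO field
```

A small timestamp tolerance accommodates cameras that write the RAW and JPEG sidecars a moment apart, while any disagreement in capture settings is surfaced for review.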

3.2 Structural Similarity Index (SSIM)

Principle: SSIM measures the perceptual similarity between two images by comparing local patterns of luminance, contrast, and structure. Unlike pixel-by-pixel comparison, SSIM is robust to minor differences in color grading and compression.

Detection Method: Lumethic verification renders the RAW file to a comparable format and computes SSIM against the submitted JPEG. The comparison is done on aligned and cropped images to account for any cropping differences.

Why It Works: Legitimate processing (color grading, exposure adjustment) preserves structural similarity. Substituting a different image or making major edits (object removal, face swaps) significantly reduces SSIM because the underlying structure changes.
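A simplified SSIM over non-overlapping blocks illustrates the comparison (the standard formulation uses an 11x11 Gaussian window; the block size, test images, and tone adjustment here are illustrative):

```python
import numpy as np

def block_ssim(a, b, block=8, L=255.0):
    """Mean SSIM over non-overlapping blocks."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    h = a.shape[0] // block * block
    w = a.shape[1] // block * block

    def blocks(img):
        img = img[:h, :w].reshape(h // block, block, w // block, block)
        return img.transpose(0, 2, 1, 3).reshape(-1, block * block)

    pa, pb = blocks(a), blocks(b)
    mu_a, mu_b = pa.mean(1), pb.mean(1)
    va, vb = pa.var(1), pb.var(1)
    cov = ((pa - mu_a[:, None]) * (pb - mu_b[:, None])).mean(1)
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) \
         / ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))
    return float(ssim.mean())

rng = np.random.default_rng(10)
scene = rng.uniform(0, 255, (64, 64))            # textured stand-in image
graded = np.clip(1.05 * scene + 5, 0, 255)       # legitimate tone adjustment
substituted = rng.uniform(0, 255, (64, 64))      # unrelated image
```

Mild tone adjustments barely move the score, while substituting unrelated content collapses it, because the covariance term tracks shared structure rather than absolute pixel values.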

3.3 Histogram Analysis

Principle: Images from the same source have similar tonal distributions. Even with color grading, the overall shape of the color histograms remains related between RAW rendition and processed JPEG.

Detection Method: Lumethic verification compares histograms using multiple metrics:

  • Correlation: How well the histogram shapes match

  • Chi-square distance: Statistical difference between distributions

  • Bhattacharyya distance: Overlap between probability distributions

  • Earth Mover's Distance: Minimum cost to transform one distribution to another


Why It Works: Significant manipulations alter the tonal distribution. The multi-metric approach catches different types of changes: brightness shifts, contrast changes, and color manipulations all leave different signatures.
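Three of the four metrics can be sketched directly in numpy (Earth Mover's Distance is omitted for brevity; the synthetic tonal distributions are illustrative):

```python
import numpy as np

def histogram_metrics(a, b, bins=64, value_range=(0, 256)):
    """Correlation, chi-square distance, and Bhattacharyya distance
    between the tonal distributions of two images."""
    pa = np.histogram(a, bins=bins, range=value_range)[0].astype(float)
    pb = np.histogram(b, bins=bins, range=value_range)[0].astype(float)
    pa /= pa.sum()
    pb /= pb.sum()
    corr = float(np.corrcoef(pa, pb)[0, 1])
    chi2 = float((((pa - pb) ** 2) / (pa + pb + 1e-12)).sum())
    bhatta = float(-np.log(np.sqrt(pa * pb).sum() + 1e-12))
    return corr, chi2, bhatta

rng = np.random.default_rng(11)
original = np.clip(rng.normal(128, 30, 50_000), 0, 255)
graded = np.clip(original * 1.05 + 5, 0, 255)         # mild tone adjustment
swapped = np.clip(rng.normal(60, 50, 50_000), 0, 255)  # different tonal character
```

A mild grade keeps the histogram shape highly correlated and the Bhattacharyya distance small; substituted content with a different tonal character degrades both.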

3.4 Perceptual Hashing

Principle: Perceptual hash algorithms create compact fingerprints that are similar for visually similar images. Different algorithms capture different aspects of visual content.

Detection Method: Lumethic verification computes four types of perceptual hashes:

  • pHash (Perceptual Hash): Based on DCT frequency analysis, most robust to minor modifications

  • aHash (Average Hash): Based on brightness patterns, sensitive to exposure changes

  • dHash (Difference Hash): Based on gradient/edge patterns, detects structural changes

  • wHash (Wavelet Hash): Based on wavelet decomposition, sensitive to texture changes


Why It Works: Each hash algorithm is designed to be invariant to minor edits but sensitive to substantial changes. Using multiple algorithms provides redundancy, so a manipulation that defeats one hash is likely caught by others.
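As an illustration, aHash (the simplest of the four) can be implemented in a few lines; the gradient test image and Hamming-distance margins are illustrative:

```python
import numpy as np

def ahash(img, hash_size=8):
    """Average hash: downsample to hash_size x hash_size block means,
    then threshold each cell at the global mean."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = (img[:bh * hash_size, :bw * hash_size]
             .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3)))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    return int((h1 != h2).sum())

# Structured test image: a diagonal luminance gradient
img = np.add.outer(np.linspace(0, 255, 256), np.linspace(0, 128, 256))
brightened = img + 10                 # minor edit: global brightness shift
different = np.fliplr(img)            # structurally different content
```

A global brightness shift moves the threshold along with the block means, so the hash is unchanged, while mirroring the content flips many bits.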

3.5 Face Identity Verification

Principle: If a photograph contains faces, those faces should be consistent between the RAW source and the submitted JPEG. Face swapping or replacement is a common manipulation.

Detection Method: Lumethic verification:

  1. Detects faces in both images using neural network-based detection

  2. Extracts identity embeddings using deep learning models trained on face recognition

  3. Matches faces between images using both position and identity similarity

  4. Uses bidirectional matching to ensure no faces were added or removed


Why It Works: Deep learning face embeddings capture identity in a way that's invariant to lighting and minor pose changes but distinct between different individuals. Face swaps require replacing actual face content, which breaks the embedding similarity even if the swap is visually convincing.

Limitations: Face verification is conditional: it applies only when faces are detected and is one signal among many rather than a single point of determination. Performance may vary with partial occlusion, extreme angles, or low resolution. This technique complements other verification methods rather than serving as a standalone check.

Privacy Note: Face embeddings are computed ephemerally for the sole purpose of intra-image consistency verification. No identity database is created or consulted, and embeddings are not retained after verification completes. This technique verifies that faces in the RAW and JPEG match each other; it does not identify individuals.
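The bidirectional matching logic of steps 3 and 4 can be sketched on embedding vectors; the three-dimensional embeddings and the 0.6 similarity threshold are hypothetical stand-ins for the output of a deep face recognition model:

```python
import numpy as np

def match_faces(embs_a, embs_b, threshold=0.6):
    """Bidirectional best-match pairing on cosine similarity.
    Rows are assumed L2-normalised; unmatched faces indicate
    additions, removals, or identity swaps."""
    sim = embs_a @ embs_b.T
    a_best = sim.argmax(axis=1)
    b_best = sim.argmax(axis=0)
    matches = [(i, int(j)) for i, j in enumerate(a_best)
               if b_best[j] == i and sim[i, j] >= threshold]
    unmatched = (len(embs_a) - len(matches)) + (len(embs_b) - len(matches))
    return matches, unmatched

raw_faces = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
jpeg_faces = np.array([[0.995, 0.1, 0.0],        # same identity, slight drift
                       [0.0, 0.0, 1.0]])         # a swapped-in face
jpeg_faces /= np.linalg.norm(jpeg_faces, axis=1, keepdims=True)
```

Requiring the match to hold in both directions prevents a swapped face from silently pairing with its nearest leftover neighbour: here the first identity survives the round trip, while the swap leaves two faces unmatched.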


4. Recapture Detection

These techniques detect if a photograph was taken of a screen or printed image rather than a real scene.

4.1 JPEG Grid Artifacts in RAW

Principle: JPEG compression operates on 8×8 pixel blocks, creating subtle boundary artifacts at block edges. If a JPEG is displayed on a screen at 100% zoom and photographed, these 8×8 block boundaries become embedded in the RAW sensor data.

Detection Method: Lumethic verification analyzes the RAW data in the frequency domain for periodic artifacts at 8-pixel intervals. These artifacts appear as peaks in the 2D Fourier transform at specific frequencies corresponding to the JPEG block structure.

Why It Works: Camera-original RAW files never contain JPEG block artifacts; these only exist because a JPEG was rendered to screen and recaptured. This is a strong indicator of recapture.
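A simple spatial-domain variant of this check measures gradient strength at 8-pixel boundaries; the per-block offsets simulating baked-in JPEG blocking are an illustrative model of recapture:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 256
clean = rng.standard_normal((n, n))              # camera-original noise field
# Recapture: per-8x8-block offsets mimic JPEG blocking embedded in the RAW
offsets = np.repeat(np.repeat(rng.standard_normal((n // 8, n // 8)), 8, 0), 8, 1)
recaptured = clean + offsets

def blockiness(img, period=8):
    """Mean column-to-column gradient at 8-pixel block boundaries relative
    to the gradient elsewhere; a value near 1 means no JPEG grid."""
    d = np.abs(np.diff(img, axis=1)).mean(axis=0)
    boundary_cols = np.arange(period - 1, d.size, period)
    at_boundary = d[boundary_cols].mean()
    elsewhere = np.delete(d, boundary_cols).mean()
    return float(at_boundary / elsewhere)
```

Within a block the offset is constant and contributes nothing to the gradient, so excess gradient energy concentrated at every eighth column is a direct signature of the embedded JPEG grid.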

4.2 Moiré Pattern Detection

Principle: When two periodic structures are superimposed, interference creates moiré patterns. A camera sensor photographing a pixel screen creates moiré from the interaction between the screen's pixel grid and the sensor's photosites.

Detection Method: Lumethic verification analyzes the frequency spectrum for the characteristic peaks and patterns of moiré interference. The specific frequencies depend on the relative pixel pitches and angles of the screen and camera.

Why It Works: Moiré patterns from screen photography have distinctive signatures that don't occur in direct photography of natural scenes. Even if the moiré is subtle to human vision, it's detectable in frequency analysis.
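The beat-frequency mechanism can be shown in one dimension; the two grating frequencies standing in for the screen and photosite grids, and the noise level, are illustrative:

```python
import numpy as np

rng = np.random.default_rng(13)
n = 512
x = np.arange(n)
# Two superimposed periodic structures: display pixel grid vs. photosite grid
screen_grid = 0.5 + 0.5 * np.cos(2 * np.pi * 60 / n * x)
sensor_grid = 0.5 + 0.5 * np.cos(2 * np.pi * 70 / n * x)
noise = 0.05 * rng.standard_normal(n)

recaptured = screen_grid * sensor_grid + noise   # beat appears at |70 - 60| = bin 10
natural = screen_grid + noise                    # single structure, no beat

def beat_strength(signal, beat_bin=10):
    """Spectral magnitude at the expected moiré beat frequency relative
    to the median spectral level."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    return float(spec[beat_bin] / np.median(spec))
```

Multiplying the two gratings generates sum and difference frequencies; the low difference frequency is the visible moiré, and it appears as a spectral spike that a single periodic structure never produces.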

4.3 Halftone Rosette Detection

Principle: Printed images use halftone screening: patterns of dots at specific angles for each ink color (CMYK). These dot patterns create characteristic "rosette" patterns when the inks overlap.

Detection Method: Lumethic verification analyzes the image for the periodic structures characteristic of halftone printing. The specific angles and frequencies of halftone screens are well-defined and detectable.

Why It Works: Photographs of real scenes don't contain regular dot patterns. The presence of halftone patterns is strong evidence that the photograph is of a printed image rather than a real scene.

4.4 Channel Phase Shift

Principle: Many displays render RGB sub-pixels at slightly different spatial positions or refresh RGB channels at slightly different times. When photographed, this creates measurable phase offsets between color channels.

Detection Method: Lumethic verification analyzes the spatial relationship between color channels, looking for the characteristic phase shifts introduced by display technology.

Why It Works: Camera-original images have color channels that are spatially aligned (they come from the same photosites via the Bayer filter). Systematic phase shifts between channels indicate screen photography.
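Channel displacement can be estimated with phase correlation; the whole-pixel two-column shift here exaggerates a sub-pixel display offset for clarity, and the random stand-in channels are illustrative:

```python
import numpy as np

def channel_shift(a, b):
    """Estimate the integer displacement of channel `a` relative to `b`
    via phase correlation."""
    r = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r /= np.abs(r) + 1e-12
    corr = np.abs(np.fft.ifft2(r))
    idx = np.unravel_index(int(corr.argmax()), corr.shape)
    return tuple(i if i <= s // 2 else i - s for i, s in zip(idx, corr.shape))

rng = np.random.default_rng(9)
green = rng.standard_normal((128, 128))          # stand-in scene channel
aligned_red = green + 0.01 * rng.standard_normal((128, 128))
screen_red = np.roll(aligned_red, 2, axis=1)     # systematic display offset
```

Phase correlation collapses the shared content into a single peak whose position is the inter-channel displacement: zero for camera-original channels, non-zero when a display has offset them.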

4.5 Screen Emission Detection

Principle: Display screens are emissive: they produce light with specific spectral characteristics. This creates a distinctive color profile different from reflected light in natural scenes.

Detection Method: Lumethic verification analyzes the color characteristics of the image for the telltale signatures of display emission, including specific color temperature ranges and spectral distributions.

Why It Works: Natural scenes are illuminated by reflected light with continuous spectra, while display screens emit light with peaked spectra from RGB sub-pixels. This difference is statistically detectable when aggregated across image regions, rather than serving as a per-pixel binary indicator.

4.6 Depth and Veiling Glare Analysis

Principle: Real scenes have depth variation, with objects at different distances from the camera. A screen or print is flat, with all content at the same distance. Additionally, photographing a screen often produces glare and reflections.

Detection Method: Lumethic verification analyzes depth cues in the image and looks for signs of veiling glare (reduced contrast from screen reflections).

Why It Works: The lack of true depth variation and the presence of glare artifacts are physical consequences of screen/print photography that are difficult to avoid or hide.

4.7 Chromatic Aberration Radial Physics Validation

Principle: Real camera lenses produce chromatic aberration (CA) that follows predictable physics: CA magnitude increases radially from the optical center following a polynomial model, with higher-order terms dominating at image edges. Additionally, real lens CA shows slight asymmetry across quadrants due to manufacturing tolerances and lens element decentering.

Detection Method: Lumethic verification validates CA patterns against lens physics by:

  1. Radial polynomial fitting: Measuring CA displacement at multiple points and fitting a physics-based optical model

  2. Higher-order term analysis: Verifying the presence of expected higher-order aberration components

  3. Quadrant asymmetry: Analyzing variation across image quadrants that results from real manufacturing imperfections


Why It Works: When photographing a screen or print, any CA present is from the display/print source, not from optical physics. Screen-originated CA is typically:
  • Uniform (no radial increase)

  • Linear only (missing cubic component)

  • Perfectly symmetric (no manufacturing variation)


These characteristics violate the physics of real lens systems, revealing the recapture.
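The model-comparison step of this check can be sketched as follows; the polynomial coefficients, measurement noise, and uniform screen displacement are illustrative:

```python
import numpy as np

rng = np.random.default_rng(12)
r = np.linspace(0.1, 1.0, 30)                    # normalised distance from centre

# Real lens: odd radial polynomial with a cubic term that dominates at the edges
real_ca = 0.8 * r + 1.5 * r**3 + 0.02 * rng.standard_normal(r.size)
# Screen-originated "CA": roughly uniform displacement, no radial growth
screen_ca = 0.5 + 0.02 * rng.standard_normal(r.size)

def radial_fit_gain(r, dr):
    """How much better the odd-polynomial lens model (a*r + b*r^3) explains
    the measured CA than a constant-displacement model does."""
    A = np.column_stack([r, r**3])
    coef, *_ = np.linalg.lstsq(A, dr, rcond=None)
    res_radial = np.sum((dr - A @ coef) ** 2)
    res_const = np.sum((dr - dr.mean()) ** 2)
    return float(res_const / (res_radial + 1e-12))
```

Real CA measurements are explained far better by the radial model than by a constant, while a uniform screen-originated displacement fits the constant model and resists the odd-polynomial one, inverting the ratio.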


5. Conclusion

5.1 Defense in Depth

The verification system implements multiple independent detection techniques across three categories:

  1. RAW Authenticity: Verifies the physics and statistics of genuine sensor data

  2. Image Derivation: Confirms the JPEG matches its claimed RAW source

  3. Recapture Detection: Identifies photographs of screens or prints


Each technique exploits different physical or mathematical principles. An attacker attempting to create a convincing fake would need to satisfy constraints across multiple independent domains:

Sensor Physics Constraints:

  • Generate synthetic RAW data with correct Poisson shot noise characteristics

  • Include convincing PRNU patterns with correct multiplicative scaling behavior

  • Include Fixed Pattern Noise with correct electronic readout structure

  • Maintain proper Bayer structure without demosaic artifacts


Noise Independence Constraints:
  • Ensure Poisson and PRNU noise components are properly uncorrelated

  • Maintain correct Gr/Gb channel relationships


Optical Constraints:
  • Generate chromatic aberration that follows radial lens physics with correct polynomial characteristics

  • Avoid introducing any recapture artifacts (moiré, JPEG grid, halftone patterns)


Derivation Consistency Constraints:
  • Ensure metadata consistency across dozens of fields

  • Preserve structural and perceptual similarity metrics

  • Maintain face identity consistency (if faces present)

5.2 The Strength of Physical Constraints

Many of these techniques exploit fundamental physics (Poisson statistics, sensor electronics, optical phenomena) that are difficult to reproduce across all constraints simultaneously using standard image processing tools. While any individual technique might be defeated with sufficient effort and specialized knowledge, defeating all techniques simultaneously while maintaining visual quality and physical consistency remains impractical at scale with current tools.

Lumethic verification's effectiveness comes not from any single technique but from the combination of independent checks that together create a high barrier against manipulation attempts using widely available methods.

5.3 What Verification Does Not Prove

It is important to understand the scope of what image verification establishes:

  • Capture authenticity: Verification confirms that an image was genuinely captured by a camera sensor and has not been substantively manipulated. It does not assert the intent behind the photograph or the semantic truth of the scene (e.g., a staged scene is still a genuine photograph).

  • Chain of custody: Verification validates the RAW-to-JPEG derivation but does not inherently prove who captured the image. However, the Lumethic capture application provides authenticated chain of custody from the moment of capture, and C2PA Content Credentials, when present, offer additional provenance validation through cryptographic signing.

  • Absolute certainty: While the multi-layer approach creates a high barrier against forgery, no verification system can provide mathematical proof of authenticity. Verification provides strong statistical evidence within the defined threat model.



References

The techniques described in this document build upon established research in digital image forensics. For readers interested in the underlying science, the following references provide authoritative background:

Foundational and Survey Research

  1. Lukas, J., Fridrich, J., and Goljan, M. "Digital Camera Identification from Sensor Pattern Noise." IEEE Transactions on Information Forensics and Security, 2006.

  2. Farid, H. "Image Forgery Detection." IEEE Signal Processing Magazine, 2009.

  3. Rocha, A., Scheirer, W., et al. "Vision of the Unseen: Current Trends and Challenges in Digital Image and Video Forensics." ACM Computing Surveys, 2011.

  4. Jung, K.-H. et al. "A Survey of Image Forensics." Digital Investigation, 2025.

  5. Al-Ani, M. and Khelifi, F. "On the Sensor Pattern Noise Estimation in Image Forensics: A Systematic Empirical Evaluation." 2017.

Sensor Authenticity and Physics

  1. Popescu, A.C. and Farid, H. "Exposing Digital Forgeries in Color Filter Array Interpolated Images." IEEE Transactions on Signal Processing, 2005.

  2. Chen, M., Fridrich, J., et al. "Determining Image Origin and Integrity Using Sensor Noise." IEEE Transactions on Information Forensics and Security, 2008.

  3. Lukas, J., Fridrich, J., and Goljan, M. "Detecting Digital Image Forgeries Using Sensor Pattern Noise." SPIE Electronic Imaging, 2006.

  4. Fridrich, J. "Digital Image Forensics Using Sensor Noise." IEEE Signal Processing Magazine, 2009.

  5. Gloe, T. and Böhme, R. "The Dresden Image Database for Benchmarking Digital Image Forensics." Journal of Digital Forensic Practice, 2010.

Optical and Physical Inconsistencies

  1. Johnson, M.K. and Farid, H. "Exposing Digital Forgeries Through Chromatic Aberration." ACM Multimedia Security Workshop, 2006.

  2. Johnson, M.K. and Farid, H. "Exposing Digital Forgeries by Detecting Inconsistencies in Lighting." ACM Multimedia Security Workshop, 2005.

  3. Bammey, Q., et al. "An Adaptive Neural Network for Unsupervised Mosaic Consistency Analysis in Image Forensics." IEEE/CVF CVPR, 2020.

Image Integrity and Perceptual Metrics

  1. Wang, Z., Bovik, A.C., et al. "Image Quality Assessment: From Error Visibility to Structural Similarity." IEEE Transactions on Image Processing, 2004.

  2. Zauner, C. "Implementation and Benchmarking of Perceptual Image Hash Functions." Master's Thesis, 2010.

  3. Bappy, J.H., et al. "Hybrid LSTM and Encoder–Decoder Architecture for Detection of Image Forgeries." IEEE Transactions on Image Processing, 2019.

  4. Solanki, K. et al. "YASS: Yet Another Steganographic Scheme That Resists Blind Steganalysis." Information Hiding, 2007.

Recapture and Presentation Attacks

  1. Chen, C. et al. "CMA: A Chromaticity Map Adapter for Robust Detection of Screen-Recapture Document Images." IEEE/CVF CVPR, 2024.

  2. Chen, C. et al. "Document Recapture Detection Based on a Unified Distortion Model of Halftone Cells." IEEE Transactions on Information Forensics and Security, 2023.

  3. Goljan, M. and Fridrich, J. "Camera Identification from Printed Images." SPIE Electronic Imaging, 2008.
