Official implementation of the article "Unsupervised JPEG Domain Adaptation for Practical Digital Image Forensics"

Overview

Unsupervised JPEG Domain Adaptation for Practical Digital Image Forensics

@WIFS2021 (Montpellier, France)

Rony Abecidan, Vincent Itier, Jeremie Boulanger, Patrick Bas

Installation

To reproduce our experiments and run your own, please follow our Installation Instructions.

Architecture used

Domain Adaptation in action

  • Source : Half of the images from the Splicing category of DEFACTO
  • Target : The other half of the images from the Splicing category of DEFACTO, compressed in JPEG with a quality factor of 5 (QF5); a minimal compression sketch is given just below
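
For readers who want to build a similar target domain, here is a minimal sketch of such a re-compression using Pillow. The folder layout and file pattern are placeholders, not the repository's actual data pipeline.

```python
# Hypothetical sketch: re-compress target images to JPEG quality factor 5 with Pillow.
# SRC_DIR, DST_DIR and the *.tif pattern are placeholders, not the repository's layout.
from pathlib import Path
from PIL import Image

SRC_DIR = Path("defacto/splicing/target_half")       # assumed input folder
DST_DIR = Path("defacto/splicing/target_half_qf5")   # assumed output folder
DST_DIR.mkdir(parents=True, exist_ok=True)

for img_path in sorted(SRC_DIR.glob("*.tif")):        # extension is an assumption
    img = Image.open(img_path).convert("RGB")
    # quality=5 introduces strong compression artifacts in the high frequencies
    img.save(DST_DIR / (img_path.stem + ".jpg"), format="JPEG", quality=5)
```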

To give a quick idea of the impact of adaptation on the training phase, we selected a batch of 512 images from the target and tracked the evolution of the final embedding distributions of this batch during training for the SrcOnly and Update($\sigma=8$) setups described in the paper. The training for the SrcOnly setup is shown on the left, while the one for Update($\sigma=8$) is on the right.

Don't hesitate to click on the GIF below to see it in more detail!
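
The GIF itself is not reproduced here, but the kind of frame it is made of could be generated with a sketch like the one below. It assumes a hypothetical `model.embed` method returning the final embeddings and projects them to 2D with PCA purely for illustration; the repository's actual plotting code may differ.

```python
# Hypothetical sketch of producing one visualization frame per training step:
# project the final embeddings of a fixed 512-image target batch to 2D.
from pathlib import Path

import matplotlib.pyplot as plt
import torch
from sklearn.decomposition import PCA

@torch.no_grad()
def plot_target_embeddings(model, target_batch, step, out_dir="frames"):
    Path(out_dir).mkdir(exist_ok=True)
    model.eval()
    z = model.embed(target_batch)                      # (512, d) embeddings, assumed API
    z2d = PCA(n_components=2).fit_transform(z.cpu().numpy())
    plt.figure(figsize=(4, 4))
    plt.scatter(z2d[:, 0], z2d[:, 1], s=4)
    plt.title(f"Target embeddings, step {step}")
    plt.savefig(f"{out_dir}/step_{step:05d}.png")
    plt.close()
```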

  • As you can observe, in the SrcOnly setup the forgery detector becomes more and more prone to false alarms, most likely because compressing images to QF5 creates high-frequency artifacts that the model can misinterpret. However, it has no real difficulty identifying the forged images correctly.

  • In contrast, in the Update setup the forgery detector is better informed and raises fewer false alarms during training (a sketch of the discrepancy penalty behind this setup is given below).
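
For context, the Update($\sigma=8$) setup relies on a discrepancy penalty between source and target embeddings. Below is a minimal sketch of a Maximum Mean Discrepancy (MMD) with a Gaussian kernel of bandwidth $\sigma=8$, in the spirit of Long et al. (2015) cited in the references; treat it as an illustration of the idea, not the repository's exact loss.

```python
# Illustrative Gaussian-kernel MMD between source and target embeddings (sigma = 8).
# The actual loss, weighting and kernel choice used in the repository may differ.
import torch

def gaussian_kernel(x, y, sigma=8.0):
    # Pairwise squared Euclidean distances between rows of x and rows of y.
    d2 = torch.cdist(x, y, p=2).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=8.0):
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st

# Sketch of the training objective: classification loss on labeled source images
# plus a weighted MMD penalty aligning source and target embeddings.
# total_loss = ce_loss(logits_src, labels_src) + lambda_mmd * mmd_loss(z_src, z_tgt)
```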

Discrepancies with the first version of our article

Several modifications have been carried out since the writing of this paper in order to:

  • Generate databases that are as clean as possible
  • Make our results as reproducible as possible
  • Effectively reduce computation time and memory usage

Because of these changes, the implementation proposed here will not reproduce exactly the results shared in the first version of the paper. Nevertheless, the results obtained with this new implementation are comparable to the previous ones, and you should obtain results similar to those shared on this page.

For more information about the modifications we made and the reasons behind them, click here.

Main references

@inproceedings{mandelli2020training,
  title={Training {CNNs} in Presence of {JPEG} Compression: Multimedia Forensics vs Computer Vision},
  author={Mandelli, Sara and Bonettini, Nicol{\`o} and Bestagini, Paolo and Tubaro, Stefano},
  booktitle={2020 IEEE International Workshop on Information Forensics and Security (WIFS)},
  pages={1--6},
  year={2020},
  organization={IEEE}
}

@inproceedings{bayar2016,
  title={A deep learning approach to universal image manipulation detection using a new convolutional layer},
  author={Bayar, Belhassen and Stamm, Matthew C},
  booktitle={Proceedings of the 4th ACM workshop on information hiding and multimedia security (IH\&MMSec)},
  pages={5--10},
  year={2016}
}

@inproceedings{long2015learning,
  title={Learning transferable features with deep adaptation networks},
  author={Long, M. and Cao, Y. and Wang, J. and Jordan, M.},
  booktitle={International Conference on Machine Learning},
  pages={97--105},
  year={2015},
  organization={PMLR}
}


@inproceedings{DEFACTODataset,
  author={Mahfoudi, Ga{\"e}l and Tajini, Badr and Retraint, Florent and Morain-Nicolier, Fr{\'e}d{\'e}ric and Dugelay, Jean-Luc and Pic, Marc},
  title={{DEFACTO:} Image and Face Manipulation Dataset},
  booktitle={27th European Signal Processing Conference (EUSIPCO 2019)},
  year={2019}
}

Citing our paper

If you wish to refer to our paper, please use the following BibTeX entry

@inproceedings{abecidan:hal-03374780,
  TITLE = {{Unsupervised JPEG Domain Adaptation for Practical Digital Image Forensics}},
  AUTHOR = {Abecidan, Rony and Itier, Vincent and Boulanger, J{\'e}r{\'e}mie and Bas, Patrick},
  URL = {https://hal.archives-ouvertes.fr/hal-03374780},
  BOOKTITLE = {{WIFS 2021 : IEEE International Workshop on Information Forensics and Security}},
  ADDRESS = {Montpellier, France},
  PUBLISHER = {{IEEE}},
  YEAR = {2021},
  MONTH = Dec,
  PDF = {https://hal.archives-ouvertes.fr/hal-03374780/file/2021_wifs.pdf},
  HAL_ID = {hal-03374780}
}