# CAT-Net

Open-source project: mjkwon2021/CAT-Net
Repository: https://github.com/mjkwon2021/CAT-Net
Primary language: Python (88.6%)

This is the official repository for the Compression Artifact Tracing Network (CAT-Net). Given a possibly manipulated image, the network outputs a probability map of each pixel being manipulated. This repo provides code, pretrained/trained weights, and our five custom datasets. For more details, see the papers below. The IJCV paper is an extension of the WACV paper and covers almost all of the content of the WACV paper.
- Myung-Joon Kwon, In-Jae Yu, Seung-Hun Nam, and Heung-Kyu Lee, “CAT-Net: Compression Artifact Tracing Network for Detection and Localization of Image Splicing”, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 375–384.
- Myung-Joon Kwon, Seung-Hun Nam, In-Jae Yu, Heung-Kyu Lee, and Changick Kim, “Learning JPEG Compression Artifacts for Image Manipulation Detection and Localization”, International Journal of Computer Vision, vol. 130, no. 8, pp. 1875–1895, Aug. 2022.

## Setup

1. Clone this repo.
2. Download the weights from [Google Drive Link] or [Baiduyun Link] (extract code: ycft).
   If you only want to test the network, you need just CAT_full_v1.pth.tar or CAT_full_v2.pth.tar: v1 is the WACV model and v2 is the journal model. Both models have the same architecture, but the trained weights differ; v1 targets only splicing, while v2 also targets copy-move forgery. If you plan to train from scratch, you can skip this download.
3. Set up the environment.
4. Modify the configuration files. Set the paths properly in 'project_config.py' and the settings in 'experiments/CAT_full.yaml'. If you are using a single GPU, set GPU=(0,), not (0).

## Inference

Put input images in the 'input' directory and use English file names. Choose between the full CAT-Net and the DCT stream by commenting/uncommenting lines 65–66 and 75–76. At the root of this repo, run:
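The GPU setting above is a common stumbling block because of Python tuple syntax, which the YAML value is parsed against. A minimal illustration in plain Python (independent of this repo's config loader):

```python
# Why GPU=(0,) works but GPU=(0) does not: parentheses alone do not
# create a tuple in Python; the trailing comma does.
single_gpu_wrong = (0)    # this is just the int 0
single_gpu_right = (0,)   # a one-element tuple

assert isinstance(single_gpu_wrong, int)
assert isinstance(single_gpu_right, tuple)

# Training code typically iterates over the GPU ids; that only works
# with the tuple form, since an int is not iterable.
gpu_ids = list(single_gpu_right)
print(gpu_ids)  # [0]
```

With multiple GPUs, e.g. `(0, 1)`, the comma is already present, so the issue only arises in the single-GPU case.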
The predictions are saved in the 'output_pred' directory as heatmaps.

## Train

1. Prepare datasets.

   Obtain the datasets you want to use for training. You can download the tampCOCO datasets via the [Baiduyun Link]. Note that tampCOCO consists of four datasets: cm_COCO, sp_COCO, bcm_COCO (=CM RAISE), and bcmc_COCO (=CM-JPEG RAISE). Also note that compRAISE is an alias of JPEG RAISE in the journal paper; compRAISE can be created simply by JPEG-compressing RAISE. You may use these datasets for research purposes only. [6 Aug 2022 update] The link changed from Google Drive to Baiduyun.

   Set the training and validation set configuration in Splicing/data/data_core.py. CAT-Net accepts only JPEG images for training, so non-JPEG images in each dataset must be JPEG-compressed (with Q100 and no chroma subsampling) before you start training. You may run each dataset file (e.g., Splicing/data/dataset_IMD2020.py) for automatic compression. If you wish to add more datasets, create dataset class files similar to the existing ones.

2. Train.

   At the root of this repo, run:
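The required preprocessing (Q100, no chroma subsampling) can be done with Pillow. The sketch below is an assumption about tooling, not the repo's own conversion code (the dataset files perform their own automatic compression); the file names are placeholders, and a synthetic image stands in for a real dataset image:

```python
from PIL import Image

# Stand-in RGB image; in practice, open your non-JPEG dataset image instead,
# e.g. Image.open("some_image.png").convert("RGB").
img = Image.new("RGB", (64, 64), color=(255, 0, 0))

# quality=100 with subsampling=0 (4:4:4, i.e. no chroma subsampling)
# matches the compression settings CAT-Net expects for training data.
img.save("example_q100.jpg", "JPEG", quality=100, subsampling=0)
```

Pillow's `subsampling=0` selects 4:4:4 encoding, which preserves full chroma resolution; the default would otherwise subsample chroma and distort the JPEG artifacts the network learns from.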
Training starts from the pretrained weight if you place it properly.

## Licence

This code is built on top of HRNet, so you must follow HRNet's licence. CAT-Net itself may be used freely for research purposes; commercial usage is strictly prohibited.

## Citation

If you use resources provided by this repo, please cite these papers.
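For convenience, BibTeX entries assembled from the two references listed above (the entry keys are my own choice; all field values come from the citations in this README):

```bibtex
@inproceedings{kwon2021catnet,
  author    = {Kwon, Myung-Joon and Yu, In-Jae and Nam, Seung-Hun and Lee, Heung-Kyu},
  title     = {{CAT-Net}: Compression Artifact Tracing Network for Detection and Localization of Image Splicing},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year      = {2021},
  pages     = {375--384}
}

@article{kwon2022learning,
  author  = {Kwon, Myung-Joon and Nam, Seung-Hun and Yu, In-Jae and Lee, Heung-Kyu and Kim, Changick},
  title   = {Learning {JPEG} Compression Artifacts for Image Manipulation Detection and Localization},
  journal = {International Journal of Computer Vision},
  year    = {2022},
  volume  = {130},
  number  = {8},
  pages   = {1875--1895}
}
```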
## Keywords

CAT-Net, image forensics, multimedia forensics, image manipulation detection, image manipulation localization, image processing