

The Visual Computer
https://doi.org/10.1007/s00371-022-02710-z
ORIGINAL ARTICLE
Non-overlapping block-level difference-based image forgery
detection and localization (NB-localization)
Sanjeev Kumar1,2
· Suneet Kumar Gupta1 · Umesh Gupta1 · Mohit Agarwal1
Accepted: 16 October 2022
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022
Abstract
With the advent of digital devices, we are surrounded by digital images, and we tend to believe them in whatever form they are presented to us. We therefore need to be careful, as images may be forged. There exist several image forgeries through which the original intent of an image is hidden and some other meaning is conveyed. Copy-move forgery is one such technique, where the manipulator copies a certain portion of an image and duplicates it in some other portion of the same image. In this paper, we propose a novel approach to detect copy-move forgery in images using non-overlapping block-level pixel comparisons that achieves better detection and classification accuracy. The approach divides the image into 4, 5, 6 or more blocks and compares each block, via a sliding window moved over the entire image, against regions that do not overlap the current block. It was found that, with different numbers of blocks, forged regions of different sizes can be easily found. We use the structural similarity index (SSIM) to classify an image as forged or original. The algorithm is simulated on various datasets (MICC, CASIA, COVERAGE, COMOFOD, etc.), achieving a maximum accuracy of 98%, and we also compare our results on precision, recall, FPR, FNR and other parameters.
Keywords Image processing · Forged images · Original image · Copy-move forgery
1 Introduction
In the era of social media, the availability of low-cost smartphones with good-quality cameras and feature-rich image-editing software or mobile applications has made it very convenient to capture and modify digital images. Using digital images for entertainment is acceptable to some extent, but if tampered/forged images are spread over social media or in the news to convey false information, it becomes a big problem. We routinely see many fake image or video posts on Facebook, Twitter, and WhatsApp. Different types of forgery can be applied to images, such as copy-move forgery, image splicing, retouching, and morphing [1].
✉ Sanjeev Kumar, look4sanjeev@gmail.com
1 Bennett University, Greater Noida, India
2 KIET Groups of Institutions, Delhi-NCR, Ghaziabad, India

Different types of forgery detection mechanisms are represented in Fig. 1 [2]. Active techniques require some prior information regarding the original image to prove its authenticity [3]. As in the case of watermarking [4], the extra information is embedded in the image itself at the time of image generation and can be visually seen in the image; but again, this happens through manipulation of the original image. Passive techniques [5] do not rely on any prior information embedded in the image; rather, they take the whole image as input to find traces of possible manipulation. In this paper, our focus is on the passive
image authentication technique of copy-move forgery detection. In copy-move forgery, a certain part of the original image is copied and pasted into another area of the same image to create a different representation and interpretation, as shown in Fig. 2. Other types of image forgery include retouching, splicing, morphing and scaling, as shown in Fig. 3. There exist different categories of solutions for image forgery detection; the major types are active and passive techniques. The approaches adopted by various authors for image forgery detection are broadly classified into key-point-based and block-based [6]. In block-based approaches, the image is usually divided into small, regular-sized blocks, and features are extracted at block level; block-level features are generally based on DCT [7], DWT [8], LBP [9], SVD [10] and PCA [11]. Key-point-based approaches generally use SURF [12] or SIFT [13] feature descriptors
Fig. 1 Forgery detection techniques [2]

Fig. 2 a Original image and b corresponding forged image with copy-move forgery
which are robust in terms of scaling and rotation. We propose a novel block-based algorithm to detect and localize the copy-move forged area in a given image.
The rest of the article is organized as follows: Sect. 2 reviews related literature on image forgery detection; Sect. 3 presents the proposed block-based forgery localization algorithm (NB-localization) and materials; Sect. 4 presents evaluation results; and Sect. 5 concludes the paper.
2 Background literature
To date, several approaches have been adopted in the literature for the identification and classification of image forgery, specifically copy-move forgery. Most studies adopt manual feature extraction techniques based on blocks or key points. Rohini et al. [14] used a block-based approach with feature reduction at block level using the discrete cosine transform; the extracted features are compared, and duplicates are identified based on a threshold. Haodong et al. [15] fused two existing approaches to generate a combined tampering possibility map, achieving better localization output. Wu et al. [16] proposed extracting SURF key points at the first level and then reducing the features using a local binary pattern operator with rotation, yielding rotation-invariant features; local binary patterns are calculated through pixel differences. Cozzolino et al. [17] used PatchMatch and a nearest-neighbor field for localization of forged regions: the image is divided into patches, and the patches are compared on the basis of nearest neighbors. The algorithm is applied iteratively with different displacement values, and the best displacement is determined for an approximate and fast match; rotation-invariant features are extracted through Zernike moments as input to the matching step. A robust solution against scaling and rotation is achieved through overlapping circular blocks [18]. The objective is to extract features that are invariant to geometric transformations, achieved through a polar exponential transform on each circular block; dimensionality reduction is achieved through singular value decomposition, and the approach is implemented with low computational cost. A hybrid approach based on a combination of FMT and SIFT is proposed by Meena et al. [19]. This approach works well for both smooth and textured regions: SIFT is responsible for texture-based features, and FMT performs well for smooth-region features.
Meena et al. [20] applied Gaussian–Hermite moments to extract block-level features, which were compared lexicographically to find similar blocks in the input image; the approach represents a robust solution for identifying the forged region. A dual mechanism is adopted by Lyu et al. [21], where first-level matching is done through key points (Delaunay triangles) to estimate the forged region in the image. In the second stage, the key points retrieved at level one are expanded with the help of nearby key points, which are grouped using the DBSCAN (density-based spatial clustering of applications with noise) algorithm. Ritu et al. [22] proposed block-level feature extraction based on FCM (fuzzy C-means) clustering with emperor penguin optimization; segmented block features are thereafter passed through a Gabor filter, and false matches are removed through the RANSAC (random sample consensus) algorithm. A new approach to extract weak features is adopted by Chen et al. [23], where the input image is fed to a two-layer deep neural network: extra and irrelevant features are discarded by the network, while weak tampering-related feature signals are preserved, and this method showed better performance than RCNN. A new method based on SIFT key-point elimination was proposed by Hossein et al. [24] to eliminate or reduce the extra key points extracted through the SIFT algorithm using a Gaussian function.

Fig. 3 Other types of image forgery: a original image before retouching, b retouching outcome, c original image-1, d original image-2, e spliced forged image, f image-1 and g image-2 used in morphing, h morphed image, i original image before rescaling, j rescaled image

Fig. 4 Sample forged images with copied blocks in red squares

3 Proposed algorithm (NB-localization)
The algorithm (NB-localization) to match the copied blocks in a forged image is described in Algorithm 1. Here, the value of the variable 'block' is chosen as 5, dividing the image into 5 × 5 blocks. Values of 4, 5, 6 and more were tested, and the best results were obtained with 5. The algorithm contains four nested for loops: the 1st loop scans the image block-wise from top to bottom along its height; the 2nd loop scans block-wise from left to right along its width; the 3rd loop scans the image for matching from top to bottom along the height, visiting every possible pixel; and the 4th loop scans for matching from left to right along the width, again visiting every possible pixel. Thus the 1st and 2nd loops scan the image by dividing it into blocks, while the 3rd and 4th loops pick candidate blocks of the same size whose top-left corner may lie on any pixel. In this way we can find, for each block from the first two loops, the candidate block from the last two loops with the maximum number of matching pixels. The best match is found by taking the difference between the image block obtained from the first two loops and the one obtained from the last two loops, and counting the number of zero pixels in this difference block. We keep a running maximum of the zero count together with the positions of the two cropped blocks. Finally, after completing all four loops, we have the positions of the maximally matching pair of blocks, and we plot two rectangles with red boundaries to show the forgery, as shown in Fig. 4.
This process will also report a maximally matching block on non-forged images. Hence we compute the SSIM index of the two cropped blocks, and if it is greater than 0.5, we can safely conclude that the image is forged; otherwise it is non-forged. We have not considered overlapping similarity, as we focus on two distinct image portions that are copies of each other. If we considered overlapping blocks, the algorithm would try to match a region against itself shifted slightly in the x and y directions, in which case two distinct regions would not be present.
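The four-loop scan described above can be sketched in a few lines (a simplified illustration under stated assumptions, not the authors' exact Algorithm 1, which is not reproduced in the text: it assumes a grayscale NumPy array, counts equal pixels directly rather than zeros of an explicit difference image, and skips candidate windows that overlap the current grid block):

```python
import numpy as np

def nb_localize(img, blocks=5):
    """Find the pair (grid block, candidate window) with the most
    identical pixels, where the window must not overlap the block."""
    h, w = img.shape
    bh, bw = h // blocks, w // blocks
    best = (-1, None, None)  # (zero count, block top-left, window top-left)
    for by in range(0, blocks * bh, bh):        # loops 1-2: block grid
        for bx in range(0, blocks * bw, bw):
            block = img[by:by + bh, bx:bx + bw]
            for y in range(h - bh + 1):         # loops 3-4: every pixel
                for x in range(w - bw + 1):
                    if abs(y - by) < bh and abs(x - bx) < bw:
                        continue  # candidate window overlaps the block
                    window = img[y:y + bh, x:x + bw]
                    # zero pixels of (window - block) = positionally equal pixels
                    zeros = int(np.count_nonzero(window == block))
                    if zeros > best[0]:
                        best = (zeros, (by, bx), (y, x))
    return best
```

On a synthetic image with a patch copied from the top-left grid block to position (10, 10), the sketch returns the full zero count for the block together with the two top-left positions, which is exactly the information plotted as red rectangles in Fig. 4.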
Structural similarity index measure (SSIM) is based on three components of the images: luminance (l), contrast (c) and structure (s). It depends on the following three equations:

l(x, y) = (2 \mu_x \mu_y + c_1) / (\mu_x^2 + \mu_y^2 + c_1)   (1)

c(x, y) = (2 \sigma_x \sigma_y + c_2) / (\sigma_x^2 + \sigma_y^2 + c_2)   (2)

s(x, y) = (\sigma_{xy} + c_3) / (\sigma_x \sigma_y + c_3)   (3)

with the additional condition:

c_3 = c_2 / 2   (4)

The equation for SSIM can now be written as:

SSIM(x, y) = l(x, y)^\alpha \cdot c(x, y)^\beta \cdot s(x, y)^\gamma   (5)

Setting the weights \alpha, \beta and \gamma to 1, the equation reduces to:

SSIM(x, y) = (2 \mu_x \mu_y + c_1)(2 \sigma_{xy} + c_2) / ((\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2))   (6)

Here \mu_x is the average of x, \mu_y is the average of y, \sigma_x^2 is the variance of x, \sigma_y^2 is the variance of y, and \sigma_{xy} is the covariance of x and y.
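Equation (6) can be evaluated directly on two cropped blocks (a minimal sketch; the stabilizing constants follow the common choice c1 = (0.01·L)² and c2 = (0.03·L)² for dynamic range L, which the paper does not specify):

```python
import numpy as np

def ssim_block(x, y, data_range=255.0):
    """Global SSIM between two equally sized grayscale blocks, per Eq. (6)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()               # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()     # sigma_xy
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Two identical crops give SSIM = 1, while two unrelated crops give a value near 0, which is what makes the 0.5 threshold usable as a forged/original decision rule.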
Table 1 Summary of datasets used for algorithm evaluation

Parameters | COMOFOD | CASIA-1 | CASIA-2 | IMD | MICC-F2000 | COVERAGE
Total images | 10,400 | 1721 | 12,323 | 96 | 2000 | 200
Forged | 5200 | 921 | 5123 | 48 | 700 | 100
Original | 5200 | 800 | 7200 | 48 | 1300 | 100
Image size/s | 512 × 512, 3000 × 2000 | 384 × 256 | 320 × 240 to 800 × 600 | 1024 × 683 to 3264 × 2448 | 2048 × 1536 | 410 × 421 to 534 × 438
Scaled images | Yes | Yes | Yes | Yes | Yes | Yes
Rotated images | Yes | Yes | Yes | Yes | Yes | Yes
Translation | Yes | Yes | Yes | Yes | Yes | Yes
Combination | Yes | Yes | Yes | Yes | Yes | Yes
4 Results and discussion
The experiments were performed on an NVIDIA DGX v100 supercomputer with 40,600 CUDA cores and 1000 teraFLOPS of compute. The machine helped achieve fast results on a very large set of images, over which we calculated various performance metrics.
4.1 Dataset description
We have used eight datasets for our algorithm evaluation. Each dataset contains images of heterogeneous quality, supporting a robust and accurate assessment of forgery classification and localization. There are various challenges associated with the detection of copy-move forgery, such as rescaling, rotation, or translation of the copied patches, so we have considered datasets in which images belonging to these different categories (scaled, rotated, translated) are included; the objective is to verify the robustness of the proposed algorithm. The description of the different datasets used in the study is shown in Table 1, along with the available categories of images.
The localization result after applying the proposed algorithm to two sample images is shown in Fig. 4. Here the maximally matching regions are marked with red squares around the copied region.
The values of SSIM for the two maximally matching sub-blocks in forged and non-forged images are shown in Table 2. As can be clearly seen, the zero count and SSIM are high for actually forged images. Taking a threshold of 0.5 for SSIM, we can easily decide whether an image is forged. We have used the SSIM parameter in place of well-known descriptors here because, even in a non-forged image, maximally similar blocks will be returned by the algorithm. Descriptors like DCT (discrete cosine transform) or DWT (discrete wavelet transform) are used for image compression and will not work here, because we would again have to compute the difference, Euclidean distance, or cosine similarity between the feature vectors. SSIM is a measure of image similarity and can distinguish between exactly identical and nearly identical image crops.
For evaluating the proposed NB-localization algorithm, different datasets were considered. The major datasets used in the domain of image forgery detection are COMOFOD [25], CASIA V1 [26], CASIA V2 [27], the image manipulation dataset (IMD) [7], MICC-F220 [28], MICC-F2000, MICC-F600 and COVERAGE [29]. For the sake of simplicity and randomness, we have taken around 100 sample images from the datasets for evaluation, and some of the results are presented in Table 2. The major evaluation parameters are precision, recall, F1-score and accuracy.
4.2 Performance evaluation parameters
After applying the algorithm on the mentioned evaluation datasets, results are computed along different dimensions to compare performance. The parameters and their significance are discussed below.
Accuracy: It is an assessment of model performance in terms of prediction [30, 31]. This parameter indicates how many predictions of forged images are actually correct out of the total predictions made by the algorithm [32]. Accuracy is computed for all described datasets, and the best accuracy of 97% is achieved with the COMOFOD and Coverage datasets, as shown in Table 3(a).
Precision: It measures how many of the images identified as forged by the algorithm are actually forged [22, 33]. The best precision value is 0.98, obtained with four datasets, as shown in Table 3(b); a consistent value of more than 90% on every dataset reflects the robustness of the proposed algorithm.
Recall: In terms of the problem under consideration, recall or sensitivity is an assessment of how many of the total forged images are predicted as forged [22]. Recall results on the different forgery datasets are shown in Table 3(c); the best recall value of 0.98 is achieved on IMD (image manipulation dataset).

Table 2 Statistics of matching 2 sub-blocks in forged and non-forged images

Table 3 Various evaluation parameter results in ascending order for different datasets: (a) accuracy, (b) precision, (c) sensitivity, (d) specificity, (e) FNR, (f) FPR, (g) FDR, (h) PT, (i) MCC, (j) F1-score, (k) FOR and (l) FMI

(a) Accuracy
Dataset | Value
CASIA-2 | 0.92
MICC-F2000 | 0.93
CASIA-1 | 0.94
MICC-F600 | 0.94
IMD | 0.96
MICC-F220 | 0.96
COMOFOD | 0.97
Coverage | 0.97

(b) Precision
Dataset | Value
CASIA-2 | 0.92
IMD | 0.94
MICC-F600 | 0.94
MICC-F2000 | 0.96
CASIA-1 | 0.98
MICC-F220 | 0.98
COMOFOD | 0.98
Coverage | 0.98

(c) Recall
Dataset | Value
CASIA-1 | 0.90
MICC-F2000 | 0.90
CASIA-2 | 0.92
MICC-F220 | 0.94
MICC-F600 | 0.94
COMOFOD | 0.96
Coverage | 0.96
IMD | 0.98

(d) Specificity
Dataset | Value
CASIA-1 | 0.90
MICC-F2000 | 0.90
CASIA-2 | 0.92
MICC-F220 | 0.94
MICC-F600 | 0.94
COMOFOD | 0.96
Coverage | 0.96
IMD | 0.98

(e) FNR
Dataset | Value
IMD | 0.02
COMOFOD | 0.04
Coverage | 0.04
MICC-F220 | 0.06
MICC-F600 | 0.06
CASIA-2 | 0.08
CASIA-1 | 0.10
MICC-F2000 | 0.10

(f) FPR
Dataset | Value
IMD | 0.02
COMOFOD | 0.04
Coverage | 0.04
MICC-F220 | 0.06
MICC-F600 | 0.06
CASIA-2 | 0.08
CASIA-1 | 0.10
MICC-F2000 | 0.10

(g) FDR
Dataset | Value
COMOFOD | 0.02
Coverage | 0.02
MICC-F220 | 0.02
CASIA-1 | 0.02
MICC-F2000 | 0.04
IMD | 0.06
MICC-F600 | 0.06
CASIA-2 | 0.08

(h) PT
Dataset | Value
IMD | 0.13
COMOFOD | 0.17
Coverage | 0.17
MICC-F220 | 0.20
MICC-F600 | 0.20
CASIA-2 | 0.23
CASIA-1 | 0.25
MICC-F2000 | 0.25
(i) MCC
Dataset | Value
MICC-F2000 | 0.83
CASIA-2 | 0.84
CASIA-1 | 0.84
MICC-F600 | 0.88
MICC-F220 | 0.90
COMOFOD | 0.93
Coverage | 0.93
IMD | 0.94

(j) F1-score
Dataset | Value
CASIA-2 | 0.92
MICC-F2000 | 0.93
CASIA-1 | 0.94
MICC-F600 | 0.94
IMD | 0.96
MICC-F220 | 0.96
COMOFOD | 0.97
Coverage | 0.97

(k) FOR
Dataset | Value
IMD | 0.02
COMOFOD | 0.04
Coverage | 0.04
MICC-F220 | 0.06
MICC-F600 | 0.06
CASIA-2 | 0.08
CASIA-1 | 0.09
MICC-F2000 | 0.09

(l) FMI
Dataset | Value
CASIA-2 | 0.92
MICC-F2000 | 0.93
CASIA-1 | 0.94
MICC-F600 | 0.94
IMD | 0.96
MICC-F220 | 0.96
COMOFOD | 0.97
Coverage | 0.97
Specificity: Also called TNR (true-negative rate), specificity is the ratio of correctly predicted original images to the total number of original images [34]. It signifies the extent of bias toward a particular class: both recall and specificity should approach 1 (100%) together to demonstrate unbiased classification. In our case, the best specificity value is 0.98, obtained on the image manipulation dataset, which is very close to 1, as shown in Table 3(d).
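The parameters above all derive from the four confusion-matrix counts, treating "forged" as the positive class. A minimal sketch (the variable names tp, fp, tn, fn are ours, not the paper's):

```python
def metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics with 'forged' as the positive class."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),        # sensitivity / TPR
        "specificity": tn / (tn + fp),   # TNR
        "fpr": fp / (fp + tn),           # = 1 - specificity
        "fnr": fn / (fn + tp),           # = 1 - recall
        "f1": 2 * tp / (2 * tp + fp + fn),
    }
```

For example, 48 of 50 forged and 48 of 50 original images classified correctly gives 0.96 for accuracy, precision, recall and specificity, and 0.04 for FPR and FNR.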
Similar to the above parameters, other evaluation parameters are chosen to explore the robustness of the algorithm: false-negative rate (FNR) [34], false-positive rate (FPR) [35], false discovery rate (FDR) [36, 37], prevalence threshold (PT) [38], Matthews correlation coefficient (MCC) [39], F1-score [22], false omission rate (FOR) [40] and Fowlkes-Mallows index (FMI) [41]. Results for all parameters are shown in Table 3(a)-(l) and are consistent across the different datasets.
Table 3 reports the results of the proposed algorithm for the various parameters discussed above on public datasets like CASIA, MICC-F2000 and COMOFOD that are widely used in the area of image forgery.
In Fig. 5, AUC-ROC curves are shown for the AUC values computed on CASIA-1, CASIA-2, Coverage, MICC-F2000, COMOFOD, and MICC-F600. The best values are obtained on the COMOFOD dataset, and closely comparable results are obtained on the other datasets as well.
In Table 4, comparisons of accuracy, precision, recall, F1-score, etc., are shown for the different datasets under consideration. The average sensitivity (TPR) across datasets is 93%, with an average accuracy of 95%. In Table 5, the results obtained are compared with other state-of-the-art literature on copy-move forgery. We have used 100 random images from each

Fig. 5 AUC-ROC curve comparison for different datasets
Table 4 Statistical result of different parameters considering all results for datasets under study

Sr. | Parameter | Avg. | Std. Dev. | Min. | Max.
1 | Accuracy | 0.95 | 0.02 | 0.92 | 0.97
2 | Precision | 0.96 | 0.02 | 0.92 | 0.98
3 | Sensitivity | 0.94 | 0.03 | 0.90 | 0.98
4 | F1-Score | 0.95 | 0.02 | 0.92 | 0.97
5 | Specificity | 0.94 | 0.03 | 0.90 | 0.98
6 | FPR | 0.06 | 0.03 | 0.02 | 0.10
7 | FNR | 0.06 | 0.03 | 0.02 | 0.10
8 | FDR | 0.04 | 0.02 | 0.02 | 0.08
9 | PT | 0.20 | 0.04 | 0.13 | 0.25
10 | MCC | 0.89 | 0.04 | 0.83 | 0.94
11 | FOR | 0.06 | 0.03 | 0.02 | 0.09
12 | FMI | 0.95 | 0.02 | 0.92 | 0.97
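As a cross-check, the summary statistics in Table 4 follow directly from the per-dataset values in Table 3; for example, the accuracy row (the list below is read off Table 3(a); `statistics.stdev` is the sample standard deviation — the paper does not state whether the sample or population form was used, but both round to 0.02 here):

```python
import statistics

# Per-dataset accuracy values from Table 3(a)
acc = [0.92, 0.93, 0.94, 0.94, 0.96, 0.96, 0.97, 0.97]
summary = {
    "avg": round(statistics.mean(acc), 2),
    "std": round(statistics.stdev(acc), 2),
    "min": min(acc),
    "max": max(acc),
}
print(summary)  # matches row 1 of Table 4
```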
Table 5 Benchmarking of proposed algorithm with existing literature results based on dataset

Dataset | Approach/Features | Acc | Pre. | Sen. | F1-S. | Sp. | FPR | FNR
COMOFOD | NB-localization [proposed] | 0.97 | 0.98 | 0.96 | 0.97 | 0.96 | 0.04 | 0.04
COMOFOD | Tetrolet transform [33] | – | 0.99 | 0.96 | 0.96 | – | – | –
COMOFOD | SIFT [42] | – | 0.77 | 0.82 | 0.8 | – | – | –
COMOFOD | Segmentation [43] | – | 0.77 | 0.66 | 0.71 | – | – | –
COMOFOD | Dense field matching [44] | – | 0.71 | 0.88 | 0.78 | – | – | –
COMOFOD | Adaptive over segmentation [45] | – | 0.81 | 0.84 | 0.82 | – | – | –
COMOFOD | Block-level features [46] | – | 0.89 | 0.83 | 0.87 | – | – | –
CASIA-1 | NB-localization [proposed] | 0.94 | 0.98 | 0.90 | 0.94 | 0.90 | 0.10 | 0.10
CASIA-1 | Inception-Net [47] | – | 0.71 | 0.55 | 0.64 | – | – | –
CASIA-1 | Surface probability [48] | – | – | – | 0.54 | – | – | –
CASIA-1 | U-Net [49] | 0.76 | – | – | 0.84 | – | – | –
CASIA-1 | RCNN [50] | – | – | – | 0.4 | – | – | –
CASIA-2 | NB-localization [proposed] | 0.92 | 0.92 | 0.92 | 0.92 | 0.92 | 0.08 | 0.08
CASIA-2 | 2-D Markov model [50] | 0.89 | – | – | – | – | – | –
CASIA-2 | Stacked autoencoder [51] | 0.91 | 57.67 | – | – | – | – | –
CASIA-2 | CNN, camera-based features [52, 53] | 0.73 | – | 0.96 | – | 0.6 | – | –
MICC-F220 | NB-localization [proposed] | 0.96 | 0.98 | 0.94 | 0.96 | 0.94 | 0.06 | 0.06
MICC-F220 | SVM, SURF [54] | 0.8 | – | – | – | – | – | –
MICC-F220 | Key-point matching [55] | 0.92 | – | – | – | – | – | –
MICC-F220 | DCT, SURF | 0.95 | – | – | – | – | – | –
MICC-F220 | KNN, YCbCr (color) [56] | 0.94 | – | – | – | – | – | –
MICC-F2000 | NB-localization [proposed] | 0.93 | 0.96 | 0.90 | 0.93 | 0.90 | 0.10 | 0.10
MICC-F2000 | BRIEF, SURF [57] | 0.82 | – | – | – | – | – | –
MICC-F2000 | SVM, SURF [54] | 0.81 | – | – | – | – | – | –
MICC-F2000 | Key-point matching [55] | 0.85 | – | – | – | – | – | –
MICC-F600 | NB-localization [proposed] | 0.94 | 0.94 | 0.94 | 0.94 | 0.94 | 0.06 | 0.06
MICC-F600 | FAST, BRIEF [58] | 0.84 | – | – | – | – | – | –
MICC-F600 | LIOP, DBSCAN [21] | – | 0.74 | 0.81 | 0.77 | – | – | –
MICC-F600 | SIFT, ORB, SVM [57] | 0.9 | – | – | – | – | – | –
Table 5 (continued)

Dataset | Approach/Features | Acc | Pre. | Sen. | F1-S. | Sp. | FPR | FNR
Coverage | NB-localization [proposed] | 0.97 | 0.98 | 0.96 | 0.97 | 0.96 | 0.04 | 0.04
Coverage | RCNN [59] | – | – | – | 0.47 | – | – | –
Coverage | LSTM, radon transform [60] | 0.98 | – | – | 0.91 | – | – | –
Coverage | CFA features with CNN [61] | – | – | – | 0.19 | – | – | –

Acc = accuracy, Pre. = precision, Sen. = sensitivity, F1-S. = F1-score, Sp. = specificity
Bold indicates the peak value as compared to other approaches in the literature for the same parameter
Table 6 Pixel-level results with different numbers of blocks

Blocks | 1st block top-left | 2nd block top-left
4 | (0, 384) | (146, 89)
5 | (58, 397) | (204, 102)
6 | (24, 380) | (170, 85)
dataset for obtaining the results reported in Table 5, except for the IMD dataset, where all 96 images were used. The best accuracy and F1-score, 97% each, were exhibited by the Coverage dataset, while the COMOFOD dataset performed best among all datasets in terms of precision. It can be observed from the benchmarking table that the proposed algorithm achieved 97% accuracy on the COMOFOD dataset with a maximum F1-score of 0.97.
4.3 Pixel results for different block numbers
The results obtained were investigated with respect to the different numbers of blocks (4, 5, 6) into which the image was divided. The top-left corner pixels of the matching blocks were also recorded, so that any image processing algorithm can take these values and act on them. Results with different block counts on a sample image are shown in Table 6. It was found that the algorithm is robust across different block sizes, and the user can adjust the block count to find forged blocks of different sizes. The user can execute the process with the number of blocks as an input parameter and visually find the best-matching forged region across executions.
5 Conclusion

In general, forgery detection is implemented through either key-point-based or block-based approaches. Here, we have used a block-based approach to identify the forged area in images. A blockwise algorithm is applied to find matching blocks on the basis of block-level feature differences. The blocks so obtained are compared in terms of the maximum number of zeros and the SSIM parameter; blocks with the maximum number of matching zeros are localized with a rectangular boundary, and the classification decision is based on the SSIM value to classify the image as forged or original. The approach works well for translated and scaled blocks. As future work, the algorithm can be improved to detect rotated patches in forged images.

Data Availability Statement Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. Results are computed on publicly available datasets, and appropriate citations are provided for the same.

Declarations

Conflict of interest All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or nonfinancial interest in the subject matter or materials discussed in this manuscript.

References

1. Jain, I., Goel, N.: Advancements in image splicing and copy-move forgery detection techniques: a survey (2021). https://doi.org/10.1109/Confluence51648.2021.9377104
2. Tyagi, S., Yadav, D.: A detailed analysis of image and video forgery detection techniques. Vis. Comput. (2022). https://doi.org/10.1007/s00371-021-02347-4
3. Santhosh Kumar, B., Karthi, S., Karthika, K., Cristin, R.: A systematic study of image forgery detection. J. Comput. Theor. Nanosci. (2018). https://doi.org/10.1166/jctn.2018.7498
4. Swain, M., Swain, D.: An effective watermarking technique using BTC and SVD for image authentication and quality recovery. Integration (2022). https://doi.org/10.1016/j.vlsi.2021.11.004
5. Manjunatha, S., Patil, M.M.: A study on image forgery detection techniques. CiiT Int. J. Digit. Image Process. 9(5) (2017)
6. Mushtaq, S., Mir, A.H.: Image copy move forgery detection: a review. Int. J. Futur. Gener. Commun. Netw. 11(2), 11–22 (2018). https://doi.org/10.14257/ijfgcn.2018.11.2.02
7. Christlein, V., Riess, C., Jordan, J., Riess, C., Angelopoulou, E.: An evaluation of popular copy-move forgery detection approaches. IEEE Trans. Inf. Forensics Secur. 7(6), 1841–1854 (2012). https://doi.org/10.1109/TIFS.2012.2218597
8. Li, G., Wu, Q., Tu, D., Sun, S.: A sorted neighborhood approach for detecting duplicated regions in image forgeries based on DWT and SVD. In: Multimed. Expo, 2007 IEEE Int. Conf., pp. 1750–1753 (2007). https://doi.org/10.1109/ICME.2007.4285009
9. Isaac, M.M., Wilscy, M.: Image forgery detection using region-based rotation invariant co-occurrences among adjacent LBPs. J. Intell. Fuzzy Syst. 34(3), 1679–1690 (2018). https://doi.org/10.3233/JIFS-169461
10. Dixit, R., Naskar, R., Mishra, S.: Blur-invariant copy-move forgery detection technique with improved detection accuracy utilising SWT-SVD. IET Image Process. 11(5), 301–309 (2017). https://doi.org/10.1049/iet-ipr.2016.0537
11. Shrivastava, V.K., Londhe, N.D., Sonawane, R.S., Suri, J.S.: A novel and robust Bayesian approach for segmentation of psoriasis lesions and its risk stratification. Comput. Methods Programs Biomed. 150, 9–22 (2017). https://doi.org/10.1016/j.cmpb.2017.07.011
12. Elaskily, M.A., Elnemr, H.A., Dessouky, M.M., Faragallah, O.S.: Two stages object recognition based copy-move forgery detection algorithm. Multimed. Tools Appl. 78(11), 15353–15373 (2019). https://doi.org/10.1007/s11042-018-6891-7
13. Gan, Y., Zhong, J., Vong, C.: A novel copy-move forgery detection algorithm via feature label matching and hierarchical segmentation filtering. Inf. Process. Manag. 59(1), 102783 (2022). https://doi.org/10.1016/j.ipm.2021.102783
14. Maind, R.A., Khade, A., Chitre, D.K.: Image copy move forgery detection using block representing method. (2), 49–53 (2014)
15. Li, H., Luo, W., Qiu, X., Huang, J.: Image forgery localization via integrating tampering possibility maps. IEEE Trans. Inf. Forensics Secur. 12(5), 1240–1252 (2017). https://doi.org/10.1109/TIFS.2017.2656823
16. Wu, Y., et al.: Copy-move forgery detection exploiting. Multimed. Tools Appl. 2(2), 57–64 (2020). https://doi.org/10.1007/978-98110-7644-2
17. Cozzolino, D., Poggi, G., Verdoliva, L.: Copy-move forgery detection based on patchmatch. In: Università Federico II di Napoli, DIETI, 80125 Naples, Italy, pp. 5312–5316 (2014)
18. Wang, Y., Kang, X., Chen, Y.: Robust and accurate detection of image copy-move forgery using PCET-SVD and histogram of block similarity measures. J. Inf. Secur. Appl. 54, 102536 (2020). https://doi.org/10.1016/j.jisa.2020.102536
19. Meena, K.B., Tyagi, V.: A hybrid copy-move image forgery detection technique based on Fourier-Mellin and scale invariant feature transforms. Multimed. Tools Appl. 79(11–12), 8197–8212 (2020). https://doi.org/10.1007/s11042-019-08343-0
20. Meena, K.B., Tyagi, V.: A copy-move image forgery detection technique based on Gaussian-Hermite moments. Multimed. Tools Appl. 78(23), 33505–33526 (2019). https://doi.org/10.1007/s11042-019-08082-2
21. Lyu, Q., Luo, J., Liu, K., Yin, X., Liu, J., Lu, W.: Copy move forgery detection based on double matching. J. Vis. Commun. Image Represent. 76, 103057 (2021). https://doi.org/10.1016/j.jvcir.2021.103057
22. Agarwal, R., Verma, O.P.: Robust copy-move forgery detection using modified superpixel based FCM clustering with emperor penguin optimization and block feature matching. Evol. Syst. 13(1), 27–41 (2022). https://doi.org/10.1007/s12530-021-09367-4
23. Chen, H., Han, Q., Li, Q., Tong, X.: Digital image manipulation detection with weak feature stream. Vis. Comput. 38(8), 2675–2689 (2022). https://doi.org/10.1007/s00371-021-02146-x
24. Hossein-Nejad, Z., Nasri, M.: Clustered redundant keypoint elimination method for image mosaicing using a new Gaussian-weighted blending algorithm. Vis. Comput. 38(6), 1991–2007 (2022). https://doi.org/10.1007/s00371-021-02261-9
25. Tralic, D., Zupancic, I., Grgic, S., Grgic, M.: CoMoFoD—new
database for copy-move forgery detection. In: 55th Int. Symp.
ELMAR, no. September 2013, pp. 25–27 (2013)
26. Dong, J., Wang, W., Tan, T.: CASIA image tampering detection
evaluation database. In: 2013 IEEE China Summit Int. Conf. Signal
Inf. Process. ChinaSIP 2013—Proc., pp. 422–426 (2013). https://
doi.org/10.1109/ChinaSIP.2013.6625374
27. Salloum, R., Ren, Y., Jay Kuo, C.C.: Image splicing localization
using a multi-task fully convolutional network (MFCN). J. Vis.
Commun. Image Represent. 51, 201–209 (2018). https://doi.org/
10.1016/j.jvcir.2018.01.010
28. Alberry, H.A., Hegazy, A.A., Salama, G.I.: A fast SIFT based
method for copy move forgery detection. Futur. Comput. Inform.
J. 3, 159–165 (2018). https://doi.org/10.1016/j.fcij.2018.03.001
29. Wen, B., Zhu, Y., Subramanian, R., Ng, T.T., Shen, X., Winkler, S.:
COVERAGE—a novel database for copy-move forgery detection.
In: Proceedings—International Conference on Image Processing,
ICIP, 2016, vol. 2016. https://doi.org/10.1109/ICIP.2016.7532339
30. Gupta, D., Choudhury, A., Gupta, U., Singh, P., Prasad, M.: Computational approach to clinical diagnosis of diabetes disease: a
comparative study. Multimed. Tools Appl. 80(20), 30091–30116
(2021). https://doi.org/10.1007/s11042-020-10242-8
31. Elaskily, M.A., et al.: A novel deep learning framework for copy-move forgery detection in images. Multimed. Tools Appl. 79(27–28), 19167–19192 (2020). https://doi.org/10.1007/s11042-020-08751-7
32. Kumar, S., Gupta, S.K., Gupta, U., Kaur, M.: VI-NET: a hybrid
deep convolutional neural network using VGG and inception V3
model for copy-move forgery classification. J. Vis. Commun.
Image Represent. 89, 1036 (2022). https://doi.org/10.1016/j.jvcir.
2022.103644
33. Meena, K.B., Tyagi, V.: A copy-move image forgery detection technique based on tetrolet transform. J. Inf. Secur. Appl. 52, 102481
(2020). https://doi.org/10.1016/j.jisa.2020.102481
34. Kaliyar, R.K., Goswami, A., Narang, P., Sinha, S.: FNDNet—a
deep convolutional neural network for fake news detection. Cogn.
Syst. Res. 61, 32–44 (2020). https://doi.org/10.1016/j.cogsys.2019.
12.005
35. Abbas, M.N., Ansari, M.S., Asghar, M.N., Kanwal, N., O’Neill, T.,
Lee, B.: Lightweight deep learning model for detection of copymove image forgery with post-processed attacks (2021). https://
doi.org/10.1109/SAMI50585.2021.9378690
36. Krylov, V.A., Moser, G., Serpico, S.B., Zerubia, J.: False discovery
rate approach to unsupervised image change detection. IEEE Trans.
Image Process. 25(10), 4704–4718 (2016). https://doi.org/10.1109/
TIP.2016.2593340
37. CoMoFoD dataset repository (2021). [Online]. Available: https://www.vcl.fer.hr/comofod/
38. Lagouvardos, P., Spyropoulou, N., Polyzois, G.: Perceptibility and
acceptability thresholds of simulated facial skin color differences.
J. Prosthodont. Res. 62(4), 503–508 (2018). https://doi.org/10.
1016/j.jpor.2018.07.005
39. Ansari, M.D., Ghrera, S.P.: Copy-move image forgery
detection using direct fuzzy transform and ring projection. Int. J.
Signal Imaging Syst. Eng. 11(1), 44–51 (2018). https://doi.org/10.
1504/IJSISE.2018.090606
40. Alamuru, S., Jain, S.: Video event classification using KNN classifier with hybrid features. Mater. Today Proc. (2021). https://doi.
org/10.1016/j.matpr.2021.03.154
41. Barghout, L., Sheynin, J.: Real-world scene perception and perceptual organization: lessons from computer vision. J. Vis. 13(9), 709
(2013). https://doi.org/10.1167/13.9.709
42. Amerini, I., Ballan, L., Caldelli, R., Del Bimbo, A., Serra, G.: A SIFT-based forensic method for copy-move attack detection and transformation recovery. IEEE Trans. Inf. Forensics Secur. 6(3 PART 2), 1099–1110 (2011). https://doi.org/10.1109/TIFS.2011.2129512
43. Li, J., Li, X., Yang, B., Sun, X.: Segmentation-based image copy-move forgery detection scheme. IEEE Trans. Inf. Forensics Secur. 10(3), 507–518 (2015). https://doi.org/10.1109/TIFS.2014.2381872
44. Cozzolino, D., Poggi, G., Verdoliva, L.: Efficient dense-field copy-move forgery detection. IEEE Trans. Inf. Forensics Secur. 10(11), 2284–2297 (2015). https://doi.org/10.1109/TIFS.2015.2455334
45. Pun, C.M., Yuan, X.C., Bi, X.L.: Image forgery detection using adaptive oversegmentation and feature point matching. IEEE Trans. Inf. Forensics Secur. 10(8), 1705–1716 (2015). https://doi.org/10.1109/TIFS.2015.2423261
46. Sun, Y., Ni, R., Zhao, Y.: Nonoverlapping blocks based copy-move forgery detection. In: Security and Communication Networks, vol. 2018 (2018)
47. Zhong, J.L., Pun, C.M.: An end-to-end dense-InceptionNet for image copy-move forgery detection. IEEE Trans. Inf. Forensics Secur. 15, 2134–2146 (2020). https://doi.org/10.1109/TIFS.2019.2957693
48. Amerini, I., Uricchio, T., Ballan, L., Caldelli, R.: Localization of JPEG double compression through multi-domain convolutional neural networks. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2017, vol. 2017-July. https://doi.org/10.1109/CVPRW.2017.233
49. Bi, X., Wei, Y., Xiao, B., Li, W.: RRU-Net: the ringed residual U-Net for image splicing forgery detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2019, vol. 2019-June. https://doi.org/10.1109/CVPRW.2019.00010
50. Zhao, X., Wang, S., Li, S., Li, J.: Passive image-splicing detection by a 2-D noncausal Markov model. IEEE Trans. Circuits Syst. Video Technol. 25(2), 185–199 (2015). https://doi.org/10.1109/TCSVT.2014.2347513
51. Zhang, Y., Goh, J., Win, L.L., Thing, V.: Image region forgery detection: a deep learning approach. Cryptol. Inf. Secur. Ser. 14, 1–11 (2016). https://doi.org/10.3233/978-1-61499-617-0-1
52. Bondi, L., Lameri, S., Guera, D., Bestagini, P., Delp, E.J., Tubaro, S.: Tampering detection and localization through clustering of camera-based CNN features. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2017, vol. 2017-July. https://doi.org/10.1109/CVPRW.2017.232
53. Kumar, S., Gupta, S.K.: A robust copy move forgery classification using end to end convolution neural network. In: ICRITO 2020—IEEE 8th Int. Conf. Reliab. Infocom Technol. Optim. (Trends Futur. Dir.), pp. 253–258 (2020). https://doi.org/10.1109/ICRITO48877.2020.9197955
54. Alharbi, A., Alhakami, W., Bourouis, S., Najar, F., Bouguila, N.: Inpainting forgery detection using hybrid generative/discriminative approach based on bounded generalized Gaussian mixture model. Appl. Comput. Inform. (2020). https://doi.org/10.1016/j.aci.2019.12.001
55. Manu, V.T., Mehtre, B.M.: Copy-move tampering detection using affine transformation property preservation on clustered keypoints. Signal Image Video Process. 12(3), 549–556 (2018). https://doi.org/10.1007/s11760-017-1191-7
56. Kasban, H., Nassar, S.: An efficient approach for forgery detection in digital images using Hilbert–Huang transform. Appl. Soft Comput. J. 97, 106728 (2020). https://doi.org/10.1016/j.asoc.2020.106728
57. Kaur, R., Kaur, A.: Copy-move forgery detection using ORB and SIFT detector. Int. J. Eng. Dev. Res. 4(4) (2016)
58. Yeap, Y.Y., Sheikh, U., Rahman, A.A.H.A.: Image forensic for digital image copy move forgery detection. In: Proc.—2018 IEEE 14th Int. Colloq. Signal Process. Its Appl. CSPA 2018, pp. 239–244 (2018). https://doi.org/10.1109/CSPA.2018.8368719
59. Zhou, P., Han, X., Morariu, V.I., Davis, L.S.: Learning rich features for image manipulation detection. In: Proc. IEEE Comput.
Soc. Conf. Comput. Vis. Pattern Recognit., pp. 1053–1061 (2018).
https://doi.org/10.1109/CVPR.2018.00116
60. Chen, H., Chang, C., Shi, Z., Lyu, Y.: Hybrid features and semantic reinforcement network for image forgery detection. Multimed. Syst. 28(2), 363–374 (2022). https://doi.org/10.1007/s00530-021-00801-w
61. Ferrara, P., Bianchi, T., De Rosa, A., Piva, A.: Image forgery localization via fine-grained analysis of CFA artifacts. IEEE Trans. Inf.
Forensics Secur. 7(5), 1566–1577 (2012). https://doi.org/10.1109/
TIFS.2012.2202227
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds
exclusive rights to this article under a publishing agreement with the
author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such
publishing agreement and applicable law.
Sanjeev Kumar, M.Tech., is an assistant professor at KIET Group of Institutions and is pursuing a Ph.D. in computer science engineering at Bennett University, Greater Noida. He has fifteen years of teaching experience across various engineering colleges and universities. His core research areas include AI/deep learning and image processing.
Suneet Kumar Gupta, Ph.D., is an assistant professor at Bennett University, Greater Noida. He has published in the fields of wireless sensor networks, natural language processing, and the Internet of Things. His current research interests also include deep learning models.
Umesh Gupta, Ph.D., is an assistant professor at the School of Computer Science Engineering & Technology, Bennett University, India. He earned his Ph.D. in machine learning from the National Institute of Technology Arunachal Pradesh, Arunachal Pradesh, India, and has more than eight years of academic and industry experience. He has published about 30 research papers in national and international conferences and journals such as Applied Soft Computing, Applied Intelligence, Neural Processing Letters, and Machine Learning and Cybernetics. His research interests include machine learning and optimization, pattern recognition, support vector machines, and image and video processing.
Mohit Agarwal received a B.Tech. in Computer Science and Engineering from IIT Delhi in 1995. He then worked in the software industry for around 17 years before completing an M.Tech. in CSE from ABES Engineering College, Ghaziabad, which is affiliated with Dr. A.P.J. Abdul Kalam Technical University, Lucknow. He has been in academia since 2016 and has remained active in research while completing a Ph.D. at Bennett University, Greater Noida.