
Response to Reviewers

Manuscript No: 220528G
Manuscript Title: “Sparse Adversarial Image Generation using Dictionary Learning”
To: Journal of Electronic Imaging Editor
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the
reviewers’ comments.
Best regards,
Maham Jahangir et al.
Reviewer#1, Concern # 1: From the general perspective, there is no major mistake in the algorithm design and experimental design in the manuscript. Dictionary learning is a technology that has been widely used in the field of image algorithms, with significant applications in noise reduction, recognition, and other directions in traditional image processing. After bringing this idea into adversarial attacks, "whether it can significantly improve the effect" is indeed worth more in-depth analysis and discussion. However, there is too little description of adversarial attacks in the manuscript, which makes readers confused. It is suggested to add a paragraph to explain, from the perspective of the theoretical algorithm, why dictionary learning can have an effect on adversarial attacks and which parts are specifically affected.
Author action: A detailed explanation of adversarial attacks has been added to the introduction and is highlighted. The importance of dictionary learning in this regard has also been added on page 3 of the introduction.
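To complement the added text, the core role of dictionary learning in the attack can be sketched in a few lines: a perturbation is expressed as a sparse combination of learned dictionary atoms, so the attack is confined to a few directions in pixel space. The sketch below is purely illustrative (a random unit-norm dictionary and ISTA sparse coding), not the manuscript's implementation:

```python
import numpy as np

def ista(D, p, lam=0.01, n_iter=300):
    """Sparse-code a perturbation p over dictionary D by solving
    min_x 0.5*||p - D x||_2^2 + lam*||x||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - p) / L          # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
x_true = np.zeros(128)
x_true[[3, 40, 99]] = [1.0, -0.5, 0.8]         # a 3-sparse code
p = D @ x_true                                 # synthetic perturbation vector
x = ista(D, p)
print(np.count_nonzero(np.abs(x) > 1e-3))      # number of active atoms
```

The sparsity-controlling factor `lam` plays the role of the trade-off parameter: larger values force fewer active atoms, i.e. a sparser perturbation.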
Reviewer#1, Concern # 2: MNIST data pictures are used in the manuscript. However, as a paper with the theme of image technology, this paper lacks enough illustrations. Readers want to see more actual pictures or comparative images. It is hoped that more pictures can be added to illustrate the effects of the algorithm described in the manuscript. At the same time, in order to help readers understand, it is suggested to use case comparisons to express the difference between this algorithm and other algorithms.
Author action: We have improved the methodology diagram. We have also added a new Figure 2 on page 11 to show illustrations of attacked examples.
Reviewer#1, Concern # 3: In the experimental part of the manuscript, several important
statistical data, "loss on test", "attack success rate", etc., need more descriptive information.
Statistical method? Statistical standards? Definition of success & loss? Even if the standardized
test library is used, it is also hoped that the reference and quotation of experimental standards
can be clearly marked in the table part.
Author action: The definitions of loss on test and attack success rate have been added in the metrics sub-section of the experiments section.
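The added metric can be illustrated with a minimal, self-contained sketch of one common definition (the fraction of originally correctly classified inputs whose adversarial counterpart is misclassified); the classifier below is a toy stand-in, not the network used in the paper:

```python
import numpy as np

def attack_success_rate(predict, clean, adv, labels):
    """Fraction of correctly classified clean inputs whose adversarial
    counterpart changes the model's prediction."""
    clean_pred = predict(clean)
    adv_pred = predict(adv)
    mask = clean_pred == labels          # only count inputs the model got right
    if not mask.any():
        return 0.0
    return float(np.mean(adv_pred[mask] != labels[mask]))

# toy stand-in classifier: class 1 iff the feature mean is positive
predict = lambda X: (X.mean(axis=1) > 0).astype(int)
clean = np.array([[0.5, 0.7], [-0.4, -0.2], [0.3, 0.1]])
labels = np.array([1, 0, 1])
adv = clean + np.array([[-1.5, -1.5], [0.0, 0.0], [-1.0, -1.0]])
print(attack_success_rate(predict, clean, adv, labels))  # 2 of 3 attacks flip the label
```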
Reviewer#1, Concern # 4: Lack of an application chapter. In practical application, if the algorithm improvement in the manuscript is adopted, will there be better application scope and adaptability? If the new algorithm in the manuscript is adopted, can the application method and application scenario of traditional adversarial attack algorithms be changed?
Author response: The experiments section shows the applicability and efficacy of the proposed approach. Improvement and application in different fields is beyond the scope of this work and is left for future work.
Author action: We have listed the ways the algorithm can be improved in the conclusion section. We have also added the application areas in which we plan to test the proposed algorithm in the future.
Reviewer#1, Concern # 5: The conclusion chapter is too concise. It is hoped that the basis for these
conclusions can be expressed in the conclusion part. For example, it is obtained from experiment
XX and data YY.
Author action: The conclusion has been improved to incorporate the reviewers' suggestions.
Reviewer#2, Concern # 1: Overall writing needs to improve. Grammar and punctuation have to be airtight. Please refer to examples below.
Line 21: Needs comma after 'time, improvements ...'.
Line 22: Should be comma instead of 'and'. Something like 'wide range of applications, utilizing more complex and deep architectures, thus improving the overall classification process'.
Author action: We have proofread the paper for any possible grammatical errors and refined the article. The manuscript has also been run through professional software to improve the punctuation and grammar.
Reviewer#2, Concern # 2: JEI readers may need more context and explanation of the basic concepts before deep diving. For example, Line 32 jumps right into l2-norm distortions without explanation of the l2-norm.
Author action: Details on the l2-norm have been added in the introduction before line 32.
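For completeness, the l2-norm distortion referred to is simply the Euclidean norm of the flattened perturbation (equivalently, the Frobenius norm of the perturbation image). A minimal illustration on an MNIST-sized image, contrasting it with the l0 count that a sparse attack keeps small:

```python
import numpy as np

img = np.zeros((28, 28))        # MNIST-sized clean image (toy example)
adv = img.copy()
adv[10, 10] += 0.6              # sparse attack: only a few pixels change
adv[14, 7] -= 0.8

delta = adv - img
l2 = np.linalg.norm(delta)      # flattened l2-norm == Frobenius norm for 2-D
l0 = np.count_nonzero(delta)    # number of perturbed pixels (sparsity)
print(l2, l0)                   # small l2 distortion on just 2 pixels
```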
Reviewer#2, Concern # 3: Citations should be silent. In Line 72, '...introduced by Din et al [16].'
Author action: This issue has been taken care of in the revised manuscript.
Reviewer#2, Concern # 4: Defense evaluation and information about fooling ratio against the state-of-the-art would be interesting to compare against.
Author response: Adversarial attacks and defenses are two different streams of the field. Such a comparison would indeed be very interesting; however, due to experimental complexity we wish to reserve it for a separate research article. We plan to carry out the defense evaluation and comparison in future work.
Reviewer#3, Concern # 1: While the idea is interesting and matches the scope of JEI, the writing
style of the paper makes it hard (for me in parts impossible) to follow the plot. This especially
holds true for the mathematical notation and derivations. Just as an example, have a look at
algorithm 1. Here, the input is a set P of perturbation vectors (not clear, how they are computed;
this follows later). The algorithm itself is defined for p, which seems to be an element of P (but not
introduced). But if so, a D is output for each p. In line, the actual minimization problem is defined
for a single p. Is this meant (and does it make sense)? Similarly, in the algorithm, Err is defined as a
sum with running index n, but n is not part of the summand. Thus: What is actually computed?
And this is just a single example of either superficial or incorrect writing and notation. Other
examples: In the present form, algorithm 2 would output |S| perturbation maps p. The input is
defined as F = feature map. I guess it should be a set of feature maps, one for each S_i? But then,
the S_i would also be inputs, wouldn't they? What is meant by l2-norm distortions? Is l2 meant as
Frobenius norm (I think so)? Why is sparsity denoted by k in the results part when the sparsity
controlling factor was lambda? k was only mentioned as a running index. What is meant by
"making a transform stronger"?
Author action: We have revised the algorithms so as to clear up any confusion.
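In particular, using the symbols of the reviewer's comment, a consistent statement of the Algorithm 1 objective takes the following illustrative form, with the running index n appearing in every summand and a single dictionary D learned from the whole set P:

```latex
\min_{D,\,\{x_n\}_{n=1}^{N}} \;
\underbrace{\sum_{n=1}^{N} \bigl\| p_n - D\,x_n \bigr\|_2^2}_{\mathrm{Err}}
\;+\; \lambda \sum_{n=1}^{N} \bigl\| x_n \bigr\|_1,
\qquad P = \{p_1, \dots, p_N\},
```

so that one dictionary D is output for the entire set P rather than one per perturbation p, and λ is the sparsity-controlling factor.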
Reviewer#3, Concern # 2: In addition, the experiments are not clearly described. What is meant by
training in the given situation? Which networks are really used? And how? And for the results:
How do the actual adversarial examples really look? Just "good performance" does not mean that
they are "good" adversarial examples in terms of human assessment.
Author action: Information regarding training a deep network has been added in the experiments section.
Reviewer#3, Concern # 3: And final comments: Maybe it would help to make the code publicly available to allow others to have a look at what is actually done.
Author response: We shall make the code publicly available once the paper is accepted.