2023 5th International Conference on Communications, Information System and Computer Engineering (CISCE)
Highly Efficient Secret Key Reconciliation Scheme Based on Cascade Algorithm
Keming Tian
College of Information and Communication Systems
Information Engineering University
Zhengzhou, China
Email: 13381059170@163.com
Gang Xin*
College of Information and Communication Systems
Information Engineering University
Zhengzhou, China
Email: downxg@163.com
Jian Zhang
College of Information and Communication Systems
Information Engineering University
Zhengzhou, China
Long Li
College of Information and Communication Systems
Information Engineering University
Zhengzhou, China
Abstract—In quantum key generation schemes, key reconciliation is used to eliminate inconsistencies between the key bitstreams of the two partners. The speed of reconciliation and the retention length of data are the main indicators for evaluating a reconciliation method. The Cascade algorithm is developed from BINARY reconciliation: through backtracking, Cascade can correct even numbers of errors hidden in the data. However, its computational complexity seriously limits the reconciliation speed, and the final length of the generated key is relatively short. In this paper, an optimization, the BFR-Cascade algorithm, is proposed to reduce the computational complexity. Meanwhile, a final unified deletion rule (FUDR) is applied to increase the retention length of data. Compared with the Cascade algorithm, the proposed method achieves 54% higher reconciliation speed when the signal-to-noise ratio (SNR) is 8 dB, and the retention length of data is increased by at least 44% through FUDR.
Keywords—BINARY; backtrack; reconciliation; complexity; BFR-Cascade; parity sum.
I. INTRODUCTION
Key reconciliation is an important segment in both the
fields of quantum key distribution and physical layer security.
After obtaining the discrete sequence resources with high
correlation, Alice and Bob conduct key reconciliation through
the public channel to eliminate the different parts of the
discrete sequence, and finally generate completely consistent
key bits for secure communication [1]. The earliest and most
mature reconciliation method is BBBSS [2]. The dichotomy search (BINARY) is the core of error correction in BBBSS and is simple to implement. After grouping their local data, Alice and Bob calculate the parity sum of each group and send it to each other. Groups with different parity sums are further divided in two so that the parity results of the halves can be compared. Finally, the inconsistent bit is found, completing the error correction. Obviously, BINARY can only correct odd numbers of errors in a packet, so after BINARY correction some even errors remain hidden. To solve this problem, Cascade reconciliation was proposed by Brassard and Salvail [3]: it first searches for odd errors with BINARY after a random perturbation of the data, then finds all rounds containing these errors, and finally backtracks to correct even errors in the earlier rounds.
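To make the BINARY step concrete, here is a minimal Python sketch (illustrative, not from the paper) of the dichotomy search on a single group whose parities disagree; Alice's parity values stand in for the messages exchanged over the public channel:

```python
def parity(bits, lo, hi):
    """Parity (XOR sum) of bits[lo:hi]."""
    return sum(bits[lo:hi]) % 2

def binary_search_error(alice, bob, lo, hi):
    """Dichotomy search for one error in bob[lo:hi], assuming the
    block holds an odd number of errors (so its parities differ)."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Alice announces the parity of her left half; Bob compares.
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid              # odd number of errors in the left half
        else:
            lo = mid              # otherwise it is in the right half
    return lo                     # index of one erroneous bit

alice = [0, 1, 1, 0, 1, 0, 1, 1]
bob   = [0, 1, 1, 0, 0, 0, 1, 1]  # one flipped bit at index 4
i = binary_search_error(alice, bob, 0, len(bob))
bob[i] ^= 1                       # Bob flips the located bit
assert bob == alice
```

Cascade builds on exactly this primitive: each backtracking step re-runs it on a previously checked group.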
In recent years, key reconciliation using high-performance error-correction coding has been proposed, such as reconciliation based on convolutional codes and BCH codes [4]; the application of LDPC codes has also been proven to improve reconciliation efficiency [5-6]. Although these reconciliation algorithms achieve good results, they delete secret key information of the same length as the interactive information after reconciliation to ensure the security of the key, resulting in a shortening of the generated key. On the other hand, key reconciliation based on error-correction coding is prone to error diffusion when the initial key disagreement rate (KDR) is high [7].

In some scenarios with low SNR, the initial inconsistency rate of the partners' observation data is high due to external noise (e.g., underwater acoustic communication) [8]. However, reconciliation based on BINARY can still achieve error correction even when the KDR is 0.5 [9]. Given the Cascade algorithm's capacity for correcting even errors, it is of great significance to reduce the computational complexity of the Cascade algorithm and realize efficient key reconciliation in harsh environments.
The rest of the paper is organized as follows. The method of implementing the Cascade algorithm is introduced in Section II. In Section III, the BFR-Cascade optimization and a new data-deletion rule are proposed. After that, we present simulation results and performance analysis in Section IV. Finally, conclusions are drawn in Section V.
II. IMPLEMENTATION OF CASCADE ALGORITHM
A method for implementing R-round Cascade
reconciliation is as follows [10].
Step 1: Use BINARY in the first round of error correction. Alice and Bob select a random function f1 to perturb the key string (length N) so as to avoid runs of consecutive errors. They then group the data into packets of length L1. Alice calculates the parity check code of every group (data packet) and sends it to Bob. Bob compares the received parity check codes with his own, and the BINARY process is used to correct errors in groups with inconsistent parity results. After this error correction, every group of Bob's contains only even numbers of errors. Both Alice and Bob save the error-corrected data (noted as D1) and record the location information of the data members, i.e., the group of each bit, which is recorded as the set S1, and initialize i = 2 (a code sketch of this permute-group-compare flow is given after Step 3);
Step 2: Error correction of round i.
1) Alice and Bob select a packet length Li and a random function fi to randomly perturb and group the key string, record the location information of these groups as the set Si, and then initialize j = 1;
2) For the jth group, Bob compares his parity sum with Alice's. If the results are inconsistent, the BINARY method is used to correct the error, and the error position is noted as eij; if consistent, jump to 4);
3) Let P1 be the set of all groups in the historical rounds 1 ~ (i-1) that contain the error position ei1, i.e., P1 = {Sk | Sk ∩ {ei1} ≠ ∅, k = 1, 2, ..., i-1}. Due to the discovery of ei1, the groups in β1 = P1 contain odd numbers of errors (excluding ei1). Alice and Bob use the BINARY method to correct an error by selecting the shortest group from β1, and note the error position as ei2. Similarly, let P2 be the set of groups containing the error position ei2 in rounds 1 ~ (i-1); then β2 is constructed as β2 = (β1 ∪ P2) \ (β1 ∩ P2), and it contains odd numbers of errors (excluding ei2). If β2 ≠ ∅, Alice and Bob use the BINARY method to correct the error ei3 by selecting the shortest group from β2, then obtain the group information set P3 containing the error location ei3 and the set β3. Repeat this process until βh = ∅;
4) If j < ⌊N/Li⌋, Alice and Bob set j = j + 1 and jump to step 2), where ⌊·⌋ denotes the floor function; otherwise skip to Step 3;

Step 3: If i < R, Alice and Bob set i = i + 1 and jump to Step 2; otherwise, the algorithm is complete.
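As a hedged illustration of Step 1's permute-group-compare flow (realizing f1 as a random permutation with a shared seed is an assumption; the paper only calls f1 a random function), one round might look like:

```python
import random

def round_parities(bits, perm, L):
    """Permute bits by perm, split into packets of length L,
    and return the parity sum of each packet."""
    permuted = [bits[p] for p in perm]
    return [sum(permuted[k:k + L]) % 2 for k in range(0, len(permuted), L)]

rng = random.Random(7)                    # shared seed stands in for agreeing on f1
N, L1 = 16, 4
alice = [rng.randrange(2) for _ in range(N)]
bob = alice[:]; bob[3] ^= 1; bob[9] ^= 1  # two noise-induced disagreements

perm = list(range(N)); rng.shuffle(perm)  # f1 realized as a random permutation
pa = round_parities(alice, perm, L1)      # Alice publishes her packet parities
pb = round_parities(bob, perm, L1)
bad = [j for j, (x, y) in enumerate(zip(pa, pb)) if x != y]
print("packets holding odd errors:", bad) # these packets go through BINARY
```

The flagged packets are exactly the ones handed to the BINARY search sketched in the introduction.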
III. THE PROPOSED OPTIMIZATION SCHEME
Section II introduced an implementation of R-round Cascade reconciliation whose backtracking calculation is complicated and thus seriously limits the reconciliation speed, especially when R is large. In addition, the original Cascade reconciliation does not discuss information leakage during reconciliation. In fact, in BBBSS error correction, for each parity sum exchanged, every group with consistent parity must delete 1 bit to prevent third parties from eavesdropping and inferring information about the group [11]; this is the main cause of data loss [12]. In view of these two problems, this paper proposes an optimization scheme aiming to improve both the reconciliation efficiency and the retention length of data.
A. Complexity Reduction
1) Cascade algorithm that only backtracks the first round (BFR-Cascade)

In the original Cascade algorithm, from round i > 1 onward, whenever an error eim is found, it is necessary to backtrack through all the packets in which eim appears in rounds 1~(i-1); these packets constitute a set βi, and the shortest packet in βi is then corrected by BINARY. Obviously, the amount of calculation in this whole process is huge.
The error-correction performance of BINARY and Cascade is shown in Figure 1, where (a) and (c) depict the KDR of the two methods when the SNR is 10 and 0 respectively, while (b) and (d) show the number of corrected errors. The results indicate that the KDR decreases as the backtracking rounds increase; the KDR of the Cascade method declines gently when the SNR is low and more rapidly when the SNR is higher. According to (b) and (d), more errors are corrected by the Cascade method regardless of the SNR, which demonstrates Cascade's stronger error-correction capability compared with BINARY. It is worth mentioning that in (c), although BINARY achieves a KDR near 0, it does so at the cost of losing data: when the reconciliation completes, less than 5% of the data is saved.

Figure 1. Error-correction capability comparison of Cascade and BINARY
As the reconciliation progresses, the KDR naturally becomes lower and lower, so the packet length is expected to increase in order to find the remaining errors. The packets derived from D1 (the result of BINARY correction in the first round) are the shortest. For each error found in a round, noted as eim, we can trace it back to the first round through the relationship between fi and f1, determine the packet in D1 that includes eim, and then complete the error correction there. As the rounds continue, more and more of the even errors hidden in D1 are found, thus achieving the purpose of reducing the KDR. In this way, it is no longer necessary to calculate βi by backtracking rounds 1~(i-1), nor to compare the lengths of all packets in βi, which greatly reduces the amount of calculation.
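A minimal sketch of this backtracking shortcut, again assuming each fi is realized as a permutation of the original key indices (an assumption carried over from the earlier sketches): an error at position e_i of round i maps straight to its position in D1 through fi and f1.

```python
def pos_in_first_round(e_i, perm_i, perm_1):
    """Map an error at position e_i of round i straight to its D1 position.

    With permuted[k] = original[perm[k]], perm_i[e_i] recovers the
    original index, and inverting perm_1 locates that index in the
    round-1 (D1) ordering."""
    inv1 = {orig: k for k, orig in enumerate(perm_1)}
    return inv1[perm_i[e_i]]

perm_1 = [2, 0, 3, 1]                          # f1: round-1 permutation
perm_i = [3, 1, 2, 0]                          # fi: round-i permutation
print(pos_in_first_round(0, perm_i, perm_1))   # -> 2: correct this D1 position
```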
The flow diagram of the BFR-Cascade algorithm is shown in Figure 2.

Figure 2. The flow diagram of BFR-Cascade

2) Stop error correction when the packet length is 2

In the process of BINARY error correction, when an odd number of errors is found in a packet of length Li, a sequence of sub-packets of lengths {Li/2, Li/4, ..., 2} is checked by BINARY in turn. The specific process is: Alice and Bob exchange the parity sums of these sub-packets, keep the sub-packets whose parity sums agree, and continue the BINARY check on the others until the error is located. To avoid information leakage, each sub-packet with a consistent parity sum needs to delete 1 bit.

In the BFR-Cascade algorithm, this error correction happens in the first round's data (D1). When the length of a sub-packet reaches 2, the 2 bits with differing parity sums can be deleted directly without further correction (replaced with deletion codes; see subsection B below), because one of them is wrong and the other discloses itself when the parity sum is exchanged, i.e., it is leaked information. According to this analysis, when the length of the BINARY error-correction packet is 4, only 1 valid bit is retained. Omitting the last step of the error-correction process reduces the amount of computation and saves reconciliation time. It is worth mentioning that the error eij found in round i must still be located down to the specific bit, because it is backtracked to the corresponding position of the first round.
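A minimal sketch of this early stop, assuming (consistent with FUDR below) that marked bits are replaced by a value larger than 1 rather than physically removed at once; the 1-bit leakage deletion for each consistent sub-packet is omitted for brevity:

```python
MARK = 2                             # assumed deletion marker (> 1, not a key bit)

def bisect_or_mark(alice, bob, lo, hi):
    """BINARY on bob[lo:hi] (odd number of errors assumed), but stop once
    the mismatching sub-packet has length 2 and mark both of its bits:
    one is wrong, the other is leaked by the exchanged parity."""
    while hi - lo > 2:
        mid = (lo + hi) // 2
        if sum(alice[lo:mid]) % 2 != sum(bob[lo:mid]) % 2:
            hi = mid                 # odd errors in the left half
        else:
            lo = mid                 # otherwise in the right half
    bob[lo:hi] = [MARK] * (hi - lo)  # mark both bits instead of correcting

alice = [0, 1, 1, 0]
bob   = [0, 1, 0, 0]                 # error at index 2
bisect_or_mark(alice, bob, 0, len(bob))
print(bob)                           # [0, 1, 2, 2]
```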
• Repeated Error-Correction Avoiding Mechanism
Repeated error correction is an important issue in Cascade-based methods, for it affects the reconciliation efficiency. In the original Cascade algorithm, in order to avoid the repeated parts of Pv and βv-1 being corrected twice, βv is constructed as βv = (βv-1 ∪ Pv) \ (βv-1 ∩ Pv). For the BFR-Cascade algorithm, in order to avoid repeated error correction in round i, the position pim in S1 backtracked from eim can be recorded; let Gi = {pi1, pi2, ...}. If a subsequent error found in round i does not belong to Gi, error correction is performed; otherwise it is skipped directly. We note this repeated error-correction avoiding mechanism as RECAM.
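Both the original Cascade guard and RECAM reduce to elementary set operations; a small illustrative Python sketch (group labels and positions are invented) shows the βv symmetric-difference update next to the Gi membership test:

```python
# Original Cascade: beta_v is maintained as a symmetric difference so that
# groups already containing a corrected error are not bisected twice.
beta = {(1, 0), (2, 3)}              # beta_{v-1}: (round, group) labels
P_v  = {(2, 3), (1, 5)}              # groups containing the new error
beta = beta ^ P_v                    # (beta ∪ P_v) \ (beta ∩ P_v)
print(beta)                          # {(1, 0), (1, 5)} still hold odd errors

# BFR-Cascade / RECAM: remember the D1 positions corrected in round i.
G_i = set()
for p_im in [17, 42, 17]:            # backtracked D1 positions; 17 repeats
    if p_im in G_i:
        continue                     # RECAM: skip a repeated correction
    G_i.add(p_im)
    # ... run BINARY on the D1 packet containing p_im ...
print(sorted(G_i))                   # [17, 42]
```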
B. Increase the retention length of data: FUDR
Take the method of obtaining secret key bits from a wireless channel as an example: the azimuth and distance between the legitimate communicator Bob and an eavesdropper Eve have a great impact on the cross-correlation [13], so it is possible for Eve to generate similar key bits when she is close enough. As mentioned in the previous section, in order to avoid information leakage to third parties, each sub-packet with a consistent parity sum needs to delete 1 bit, and 3 bits need to be deleted when the packet length is 4. There are, however, two methods for how to delete: one is to delete the error bits and the leaked bits after each round of backtracking, so that the data retains only even-error packets (the same as BBBSS processing); the other is to mark the error bits and the leaked bits during the backtracking process (replacing these bits with numbers greater than 1) and delete them uniformly after all backtracking rounds are completed. The first method not only discards more data, but may also find errors in subsequent rounds whose bits were already deleted in the first round, so the even errors cannot be corrected, which limits the error-correcting capability of the algorithm. Therefore, the second method is preferred; it is noted as the final unified deletion rule (FUDR) and is sketched in code after Table 1. The comparison of the data-saving ability of the two methods is shown in Table 1, where the length of the original data is 65536 and both methods conduct 20 rounds of backtracking. The results show that the retention length of data is increased through FUDR, and the increase ratio is 44% when the SNR is 10.
TABLE 1. FINAL KEY LENGTH VERSUS SNR

Deletion Method          SNR = 0 dB   5 dB    10 dB   15 dB   20 dB
Delete after each round  8743         17825   29034   36975   43822
FUDR                     37117        37456   41712   46171   50263
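In code terms, the two deletion rules compared in Table 1 differ only in when the marked bits are removed; a minimal sketch of FUDR's mark-then-strip behaviour, with 2 as an assumed marker value:

```python
MARK = 2                                   # assumed deletion marker (not a key bit)

def mark(bits, positions):
    """During backtracking, flag error/leaked bits instead of deleting them,
    so later rounds can still address every original position."""
    for p in positions:
        bits[p] = MARK

def fudr_finalize(bits):
    """After all backtracking rounds: strip every marked position at once."""
    return [b for b in bits if b != MARK]

key = [0, 1, 1, 0, 1, 0, 1]
mark(key, [2, 4])                          # marked during one round
mark(key, [5])                             # marked during a later round
print(fudr_finalize(key))                  # [0, 1, 0, 1] -> final key bits
```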
IV. SIMULATION RESULTS
A. Simulation Setup
Firstly, a pair of highly correlated discrete data sequences is generated, with their inconsistency caused by noise. The data length N is set to 2^16 (65536). Some new quantization methods have been proposed in recent years: in [14], secret keys generated by a WGAN-GP adversarial auto-encoder are discussed, and C. Chen and M. A. Jensen propose a method using temporally and spatially correlated wireless channel coefficients to generate secret keys [15]. Since this paper focuses only on the key reconciliation method, the equal quantization method [16] is adopted to form the key bitstream. In order to verify the capacity of the reconciliation algorithms under different initial KDRs, the SNR of the discrete data ranges from 0 to 20 dB, with the corresponding initial KDR ranging from 0.333 to 0.045. The reconciliation algorithms include BINARY, Cascade, and the BFR-Cascade proposed in this paper. In the simulation, the reconciliation speed and the length of reserved data of each algorithm are recorded.
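As an illustrative reconstruction of how the initial KDR relates to SNR in this setup (a 1-bit sign quantizer stands in for the equal quantization method of [16], so the paper's exact values 0.333–0.045 will not be reproduced):

```python
import random

def initial_kdr(snr_db, n=65536, seed=1):
    """Estimate the initial key disagreement rate when Bob's observation
    is Alice's plus Gaussian noise at the given SNR (sign quantization)."""
    rng = random.Random(seed)
    sigma = 10 ** (-snr_db / 20)            # noise std for a unit-power signal
    disagree = 0
    for _ in range(n):
        x = rng.gauss(0, 1)                 # Alice's observation
        y = x + rng.gauss(0, sigma)         # Bob's noisy observation
        disagree += (x >= 0) != (y >= 0)    # bits differ after quantization
    return disagree / n

for snr in (0, 10, 20):
    print(snr, "dB ->", round(initial_kdr(snr), 3))
```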
B. Result Analysis
1) Reconciliation speed: To reduce complexity, BFR-Cascade simplifies the backtracking of Cascade from rounds 1~(i-1) to the first round only, so the depth of backtracking is decreased. Theoretically, the calculation complexity and the backtracking depth are positively correlated. Figure 3 displays the normalized reconciliation time over the rounds. The results show that the reconciliation time increases as the backtracking depth grows; BFR-Cascade saves 54% of the time compared with Cascade when the SNR is 8.

Figure 3. Reconciliation time versus backtracking depth
2) Key disagreement rate: Key reconciliation aims to decrease the KDR between partners. The error-correction capacity of the methods is shown in Figure 4: the KDR decreases gradually as the backtracking rounds go on; normal Cascade has the most rapid decrease but needs more calculating time; when the SNR is larger than 10, BFR-Cascade has the best performance, while the BINARY method needs more rounds to decrease the KDR.

Figure 4. KDR versus backtracking depth

3) Retention length of data: Figure 5 compares the data-saving capacity of the different reconciliation methods. The results show that as the backtracking rounds increase, the saved length of data shows a decreasing trend; the BINARY method deletes the most data, while BFR-Cascade with FUDR saves more than 44% more secret key bits than normal Cascade when the SNR is 5.

Figure 5. Saved length versus backtracking depth
V. CONCLUSIONS
To solve the problem of low reconciliation efficiency caused by the high computational complexity of the original Cascade reconciliation algorithm, the BFR-Cascade algorithm is proposed in this paper. It reduces backtracking over all earlier rounds to backtracking over the first round only, consequently simplifying the construction of the location information sets and the packet-length comparison in the backtracking process. At the same time, the deletion method for leaked information is redesigned: according to FUDR, the unusable bits are marked during each round of backtracking and then deleted after all backtracking rounds are completed. Simulation results show that the proposed algorithm improves the reconciliation speed by 54% compared with the original Cascade, and the retention length of data is increased by 44% with FUDR. In the proposed
algorithm, the packet length L1 of the first round of
reconciliation is an important factor. When the initial KDR is
high, L1 should be as small as possible to ensure that the
disagreement rate is reduced to 0 after reconciliation. However,
the smaller L1 is, the longer it takes to complete the error
correction of all packets. Therefore, how to properly determine
the value of L1 needs further research in the future.
REFERENCES
[1] D. Diao and B. Wang, "Secrecy performance analysis of UAV wireless powered communications against full-duplex proactive eavesdropping," 2021 International Conference on Communications, Information System and Computer Engineering (CISCE), Beijing, China, 2021, pp. 192-199, doi: 10.1109/CISCE52179.2021.9445912.
[2] C. H. Bennett, F. Bessette, and G. Brassard, "Experimental quantum cryptography," Journal of Cryptology, vol. 5, no. 1, pp. 3-28, 1992.
[3] G. Brassard and L. Salvail, "Secret-key reconciliation by public discussion," Workshop on the Theory and Application of Cryptographic Techniques, Springer Berlin Heidelberg, 1993, pp. 410-423.
[4] W. Traisilanun, K. Sripimanwat, and O. Sangaroon, "Secret key reconciliation using BCH code in quantum key distribution," 2007 International Symposium on Communications and Information Technologies, Sydney, NSW, Australia, 2007, pp. 1482-1485, doi: 10.1109/ISCIT.2007.4392249.
[5] P. David, "High-speed QKD reconciliation using forward error correction," American Institute of Physics, 2004, pp. 299-302.
[6] N. Sun, S. Zhang, and G. Xin, "The information reconciliation protocol basing on error codes," IEEE International Conference on Software Engineering and Service Science, 2013, pp. 693-696.
[7] F. X. Guo, D. Yu, and G. Xin, "A key reconciliation algorithm based on quantized soft information," Computer Simulation, vol. 35, no. 6, pp. 291-295, 2018.
[8] C. Xu, "Research on physical layer security technology of underwater acoustic communication," Jiangsu University of Science and Technology, May 2022, pp. 10-15.
[9] F. X. Guo, D. Yu, and G. Xin, "Joint secret key reconciliation algorithm based on BINARY and polar codes," Journal of Information Engineering University, vol. 19, no. 3, pp. 338-339.
[10] D. Le, Research on Key Technologies of Post-Processing in Quantum Key Distribution, Harbin Institute of Technology, 2016, pp. 109-112.
[11] Y. S. Shiu, S. Y. Chang, H. C. Wu, S. C. Huang, and H. H. Chen, "Physical layer security in wireless networks: a tutorial," IEEE Wireless Communications, vol. 18, no. 2, pp. 66-74, April 2011, doi: 10.1109/MWC.2011.5751298.
[12] D. Elkouss, A. Leverrier, R. Alleaume, and J. J. Boutros, "Efficient reconciliation protocol for discrete-variable quantum key distribution," 2009 IEEE International Symposium on Information Theory, Seoul, Korea (South), 2009, pp. 1879-1883, doi: 10.1109/ISIT.2009.5205475.
[13] X. Wang, A. Hu, Y. Huang, and X. Fan, "The spatial cross-correlation of received voltage envelopes under non-line-of-sight," 2022 4th International Conference on Communications, Information System and Computer Engineering (CISCE), Shenzhen, China, 2022, pp. 303-308, doi: 10.1109/CISCE55963.2022.9851093.
[14] J. Han, Y. Zhou, G. Liu, T. Liu, and X. Zeng, "A novel physical layer key generation method based on WGAN-GP adversarial autoencoder," 2022 4th International Conference on Communications, Information System and Computer Engineering (CISCE), Shenzhen, China, 2022, pp. 1-6, doi: 10.1109/CISCE55963.2022.9851065.
[15] C. Chen and M. A. Jensen, "Secret key establishment using temporally and spatially correlated wireless channel coefficients," IEEE Transactions on Mobile Computing, vol. 10, no. 2, pp. 205-215, Feb. 2011, doi: 10.1109/TMC.2010.114.
[16] J. Wallace, "Secure physical layer key generation schemes: performance and information theoretic limits," 2009 IEEE International Conference on Communications, Dresden, 2009, pp. 1-5.