Implementation and evaluation of Polar Codes in 5G
(Swedish title: Implementation och evaluering av Polar Codes för 5G)

Tobias Rosenqvist, Joël Sloof

Faculty of Health, Science and Technology
Computer Science, 15 hp
Supervisor: Stefan Alfredsson
Examiner: Kerstin Andersson
Date: 2019-06-12
Serial number: N/A

© 2019 The author(s) and Karlstad University. This report is submitted in partial fulfillment of the requirements for the Bachelor's degree in Computer Science. All material in this report which is not our own work has been identified and no material is included for which a degree has previously been conferred.

Tobias Rosenqvist, Joël Sloof

Approved, Date of defense
Advisor: Stefan Alfredsson
Examiner: Kerstin Andersson

Abstract

In today's society the ability to communicate with one another has grown in importance, where a lot of the focus in the telecommunication industry is aimed towards speed. For transmissions to become even faster, there are many ways to enhance transmission speeds, of which error correction is one. Error correction codes pad messages such that they are protected from noise and can be transmitted safely while using as few bits as possible. Short codes with low complexity are one solution for faster transmission speeds. An error correction code which has gained a lot of attention since its first appearance in 2009 is Polar Codes. Polar Codes was chosen as the 3GPP standard for the 5G control channel. The goal of the thesis is to develop and implement Polar Codes and rate matching according to the 3GPP standard 38.212. Polar Codes is then evaluated with different block sizes and rate matching settings. Finally, Polar Codes is compared with Convolutional Codes in an LTE-simulation environment. The performance evaluations are presented using BLER/(Eb/N0) graphs. In this thesis a Polar encoder, rate matching and a Polar decoder (with the Successive Cancellation algorithm) were successfully implemented. The simulation results show that Polar Codes performs better with longer block sizes and also has better BLER performance than Convolutional Codes when given the same message lengths.

Contents

1 Introduction
1.1 Thesis goal
1.2 Results
1.2.1 Expected results
1.2.2 Actual results
1.3 Thesis outline
2 Background
2.1 Introduction
2.2 Mobile broadband
2.2.1 Long Term Evolution
2.2.2 New Radio
2.3 Noise channels
2.3.1 AWGN
2.4 Phase-shift keying
2.5 Channel capacity
2.6 Interleaving
2.7 Rate matching
2.8 Error correction code
2.8.1 Convolutional Codes
2.8.2 Polar Codes
2.9 Eb/N0
2.10 Block error rate
2.11 MATLAB
2.12 Summary
3 Project Design
3.1 Introduction
3.2 Convolutional Codes
3.3 Polar encoder
3.3.1 3GPP 38.212
3.3.2 Encoding and channel polarisation
3.4 Rate matching
3.4.1 Sub-block interleaving
3.4.2 Bit selection
3.5 Rate dematching
3.5.1 Bit deselection
3.5.2 Sub-block deinterleaving
3.6 Polar decode
3.6.1 Belief
3.6.2 Frozen bits
3.7 Successive Cancellation
3.7.1 Leaf state
3.7.2 Left state
3.7.3 Right state
3.7.4 Up state
3.8 Simulation
3.8.1 Parameters
3.8.2 Performance comparisons
3.9 Summary
4 Project Implementation
4.1 Introduction
4.2 Convolutional Codes
4.3 Polar encoder
4.3.1 encode_polar.m
4.3.2 getReliabilitySeq.m
4.3.3 PolarEncode.m
4.3.4 getGn.m
4.4 Rate matching
4.4.1 rateMatch_polar.m
4.4.2 subBlock_interleaver.m
4.4.3 bit_selection.m
4.4.4 bit_interleaving.m
4.5 Rate dematching
4.5.1 DerateMatch_polar.m
4.5.2 bit_Deinterleaving.m
4.5.3 bit_Deselection.m
4.5.4 subBlock_Deinterleaver.m
4.6 Polar decoder: Successive Cancellation
4.6.1 decode_polar.m
4.6.2 SC_Decode.m
4.6.3 SC_Decode_Node.m
4.7 Simulation
4.8 Summary
5 Results
5.1 Introduction
5.2 Polar Codes: Successive Cancellation performance
5.3 Rate matching with repetition
5.4 Rate matching with puncturing
5.5 Rate matching with shortening
5.6 Polar Codes vs Convolutional Codes
5.7 Summary
6 Conclusion
6.1 Introduction
6.2 Thesis evaluation
6.3 Future work

List of Figures

2.1 Effect of AWGN on a Signal of 5 dB
2.2 Constellation diagram of 8-PSK (left) and BPSK (right)
3.1 Simulation chain
3.2 Convolutional Codes design
3.3 Simulation chain, encoding block highlighted
3.4 B-DMC channel
3.5 Polarisation of W into N virtual channels, Un: input bit, Gn: see equation 3.5, Xn: bit after encoding, Yn: received bit
3.6 Polarisation of W using G2
3.7 Simulation chain, rate matching highlighted
3.8 Bit selection: repetition, puncturing, shortening
3.9 Simulation chain, rate dematching highlighted
3.10 Simulation chain, decoding block highlighted
3.11 Successive Cancellation binary tree, L(n) is the beliefs in a node where n is the number of beliefs. ûi represents a decoded bit and 0:s are frozen bits
3.12 Pre-order depth-first binary tree
3.13 Simulation chain, included blocks highlighted
4.1 Simulation chain, encode block highlighted
4.2 Polar Encoding file scheme
4.3 MATLAB code for encode_polar CRC calculation
4.4 MATLAB code for encode_polar, mapping information bits to reliable virtual channels
4.5 MATLAB code for getReliabilitySeq, collect the reliability sequence of size N
4.6 MATLAB code for PolarEncode, encoding u_bits
4.7 MATLAB code for getGn, generating Gi
4.8 Simulation chain, rate matching block highlighted
4.9 Rate matching file scheme
4.10 MATLAB code for subBlock_interleaver
4.11 MATLAB code for bit_selection
4.12 MATLAB code for bit_interleaving
4.13 Simulation chain, rate dematching block highlighted
4.14 Rate dematching file scheme
4.15 MATLAB code for bit_Deselection
4.16 MATLAB code for subBlock_Deinterleaver
4.17 Simulation chain, Polar decoding block highlighted
4.18 Decoding file scheme for Successive Cancellation
4.19 MATLAB code for decode_polar
4.20 MATLAB code for SC_Decode
4.21 MATLAB code from SC_Decode_Node, leaf operation
4.22 MATLAB code from SC_Decode_Node, left/right/up part. Functions SplitLeft and SplitRight are code derived from equations 3.7 and 3.9
4.23 MATLAB code from variable initialisation in testEnvironment
4.24 MATLAB code where the message is generated in testEnvironment
4.25 MATLAB code for BLER calculation
5.1 SC block size performance. Polar(N,K), subframes: 1000, CRC: crc24c
5.2 No rate matching vs repetition. Dotted lines are without rate matching and solid lines are with repetition. Polar(N,K) E, subframes: 1000, CRC: crc24c
5.3 No rate matching vs puncturing. Dotted lines are without rate matching and solid lines are with puncturing. Polar(N,K) E, subframes: 1000, CRC: crc24c
5.4 No rate matching vs shortening. Dotted lines are without rate matching and solid lines are with shortening. Polar(N,K) E, subframes: 1000, CRC: crc24c
5.5 Polar Codes versus Convolutional Codes, subframes = 1000, transmission block size = 960, signal strength = [-5:0.5:5], message sizes (left to right): 88, 144, 176, 208, 256, 328, 392, 472, 536, 616. Same colour & marker = same message size

1 Introduction

Communication has always been important in all societies. It exists in many shapes and forms, such as verbal, written and transmitted by other means. In just a matter of decades the way we communicate over long distances has changed from sending paper letters to talking to each other in real time with the help of telecommunications.
With real-time communications, phone usage has gone from needing to find a phone booth to always having a phone in our pockets. As society has grown away from the patience we had waiting for those paper letters, to impatiently waiting for someone to return a missed call, the demand for reliable and fast data transmission has risen. With each generation of mobile network released, where 4G is currently in use and 5G is being developed, the goal is to improve these data transmissions.

With the use of channel coding the number of errors which occur during transmission can be controlled and kept at a desirable level. The algorithms used for channel coding are called error correction codes, as their purpose is to restore messages transmitted over a channel, even when the channel is affected by noise. As 5G, also known as New Radio, is being developed, research and proposals were made on which algorithms should be used for channel coding. The 3GPP project, which unites telecommunications standard development organisations and companies, approved Polar Codes as the channel coding algorithm for 5G control channels. Polar Codes has a recursive structure with low complexity and was proven to achieve channel capacity for long block lengths. Polar Codes was invented by Erdal Arikan and has been a hot topic since he published his paper in 2009 [1].

1.1 Thesis goal

This thesis was proposed by Tieto, an enterprise IT company with years of experience of development in the telecommunication sector [2]. Tieto has ongoing development within the newest telecommunication technologies and was in need of an in-depth view of the Polar Codes used in the New Radio (NR) technology.

The goal of this thesis is to develop and implement Polar Codes. This includes an encoder which follows the 3GPP standard 38.212 and a decoder. Since there is no standard for a Polar decoder, research needed to be done to evaluate which algorithm to use in this thesis. One of the goals was to research different decoders and present these to Tieto before deciding which one to implement.

After the implementation of Polar Codes, Tieto wanted an evaluation of its performance using BLER/(Eb/N0) graphs. The evaluation of Polar Codes was to include performance for different block sizes, with and without rate matching. Finally, Polar Codes was to be compared with Convolutional Codes, which is an error correction code used in Long Term Evolution (LTE). To do so, Polar Codes had to be implemented in Tieto's LTE-simulation environment.

1.2 Results

1.2.1 Expected results

Polar Codes is used in the control channel of the NR network, more commonly known as 5G. This suggests that the performance of Polar Codes should be better than the performance of the Convolutional Codes used in LTE, more commonly known as 4G.

1.2.2 Actual results

The results and findings of this thesis correspond with the initial expectations. Polar Codes performs better when larger block sizes are used, with all the different types of rate matching that are described in the adopted 3GPP standard. The implemented decoding algorithm for Polar Codes is Successive Cancellation. With this decoding algorithm Polar Codes performs better than Convolutional Codes in all simulations run for this thesis.

1.3 Thesis outline

This thesis is focused on the comparison and evaluation of Polar Codes. The design and implementation of Polar Codes is described in detail and the results are evaluated.
The thesis starts with chapter 2, explaining the background information required to understand the design and implementation. After the background information is covered, the design and implementation of Polar Codes are described in detail in chapters 3 and 4 respectively. Finally, in chapters 5 and 6, the results and conclusions are covered, describing the evaluation and findings of this thesis.

2 Background

2.1 Introduction

To communicate, three parts need to be available: a sender, a receiver and a medium over which the message can be sent. These mediums are often subject to noise, and thus carry the risk of message corruption upon transmission. The type of medium used in this thesis is a channel, namely a radio channel. To compensate for the interference of noise, error correction codes are implemented where unreliable communication channels are used. The telecommunication industry relies heavily on the usage of error correction codes in the Long Term Evolution (LTE) and New Radio (NR) technologies, described in sections 2.2.1 and 2.2.2 respectively.

This thesis is focused on the performance of error correction codes. The codes that are to be evaluated are Convolutional Codes and Polar Codes. Polar Codes is the agreed-upon 3GPP standard in the NR technology and is therefore interesting to evaluate in comparison with the older technology used in LTE, namely Convolutional Codes. Polar Codes can be implemented using different decoding algorithms, and the impact of the algorithms on performance is also to be evaluated. This chapter covers the background information required to understand the subjects this thesis approaches.

2.2 Mobile broadband

Mobile broadband technologies are used to enable wireless internet access for mobile devices. The technologies of interest for this thesis are LTE and NR. In the sections below LTE and NR are described in more detail.

2.2.1 Long Term Evolution

LTE is a mobile broadband technology and global standard, commonly known as 4G [3]. LTE has been used for about 10 years since its first commercial deployment in 2009. The technology was researched and developed by the 3rd Generation Partnership Project (3GPP). 3GPP is a large organisation of member companies from all over the world working together to research and improve mobile communication technologies. LTE was designed to keep evolving to ensure competitiveness over a ten-year time frame. The requirements for the evolution of LTE from 3G are, as quoted from [3]:

• Reduced delays, in terms of both connection establishment and transmission latency;
• Increased user data rates;
• Increased cell-edge bit-rate, for uniformity of service provision;
• Reduced cost per bit, implying improved spectral efficiency;
• Greater flexibility of spectrum usage, in both new and pre-existing bands;
• Simplified network architecture;
• Seamless mobility, including between different radio-access technologies;
• Reasonable power consumption for the mobile terminal.

2.2.2 New Radio

NR is a mobile broadband technology and global standard, commonly known as 5G [4]. NR is developed by 3GPP and is to be the successor of LTE. The NR technology has been under active development since 2015 and reached early deployments of 5G networks in 2018. NR has many similarities with LTE, and reuses many of its features. The focus of NR can be divided into three categories:

• Enhanced Mobile Broadband (eMBB) - eMBB corresponds to achieving an enhanced user experience, by reaching higher data rates.
• Massive Machine-Type Communication (mMTC) - mMTC corresponds to the technology supporting massive amounts of devices with low costs and energy consumption.
• Ultra-Reliable and Low-Latency Communication (URLLC) - URLLC corresponds to the technology achieving high reliability and very low latencies.

2.3 Noise channels

To measure and quantify the performance of digital communication channels, such as wireless communication, one looks at the probability of bit errors at a given level of disturbance, called noise. Many different noise models exist for a variety of different communication scenarios. A few examples of these types of noise models are Additive White Gaussian Noise (AWGN), Rayleigh fading and Random waypoint.

• AWGN is a model that can simulate the natural noise existing all around us; this noise can be caused by, for example, thermal vibrations of atoms [5]. AWGN is the noise model used in the simulations for this thesis.
• Rayleigh fading is used to simulate an urban environment where line of sight is nearly impossible between source and receiver [6].
• The Random waypoint model is used to simulate mobile devices that change location, velocity and acceleration over time [7].

This thesis uses AWGN as its noise channel; Tieto had an existing implementation which could be used and wanted Polar Codes tested with it.

2.3.1 AWGN

AWGN is a basic noise model which is used to simulate the effect that natural random signals, known as background noise, have on a wireless signal, as seen in figure 2.1. Its name, AWGN, comes from the characteristics of the model as follows [5].

• Additive - The received transmission is equal to the transmitted signal plus noise.
• White - The noise represents the idea of a uniform power across the whole frequency band, which means it does not appear different at other frequencies.
• Gaussian - AWGN follows a Gaussian normal distribution, which means it can take both negative and positive values, with values closer to zero having a higher probability of appearing.
• Noise - Signal disturbance.

Figure 2.1: Effect of AWGN on a Signal of 5 dB

2.4 Phase-shift keying

Digital modulation is a technique that is used for data transmission, namely:

• Frequency-shift keying (FSK).
• Amplitude-shift keying (ASK).
• Phase-shift keying (PSK).

These are all modulation schemes which represent the data differently. This thesis uses PSK, which apart from telecommunication is also used for Bluetooth, RFID and wireless LANs [8].

PSK is a technique where the carrier signal is modulated by varying the cosine and sine inputs at a particular time, where the sine carrier is split into 2^n phases [9]. The distribution of phases is often represented using a constellation diagram, which shows the in-phase component as a dot on the complex plane, see figure 2.2.

Figure 2.2: Constellation diagram of 8-PSK (left) and BPSK (right)

By using more phase states, it is possible to have a higher data rate on the same bandwidth signal compared to using fewer phase states. This comes with the downside of adding more complexity to the modulation and a higher risk of misdetection. This thesis uses the simplest form of PSK, namely Binary Phase Shift Keying (BPSK), where the sine carriers are represented by two phase states, 0-bit: 1 and 1-bit: -1, see figure 2.2.
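As a concrete illustration of how BPSK and AWGN interact in a simulation, the sketch below maps bits to BPSK symbols, adds Gaussian noise for a chosen Eb/N0 and forms soft beliefs. It is a minimal stand-alone example with illustrative variable names; it is not the thesis's or Tieto's simulation code.

% Minimal BPSK-over-AWGN sketch (illustrative, not the thesis's simulation code).
EbN0_dB  = 2;                                    % example signal strength in dB
bits     = randi([0 1], 1, 1000);                % random message bits
symbols  = 1 - 2*bits;                           % BPSK mapping: 0 -> +1, 1 -> -1
sigma    = sqrt(1/(2*10^(EbN0_dB/10)));          % noise standard deviation (uncoded BPSK)
received = symbols + sigma*randn(size(symbols)); % add white Gaussian noise
llr      = 2*received/sigma^2;                   % beliefs: positive -> bit 0, negative -> bit 1
hardBits = llr < 0;                              % hard decisions
bitErrors = sum(hardBits ~= bits);

Stronger noise (a lower Eb/N0) increases sigma and pushes more received values across zero, which is exactly the kind of error the channel codes in this thesis are there to correct.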
2.5 Channel capacity

Channel capacity is the rate at which information can be transmitted without compromising the outcome at the receiving end. Transmitting bits at higher rates than the channel capacity will result in bits colliding at the receiving end and thus corrupting the output. Since a channel always experiences noise, one cannot simply transmit a message at full channel capacity and expect complete recovery. This is addressed by the Shannon-Hartley theorem, a mathematical model with which the capacity of a channel in the presence of noise can be calculated [10]. The theorem states that coding methods exist that can approach full channel capacity, but it cannot be used to construct these codes.

2.6 Interleaving

Interleaving is a technique where a bit stream is interleaved according to a known distribution pattern. This means that the bits are placed out of order, leading to more diversity within the bit stream. The advantage of using interleaving is that if a chunk of bits is lost in transmission, the erroneous bits are spread out after deinterleaving, making it easier for the error correction code to restore the original message.

2.7 Rate matching

Rate matching is a method where information blocks are manipulated so that their length matches the transmission rate [3]. How these information blocks can be manipulated depends on their size after encoding, which can lead to either shortening or extending of the block. How this is achieved depends on which standard is used. In LTE, rate matching for convolutionally coded transport channels is done in three steps: sub-block interleaving, bit collection and bit selection [11]. In NR, rate matching for Polar Codes is also done in three steps: sub-block interleaving, bit selection and interleaving of coded bits [12]. In chapter 3, Design, rate matching for Polar Codes is described more extensively.

2.8 Error correction code

Error correction codes add redundancy to a message in such a way that the receiver can verify the integrity of the message, check whether the received message is correct and, if not, even restore the message to a certain extent. Veronique Charlet et al. [13] explain error correction codes with a simplified analogy: a person who speaks sends a signal to someone who is listening, with the air conveying the vibrations and forming the sound wave. However, others might be talking nearby, creating noise and making it harder for the receiver to hear the message. By yelling the message the speaker increases the probability that the message is heard, but this is far more exhausting. Instead of shouting, the speaker can add a number after each letter which corresponds to the letter's place in the alphabet. The extra data is redundant, but in the case of corruption, the numbers might help to clarify what the intended message really is.

2.8.1 Convolutional Codes

Convolutional Codes is a type of error correction code which operates on a bit stream. The bit stream is pushed through what can be seen as a sliding window. With every tick, each bit is moved further along the window, creating a dependency between the bits. A more detailed description of Convolutional Codes can be found in the design section 3.2. Convolutional Codes was introduced in 1955 by Peter Elias [14], when he showed that redundancy can be applied with the help of shift registers. Convolutional Codes are used in today's LTE [11] control channel.

2.8.2 Polar Codes

Polar Codes, introduced by Erdal Arikan in 2009 [1], builds on the concept of channel polarisation.
It is the first family of codes that achieves the channel capacity of any symmetric binary-input discrete memoryless channel (B-DMC), and it does this with low coding complexity O(N log N), where N is the block length. Polar Codes is based on recursion, where the original channel Wn is divided into the virtual channels W1 and W2. Arikan proved that with enough recursive divisions the cut-off rate becomes higher on the virtual channels than on the original channel, and that the virtual channels tend to have either high or low reliability. This means that the channels are either completely noisy or noiseless, where the noiseless channels should be chosen to transmit data on. Polar Codes was agreed upon for use in the upcoming 5G NR technology [12], replacing the Convolutional Codes currently used in LTE.

2.9 Eb/N0

Energy per bit to noise power spectral density ratio (Eb/N0) is a measurement used to evaluate the strength of a channel, where higher values correspond to a strong signal and lower values correspond to a weak signal. Eb/N0 is presented in decibels (dB).

2.10 Block error rate

Block error rate (BLER) is the ratio of the number of blocks containing errors to the total number of transmitted blocks. BLER is calculated after the decoding and evaluates the cyclic redundancy check (CRC), which is a type of checksum, for each transported block [15]. The CRC is appended to each transmitted block, and the block is only marked as successfully transmitted if the CRC attachment matches the CRC calculated by the receiver.

2.11 MATLAB

MATLAB is a development environment and programming language [16]. The programming language relies heavily on matrices, making it well suited for handling large data collections. MATLAB is used to simulate the physical communication layer with code provided by Tieto and to compile the simulation results into a BLER vs Eb/N0 graph.

2.12 Summary

In this chapter the background of this thesis was introduced, and the terms needed to understand the goal of the thesis were explained. This chapter has introduced the mobile broadband technologies LTE and NR, where some crucial parts of the technologies have been described in more detail. The main focus is on the description of the different error correction codes and their dependencies.

3 Project Design

3.1 Introduction

In this chapter the overall design of the project is introduced and described. The highlighted blocks in the transmission chain, seen in figure 3.1, represent the design and implementation scope of this thesis. The same figure can be used to describe the transmission chain for Convolutional Codes, with the exception of which encode and decode algorithm is used. The blocks highlighted in figure 3.1 are described in this chapter, where the main focus is on Polar Codes with the chosen decoding algorithm, Successive Cancellation.

Figure 3.1: Simulation chain

3.2 Convolutional Codes

The design structure of Convolutional Codes is as follows: a bit stream (denoted C), containing x number of bits, is pushed through m number of memory registers (denoted D), each holding one input bit. The bits are then pushed through XOR gates, as shown in figure 3.2, which gives an output bit stream.

Figure 3.2: Convolutional Codes design

Convolutional Codes are denoted as (n, k, m) [17, ch. 6], where n is the number of input bits, k is the number of output bits and m is the number of memory registers. The rate is denoted n/k (rate 1/3 in figure 3.2), meaning that for every n input bit(s) there are k output bit(s). After each time frame all registers perform a bit shift to the right, meaning that new bits come in and the bit currently in the last register is dropped.

The scope of this thesis does not include the implementation of Convolutional Codes, but rather its performance compared to Polar Codes. This thesis uses an already existing implementation of Convolutional Codes, according to 3GPP 36.212 [11], provided and implemented by Tieto. Since the implementation is Tieto's proprietary software, this thesis will not include any source code of Convolutional Codes.
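To make the shift-register principle concrete, the sketch below shows a generic rate-1/3 convolutional encoder of the kind illustrated in figure 3.2. The generator taps are chosen arbitrarily for illustration; this is not Tieto's implementation and not the exact 3GPP 36.212 encoder.

% Generic rate-1/3 convolutional encoder sketch (illustrative taps, not 36.212).
function out = conv_encode_sketch(c)
    m = 6;                          % number of memory registers D
    D = zeros(1, m);                % shift register, initially all zeros
    out = zeros(1, 3*length(c));
    for i = 1:length(c)
        s = [c(i) D];               % current input bit followed by the register contents
        % Three output bits, each an XOR (mod-2 sum) over a set of taps.
        out(3*i-2) = mod(sum(s([1 2 3 5 7])), 2);
        out(3*i-1) = mod(sum(s([1 4 5 6 7])), 2);
        out(3*i)   = mod(sum(s([1 2 4 6 7])), 2);
        D = s(1:m);                 % shift right: the oldest bit is dropped
    end
end

Every input bit thus influences three output bits, and through the registers it keeps influencing the output for m more ticks, which is the dependency between bits described in section 2.8.1.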
3.3 Polar encoder

Figure 3.3: Simulation chain, encoding block highlighted

The Polar encoder is the first step of Polar Codes. The Polar encoder is used by the sender, see figure 3.3, to encode the message such that the receiver will know how to decode and verify the message. In the encoder the information bits are mapped according to a reliability sequence found in the 3GPP 38.212 specification, see section 3.3.1. The reliability sequence is built upon the channel polarisation which the channel undergoes in the encoder, explained in section 3.3.2.

3.3.1 3GPP 38.212

The 3GPP 38.212 [12] standard holds many specifications regarding the usage and implementation of channel coding. Not all of the specifications listed in 38.212 are implemented in this thesis. 3GPP 38.212 describes how Polar Codes are implemented in NR control channels. This thesis focuses on comparing the performance of different channel codes and algorithms when transmitting over an AWGN channel. Since the evaluations are performed in an LTE-simulation, the interleaver is not used for any of the channel codes. This is because interleaving is done differently in LTE and NR, which would make it harder to compare the code performances.

Along with the sent data, 3GPP 38.212 also includes specific ways of using cyclic redundancy checks (CRC) in the different implementations of Polar Codes in the physical channels. This thesis focuses on the implementation of Polar Codes in the control channel; therefore a CRC of size 24 [12, ch 7.3.2], namely crc24c, is used as described in [12, ch 5.1]. 3GPP 38.212 includes a Polar reliability sequence that is used to determine the "frozen bits" and which channel the information bits should be sent on, further explained in the following section. The Polar reliability sequence is established by sorting the virtual channels by their channel capacity from worst to best.

3.3.2 Encoding and channel polarisation

The purpose of an encoder is to translate information from a sender into a code-word which is then transmitted to the receiver. As the transmission can be affected by noise, which disrupts the signal, the encoder also needs to add protection to the message in such a way that the decoder can still restore the message even though some bits might be flipped. To do so, Polar Codes assumes that a binary-input discrete memoryless channel (B-DMC) [18] is used [1].

Pr{X = x, Y = y}     (3.1)

As seen in equation 3.1, where x and y are discrete alphabets, in this case x = {0, 1} and y = {0, 1}, a B-DMC carries bits that can either be 1 or 0. X and Y are the input and output values respectively, where X is a value in x and Y is a random value in y. This leads to equation 3.2.

W : Pr{X = x, Y = y} = Pr{X = x} p(Y | X)     (3.2)

For the rest of the thesis the channel in equation 3.2 is denoted W, for all (X, Y) ∈ x × y.
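As a concrete example of such a channel (an added illustration, not part of the original text), the conditional factor p(Y | X) in equation 3.2 for a binary symmetric channel that flips a transmitted bit with crossover probability p is

p(y \mid x) = \begin{cases} 1 - p, & y = x \\ p, & y \neq x \end{cases}

so a single parameter p fully describes how noisy that channel is; the AWGN channel used in this thesis plays the same role but with continuous-valued outputs y.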
The goal of Polar encoding is to take W, see figure 3.4, and make N copies that are combined and split into different virtual channels, see figure 3.5. The virtual channels are either "noiseless" or completely "noisy" channels [1]. The noise referred to here is the information which is added from other channels, see figure 3.6 where U1 is considered more noisy than U2. The more channels that are polarised, the larger the gap between the noiseless and noisy channels becomes.

Figure 3.4: B-DMC channel

Figure 3.5: Polarisation of W into N virtual channels, Un: input bit, Gn: see equation 3.5, Xn: bit after encoding, Yn: received bit

The combination of virtual channels is done by using a Kronecker product of the matrix G2, see expression 3.3, where the rows of the matrix represent the virtual channels a bit should be transmitted on and the columns represent whether or not the bit should be XORed when combining the virtual channels. An illustration of G4 and GN respectively can be seen in equations 3.4 and 3.5.

G_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}     (3.3)

G_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}     (3.4)

G_N = G_2^{\otimes n}     (3.5)

The process of combining two W copies is illustrated in figure 3.6.

Figure 3.6: Polarisation of W using G2

A Polar encoded transmission is denoted as (N, K), where K is the length of the message and N > K, N = 2^n, n = 2, 3, ..., 10, which forms a vector u of length N bits. The remaining N − K bits are called frozen bits and have a set value of 0. The u vector is then mapped to the channels according to a reliability sequence. The frozen bits are placed on the least reliable channels, called frozen positions, and the information bits are placed on the most reliable channels, called information positions. After the channels are combined, the output from the encoder is a code-word denoted as vector d, where d = uG_N mod 2.

3.4 Rate matching

Figure 3.7: Simulation chain, rate matching highlighted

Rate matching is about matching the number of bits to the desired transmission rate, see section 2.7. Rate matching for Polar Codes is performed after encoding and before the bits are sent over the channel, as seen in figure 3.7. For Polar Codes there are three parts to rate matching: sub-block interleaving, bit selection and interleaving. For this thesis, sub-block interleaving and bit selection were implemented according to the 3GPP 38.212 standard [12]. These parts are described in more detail in the following sections. Bit interleaving is outside of the scope of this thesis and therefore not implemented.

3.4.1 Sub-block interleaving

Sub-block interleaving distributes bits, as described in section 2.6, according to a specific interleaver pattern. The bit distribution divides the block into 32 sub-blocks that are redistributed following the sub-block interleaver pattern in the 3GPP 38.212 standard [12, table 5.4.1.1-1]. The distribution also takes into account which bits are "punctured" or "shortened" in the bit selection part of rate matching.

3.4.2 Bit selection

Bit selection uses three main techniques to achieve the desired number of transmission bits. These are repetition, puncturing and shortening, as described in figure 3.8.

Figure 3.8: Bit selection: repetition, puncturing, shortening

Repetition is used when the desired number of transmission bits is greater than the number of encoded bits and, as the name suggests, repeats the encoded bits. Using repetition, the bits are sent multiple times within the transmission. This achieves a bit redundancy that makes it less likely that a bit is wrongly decoded.
Puncturing and shortening are techniques used only when the desired number of transmission bits is less than the number of encoded bits. Puncturing removes bits from the start of the interleaved sub-blocks and is used when the ratio between information bits and desired transmission bits is less than or equal to 7/16, according to the 3GPP 38.212 standard [12]. Shortening removes bits at the end of the interleaved sub-blocks and is used when puncturing is not used, meaning that the ratio between information bits and desired transmission bits is greater than 7/16, according to the 3GPP 38.212 standard [12].

Both of these options are needed, as Polar Codes is dependent on frozen bits. As explained, puncturing removes the bits from the start of the block, which often are frozen bits. But if the ratio between information bits and transmission bits is greater than 7/16, shortening has to be used, because otherwise there would be too few frozen bits.

3.5 Rate dematching

Figure 3.9: Simulation chain, rate dematching highlighted

Rate dematching is performed on the received bits to restore the encoded bit structure so that the Polar decode algorithm can decode the message, as seen in figure 3.9. Rate dematching consists of three parts: sub-block deinterleaving, bit deselection and deinterleaving. Just like interleaving in rate matching, deinterleaving is outside of the scope of this thesis. Sub-block deinterleaving and bit deselection are described in the following sections.

3.5.1 Bit deselection

Bit deselection needs to reconstruct the encoded bits that were manipulated using the three techniques in bit selection, see section 3.4.2. Received bits that were manipulated with repetition in bit selection are added up for every recurrence of a bit, creating a stronger belief for that bit. Received bits that were manipulated with puncturing in bit selection will have zeros set at the start of the encoded bits, and for shortening, zeros are set at the end of the encoded bits. The zeros are added to replace the bits that were removed during bit selection.

3.5.2 Sub-block deinterleaving

Sub-block deinterleaving is done by redistributing the bits back to how they were distributed before sub-block interleaving, see section 3.4.1. The redistribution is performed according to the reversed sub-block interleaver pattern in the 3GPP 38.212 standard [12, table 5.4.1.1-1].

3.6 Polar decode

The Polar decoder receives a Polar encoded message that was transmitted over an AWGN channel, as seen in figure 3.10. Because of the noise on the AWGN channel, the received message values include signal offsets compared to the message that was sent. Polar Codes is designed to be able to correct these offsets and reconstruct the correct message from the received data.

Figure 3.10: Simulation chain, decoding block highlighted

There are a variety of algorithms for decoding Polar encoded messages. Arikan proposed the Successive Cancellation (SC) technique in 2009 [1] as one of the first decoding algorithms for Polar Codes. As part of this thesis, different algorithms were presented to Tieto with a top-level description of their design and decoding approach. The researched and presented algorithms were:

• Successive Cancellation [1].
• Successive Cancellation List [19].
• Belief Propagation [20].
• Belief Propagation List [21].
• Low-Complexity Software Stack Decoding (based on Successive Cancellation) [22].
• Successive Cancellation Priority Decoding [23].
Most algorithms that could be found were based upon SC with added complexity for better performance. Therefore SC was chosen as the decoder to be implemented for this thesis. An attempt was made to also implement Successive Cancellation List, since this has a significant coding gain compared to SC [19]. Unfortunately time did not allow completion of the SCL decoding algorithm.

3.6.1 Belief

Beliefs, denoted as L, are the transmitted values that the decoder receives. These are represented as negative values for bit value 1 and positive values for bit value 0. The strength of a belief is the absolute value of L, where larger values represent stronger beliefs. For example, |L| = 0.15 can be considered a weak belief, while |L| = 30 can be considered a strong belief.

3.6.2 Frozen bits

Decoding the received beliefs can be done with different algorithms, but the frozen bits will always be handled the same way. The Polar encoder sets frozen bits that represent the virtual channels with the most noise, sorted from worst to best in the reliability sequence found in the 3GPP standard [12]. Information bits are never sent over the frozen bit channels. When decoding the received beliefs, the frozen bits should always be set to 0 regardless of what value the belief has.

3.7 Successive Cancellation

Because of the way Polar Codes are structured, the received values, r, from the channel can be represented as a binary tree. The sequence in which the received beliefs, L, are decoded is similar to a pre-order depth-first search tree, as seen in figure 3.11.

Figure 3.11: Successive Cancellation binary tree, L(n) is the beliefs in a node where n is the number of beliefs. ûi represents a decoded bit and 0:s are frozen bits.

Figure 3.12: Pre-order depth-first binary tree

The binary tree structure is made of nodes and leaf-nodes, where a node always has two children. A leaf-node is a node without children [24]. The operations in nodes can be split into four states: Leaf, Left, Right and Up, where the leaves represent the decoded message bits. The Leaf state will only be used if the node is a leaf, and the Left and Right states will only be used in non-leaf nodes. As seen in figure 3.12, in a pre-order depth-first search tree, the order of these states will always be:

1. Check if the node is a leaf; if so, perform leaf state operations.
2. Check if the node is not a leaf; if so, perform left operations followed by right operations.
3. The up state operations are always performed as the last operations in any node.

3.7.1 Leaf state

The leaf state for a node is about calculating the corresponding decoded bit for the leaf and setting ûi, representing the estimated bit value. If the leaf represents a frozen bit position, the value of the belief Li that was passed down to the leaf is ignored and ûi is set to zero. If the leaf represents a non-frozen bit position, the value of the belief Li is used to determine what value to set to ûi, as seen in equation 3.6.

Li ≥ 0 : ûi = 0
Li < 0 : ûi = 1     (3.6)

3.7.2 Left state

The left state for a node is about sending the correct beliefs to the node's left child, see equation 3.7. The beliefs are split into two parts, a and b.

a = Ln(1, 2, 3, ..., n/2)
b = Ln(n/2 + 1, n/2 + 2, ..., n)     (3.7)

The polarisation that is performed on the virtual channels during encoding, as seen in figure 3.6, is the basis that leads to equation 3.8.

Ln/2 = sgn(a) × sgn(b) × min(|a|, |b|)     (3.8)

For each value in a and b, the signs are multiplied and the result is then multiplied with the smaller of the absolute values of a and b. This is done because the value closest to zero comes from the most unreliable virtual channel.
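A minimal MATLAB sketch of the left-state computation in equations 3.7 and 3.8 is shown below. The function name is illustrative and is not one of the thesis's files; the actual implementation uses SplitLeft inside SC_Decode_Node (see figure 4.22).

% Beliefs sent to the left child (equations 3.7 and 3.8), element-wise min-sum rule.
function Lhalf = left_beliefs(L)
    n = length(L);
    a = L(1:n/2);                                        % first half of the node's beliefs
    b = L(n/2+1:n);                                      % second half of the node's beliefs
    Lhalf = sign(a) .* sign(b) .* min(abs(a), abs(b));   % combined sign, weakest magnitude
end

For example, left_beliefs([2.0 -0.5 1.5 3.0]) returns [1.5 -0.5]: each output keeps the smaller magnitude and the product of the signs.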
3.7.3 Right state

The right state for a node is about sending the correct beliefs Ln to the node's right child. This state is performed after the left state is done, because the result of the left child is used to determine what data is sent to the right child. The a and b values from the left state, as seen in equation 3.7, are also used in the right state. The right state depends on the value ûi from the left child, where ûi represents the bits calculated for the polarised virtual channels, as seen in equation 3.9. As several bits might have been set by the left child, i is an index for each bit set by the left child.

ûi = 1 : Ln/2 = b − a
ûi = 0 : Ln/2 = b + a     (3.9)

3.7.4 Up state

The up state for a node is about sending the gathered and calculated data to the parent node. This state is used in both nodes and leaf-nodes and is the final state that contains the decoded bits in the root node. The up state is done as the last action in any node. For leaf-nodes the calculated ûi is used, and other nodes reconstruct the polarised virtual channel structure using the calculated û values, as seen in equation 3.10. Lastly the ûi value is sent up to the parent node. An example can be seen in equation 3.11.

ûi = [û_left ⊕ û_right, û_right]     (3.10)

û_left = [1], û_right = [1] : ûi = [0 1]     (3.11)

3.8 Simulation

Figure 3.13: Simulation chain, included blocks highlighted

The simulation environment is where channel codes are tested and evaluated. The simulation environment uses a range of parameters that makes it possible to test and evaluate different performance aspects. As seen in figure 3.13, the scope of this thesis is limited to the highlighted blocks. Each block represents a module with a specific purpose.

This thesis uses two different simulation environments: one is used to simulate the performance of Polar Codes using different Polar Code parameters, and one is used to compare Polar Codes with Convolutional Codes. The environment used to simulate Polar Codes performance is described in further detail in the implementation section 4.7, whereas the environment used for the performance comparison with Convolutional Codes is provided and owned by Tieto and will therefore not be described in detail. When comparing Polar Codes to Convolutional Codes, the blocks are in the same sequence, but the Polar encoding/decoding modules are replaced with Convolutional encoding/decoding modules.

3.8.1 Parameters

Parameters of interest for this thesis are PolarBlockSize, InformationBits, TransmissionBlocksize, Eb/N0 (dB), Noise and SubFrames; a sketch of how they could drive one simulation point is given after the list.

• PolarBlockSize - Different block sizes are tested and evaluated against each other under the same conditions. This allows analysis of performance variations depending on the size of the blocks being encoded and decoded. Possible block sizes for Polar Codes are 2^n, where 5 ≤ n ≤ 10, as specified in 3GPP 38.212 [12].
• InformationBits - The number of message bits and CRC bits combined. This parameter is mostly half the PolarBlockSize, except for simulations evaluating rate matching using puncturing in bit selection.
• TransmissionBlocksize - Different transmission block sizes are evaluated using rate matching to achieve the desired sizes.
• Eb/N0 (dB) - A range of values representing the signal strength in the channel, as described in section 2.9. Performance results with different Eb/N0 (dB) are interesting to measure and compare to see the performance differences in decibels between simulations.
• Noise - AWGN is the type of noise used for all the simulations in this thesis, as described in section 2.3.1.
• SubFrames - The subframes are used to run a simulation multiple times using the same settings to be able to get statistical data from the results. In LTE and NR one subframe corresponds to a one millisecond time period for a transport block transmission. More subframes will yield more accurate results for the simulation.
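The sketch below outlines how these parameters could drive one point of a BLER simulation. It is a simplified, hypothetical outline: it reuses the function names and parameter orders from chapter 4, but the exact semantics of some arguments, the return values and the CRC handling are assumptions, and it is not Tieto's testEnvironment.

% Simplified outline of one simulation point (illustrative, not testEnvironment).
N = 256; K = 128; E = 256;                      % PolarBlockSize, InformationBits, TransmissionBlocksize
EbN0_dB = 1.5; subFrames = 1000;
blockErrors = 0;
for sf = 1:subFrames
    msg = randi([0 1], 1, K - 24);              % message bits; 24 bits are reserved for crc24c
    enc = encode_polar(msg, N, 'crc24c', 0);    % encode, CRC appended (rnti 0 chosen arbitrarily)
    tx  = rateMatch_polar(enc, N, K, E, 0);     % match to E transmission bits
    sigma = sqrt(1/(2*(K/E)*10^(EbN0_dB/10)));  % AWGN level from Eb/N0 and code rate K/E
    llr = 2*((1 - 2*tx) + sigma*randn(1, E))/sigma^2;   % BPSK + noise, then beliefs
    rx  = DerateMatch_polar(llr, N, K, E, 0);   % rate dematching
    dec = decode_polar(rx, N, K);               % hypothetical signature, see section 4.6.1
    blockErrors = blockErrors + ~isequal(dec, msg);     % assumes the decoder returns the message bits
end
BLER = blockErrors / subFrames;

Plotting BLER against a sweep of Eb/N0 values gives exactly the kind of curves used in chapter 5.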
3.8.2 Performance comparisons

Performance comparisons are represented with a BLER vs Eb/N0 graph. BLER is on the y-axis, presented on a logarithmic scale, and corresponds to the percentage of blocks that are seen as erroneous, see section 2.10. Eb/N0 is on the x-axis and corresponds to the signal strength in dB, see section 2.9.

3.9 Summary

In this chapter the overall project design for this thesis was described. The Convolutional Codes used in LTE were briefly explained, but the focus is more on the Polar Codes used in NR. The 3GPP 38.212 standard for NR was introduced and described along with the Polar encoder. The design of the chosen decoding algorithm, Successive Cancellation, was introduced along with its decoding concept. The simulation process was described and the different possible parameters were explained.

4 Project Implementation

4.1 Introduction

In this chapter the implementations of Polar Codes, rate matching and the simulation environment developed for the testing of Polar Codes are described. This chapter has five main sections: Polar encoder, rate matching, rate dematching, Polar decoder and simulation. The Polar decoding algorithm in this thesis is a recursive implementation of Successive Cancellation. The following sections contain a thorough description where each implementation is explained in detail with the help of code snippets.

4.2 Convolutional Codes

The Convolutional Codes implementation used in this thesis is developed and maintained by Tieto and will therefore not be described in detail. The Convolutional encoder was implemented following the 3GPP 36.212 standard [11]. The scope of this thesis does not include the implementation of Convolutional Codes.

4.3 Polar encoder

Figure 4.1: Simulation chain, encode block highlighted

The Polar encoder is implemented according to the 3GPP 38.212 standard except for the interleaver [12]. The interleaver, see figure 3.3, is not relevant to this thesis because the main focus is on the performance of the algorithms. The Polar encoder is written in MATLAB and implemented in an LTE simulation environment provided by Tieto. This thesis's implementation of the Polar encoder is a combination of four files, as seen in figure 4.2; this design is used to keep the same design pattern as used in the simulation environment. Because of the way the Polar encoder is implemented in the simulation environment, the core of the encoder is the encode_polar function. The encode_polar function gets the input bits as a parameter, encodes the bits by using the functions mentioned below, and then returns the encoded bits, e_bits.

Figure 4.2: Polar Encoding file scheme

4.3.1 encode_polar.m

The encode_polar function receives four parameters:

• trBlock - Transport block.
• N - Transport block size.
• crcType - Variable containing the name of the CRC.
• rnti - Variable that is used to calculate a unique CRC.
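A hypothetical call, just to show how these four parameters fit together (the sizes and the rnti value are made up for illustration):

% Encode a 40-bit transport block with crc24c; rnti = 0 is an arbitrary example value.
trBlock = randi([0 1], 1, 40);
e_bits  = encode_polar(trBlock, 128, 'crc24c', 0);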
The crcType and rnti values are used to calculate the CRC that is appended to trBlock and placed in vector b_bits, as seen in figure 4.3.

if strcmpi(crcType, 'crc16')
    % [D16 + D12 + D5 + 1]
    gCrc = [1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1];
    crcXor = dec2bin(rnti, 16) - '0';
elseif strcmpi(crcType, 'crc24c')
    % [D24 + D23 + D21 + D20 + D17 + D15 + D13 + D12 + D8 + D4 + D2 + D + 1]
    gCrc = [1 1 0 1 1 0 0 1 0 1 0 1 1 0 0 0 1 0 0 0 1 0 1 1 1];
    crcXor = dec2bin(rnti, 24) - '0';
elseif strcmpi(crcType, 'crc11')
    % [D11 + D10 + D9 + D5 + 1]
    gCrc = [1 1 1 0 0 0 1 0 0 0 0 1];
    crcXor = dec2bin(rnti, 11) - '0';
else
    error('Unknown crcType');
end
b_bits = crcCalc(trBlock(:), gCrc, crcXor);

Figure 4.3: MATLAB code for encode_polar CRC calculation

When b_bits is constructed, the length K of b_bits is used to calculate N, where N > K, N = 2^n, n = 2, 3, ..., 10, which is the number of virtual channels that are used during encoding and transmission. N is used to get the reliability sequence from getReliabilitySeq, and the sequence is placed in Qn. Qn contains the positions of the most reliable channels in ascending order. Vector u_bits is of size N and initialised with all zeros. The information bits, b_bits, are mapped to u_bits using the K most reliable channels of the reliability sequence in Qn, as seen in figure 4.4. This way the frozen bits are the unused positions, which have value 0 from the initialisation.

% Collects the reliability sequence of size N
Qn = getReliabilitySeq(N);
% Generates the u_bits and assigns the information bits according to Qn.
u_bits = zeros(1, N);
u_bits(Qn(N-K+1:1:end)) = b_bits;

Figure 4.4: MATLAB code for encode_polar, mapping information bits to reliable virtual channels

Vector u_bits is then sent to PolarEncode to be encoded and returned in vector e_bits. e_bits is then returned to the simulation environment, which transmits e_bits to the receiver simulation code.

4.3.2 getReliabilitySeq.m

The reliability sequence is defined by the 3GPP 38.212 standard and implemented as a large vector with the values found in 3GPP 38.212 [12, table 5.3.1.2-1]. The vector Q contains the full reliability sequence for N = 1024. getReliabilitySeq receives a value N when called and collects the channel positions where Qi ≤ N; the positions are placed in vector Qn and returned, as seen in figure 4.5.

function Qn = getReliabilitySeq(N)
% Reliability sequence from 3GPP 38.212 Table 5.3.1.2-1
Q = [0 1 2 4 8 16 32 3 5 64 9 6 17 10 18 128 12 33 65 ... (1024 values)];
% Returns reliability sequence for positions <= N
Qn = Q(Q <= N);

Figure 4.5: MATLAB code for getReliabilitySeq, collect the reliability sequence of size N

4.3.3 PolarEncode.m

In this function the encoding of the bits is performed. The function receives an input vector u_bits which contains the information bits along with the frozen bits. After receiving the n:th Kronecker product of G2 from getGn, a matrix multiplication u_bits × Gn is performed and the result placed into e_bits, as seen in figure 4.6. Equation 4.1 illustrates the matrix multiplication for a 4-bit encoding. The output is then taken modulo 2 since there can only be bit values.

\begin{bmatrix} u_1 & u_2 & u_3 & u_4 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}
= \begin{bmatrix} u_1+u_2+u_3+u_4 & u_2+u_4 & u_3+u_4 & u_4 \end{bmatrix}     (4.1)

function e_bits = PolarEncode(u_bits)
N = length(u_bits);
% Generate the n:th Kronecker product of G2.
Gn = getGn(N);
% Generates e_bits according to GF(2)
e_bits = mod(u_bits*Gn, 2);

Figure 4.6: MATLAB code for PolarEncode, encoding u_bits
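As a quick numerical check of equation 4.1 (an added illustration, not part of the thesis code), encoding the 4-bit vector u = [1 0 1 1] gives:

u = [1 0 1 1];
d = mod(u * getGn(4), 2)
% d = [1 1 0 1]: u1+u2+u3+u4 = 1, u2+u4 = 1, u3+u4 = 0 and u4 = 1 (all mod 2)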
4.3.4 getGn.m

getGn is used to generate the Kronecker product of the matrix G2 for size N, as seen in figure 4.7. This is done by first generating G2, as seen in equation 3.3, after which the Kronecker product of two G2 is calculated, creating G4 as seen in equation 3.4, and stored in Gi. Gi is then calculated repeatedly as the Kronecker product of the previous Gi and G2 until the number of rows of Gi equals N, as seen in equation 3.5.

% Generates the n:th Kronecker matrix with G2.
function Gi = getGn(N)
% Generate basic G2
G2 = [1 0; 1 1];
Gi = G2;
n = log2(N);
% Grow Gi to size N
for i = 1:n-1
    Gi = kron(Gi, G2);
end

Figure 4.7: MATLAB code for getGn, generating Gi

4.4 Rate matching

Figure 4.8: Simulation chain, rate matching block highlighted

Rate matching is performed on the Polar encoded bits before the bits are sent over the channel, as seen in figure 4.8. Rate matching was implemented following the 3GPP 38.212 standard [12].

4.4.1 rateMatch_polar.m

This is the main function for rate matching; it holds the three different steps that rate matching for Polar Codes performs, following the 3GPP 38.212 standard [12]. rateMatch_polar takes five parameters and consists of three steps, as seen in figure 4.9.

Figure 4.9: Rate matching file scheme

The parameters used when calling rateMatch_polar are as follows:

• d_bits - Encoded bits to perform rate matching on.
• N - Block size for Polar Codes.
• K - Number of information bits.
• E - Desired block size.
• I_bil - Flag to enable bit interleaving.

First the rate matching code calls subBlock_interleaver, then bit_selection and finally bit_interleaving. After these functions have manipulated the bits, rate matching is complete and f_bits is returned.

4.4.2 subBlock_interleaver.m

The subBlock_interleaver takes two parameters:

• d_bits - Bits to perform sub-block interleaving on.
• N - Block size for Polar Codes.

Sub-block interleaving makes use of the sub-block interleaver pattern described in section 3.4.1. Using the pattern, the bits are redistributed over 32 sub-blocks, as seen in figure 4.10.

function y_bits = subBlock_interleaver(d_bits, N)
P = [0 1 2 4 3 5 6 7 8 16 9 17 10 18 11 19 12 20 13 21 14 22 15 23 ...
     24 25 26 28 27 29 30 31];
J = [];
for n = 0:1:N-1
    i = floor((32*n)/N);
    J(n+1) = P(i+1)*(N/32) + mod(n, N/32);
    y_bits(n+1) = d_bits(J(n+1)+1);
end
end

Figure 4.10: MATLAB code for subBlock_interleaver

4.4.3 bit_selection.m

Bit selection takes four parameters and consists of three parts, as described in section 3.4.2. The parameters are:

• y_bits - Bits to perform bit selection on.
• N - Block size for Polar Codes.
• K - Number of information bits.
• E - Desired block size.

N, K and E are used to determine what kind of bit selection to perform, as seen in figure 4.11. Repetition is performed when E ≥ N, puncturing is performed when K/E ≤ 7/16 and shortening is performed when K/E > 7/16.

function e_bits = bit_selection(y_bits, N, K, E)
if (E >= N)
    % Repetition
    e_bits = y_bits(mod(0:1:E-1, N)+1);
else
    if ((K/E) <= (7/16))
        % Puncturing
        e_bits = y_bits(N-E+1:1:N);
    else
        % Shortening
        e_bits = y_bits(1:E);
    end
end
end

Figure 4.11: MATLAB code for bit_selection

4.4.4 bit_interleaving.m

Bit interleaving takes three parameters:

• e_bits - Bits to perform bit interleaving on.
• E - Desired block size.
• I_bil - Flag to enable bit interleaving.

Bit interleaving is outside of the scope of this thesis, and therefore the I_bil flag will always be set to zero and f_bits is returned equal to e_bits, as seen in figure 4.12.

function f_bits = bit_interleaving(e_bits, E, I_bil)
if (I_bil == 1)
    % Not implemented
    error('Polar interleaving not implemented');
else
    f_bits = e_bits;
end
end

Figure 4.12: MATLAB code for bit_interleaving
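Three hypothetical calls, with sizes chosen only to show which bit-selection mode each combination triggers (assuming d_bits holds 256 encoded bits):

f1 = rateMatch_polar(d_bits, 256, 100, 300, 0);   % E > N                -> repetition
f2 = rateMatch_polar(d_bits, 256,  80, 200, 0);   % K/E = 0.40 <= 7/16   -> puncturing
f3 = rateMatch_polar(d_bits, 256, 120, 200, 0);   % K/E = 0.60 >  7/16   -> shortening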
4.4.4 bit_interleaving.m

Bit interleaving takes three parameters:

• e_bits - Bits to perform bit interleaving on.
• E - Desired block size.
• I_bil - Flag to enable bit interleaving.

Bit interleaving is outside the scope of this thesis; therefore the I_bil flag is always set to zero and f_bits is returned equal to e_bits, as seen in figure 4.12.

function f_bits = bit_interleaving(e_bits, E, I_bil)
if (I_bil == 1)
    % Not implemented
    error('Polar interleaving not implemented');
else
    f_bits = e_bits;
end
end

Figure 4.12: MATLAB code for bit_interleaving

4.5 Rate dematching

Rate dematching happens before Polar decoding, see figure 4.13, and has the function of reversing the changes made by rate matching. To do so, each step described in the standard and implemented in rate matching had to be analysed so that a reverse function of each part could be developed. In this section the files created for the rate dematching are presented and described with the help of code snippets.

Figure 4.13: Simulation chain, rate dematching block highlighted

4.5.1 DerateMatch_polar.m

When the receiver side of the simulation has demodulated the received bits, these are sent to the rate dematcher with the function call DerateMatch_polar. The parameters given to this function are as follows:

• llr - This vector contains the log-likelihood ratios (LLR), in other words the beliefs for each transmitted bit.
• N - An integer value which refers to the block size of information bits plus frozen bits. This is needed so that the rate dematcher restores the bits to the correct block size.
• K - An integer value which refers to the number of information bits that were sent.
• E - An integer value which refers to the block size of the transmission.
• I_bil - A flag indicating whether bit interleaving was used and therefore needs to be reversed. For this thesis I_bil always has the value 0.

DerateMatch_polar calls, in the given order, bit_Deinterleaving, bit_Deselection and subBlock_Deinterleaver with the correct parameters, as seen in figure 4.14. The order of the calls is the reverse of the order used in rate matching. DerateMatch_polar then returns the rate dematched beliefs back to the simulation chain.

Figure 4.14: Rate dematching file scheme

4.5.2 bit_Deinterleaving.m

bit_Deinterleaving deinterleaves coded bits. In this function the I_bil flag is used; since it is always set to zero, because of the scope of this thesis, the only function of bit_Deinterleaving is to place the llr values into another vector called deInterleav_llr and return this vector.

4.5.3 bit_Deselection.m

The function of bit_Deselection is to restore the length of the transmitted bits so that it matches the length of the encoded bits. The length of the encoded bits has either been extended or reduced, depending on whether E is larger or smaller than N. These values are sent in as parameters and are as follows:

• deInterleav_llr - This vector contains the LLR/beliefs for the transmitted bits.
• N - An integer value which refers to the block size of information bits plus frozen bits that the encoder used. This is needed so that the rate dematcher restores the bits to the correct block size.
• K - An integer value which refers to the number of information bits that were sent.
• E - An integer value which refers to the block size of the transmission.

The first decision taken by bit_Deselection depends on the ratio between E and N, see figure 4.15. If E ≥ N, repetition, see section 3.4.2, was performed on the encoded bits before transmission. As seen in figure 4.15, the first N beliefs from deInterleav_llr are placed into deSelected_llr. The last E − N beliefs are then looped through and added to the corresponding index in deSelected_llr. This makes the beliefs for these bits stronger. If E < N, bits were removed before transmission by one of two operations, either puncturing or shortening. In both cases zeros are inserted for the removed bits, but their placement depends on which operation was performed, see figure 4.15. A belief of zero corresponds to a neutral belief, meaning that it is equally likely that the removed bit was a 0 or a 1.

function deSelected_llr = bit_Deselection(deInterleav_llr, N, K, E)
if (E >= N)
    % Repetition
    deSelected_llr = deInterleav_llr(1:N);
    for k = N:1:E-1
        index = mod(k, N)+1;
        deSelected_llr(index) = deSelected_llr(index) + ...
            deInterleav_llr(k+1);
    end
else
    if ((K/E) <= (7/16))
        % Puncturing
        deSelected_llr = [zeros(1, N-E) deInterleav_llr];
    else
        % Shortening
        deSelected_llr = [deInterleav_llr zeros(1, N-E)];
    end
end
end

Figure 4.15: MATLAB code for bit_Deselection
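To see how the repetition branch strengthens beliefs, the short sketch below de-selects E = 6 received beliefs for a block of N = 4 encoded bits. The two repeated positions have their beliefs summed, while the remaining positions pass through unchanged. The belief values are illustrative only, and bit_Deselection is assumed to be the listing in figure 4.15.

% Repetition example: N = 4 encoded bits were repeated into E = 6 transmitted bits
llr = [0.9 -1.2 0.4 2.0 1.1 -0.3];       % illustrative received beliefs
out = bit_Deselection(llr, 4, 2, 6);
% out = [0.9+1.1, -1.2-0.3, 0.4, 2.0] = [2.0 -1.5 0.4 2.0]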
4.5.4 subBlock_Deinterleaver.m

The function of subBlock_Deinterleaver is to restore the order that was changed by the sub-block interleaver on the transmitter side. The parameters used when calling subBlock_Deinterleaver are as follows:

• deSelected_llr - LLR/belief values that are to be rearranged into their original order before decoding.
• N - Block size used for the Polar encoding.

To redistribute the bits back into the order of the encoded bits, subBlock_Deinterleaver uses the pattern described in section 3.5.2. The implementation of the sub-block deinterleaver can be seen in figure 4.16.

function r_deMatched = subBlock_Deinterleaver(deSelected_llr, N)
P = [0 1 2 4 3 5 6 7 8 16 9 17 10 18 11 19 12 20 13 21 14 22 15 23 ...
     24 25 26 28 27 29 30 31];
J = [];
for n = 0:1:N-1
    i = floor((32*n)/N);
    J(n+1) = P(i+1)*(N/32) + mod(n, N/32);
    r_deMatched(J(n+1)+1) = deSelected_llr(n+1);
end
end

Figure 4.16: MATLAB code for subBlock_Deinterleaver
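Because subBlock_Deinterleaver uses the same index vector J as subBlock_interleaver, but writes where the interleaver reads, a simple round trip restores the original order. The sketch below is only a sanity check of the permutation; in the real chain the deinterleaver operates on beliefs rather than on the transmitted bits, and N must be a multiple of 32 for both listings.

% Round-trip sanity check of the sub-block (de)interleaver for N = 64
d = randi([0 1], 1, 64);                 % placeholder encoded bits
y = subBlock_interleaver(d, 64);         % transmitter side
r = subBlock_Deinterleaver(y, 64);       % receiver side
isequal(r, d)                            % returns logical 1 (true)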
4.6 Polar decoder: Successive Cancellation

Figure 4.17: Simulation chain, Polar decoding block highlighted

This section describes how the Successive Cancellation (SC) decoding algorithm was implemented for this thesis. Polar decoding is the last step before message verification and the BLER calculations, see figure 4.17. The decoding algorithm was implemented using recursion. Since Polar Codes can be represented as a perfect binary tree, recursion is a natural choice together with pre-order depth-first traversal. The implementation is split into three files, decode_polar, SC_Decode and SC_Decode_Node, see figure 4.18. decode_polar is the main function for the decoding algorithm; it prepares variables which are then sent to SC_Decode, which starts the SC decoder. SC_Decode calls SC_Decode_Node, which starts the traversal and recursion.

Figure 4.18: Decoding file scheme for Successive Cancellation

4.6.1 decode_polar.m

The implementation of the decoder starts in the decode_polar file. This function is called before the decoding algorithm itself and acts as a gateway from the current LTE-simulation implementation such that the right in- and output values are used. The in-parameters are:

• e_bits - Vector containing the received encoded bits represented by their belief values.
• N - Variable containing the size of the received transmission.
• K - Variable which contains the number of information bits transmitted.
• crcType - String variable that contains a reference to which CRC-attachment size was used when encoding the information bits.
• crcXor - Vector containing the binary value of rnti represented by n bits, where n is the CRC length.

The output values are:

• trBlock - Vector containing the decoded bits.
• crcFlag - Variable that contains the number of CRC errors found.

function [trBlock, crcFlag] = decode_polar(e_bits, N, K, crcType, crcXor)
if strcmpi(crcType, 'crc16')
    %[D16 + D12 + D5 + 1]
    gCrc = [1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1];
    crcSize = 16;
elseif strcmpi(crcType, 'crc24c')
    %[D24+ D23+ D21+ D20+ D17+ D15+ D13+ D12+ D8+ D4+ D2+ D+ 1]
    gCrc = [1 1 0 1 1 0 0 1 0 1 0 1 1 0 0 0 1 0 0 0 1 0 1 1 1];
    crcSize = 24;
elseif strcmpi(crcType, 'crc11')
    %[D11 + D10 + D9 + D5 + 1]
    gCrc = [1 1 1 0 0 0 1 0 0 0 0 1];
    crcSize = 11;
else
    error('Unknown crcType');
end

c_bits = SC_Decode(e_bits, N, K);

% Put the information bits minus the CRC bits into a_bits
a_bits = c_bits(1:K-crcSize);
crcBits = c_bits(K-crcSize+1:end);

% Check CRC
crcCheck = crcCalc(a_bits, gCrc, crcXor);
crcFlag = sum(crcBits(:) ~= crcCheck);
trBlock = a_bits;

Figure 4.19: MATLAB code for decode_polar

The function starts with an if-statement which determines which CRC attachment was used when encoding the information bits. This is needed towards the end of the function, when the decoded bits have to be verified. The function then calls SC_Decode with the parameters e_bits, N and K, which returns the decoded bits without the frozen bits; these are placed in the variable c_bits. c_bits now contains the decoded information bits (message + CRC). The function separates these into one vector containing the message bits and one containing the CRC bits, placed in a_bits and crcBits respectively. A new CRC attachment is then calculated for a_bits using gCrc and crcXor and placed into crcCheck. These values are compared with the decoded CRC and the number of mismatches is placed into crcFlag.
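The helper crcCalc used above (and in figure 4.3) is not listed in this chapter. For a self-contained picture of what such a routine does, the sketch below shows one common way to realise it: binary long division of the message by the generator gCrc, followed by XOR-masking the parity bits with crcXor (the RNTI mask). The function name simpleCrc and its exact interface are hypothetical illustrations, not the thesis's crcCalc.

% Hypothetical illustration of a CRC routine (not the thesis's crcCalc),
% saved as simpleCrc.m. Binary long division by gCrc, parity masked with crcXor.
function crc = simpleCrc(bits, gCrc, crcXor)
L = length(gCrc) - 1;                    % number of parity bits
reg = [bits(:)' zeros(1, L)];            % message followed by L zeros
for k = 1:length(bits)
    if reg(k) == 1
        % Subtract (XOR) the generator whenever the leading bit is 1
        reg(k:k+L) = mod(reg(k:k+L) + gCrc, 2);
    end
end
crc = mod(reg(end-L+1:end) + crcXor, 2); % remainder, scrambled with the RNTI mask
end

An encoder would append the returned crc to the message, and a decoder would recompute it over a_bits and compare it with the received CRC bits, which is the pattern followed in figure 4.19.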
4.6.2 SC_Decode.m

This function acts as a starting point for the recursive calls and restructures the returned result according to the reliability sequence. The in-parameters are:

• d_bits - Vector containing the received encoded bits represented by their belief values.
• N - Variable containing the size of the received transmission.
• K - Variable which contains the number of information bits transmitted.

The output value is:

• c_bits - Vector containing the information bits (the frozen bits are removed).

The function starts off by calling the recursive function SC_Decode_Node with all of its input values. It also sends in two empty vectors, which are used for storing bits, and a node index indicating that the recursion starts at the root node. The values returned from SC_Decode_Node are placed in three vectors: L, ucap and u. The only value of interest is the vector u, because it contains the decoded bits. The function removes the frozen bits from u, see figure 4.20, because they are of no use from this point forward. This is done with the help of the function getReliabilitySeq, which returns the reliability sequence of size N and places it in Qn, see section 4.3.2. To keep only the information bits, the function collects the K most reliable positions from Qn and picks the bit values of u at these positions. These bit values are placed into c_bits, which is returned when the function ends.

function c_bits = SC_Decode(d_bits, N, K)
[L ucap u] = SC_Decode_Node(d_bits, [], [], N, K, 1);
Qn = getReliabilitySeq(N);
c_bits = u(Qn(N-K+1:end));
end

Figure 4.20: MATLAB code for SC_Decode

4.6.3 SC_Decode_Node.m

This is the function where the actual decoding happens. It builds on recursion and operates on a perfect binary tree, meaning that every node except the leaf nodes has exactly two children. The recursive function has four node states: the leaf state, the left state, the right state, and the up/return state. The leaf state checks if the node is a leaf and uses the belief to set the u value. The left and right states call the left and right child nodes respectively. The up/return state sends the calculated values up to the parent node. Each recursive call uses six in-parameters and has three output values. The in-parameters are:

• L - Vector containing the beliefs for the transmitted bits.
• ucap - Vector containing bit values as they are represented in the virtual channels. These values change as the recursion unrolls itself.
• u - Vector containing the set bit values; when the root node is finished this contains the complete decoded transmission.
• N - Size of the received bits.
• K - Size of the received information bits.
• i - Index of the node the recursion is currently at.

The output values are:

• L - Vector containing the beliefs updated by this node.
• ucap - Vector containing the bit values updated by this node.
• u - Vector containing the currently decoded bit values, updated only by leaves.

The first decision made in the function is whether or not the current node is a leaf, see figure 4.21. This is done by checking if the index i is larger than or equal to the number of transmitted bits N. If so, the node is a leaf and the function has to check whether this leaf's corresponding bit position is a frozen bit or not. Its position is obtained by the calculation i − (N − 1). If the leaf corresponds to a frozen bit position, the bit value is set to 0; otherwise it is set according to its belief value, 1 if L < 0 and 0 if L ≥ 0. The decided bit value is appended to ucap and u.

% Leaf
if i >= N
    Qn = getReliabilitySeq(N);
    f_pos = Qn(1:N-K);
    % Frozen
    if any(f_pos == (i-(N-1)))
        ucap(end+1) = 0;
        u(end+1) = 0;
    % Not frozen
    else
        if L(1) >= 0
            ucap(end+1) = 0;
            u(end+1) = 0;
        else
            ucap(end+1) = 1;
            u(end+1) = 1;
        end
    end
end

Figure 4.21: MATLAB code from SC_Decode_Node, leaf operation

If the index is less than N, the node is not a leaf and has to split L such that half of the values are sent to the left child and the other half to the right child, see figure 4.22. Which values to send to the left child is decided by the SplitLeft function, defined in section 3.7.2. The recursive call to the left child then includes the beliefs picked by SplitLeft, ucap, u, N, K and i × 2, which sets the index to the left child of the node. The returned values are put into leftL, ucapL and u. When the left child is finished, the function continues to the right child. The parameters sent to the right child are ucap, u, N, K, (i × 2) + 1 and the beliefs from L selected by the function SplitRight, see section 3.7.3 for the definition of SplitRight. The returned values from the right child are put into rightL, ucapR and u. ucapR and rightL represent the bits set by the right child and their corresponding updated beliefs. Finally the function combines the results received from its children, left and right, to send up the tree to its parent, see section 3.7.4 for the definition.
The operation is done by adding every position from ucapL with corresponding ucapR and performing a mod2operation on the result. %Not Leaf else % Split L in half: a & b a = L(1 :end /2); b = L(end /2+1 :end ); %Left [ leftL ucapL u] = SC_Decode_Node ( SplitLeft (a,b), ucap , u, N,K, i*2); % Right [ rightL ucapR u] = SC_Decode_Node ( SplitRight (a,b, ucapL ), ucap , u, ... N, K, (i*2) +1); %Up ucap = [mod( ucapL +ucapR , 2) ucapR ]; L = [ leftL rightL ]; end Figure 4.22: MATLAB code from SC_Decode_Node, left/right/up part. SplitLeft and SplitRight are code derived from equation 3.7 and 3.9 4.7 Functions Simulation The main file for the simulation is testEnvironment.m. At the start of the program a random seed is set, this is done to ensure that all simulations can be repeated with the same result each time it is run. After the random seed is set the other variables are initialised, see figure 4.23. Apart from variables associated with CRC, all values are parameters to define the scope of the simulation. For every loop, block sizes and signal strength is set. The test environment itself is build using three encapsulated for-loops, from inside out: subframe-loop, signal-strength-loop and block-size-loop. 48 rnti = 123; crcSize = 24; crcXor = dec2bin (rnti , crcSize ) - '0'; crcType = 'crc24c '; nSubframes = 1000; minSNR = -1; stepSNR = 0.25; maxSNR = 4; %power (2, n) to get blocksize. start_n = 7; end_n = 10; Figure 4.23: MATLAB code from variable initialisation in testEnvironment At the innermost for-loop, subframe-loop, the message to be sent is prepared. Its length is calculated using N, K and crcSize, see figure 4.24, where N and K will increase for each outermost loop cycle. N and K respectively represent the transmission block size and information bits size. N = trBlockSize ; K = N/2; % Generate Message a = randi ([0 1], 1, K- crcSize ); Figure 4.24: MATLAB code where the message is generated testEnvironment The message bits are then sent to the Polar encoder, see section 4.3, and the output is placed into encodedBits variable. To simulate the encoded bits being transmitted, noise is added to the message by using a AWGN channel simulation. This is done by converting the encodedBits to symbols, see section 2.4, and then adding random generated noise represented by complex values which creates an offset. The offset is mainly dependent on signal strength and N/K-ratio. The receiver side of the simulation passes the received encoded bits to the decoder, see 49 section 4.6, and the output is placed in trBlock and crcFlag. crcFlag is stored in a matrix called crcMatrix which holds the crcFlag for each subframe. When all the nSubframes have been executed for the current signal strength and block size, BLER is calculated, see code snippet 4.25 and BLER definition in section 2.10. ResultBLER (iRES , n- start_n +1) = ... sum( crcMatrix (iSNR ,1: nSubframes ) >0)/ nSubframes ; Figure 4.25: MATLAB code BLER calculation 4.8 Summary In this chapter the overall implementation of this thesis was described. The main focus was on the NR implementation of Polar Codes and the rate matching that comes with it. The Polar decoding algorithm described in the implementation was Successive Cancellation. The simulation environment was described, explaining the different parameters that are set for the simulations. 50 5 Results 5.1 Introduction This chapter explains and evaluates the results from the simulations where Polar Codes is used with the Successive Cancellation algorithm. 
The simulations have been configured with different parameters to evaluate the desired parts of the Polar Codes implementation. The following sections will evaluate Polar Codes on its decoding performance using different sizes with and without different rate matching techniques. Finally Polar Codes is compared to the existing control channel algorithm used in LTE, Convolutional Codes. 5.2 Polar Codes: Successive Cancellation performance The simulation results in figure 5.1 show that the performance of Polar Codes using SC as decoding algorithm varies dependent on block size. The result shows that the simulation where block size (1024, 512) was used performs better than Polar Codes with smaller block sizes. For 1% BLER, the block size of (1024, 512) compared to block size (512, 256), has a code gain of almost 0.5 dB. This means that block size (1024, 512) can reach the same decoding performance as block size (512, 256) with 0.5 dB less signal strength. Compared to the block sizes, (256, 128) and (128, 64) the code gain is more than 0.5 dB. This behaviour can be explained with the structure of Polar Codes, where a larger block size will lead to more polarised virtual channels as explained in section 3.3.2. More polarised virtual channels lead to a bigger difference between information bits and frozen bits, making it easier to successfully decode the received bits. 51 Figure 5.1: SC block size performance. Polar(N,K), subframes: 1000, CRC: crc24c 5.3 Rate matching with repetition Figure 5.2 represents a simulation where the transmission is using rate matching with repetition, see section 3.4 for definition. The graph also includes results from simulations done without rate matching to show the performance gained when using rate matching with repetition. Simulations ran with rate matching are displayed as solid lines and without rate matching are represented with dotted lines. To simulate repetition, during rate matching a redundancy of bits is created. The bits which are sent over the channel contains two copies of each encoded bit after rate matching. 52 Figure 5.2: No rate matching vs repetition. Dotted lines are without rate matching and solid lines are with repetition. Polar(N, K) E , Subframes: 1000, CRC: crc24c. When comparing the block sizes using repetition from 10% down to 0.1% BLER, one can see that block size (1024, 512) has the best performance. The largest code gain, which is nearly 1.5 dB, can be seen between (1024, 512) and (128, 64) at 0.1% BLER. As explained in the earlier section, this depends on Polar Codes structure as it achieves better performance when used on larger block sizes. In figure 5.2 it is also clear that from 2.5 to 4 Eb /N0 each block size using repetition has a better BLER performance compared to the same block size without repetition. For (1024, 512) it is a code gain of 0.5 dB at 0.1% BLER. One can see that at approximately 0.5% block error rate, (512, 256) behaves unexpected which causes it to perform worse than (128, 64). These errors though can be neglected as 53 they yield an error of about (20/1000) erroneous blocks on a 1000 subframes simulation. 5.4 Rate matching with puncturing Rate matching with puncturing will result in smaller block sizes that are sent over a channel as described in section 3.4. For the evaluation of the performance of Polar Codes using rate matching with puncturing, simulations were performed by puncturing 10% of the bits before transmission. 
The results of these simulations can be seen in figure 5.3, where simulations using puncturing are represented with solid lines, and simulations without rate matching are represented with dotted lines. The simulation results show that for all four block sizes the simulations without rate matching perform slightly better. The code gain for simulations without rate matching compared to simulation using puncturing is up to 0.25 dB at 1% BLER. This result is expected because removing bits logically makes it more difficult to restore the original message bits. The performance loss when using puncturing is however not that great and if a stronger signal is available, performance improvements could be achieved with this technique. 54 Figure 5.3: No rate matching vs puncturing. Dotted lines are without rate matching and solid lines are with puncturing. Polar(N, K) E , Subframes: 1000, CRC: crc24c. 5.5 Rate matching with shortening For the simulation of the effect that shortening has on the BLER performance, this thesis uses a shortening of 10%. This means that 10% of the bits are removed from the transmission during rate matching, see section 3.4. In figure 5.4 the results from shortening compared to simulations without rate matching is presented. Simulations using shortening are represented with solid lines while simulations without rate matching is represented by dotted lines. By comparing the same block size to each other it is clear that shortening has a large negative impact on the performance of Polar Codes. For block size (1024, 512) at 1% BLER there is a code gain of -0.75 dB. This is expected though since shortening removes important bits to match the desired block size. 55 Although performing worse than a transmission not using rate matching, shortening should still be used. When a strong signal is available, shortening allows the transmission to send less bits and still restore the message at the receiver side. Figure 5.4: No rate matching vs shortening. Dotted lines are without rate matching and solid lines are with shortening. Polar(N, K) E , Subframes: 1000, CRC: crc24c. 5.6 Polar Codes vs Convolutional Codes In the previous sections Polar Codes has been tested using different block sizes to determine whether or not its performance grows as the block sizes increases. In this section Polar Codes is compared against LTEs control channel algorithm, Convolutional Codes, to see how it performs compared to the algorithm its replacing. In contrast to the earlier simulations, this one was ran in Tieto’s LTE simulation environment. Because the scope of this thesis did not include development and implementation 56 of Convolutional Code, an already existing implementation from Tieto was used. Figure 5.5: Polar Codes versus Convolutional Codes, Subframes = 1000, transmission block size = 960, signal strength = [-5:0.5:5], message sizes(Left-to-right): 88, 144, 176, 208, 256, 328, 392, 472, 536, 616. Same color & marker = same message size As can be seen in figure 5.5, 10 simulations were ran for both Convolutional Codes and Polar Codes. They used the same environment but differ in which rate matching was used. Polar Codes uses the rate matching implemented in this thesis which follows the 3GPP standard for NR [12]. Convolutional Codes uses the rate matching implemented by Tieto in their environment, the rate matching algorithm for Convolutional Codes follows the 3GPP standard for LTE [11]. In figure 5.5 the same block sizes are represented with the same colours. 
Polar Codes are displayed by a solid line while Convolution Codes are displayed as dotted line. As can be seen in figure 5.5 Polar Codes has better BLER-performance than Convolutional Codes on all tested block sizes. When looking at 1%-BLER one gets an approximate average of 0.4 dB code gain for Polar Codes. 57 5.7 Summary This chapter explained and evaluated the simulation results that were generated from the different simulations. First the performance of Polar Codes was evaluated using different rate matching settings, after which the performance of Polar Codes was compared against Convolutional Codes. The results shows that Polar Codes performs better with larger block sizes for all the different settings tested. This is a reasonable result that can be explained with the polarisation of channels which is the core structure of Polar Codes. Rate matching was used to enlarge the block size to double the polar encoded bits. The results show that a slight code gain can be achieved with this technique. When a block size is larger than the desired transmission block size, rate matching can be used in two different ways, puncturing or shortening. The performance of these techniques is expected to be less, compared to not reducing the block size. This loss in reliability when sending smaller block sizes could lead to performance gain when a strong signal is available. Puncturing shows the most promising results with a code gain of about -0.25 dB at 1% BLER whereas shortening losses more performance with about -0.75 dB code gain. Lastly Polar Codes is compared with Convolutional Codes using different message sizes for a set transmission block size. The results show that Polar Codes has an approximate average of 0.4 dB code gain over Convolutional Codes. 58 6 Conclusion 6.1 Introduction This chapter contains the final conclusions for this thesis divided into two sections: Thesis evaluation and Future work. Thesis evaluation will explain the work performed in this thesis and the findings from the results achieved from the simulations. Future work, explains how research in this thesis could be continued to improve the implementation and reach better results by using a more complete simulation environment and by improving the algorithms used. 6.2 Thesis evaluation This thesis evaluated the performance of Polar Codes. Polar Codes can use a variety of different decoding algorithms and after careful consideration Successive Cancellation was chosen as the algorithm to develop and implement for this thesis. Successive Cancellation is one of the first proposed algorithms and has many adoptions making it an interesting algorithm to evaluate. After developing and implementing Polar Codes using the Successive Cancellation algorithm, its performance was measured within a simulation environment. Polar Codes was evaluated using a CRC: crc24c according to the 3GPP standard [12] and the simulation was ran for 1000 subframes. The results from the simulation shows that Polar Codes performs better for larger block sizes. Polar Codes was also evaluated for different rate matching techniques, repetition, puncturing and shortening. These techniques are defined in the 3GPP standard [12]. Repetition is a rate matching technique used when more redundancy for the transmitted bits is desired and therefore the encoded bits will be sent multiple times within the same transmission block. The results from the simulation using repetition show that there is a 59 small code gain when the block size is doubled. 
When a smaller block size is desired puncturing or shortening can be used. The simulations ran for this thesis evaluated the puncturing and shortening when the block size was reduced by 10%. Puncturing shows a small performance loss compared to no reduction of block size. Shortening show a significant larger performance loss compared to puncturing with the same amount of reduction. These performance losses might not be desired in most cases, but when a strong signal is available, higher transfer rates might be achieved when reducing the amount of transferred bits. Lastly the performance of Polar Codes is compared to the performance of Convolutional Codes. The evaluation was done by implementing Polar Codes in an LTE simulation environment where its performance could be measured and compared to the performance of Convolutional Codes. The results show that Polar Codes performs slightly better in terms of different block sizes and rates that were simulated and evaluated. 6.3 Future work There are many different research paths that can be chosen for future work. One that is ranked high on our list is to implement a simulation environment more similar to a NR-network. This would make the comparison between Convolutional Codes and Polar Codes more fair, since Polar Codes in this thesis is tested in an environment meant for LTE-simulations. Because implementing a whole 5G-simulation environment might be too tough, it would be interesting to at least implement an Interleaving-module and the missing part of the Rate Matching(Interleaving of coded bits). Another approach would be to implement more Polar decoders to test their performance against each other, with a final comparison to Convolutional Codes. Of course the first algorithm to implement would be Successive Cancellation List (SCL) which has been proven to have a significant code gain compared to Successive Cancellation. Unfortunately time did not permit the implementation during this thesis. 60 Another interesting research point would be to implement SC in a hardware suitable programming language. All implementations done during this thesis has been made in MATLAB. Implementing SC in for example C would allow improvement compared to our implementation of SC done in MATLAB. An interesting aspect to evaluate would be time and memory consumption between MATLAB implementation and C implementation. 61 References [1] E. Arikan. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Transactions on Information Theory, 55(7):3051–3073, July 2009. ISSN 0018-9448. doi:10.1109/TIT.2009.2021379. [2] Tieto. About Tieto, 2019. URL https://www.tieto.com/en/about-us/ our-company/. [Accessed February 2019]. [3] Stefania Sesia, Matthew Baker, and Issam Toufik. LTE - The UMTS Long Term Evolution : From Theory to Practice., volume 2nd ed. Wiley, 2011. ISBN 9780470660256. [4] Erik Dahlman, Stefan Parkvall, and Johan Sköld. 5G NR: The Next Generation Wireless Access Technology. Academic Press, 2018. ISBN 9780128143230. [5] Various. Additive white gaussian noise, 2018. URL https://en.wikipedia.org/ wiki/Additive_white_Gaussian_noise. [Accessed February 2019]. [6] Various. Rayleigh fading, 2019. URL https://en.wikipedia.org/wiki/Rayleigh_ fading. [Accessed February 2019]. [7] Various. Random waypoint model, 2019. URL https://en.wikipedia.org/wiki/ Random_waypoint_model. [Accessed February 2019]. [8] Tutorialspoint. Wi-fi - radio modulation, . 
URL https://www.tutorialspoint.com/ wi-fi/wifi_radio_modulation.htm. [Accessed March 2019]. [9] Tutorialspoint. Digital communication - phase shift keying, . URL https: //www.tutorialspoint.com/digital_communication/digital_communication_ phase_shift_keying.htm. [Accessed March 2019]. [10] C. E. Shannon. Communication in the presence of noise. Proceedings of the IRE, 37 (1):10–21, Jan 1949. ISSN 0096-8390. doi:10.1109/JRPROC.1949.232969. [11] 3GPP Technical Specification 36.212. Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and Channel Coding (FDD), 15.4.0 edition, January 2019. URL https://portal.3gpp.org/desktopmodules/Specifications/ SpecificationDetails.aspx?specificationId=2426. [12] 3GPP Technical Specification 38.212. NR; Multiplexing and channel coding, 15.4.0 edition, January 2019. URL https://portal.3gpp.org/desktopmodules/ Specifications/SpecificationDetails.aspx?specificationId=3214. [13] I’MTeck. What are turbo codes?, 2016. URL https://blogrecherche.wp.imt.fr/ en/2016/09/16/what-are-turbo-codes/. [Accessed February 2019]. 62 [14] P. Elias. Error-free coding. Transactions of the IRE Professional Group on Information Theory, 4(4):29–37, Sep. 1954. ISSN 2168-2690. doi:10.1109/TIT.1954.1057464. [15] Various. Block error rate, 2018. URL https://en.wikipedia.org/wiki/Block_ Error_Rate. [Accessed February 2019]. [16] Mathworks. Matlab, 2019. URL https://mathworks.com. [Accessed February 2019]. [17] Christer Frank. Kodning för felkontroll. Lund : Studentlitteratur, 2004. ISBN 91-4403664-7. [18] Various. Binary symmetric channel, 2018. URL https://en.wikipedia.org/wiki/ Binary_symmetric_channel. [Accessed February 2019]. [19] I. Tal and A. Vardy. List decoding of polar codes. IEEE Transactions on Information Theory, 61(5):2213–2226, May 2015. ISSN 0018-9448. doi:10.1109/TIT.2015.2410251. [20] B. Yuan and K. K. Parhi. Architecture optimizations for bp polar decoders. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2654–2658, May 2013. doi:10.1109/ICASSP.2013.6638137. [21] A. Elkelesh, M. Ebada, S. Cammerer, and S. ten Brink. Belief propagation list decoding of polar codes. IEEE Communications Letters, 22(8):1536–1539, Aug 2018. ISSN 1089-7798. doi:10.1109/LCOMM.2018.2850772. [22] H. Aurora, C. Condo, and W. J. Gross. Low-complexity software stack decoding of polar codes. In 2018 IEEE International Symposium on Circuits and Systems (ISCAS), May 2018. doi:10.1109/ISCAS.2018.8351832. [23] D. Guan, K. Niu, C. Dong, and P. Zhang. Successive cancellation priority decoding of polar codes. IEEE Access, 7:9575–9585, 2019. ISSN 2169-3536. doi:10.1109/ACCESS.2019.2890838. [24] Thomas H. Cormen. Introduction to Algorithms., volume 3rd ed. The MIT Press, 2009. ISBN 9780262033848. 63