Self Intelligent Compression on EEG Signals

Ms. R. Arputhavalli
PG Student, ECE
Christ College of Engineering and Technology,
Puducherry, India.

Mr. T. Somassoundaram
Associate Professor and Head of Department
Christ College of Engineering and Technology,
Puducherry, India.
Abstract – This paper provides an effective compression of EEG (electroencephalogram) signals based on hybrid compression, namely SPIHT+DEFLATE and SPIHT+LZW. The type of compression is chosen based on the length of the signal. Specifically, the paper examines the use of lossy compression on EEG signals in order to reduce the amount of data that has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosis. The proposed algorithm revealed a better compression ratio than the traditional compression algorithm, and the quality of the signal is also improved by using these two algorithms.

Keywords – Electroencephalogram, diagnosis, compression,
I INTRODUCTION

EEG is an important tool for monitoring and measuring the performance of the brain. There are different types of EEG recording, such as home monitoring and in-patient monitoring, and home monitoring offers several advantages over in-patient recording when diagnosing neurological conditions. A large amount of data is created even by a short EEG recording. In a portable device, wireless transmission is a major contributor to power consumption, so reducing the amount of data to be sent is desirable.

Compression is a major area of work due to the growth of digital information and the advancement of technology. Generally there are two types of compression: lossless compression and lossy compression. Many techniques fall under these two categories, but the effective and proper compression scheme has to be adopted according to the particular application. Compared with lossless compression, lossy compression achieves a higher compression ratio.

However, there will be imperfections in the reconstructed signal, so a trade-off arises between the loss in the signal that can be tolerated and the compression ratio that can be achieved. PRD (percent root-mean-square difference) is a common measure of the loss between two signals: when the PRD is low, there is little distortion in the compression process.
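A small sketch of how the PRD between an original and a reconstructed signal can be computed is given below. It uses the commonly quoted percentage form PRD = 100 · sqrt(sum((x − x̂)²) / sum(x²)) and assumes NumPy arrays as input; it is an illustration, not the authors' exact evaluation code.

import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between two signals."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    # A lower PRD means less distortion introduced by the compression process.
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))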
For medical applications, lossless compression is mostly used, because there should be no loss of data during the compression process. If a lossy technique is applied to medical data, some of the details are suppressed, and this loss of data can lead to an improper diagnosis. So the lossy type of compression is avoided in such medical applications. In lossless compression, the number of bits is reduced by removing unwanted redundancy without any loss of information. Compression not only lowers the storage cost but also decreases the bandwidth consumed when transmission is carried out. Thus, by compressing the data, we can transmit the data to the receiver at high speed and with a minimum amount of bandwidth. The major goal of the compression technique is to reduce the amount of data without affecting the information needed for a correct diagnosis.

Yang et al. introduced a hybrid scheme based on SPIHT and Huffman encoding. In this scheme the low-magnitude and sign bit planes are scanned in a fixed order, node by node, and the resulting bit streams are further compressed using the Huffman algorithm [1]. In another method, arithmetic coding is applied to SPIHT-encoded images, which gives good compression, but the image details are slightly suppressed [2].

In this paper, SPIHT-based hybrid compression for medical applications is analysed and discussed. Two compression techniques are used here: SPIHT+DEFLATE and SPIHT+LZW. The parameters analysed are the compression ratio, PSNR and computation time. The DWT is used for the decomposition of the signal and serves as a preprocessing step for both algorithms.

The rest of the paper is organised as follows: the proposed block is described in Section II, the overview of the algorithms and the parameters is given in Section III, and the result analysis and conclusion are discussed in Section IV and Section V, respectively.
II PROPOSED FLOW DIAGRAM

The proposed algorithm flow diagram is shown in Fig. 1. In this process the EEG signal is taken first. The first step is pre-processing, in which the unwanted noise in the signal is removed by filtering; here a notch filter is used to remove the noise from the signal.

Fig 1. Flow diagram of the proposed algorithm (EEG signal → pre-processing → DWT → hybrid compression; reconstruction: inverse hybrid compression → inverse DWT → post-processing → reconstructed EEG signal)

After the noise is removed, the signal is applied to the discrete wavelet transform, where the decomposition of the signal takes place. Here the signal is decomposed into coefficients arranged in a 4×4 matrix, and those coefficients are then given to the compression process.
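The pre-processing stage described above can be sketched as follows. The sampling rate and the 50 Hz mains frequency are assumed values for illustration; the exact filter settings used by the authors are not stated in the paper.

import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 256.0   # assumed sampling rate of the EEG recording
f0 = 50.0    # assumed mains interference frequency to suppress
q = 30.0     # quality factor controlling the width of the notch

b, a = iirnotch(f0, q, fs=fs)         # design the IIR notch filter
eeg = np.random.randn(10 * int(fs))   # placeholder EEG segment (10 s of noise)
clean = filtfilt(b, a, eeg)           # zero-phase filtering of the signal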
Here the compression technique is a hybrid compression, either SPIHT+DEFLATE or SPIHT+LZW. The type of compression is chosen based on the length of the signal: a particular length is allocated to the first compression method, and if the length of the EEG signal exceeds that length, the compression is carried out with the other method.
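A minimal sketch of this length-based selection is given below. Both the threshold value and the assignment of DEFLATE to the shorter signals are placeholders, since the paper does not state the exact length used or which coder handles which range.

LENGTH_THRESHOLD = 4096  # hypothetical value; the paper only says "a particular length" is allocated

def choose_second_stage(signal_length: int) -> str:
    """Select the entropy coder applied after SPIHT from the EEG signal length."""
    return "DEFLATE" if signal_length <= LENGTH_THRESHOLD else "LZW"

print(choose_second_stage(1024))   # -> DEFLATE
print(choose_second_stage(10000))  # -> LZW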
The major goal of this hybrid compression is to minimise the storage cost, so that the bandwidth consumed during transmission is low and the data can be transmitted at high speed without any loss. For the reconstruction of the signal the inverse process is carried out: the compressed signal is taken, the inverse of the hybrid compression is applied, then the inverse discrete wavelet transform, and finally the post-processing of the signal is carried out. This is the working process of the proposed flow diagram.

III OVERVIEW OF AN ALGORITHM

SPIHT

Set Partitioning In Hierarchical Trees (SPIHT) is an image compression algorithm that exploits the intrinsic similarities across the wavelet decomposition of an image. It is more efficient than EZW because it identifies groupings of insignificant coefficients. The SPIHT algorithm has two types of passes, sorting and refinement, and it maintains three lists: LSP, LIS and LIP.

In every list, each entry is identified by a coordinate. The number of magnitude refinement passes is calculated from the maximum magnitude of the coefficients during initialization, and every pixel is treated as insignificant in the initial stage.

After initialization, three passes follow: the sorting pass, the refinement pass and the quantization step update pass. These passes are repeated in order until the least significant bits have been transmitted. In the sorting pass, the pixels in the LIP that were insignificant in the previous pass are tested, and those that become significant are moved to the LSP.

In the same way, the sets in the LIS are tested for significance; those that are significant are removed from the list and partitioned. The new subsets that contain more than one element are added to the LIS, while single pixels are added to the LIP or LSP according to their significance. The pixels present in the LSP are encoded for the nth most significant bit in the magnitude refinement pass.
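As a minimal illustration of the significance test that drives these passes (not the authors' implementation), the threshold for bit plane n is 2^n, and the initial plane is taken from the largest coefficient magnitude; the coefficients are assumed to be a NumPy array.

import numpy as np

def initial_bitplane(coeffs):
    """n = floor(log2(max |c|)): index of the first (most significant) bit plane."""
    return int(np.floor(np.log2(np.max(np.abs(coeffs)))))

def is_significant(coeff_set, n):
    """A coefficient set is significant at bit plane n if any magnitude reaches 2**n."""
    return np.max(np.abs(coeff_set)) >= 2 ** n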
The CDF 9/7 wavelet has already achieved widespread acceptance for use in compression algorithms [27], and is the wavelet function used in this paper.
Deflate

Deflate is a compression technique that combines LZ77 and Huffman coding. A dictionary-based algorithm similar to LZ77 is used for recurring sequences of the text, and the Huffman code is used for entropy encoding. In simple words, it is a two-stage compression technique.

In the first stage the dictionary-based technique is used for recurring strings. In the second stage the commonly used strings are replaced with shorter representations and the less commonly used strings are replaced with longer representations. In the first stage, if a duplicate string is found in the input, the current occurrence of the string is replaced with a pointer to the previous occurrence in the form of a distance–length pair.

The distance is limited to 32 Kbytes and the length is limited to 256 bytes. Duplicate strings are found using a hash table, which is searched starting from the most commonly used strings towards the less commonly used strings, thus taking advantage of the Huffman coding.

In the second stage, the method used is Huffman coding, which creates a prefix-free tree of non-overlapping intervals, where the length of each sequence is inversely proportional to the probability of that symbol needing to be encoded. The more likely a symbol is to be encoded, the shorter its bit sequence will be.
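As a concrete illustration, Python's standard zlib module provides a DEFLATE implementation, so a second-stage pass over a SPIHT-produced bitstream can be sketched as below; the input bytes here are only a placeholder standing in for the SPIHT output.

import zlib

# Placeholder for the bitstream produced by the SPIHT stage (assumption for illustration).
spiht_bitstream = bytes(range(16)) * 64

compressed = zlib.compress(spiht_bitstream, level=9)   # DEFLATE: LZ77 + Huffman coding
restored = zlib.decompress(compressed)                  # lossless round trip

assert restored == spiht_bitstream
print(len(spiht_bitstream), "->", len(compressed), "bytes")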
LZW

Lempel-Ziv-Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978.

Lempel-Ziv is a substitution, or dictionary-based, coding algorithm. The method reads strings of symbols and encodes them by building a dictionary of individual symbols or groups of symbols. In text compression, the LZW algorithm starts with a string of characters containing binary data in the form of characters ranging from 0 to 255, and a dictionary entry is then made for each repetitive pattern; the dictionary codes run from 256 to 4096.

Every new pattern is stored in the dictionary and encoded accordingly. During decoding, the same code patterns are generated again: the decoder reads the compressed file, stores each new pattern in its own dictionary, and replaces every code with the matching dictionary entry, so the data is decompressed.
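The following is a minimal LZW encoder sketch along the lines described above (byte alphabet 0–255, new patterns appended to the dictionary from code 256 onwards); it is an illustration, not the exact coder used in the paper.

def lzw_encode(data: bytes) -> list[int]:
    """Encode a byte string into a list of LZW dictionary codes."""
    # Start with single-byte patterns 0-255; new patterns get codes from 256 upward.
    dictionary = {bytes([i]): i for i in range(256)}
    pattern = b""
    codes = []
    for byte in data:
        candidate = pattern + bytes([byte])
        if candidate in dictionary:
            pattern = candidate                       # keep extending the current match
        else:
            codes.append(dictionary[pattern])         # emit code for the longest known pattern
            dictionary[candidate] = len(dictionary)   # register the new pattern
            pattern = bytes([byte])
    if pattern:
        codes.append(dictionary[pattern])
    return codes

print(lzw_encode(b"ABABABA"))   # -> [65, 66, 256, 258]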
DWT

This section provides an overview of the DWT, which is employed as a preprocessing step for both compression algorithms investigated in this paper. The DWT is well documented in the literature, so only a brief overview is given here.

The DWT decomposes a signal into a set of basis functions known as wavelets [25], [26]. The initial wavelet, also known as the mother wavelet, is used to construct the other wavelets by means of dilation and shifting. The DWT coefficients are defined as the inner product of the original signal and the selected basis functions. These coefficients provide an alternative representation of the original signal, giving good localisation of the signal's energy components from both a time and a frequency perspective.
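A decomposition of this kind can be sketched with the PyWavelets package, whose 'bior4.4' biorthogonal family corresponds to the CDF 9/7 wavelet mentioned earlier. The sampling rate, segment length and decomposition depth below are assumptions for illustration, not the authors' exact configuration.

import numpy as np
import pywt

fs = 256                       # assumed sampling rate of the EEG recording
eeg = np.random.randn(8 * fs)  # placeholder 8-second EEG segment

# Multi-level DWT: one approximation band plus one detail band per level.
coeffs = pywt.wavedec(eeg, wavelet="bior4.4", level=4)

# Perfect-reconstruction check for the forward/inverse transform pair.
reconstructed = pywt.waverec(coeffs, wavelet="bior4.4")
print(np.allclose(eeg, reconstructed[: len(eeg)]))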
IV RESULTS AND DISCUSSIONS

At first, different EEG signals are taken from a set of databases. The preprocessing step is applied first: a notch filter is used to remove the unwanted noise present in the signal. The signal is first compressed with the SPIHT+DEFLATE algorithm.

Figure 3 shows the signal after the noise has been removed from the EEG signal with the help of the notch filter; this step is carried out in the pre-processing stage. The signal then undergoes the decomposition stage with the help of the discrete wavelet transform, and based on the length of the signal the compression technique is carried out. After the compression process, the inverse procedure is followed to reconstruct the original signal; the original signal has to be extracted from the compressed signal without any data loss, or with a minimum amount of data loss.

Fig 3 Removal of noise from EEG signal

Figure 4 shows the compressed signal: the left side is the original signal and the right side is the reconstructed signal.

Fig 4 Original and Reconstructed signal

For a clearer view of the signal, its length is reduced, and the signal is shown below with the original signal on the left and the reconstructed signal on the right.

Fig 5 Original and reconstructed signal with minimum length

For medical applications no data should be lost, because any data loss causes a problem in the diagnosis; for this reason lossless compression is used in most medical applications. The hybrid compression techniques used here, SPIHT+DEFLATE and SPIHT+LZW, both come under the category of lossless compression.

The compression ratio for this signal is 59.72, and the time consumption is slightly higher when compared with the SPIHT algorithm alone.

The next compression process is SPIHT+LZW. Here the same procedure is followed as for the previous compression method. First the EEG signal is taken from the set of databases, the noise is removed by the notch filter as shown in figure 6, and then the decomposition process is carried out; the compression process is then carried out on the output coefficients.

Fig 6 Removal of noise from original signal

For a clearer view of the signal, its length is reduced, and the signal is shown below with the original signal on the left and the reconstructed signal on the right. For this compression method the compression ratio is found to be 59. Three parameters are analysed: compression ratio, PSNR and computation time.

Figure 7 shows the compressed signal, with the original signal on the left and the reconstructed signal on the right, and figure 8 shows the same pair with the reduced length.

Fig 7 Original and reconstructed signal

Fig 8 Original and Reconstructed signal with minimal length

The result analysis for the compression of SPIHT+DEFLATE and SPIHT+LZW is shown below.
Algorithm        Signal     Compression ratio   PSNR    Time
SPIHT+DEFLATE    Signal 1   58.23               41.3    2.73
                 Signal 2   60.10               43.23   3.1
                 Signal 3   58.01               43.98   2.6
                 Signal 4   59.0                45.6    2.9
SPIHT+LZW        Signal 1   57.11               30.11   1.8
                 Signal 2   57.50               32.23   1.6
                 Signal 3   58.9                31.6    1.23
                 Signal 4   58.23               32.6    1.5
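As a rough sketch of how such figures can be obtained (the paper does not spell out its exact formulas, so both the size-ratio and the percentage-saving conventions are shown, and PSNR is computed from the reconstruction error):

import time
import numpy as np

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """Size ratio: how many times smaller the compressed stream is."""
    return original_bytes / compressed_bytes

def space_saving_percent(original_bytes: int, compressed_bytes: int) -> float:
    """Percentage reduction in size relative to the original."""
    return 100.0 * (1.0 - compressed_bytes / original_bytes)

def psnr(original, reconstructed) -> float:
    """Peak signal-to-noise ratio in dB, using the peak magnitude of the original signal."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(np.max(np.abs(original)) ** 2 / mse)

# Computation time is measured around the compression call itself.
start = time.perf_counter()
# ... run SPIHT followed by DEFLATE or LZW here ...
elapsed = time.perf_counter() - start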
V CONCLUSION

In this paper, an efficient compression technique using the discrete wavelet transform and hybrid compression, SPIHT+DEFLATE and SPIHT+LZW, was used to compress different types of EEG signals. The compression ratio is better than that of the other algorithms compared, and the quality of the reconstructed signal is similar to that of the original signal; the compression algorithms used here are lossless, so no loss of data occurs. The compression ratio is better for the DEFLATE combination, while the time taken for the compression process is lower for the LZW combination.
REFERENCES

[1] D. Hill, “Value of the EEG in diagnosis of epilepsy,” Br. Med. J., vol. 1, pp. 663–666, Mar. 1958.
[2] A. J. Casson, S. Smith, J. S. Duncan, and E. Rodriguez-Villegas, “Wearable EEG: What is it, why is it needed and what does it entail?” Proc. IEEE, vol. 2008, pp. 5867–5870, Aug. 2008.
[3] A. J. Casson and E. Rodriguez-Villegas, “Data reduction techniques to facilitate wireless and long term AEEG epilepsy monitoring,” in Proc. 3rd Int. IEEE/EMBS Conf. Neural Eng., May 2007, pp. 298–301.
[4] F. Vergari, V. Auteri, C. Corsi, and C. Lamberti, “A zigbee-based ECG transmission for a low cost solution in home care services delivery,” Mediterranean J. Pacing Electrophysiol.
[5] C. Otto, A. Milenkovic, C. Sanders, and E. Jovanov, “System architecture of a wireless body area sensor network for ubiquitous health monitoring,” J. Mobile Multimedia, vol. 1, no. 4, pp. 307–326, 2006.
[6] S. Faul, “Automated neonatal seizure detection,” Ph.D. dissertation, Nat. Univ. Ireland, Cork, Ireland, 2007.
[7] S. Faul, A. Temko, and W. Marnane, “Age-independent seizure detection,” in Proc. Annu. Conf. IEEE Eng. Med. Biol. Soc., 2009, pp. 6612–6615.
[8] R. P. McEvoy, S. Faul, and W. P. Marnane, “Ambulatory react: Real-time seizure detection with a DSP microprocessor,” in Proc. Annu. Conf. IEEE Eng. Med. Biol. Soc., 2010, pp. 2443–2446.
[9] M. D. Adams, The JPEG-2000 Still Image Compression Standard, Standards Contribution, ISO/IEC JTC 1/SC 29/WG 1 N 2412, 2001.
[10] A. Said and W. A. Pearlman, “A new fast and efficient image codec based on set partitioning in hierarchical trees,” IEEE Trans. Circuits Syst. Video Technol., vol. 6, no. 3, pp. 243–250, Jun. 1996.
[11] Univ. of Freiburg, Freiburg, Germany (2011, Nov. 16). EEG database—Seizure prediction project Freiburg. [Online]. Available: https://epilepsy.uni-freiburg.de/freiburg-seizure-prediction-project/eeg-database
[12] A. J. Casson, D. C. Yates, S. Patel, and E. Rodriguez-Villegas, “Algorithm for AEEG data selection leading to wireless and long term epilepsy monitoring,” in Proc. Annu. Conf. IEEE Eng. Med. Biol. Soc., Aug. 2007, vol. 2007, pp. 2456–2459.
[13] A. J. Casson and E. Rodriguez-Villegas, “On data reduction in EEG monitoring: Comparison between ambulatory and non-ambulatory recordings,” in Proc. Eng. Med. Biol. Soc., Aug. 2008, pp. 5885–5888.
[14] A. J. Casson and E. Rodriguez-Villegas, “Toward online data reduction for portable electroencephalography systems in epilepsy,” IEEE Trans. Biomed. Eng., vol. 56, no. 12, pp. 2816–2825, Dec. 2009.
[15] J. L. Cárdenas-Barrera, J. V. Lorenzo-Ginori, and E. Rodríguez-Valdivia, “A wavelet-packets based algorithm for EEG signal compression,” Med. Inf. Internet Med., vol. 29, no. 1, pp. 15–27, 2004.