Electromagnetic Tomography: Real-Time Imaging
using Linear Approaches
Jorge Manuel da Silva Caeiros
Dissertation for obtaining the Master’s Degree in
Biomedical Engineering
Jury
President:
Prof. Patrícia Margarida Figueiredo
Supervisors:
Prof. Raul Carneiro Martins
Member:
Prof. Artur Lopes Ribeiro
December 14, 2010
Acknowledgments
My first acknowledgment goes to my supervisor, Professor Raúl Martins, for introducing me to
this area of research, for giving me all the required conditions to carry out this project and for
always being a source of motivation and encouragement.
Secondly, I would like to thank Nuno Bandeira Brás. He provided invaluable advice in every
area explored during this thesis, and my conversations with him were truly enlightening.
Finally, I want to thank my family and friends for giving me all the support I needed, and especially Marta, for never letting me give up even when certain obstacles were difficult to overcome
and for never turning a deaf ear to my endless speeches about this work. Without your help it
would have been much more difficult to complete this project.
Sincerely,
Jorge Caeiros
Abstract
The objective of the present work is to apply linear image reconstruction algorithms to two tomographic imaging technologies aimed at biomedical applications, Magnetic Induction Tomography (MIT) and Electrical Impedance Tomography (EIT). To accomplish this, numerical forward problem solvers were implemented, which simulate the
interaction between a given electromagnetic field and a body with a specific conductivity distribution. They are intended to be fully automatic, requiring only the specification of the problem data,
and allow virtual measurements to be obtained, which are subsequently used in the image
reconstruction process.
In terms of image reconstruction, different reconstruction algorithms were studied in order
to obtain images in real-time. The Back-Projection and the Filtered Back-Projection methods
were implemented, either along straight lines, or along magnetic flux lines (specifically for the
MIT) or equipotential lines (specifically for EIT). Still regarding the EIT image reconstruction, the
applicability of the developed 2D forward problem solver to a state-of-the-art linear reconstruction
algorithm (GREIT), originally based on a 3D forward problem solver, was studied.
The results obtained were satisfactory in the sense that they demonstrate the applicability of
these technologies in a clinical environment.
Keywords
Magnetic Induction Tomography, Electrical Impedance Tomography, Finite Integration Technique, Filtered Back-Projection, GREIT.
Resumo
O objectivo do presente trabalho consiste na aplicação de algoritmos de reconstrução de
imagem lineares a duas técnicas tomográficas de imagiologia tendo em vista aplicações
biomédicas, a Tomografia por Indução Magnética (TIM) e a Tomografia por Impedância Eléctrica
(TIE). Deste modo, foi necessária primeiro a implementação de métodos de resolução do problema directo, que simulam a interacção de um determinado campo electromagnético com um
corpo com uma distribuição de condutividades específica. Pretende-se que esta etapa seja
totalmente automática, necessitando apenas da especificação dos dados do problema, e que
permita a obtenção de medidas virtuais que subsequentemente serão utilizadas no processo de
reconstrução de imagem.
No que respeita a este processo de reconstrução de imagem, diferentes algoritmos foram estudados de forma a obter imagens em tempo real. Os métodos de Retroprojecção e Retroprojecção
Filtrada foram implementados, quer segundo linhas rectas, quer segundo linhas de fluxo magnético
(especificamente para a TIM) ou linhas equipotenciais (especificamente para a TIE). Ainda relativamente ao processo de reconstrução de imagem da TIE, foi estudada a utilização do modelo
2D desenvolvido para resolução do problema directo num algoritmo linear recentemente desenvolvido (GREIT), que originalmente é baseado num modelo 3D.
Os resultados obtidos foram satisfatórios e demonstram a aplicabilidade destas tecnologias
ao mundo clínico.
Palavras Chave
Tomografia por Indução Magnética, Tomografia por Impedância Eléctrica, Técnica de Integração
Finita, Retroprojecção Filtrada, GREIT.
Contents

1 Introduction
  1.1 Motivation and Objectives
  1.2 Original Contributions
  1.3 Organization of the thesis
2 Background Context
  2.1 Electrical properties of biological tissues
  2.2 Magnetic Induction Tomography
    2.2.1 Description of the typical system
    2.2.2 State of the Art
  2.3 Electrical Impedance Tomography
    2.3.1 Description of the typical system
    2.3.2 State of the art
3 Implementation of the Governing Equations
  3.1 Maxwell's Equations
    3.1.1 Eddy Currents Formulations
    3.1.2 Electrical Impedance Tomography Formulations
  3.2 The Finite Integration Technique
    3.2.1 Discretization of the Domain
    3.2.2 Discretization of electromagnetic quantities
    3.2.3 Maxwell-Grid-Equations
    3.2.4 Algebraic properties of the matrix operators
  3.3 MIT forward problem
    3.3.1 Eddy currents formulation in FIT
    3.3.2 Software Description
      3.3.2.A Meshing and subgridding
      3.3.2.B Numerical Calculations
      3.3.2.C Preconditioned and iterative forward problem solver
    3.3.3 Results
    3.3.4 Discussion
  3.4 EIT Forward Problem
    3.4.1 Discretization of the used formulation
    3.4.2 Software Description
    3.4.3 Results
    3.4.4 Discussion
4 Image Reconstruction
  4.1 Traditional Reconstruction Methods
    4.1.1 Notation
    4.1.2 Back-Projection
    4.1.3 Analytical methods
  4.2 MIT inverse problem
    4.2.1 Practical Implementation
    4.2.2 Results
    4.2.3 Discussion
  4.3 EIT inverse problem
    4.3.1 Equipotentials Back-Projection
    4.3.2 Graz consensus Reconstruction algorithm for EIT
    4.3.3 Results
    4.3.4 Discussion
5 Conclusions and Future Developments
Bibliography
List of Figures

2.1 Phasor diagram relating primary and secondary magnetic fields
2.2 External appearance of an MIT system
2.3 Results of a MIT state-of-the-art image reconstruction
2.4 Illustration of an EIT system
2.5 Illustration of different methods of impedance data collection in EIT
2.6 EIT image reconstruction during a right temporal complex partial seizure
3.1 Representation of a cell and dual cell of the orthogonal grid doublet
3.2 Recursive subdivision of a cube into octants and its corresponding octree
3.3 Low to high resolution interface for 1-forms
3.4 High to low resolution interface for 1-forms
3.5 Low to high and high to low resolution interfaces for 3-forms
3.6 3D mesh with four different resolution levels and a single subgridding region
3.7 Difference between 3D meshes containing single and multiple subgridding regions
3.8 Current density amplitude and direction for a conductivity plane
3.9 Problem geometry for the simulation of a conductive sphere
3.10 Absolute values of the source and residual magnetic vector potential fields
3.11 Primary and residual magnetic flux density lines
3.12 Variation of the residual magnetic vector potential and emf with distance
3.13 Conductivity phantom used for current density observation
3.14 Absolute values of the current density distribution
3.15 Dual grid complex used in the 2D case
3.16 Single resolution 2D mesh used for domain discretization
3.17 Electric scalar potential fields generated from different current injection protocols
3.18 Effect of inhomogeneous conductivity distribution on the potential fields and current lines
4.1 Representation of the line integrals defining the Radon transform of an object
4.2 Graphic representation of the Back-Projection method
4.3 Graphic representation of the two-dimensional Fourier reconstruction method
4.4 Spatial frequency response of commonly used filters in Filtered Back-Projection
4.5 Prototypes tested in the MIT inverse problem
4.6 Setup used for sensitivity mapping
4.7 Dependence of the measured phase shifts on the position of the object
4.8 Sinograms of a sphere in two different positions
4.9 Comparison between simple and filtered Back-Projection image reconstruction
4.10 Observation of the filtering effect using one-dimensional conductivity profiles
4.11 Effect of the use of a finite number of projections
4.12 Comparison between standard and modified Filtered Back-Projection methods
4.13 Image reconstruction of a conductivity phantom
4.14 Comparison between true and reconstructed one-dimensional conductivity profiles
4.15 Schematic representation of the training data for GREIT
4.16 Effect of the number of electrodes in the Equipotentials Back-Projection method
4.17 Comparison between the Equipotentials Back-Projection method and GREIT
4.18 Comparison between the true and reconstructed one-dimensional conductivity profiles
4.19 Image reconstruction of a conductivity phantom representing the heart and lungs
List of Tables

2.1 Electrical properties of biological tissues over three different frequencies
2.2 Summary description of the most recent EIT systems
3.1 Discrete electromagnetic quantities used in the FIT
List of Abbreviations

ω : angular frequency
σ : complex conductivity
div : discrete divergence operator
grad : discrete gradient operator
lap : discrete laplacian operator
$\widehat{e}$ : edge integral of the electric field intensity
$\widehat{h}$ : edge integral of the magnetic field intensity
$\widehat{a}$ : edge integral of the magnetic vector potential
$\vec{J}$ : electric current density
$\vec{E}$ : electric field intensity
$\vec{D}$ : electric flux density
EIT : electrical impedance tomography
ε : electrical permittivity
$\vec{T}$ : electric vector potential
emf : electromotive force
$\widehat{\widehat{d}}$ : face integral of the electric flux density
$\widehat{\widehat{b}}$ : face integral of the magnetic flux density
FBP : Filtered Back-Projection
GREIT : Graz consensus Reconstruction algorithm for EIT
i : imaginary unit
$\vec{H}$ : magnetic field intensity
µ : magnetic permeability
MIT : magnetic induction tomography
ν : magnetic reluctance
Φ : magnetic scalar potential
$\vec{A}$ : magnetic vector potential
MGE : Maxwell-grid-equations
φ : modified scalar electric potential
PDE : partial differential equations
ϕ : phase shift
PSF : point spread function
$\sigma_r$ : real conductivity
$\vec{B}_r$ : reduced magnetic flux density
$\vec{A}_r$ : reduced magnetic vector potential
V : scalar electric potential
$\vec{B}_s$ : source magnetic flux density
$\vec{A}_s$ : source magnetic vector potential
δ : skin depth
$\vec{B}$ : total magnetic flux density

Note: this list is in alphabetic order (of the descriptions).
Nomenclature

∇× : Continuum curl operator
∇· : Continuum divergence operator
∇ : Continuum gradient operator
∆ : Continuum laplacian operator
$\vec{A}$ : Denotes a vector quantity
$\bar{a}$ : Denotes a complex quantity
$\mathbf{A}$ : Denotes a matrix quantity
$\mathbf{A}^T$ : Matrix transpose
1 Introduction

Contents
1.1 Motivation and Objectives
1.2 Original Contributions
1.3 Organization of the thesis

1.1 Motivation and Objectives
Non-destructive imaging systems have experienced a great technological development in the
last few decades, allowing the evolution from planar images to full 3D reconstructions, from static
images to dynamic and functional ones. It is a growing field: each year brings forth new imaging systems, improvements to older ones, and new numerical techniques, all with a wide spectrum of applications, making it a very interesting and appealing area of knowledge.
Every tomographic imaging system uses an internal or external energy source (e.g. electromagnetic radiation, pressure waves), which interacts with the body under analysis. Afterwards,
peripheral measurements are taken and that information is then used to reconstruct certain physical properties of the body. So in order to understand what the measurements represent and to
use that information correctly, one must first comprehend the underlying physical phenomenon.
Studying the model that mimics this phenomenon provides better insight into what happens
between the source and the sensing system. Since electromagnetic tomography is still in its early
stage, with a lot of research ground to be covered but with a very promising future, the author was
persuaded to follow this route and apply it to this specific technology.
Two particular cases of electromagnetic tomography were studied throughout the course of
this work, Magnetic Induction Tomography (MIT) and Electrical Impedance Tomography (EIT). Although at first only MIT was going to be explored, the idea that in the near future one could
somewhat combine the two technologies made the author include the study of EIT as well. The
first one is still in the prototyping phase, whereas for the latter there are already commercially available systems.
In the case of MIT, coils are used to generate an oscillating magnetic field that produces eddy
currents on the object. These in turn generate a secondary magnetic field which causes slight
changes in the main magnetic field. The essential information necessary to infer the complex
conductivity of the body under test is contained in these changes. The fact that it has good penetration power in biological tissues, including bone, makes its application in a biomedical context a
very promising and attractive option, namely for monitoring cerebral ischemia and haemorrhage,
localization of epileptic foci, as well as research into normal brain function and neuronal activity.
Regarding EIT, it is an imaging technique in which the conductivity or permittivity of part of
an object is inferred from surface electrical measurements. Typically, conducting electrodes are
attached to the surface of the object and small alternating currents are applied to some or all
of the electrodes. The resulting electrical potentials are measured, and the process is then
repeated for numerous different configurations of applied current. EIT is useful for monitoring the lungs, because air has a large conductivity contrast with the other tissues in the thorax [1]. The most promising clinical application of lung EIT measurements is the monitoring of patients being treated with mechanical ventilation, since such ventilation can often result in ventilator-associated lung injury [1]. EIT is also being investigated in the field of breast imaging as an
alternative/complementary technique to mammography and MRI for breast cancer detection [2].
One of the main advantages of electromagnetic tomography is that it is much less expensive than other imaging technologies such as Magnetic Resonance Imaging (MRI)
or Positron Emission Tomography (PET). It also has great temporal resolution (in the order of
milliseconds) and it is, in principle, safe. Its main disadvantages are its poor spatial resolution and its low signal-to-noise and contrast-to-noise ratios. The prospects of the clinical usage of this imaging technique
depend greatly upon overcoming these limitations.
The objectives were then defined as creating numerical simulators of the forward problem for
both cases (i.e. for a given source of energy and a body with controlled physical characteristics,
implement the model that mimics their interaction), and implementing a linear method to solve
the inverse problem (i.e. from the measurements obtained from experimental work or a forward
problem simulation, infer the physical properties of the object under test). The forward problem
solvers are intended to be used as research tools to help develop new prototypes, source geometries and sensor configurations, and therefore should be versatile enough to allow the
user to quickly change the geometry of the problem at hand and still obtain reliable results. Also,
they should be fast enough so that they could be used as part of a non-linear inverse problem
solver, based on regularization techniques (see for example [3] and [4]). The method to solve the
inverse problem was chosen as such for two main reasons. The first one is to have the ability
to obtain images in real-time, therefore enabling continuous physiological monitoring, one of the main future applications of electromagnetic tomography. The second one is to use the results
obtained as the first approximation to a non-linear iterative reconstruction method, theoretically
accelerating the convergence process.
The amount of work to be done, the variety of knowledge to be acquired, and the possibility of getting in contact with and contributing to a new and developing technology made this a very interesting
and challenging undertaking in the context of a Master's Thesis project.
1.2 Original Contributions
In this work, a new mesh generation algorithm was developed in order to perform the discretization of the 3D space in the forward problem of MIT. It is original in the sense that it
enables the creation of an octree-type grid in a bottom-up approach, contrary to the usual recursive top-down method, reducing the number of elements required for the space
discretization while maintaining the accuracy level. This leads to a smaller system of equations
to be solved and hence to a faster solution.
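As a rough illustration of this bottom-up idea only (the actual implementation is described in chapter 3 and is not reproduced here), the sketch below starts from a uniform fine grid, flags the cells that must stay at maximum resolution, and then repeatedly merges blocks of eight unflagged cells into coarser cells. The function and variable names, and the boolean-mask description of the geometry, are assumptions made purely for this example.

```python
import numpy as np

def bottom_up_octree(refine_mask, max_levels):
    """Illustrative sketch of a bottom-up octree coarsening (not the thesis code).

    refine_mask : bool array of shape (n, n, n); True where maximum resolution is
                  required (object and sensor regions). n must be divisible by
                  2**max_levels.
    Returns leaf cells as (i, j, k, size) tuples in finest-grid index units.
    """
    masks = [np.asarray(refine_mask, dtype=bool)]
    for _ in range(max_levels):
        m = masks[-1]
        half = m.shape[0] // 2
        # a coarser cell must stay refined if any of its eight children is refined
        masks.append(m.reshape(half, 2, half, 2, half, 2).any(axis=(1, 3, 5)))

    cells = []
    # finest cells are kept wherever their parent block cannot be merged
    for i, j, k in np.argwhere(masks[1].repeat(2, 0).repeat(2, 1).repeat(2, 2)):
        cells.append((i, j, k, 1))
    # intermediate levels: cells that are mergeable themselves but whose parent is not
    for level in range(1, max_levels):
        size = 2 ** level
        parent = masks[level + 1].repeat(2, 0).repeat(2, 1).repeat(2, 2)
        for i, j, k in np.argwhere(~masks[level] & parent):
            cells.append((i * size, j * size, k * size, size))
    # coarsest level: whatever is left becomes one large cell
    size = 2 ** max_levels
    for i, j, k in np.argwhere(~masks[max_levels]):
        cells.append((i * size, j * size, k * size, size))
    return cells

# small example: a 4x4x4 object in the centre of a 16x16x16 domain
mask = np.zeros((16, 16, 16), dtype=bool)
mask[6:10, 6:10, 6:10] = True
print(len(bottom_up_octree(mask, 3)))   # 176 leaf cells instead of 4096 uniform cells
```

The point of working bottom-up is that fine cells only exist where the geometry demands them; everything else is merged away, which is what keeps the final system of equations small.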
Another original contribution consists in the study of the applicability of GREIT, a state-of-the-art image reconstruction algorithm originally based on a 3D forward model, to the 2D case. This is useful because the method renders good results even when a 2D forward model is used, and hence it can be applied in a wide variety of cases and not just the one it was originally devised for.
1.3 Organization of the thesis
As stated in the previous section, two main areas are going to be tackled, the forward
problem modeling and the inverse problem solution. Taking this into account, the thesis is organized as follows. Chapter two starts by giving a brief explanation of the electrical properties of
biological tissues. Afterwards, the typical system of both imaging technologies is presented,
along with their operating principles and state of the art. At the beginning of chapter three,
Maxwell's equations and their typical formulations regarding the physical principles of both imaging
systems are presented. After that, the Finite Integration Technique is described and applied to the
discretization of the governing equations of EIT and MIT. A description of the developed software
architecture is given, as well as a detailed explanation of all the modules integrated in the forward
problem solvers. Simulations were also made with simple objects with controlled physical characteristics, and their results are presented in that chapter. Chapter four deals with the image reconstruction
procedure. The adopted algorithms are thoroughly explained and the results obtained are presented. In the fifth chapter, final conclusions are drawn regarding all the work that has been done
and a further work section is presented, containing ideas about intended future developments.
2 Background Context

Contents
2.1 Electrical properties of biological tissues
2.2 Magnetic Induction Tomography
2.3 Electrical Impedance Tomography

2.1 Electrical properties of biological tissues
Bioimpedance is a measure of the opposition to the flow of electric current through biological
tissues. It is the inverse of electrical conductivity and it is the basic physical property of the human tissues that electromagnetic tomography aims to image. The measurement of bioimpedance
has already presented itself as a useful, non-invasive method for measuring several physiological
variables such as blood flow (bioimpedance plethysmography [5]) or body composition (Bioelectrical impedance analysis [6]).
A biological tissue consists of a group of cells that are not necessarily identical, despite having
the same origin, but work together in order to carry out a specific function. Its impedance comprises two components, the resistance and reactance. The conductive characteristics of body
fluids provide the resistive component, whereas the cell membranes, acting as imperfect capacitors, contribute a frequency-dependent reactance. Hence, the conductivity is a complex
quantity.
The tissue impedance measured at high frequencies (around 10 MHz) will depend mostly
on the resistive components, since the cell membranes do not offer much opposition to the current
flow in this frequency range. However, at low frequencies (around 20 Hz), they impede the current
flow. The recording of the bioimpedance over this range of low to high frequencies can hence be
used to characterize the tissue.
Another property that is important to bear in mind is the skin effect, which consists in the fact
that when a time-varying electromagnetic wave interacts with a conductive material, the induced
current tends to flow in the periphery of the conductor. The higher the frequency, the smaller is
the area in which the current flows. The skin depth is a measure of the distance over which the
current density falls to 1/e of its original value.
There is a great deal of information contained in the bioimpedance map of the body. It includes the characterization and identification of cells based on their impedance, which differs
between cells based on the size, orientation, and membrane thickness, among other factors.
Also, changes in this map translate to specific physiological and/or pathological conditions. For
instance, the measured impedance is highly dependent on the amount of water in the body and
hence this information can be used to detect edemas. Another example is the identification and
characterization of cancer cells, since they exhibit different impedances when compared to healthy
tissues.
Electromagnetic tomography intends to exploit these phenomena, using the impedance information of the different tissues to form images of the contents of the human body. Since their
electrical properties, namely the conductivity, permittivity and skin depth, are frequency dependent, the contrast between different tissues can be improved by changing the frequency of the
source (magnetic field for MIT and electric current for EIT). Examples of this dependence are
presented in table 2.1.
Tissue               Frequency /MHz   Conductivity /S.m−1   Relative permittivity   Skin depth /m
Skin                 1                0.2214                1833                    1.337
Skin                 10               0.3306                365.1                   0.4405
Skin                 100              0.3660                221.8                   0.3104
Liver                1                0.1867                1536                    1.454
Liver                10               0.2815                349.4                   0.4869
Liver                100              0.3167                223.1                   0.3424
Brain Grey Matter    1                0.1633                860.4                   1.439
Brain Grey Matter    10               0.2341                439.5                   0.5830
Brain Grey Matter    100              0.2917                319.7                   0.3932
Brain White Matter   1                0.1021                479.8                   1.792
Brain White Matter   10               0.1343                223.8                   0.7481
Brain White Matter   100              0.1585                175.7                   0.5352

Table 2.1: Electrical properties of biological tissues over three different frequencies
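As a sanity check on table 2.1, the skin-depth column is consistent with the general expression for the attenuation of a plane wave in a lossy medium, δ = 1/α with α = ω√(µε/2)·[√(1 + (σ/ωε)²) − 1]^{1/2}. The short sketch below is only an assumption about how such values can be obtained (it is not the software developed in this work); it reproduces the 1 MHz skin entry and also evaluates the complex conductivity σ + iωε that both MIT and EIT ultimately aim to image.

```python
import math

MU0  = 4 * math.pi * 1e-7        # vacuum permeability [H/m]
EPS0 = 8.8541878128e-12          # vacuum permittivity [F/m]

def skin_depth(freq, sigma, eps_r, mu_r=1.0):
    """General skin depth (1/attenuation constant) of a lossy dielectric."""
    w, eps, mu = 2 * math.pi * freq, eps_r * EPS0, mu_r * MU0
    loss = sigma / (w * eps)
    alpha = w * math.sqrt(mu * eps / 2) * math.sqrt(math.sqrt(1 + loss**2) - 1)
    return 1.0 / alpha

def complex_conductivity(freq, sigma, eps_r):
    """Complex conductivity sigma + i*omega*epsilon."""
    return complex(sigma, 2 * math.pi * freq * eps_r * EPS0)

# skin at 1 MHz, values taken from table 2.1
print(skin_depth(1e6, 0.2214, 1833))            # ~1.34 m (table: 1.337 m)
print(complex_conductivity(1e6, 0.2214, 1833))  # (0.2214+0.102j) S/m
```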
2.2 Magnetic Induction Tomography
Magnetic induction tomography is a non-invasive and contact-less electromagnetic imaging
technique used to map the passive electromagnetic properties of an object under investigation.
Excitation coils are used to generate a primary magnetic field which in turn induces eddy currents
on the object. The secondary magnetic field generated by these currents contains the information
about the conductivity distribution inside the object. It can be seen as a soft field parameter
estimation problem, which means that the underlying physical phenomenon is ruled by a set
of electromagnetic partial differential equations (PDE) and the parameters to be inferred are the
material coefficients. The image reconstruction procedure in MIT is a non-linear problem in its full
set of unknowns, the fields and the conductivity map. It is also ill-conditioned, which means that large
changes in the conductivity distribution inside the object translate to very small changes in the
measurements obtained, and ill-posed, meaning that the number of independent measurements
is smaller than the number of parameters to be estimated. This calls for the use of regularization
techniques.
2.2.1 Description of the typical system
The standard MIT system comprises several source and sensor coils with fixed positions,
mounted on the same plane on a cylinder, with base diameter around 20 to 30 cm, surrounding
the object under test (e.g. [7], [8] and [9]).
According to the general law of induction (Maxwell's equations are fully described in section 3.1), the magnetic field must vary with time in order to produce eddy currents in the object. Taking into account the previously discussed physiological properties of bioimpedance, harmonic signals with frequencies between tens of kHz and 1-10 MHz are usually applied to the excitation coils. The measurements are normally carried out using
sensing coils, which ideally consist of open loops with electrical shields in front of them in order
to decouple the electric and magnetic effects. In this way, the electromotive force induced in the
sensor is solely due to the magnetic fields. One of the main disadvantages in MIT is that the
excitation field also induces a signal in the sensing coil (the primary signal), which is much larger
when compared to the relevant signal due to the magnetic field generated by the eddy currents in
the material (the secondary signal). Both fields can be represented using a phasor diagram, like
the one presented in figure 2.1.
Figure 2.1: Phasor diagram representing the primary (B) and secondary (∆B) magnetic fields.
The total detected field (B + ∆B) lags the primary field by an angle ϕ. Image taken from [10].
One can see clearly that the secondary magnetic field induces a change of magnitude and
phase of the primary field. Most linear approaches use only the phase shift information to reconstruct the impedance map of the object (see [7]), whereas the non-linear methods use both
magnitude and phase of the electromotive forces induced in the sensors.
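To make the role of the phasor diagram concrete, a minimal sketch of the measured quantity is given below; the numbers are purely hypothetical and only illustrate that, for a weak secondary field, the measured phase shift is essentially the quadrature component of ∆B/B.

```python
import cmath

def phase_lag(B, dB):
    """Phase of the total detected field (B + dB) relative to the primary field B,
    as in the phasor diagram of figure 2.1."""
    return cmath.phase((B + dB) / B)

# hypothetical values: primary field normalised to 1, weak secondary field in quadrature
print(phase_lag(1.0, 1e-4j))   # ~1e-4 rad
print((1e-4j / 1.0).imag)      # the small-signal approximation Im(dB/B) gives the same
```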
The ratio between the voltage induced by the secondary magnetic field (Vr ) and the voltage
induced by the primary magnetic field (V ) is called signal-to-carrier ratio (SCR), and its minimum
measurable quantity is an indicator of the system’s performance. The need to remove V from the
acquired signal, in order to obtain a measurement of the required secondary field, gave birth to
the so-called cancellation techniques, like the twin-coil cancellation method used in [11].
This removal is a fundamental feature of any MIT system, as it reduces the error
associated with the measurement of the secondary magnetic field and increases the signal-to-noise ratio.
2.2.2 State of the Art
Magnetic Induction Tomography started out as a technique applicable to industry, namely
to detect and characterize defects in conducting materials. In 1985, one of the first algorithms to
reconstruct the 2-dimensional distribution of a magnetic field from peripheral measurements was
presented ([12]), and was based on the Fourier Central Projection theorem. In 1993, the Magnetic Backprojection Imaging technique was developed in [13], which allowed the obtainment of
2D images of the vascular lumen for the assessment of atherosclerosis by injecting current into
the blood vessel, detecting the resulting magnetic flux and then backprojecting it to show any
deviations from the linear blood flow. Al-Zeibak et al (1995) and Peyton et al (1996) suggested
and demonstrated an approach for the spatial localization of metallic and ferromagnetic objects
based on the results of amplitude measurements. Although this was a good approach for industrial purposes, it was not feasible in a biomedical context due to the inherently low conductivities
of biological tissues. The first report of MIT for biomedical use was by Al-Zeibak and Saunders
(1993). An excitation and a sensing coil operating at 2 MHz were scanned past a tank filled with
saline solution, whose conductivity was similar to that of biological tissues, with immersed metallic objects, in a translate-and-rotate manner. Images were reconstructed by filtered backprojection and
showed the outline of the tank and the internal features. This work had the effect of stimulating
and inspiring research on this topic. In [8], the first analytical predictions of the expected measurements were made, based on a simple axisymmetrical problem. They considered that, for a
sample between the excitation and sensing coils, and if the skin depth in the material is larger
than its dimension (which is true for most biological samples), then:
\[
  \frac{\Delta B}{B} \propto \omega\,(\omega \varepsilon_0 \varepsilon_r - i\sigma). \tag{2.1}
\]
This means that conduction currents cause a component in ∆B proportional to the frequency
and in quadrature with the primary field. There is also an in-phase component, proportional to the square of the frequency, which is caused by the displacement currents produced by the
time variation of the electric field inside the object. So ∆B will have a real part representing
the permittivity and an imaginary part representing the conductivity of the sample. The need to use higher frequencies in biomedical MIT than in industrial MIT was also shown, since a measurement at 10 MHz of a sample with a conductivity of 1 Sm−1 would induce a secondary signal
with a size of only about 1% of the primary one [8]. It was also in this article that a two-coil, translate-and-rotate system was introduced, capable of measuring volumes of saline solutions with
conductivities of the same order of magnitude as biological tissues, employing for the first time a
cancellation method consisting of a back-off coil for subtraction of the excitation field. Although
the idea of using a third coil to back off the primary signal was not new, having been used more than
40 years earlier by Tarjan and McFee (1968), its application to MIT was pioneering.
In [7] a working MIT prototype was implemented using 16 inductor and detector coils, placed
along a circle of 35 cm diameter. They used a cylindrical electromagnetic screen surrounding the
workspace, isolating it from external influence (see figure 2.2). The system operated at 20 MHz,
which they considered as a nearly optimal frequency for biomedical applications. The following
heuristic approach was used.
\[
  \varphi \approx \int_L W\,\sigma\,\mathrm{d}l. \tag{2.2}
\]
Here, W is a geometrical weighting factor, L is an unperturbed magnetic field line connecting the
detector with the inductor, and ϕ is the measured phase shift. The image reconstruction procedure
consisted of the filtered backprojection of these phase shifts along the magnetic induction lines.
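A minimal, unfiltered version of this back-projection step can be sketched as follows. It assumes that the image pixels crossed by each unperturbed field line have already been extracted from the geometry, and it is not the implementation of [7]; the filtered variant would first convolve the measured phase-shift profiles with a suitable filter kernel.

```python
import numpy as np

def backproject_phases(phase_shifts, line_pixels, image_shape):
    """Smear each measured phase shift over the pixels crossed by its field line
    and average the contributions pixel by pixel (unfiltered back-projection)."""
    image = np.zeros(image_shape)
    hits = np.zeros(image_shape)
    for phi, pixels in zip(phase_shifts, line_pixels):
        for r, c in pixels:
            image[r, c] += phi
            hits[r, c] += 1
    return image / np.maximum(hits, 1)
```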
Figure 2.2: External appearance of the MIT system. Image taken from [7]
Both [14] and [15] present a discussion about image quality. They characterized the image
resolution with the Rayleigh criterion, which states that two point-shaped objects are separable if
their point spread functions (PSF) overlap in such a way that the peak of the first one coincides
with a zero of the second one. In the case of a sinc-shaped PSF, the minimum separable distance
corresponds to 64 % of the width of the PSF. In both these papers the PSF is calculated using
the sensitivity matrix, which maps the changes of conductivity distribution onto the changes of the
induced voltage in the receiver channels. They have observed that, unlike other imaging systems
like X-ray CT, the MIT PSF depends on the location and geometry of the body under test. This
leads to the conclusion that the resolution depends strongly on the geometry of the problem.
Also in [14], the influence of the type and configuration of the receiver was studied. In order to
accomplish this they implemented two different types of receivers, coils and gradiometers, under
two different excitation/receiver configurations. In the first configuration, the receivers were in
front of the excitation coils, and in the second they were rotated 22.5◦ with respect to the
first. They verified that very close to the border gradiometers seem to lead to better
results. It was also stated that the gradiometers seemed insensitive to the configuration used,
whereas the second configuration led to better results when using coils. Finally, they observed
that, in general, the PSF is more blurred when gradiometers are used and, therefore, the images
obtained using this type of receiver are worse than those obtained using coils.
Still regarding [15], they stated that in MIT the number of independent data points is much lower
than the number of voxels, making the system under-determined. This happens because there
is a finite number of possible source-sensor configurations. Moreover, the different data points
are correlated to a certain degree. They considered that this dependence on the amount of
available information is the fundamental limit to the spatial resolution. Regularization techniques
try to compensate for this under-determination by adding some sort of a priori knowledge. This
however influences the resolution, since it leads to a defined smearing of this information over the
image plane and provides the typical diffuse images of electromagnetic tomography.
Concerning the measurement issues with the MIT system, the main challenge is to have an
SCR as low as possible, while maintaining a stable noise base during the acquisition time. In [16],
the value of 10−7 is presented as the minimum measurable SCR that assures image reconstruction of objects with conductivities below 1 Sm−1 , which is the case of biological samples, in the
frequency range of (0.01-10) MHz. The major trend in MIT systems is to have a limited number of
sources and sensors, leading to in-vivo real-time imaging (see e.g. [17]). Although these systems
only have to be stable for a short period of time, they have a limited amount of independent information and, therefore, limited resolution. Previous work done with gradiometers, as in [17, 18], shows
that this kind of sensor has a cancellation factor much higher than standard coils, therefore
permitting a better suppression of the primary field and improving the phase measurement up to 53
times. They are, however, highly dependent on their position and have poorer spatial resolution,
as seen in [14], mainly due to their insensitivity to objects with axial symmetry. Furthermore, the
results are dependent on the type of gradiometer, such as far- and short-sensing gradiometers.
The need to improve image resolution led to the development of the moving system presented in [11]. Since gradiometers are position dependent, they were replaced by coils in the moving system,
using the twin cancellation method to suppress the source field. Because this
system is stable for several minutes, the number of independent measurements is increased,
while maintaining the value of 10−7 for the SCR. Although no image reconstruction has yet been made
using this approach, it can theoretically achieve improved spatial resolution.
Computational electromagnetics concerns the wide spectrum of methods to solve the MIT forward problem. The Nodal and Edge Finite Element Method [19, 20] under the (A, V − A) formulation is considered one of the most accurate methods for solving the eddy current problem, and has
been used for MIT image reconstruction with very good results (e.g. [3, 9, 14, 15, 21]). The integral formulation was recently used to model the MIT forward problem in [22], but its application in
the inverse problem context is, to the best of the author's knowledge, yet to be seen. Another method
which has been successfully used for the solution of electromagnetic field problems is the Finite Integration Technique (FIT), which uses the integral, rather than differential form of Maxwell’s
equations (more on this topic in chapter 3). Its application to eddy current problems was done, for
example, in [23, 24], and specifically in the MIT inverse problem context in [25]. Almost all of these
methods make some simplifications to Maxwell's equations, the most common one being the
assumption that the problem is stationary, reducing the equations to their harmonic version. Other
simplifications include the assumption of constant magnetic permeability or the non-existence of
planar or sheet currents. Although the first one is not entirely true, it is a valid assumption when
biological bodies are concerned, since the variability of the magnetic permeability between tissues
is not great and its effect is not as preponderant as that of the complex conductivity [26]. The
last assumption is true in the case of a biological sample, due to the inherently low conductivity of
the tissues [26].
Concerning the MIT inverse problem, several approaches have been made. The first ones
were linear methods like the filtered backprojection algorithm (e.g. [7, 8]) or the Newton One-Step
Error Reconstruction (NOSER) [15]. These approaches have the advantage of being computationally light, allowing in-vivo imaging, but are not adequate for all cases and the images obtained
have worse spatial resolution when compared to non-linear methods. Almost all of these make
use of the sensitivity matrix, which allows finding a quadratic-order descent direction for the
parameter estimation problem, but is computationally very expensive. In [3] the conductivity map
is estimated using sensitivity matrices obtained for different conductivity phantoms. An example
of the results obtained is presented in figure 2.3.
Figure 2.3: (a) Four inclusions each 4 cm in diameter with conductivity of 1.8 S/m centered at
(-5,0,0) cm and (5,0,0) cm, (0,5,0) cm and (0,-5,0) cm. (b) Reconstructed image using a single
step. (c) Reconstructed image after five nonlinear iterations. Image taken from [3]
Finally, to deal with the ill-posedness of the MIT problem, several regularization schemes have
been developed and applied successfully, like Tikhonov [3], Total Variation [9], Half-quadratic [27],
Truncated Singular Value Decomposition [15] or Wavelet [25].
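As an illustration of the single-step, sensitivity-matrix-based methods mentioned above, the following sketch performs one Tikhonov-regularized update from a change in the measurements. The NOSER-style choice of the diagonal of JᵀJ as regularization matrix and all names are assumptions made for the example; none of the cited implementations is reproduced here.

```python
import numpy as np

def one_step_reconstruction(J, dv, lam=1e-3):
    """Single regularized Gauss-Newton step: J is the sensitivity matrix
    (measurements x parameters), dv the measured change in the data."""
    JtJ = J.T @ J
    R = np.diag(np.diag(JtJ))          # NOSER-style diagonal regularization matrix
    return np.linalg.solve(JtJ + lam * R, J.T @ dv)
```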
2.3 Electrical Impedance Tomography
If one considers a bounded, simply connected domain Ω with boundary ∂Ω, then EIT can
generically be seen as the inverse, ill-posed and non-linear problem of determining the impedance
in the interior of Ω, given simultaneous measurements of direct or alternating electric currents and
voltages at its boundary, being ruled as well by a set of electromagnetic PDE. Once again the non-linear character arises from the full set of unknown variables, the electric field and the conductivity
map, and the ill-posedness is due to the smaller number of independent data points compared to the number of parameters to be estimated, making the system under-determined.
EIT is a non-invasive technique involving the formation of images of the impedance distribution
across a sectional plane of a body under test, from peripheral measurements. It is the most
developed case of electromagnetic tomography, with commercial systems already available.
2.3.1 Description of the typical system
According to [28], the typical EIT system consists in the placement of an array of N electrodes
around the periphery of the body under investigation. Small amplitude electrical currents (1-10
mA) at a fixed frequency are applied in turn to each adjacent pair of electrodes, henceforth known
as drive pairs. For each one, the electric potential is measured by the remaining (N-3) pairs of
adjacent electrodes, known as receiver pairs, in a differential manner. Further measurements
between non-adjacent pairs would be irrelevant as this data could be obtained from linear combination of the voltages registered in the adjacent receiving pairs. It should be noted that drive and
receiving pairs aren’t exclusively made up of adjacent electrodes and it is advisable to use different combinations to increase the size of independent data in order to improve the image spatial
resolution. An illustration of the typical system is presented in figure 2.4.
Figure 2.4: Illustration of an EIT system. Current is being applied across two electrodes resulting
in current stream lines and equi-potential lines
The typical EIT system is capable of producing two types of images. One is called difference
imaging and was the first type to be developed, by the Sheffield group [29]. It consists in the measurement of two different data sets, at different times or frequencies. These are then subtracted
(difference EIT), or subtracted and then divided (normalized difference EIT) by a reference set of
measurements corresponding to the background conductivity. The resulting images translate to
changes in the background conductivity, representing a given physiological parameter, such as
blood volume or cell size. The other type is named absolute imaging since it produces images
containing the absolute conductivity or permittivity map of the body. It is technologically more challenging, as the contact impedance of the electrodes plays an important role here and cannot be
assessed accurately in a clinical application.
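In code, the distinction between the two difference-imaging variants amounts to the following minimal sketch (the names are assumptions; v_ref is the reference frame of boundary voltages and v_t the current frame):

```python
import numpy as np

def difference_data(v_t, v_ref, normalized=False):
    """Time- or frequency-difference EIT data, optionally normalized by the reference."""
    dv = np.asarray(v_t) - np.asarray(v_ref)
    return dv / np.asarray(v_ref) if normalized else dv
```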
According to [30], there are three main electrical current injection protocols, which are chosen specifically for each application, and are classified into adjacent, opposite and adaptive.
The first one was devised in 1987 by Brown and Segar [29] in which the current is applied by
an adjacent drive pair and the voltage is measured from all other adjacent receiving pairs (figure 2.5 a)). Due to reciprocity, the measurements in which the drive and receiving pairs are
interchanged yield identical values, leaving the number of independent measurements
at 1/2 × #electrodes × (#electrodes − 3). Regarding the second protocol, each drive pair
consists of two diametrically opposite electrodes (figure 2.5 b)). The electrode adjacent to the
current-injecting electrode is used as a reference, and voltage measurements are carried out with
all the remaining electrodes. This is done consecutively for each possible driving pair, leading to
the same number of independent data points as the previous method, with the advantage that
this method has a more uniform current distribution and, therefore, better sensitivity. Regarding
the adaptive method, current is injected through all electrodes (figure 2.5 c)). The voltages are
measured with respect to a single grounded electrode, and the current distribution is consecutively rotated by the electrode angular spacing, leading to the highest number of
independent measurements, equal to 1/2 × #electrodes × (#electrodes − 1).
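For a typical 16-electrode system, these counts work out as in the short, purely illustrative sketch below:

```python
def independent_measurements(n_electrodes, protocol):
    """Counts quoted above; reciprocity halves the adjacent/opposite data."""
    if protocol in ("adjacent", "opposite"):
        return n_electrodes * (n_electrodes - 3) // 2
    if protocol == "adaptive":
        return n_electrodes * (n_electrodes - 1) // 2
    raise ValueError(protocol)

print(independent_measurements(16, "adjacent"))   # 104
print(independent_measurements(16, "adaptive"))   # 120
```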
Figure 2.5: Illustration of different methods of impedance data collection for a cylindrical volume conductor and 16 equally spaced electrodes: (a) adjacent method; (b) opposite method; (c) adaptive method.
Calibration is a fundamental feature in any hardware system and should be done prior to
any measurement. This is especially relevant in a low-resolution imaging system like EIT, since
any measurement error degrades its spatial resolution even further. According to [2], the principal sources of error in an EIT system arise essentially from common-mode effects such as
skin-electrode contact impedance and stray capacitances. The total error depends on the
interaction of these effects and will differ for each electrode combination, but it can be
minimized by enhancing the common-mode rejection ratio through hardware improvement.
It should be noted that the area of the measurement electrodes should be minimized so that each
electrode senses the electrical potential at a specific location (ideally a single point, in order to increase
the resolution). But the smaller the area, the higher its contact impedance, which consequently
requires an even higher input impedance for the voltmeter. Because there is a limit to this input
impedance, there has to be a compromise between the electrode area and measurement accuracy. Still regarding electrodes, their modeling in the forward problem can be quite tricky as the
current boundary is unknown for all locations. There are two models to estimate this boundary
condition. The first one consists in approximating the current density by a constant on the surface
of each electrode, and zero in the spaces between electrodes. For a more accurate model, one
should use the so-called complete electrode model, which considers that the current density is
higher on the edges of the electrodes. This model also predicts that this effect is reduced as the
contact impedance increases (for more information on this topic see [31]).
2.3.2 State of the art
Although EIT has been used in industrial and geophysical applications since the 1930s, the first
steps towards its application in the medical field were only taken in the early 1980s by the Sheffield
group ([29]). Their system, the APT system mk 1, became the most widespread EIT system
applied successfully in many clinical applications (see [1]) and it is considered a reference mark
as it forms the basis of many of the newly developed EIT systems. It comprises 16 electrodes
and uses the adjacent current injection protocol discussed in the previous section. The current
injected has only one frequency (50 kHz) and comes from a single current source, connected to
a multiplexer that controls which drive pair is going to be used.
The most recent version of this system, the mk 3.5 ([32]), contains 8 electrodes and has a
frequency bandwidth between 2 kHz and 1.6 MHz. It can acquire and reconstruct images at a
rate of 25 frames per second and therefore is capable of in-vivo imaging. The current injection
protocol is the same as the older version. Another recently developed system is the Adaptive
Current Tomographs system (ACT4 [33]), and, as the name implies, it uses the adaptive current
injection protocol. It can support up to 72 electrodes and has a wide bandwidth, with lower and
upper limits equal to 300 Hz and 1 MHz respectively. Its main application is pulmonary monitoring.
A very promising system is the one developed by the Dartmouth group ([33]). It is aimed for breast
imaging and is capable of obtaining images in real-time (around 30 frames per second) with a
signal-to-noise ratio of over 100 dB. Its operating frequency range is 10 kHz to 10 MHz and it uses 32
electrodes. The UCLH Mk 1b system should also be included here, mainly due to its ambulatory
nature. It can support up to 64 electrodes and operates at frequencies between 225 Hz - 77 kHz.
The reason for this low operating frequency band is that this system is aimed at monitoring brain
function and lower frequencies induce larger measurement changes during brain activities, and
therefore produce more relevant data. Also, lower frequencies can penetrate the skull more easily.
A summary of the systems discussed here is presented in table 2.2.
Most of the image reconstruction methods need an accurate model to predict the electrical
potential from an imposed electrical current through an object with known physical properties. The
most common method to solve the EIT forward problem is the Finite Element Method (FEM; the complete FEM formulation applied to EIT is presented in [31]), as commercial FEM software is already available.
System        Electrodes /#   Frequency range /kHz   Real-time imaging
mk. 3.5       8               2-1600                 yes
ACT4          up to 72        0.3-1000               yes
UCLH Mk 1b    up to 64        0.225-77               no

Table 2.2: Summary description of the most recent EIT systems. Adapted from [2].

The FEM requires a 3D discretization of
the domain and converges to a solution of the PDE that it represents, with accuracy proportional
to the number of elements and the order of interpolation. However, other variants of the FEM are
also used, such as the Finite Difference Method and the Finite Volume Method ([2]), which make a
compromise between computational speed and accuracy in the representation of the domain, as
they use regular grids that are not well suited to representing curved regions. The Boundary
Element Method (BEM) is also used in the EIT forward problem, for accurately modeling surfaces
of regions. The system of equations to be solved is dense rather than sparse (the case of FEM),
so its computational advantage over the FEM strongly depends on the number of surfaces to be
modeled. Also there are hybrid methods, like the one presented in [36], that combine multiple
methods, such as FEM for modeling regions with inhomogeneous conductivities and BEM for
surfaces of constant conductivity. The Finite Integration Technique was also applied in [25] to
solve the 2D EIT forward problem.
Concerning the inverse problem, from a theoretical point of view it was demonstrated in [31]
that all possible boundary measurements do uniquely determine the conductivity inside the domain. However, there is loss of information during data acquisition, mainly due to noise and to the finite
number of electrode combinations. Therefore the solution is not unique and the measured data
can only lead to the best approximation of the conductivity distribution. Although there is quite a
large number of non-linear solvers of the EIT inverse problem which work very well for simulated
measurements obtained from forward problem solvers, they have not been successfully applied to
clinical data. This is mainly due to the data variability between subjects, which arises from the
skin-contact impedance with its baseline drift, or from domain shape variation during data acquisition.
Therefore only linear solvers, which assume that small changes in the conductivity distribution
translate to linear changes in the peripheral voltage measurements, have been successfully applied in a clinical context.
The first linear EIT image reconstruction method was developed by the Sheffield group in [37]
and consisted in a modification of the filtered backprojection algorithm used in X-ray CT. It came
to be known as the Sheffield algorithm and it is still in use in some clinical applications. Other
linear methods used to infer the conductivity distribution of the body under test, which rely on
the sensitivity matrix, include the truncated singular value decomposition, the Moore-Penrose
method and Tikhonov regularization ([2]). The NOSER and Fast NOSER ([38]) are non-iterative
linear solvers that take only one step of the Newton iterative method and are also used in EIT
image reconstruction. The conductivity estimates obtained from linear methods can be improved
using methods such as Landweber's method ([39]), which iteratively brings the estimate closer to the real conductivity distribution by making use of the sensitivity matrix. In 2009, a paper
suggesting a consensus linear reconstruction algorithm for lung EIT was published with the intent of presenting a valuable clinical and research tool ([40]). In that article, the algorithm GREIT
(short for Graz consensus Reconstruction algorithm for EIT) was developed to address several
issues like spatial non-uniformity in image amplitude, position and resolution. These limitations
are inherent to other image reconstruction algorithms, such as the Back-Projection method, and
greatly hinder the interpretation of the acquired image.
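The core idea of such a consensus linear algorithm, namely training a single reconstruction matrix against simulated targets, can be caricatured by the simplified ridge-regression sketch below. The published GREIT formulation optimizes weighted figures of merit and includes a noise model, so this is only a rough analogue under assumed names and shapes.

```python
import numpy as np

def train_linear_reconstructor(X, Y, lam=1e-2):
    """X: (n_pixels, n_train) target images; Y: (n_meas, n_train) simulated
    difference measurements. Returns R such that an image is estimated as R @ dv."""
    return X @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(Y.shape[0]))
```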
Concerning the non-linear image reconstruction methods, the two most common ones are
the Gauss-Newton (GN) and the modified Newton-Raphson (mNR), which converge to the conductivity distribution in a least-squares sense. Another method used regularly is the layer stripping
algorithm, which starts by finding the complex conductivity of a small layer on the boundary and afterwards synthesizes the measurements of the next layer. The process is repeated consecutively
until the whole object is imaged. This method is useful because it is computationally less expensive
than the standard non-linear iterative methods (GN and mNR) and still addresses the non-linear
character of the EIT problem, providing good results especially for multi-layered objects.
EIT, even in its experimental and research phase, has been applied successfully to several clinical situations. One of its main applications is imaging of the thorax, namely imaging ventilation,
detection of blood clots or pulmonary emboli, or even monitoring the drainage of a pneumothorax
caused by pulmonary lesions. It may also be used to measure cardiac output or isolated perfusion defects. Dynamic perfusion impedance images describing different times of the cardiac cycle
can also be produced using cardiosynchronous averaging. A very promising application for EIT is
breast imaging, namely to detect malignant tissues, thus avoiding the conventional, discomforting
and sometimes painful, X-ray mammography. Another promising clinical application of EIT is brain
imaging, namely to detect the origin of focal epilepsy (see figure 2.6).
The need to surgically remove the epileptic focus in some patients demands a very accurate
estimation of its localization. Since continuous monitoring is necessary because the occurrence
of an epileptic episode is so unpredictable and because the repetitive activity owing to a focal
seizure can cause local ischemia, detectable using EIT, this technology may be better suited than
fMRI for localization of epileptic foci.
Finally, there is a variation of EIT known as Magnetic Resonance Electrical Impedance Tomography (MREIT), which, as the name implies, consists of a combination of MRI and EIT. Firstly
images representing the magnetic flux density distribution are obtained using a low-frequency
magnetic resonance current density imaging technique. This transforms the ill-posed inverse
problem of EIT into a well-posed one in MREIT, opening up the possibility of obtaining images
with high spatial resolution and contrast, but still having the information about the passive elec17
2. Background Context
Figure 2.6: Image reconstruction during a right temporal complex partial seizure. Axial slices
(a and d), coronal slices (b and e) and sagittal slices are shown passing through the area of
maximum impedance). Image taken from [2]
tromagnetic properties of tissues. This is a new imaging method and several technical issues still
have to be solved, such as the amount of injected current required (over 15 mA), but it shows
great potential in the future.
18
3
Implementation of the Governing
Equations
Contents
3.1
3.2
3.3
3.4
Maxwell’s Equations . . . . . . .
The Finite Integration Technique
MIT forward problem . . . . . . .
EIT Forward Problem . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
20
25
29
43
19
3. Implementation of the Governing Equations
In this chapter the implementation of the equations that rule the physical phenomenons of both
Magnetic Induction Tomography and Electrical Impedance Tomography is performed using the
Finite Integration Technique. These equations are derived from the Maxwell’s equations in section
3.1. It should be stated that the software developed was intended to meet different demands for
MIT and EIT.
For the MIT scenario, the first feature that the software should fulfill is to automatically generate the required mesh for the discretization of the 3D domain. This gives great versatility to
the forward problem solver since only the parametrization of the problem geometry is required.
This mesh should also contain different resolution levels and should adapt itself to the desired
objects, allowing the existence of maximum resolution zones over the object and sensors, but not
in the space between them, improving therefore the computational speed by greatly diminishing
the number of mesh elements, while maintaining the same accuracy of the solution. The number
of different resolution levels should also be controlled as well as the size of the maximum resolution mesh elements. The implementation was also based on several approximations, such as
considering constant magnetic permeability in the entire space, which makes the problem slightly
easier since only the complex conductivity is to be reconstructed, or acknowledging that the skin
depth is larger than the physical dimensions of the object, thus avoiding the need to consider
sheet currents. The equations implemented are the ones derived in section 3.1 for the harmonic
and stationary case.
Regarding the EIT, the software was developed to meet simpler demands, since the governing
equations are much less complex and the size of the problem is considerably smaller since the
problem was reduced to the 2D case. Therefore, the discretization of the domain is carried out
using a regular mesh, without subgriding schemes, and is computed automatically to adapt to
the geometry of the problem. The size of the elements is also controlled by the user and a
simplification to the quasi-static state was also performed, discarding magnetic induction effects.
Both forward problem solvers were implemented in Matlab. The choice of this language had
several reasons. The first and most important one was the familiarity and priorly acquired knowledge of this language by the author, which allowed to tackle the amount of work that needed to
be done in the time range of a Master’s thesis project. Other reasons include its great computational power when concerning matrix operations and good debugging capabilities, both of which
enabled a very fast software prototyping.
3.1
Maxwell’s Equations
James Clerk Maxwell developed the classical electromagnetic theory, synthesizing all previous unrelated experimental observations, extending them by the introduction of the concept of
displacement current and the notion of field, and postulating a set of partial differential equations
20
3.1 Maxwell’s Equations
applicable to all macroscopic electromagnetic phenomena. One of those equations describes the
general law of induction which relates the induced electromotive force (emf) in a path with the
time derivative of the magnetic flux density through that path. The polarity of the induced emf is
such so that if a current is produced in the path, the resulting magnetic field will try to oppose
the change in the flux density. Another equation is an extension of the Ampere’s Law. It states
that the magnetic field is produced by currents, and these can be separated into free current,
produced by the motion of charges, and displacement current, produced by the time variation of
the electric flux density. The Gauss law gives the most important property of electric flux density,
namely that flux out of a closed surface equals the charge enclosed by it and is also present in
the Maxwell’s equations. The final equation prescribes the non-existence of magnetic charges by
the zero divergence of the magnetic flux density. The famous four Maxwell’s equations are thus
presented:
∇×E =−
e
∂B
e
∂t
∂D
e
∂t
∇ × H = Jf +
e e
(3.1)
(3.2)
∇·D =ρ
e
(3.3)
∇·B =0
(3.4)
e
These equations are indefinite unless the constitutive relations are known. For simple media,
which are linear, isotropic and homogeneous, these relations are given by:
D = εE
e
(3.5)
B = µH
e
(3.6)
J = σE
(3.7)
e
e
e
e
With the Maxwell’s equations one should associate, for completeness, the continuity equation
(even though it can be derived from equations (3.2) and (3.3)):
∇·J =−
e
∂ρ
∂t
(3.8)
The equations presented so far are general and can be simplified to the particular case where
the impinging fields vary sinusoidally with time. In this case it is easier to write the fields in phasor
notation in which the time dependence is specified by a product between the field dependent
solely on the spatial coordinates and eiωt . With this notation, the operator ∂/∂t becomes simply
iω and equations (3.2) and (3.1) can be rewritten, resorting as well to the constitutive relations, as
follows:
¯ = −iω B,
∇×E
e
(3.9)
¯ = σ̄ E,
∇×H
(3.10)
e
e
e
21
3. Implementation of the Governing Equations
where σ̄ is the complex conductivity given by σ̄ = σr + iωε. When an object with non-zero conductivity is concerned, there are no charge sources inside its domain and therefore the currents
have null divergence, which implies that any current line should be closed.
3.1.1
Eddy Currents Formulations
In this section several typical descriptions of eddy current problems are analyzed and the
adopted mathematical formulation is thoroughly described.
Although it is possible to describe an eddy current problem resorting directly to the electric
and magnetic field, according to [20] it is more convenient to use potentials since they lead to
more robust and numerically stable system of equations. A typical formulation makes usage of
the scalar magnetic potential Φ and the electric vector potential T , which, in the quasi-static case,
obey the relations J = ∇ × T and H = T − ∇Φ ([12, 20]). In the nonconducting regions the
e
e
e
e
magnetic field is completely described by the scalar potential which can be computed using the
null divergence of the magnetic field, leading to the following equation,
−∇ · µ∇Φ = 0.
(3.11)
In the conductive volume one has to combine both magnetic scalar potential and electric vector
potential and introduce them in equation (3.10). The resulting equation is presented below,
∇ × σ −1 ∇ × T + iwµ(T − ∇Φ) = 0.
e
(3.12)
e
According to [20], although this formulation excellently suits skin effect problems with current
excitation, it fails intrinsically when conductors are multiply connected. This can be overcome by
imposing a very low conductivity in the non-conductive regions between conductors, thus generating a virtual single connected space.
The introduction of an electric scalar potential V coupled with the magnetic vector potential A,
e
¯ = −(∇V + iω A), provide a simple and convenient way
which are defined as B = ∇ × A and E
e
e
e
e
of modeling an eddy current problem [20]. Although it could be possible to write the governing
equations with V , it is more common to use the modified electric scalar potential φ, which obeys
V = ∂φ/∂t. This potential allied with A renders the following equation that holds for conducting
e
and nonconducting zones,
∇×
1
∇ × A + iωσ̄(∇φ + A) = Js ,
µ
e
e e
(3.13)
where Js is the source current density defined just over the source coils and the terms involving
e
σ̄ disappear in the nonconducting region.
A great disadvantage of the previous formulation is the need to carefully model the source
coils in order to have an accurate representation of the source coil current density ([20]). To overcome this difficulty, a different formulation was introduced based on V coupled with the unknown
component of A, which is termed reduced magnetic vector potential, Ar , and is associated with
e
22
e
3.1 Maxwell’s Equations
the secondary magnetic field produced by the eddy currents ([20, 41]). This is rather useful as
it avoids an exact modeling of the source coils and the magnetic flux density can be computed
as the sum of the impressed Biot-Savart field, which can be easily computed, and the curl of
Ar . This is the adopted formulation for implementing the MIT forward problem and therefore its
e
mathematical details will be described carefully.
If one substitutes (3.1) in (3.2), and defines E and B through the modified scalar electric
e
e
potential and the magnetic vector potential, one will end up with equation (3.13). However, it is
necessary to use a gauging condition for A, in order to uniquely define this potential and hence
e
obtain a unique solution for the eddy current problem. The most common gauging condition
(e.g. [19, 20, 41]) is the Coulomb gauge defined as ∇ · A = 0. This allows adding the term
e
−∇(∇ · A) to equation (3.13) without changing it, leading to the following,
e
∇×
1
∇ × A − ∇(∇ · A) + iωσ̄(∇φ + A) = Js .
µ
e
e
e e
(3.14)
Taking µ as a constant and using the identity,
∇ × ∇ × A = ∇(∇ · A) − ∆A,
e
(3.15)
ν∆A − iωσ̄(∇φ + A) = −Js .
(3.16)
e
e
one ends up with the following differential equation,
e
e
e
It should be noted that the Coulomb gauge is directly enforced by this last equation, but a
boundary condition imposing the referred gauge at the boundary of the space in analysis should
be added. It should also be referred that this equation does not impose the current divergence
free condition, which would mean that one is not forcing closed current paths inside the conductor.
Therefore, equation (3.8) expressed in terms of potentials should be added to the system, having
in mind that no charge sources or sinks are present inside the domain. The resulting equation
makes usage of the constitutive relation presented in equation (3.7) and is shown here,
∇ · [iωσ̄(∇φ + A)] = 0.
(3.17)
e
As stated previously, in the adopted description A is separated in its source and residual
e
components (A = As + Ar ). As can be numerically calculated by solving ∆As = −µ0 Js , which
e e
e
e
e
e
essentially consists of three equations similar to the Poisson’s equation and therefore has the
following solution,
A(r) =
ee
µ0
4π
I
I(r0 )dl0
eR e .
(3.18)
This equation takes into account that the source current is confined to a closed path. R is the
distance between dl0 and the observation point defined by r, and r0 is the coordinate vector of dl0 .
e
e
e
It is clear that the source magnetic vector potential at a given point is obtained by integrating the
23
3. Implementation of the Governing Equations
source current I around its circuit. Substituting A = As + Ar in (3.16) and (3.17) one ends up with
e e
the following governing equations:
e
ν∆Ar − iωσ̄(∇φ + Ar ) = iωσ̄ As ,
(3.19)
∇ · [iωσ̄(∇φ + Ar )] = −iω∇ · σ̄ As .
(3.20)
e
e
e
e
e
The continuity of the normal component of B and the continuity of the tangent component of
e
H in the interfaces between two different media is fulfilled by the non-existence of sheet currents.
e
In the interface between a conducting and nonconducting region, the normal component of the
current density should be set null, since there are no currents outside the conducting media.
However, this Neumann condition is already enforced by equation (3.20) and therefore is not
necessary to define it explicitly. Nevertheless, a homogeneous Dirichlet boundary condition needs
to be imposed at the space boundary, forcing the normal component of Br to be null.
e
Equations (3.19) and (3.20), alongside the necessary boundary conditions, provide a nonsingular system of equations that uniquely determines the solution of an eddy current problem.
3.1.2
Electrical Impedance Tomography Formulations
When concerning the EIT forward problem, it is common to assume a quasi-static case, which
is to say that the frequency is sufficiently low that the effect of magnetic induction can be ignored.
Taking this into account it is possible to describe the electric field as the gradient of the scalar
electric potential and hence, equation (3.2) is reduced to the following,
∇ × H = σ E + Jimpressed ,
e
(3.21)
e e
where Jimpressed are the current sources. If one takes the divergence on both sides of equa-
e
tion (3.21) and sets Jimpressed as zero since there are no current sources inside the domain Ω,
e
then one ends up with the equation,
∇ · (σ∇V ) = 0
in Ω,
(3.22)
which can be seen as the distributed parameters equivalent of the lump parameter circuit
analysis Kirchoff law. This equation should be complemented with Dirichlet boundary conditions
V = vi under electrode i, or else Neumann boundary conditions, σ∇V · n = ji under electrode i
e
and σ∇V · n = 0 everywhere else, being n the outward unit normal of the boundary of Ω, ∂Ω. It is
e
e
stated in [31] that specification of Dirichlet boundary conditions is sufficient to uniquely determine
a solution for V , whereas Neumann boundary conditions only specify V up to an additive constant,
which is equivalent to choosing an earth point. It should also be noted that boundary current
density must obey the law of conservation of current, which is equivalent to satisfy the consistency
condition
24
R
∂Ω
j = 0.
3.2 The Finite Integration Technique
The adopted formulation is based on the continuity equation in which the magnetic induction
effects are also discarded. Using the same potential description for the electric field and using the
constitutive relation expressed in equation (3.7), the following equation can be easily obtained.
∇ · σ∇V = −
∂ρ
∂t
(3.23)
Using this formulation one only has to place several charge sources and sinks at the boundary of the object under analysis, taking care to respect the consistency condition, and apply
homogeneous Dirichlet boundary conditions at the space boundary. This formulation leads to a
non-singular system of equations and the rendered solution for the potential is unique.
It should be noted that a description of the EIT problem could be done without considering the
quasi-static case. This would imply a description of the Maxwell’s equations in terms of scalar
and vector potentials and would greatly increase the complexity of the system1 . Although the
images reconstructed using higher frequencies are in fact better, in most cases the differences
aren’t significant and the compromise is acceptable.
3.2
The Finite Integration Technique
The finite integration technique (FIT) is a spatial discretization scheme to numerically solve
electromagnetic field problems with complex geometries, both in time and frequency domains.
The basic idea behind this approach is to apply the Maxwell’s equations in integral form, rather
than in its differential counterpart, to a set of staggered grids on which the scalar variables are
stored in the cell centers whereas the vector quantities are located in the cell edges and faces.
This finite volume-type discretization scheme relies on the usage of integral balances and thus
proofing stability and conservation properties of discrete fields ([44]).
3.2.1
Discretization of the Domain
Most electromagnetic field problems consist of open boundary problems. Hence, the first
discretization step of the FIT is to reduce them to bounded, simply connected domains Ω ∈
R3 , containing the geometry of the problem in question. Consequently, this domain has to be
decomposed in a finite number of disjoint cells meaning that the intersection of two different cells
must be either empty or a two-dimensional polygon, one-dimensional edge or a single point,
shared by both cells. This defines the main grid G. A dual-grid G̃ is also defined under the
premise that the main grid cell centers form the grid points of G̃. In the case of orthogonal grids
(which were used in this thesis), they are defined as two cubic meshes where the cell centers of
G correspond to cell vertices in G̃, leaving a one-to-one relationship between edges of G̃ cutting
through the cell surfaces of G and vice versa (see figure 3.1). This type of grids was chosen
1 More
on this topic in [42, 43]
25
3. Implementation of the Governing Equations
mainly because it provides an easier way to compute the required differential operators based on
topological information, when compared to non-orthogonal hexahedrical or tetrahedrical meshes.
Figure 3.1: Representation of the spatial allocation of a cell and dual cell of the orthogonal grid
doublet {G, G̃}. Image taken from [44].
The electromagnetic quantities are mapped into the grids in the so called p-chains, by integration. p can vary between the values 1, 2 and 3, corresponding respectively to cell edges, areas
or volumes. These chains take into account an inner orientation of G and and outer orientation
of G̃. The resulting scalar map describing the integration of a given field over a p-chain is called
a p-cochain. This means that the vector fields are substituted by discrete forms defined in G
and G̃ along their edges or facets (1-chain and 2-chain respectively) and scalar fields are defined
over each cell volume of G and G̃ (a 3-chain). Their values are specified in the correspondent
cochains, which are attached to the same specific geometric forms. The following notation is
henceforth adopted. A given quantity x is written as x
ó if it is defined along the edges, xóó if it is
defined over the faces, and simply x if it is a quantity integrated over the cell volume and defined
as a mean value at the cell center.
3.2.2
Discretization of electromagnetic quantities
As previously stated, in the FIT the electromagnetic quantities are substituted by their integrals
along edges, faces or volumes. According to [44], the electric field is a quantity defined over the
cell edges of the main grid and hence is a 1-form given by the integral of the electric field along
each specific edge:
eó =
Z
E · dl.
edge
e e
(3.24)
Similarly, the magnetic field intensity and magnetic vector potential are also 1-forms but are
defined along the edges of the dual grid. The magnetic flux density is considered a 2-form and
therefore is defined over the faces of the main grid and given by the following integral.
óób = Z
f ace
26
B.ds.
e e
(3.25)
3.2 The Finite Integration Technique
Both electric current and electric flux densities are 2-forms defined over the faces of the dual
grid. Table 3.1 contains the information about the basic electromagnetic quantities used in the
FIT, their topological form and the grid in which they are defined.
Quantity
eó
óh
ô
óój
óób
dô
φ
óa
Description
Electric Field Intensity
Magnetic Field Intensity
Topological Form
1-form
1-form
Grid
G
G̃
Electric flux density
2-form
G̃
Electric current density
2-form
G̃
Magnetic flux density
Electric scalar potential
Magnetic vector potential
2-form
3-form
1-form
G
G
G̃
Table 3.1: Discrete electromagnetic quantities used in the FIT. Adapted from [25].
Assuming a lexicographical ordering of the cells composing the main grid, the discrete electromagnetic quantities can be assembled into column vectors and the classic differential operators
can be defined as matrices containing only topological information on the incidence relation of the
cells within G and on their orientation. These discrete topological operators can then be applied
to the electromagnetic quantities simply through matrix product. In order to distinguish the continuum case from the discrete one, the following notation is adopted: lap is the discrete laplacian
operator, grad is the discrete gradient, div is the discrete divergence and curl is the discrete curl
˜ and curl
˜ grad,
˜ div
˜ are the same operators but defined over the dual grid. It should
operator. lap,
be stated that the curl operator is applied to 1-forms and transforms them into 2-forms and the div
operator transforms 2-forms into 3-forms (see [44]). Also, due to the topological relations between
˜ T.
the two grids, the two operators grad and div obey the relation grad = −div
3.2.3
Maxwell-Grid-Equations
The discretization of the dual grid complex results in matrix equations that feature the topological grid operators. For the cell complex pair {G, G̃} the complete set of discrete matrix equations,
the so-called Maxwell-Grid-Equations (MGE) is given by ([44]):
curl eó = −
dó
ób,
dt
dô ó
˜ ó
curl
h = dô+ ó
j,
dt
(3.26)
(3.27)
˜ dô
ô= ρ,
div
(3.28)
ó
(3.29)
div ób = 0.
It should be stated that these equations are first derived for the surface and edges of single
cell facet and then extended to larger areas.
27
3. Implementation of the Governing Equations
So far, the discretization of the Maxwell’s equations consists only in the bounding of the domain and the information that these equations hold is only about integral state variables allocated
either on points, edges, surfaces or volumes. They are, however, exact representations of the
Maxwell’s equations on a grid doublet ([44]). The first approximation appears when integral field
and respective flux variables, allocated on different grids, have to be related through a constitutive
material relation. For isotropic media, the material matrices are diagonal and establish the relations of the degrees of freedom corresponding to the two grid complexes, coupling edge degrees
of freedom (1-forms) with the dual facets degrees of freedom (2-forms). This mapping of a 1-form
onto a 2-form dubs the material matrices as discrete Hodge operators. They contain the metrical
information of the MGE, which is to say that they contain the averaged information of the material
on the grid dimensions. Since equations (3.26) through (3.29) are exact and contain only topological information, one of the discretization errors can be pinpointed to the constitutive material
equations presented in equations (3.30) through (3.32),
óój = Mσ eó,
óób = Mµóh,
ô
dô= M eó.
(3.30)
(3.31)
(3.32)
For instance, the Mσ matrix is given by ([25, 44]):
RR
[Mσ ]ij = δij RÃ
σ.dA
.
Li .dl
i
(3.33)
The M matrix can be obtained using the same equation but exchanging σ for and the Mµ
matrix is obtained by taking the surface integral over the area of the G cells and the line integral
over the edges of the G̃ cells, and of course substituting σ for µ.
3.2.4
Algebraic properties of the matrix operators
One of the main properties of FIT which allows the representation of the Maxwell’s equations
is the discrete analog of the vector identity ∇ · ∇ × = 0 given by the matrix equations,
div curl = 0,
(3.34)
˜ curl
˜ = 0.
div
(3.35)
These relations arise from algebraic topology. The computation of the discrete divergence
consists in the summation of the flux components. For these, any grid voltage is considered twice
with different sign in the discrete curl summation, yielding the zero result of the overall summation.
This property which holds for both grid complexes is essential for the conservation and stability
properties of FIT.
Another important property of the FIT is the relation between the curl-matrices of both grid
cell complexes given by,
28
3.3 MIT forward problem
˜ T.
curl = curl
(3.36)
Transposition of equations (3.34) and (3.35), combined with the identity (3.36) leads to the
following discrete equations:
˜ divT = 0,
curl
(3.37)
˜ T = 0.
curl div
(3.38)
Remembering the relation between the discrete gradient and divergence operators, it is possible to see that both equations represent the discrete analog of the vector identity ∇ × ∇ = 0,
which implies that fields described as gradients of scalar potentials will be exactly irrotational also
on a discrete level.
It is important to note that in the FIT it is also possible to describe stationary and harmonic
fields using phasor notation, rendering once more the relation ∂/∂t = iω.
The combination of the discretization techniques provided by FIT with the mathematical formulations presented in sections 3.1.1 and 3.1.2 leads to an easy but effective way to model the
electromagnetic phenomena on which MIT and EIT are based on.
3.3
3.3.1
MIT forward problem
Eddy currents formulation in FIT
One of the main attractions of the FIT is that it is possible to take the same steps as in chapter
2 to derive the governing equations for the eddy currents phenomenon. Therefore the electric and
magnetic field will be described resorting to the electrical scalar and magnetic vector potentials,
and this last one will be separated in its residual and source components. Hence, the electric field
intensity and magnetic flux density are given by:
eó = iω(grad φ + ó
a),
óób = curl óa.
(3.39)
(3.40)
And the equation (3.2) can be written using FIT operators and quantities as such:
ójs .
˜ Mν curl ó
curl
a − iωMσ (grad φ + ó
a) = −ó
(3.41)
As in the continuous case, this singular system of equations is complemented with the current
closure equation given by:
div [iωMσ (grad φ + ó
a)] = 0.
(3.42)
Following the same approach as in [25], the double-curl operator in equation (3.41) can be
substituted by a laplacian operator if one applies a proper gauging condition. The obvious choice
29
3. Implementation of the Governing Equations
would be to use the Coulomb gauge (∇ · A = 0), but this is not directly feasible in FIT because
e
the divergence operator is applicable to 2-forms and the magnetic vector potential is a 1-form.
However, the use of the conductivity matrix allows the definition of a gauging condition given by
˜ Mσ ó
div
a = 0, enabling the following definition for the laplacian operator ([25]), which incorporates
the notion of constant magnetic permeability:
˜ Mσ ó
˜ Mν curl ó
lap (Mσ , Mν )ó
a = Mσ grad(div
a) − curl
a.
(3.43)
Similarly to what was done in the continuous case, this definition can be applied to equation (3.41) and using the separation of ó
a into ó
as and ó
ar , one ends up with the following equations,
lap ó
ar − iωMσ (grad φ + ó
ar ) = iωMσ ó
as ,
(3.44)
div [iωMσ (grad φ + ó
ar )] = − div [iωMσ aôs ].
(3.45)
They constitute a robust, linear and non-singular system of equations that holds for conducting
and nonconducting regions [25].
3.3.2
Software Description
The developed forward problem solver can generically be seen as a 3D, FIT based, partial differential equation solver using a multi-level mesh, specific for solving eddy current problems. The
software can be divided in several parts. The first one consists in the definition of the desired problem, in which all the sources, sensors and objects are geometrically parametrized and respective
physical properties are specified, namely the conductivity of each object and the amplitude and
frequency of the source current. This is the only user-software interface. Every other step is taken
automatically. In the following sections, the remaining parts will be carefully described.
3.3.2.A
Meshing and subgridding
It is unfeasible to perform a 3D domain discretization of a problem with the size of the MIT with
a single resolution mesh, since the number of required cells is excessively high. Subgridding is
therefore a necessary feature for it enables the allocation of higher resolution mesh regions where
they are necessary and lower resolution ones in the places where the solution isn’t meant to be
used and can hence have lower accuracy.
There are several subgridding schemes. The simplest one consists in a single high resolution
zone, which consecutively and simultaneously expands and relaxes to lower resolution levels
until the entire domain is discretized. Although for some applications this is sufficient, in the MIT
scenario where a high accuracy solution is mainly intended over the objects and sensors, such a
mesh isn’t the ideal one.
The OcTree type mesh [45] is a more convenient subgridding scheme, since it concentrates
the high resolution cells over the desired objects and sensors but not in the space between them.
30
3.3 MIT forward problem
These OcTree meshes (see figure 3.2) are most often generated in a top-down approach by recursively subdividing the 3D domain into eight octants. Although it is a very interesting subgridding
method for it allows concentration of higher resolution areas where they are required and was
actually implemented during this thesis as the first discretization approach, it becomes slow when
modeling objects with very complex geometries due to its iterative nature. Also, one has to make
sure that each resolution level cell can only be connected to the immediate higher or lower cell
level, which can make this method very slow when concerning meshes containing a large amount
of cells. Another disadvantage is that each eight cells resultant from a cell division need to be
stored carefully in order to maintain the grid topology.
The final subgridding scheme developed and used afterwards renders a 3D multi-level mesh
similar to the one obtained with an OcTree but is generated in the oposite way and avoids some
of the problems stated previously. It is a bottom-up approach where for each object and sensor
specified by the user, high resolution grids are generated in order to contain them. Afterwards
each of these grids starts to expand and relax to lower resolution levels and begins to merge with
the others. This process is repeated until the entire domain is meshed. In this way one assures
that each cell is connected with others belonging to the same level, or to the levels immediately
above or below, leading to a more stable system. Also the domain is discretized more quickly
while maintaining the grid topology, which is essential for the FIT.
As stated in section 3.2 the FIT requires an exact knowledge of the neighbors of each cell
in every direction in the grid complex. In regular grids this does not present any difficulty. With
subgridding schemes, however, special care has to be taken in the interface between resolution
levels, since in this case there are no direct neighbors (see figures 3.3 through 3.5). Following
the concept presented in [25], a virtual neighbor can be obtained by interpolation of the interface
neighbors, leading to an easy and quick method that, when implemented, provides good results.
This neighbor information can be assembled in a set of matrices {N }kij , where the k index concerns the two p − f orms implemented (1- and 3- forms), the j index can be x, y or z, the three
possible orientations of each cell, and the i index regards all possible directions, respectively x+ ,
y + , z + , x− , y − and z − . All these matrices are n × n, where n is the number of cells in the grid
Figure 3.2: Left: Recursive subdivision of a cube into octants. Right: The corresponding octree.
31
3. Implementation of the Governing Equations
complex. Each line of these matrices contains the information about the neighbors of the cell with
the same index, in the respective direction and orientation. If there is a direct neighbor, then the
correspondent column will be assigned with the value 1. If there aren’t any direct neighbors, a
set of coefficients specifying the weight of each interface cell in the interpolation will be set in the
respective columns. These interpolation coefficients are presented in figures 3.3 through 3.5, as
well as each p − f orm and respective virtual neighbor.
It should be noted that the interpolation formulas presented describe the nearest point of the
plane or line defined by the 1- or 3- forms respectively, to the application point of the correspondent
virtual neighbor. This interpolation introduces a further error in the MIT forward problem. Another
thing to have in mind when observing the previous figures is that the ó
a quantity, although a 1-form,
is defined at the center of the facet of each cell. This happens because if one recalls table 3.1,
one sees that ó
a is defined along the edges of the dual grid G̃ and hence, due to the topological
relationship between the two grid complexes, will appear in the face center of each cell of G.
The information contained in the set of neighbor matrices is very powerful since it allows to
easily create the required differential operators and constitutive relations ([25, 46]). For instance,
the Laplacian operator is given by:
lapx
= N1xx+ + N1xy+ + N1xz+ + N1xx− + N1xy− + N1xz− − 6D,
(3.46)
lapy
= N1yx+ + N1yy+ + N1yz+ + N1yx− + N1yy− + N1yz− − 6D,
(3.47)
lapz
= N1zx+ + N1zy+ + N1zz+ + N1zx− + N1zy− + N1zz− − 6D.
(3.48)
D is a n × n identity matrix, being n again the number of cells in the grid complex.
The three components of the divergence operator are defined as follows,
divx = N1xx+ − D,
(3.49)
divy = N1yy+ − D,
(3.50)
divz = N1zz+ − D.
(3.51)
These two operators are to be applied to 1-forms. The gradient operator, which is going to be
applied to 3-forms, is thus presented:
gradx
=
D − N3xx− ,
(3.52)
grady
=
D − N3yy− ,
(3.53)
gradz
=
D − N3zz− .
(3.54)
One should observe that the divergence and gradient operators are defined in opposite directions in order to maintain the proper relations presented in section 3.1.4 and therefore lead to the
correct solution.
32
3.3 MIT forward problem
Figure 3.3: Low to high resolution interface for 1-forms. The virtual neighbor is represented by a
gray arrow and its value is given by the formula below each figure. Taken from [46].
((a))
((b))
Figure 3.4: High to low resolution interface for aligned (a) and misaligned (b) 1-forms. The respective virtual neighbors are represented by the gray arrow whose value is given by the formulas
presented. Taken from [46].
Figure 3.5: Low to high (left) and high to low (right) resolution interfaces for 3-forms. The virtual
neighbor is represented by the gray circle and its respective interpolation formula is given below
each figure. Taken from [46]
33
3. Implementation of the Governing Equations
The defined operators can be applied to equations (3.44) and (3.45) to build the following
linear system of equations Au = y, where,
2
6
A=6
4
lapx − iωµ0 Mσ
O
O
O
lapy − iωµ0 Mσ
O
O
O
lapz − iωµ0 Mσ
divx [iωMσ gradx ] divy [iωMσ grady ] divz [iωMσ gradz ]
2 x3
óa
66 óaryr 77
u = 4 z 5,
óar
2
6
y=6
4
−iωµ0 Mσ [gradx ]
−iωµ0 Mσ [grady ]
−iωµ0 Mσ [gradz ]
div[iωMσ grad]
3
77
5,
(3.55)
(3.56)
φ
iωMσ µ0 ó
axs
iωMσ µ0 ó
ays
iωMσ µ0 ó
azs
-div[iωMσ ó
as ]
3
77
5.
(3.57)
For a main grid composed of N cells, where N 0 of these have non-zero conductivity, the size of
A is (3N + N 0 ) × (3N + N 0 ), which means that the full solution (ó
ar , φ) is only obtained in the cells
composing the object. This however does not present any issue, since the required simulated
measurements represent electromotive forces induced in the sensors, and these are solely due
to the magnetic field, and hence to both ó
ar and ó
as .
This is a very large system of equations, populated primarily with zeros, which makes the
usage of sparse matrices the most attractive option for its implementation.
3.3.2.B
Numerical Calculations
In this section, the remaining numerical computations are described, namely the calculation of
the emf on the sensor coils and the magnetic field produced by the source coil.
Concerning the first, the required equation can be derived from the integral form of the general
law of induction, written in terms of the magnetic vector potential and assuming the stationary and
harmonic case,
I
I
E.dl = iω
path
ee
A.dl.
path
ee
(3.58)
The integral on the left hand side of the previous equation is termed electromotive force and
results from integrating the projection of the magnetic vector potential over the path of the sensor
coil. It is possible to see that this equation is linear in A, which means that the total emf induced in
e
the sensors is a linear sum of the emf produced by Ar and As . Since this calculation is intended to
e
e
be automatic, the numerical implementation of equation (3.58) must be able to take into account
any sensor geometry. This means that only the geometry needs to be specified, using for instance
parametric equations for each spatial coordinate. After this specification, a difference matrix is
created which has -1 in the main diagonal and 1 in the first upper diagonal. The application of
this matrix to the specified geometry results in a set of tangent vectors which specify the path
34
3.3 MIT forward problem
of integration. The emf then results from the sum of each individual dot product between the
magnetic vector potential specified in the application point of a tangent vector, and that tangent
vector.
The source field is calculated by the discretization of equation (3.18), substituting the integral by a discrete sum. It uses the same difference matrix applied to the spatial coordinates of
the source, assuring the independence from a specific geometry. It should also be noted that
this computation is only carried out in the cells composing the objects with non-zero conductivity. Since the Mσ matrix is diagonal with only N 0 non-zero elements (N 0 being the number of
cells composing the objects), then its application to each of ó
axs , ó
ays and ó
azs (see equation (3.57))
obviously renders vectors with only N 0 elements different from 0. Therefore, it’s unnecessary to
calculate the source field in all N cells composing the main grid and this observation allows to
accelerate the process by a factor of N 0 /N . This is particularly relevant when modeling small
objects in large problems, which is the case of MIT.
The presented methods allow to quickly change the geometry of the problem at hand and still
obtain the desired results. However, the required discretizations introduce a final error to the MIT
forward problem, which increases as the discretization becomes coarser. In the next section, the
adopted method for solving large sparse systems will be presented.
3.3.2.C
Preconditioned and iterative forward problem solver
The general solution to a linear system of equations Au = y describes all possible solutions.
One can find the general solution by first solving the corresponding homogeneous system Au = 0,
which renders a basis for the respective solution space. After the finding of a particular solution to
the non-homogeneous system Au = y, one can write any solution to the system as the sum of this
particular solution plus a linear combination of the basis vectors obtained from the homogeneous
solution.
Linear systems of equations can be solved either by direct or iterative methods. The first ones
attempt to solve the problem by a finite sequence of operations (e.g. Gaussian elimination or
variants) leading to the actual solution in the absence of rounding errors. Iterative methods have a
broader range of applications. They try to solve the system by finding successive approximations
to the solution starting from an initial guess. When concerning very large problems, these last
ones are the only possible choice, for direct methods would be prohibitively expensive in terms of
computational power.
The iterative methods often use the concept of preconditioning which intends to reduce the
condition number of the problem, making it more suitable for a numerical solution. It uses a
matrix called the preconditioner. A preconditioner P of a matrix A is such that P −1 A has a
smaller condition number than A, leading to an increase in the convergence rate of an iterative
method. The system can be preconditioned at the right and the solution is obtained by solving
35
3. Implementation of the Governing Equations
the two systems (AP −1 )b = y and P −1 b = u, or more commonly at the left where the solution is
derived from the system P −1 (Au − y) = 0. The obtained solutions are the same as the original
system just as long the preconditioner matrix is nonsingular. Since the operator P −1 must be
applied at each step of the iterative linear solver, its computation should be as light as possible.
The cheapest preconditioner would therefore be the identity matrix. This however results in the
original linear system and the preconditioner does nothing. At the other extreme, the choice
P = A gives P −1 A = AP −1 = I, which has optimal condition number of 1, requiring a single
iteration for convergence. In this case the preconditioner calculation is as difficult as solving the
original system.
The chosen preconditioner consists in a trade-off between the 2 extremes and is an incomplete
LU factorization, which approximates A by the product of a lower and upper triangular matrices.
There are two main classes of iterative methods to solve Linear systems of equations, the
stationary iterative methods and the Krylov subspace methods. The first ones solve a linear system employing an operator which approximates the original system and measuring the residual to
form a correction equation. This process is repeated until the solution is obtained with the desired
accuracy. Although they are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices. Krylov subspace methods work by forming an orthogonal
basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence).
The approximations to the solution are then formed by minimizing the residual over the subspace
formed. There are several methods in this class, like the conjugate gradient method, generalized minimal residual method, biconjugate gradient method and biconjugate gradient stabilized
method. This last one was the adopted method. It replaces the orthogonal sequence of residuals
by two orthogonal sequences that enable finding a value near the minimum of the problem.
3.3.3
Results
In this section several results obtained with the forward problem solver are presented. They
intend to give some validation to the developed software and show its versatility concerning the
simulation of eddy current problems. Most of the simulations are based on the russian prototype
presented in figure 2.2, which consists of 8 source/sensor coils composed by 15 windings of 2.5
cm of radius, placed around a cylinder with 50 cm of diameter.
The first results intend to show some of the versatilities of the mesh generation algorithm. In
figures 3.6 and 3.7 the same domain is discretized but based on different subgridding schemes.
In figure 3.6 the typical mesh is presented when a high accuracy solution is only intended around
the object region. These are useful meshes when one is only concerned about the local changes
in the primary magnetic field due to its interaction with the object, since they do not require a
high number of cells to perform the complete domain discretization. Figure 3.7 presents a much
more useful scenario in which both object and sensor regions have a solution with high accuracy.
36
3.3 MIT forward problem
Figure 3.6: 3D mesh with four different resolution levels and a single subgridding region. It is
visible 8 source/sensor coils and a sphere in the center of the space.
These are the cases used for simulation of measurements which are then going to be used to
perform image reconstruction (see chapter 4). In this figure, two possible subgridding schemes
can be observed, both rendering a solution with the same accuracy. However, figure 3.7 a) is
much more interesting and computationally effective, since it requires four times less cells to
discretize the domain than in b), and takes the same amount of time to generate. This leads to
a system of equations four times smaller which takes considerably less time to solve, making it a
very important optimization tool for the MIT forward problem.
((a))
((b))
Figure 3.7: 3D mesh with three different resolution levels and a) 9 subgridding regions; b) a single
subgridding region. 8 source/sensor coils and a sphere in the center of the space can also be
observed.
The next set of results was obtained from simulations using simple phantoms of conductivity
and intend to give some validation to the forward problem solver. Although there are several
papers describing analytical solutions for eddy current problems (e.g. [47] and [48]), most of them
obtained for axisymmetric geometries, their implementation would be too time consuming and
hence would impede carrying out the objectives stated in chapter one, namely to perform image
37
3. Implementation of the Governing Equations
reconstruction. Therefore, only empirical validations were performed by comparing the order of
magnitude of the obtained results with previous published works, for instance [46] and [25], and
by comparison of their shape with the ones derived from classic electromagnetic theory ([49]
and [50]). The final validation is carried out in chapter 4, where the simulated measurements are
used to perform image reconstruction. Since that was possible, it is assumed that the forward
problem is working to some degree. All there is left to it is to quantify the error for several mesh
sizes and configurations.
The first simulation concerns a conducive plate placed orthogonally in relation to the source
coil axis and is presented in figure 3.8. The plate is square, with side length equal to 20 cm
and depth equal to 1 mm. The source coil has 15 windings of 2.5 cm of radius and a current of
amplitude 2 A and frequency 1 MHz is used. The objective is to observe and analyze the current
density direction and amplitude.
((a))
((b))
Figure 3.8: Current density amplitude and direction for a plane with conductivity 1 S/m, placed
orthogonally in relation to the source coil axis. a) 3D view of the problem. b) Top view. The color
map reflects the current density absolute value in each point.
There are several things that should be emphasized right away. The first one is that any current
line is, as expected, closed. Also, one can observe that the current density is confined to the
conductive plane and that the Neumann condition stating that the normal component of the current
density in the interface between conducting and nonconducting media should be null is fulfilled,
since in that interface, the current density is parallel to the material edge. In the intersection point
between the plate and coil axis, the source magnetic vector potential is null and, as predicted,
results in zero current density in that same point. Hence it acts as the current circulation axis. It
should also be stressed that the amplitude of the current density increases from the center to the
periphery reaching its maximum value there. This is predicted from electromagnetic theory and
38
3.3 MIT forward problem
is more notable as the frequency increases. The order of magnitude is in accordance with [25]
and it should be stated that the axisymmetry of the solution results from the axisymmetry of the
problem geometry.
The next simulation regards a conducting sphere with conductivity of 1 S/m and a source coil
with the same specifications as in the previous case. A single sensing loop is also present in order
to measure the induced electromotive force resulting from the residual magnetic field. The sphere
is placed at the center of the space and both source and sensor share the same axis, which is
parallel to the x axis and crosses the center of the sphere. The problem geometry is presented
in figure 3.9, where it is also observable a top view of the generated mesh with 3 subgridding
regions and 4 levels of resolution, in which the maximum resolution cells have a side length of 5
mm.
((a))
((b))
Figure 3.9: Problem geometry for the simulation of a conductive sphere. a) 3D view. b) top view
of the four-level resolution mesh generated.
Due to the axisymmetry of the problem at hand, the results will be shown, without any loss
of generality, in the z = 0 plane. Firstly, the As and Ar maps in absolute values are displayed in
figure 3.10.
e
e
It can be seen that each map is mostly defined around the place where they are generated
(coil and sphere respectively), decreasing with the inverse of the distance thereafter. Also visible
is a difference between the peak values of around 105 . This is to be expected when regarding
objects with such low conductivities and is in accordance with the experimental findings. Another
interesting thing to stress is that the solution is, as expected, symmetric with respect to the coil
axis.
A more interesting result is the one observed in figure 3.11, where both source and residual
magnetic flux densities are visible. They were obtained by taking the curl of each of the source
and residual magnetic vector potential fields. The red circle represents the spatial location of the
sphere. It can be seen that the residual flux has the opposite direction of the primary flux and is
39
3. Implementation of the Governing Equations
Figure 3.10: Absolute values of the source (left) and residual (right) magnetic vector potential
fields in the plane z=0. The source coil is depicted in red and its axis in green.
Figure 3.11: Magnetic flux density lines originated from the source (left) and sphere (right) in the
plane z = 0. The source coil is depicted in red and the red circle represents the spatial location of
the sphere.
generated inside the object. Hence it is trying to offer some opposition to the rate of change of
the primary magnetic field, which is in accordance with the general law of induction. The shape of
both fluxes is exactly in agreement with the theoretical one described in [49] and [50]. It can still
be seen that the magnetic flux lines are closed and thus the null divergence equation is correctly
enforced. It should finally be noted that the symmetry in both field maps arises once more from
the symmetry of the problem geometry.
Still regarding this problem, in figure 3.12 is presented the variation of the magnitude of the Ar
e
field and the electromotive force with distance. The emf is calculated using a single loop stationed
40
3.3 MIT forward problem
in several positions along its axis. In these results is clearly visible the rapid decrease of the
magnitude with the distance, which is expected taking into account the spatial variation of the
magnetic field. It should be stated that the order of magnitude and the shape of the curves are in
conformity with the ones published in [46].
((a))
((b))
Figure 3.12: a) Variation of the absolute value of the residual magnetic vector potential with both
y and z coordinate, on the plane x = 0; b) Variation of the absolute value of the emf for a single
loop placed along its axis at various distances from the plane x = 0.
The results presented thus far were based on objects with simple geometries and a single
conductivity. However useful for standard simulations, namely to assess the accuracy and correctness of the forward problem solver, they do not represent objects that could actually be used
as conductivity phantoms to evaluate, analyze, and tune the performance of an MIT system. To fill
this gap, a final result is presented with the intent of showing the current distribution inside a more
complex object. The generated phantom presented in figure 4.13 is based on the work published
in [3] and consists in a large sphere with 5 cm of radius and a conductivity of 0.3 S/m, containing
two spherical perturbations, with higher and lower conductivities with respect to the main sphere.
The high conductivity perturbation has a radius of 2 cm and a conductivity of 1.5 S/m and the low
conductivity one has a radius of 1 cm and 0.05 S/m of conductivity.
The simulation was carried out using a 3D mesh like the one shown in figure 3.7 a), and each
maximum resolution cell has 3 mm of side length. Five levels of resolution were employed.
In figure 3.14 the amplitude of the current density distribution is depicted. In this case, the
current lines are complex, not easy to predict and actually wouldn’t add a great contribution to
what this result intends to show. Therefore they are not displayed. The phantom was placed in
the center of the space and the typical source coil setup was used.
The first thing that should be noted is that, once again, the current density is confined to the
limits of the object and no current circulates in nonconducting regions. Another interesting thing
41
3. Implementation of the Governing Equations
((a))
((b))
Figure 3.13: Conductivity phantom composed of a 0.3 S/m conductivity sphere, containing two
others, one with a conductivity of 1.5 S/m and the other with 0.03 S/m. a) 3D View in which the
largest sphere is slightly transparent. b) cross section view.
((a))
((b))
((c))
Figure 3.14: Absolute values of the current density distribution. a) 3D view also depicting the
source coil; b) cross section along the plane x = 0; c) cross section along the plane z = 0.
42
3.4 EIT Forward Problem
is that the maximum current density appears, as expected, in the highest conductivity region and
that in the lowest conductivity sphere almost no current circulates. This differentiated distribution
is what bestows the ability to reconstruct objects in an MIT system. In the same figure one can
also observe regions inside the object where the current is nearly zero. They correspond to the
axis of current circulation.
3.3.4
Discussion
A software focused on solving the MIT forward problem was implemented and tested. It is
based on the eddy currents formulation expressed in terms of the reduced magnetic vector potential, Ar and the modified electric scalar potential, φ, which avoids the need to carefully model
e
the geometry of the source and is numerically more stable when compared to other formulations
(see section 3.1.1).
The chosen discretization technique was the Finite Integration Technique which allowed to
build a non-singular linear system of equations that was stored using sparse matrices due to
the large dimension of the problem and the fact that the matrix is mostly populated by zeros. This
system of equations can be solved using already available methods, like iterative Krylov subspace
methods. The chosen one, the Biconjugate Gradient stabilized method using an incomplete LU
factorization as a preconditioner presented itself as a fast and robust solver for the system of
equations of the MIT forward problem.
A versatile 3D mesh generator was implemented so the grid created could automatically adapt
itself to the geometry of the problem, leading to an OcTree type mesh which enables saving as
much as four times the number of required cells to discretize the domain. The automation character of the developed forward problem solver turns it into a powerful research tool, for it allows
modeling any eddy current problem requiring only the specification of the problem geometry and
stimulation patterns.
Although the results obtained were not compared to analytical solutions in order to have a
measurement of the error, empirical validations were performed in order to assess the solutions
obtained. Both shape and amplitude were in accordance with previous published works and with
the classical electromagnetic theory. Hence it is considered that the developed 3D MIT forward
problem solver is working properly and efficiently.
3.4
EIT Forward Problem
Although it would be possible to modify the previously developed 3D PDE solver, namely by changing the final equation while keeping the same method, it was considered more beneficial to also implement a 2D PDE solver based on the same discretization principles. In this case the problem is much simpler and smaller, but still very useful. The discretization of
the 2D EIT forward problem was carried out by following the same steps as in section 3.2.
3.4.1 Discretization of the used formulation
Following the same logic as in section 3.2, the involved fields and parameters are specified as integral quantities defined over two dual grids. For a given conductivity distribution, the only unknown variable here is the electric field, which can be expressed as the gradient of a scalar field under the quasi-static approximation. This scalar field is defined at the center of each of the main grid cells, and the conductivity is defined at the same geometric place. Both grids, the scalar field and the constitutive quantities are depicted in figure 3.15.
Figure 3.15: Dual grid complex used in the 2D case. The field quantities and constitutive parameters are also present. Image taken from [25].
The conductivity σ is considered isotropic and is defined by a diagonal matrix. Since the application of the gradient operator changes a 3-form into a 1-form, it is necessary to define σx and σy matrices, resulting from averaging over the x and y directions respectively, so that σ and the gradient of the scalar field are defined over the same geometric place. This averaging is done through a matrix P. Therefore, the discretized governing equation is:
div [P σ grad u] = −dρ/dt,    (3.59)
where u is the discretized electric scalar potential. The div and grad matrices are similar to the ones defined in the previous section, containing only the topological information about the dual grid complex. These two operators continue to respect the relation grad = div^T. The size of the resulting system of equations is equal to the number of cells composing the mesh required for the discretization of the 2D domain.
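As an illustration of how equation (3.59) can be turned into a sparse system of equations, the following MATLAB sketch assembles discrete grad and div operators and the averaged face conductivities on a regular nx-by-ny grid. It is a minimal sketch under stated assumptions, not the actual implementation; the grid dimensions, the source/sink cells and the current amplitude are illustrative.

    nx = 100; ny = 100; h = 3e-3;          % number of cells and cell size (assumed)
    D = @(n) (sparse(1:n-1, 2:n, 1, n-1, n) - sparse(1:n-1, 1:n-1, 1, n-1, n)) / h;
    M = @(n)  sparse(1:n-1, 2:n, 0.5, n-1, n) + sparse(1:n-1, 1:n-1, 0.5, n-1, n);
    Gx = kron(speye(ny), D(nx));           % d/dx on the interior x-faces
    Gy = kron(D(ny), speye(nx));           % d/dy on the interior y-faces
    grad = [Gx; Gy];                       % discrete gradient (cells -> faces)
    div  = grad';                          % discrete divergence, using grad = div^T
    sig  = ones(nx*ny, 1);                 % cell conductivities (homogeneous example)
    sigF = [kron(speye(ny), M(nx)); kron(M(ny), speye(nx))] * sig;   % P*sigma on the faces
    Dsig = spdiags(sigF, 0, numel(sigF), numel(sigF));
    A    = div * Dsig * grad;              % system matrix of equation (3.59)
    % Right-hand side: injected currents at the drive pair (assumed indices and amplitude)
    y = zeros(nx*ny, 1);  y(srcCell) = +1;  y(snkCell) = -1;
    % A is singular up to an additive constant; one reference cell is grounded before solving.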
3.4.2 Software Description
The software developed to carry out the objectives of this section was again implemented in MATLAB, for the same reasons presented in the previous section. It can generally be seen as a simple 2D PDE solver based on staggered grids, focused on solving the EIT forward problem. It should be noted that particular care was taken with the efficiency of the forward problem solver, mainly because of its use in the inverse problem context for the computation of the sensitivity matrix (see chapter 4).
Once more, the only requirement for its usage is the specification of the problem geometry, conductivity distribution and stimulation patterns. One thing that should be stated right away is that the electrodes were modeled using the simplest method, which considers the current density on the surface of each electrode to be constant, and zero in the space between them. Although in a first implementation the electrodes had finite length and the measurements were carried out by averaging the electric potential along their extent, no significant differences were found between this method and considering each electrode as a single point. Therefore, the latter method was adopted and each drive pair is modeled as charge source and sink points.
It should also be noted that the quasi-static state is assumed, thus discarding magnetic effects. These assumptions, although simplistic, lead to very good results in the context of both forward and inverse problems. Even if further developments are needed to obtain a more robust model, as a first approach these approximations are valid and very useful, since they simplify the problem at hand while still capturing the fundamentals of the underlying electromagnetic phenomena.
Due to the considerably smaller size of this problem when compared to the 3D MIT one, no subgridding schemes needed to be implemented, since the resolution can be increased over the entire domain while keeping an acceptable number of cells in the grid. Therefore, a regular grid was used to automatically perform the complete discretization of the specified domain using a single resolution level. This type of grid facilitates the building of the necessary neighbor matrices, since no interpolation between different resolution levels is required. The set {N^k_ij} contains the required neighbor matrices. The k index refers to the implemented 3-form, the i index defines the two orientations of each cell, x and y, and the j index refers to all possible directions, respectively x+, x−, y+ and y−. Since there is no interface between resolution levels, each row of each matrix has only one entry, which specifies the direct neighbor of the cell with the same index as the row number, in the respective orientation and direction. Both gradient and divergence were implemented using the formulas presented in the previous section, but applied to 2D.
Since the system of equations Au = y is much smaller than in the MIT case, it can be solved through variants of Gaussian elimination, namely a general triangular factorization computed by Gaussian elimination with partial pivoting, which expresses the A matrix as the product of a permutation of a lower triangular matrix and an upper triangular one. The solution is
obtained by solving two easier systems, one involving each of these matrices. In partial pivoting, the algorithm selects as the pivot element the entry with the largest absolute value in the column of the matrix currently being considered. Partial pivoting is generally sufficient to adequately reduce round-off error. This contrasts with complete pivoting (or maximal pivoting), where the entries of the whole matrix are considered and rows and columns are interchanged to achieve the highest accuracy. Complete pivoting is usually not necessary to ensure numerical stability and, due to the additional computations it introduces, it may not always be the most appropriate pivoting strategy.
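A minimal MATLAB sketch of this strategy is given below, assuming the system matrix A has already been assembled and that a hypothetical helper driveVector returns the source/sink vector of each drive pair; the factorization is computed once and reused for every right-hand side.

    [L, U, P] = lu(A);                  % P*A = L*U, Gaussian elimination with partial pivoting
    nPairs = 16;                        % number of current injection patterns (illustrative)
    Uall = zeros(size(A,1), nPairs);    % one potential distribution per drive pair
    for k = 1:nPairs
        y = driveVector(k);             % assumed helper returning the k-th source/sink vector
        Uall(:, k) = U \ (L \ (P*y));   % two triangular solves per right-hand side
    end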
3.4.3 Results
In this section several results obtained with the developed forward problem solver are presented. They are based on the Sheffield prototype APT system mk 1, which is composed of 16 equally spaced electrodes placed around the object. For a given drive pair injecting current into the object, the resulting electric potential is analyzed and compared to the theoretically expected one. Current stream lines are also determined by computing the gradient of the scalar potential and using the constitutive relation between the current density and the electric field.
Firstly, the generated mesh is shown in figure 3.16. Each cell has a side length of 2.5 cm so that the cells can be seen individually in this figure; smaller side lengths are used when performing the simulations. The array of electrodes is also visible, depicted by the red dots placed around the periphery of the circular object.
Figure 3.16: Generated single resolution 2D mesh, where each cell has side length of 2.5 cm.
The object is depicted as the gray circle and the electrodes are represented by the red dots.
The next set of results concerns simulations using two different current injection protocols, the adjacent and the opposite. The object used is similar to the one presented in the previous figure, but has a radius of 0.1 m and a conductivity of 1 S/m. The mesh contained cells of 3 mm side
length. The resulting electric scalar potentials are displayed in figure 3.17. Here, one can also observe the corresponding current stream lines superimposed on each respective image.
Figure 3.17: Resulting electric scalar potential fields for the a) adjacent and b) opposite current injection protocols, using a circular object with 10 cm radius and 1 S/m conductivity. The respective current lines are also displayed.
The obtained results closely resemble the scalar electric potential fields generated by electric dipoles, as described in [49] and [50]. One can clearly see the isopotential lines originating between the driving pairs. They will be particularly relevant for the inverse problem, namely for the EIT filtered back-projection algorithm, which converts the Cartesian coordinate system into a coordinate system based on isopotential and isocurrent lines². One can also observe, as expected, that the current flows from the injecting electrode to the receiving one. Moreover, the current lines are in direct agreement with the theoretically expected ones presented in figure 2.5. A rapid decrease of the amplitude of the scalar potential field with the distance from the driving pair is also visible. This is to be expected, since a partial cancellation takes place when two opposite charges are brought close to each other. It can also be observed that this cancellation is more pronounced in some directions than in others, which is likewise expected.
Another thing that should be stated is that the solution is only obtained in the non-zero conductivity regions. This is due to the σ matrix, which is diagonal and has as many non-zero entries as there are non-zero conductivity cells. Therefore, the product of this matrix by the gradient operator prevents the solution from being determined outside the object. One way to solve this problem would be to set the conductivity outside the body to a very small, but non-zero, value. This, however, could bring numerical instabilities to the system of equations and is thus avoided. Since the solution is only required inside the space confined by the electrode array, the fact that it is not obtained outside the object region does not present any issue.
² The full details are presented in chapter 4.
A much more appealing result is the one presented in the next figure. It concerns the same circular object as in the previous simulation, but now containing two circular perturbations, one with higher and one with lower conductivity with respect to the first object. A representation of this conductivity phantom is presented in figure 3.18. The high conductivity object has a conductivity of 10 S/m and the low conductivity one has a conductivity of 0.1 S/m. These values were chosen so that the changes in the current and potential patterns would be visible. The simulation was carried out using a grid where each element has a side length of 3 mm, and the obtained results are displayed in figure 3.18 b) and c).
Figure 3.18: a) Conductivity phantom composed of a circular object with 1 S/m conductivity and 10 cm radius, containing two circular perturbations of 1.5 cm radius and conductivities of 10 S/m and 0.1 S/m; resulting electric scalar potential fields and current lines for the b) adjacent and c) opposite current injection protocols.
The most interesting thing to note here is that the current tends to flow through the lowest impedance path between the injecting and receiving electrodes. This phenomenon is well depicted in figure 3.18 c), where it can be seen that the current lines clearly enter the high conductivity perturbation and avoid the low conductivity one. This is in exact agreement with electromagnetic theory and gives credibility to the forward problem solver.
3.4.4 Discussion
A 2D PDE solver was implemented and applied specifically to solve the 2D EIT forward problem. It could of course be modified, with minimal effort, to simulate other physical phenomena governed by a PDE, since the required differential operators are already defined.
The developed software is based on the dual grid method and only requires the specification of the problem geometry and physical constants in order to work properly. The mesh generation, the building of the neighbor matrices and the creation of the system of equations that leads to the final solution are all done automatically to meet the user's specifications. The fact that all these steps are taken automatically enables a quick change of the problem specifications, whether in the geometry, conductivity distribution or stimulation pattern, while still obtaining useful results. It is considered a good tool for computationally testing new prototypes, or simply new source/sensor configurations.
The results obtained are in accordance with electromagnetic theory, despite the simplifications that were made. However, there is still room for improvement, namely in the electrode modeling. In the near future, the contact impedance of the electrodes, an essential feature of any EIT system, should be taken into account through the so-called complete electrode model. The results will then have a deeper physical meaning and will be closer to reality.
This forward problem solver is highly efficient: it can compute the solution of 16 different current injection profiles in around 0.2 s, for a regular grid composed of 10000 elements. This will be particularly useful in the image reconstruction procedure.
4 Image Reconstruction
Contents
4.1 Traditional Reconstruction Methods  52
4.2 MIT inverse problem  56
4.3 EIT inverse problem  65
In this chapter the image reconstruction process is tackled through linear approaches. The forward problem solvers developed in the previous chapter are used to simulate measurements, which are afterwards used, without any other prior knowledge, to infer the physical properties of the body under analysis.
Due to its historical relevance in biomedical imaging systems and the fact that it was used in
this thesis, although with some variants, the Back-Projection¹ method is going to be presented,
as well as its upgraded version, the Filtered Back-Projection, which is the most widespread image
reconstruction algorithm incorporated into X-ray CT scanners. Later, the modifications required
for the application of these methods to both MIT and EIT are described.
This chapter ends with the explanation and application of a state of the art image reconstruction algorithm based on the calculation of a sensitivity matrix. The feasibility of its application
using an underlying 2D forward problem was studied, in contrast with the full 3D case presented
in [40].
4.1 Traditional Reconstruction Methods
Digitally-computed image reconstructions are inherently discrete. If the size of the image is n
by m pixels, then it can be described using an abstract vector space of dimension n × m. Each
basis vector represents a small area in real space. The coefficient assigned to that basis vector
will represent the average value of the image within that area.
The basic principle behind the image reconstruction process of a tomographic system is that the two-dimensional internal structure of an object can be reconstructed from a series of one-dimensional projections of the object acquired at different angles. The detected signal intensities in these projections are dictated by the two-dimensional distribution of the physical property on which the imaging system is based.
In this section, two standard approaches for biomedical image reconstruction are going to be
described. The first one is the Back-Projection method, which is the simplest in concept and the
easiest to understand. The other corresponds to the so-called analytic reconstruction methods,
which are based on exact solutions to the image equations and hence are faster. The filtered
Back-Projection belongs to this class of methods.
4.1.1 Notation
A two-dimensional Cartesian coordinate system (x, y) is used to describe the points of the
layer under analysis. A density function f (x, y) specifies the contribution of each point to the detected signal. The goal is to reconstruct this function from peripheral measurements. In X-ray CT,
f (x, y) represents the linear attenuation coefficient, in radioisotope imaging it is proportional to the
¹ Capitalization is used to distinguish between back-projection as a process and Back-Projection as a method of image reconstruction in which unmodified profiles are back-projected and summed.
radioisotope density and in electromagnetic tomography it describes the impedance distribution
map.
The ray follows a well defined path between the source and the detector. For the sake of simplicity, let us assume a straight path (which is the approximation made in CT). The ray-path is described by an (x′, y′) coordinate system rotated by an angle φ (see figure 4.1) and is thus specified in polar coordinates (x′, φ).
Figure 4.1: Representation of the used coordinate systems. An object is depicted in gray and
ray-paths are represented by the dashed arrows. A one-dimensional projection is also visible.
The ray-sum is defined as the weighted sum of all the basis vector coefficients that lie along the path of the ray. This weighting is a function of the path length of the ray through each pixel. The ordered set of all ray-sums for a fixed angle is called a projection p and is defined by the following equation,
pφ(x′) = ∫L f(x, y) dl,    (4.1)
where L describes the ray-path. This equation is called the Radon Transform. It is useful to describe the x′ coordinate in terms of the Cartesian coordinates and the projection angle φ,

x′ = x cos φ + y sin φ.    (4.2)
Although continuous in nature, f (x, y) is in practice represented by a discrete array of equally
spaced points arranged rectangularly and is usually confined to the circular domain formed by
the source and sensors. It is also common to consider that the projections are taken at several
positions equally spaced in terms of angular displacements.
4.1.2 Back-Projection
It is the simplest reconstruction technique and consists in back-projecting each profile across
the plane, i.e., assigning the magnitude of each ray-sum to every point that makes up the ray. This
process is graphically depicted for two projections in figure 4.2.
Figure 4.2: Graphic representation of the Back-Projection method. a) Two projections of a rectangular object are shown; b) back-projection and superimposition of the same projections forming
an approximation to the object. Taken from [51].
This means that each pixel is selected in turn and the ray that passes through it in every
projection is computed. The value that is assigned to the pixel is determined by summing each
corresponding ray-sum. For this reason, Back-Projection is also called the summation method.
This process is described by the following equation,
f̂(x, y) = Σ_{j=1}^{m} pφj(x cos φj + y sin φj) Δφ,    (4.3)
where m is the number of projections and Δφ is the angular spacing between them. It should be stressed that f̂ is simply an estimate of the true map f.
One of the main disadvantages of this method is that it produces a star-effect artifact, due to the application of the ray-sum to every pixel composing the ray-path. This is more evident when discrete areas of high density are present. Although the use of many projections minimizes the visibility of each individual ray composing the star artifact, a general fogging of the background is always present.
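A minimal MATLAB sketch of equation (4.3) is given below, assuming a sinogram p of size nBins-by-nAngles sampled at positions s and angles phi; all names and the reconstruction radius r are illustrative.

    [X, Y] = meshgrid(linspace(-r, r, nPix));        % reconstruction grid (assumed radius r)
    fhat = zeros(nPix);
    dphi = phi(2) - phi(1);                          % angular spacing between projections
    for j = 1:numel(phi)
        xp   = X*cos(phi(j)) + Y*sin(phi(j));        % x' coordinate of every pixel, eq. (4.2)
        fhat = fhat + interp1(s, p(:, j), xp, 'linear', 0) * dphi;   % assign the ray-sums
    end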
4.1.3 Analytical methods
The analytic reconstruction techniques constitute a much more elegant class of reconstruction methods and are based on direct solutions to equation (4.1). Their only approximation lies in limiting the spatial frequencies to a maximum km, which has three main consequences: an array of points equally spaced by w = 1/(2km) can be used for image reconstruction, the projections may be sampled at the same interval w, and discrete Fourier series can be used.
Most analytical techniques are based on the "Central Slice Theorem", which relates the Fourier transform of the measured projections, P(k, φ), to the Fourier transform of the image, F(kx, ky). Its derivation starts with the calculation of the two-dimensional Fourier transform of the density function,

F(kx, ky) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) e^{−2πi(kx x + ky y)} dx dy,    (4.4)
where kx and ky are the wave numbers of each sinusoidal wave in the x and y directions. By rotating the (x, y) coordinate system to the (x′, y′) one, and defining k = √(kx² + ky²), one ends up with equation (4.5).
Figure 4.3: Graphic representation of the two-dimensional Fourier reconstruction method. a)
Projections acquired at two different angles; b) sinc functions obtained after application of the
Fourier transform of both projections plotted in the corresponding angles; c) matrix containing
the interpolated Fourier coefficients; d) the inverse two-dimensional Fourier transform renders the
reconstructed image. Taken from [51].
In the rotated coordinates, equation (4.4) becomes

F(kx, ky) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) e^{−2πik x′} dx′ dy′.    (4.5)
Exchanging the order of integration, it can be seen that the y′ integral is the ray-projection pφ(x′) along a straight line connecting the source to the detector. This leads to the following,

F(kx, ky) = ∫_{−∞}^{∞} pφ(x′) e^{−2πik x′} dx′ = P(k, φ),    (4.6)
where P(k, φ) is the Fourier transform of pφ(x′) with respect to x′. Thus, the Fourier transform of the projection is equal to the Fourier transform of the image along the line through the origin of k-space that makes an angle φ with the kx axis. This is the Central Slice Theorem. It can be used for image reconstruction as long as interpolation is performed to obtain, from the projections, the Fourier coefficients on the rectangular matrix required for the inverse two-dimensional Fourier transform. This process of reconstruction is exemplified in figure 4.3 for a rectangular object.
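A minimal MATLAB sketch of this Fourier reconstruction, with the interpolation step made explicit, is shown below. It assumes p is an nBins-by-nAngles sinogram with sample spacing ds; the names and the centring conventions are illustrative rather than taken from an actual implementation.

    Pk = fftshift(fft(ifftshift(p, 1)), 1);            % P(k, phi), one column per angle
    k  = (-nBins/2 : nBins/2 - 1)' / (nBins*ds);       % radial frequency axis
    [K, PHI] = ndgrid(k, phi);
    kx = K .* cos(PHI);   ky = K .* sin(PHI);          % polar sample locations in k-space
    [KX, KY] = meshgrid(k, k);                         % Cartesian k-space grid
    Fr = griddata(kx(:), ky(:), real(Pk(:)), KX, KY);  % interpolation of the real part
    Fi = griddata(kx(:), ky(:), imag(Pk(:)), KX, KY);  % interpolation of the imaginary part
    F  = Fr + 1i*Fi;   F(isnan(F)) = 0;                % zero outside the sampled disc
    fhat = real(fftshift(ifft2(ifftshift(F))));        % inverse 2D Fourier transform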
Although this method leads to better results than the Back-Projection, the fact that it needs
interpolation can become quite cumbersome. However, it is possible to make the Back-Projection
work properly, just as long as appropriate modifications are made to the projections. These can
be found by the following derivation, which starts by rewriting equation (4.3) in its integral form,
f̂(x, y) = ∫_{0}^{π} pφ(x cos φ + y sin φ) dφ.    (4.7)
Through substitution of pφ(x cos φ + y sin φ) by its Fourier representation, and by multiplying and dividing it by |k|, one ends up with,

f̂(x, y) = ∫_{0}^{π} ∫_{−∞}^{∞} [P(k, φ)/|k|] e^{2πik(x cos φ + y sin φ)} |k| dk dφ.    (4.8)
The application of the two-dimensional Fourier transform to the previous equation and the usage of the Central Slice Theorem lead to the following relation,

F̂(kx, ky) = P(k, φ)/|k| = F(kx, ky)/|k|.    (4.9)
It is now clear that the back-projected image is equal to the true image, with the particular difference that its Fourier amplitudes are divided by the magnitude of the spatial frequency. Therefore, if the one-dimensional projections are properly filtered with a one-dimensional filter kernel, one ends up with a very good estimate of the original image, as long as enough projections are used. These filtered projections now have both positive and negative values, which counteract the inherent blurring of the simple Back-Projection method. There are essentially three types of filtering, Radon, Convolution and Fourier², the last being the most widely used due to its computational speed. It performs the filtering in the frequency domain by simply multiplying the Fourier transforms of the profile and the filter kernel, which are calculated resorting to the Fast Fourier Transform. It is clear from equation (4.9) that the first filter kernel used had the frequency response |k| and is termed the Ram-Lak filter. Although it accurately removes the blurring, due to its shape (figure 4.4) it amplifies the high spatial frequencies and hence does not have desirable noise characteristics. To reduce this amplification and thus improve the noise performance, several other filters were developed, such as the Shepp-Logan, low-pass cosine or generalized Hamming filters (figure 4.4). However, this high spatial frequency attenuation can also lead to a decrease in contrast, since the edges of the objects correspond to high spatial frequencies. There is then a trade-off between image quality and noise reduction.
4.2 MIT inverse problem
In this section, the MIT inverse problem is going to be addressed using the previously described Filtered Back-Projection method, as well as a modified version of it, in which the back-projection is done along unperturbed magnetic field lines connecting the source and detectors.
These methods are based on the heuristic approach presented in [7, 10], which consists in

ϕ ≈ ∫L W σ dl,    (4.10)
where L is an unperturbed magnetic field line and W is a geometrical weighting factor. This equation is directly comparable with equation (4.1) and means that the one-dimensional projections consist of the phase shifts between source and detector signals, induced by the residual field.
² Refer to [51] for a full description of each of these methods.
Figure 4.4: Spatial frequency response of commonly used filters in Filtered Back-Projection. 1,
Ram-Lak; 2, Shepp-Logan; 3, low-pass cosine; 4, generalized Hamming
4.2.1 Practical Implementation
Equation (4.10) is highly dependent on the problem geometry, mainly because it needs a description of the magnetic flux lines. Hence, two different prototypes were tested. The first one was inspired by a first generation CT scanner and consists of a single source coil and a single detector coil (figure 4.5 (a)), sharing the same axis and translating and rotating together to obtain the required projections. It relies on the approximation that the magnetic flux lines connecting them are straight lines, which is not far from reality when an object with small size and low conductivity is present. The standard Filtered Back-Projection can then be used for image reconstruction.
Figure 4.5: Prototypes tested in the MIT inverse problem. (a) represents the prototype that uses
a single source and sensor and works by executing a series of translations followed by rotations.
(b) displays the prototype based on a single source and multiple detectors and works by rotating
both by the same angular increment. The translations are depicted by the dashed arrows and the
rotations by the curved ones. Red coils represent the sensors and the black ones the sources.
The second prototype tested is based on the one presented in [7] and uses a single source and multiple detectors, whose axes all belong to the same plane, all stationed in a ring surrounding the object (figure 4.5 (b)). The projections are acquired by rotating the source and detectors by the same angle. The image is reconstructed by a modified Filtered Back-Projection, where the profiles are back-projected along the unperturbed primary magnetic flux lines connecting each detector to the source. This is done by first computing the source magnetic flux density
in the entire domain. Afterwards, a recursive relation is used to find the line that connects the
source to the detector,
Pi+1 = Pi −
e
e
BPi
e dl,
||BPi ||
(4.11)
e
where Pi+1 is the next position, Pi+1 is the current position, BPi is the magnetic flux interpo-
e
e
e
lated in the current position and dl is a a length increment. The starting position is the center point
of the detector coil and relation is executed until Pi+1 is near the center of the source coil, up to a
e
desired accuracy. The complete set of computed positions define the magnetic line. The fact that
the magnetic flux density is only computed once and not at every iteration makes this process
very efficient. Each line is then converted from Cartesian coordinates to pixel indexes and are
used as binary masks defining the path of back-projection. One thing that should be taken care
is that since the lines now have fine width, they overlap in some regions. Therefore, each pixel
is assigned the inverse of the number of magnetic lines that crosses it. If this wasn’t executed,
areas of increased conductivity would appear in the image that had nothing to do with the real
conductivity distribution but simply with the superimposition of several magnetic flux lines.
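The line tracing of equation (4.11) can be sketched in MATLAB as follows, assuming the source field components Bx, By and Bz were precomputed on a meshgrid (xg, yg, zg); detectorCentre, sourceCentre, dl and tolDist are illustrative names, not identifiers of the developed software.

    P = detectorCentre;                       % starting point (1x3), centre of the detector coil
    flux_line = P;                            % traced positions
    while norm(P - sourceCentre) > tolDist
        B = [interp3(xg, yg, zg, Bx, P(1), P(2), P(3)), ...
             interp3(xg, yg, zg, By, P(1), P(2), P(3)), ...
             interp3(xg, yg, zg, Bz, P(1), P(2), P(3))];
        P = P - (B / norm(B)) * dl;           % recursive relation (4.11)
        flux_line(end+1, :) = P;              %#ok<AGROW> store the new position
    end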
The projections are simulated using the developed forward problem solver described in section 3.3. For a given object with a certain conductivity distribution, the A matrix defined in equation (3.55) is computed and stored, as well as its incomplete LU factorization used for preconditioning. Since the computations required to obtain these two are the main time consumers of the forward problem software, the fact that they are only executed once and saved afterwards greatly accelerates the simulation process. Now, for each source position, the only necessary operations are the calculation of the y vector (equation (3.57)), the resolution of the system of equations and the interpolations needed to obtain the emf over each sensor. These are computationally light when compared to the previous ones.
The four types of filter kernel displayed in figure 4.4 were implemented and are applied to the
projections by multiplication of Fourier transforms.
The usual setup comprises source and sensor coils composed of 15 windings with a radius of 2.5 cm, and the current used has an amplitude of 2 A and a frequency of 1 MHz. It should be noted that, for both prototypes, the image reconstruction process is carried out in the plane containing the axes of the source and detectors.
4.2.2 Results
This section concerns several results obtained using both image reconstruction techniques for
different objects with specific conductivity distributions.
The first results intend to demonstrate the spatial sensitivity of an MIT system, i.e. the variation of the phase shifts with the spatial location of the object in relation to the position of both source and detector. For this purpose, a single source coil and a single detector coil were used, sharing the same axis but 35 cm apart (figure 4.6). The centers of the source and detector coils have the coordinates (-0.175,0,0) m and (0.175,0,0) m, respectively.
The frequency in this case was 20 MHz and the test object consists of a sphere with a diameter of 10 cm. To assess the effect of the conductivity on the acquired measurements, two
homogeneous conductivity distributions were applied to the sphere, one equal to 7 S/m and the
other equal to 3.5 S/m. The plane z = 0 crosses the center of the sphere at all times. These
specifications were chosen as such to match the experimental setup presented in [7], which was
used to carry out the same objective. Both source and detectors have fixed positions and the
object is moved along the x axis or the y axis. The obtained results are displayed in figure 4.7.
The displayed curves are in agreement with the experimental findings published in [7] and this
gives further credibility to the forward problem solver. One can see that the measured phase shifts
are in direct proportion to the conductivity of the sphere and hence confirm to some degree the
suggested heuristic approach (equation (4.10)). The behavior of the curves is also in accordance
with the magnetic induction theory. The x-dependence curve shows a minimum phase shift when
the object is equally distant from the source and detector, a global maximum when the object is
nearer the source and a local maximum when the object is closer to the detector. This can be
explained by the relation between the magnitude of the induced magnetic field and the distance
that it needs to travel in order to reach the detector. This contrasts with X-ray CT, where for a
single source and detector, the object movement along the x axis would always render the same
measurement. The y-dependence curve confirms the fact that the sensitivity is higher in the zone
nearer the magnetic line connecting inductor and detector. This is the main source of information
required for image reconstruction through Filtered Back-Projection.
Figure 4.6: Setup used for sensitivity mapping. The object is a 10 cm diameter sphere, with
conductivity 7 S/m in one test case and 3.5 S/m in the other. Taken from [7].
Figure 4.7: Dependence of the measured phase shifts on the position of the object along the (a)
x axis and (b) y axis and on the object’s conductivity.
The results presented in figures 4.8 (a) and (b) are sinograms, which correspond to the complete set of acquired projections, for a sphere with a diameter of 5 cm and a conductivity of 1 S/m, in two different spatial positions. In one, the center of the sphere coincides with the center of the space and in the other, the sphere is shifted 4 cm in the positive direction of the x axis. The measurements were carried out using the first described prototype.
Figure 4.8: Sinograms of a (a) centered sphere and a (b) right deviated sphere, with conductivity
of 1 S/m.
As expected, the centered sphere originates the same one-dimensional profile for every projection angle and the deviated sphere generates a symmetric sinogram. In the latter it should be noted, following the previous discussion about figure 4.7, that the magnitude of the phase shifts varies with the projection angle, since now the distance from the object to the source or the detector is changing. Hence, the maximum value appears in the projection taken at 90°, since it is there that the distance to the detector is minimum.
The next set of results corresponds to the image reconstruction based on the two depicted sinograms and intends to show the differences between simple and filtered Back-Projection. Since no noise is simulated, the Ram-Lak filter is used. Figures 4.9 (a) through (c) correspond to the centered sphere and (d) through (f) to the deviated sphere.
Figure 4.9: The left column depicts the true conductivity maps, the middle column the images reconstructed using the Back-Projection method and the right one using the Filtered Back-Projection method. The top line concerns the centered sphere and the bottom one the deviated sphere.
For both cases one should note, as predicted, the great decrease in blurring due to filtering. This is particularly evident in figure 4.10, which displays the one-dimensional conductivity profile obtained along the horizontal line that crosses the center of each image. Another point that stands out is that it is possible to pinpoint the location of the sphere using this method of image reconstruction, and that there is a correspondence between the measured phase shifts and the conductivity distribution, as equation (4.10) predicts. However, an observation has to be made regarding the deviated sphere reconstruction. One can see that it resembles an ellipse more than the true circular cross-section. This results from the increase of phase shifts due to the proximity of the object to either source or detector. Hence there is a primary direction where the magnitude
Figure 4.10: One dimensional conductivity profiles, depicting the true one and the ones reconstructed through simple and filtered back-projection.
of the phase shifts is larger, which has nothing to do with the conductivity distribution. Although expected, it can be seen as an inherent limitation of this prototype.
The effects arising from the use of a limited number of projections in the image reconstruction process will now be explored. Both the standard and the modified Filtered Back-Projection methods are used. This effect is depicted in figure 4.11, where two spheres are present, with radius of 2.5 cm but with different conductivities. One has a conductivity of 1 S/m and its center has the coordinates (0.05,0,0) m, and the other has a conductivity of 0.7 S/m and is located at position (-0.05,0,0) m.
It is visible that the objects can only be represented correctly when a high number of projections is used. One thing that should be noted is that there is a reconstructed conductivity zone connecting the two objects that does not exist in the true conductivity map (see figures 4.11 (a), (d) and (g)). This happens as a result of the previously described effect of increased phase shifts due to the proximity of the object to the source or sensors. The projections taken around 90° are then much larger in magnitude than the ones taken around 0° or 180°, and since their back-projection tracts cross the region between the two spheres, a conductivity value is assigned there which cannot be canceled by filtering. This effect is minimized in the modified Filtered Back-Projection, where fewer lines cross the center of the image since, due to the path of back-projection, the information is more concentrated in the periphery. The filtering process still enables the identification of the regions where the spheres are located. Here, the standard method is slightly better, since the center of each peak coincides with the center of each sphere, whereas the peaks from the modified method are shifted towards the periphery (see figure 4.12). However, the latter have a better correspondence in value to the true conductivity map than the former. Therefore, the modified Filtered Back-Projection is the preferred method.
Figure 4.11: (a) True conductivity map. (b) through (g) depict the effect of a finite number of
projections. The number of projections used is one, for (b) and (e), two, for (c) and (f) and 36,
for (d) and (g). The images in the middle and bottom lines were obtained respectively using the
standard and modified Filtered Back-Projection methods.
The last result to be presented corresponds to a conductivity phantom composed of an ellipsoid with 0.3 S/m conductivity, 8 cm radius in the x direction and 3 cm in the y and z directions, containing two spherical perturbations with radius 1.5 cm and conductivities of 1 S/m and 0.7 S/m (see figure 4.13 (a)). The reconstruction was carried out using the modified Filtered Back-Projection method, where the Ram-Lak filter kernel was again used. The results are displayed
Figure 4.12: One dimensional conductivity profiles. Both true and reconstructed profiles are
shown.
Figure 4.13: Image reconstruction of a conductivity phantom. (a) depicts the true map and (b) the
reconstructed one.
in figure 4.13 (b).
In the reconstructed image, the outline of the ellipsoid as well as the two spherical perturbations are clearly visible. Although there is a good correspondence between the shape of the reconstructed phantom and the original one, in terms of conductivity it is slightly different (see figure 4.14), especially in the lower conductivity perturbation. This could be partly avoided if reference data obtained for the ellipsoid without the spherical perturbations were used. That could be seen as a calibration step and it would lead to a result similar to the one presented in figure 4.11 (g), where only the conductivity spheres would be visible.
4.2.3 Discussion
The MIT inverse problem was addressed through linear methods. The Back-Projection method
was implemented, as well as the Filtered Back-Projection along straight lines or along magnetic
Figure 4.14: One dimensional conductivity profile for the line defined by y = 0 and z = 0. The red
curve represents the reconstructed phantom and the blue curve the true one.
flux lines. It was observed that the filtering greatly reduces the inherent blurring of the Back-Projection method and that back-projecting filtered profiles along magnetic flux lines was the method that rendered the best results.
The use of the phase shift information as profiles to be back-projected proved to be a simple and quick way of inverting the problem. Although it works well for objects with simple conductivity distributions, as long as enough projections are used, it has some limitations when multiple objects with different conductivities are present. Despite being able to identify the regions where they are located, it does not provide enough spatial resolution to represent them accurately.
The main disadvantage of these methods resides in the fact that the phase shifts only contain
information about the absolute value of the conductivity. This means that no specific information
about its real or imaginary components is present, thus precluding the characterization of the
electrical properties of biological tissues.
Despite their limitations, the Back-Projection method and its variants are considered a good first approach to the problem inversion procedure, as they are conceptually simple, very quick and still lead to good results for simple objects. The results obtained for more complex conductivity distributions can still be used as a first approximation for an iterative reconstruction method, such as Gauss-Newton, hopefully accelerating the convergence process.
4.3 EIT inverse problem
In this section the EIT inverse problem is going to be tackled through two distinct methods.
The first and simpler one consists in a modified back-projection algorithm along equipotential
lines connecting drive and receiving pairs. The second one is the GREIT algorithm published
in [40], where the image reconstruction is carried out using a sensitivity matrix, calculated by
means of a random training set. The prototype simulated is the one presented in figure 3.16 and
is composed of several equally spaced electrodes placed around a circular tank with a homogeneous conductivity of 1 S/m, which acts as a reference data source. The adjacent current injection
protocol is used.
4.3.1 Equipotentials Back-Projection
Contrary to what happens in MIT, in EIT the measurements can be directly related to the impedance distribution of the medium, since the acquired electrical potentials are generated by injecting current through that medium. The image is then obtained by back-projecting the measured impedances along equipotential lines³. Similarly to what was done in the MIT scenario, these lines are calculated by first computing the electric potential for a given drive pair, using the homogeneous conductivity distribution. Afterwards, the potential is measured at every electrode position and, for each one, the line is determined by finding the points which obey the following relation,
|Vij − Vk | < ,
(4.12)
where ε is a small value to account for numerical errors, Vk is the electrical potential under electrode k and Vij is the electrical potential in the entire medium, with the ij subscripts referring to pixel indexes. For each receiving pair, the space between the two equipotential lines is used as a binary mask to which the value of the measured impedance is applied. Taking this into account, the image is reconstructed using the following equation [52]:

f̂(x, y) = (1/M) Σ_{i=1}^{M} Σ_{k=1}^{N} Tik(x, y) · [(ZmIik − ZmHik) / ZmHik],    (4.13)
where M is the number of current drive pairs and N the number of receiving pairs; ZmHik is the impedance measured using the kth receiving pair when current is injected through the homogeneous medium by the ith drive pair; ZmIik is the impedance measured in the inhomogeneous medium, under the same conditions as ZmHik; Tik(x, y) is a binary mask that connects the kth receiving pair to the ith drive pair and is therefore 1 if (x, y) belongs to the space delimited by the equipotential lines described previously, and 0 otherwise. Equation (4.13) implies that normalized difference EIT was implemented. For a given drive pair, the one-dimensional profile to be back-projected corresponds to the normalized difference between the impedances measured in the inhomogeneous and homogeneous media, carried out by all receiving pairs.
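A minimal MATLAB sketch of equation (4.13) is given below, assuming the binary masks T{i,k} were already computed from the equipotential lines and that ZH and ZI store the homogeneous and inhomogeneous measurements; all names are illustrative.

    fhat = zeros(nPix);                           % reconstructed image
    for i = 1:M                                   % current drive pairs
        for k = 1:N                               % receiving pairs
            dz   = (ZI(i,k) - ZH(i,k)) / ZH(i,k); % normalised impedance change
            fhat = fhat + T{i,k} * dz;            % back-project onto the masked region
        end
    end
    fhat = fhat / M;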
4.3.2 Graz consensus Reconstruction algorithm for EIT
The GREIT is a linear image reconstruction algorithm developed in 2009 specifically for EIT,
namely for pulmonary imaging. The algorithm was developed by a large group of experts in the
³ The full mathematical description of this method is given in [52].
field of Electrical Impedance Tomography and its full description is published in [40]. The essential
aspects are presented here.
The objective of this algorithm is to create a linear reconstruction matrix which converts a set
of measurements into the corresponding conductivity distribution. Several figures of merit, such
as uniform amplitude response, small and uniform position error, small ringing effect, uniform
resolution, limited shape deformation and high resolution, while maintaining small noise amplification and small sensitivity to electrode and boundary movement, are directly incorporated in
the reconstruction algorithm. The GREIT is based on the difference EIT, which means that the
measurements represent changes in the conductivity of a homogeneous medium.
The desired reconstruction matrix is assembled using training data consisting of small circular targets, with varying diameters, spread randomly and uniformly throughout the image plane. The training set should be larger than the number of independent measurements, which for a 16 electrode system using the adjacent current injection protocol is 104. However, it should be much larger in order to avoid training bias. For each training target xt^(k), which corresponds to a small change in the homogeneous conductivity medium, a forward problem solver is used to simulate the corresponding measurements yt^(k). In [40] the forward problem was solved using a 3D FEM with the complete electrode model. In this work, the feasibility of the same method using the 2D forward problem solver presented in section 3.4 was studied. Associated with each xt^(k) there is a desired image termed x̃t^(k), the center of which coincides with the center of the training target, but whose circular area is wider, corresponding to the inherent blurring in EIT. The uniform image resolution provided by the GREIT is based on this aspect. Figure 4.15 presents the complete training data for GREIT.
Figure 4.15: Training data for GREIT. Associated with each training target there is a set of measurements and a desired output image. The blue rectangles represent matrices, where each
column (k) is filled with the corresponding training sample. Adapted from [40].
There is also a weighting image w(k) that represents the weight given to each pixel, which enables the specification of the relative importance of the performance metrics. However, since the best selection of weights was not established in [40], a uniform weighting w(k) is used.
Taking all this into account, the reconstruction matrix R is defined as the one that minimizes
the error ε² in a least squares sense [40],

ε² = Σ_k ||x̃(k) − Ry(k)||²_{W(k)}
   = Σ_k Σ_i ( [x̃(k)]_i²[w(k)]_i² − 2[x̃(k)]_i[w(k)]_i²(Σ_j Rij[y(k)]_j) + [w(k)]_i²(Σ_j Rij[y(k)]_j)² ).    (4.14)
The k index runs over all training samples and W(k) = (diag w(k))² is a diagonal matrix representing the weights assigned to each measurement. The expression for R is found by setting the matrix derivative ∂ε²/∂Rij to zero. According to [40],

−(1/2) ∂ε²/∂Rij = Σ_k ( [x̃(k)]_i[w(k)]_i²[y(k)]_j − [w(k)]_i²(Σ_l Ril[y(k)]_l)[y(k)]_j ) = 0
                = Σ_k [x̃(k)]_i[w(k)]_i²[y(k)]_j − Σ_l Ril ( Σ_k [y(k)]_l[w(k)]_i²[y(k)]_j )
                = Aij − Σ_l Ril Bijl.    (4.15)
l
The size of A is n × m and of B is n × m × m, where n is the number of cells composing the
grid for the discretization of the domain and m is the number of measurements. One can see that
if the same weights w are used for every measurement, than B does not depend on i and hence,
the matrix R can be expressed as follows.
R = AB−1
(4.16)
With these definitions, the conductivity distribution estimate is obtained by simply multiplying the matrix R by the column vector containing the measurements. The advantage of this method is that this matrix only needs to be assembled once and can henceforth be used for every new set of measurements. The required large training set forces the forward problem solver to be very efficient, otherwise the matrix assembly process would take far too long.
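With uniform weights, the whole training and reconstruction procedure reduces to a few matrix operations. The following MATLAB sketch assumes that Xt (n x K) stores the desired training images and Yt (m x K) the corresponding simulated difference measurements; all names are illustrative rather than those of the developed software.

    A = Xt * Yt';            % n x m, the matrix A of equation (4.15) for uniform weights
    B = Yt * Yt';            % m x m, the matrix B of equation (4.15) for uniform weights
    R = A / B;               % reconstruction matrix, R = A*B^(-1) as in equation (4.16)
    % On-line imaging: one matrix-vector product per frame of measurements y
    xhat = R * (y - yRef);   % yRef holds the reference (homogeneous medium) measurements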
4.3.3 Results
In this section the results obtained with both implemented methods are presented. The first ones concern the Equipotentials Back-Projection method, namely the effect of the number of electrodes on the image reconstruction process. Similarly to what happened in the MIT case, increasing the number of electrodes increases the number of different projections and hence improves the image quality. This can be seen in figure 4.16, where the image is reconstructed using two different systems composed of 16 and 32 electrodes, respectively.
The true conductivity map consists of a single circular perturbation with a conductivity of 1.5 S/m, placed in the center of the homogeneous medium. One can see that, as expected, the image
Figure 4.16: Effect of the number of electrodes in the Equipotentials Back-Projection method. (a)
depicts the true map; (b) 16 electrodes are used; (c) 32 electrodes are used.
reconstructed using the 32 electrode system is better than the one obtained with the 16 electrode system. The back-projection lines are no longer distinguishable, in contrast with the image obtained using the 16 electrode system, where these lines are clearly visible. Although the images are slightly blurred, which is inherent to this reconstruction method and worsens as the object gets closer to the periphery [40, 53], the spatial location of the reconstructed objects coincides with the true map.
The next results compare the Back-Projection algorithm and GREIT. For that, two spherical perturbations were used, one with lower and the other with higher conductivity than the medium, and the 16 electrode system was used to perform the required measurements. The original conductivity distribution is displayed in figure 4.17 (a), and the reconstructed ones in (b) and (c). The training set required by the GREIT is composed of 520 samples, which corresponds to five times the number of independent measurements of a 16 electrode system, and each target has a conductivity of 1.5 S/m.
It is obvious that the GREIT provides a result far superior to the one obtained through the Back-Projection method, in terms of both image quality and correspondence between the reconstructed conductivities and the true ones. When multiple objects with different conductivities are present, the Back-Projection method along equipotential lines does not lead to satisfactory results. The location of the objects is still accurate, but the reconstructed values are slightly different from the original ones (see figure 4.18). Also, similarly to what happened with the Back-Projection along magnetic field lines in MIT, the information is smeared towards the periphery. This is more visible in the case of the highest conductivity perturbation, where there is a reconstructed conductivity path connecting the object to the electrodes' region that is not present in the true map. None of this happens with GREIT: minimal blurring is present, the location of the objects is accurate and the correspondence between true and reconstructed conductivities is almost perfect, all of which makes this method much more useful than the previous one. Figure 4.18 displays the one-dimensional conductivity profiles of the true and reconstructed maps, along the line that crosses the
Figure 4.17: Image reconstruction using two different methods. (a) represents the true conductivity map, (b) the image reconstructed using the Equipotentials Back-Projection method and (c)
using the GREIT.
center of each image depicted in figure 4.17, in the horizontal direction.
The final result concerns only the GREIT and its behavior when a more complex conductivity distribution is present. Since the GREIT is designed for pulmonary imaging, a conductivity phantom inspired by the work published in [54] was designed. Elliptical domains are used to represent the heart and the two lungs, and the conductivities are chosen to simulate the conductivity distribution during systole. Hence, since blood was ejected from the heart to the lungs, their conductivities are respectively equal to 0.4 S/m and 1.75 S/m. The original conductivity map is depicted in figure 4.19 (a) and the reconstructed one in (b). The training data is the same as in the previous result.
In the reconstructed image one can clearly identify the three objects composing the phantom.
Inside the region delimited by the electrodes, there is a maximum conductivity value assigned to
Figure 4.18: Comparison between the true and reconstructed one dimensional conductivity profiles. The red line represents the true conductivity distribution and the blue and green curves
correspond respectively to the reconstructions carried out using the GREIT and the Equipotentials Back-Projection method.
Figure 4.19: Image reconstruction of a conductivity phantom representing the heart and lungs.
(a) represents the true conductivity map, (b) the image reconstructed using the GREIT based on
520 training data samples.
the lungs’ region of 1.713 S/m and a minimum value located in the heart region of 0.274 S/m.
This corresponds respectively to relative errors of 2.11 % and 31.5 %. Although the first error is
perfectly acceptable, the second one is not. This discrepancy of errors can perhaps be explained
by the fact that the true conductivity value assigned to the lungs is much closer to that of the
training targets and hence the algorithm may be more tuned to represent higher conductivity
objects than lower conductivity ones. However, the locations of the virtual heart and lungs are
well approximated by the reconstruction.
4.3.4 Discussion
Two different linear image reconstruction methods were implemented and tested. The first one is the Equipotentials Back-Projection method, which is the simplest in concept but works relatively well for simple conductivity distributions. The main aspect that makes its usage so popular is that it is blind to the impedance of the skin-electrode interface and hence only requires knowledge of the electrodes' locations and the injected current in order to compute the back-projection lines. However, the results it provides have very poor resolution, and therefore different methods should be properly developed and tested before the EIT technology is widely integrated into the clinical world.
One such method is the GREIT, which is not very heavy in terms of computational requirements, but leads to results that greatly surpass the ones obtained by the Back-Projection method. Its main disadvantage is that it relies on a forward model to compute the reconstruction matrix. Although it works very well for simulated data, its clinical application can be somewhat compromised, since the electrodes' contact impedances have to be properly modeled, which is not a trivial thing to do for the skin-electrode interface. This problem is avoided in MIT and hence this image reconstruction algorithm is a very good candidate to solve the inverse problem of that system.
Both reconstruction methods allow real-time imaging: for a 16 electrode system, the Back-Projection method leads to a frame rate of around 10/s and the GREIT to around 150/s. This difference lies in the fact that the first method requires back-projecting and summing the one-dimensional profiles, while the second one is based on the multiplication of a previously stored matrix by a vector containing the measurements.
It should also be noted that, contrary to what happened in the MIT scenario, in this case it is possible to reconstruct the complex value of the conductivity, thus enabling the characterization of the passive electromagnetic properties of the biological tissues.
5 Conclusions and Future Developments
Throughout this work, two different imaging technologies were explored, Magnetic Induction Tomography and Electrical Impedance Tomography, and for each, two very different areas were approached: the forward problem and the inverse problem.
The forward problem for both cases was tackled using the Finite Integration Technique. For the MIT, an adaptive multi-level mesh generation algorithm was developed that creates an octree-type grid, which allows concentrating high-resolution regions solely where they are needed, reducing by as much as a factor of four the number of cells required to discretize the domain. For the EIT, due to the size of the problem, a 2D regular grid with no subgridding regions was implemented. Both forward problem solvers are automatic in the sense that they only require the specification of the problem geometry and its physical parameters. The results obtained are in accordance with the consulted bibliography and validate the developed solvers. However, for both cases, analytical validations are still required in order to obtain a quantitative measure of the numerical errors introduced. Specifically for the MIT case, the next step after this analytical validation consists in connecting the forward problem solver to a 3D modeling program with a graphical user interface, taking the developed software a step closer to a commercial version. Also in the MIT case, it is desirable to extend the formulation used to include multi-harmonic signals. The use of such signals in the source would allow the reconstruction of the conductivity map at different frequencies, which in theory could enhance the contrast and the spatial resolution, since the electromagnetic properties of the tissues vary with frequency (refer to Table 2.1).
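The idea behind the adaptive octree grid can be summarized by a small recursive sketch. The refinement criterion (splitting a cell whenever the conductivity sampled at its corners varies by more than a tolerance), the function names and the example phantom below are assumptions for illustration only, not the rules used by the developed solver:

    def refine(cell, sigma, max_depth, tol, depth=0, out=None):
        """cell = (x0, y0, z0, size); sigma(x, y, z) returns the conductivity."""
        if out is None:
            out = []
        x0, y0, z0, s = cell
        # Sample the conductivity at the eight corners of the cell.
        corners = [sigma(x0 + i * s, y0 + j * s, z0 + k * s)
                   for i in (0, 1) for j in (0, 1) for k in (0, 1)]
        if depth == max_depth or max(corners) - min(corners) < tol:
            out.append(cell)                      # keep the cell as a leaf
        else:
            h = s / 2                             # split the cell into eight children
            for i in (0, 1):
                for j in (0, 1):
                    for k in (0, 1):
                        refine((x0 + i * h, y0 + j * h, z0 + k * h, h),
                               sigma, max_depth, tol, depth + 1, out)
        return out

    # Example: high resolution is generated only around a conductive sphere.
    sphere = lambda x, y, z: 1.0 if (x - 0.5)**2 + (y - 0.5)**2 + (z - 0.5)**2 < 0.04 else 0.0
    leaves = refine((0.0, 0.0, 0.0, 1.0), sphere, max_depth=4, tol=0.5)

Cells far from the conductivity contrast remain coarse, which is what yields the reduction in the total number of cells mentioned above.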
The inverse problem, which consists in the image reconstruction process, was solved using linear techniques for both imaging systems. For the MIT case, the Back-Projection and Filtered Back-Projection methods along straight lines or along magnetic flux lines were implemented and tested. It was found that, although they work well for objects with simple conductivity distributions, they have serious limitations when more complex distributions containing multiple conductivities are present. Also, because only the phase-shift information is used for the problem inversion, the characterization of the passive electromagnetic properties of the object is compromised, since there is only information about the absolute value of the conductivity but not about its real and imaginary components. Therefore, its application to a biological sample would yield insufficient results for the clinical use of this technology, and new image reconstruction algorithms need to be developed and tested. Regarding the EIT inverse problem, a variant of the Filtered Back-Projection method based on equipotential lines was implemented, as well as the GREIT, which relies on a training data set to compute a reconstruction matrix that is used for image reconstruction. It was observed that the GREIT provides much better results than those obtained by the Back-Projection method, in terms of image quality, spatial resolution and correspondence between reconstructed and original values. A very interesting topic to study would be the feasibility of applying the GREIT to the MIT. If this were possible, its application to experimental data would be much easier than in the EIT case, since the problem of the contact impedance at the skin-electrode interface would not be present. It remains to test every image reconstruction algorithm on simulated noisy measurements and on experimental data.
A completely new approach for the MIT was conceived during this thesis. It consists in the application of a technique called vector tomography (full details can be found in, e.g., [55, 56]), which is based on two orthogonal measurements of a solenoidal vector field, the back-projection of each measurement, and the recovery of the third component through the null-divergence equation. This could be used directly for the reconstruction of the residual magnetic field. If an extra measurement of the electric scalar potential were added, a much simpler method for the reconstruction of the conductivity map could be obtained. This idea is still at the conception stage, but it is considered an area worth exploring.
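As an illustration of the last step (the notation is generic and not taken from [55, 56]): once the transverse components B_x and B_y of the residual field have been obtained by back-projection, the solenoidal condition \nabla \cdot \mathbf{B} = 0 determines the remaining component up to its value on a reference plane,

\[
\frac{\partial B_z}{\partial z} = -\left(\frac{\partial B_x}{\partial x} + \frac{\partial B_y}{\partial y}\right)
\quad\Rightarrow\quad
B_z(x,y,z) = B_z(x,y,z_0) - \int_{z_0}^{z}\left(\frac{\partial B_x}{\partial x} + \frac{\partial B_y}{\partial y}\right)\mathrm{d}z'.
\]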
Taking everything into account, one can conclude that the objectives were successfully accomplished, allowing the author to come into contact with two state-of-the-art imaging technologies and opening up new perspectives of research and development in this very promising area.
Bibliography
[1] Inez Frerichs, "Electrical impedance tomography (EIT) in applications related to lung and ventilation: a review of experimental and clinical activities", Physiol. Meas., no. 21, pp. R1-R21, 2000
[2] R. H. Bayford, "Bioimpedance Tomography (Electrical Impedance Tomography)", Annu. Rev. Biomed. Eng., 2006
[3] M. Soleimani and W. R. B Lionheart, ”Absolute Conductivity Reconstruction in Magnetic
Induction Tomography Using a Nonlinear Method”, IEEE Transactions on Medical Imaging,
vol. 25, no. 12, pp. 1521-1530, December 2006
[4] C. Cohen-Bacrie, Y. Goussard and R. Guardo, ”Regularized Reconstruction in Electrical
Impedance Tomography Using a Variance Uniformization Constraint”, IEEE Transactions on
Medical Imaging, vol. 16, no. 5, pp. 562-571, October 1997
[5] Jan Nyboer et al., "Electrical Impedance Plethysmography: A Physical and Physiologic Approach to Peripheral Vascular Study", Journal of the American Heart Association, vol. II, pp. 811-821, December 1950
[6] Ursula G. Kyle et al., ”Bioelectrical impedance analysis part I: review of principles and methods”, Clinical Nutrition, vol. 23, no. 5, pp. 1226-1243, October 2004
[7] A. Korjenevsky et al., "Magnetic induction tomography: experimental realization", Physiol. Meas., vol. 21, pp. 89-94, 2000
[8] Griffiths H et al., "Magnetic induction tomography: a measuring system for biological tissues", Ann. New York Acad. Sci., vol. 873, no. 3, pp. 35-45, 1999
[9] M. Soleimani, C. Ktistis, X. Ma, W. Yin, W.R.B Lionheart and A. J. Peyton, ”Magnetic Induction Tomography: Image reconstruction on experimental data from various applications”
[10] Griffiths H, ”Magnetic induction tomography”, Meas. Sci. Technol., vol. 12, no. 11, pp. 26-31,
2001
[11] Bras, N. B., Martins, R. C., Serra, A. C., "Improvements in the Measurement System of a Biological Magnetic Induction Tomographical Experimental Setup", Journal of Physics: Conference Series 238, 2010
[12] David John Kyte, ”Magnetic Induction Tomography and Techniques for Eddy-Current Imaging”, Doctoral thesis submitted to the Faculty of Mathematical and Physical Sciences of the
University of Surrey
[13] H. Hong and M. D. Fox, ”Magnetic Backprojection Imaging”, IEEE, 1993
[14] Robert Merwa and Hermann Scharfetter, ”Magnetic induction tomography: comparison of
the image quality using different types of receivers”,Physiol. Meas., vol. 29, pp. 417-429,
2008.
[15] H. Scharfetter, K. Hollaus, J. Rossel-Ferrer, and R. Merwa, ”Single-Step 3-D Image Reconstruction in Magnetic Induction Tomography: Theoretical Limits of Spatial Resolution and
Contrast to Noise Ratio”, Annals of Biomedical Engineering, vol. 34, no. 11, pp. 1786-1798,
November 2006
[16] H. Scharfetter, R. Casanas, and J. Rosell, "Biological tissue characterization by magnetic induction spectroscopy (MIS): requirements and limitations", IEEE Transactions on Biomedical Engineering, vol. 50, no. 7, pp. 870-880, 2003
[17] H. Scharfetter, A. Koestinger, and S. I. L. Scharfetter, "Hardware for quasi-single-shot multifrequency magnetic induction tomography (MIT): the Graz Mk2 system", Physiol. Meas., vol. 15, pp. S431-S443, 2008
[18] S. Hermann, R. Stephan, M. Robert, O. Biro, and H. Karl, "Planar gradiometer for magnetic induction tomography (MIT): theoretical and experimental sensitivity maps for a low-contrast phantom", Physiol. Meas., vol. 25, no. 1, pp. 325-333, 2004
[19] O. Biro, K. Preis, K. R. Richter, "On the Use of the Magnetic Vector Potential in the Nodal and Edge Finite Element Analysis of 3D Magnetostatic Problems", IEEE Transactions on Magnetics, vol. 32, no. 3, pp. 651-654, May 1996
[20] O. Biro, ”Edge element formulations of eddy current problems”, Comput. Methods Appl.
Mech. Engrg.,vol. 169, pp. 391-405, 1999
[21] M. Soleimani et al., ”Sensitivity Analysis for Magnetic Induction Tomography”,IEEE, 2004
[22] M. H. Pham and A. J. Peyton, ”A model for the forward problem in magnetic induction tomography using boundary integral equations”, IEEE Transactions on Magnetics, vol. 44, no. 10,
pp. 2262-2267, 2008
[23] M. Clemens and T. Weiland, "Magnetic Field Simulation Using Conformal FIT Formulations", IEEE Transactions on Magnetics, vol. 38, no. 2, pp. 389-392, March 2002
[24] J. Junak and U. V. Rienen, "Application of Conformal FIT for Eddy Current Calculation in Coils of a Superconducting Magnet System", IEEE Transactions on Magnetics, vol. 40, no. 2, pp. 671-674, March 2004
[25] N. B. Bandeira, ”Magnetic Induction Tomography - New Approaches Towards a Higher Resolution Biomedical Imaging System”, Doctoral Program In Electrical and Computer Engineering, 2009
[26] R. Pethig and D. B. Kell, ”The passive electrical properties of biological systems: their significance in physiology, biophysics and biotechnology”, Phys. Med. Biol., vol. 32, no. 8, pp.
933-970, 1987
[27] R. Casanova, A. Silva, and A. R. Borges, ”MIT image reconstruction based on edgepreserving regularization,” Physiol. Meas., vol. 25, no. 1, pp. 195-207, 2004
[28] Barber et al., ”Electrical Impedance Tomography”, United States Patent, no. 5,626,146, May
6, 1997
[29] Brown BH, Seager AD. ”The Sheffield data collection system”, Clin. Phys. Physiol. Meas.,
1987
[30] Webster JG., ”Electrical Impedance Tomography”. Bristol/New York: Adam Hilger, 1990
[31] D. Holder, ”Part 1 of Electrical Impedance Tomography: Methods, History and Applications”,
Institute of Physics Publishing, pp. 3-64, 2004
[32] Wilson AJ, Milnes P, Waterworth AR, Smallwood RH, Brown BH, "Mk3.5: a modular, multifrequency successor to the Mk3a EIS/EIT system", Physiol. Meas., 2001
[33] Liu N, Saulnier G, Newell JC, Kao T, ”ACT4: a high-precision, multifrequency electrical
impedance tomography”, Conf. Biomed. Appl. Elec. Impedance Tomography, 2005
[34] Halter R, Hartov A, Paulsen KD, "High frequency EIT for breast imaging", Conf. Biomed.
Appl. Elec. Impedance Tomography, 2005
[35] Yerworth RJ, Bayford RH, Cusick G, Conway M, Holder DS, "Design and performance of the UCLH Mark 1b 64 channel electrical impedance tomography (EIT) system, optimised for imaging brain function", Physiol. Meas., vol. 23, no. 1, pp. 149-158, 2002
[36] Sikora J, Arridge SR, Bayford RH, Horesh L, "The application of hybrid BEM/FEM methods to solve electrical impedance tomography forward problem for the human head", Proc. X ICEBI and V EIT, June 2004
[37] Barber DC, Seagar AD, ”Fast reconstruction of resistance images”, Clin. Phys. Physiol.
Meas., vol. 8, pp.47-54, 1987
[38] Saulnier GJ, Blue SR, Isaacson D, ”Electrical impedance tomography”, IEEE Sig. Process.
Mag., 2001
[39] Gibson A, ”Electrical impedance tomography of human brain function”, Clinical neurophysiology, PhD thesis, 2000.
[40] A. Adler, J. Arnold, R. Bayford, A. Borsic, B. Brown, P. Dixon, T. Faes, I. Frerichs, H. Gagnon,
Y. Garber, B. Grychtol, G. Hahn, W. Lionheart, A. Malik, R. Patterson, J. Stocks, A. Tizzard, N. Weiler, G. Wolf, ”GREIT: a unified approach to 2D linear EIT reconstruction of lung
images”, 2009
[41] O. Biro, ”Use of a two-component vector potential for 3-D eddy current calculations”, IEEE
transactions on magnetics, vol. 24, no. 1, pp. 102-105, January 1988
[42] Soni NK, Paulsen KD, Dehghani H, Hartov A, "A 3-D reconstruction algorithm for EIT planar electrode arrays", Conf. Biomed. Appl. Elec. Impedance Tomography, 2005
[43] D. Aruliah, U. Ascher, E. Haber, D. Oldenburg, ”A method for the forward modelling of 3D
electromagnetic quasi-static problems”, August 1999
[44] M. Clemens and T. Weiland, ”Discrete electromagnetism with the finite integration technique”,
Progress In Electromagnetics Research, PIER 32, pp. 65-87, 2001
[45] E. Haber and S. Heldmann, ”An octree multigrid method for quasi-static maxwell’s equations
with highly discontinuous coefficients”, Journal of Computational Physics, vol. 223, no. 2, pp.
783-796, 2007.
[46] N. B. Bras, R. C. Martins, A. C. Serra, and A. L. Ribeiro, ”A fast forward problem solver for
the reconstruction of biological maps in magnetic induction tomography”, IEEE Transactions
on Magnetics, submitted for publication.
[47] C. V. Dodd and W. E. Deeds, ”Analytical solutions to eddy-current probe-coil problems”,
Journal of Applied Physics, vol. 39, no. 6, pp. 28-29, 1968
[48] A. A. Kolyshkin and R. Vaillancourt, ”Series solution of an eddy-current problem for a sphere
with varying conductivity and permeability profiles”, IEEE Transactions on Magnetics, vol. 35,
no. 6, pp. 4445-4451, 1999
[49] M. A. Plonus, ”Applied Electromagnetics”, McGraw-Hill Book Company, 1978
[50] D. K. Cheng, ”Field and Wave Electromagnetics”, Addison Wesley Publishing Company,
1989
[51] R. A. Brooks and G. Di Chiro, "Principles of computer assisted tomography (CAT) in radiographic and radioisotope imaging", Phys. Med. Biol., vol. 21, no. 5, pp. 689-732, 1976
[52] R. Guardo, C. Boulay, B. Murray and M. Bertrand, ”An Experimental Study in Electrical Impedance Tomography Using Backprojection Reconstruction”, IEEE Transactions on
Biomedical Engineering, vol. 38, no. 7, July 1991
[53] W. Lionheart, ”Reconstruction Algorithms for Permittivity and Conductivity Imaging”, MIMS
EPrint: 2008.43, 2001
[54] J. Mueller, S. Siltanen and D. Isaacson, ”A Direct Reconstruction Algorithm for Electrical
Impedance Tomography”, IEEE Transactions on Medical Imaging, vol. 21, no. 6, June 2002
[55] J. Prince, "Convolution Back-Projection Formulas for 3-D Vector Tomography with Application to MRI", IEEE Transactions on Image Processing, September 1996
[56] S.J. Lade, D. Paganin, M.J. Morgan, ”3-D Vector tomography of Doppler-transformed fields
by filtered-backprojection”, Optics Communications, vol. 253, pp. 382-391, 2005.