CZECH TECHNICAL UNIVERSITY IN PRAGUE
Faculty of Nuclear Sciences and Physical Engineering
Department of Mathematics
Development of a Software Instrument for
MRI Data Manipulation and Visualization
MASTER'S THESIS
2013
Jakub Flaska, BS
Contents

1 Introduction
  1.1 Magnetic Resonance Imaging
  1.2 PACS
  1.3 DICOM Standard
    1.3.1 DICOM File Format
  1.4 User Interface of DICOM Viewers
  1.5 Requirements on the Developed Application
    1.5.1 Frequent Features of DICOM Viewers
    1.5.2 Conclusion

2 Developed Application
  2.1 The Original Implementation of the Application
  2.2 Description of the User Interface
  2.3 Object Model of the Application
    2.3.1 Rendering Part of the Application
    2.3.2 Supporting Classes

3 Source-Code Refactoring
  3.1 Indication for Refactoring
    3.1.1 Source-code Duplicity
    3.1.2 Large Method
    3.1.3 Large Class
    3.1.4 Multiple Lines Update
    3.1.5 Divergent Change
    3.1.6 Data Clumps

4 Reimplementation of Rendering
  4.1 Reasons for Reimplementation of the Rendering Part
    4.1.1 Portability of the Application
  4.2 Characteristics of OpenGL
  4.3 The Original Implementation of Rendering
    4.3.1 Workspace Rendering
    4.3.2 Image Rendering
  4.4 Reimplementation of OpenGL Tasks
    4.4.1 Image Reconstruction
    4.4.2 Interpolation of Image Data
    4.4.3 Rendering of Graphic Objects
    4.4.4 Brightness and Contrast
  4.5 Software Design of the Rendering Implementation
    4.5.1 Encapsulating the New Rendering Implementation
    4.5.2 Refactoring of Multi-planar Reconstruction
    4.5.3 Refactoring of CImage Class
  4.6 Conclusion

5 Interface for Image Processing Extensions
  5.1 Image Segmentation Algorithms
    5.1.1 Image Segmentation Based on Ford-Fulkerson Algorithm
    5.1.2 Image Segmentation Based on Level-set Methods
  5.2 Assembling User Interface according to the Extension's Requirements
  5.3 Passing Parameters to Extensions
  5.4 Implementation of the Extensions Interface
    5.4.1 Structure of the Interface
    5.4.2 The Schema of the XML File with a List of Input Parameters
    5.4.3 Passing Parameters to Extensions
    5.4.4 User Interface for Declaration of an Image Map
    5.4.5 Multi-thread Tools of Qt Library
  5.5 Conclusion

6 Conclusion

7 Appendix: Dicom-Presenter User's Guide
  7.1 The User Interface
  7.2 Opening Images
  7.3 Image Transformations
  7.4 Image Layout
  7.5 Moving and Resizing Images
  7.6 Managing Workspaces
  7.7 Multi-Planar Reconstruction
  7.8 Animations
  7.9 Image Processing Extensions
Chapter 1
Introduction
The subject of this thesis is the development of an application for displaying data captured by Magnetic Resonance Imaging. The goal is to deliver an MRI data viewer tailored to the needs of colleagues at the IKEM radiology department.¹ This thesis builds on a previous version of the application developed in [16] and [6].

The first objective of the thesis is a reimplementation of the application's rendering engine. The previous version was based on the OpenGL library. The presence of OpenGL brought dependencies on three other image processing libraries into the application, and the hardware requirements of those libraries caused problems with the application's portability. All tasks previously performed by the included libraries will be implemented as part of the application itself.

The second goal is the development of an interface for application extensions. The application should become a handy and useful front end for image segmentation algorithms used in medical imaging. Together with the implementation of the interface, two image segmentation algorithms [12], [11] developed within the FNSPE faculty will be converted to a format suitable for the interface.

The last objective of this thesis is extensive testing and issue resolution to make the application deployable.

Note: Since the developed application is referenced many times in the following text, it is often called by its name, Dicom-Presenter.

¹ Institute for Clinical and Experimental Medicine, Radiodiagnostic and Interventional Radiology Department.
1.1 Magnetic Resonance Imaging

Magnetic Resonance Imaging (MRI) is a medical imaging method focused on displaying internal tissues and organs. It is based on the principle of absorption and emission of energy by an electromagnetic dipole.

Physical Principles of MRI

First, the MRI device applies an oscillating magnetic field to the patient's body to set hydrogen nuclei oscillating. Once the nuclei start oscillating, the field is turned off and the magnetic field emitted by the oscillating hydrogen dipoles is observed by receiver coils. By applying an additional magnetic field it is possible to estimate the positions of the magnetic dipoles in space. Different decay times of the residual oscillation make it possible to differentiate between tissues [26], [5].

Comparison to Computed Tomography

Computed Tomography (CT) is an older medical imaging method. Unlike MRI, the image is acquired by x-ray. Thus, CT is suitable for detecting tissues with a different atomic number than their surroundings. MRI can also detect artifacts in soft tissues, since it offers better contrast resolution. Because MRI does not use ionizing radiation, it is also suitable for repeated scans within a short period of time [30].
1.2 PACS

Modern medical imaging is generally realized by multi-device systems called PACS (Picture Archiving and Communication System). PACS is a set of devices connected through a computer network. In general it consists of parts for image acquisition, viewing and storage [19].

PACS systems eliminate the necessity of issuing a hard copy of acquired image data and also allow remote access to the image data.

The existence of PACS systems required a standardization of medical imaging data formats in order to ensure correct communication between its devices. This led to the development of the DICOM standard, described below [19].
1.3 DICOM Standard

The standard describes the storage, transfer and display of medical images. It was developed in order to bring compatibility between various PACS systems [19]. The standard has been developed for more than 20 years and is maintained by the National Electrical Manufacturers Association². The name of the standard is an abbreviation for Digital Imaging and Communications in Medicine.

1.3.1 DICOM File Format
The central part of the DICOM standard is a file format for storing images acquired by PACS devices. The file format groups the image data together with additional information, in order to allow an exact reconstruction of the image on another PACS device and also to prevent mismatching images of different patients. The provided data can be divided into three main categories [19]:

• Medical Data. The DICOM file format provides more than 2000 standardized attributes describing the patient, such as the patient's name, sex, age, weight, diagnosis, etc.

• Image-Acquisition Parameters. The DICOM file format offers full support for describing the technical properties of the acquired image. Parameters such as the following are included: the patient's orientation relative to the picture, the physical size of the captured area, the thickness of a slice, image exposure parameters and others.

• Acquiring Device Description. A full description of the medical imaging device is also included (name of the device, resolution, color depth, calibration parameters, etc.).
1.4 User Interface of DICOM Viewers

The goal of this section is to analyze the general user interface of currently available DICOM viewing software and compare it with the interface provided by Dicom-Presenter.

Ten freeware applications have been tested in order to obtain general characteristics of DICOM viewing software:
• Agnosco [27]
• Ginkgo CADx [33]
• InVesalius [37]
• JiveX [39]
• MedINRIA [40]
• Onis [42]
• RadiAnt [48]
• Seg3D2 [49]
• synedra View Personal [51]
• Weasis [52]

² The National Electrical Manufacturers Association is a group of US companies founded to maintain common standards used in retail electronics.
1.5 Requirements on the Developed Application

The developed application should provide the general functionality of DICOM viewing software and should also offer the following features:

• Displaying images using a Multi-planar Reconstruction view.
• Allowing concurrent display of multiple images.
• Offering simultaneous creation of multiple workspaces with different image setups.
• Allowing users to fully customize the image layout in the output scene.
1.5.1 Frequent Features of DICOM Viewers

All the tested applications provide display of the input image and the basic operations: zoom, translation, windowing³ and iteration through the slices of the image data. The following list includes other features generally supported in DICOM displaying software.

Opening Multiple Images

Some DICOM viewers do not support opening multiple DICOM studies in the application. The user cannot switch between multiple images, but can view only one image at a time.

Placing Multiple Images in the Output Screen

Even if a tested application offers opening multiple images, some DICOM viewers do not support simultaneous display of the opened images. The user can open multiple images, but needs to switch between them, displaying only one image at a time.

Perpendicular Plane View

Not all tested programs offer displaying the opened spatial image in a different plane than the one provided in the source files.

Single Slice View

Most of the tested programs support this option. The program allows displaying the requested DICOM image filling the entire application output window.

Multi-Planar View

A three-dimensional image can be viewed using Multi-planar Reconstruction. The application output window is divided into three parts, displaying the image in three perpendicular planes.

³ Windowing is a term used in radiology, referring to brightness and contrast adjustment [3].
Grid View / Multiple Images

This property describes whether it is possible to arrange the displayed images into a rectangular grid.

Customizable Layout View

Another possibility for viewing multiple pictures in a workspace is an option to freely customize the positions and sizes of all images.

Mouse Control of Image Enhancement

The tested application offers image manipulation that can be achieved easily with the mouse.

Measuring Tool

The viewer offers measurement of distances and angles in the displayed picture.

Customizable Mouse Control

If the viewer has the previous property, the application additionally allows custom assignment of image enhancement actions to mouse buttons.
1.5.2 Conclusion

The feature of opening multiple workspaces is rarely supported among freeware DICOM viewers. A fully customizable image layout was not supported by any of the tested viewers. These facts explain why Dicom-Presenter is being developed.

On the other hand, Dicom-Presenter does not provide measuring tools, which are often supported by DICOM viewing software.
Table 1.1: The support of the most common GUI features among the tested DICOM viewers.

                              Agnosco  Ginkgo  InVesalius  JiveX  Medinria
  Version Tested              2.3.5    3.2.0   Beta 3.0    4.5    1.9.0
  Multiple Images/Opened      No       Yes     No          Yes    Yes
  Multiple Images/Workspace   No       No      No          No     No
  Different Plane View        No       No      Yes         No     Yes
  Single Slice View           Yes      Yes     Yes         Yes    Yes
  Multi-Planar View           No       No      Yes         No     Yes
  Grid View                   No       Yes     No          No     No
  Customizable Layout View    No       No      No          No     No
  Mouse Control of Img. Enh.  No       Yes     Yes         Yes    Yes
  Measuring Tools             No       Yes     Yes         Yes    No
  Customizable Mouse Control  No       Yes     Yes         Yes    Yes

                              Onis   RadiAnt  Seg3D2  synedra  Weasis
  Version Tested              2.4.2  1.1.8    2.1.4   3.2.0    1.0.8
  Multiple Images/Opened      Yes    Yes      No      Yes      Yes
  Multiple Images/Workspace   No     Yes      No      Yes      No
  Different Plane View        No     Yes      Yes     Yes      No
  Single Slice View           Yes    Yes      Yes     Yes      Yes
  Multi-Planar View           No     Yes      Yes     Yes      No
  Grid View                   Yes    Yes      Yes     Yes      Yes
  Customizable Layout View    No     No       No      No       No
  Mouse Control of Img. Enh.  Yes    Yes      No      Yes      Yes
  Measuring Tools             Yes    Yes      No      No       Yes
  Customizable Mouse Control  Yes    Yes      No      Yes      Yes
Chapter 2
Developed Application
2.1 The Original Implementation of the Application

C++ was chosen as the programming language for Dicom-Presenter's development. It was preferred over other languages used for application development due to its better performance in image processing (also examined in [1]).

The Qt library was used as the application framework for programming the application's user interface. Qt is a well-documented framework and offers multi-platform development [5].

Because Dicom-Presenter works primarily with image data, the implementation was based on OpenGL. The library moves image processing responsibilities from the CPU to the GPU.
2.2 Description of the User Interface

The user interface of the application consists of four parts:

Workspace is the output window where the studied images are displayed. The application allows displaying several images at a time in the Workspace. The user can customize the layout of the pictures in the Workspace.

Image Explorer is a list of all DICOM pictures opened in the application. The Image Explorer allows placing selected images into the Rendering Scene.

Workspace Explorer is a list of all opened workspaces. It allows users to switch between multiple opened workspaces.

Information Panel provides information about the selected image or workspace. It allows numeric adjustment of image parameters such as brightness, contrast, zoom, etc.
Figure 2.1: Elements of Dicom-Presenter's user interface.
2.3 Object Model of the Application

The following text introduces the object model of the original implementation of the application (see Fig. 2.2). Understanding the object model will ease the description of the modifications made later in the text.

The key part of Dicom-Presenter's object model is the part responsible for rendering. The GUI elements of Dicom-Presenter are represented by objects organized in a tree structure. The design is based on the design patterns [9] listed below.

Composite

The idea of arranging graphic objects into a tree structure is based on the Composite design pattern. It allows working with a single graphic object or a group of objects without any difference.

Observer

The Observer pattern is used for distributing events through the object model. A member of the tree structure informs its child elements about received events.
Figure 2.2: Object model of the rendering part of the application.
Prototype

The Prototype design pattern simplifies the instantiation of image objects. Images viewed in a workspace are clones of identical objects displayed in the image explorer.

Singleton

The very first object in the tree hierarchy of the rendering module is a singleton. Therefore, events can be sent to the rendering module from any point of the application.
2.3.1 Rendering Part of the Application

The following text introduces the members of the rendering module. Each class except CObject and workspaceManager represents a graphic object in Dicom-Presenter.

CObject is a base class inherited by all graphic objects. It defines an interface common to all graphic objects. The class also implements supplementary methods used for rendering objects, such as rendering borders or storing an object's size and position.

CWidget represents the rendering window of Dicom-Presenter. It is the root element of the tree hierarchy. The class captures application events related to rendering and forwards them down the tree structure.
CImage is a class representing a two-dimensional image rendered to the output scene. The responsibilities of the class can be roughly divided into two groups:

Evaluation of mouse events. Image objects receive mouse events from the tree hierarchy. All scenarios of image manipulation start with a mouse click. An object evaluates the event according to the mouse position and switches into a predefined state. Subsequent mouse events received by the image are evaluated according to the object's state.

Rendering the image. The rendering process starts by obtaining the correct image slice from the attached three-dimensional data and applying the appropriate color transformation. Then the image is zoomed and cropped according to the user's request. Finally, the cropped image is rendered to its position in the output scene.
CWorkspace is a class wrapping multiple opened images and their layout into one object. The existence of this class eases communication between the CWidget object and CImage instances.

workspaceManager and CWorkspaceExplorer. These two classes offer basic control over the opened workspaces. The first class is a registry of CWorkspace instances; the other offers a graphical user interface for manipulating these objects.

CImageExplorer is a registry of all opened images. The class also provides a user interface for managing the images.

CWorkspaceSnapshot represents a preview of a workspace in the workspace explorer. It implements the rendering of the preview and the evaluation of incoming mouse events.
2.3.2 Supporting Classes

The following text describes classes which have a supporting role in the object model.

Information Panel and Main Application Window

MainWindow is a class representing the application's main widget. It contains two child widgets: the main rendering window (the CWidget class) and the InfoPanel class.

InfoPanel is a class providing supplementary control elements. The class is responsible for the layout of the elements and their connection to the rest of the application.
Figure 2.3: Classes of Dicom-Presenter for manipulation with DICOM data.
DICOM manipulation

The module is responsible for loading image data from DICOM files (see Fig. 2.3). The raw image data, obtained using the DCMTK library, are converted into a format acceptable to the libraries used for rendering.

Dicom3DFrames is responsible for converting the input data so that it can be processed by the OpenGL library.

DicomHeader is an adapter providing a simplified interface for accessing the header of a DICOM file.

Dicom3DTexture is a wrapper of the preceding classes.

Dicom3DTextureManager is a registry of all opened DICOM studies. It provides access to all Dicom3DTexture objects.
Workspace Images Layout

The module controls the positions of images in a workspace. It is realized by two sibling classes, each implementing a unique layout policy. A workspace object contains an instance of one of these two classes.

All requests to move an image are supervised by the layout-controlling object. The object adjusts the positions of the images in the workspace, or rejects the request completely and moves the image back to its original position.

Figure 2.4: Classes responsible for a layout of displayed images.
Animations

The module provides display of DICOM images consisting of frames captured at multiple points in time.

Animation is a class attached to an image object. It provides a public method which updates the content of the attached image.

AnimationManager is a singleton containing a list of all Animation objects. A public method of the class enforces the mentioned update of all Animation objects. The method can be called periodically by a QTimer object to perform a continuous animation.
Settings

Settings is a static class containing parameters describing the application's appearance, such as mouse sensitivity, border colors and others.
Figure 2.5: The Animations module.
Chapter 3

Source-Code Refactoring

Refactoring is a technique of working with an application's source code. It is a process of improving the quality of the source code without changing the application's behavior. The goal of refactoring is not to fulfill application requirements, but to restructure the existing source code into a clearer and better-organized form. Refactoring contributes to an easier understanding of an application's source code by improving the application's design and bringing logic into the code. It can be considered an application of structured programming paradigms to the source code.

In a real-world environment, applications often evolve continuously beyond the original requirements. The application's design is based on the original requirements, and new modifications may not fit easily into that design, making the application's source code overly complicated. Therefore, software companies often spend part of their budget on source-code maintenance.

Unmaintained and complex source code slows down the development process. Unclear source code requires software engineers to spend more time understanding it and also provides more opportunities for mistakes. Spending extra time on code review before starting further development may turn out to be more economical than starting the development immediately and then debugging the application [13], [7].

A basic knowledge of refactoring techniques will be useful for the reimplementation of the application's rendering engine, described in Chapter 4. It will also help to deliver a correct object model of the developed plugin interface.
3.1 Indication for Refactoring

The following text describes the most common problems in source code which are the subject of refactoring [7].

3.1.1 Source-code Duplicity

If there are two blocks in the source code which are very similar, it is recommended to unify them and extract them into a subroutine. This makes the source code shorter and easier to read. Modifying duplicated source code is also more time-consuming, and it is often a place for mistakes [7].
3.1.2 Large Method

Large blocks of source code are difficult to read. The source code should be divided into subroutines. If the subroutines are precisely named, the name of a routine describes its content and the code becomes easier to understand [7].

The main complications when dividing a block of code into subroutines are caused by local variables used across the divided parts. Consider dividing a block into two subroutines. If there is a variable used in both parts, it can be returned from the first routine and passed to the second one as a parameter. If there is a need to pass more than one variable, the variables can be wrapped into an object. It is often useful to reduce the number of local variables before starting the dividing process. This can be done by replacing the variables with a function call (see Listing 3.1). If the code is still too complex and the variables can be neither separated nor passed, the block can be extracted into an object containing these subroutines. The temporary variables then become member variables of the object [7].
3.1.3 Large Class

If a class handles too many tasks and becomes large, it becomes a potential place for unclear code. This can often be detected by the class having too many instance variables. Related methods should be found and extracted into a separate class. The new class can be a parent, a child, or may have no relation to the original class. If a class is complex and cannot easily be divided, a useful trick is to extract the interface of the class: the functions accessed by other objects are extracted into a parent class for each use of the original class.
void printPayroll() {
    int monthSalary = hoursWorked * hourSalary;
    print("Salary: %d", monthSalary);
    print("Net Salary: %f", monthSalary * (1 - taxRate));
}
---
void printPayroll() {
    printSalary();
    printNetSalary();
}
void printSalary() {
    print("Salary: %d", getSalary());
}
void printNetSalary() {
    print("Net Salary: %f", getSalary() * (1 - taxRate));
}
int getSalary() {
    return hoursWorked * hourSalary;
}

Listing 3.1: Replacing a local variable by a query (getSalary) helps when dividing a large method.
3.1.4 Multiple Lines Update

If the source code starts behaving in such a way that a minor improvement in the application requires updates of the source code in many distant places, that is recognized as an indication for refactoring. The related variables and methods can probably be extracted and unified into a separate class.
class Employee {
    char name[10];
    void setFirstName(char name[10]);
    void setLastName(char name[10]);
};
---
typedef struct {
    char val[10];
} n;
class Employee2 {
    n name;
    void setFirstName(n name);
    void setLastName(n name);
};

Listing 3.2: An example of extracting a data type from multiple definitions, to avoid having to update code in many places.
3.1.5 Divergent Change

If the same class needs to be modified in various unrelated cases, it is a warning signal. If a class is modified because of a new database, and the same class is modified when a new feature is added to the program, then most probably this class holds too much responsibility. A class handling the database should contain only methods for retrieving, inserting and updating the data.
3.1.6 Data Clumps

Sometimes in applications it is possible to see the same groups of data types appearing together, for example two integers for the width and height of an object. It is recommended to wrap these data into a record (structure) or into an object. It then becomes possible to change the data types easily, and methods processing the data can be added to the object (scale, transpose).
Chapter 4
Reimplementation of Rendering
4.1 Reasons for Reimplementation of the Rendering Part

4.1.1 Portability of the Application

The original implementation of Dicom-Presenter used the OpenGL library for rendering and for the management of image data. OpenGL offers high performance because it moves image processing responsibilities from the CPU to the GPU.

On the other hand, the presence of OpenGL brought dependencies on three additional libraries into Dicom-Presenter:

• GLEW [34]. An OpenGL extension for manipulation with three-dimensional bitmaps.

• Cg toolkit [29]. A library offering pixel-shader programming; it was used for brightness and contrast adjustment.

• plib [50]. OpenGL itself does not support rendering of text, so the plib library was used to provide this functionality.

Both GLEW and the Cg toolkit require hardware support on the target computer. As a consequence, Dicom-Presenter suffered from several compatibility issues during its deployment.

Considering the facts above, the author decided to reimplement the rendering process in order to remove these libraries. After their removal, Dicom-Presenter should be deployable on any computer, regardless of GPU support.

Moving the image processing tasks to the CPU will substantially affect the application's performance, but the performance was estimated in [6] as being sufficient.
Methodology of OpenGL Replacement

According to software design paradigms [9], the recommended way of working with platform-dependent code is to maintain it in a separate module. Unfortunately, the extensive use of OpenGL code in the previous implementation complicated a possible extraction of the code from the object model.

The application was rendered in multiple steps, using framebuffers and texture objects as storing points of the rendering process. These objects can be accessed from any point of the application through the global functions of OpenGL. The effect of OpenGL functions depends on the actual context of previously issued commands (Sec. 4.2). Since OpenGL functions were used during the rendering process across all classes of the rendering module, it was difficult to track the process of rendering (Fig. 4.2).

Considering the facts above, the author decided to reimplement the rendering process of Dicom-Presenter and then attach the remaining parts of the application part by part, instead of extracting the hardware-dependent code and replacing it in one step.
4.2
Characteristics of OpenGL
The below described features of OpenGL library complicated the refactoring
process of the rendering module.
Procedural Approach
OpenGL is considered as a procedural tool rather than descriptive. It provides
commands to control graphic hardware, instead of describing the graphic
output itself [22].
This approach gives more control over graphic device, but graphic operations
are achieved by multiple OpenGL commands.
Matrix Model of OpenGL
Since OpenGL was originally designed for 3D graphics, it uses matrix arithmetic to compose the output image from a provided polygon mesh. The original coordinates of a polygon are transformed to their final position in the output image by multiplication with two matrices.
Scene Coordinates -> [ModelView Matrix] -> Camera Coordinates -> [Projection Matrix] -> Output Screen Coordinates

Figure 4.1: Conversion of polygon coordinates during the rendering process.
Execution Model
OpenGL uses a client-server interface. OpenGL commands are global functions accessible from any point of the application. The commands are captured by an OpenGL server maintained by the operating system [22]. As a result, OpenGL commands are evaluated within the context of previously issued commands. This contrasts with the object-oriented approach used in the Qt library.
4.3 The Original Implementation of Rendering
The following text describes the rendering process of a workspace and an image in the OpenGL version of the application. Rendering is started in the CWidget object, and all graphic objects are then asked to render themselves via the Observer pattern.
4.3.1 Workspace Rendering
First, a workspace is rendered into a framebuffer; subsequently, it is rendered to the main output.
Pre-rendering the Workspace Into a Framebuffer

1. First, an OpenGL framebuffer is instantiated and attached to a texture object. All subsequent OpenGL rendering commands will be collected by the framebuffer.

2. After several initializations, the workspace's images are asked to render themselves. Each image object issues a set of rendering commands, which are captured by the workspace's framebuffer.

3. The framebuffer object is detached and the OpenGL context is set back to the main output.

The rendered content is stored in the texture object and used in the second step.
Rendering the Workspace from the Framebuffer to the Main Output

1. The texture object with the pre-rendered workspace is attached as the active texture for further rendering.

2. The workspace is rendered to the main OpenGL output as a rectangle covered by the texture.
Figure 4.2: Implementation of workspace rendering (simplified). OpenGL functions are denoted by a green color. The schema shows the use of OpenGL functions across different classes (CObject, CWorkspace, CImage) while rendering the workspace object.
4.3.2 Image Rendering
Obtaining a 2D Slice from 3D Data

1. A framebuffer object is instantiated and set as the target for further OpenGL rendering.

2. A cubic texture containing the 3D image data is attached as the source for rendering.

3. The application computes the coordinates of the required slice inside the three-dimensional bitmap.

4. The current OpenGL context receives additional parameters describing zoom, translation of the image, and color adjustments.

5. The 2D slice is obtained by rendering a rectangle covered by part of the 3D texture into the attached framebuffer.
Rendering the Image
Once the 2D slice is pre-rendered in the framebuffer, the image object is rendered into the main OpenGL output.
4.4 Reimplementation of OpenGL Tasks
The following tasks were previously performed by OpenGL and its supporting libraries. Since OpenGL was removed, they needed to be reimplemented.
4.4.1 Image Reconstruction
Three-dimensional MRI data are stored as a set of two-dimensional images within the DICOM format. The images define a three-dimensional bitmap; their pixels lie in equidistantly distributed parallel planes. Thus, the application can easily provide a view of the image data in a plane parallel to the input images, but it needs to reconstruct a view in any other plane.
The previous implementation of the application used the GLEW OpenGL extension to obtain the appropriate views of the image data. The GLEW extension constructed a three-dimensional texture object out of the provided pixel data and was then able to deliver any slice of the 3D object.
The new implementation of Dicom-Presenter substitutes the functionality of the GLEW library with its own code. Input data from the set of images are stored in memory as an array of bytes. A new function of the CImage class can assemble a two-dimensional image in the desired direction.
Figure 4.3: Reconstruction of a 2D image from three-dimensional data (the DICOM dataset and the image reconstructed in the perpendicular axis).
int slice = iParentImage->GetDepthPosition(iParentImage->GetOrientation());
int j = 0;
if (iParentImage->GetOrientation() == EImageOrientationSagittal) {
    for (int frame = 0; frame < framescount; frame++) {
        int framestart = frame * frameints;
        for (int y = 0; y < dicomrawdataheight; y++) {
            iSlice[j] = iOriginalData[framestart + slice + y * dicomrawdatawidth];
            j++;
        }
    }
}
if (iParentImage->GetOrientation() == EImageOrientationCoronal) {
    for (int frame = 0; frame < framescount; frame++) {
        int start = dicomrawdatawidth * slice + frame * frameints;
        for (int x = 0; x < dicomrawdataheight; x++) {
            iSlice[j] = iOriginalData[x + start];
            j++;
        }
    }
}

Listing 4.1: Reconstruction of a 2D view of the image data.
4.4.2 Interpolation of Image Data
Images obtained by the reconstruction described above generally have a lower resolution along the axis perpendicular to the plane given by the initial image set. The voxel of a three-dimensional DICOM image is in general asymmetric, being larger in that direction.
The previous implementation of Dicom-Presenter used the image interpolation feature of the OpenGL library to display DICOM data with the same resolution along all three axes.
The new version of Dicom-Presenter implements a linear interpolation along the axis of lowest resolution. The new resolution is computed from the resolution in the other directions and the voxel size (given in millimeters).
The image interpolation feature provided by the Qt library could not be used, because it significantly affected the application's performance.
for (int y = 0; y < newHeight; y++) {
    double realPosOldImage =
        ((double)y) / ((double)newHeight) * ((double)(oldHeight - 1));
    double oldImNearestIndex0 = floor(realPosOldImage);
    double oldImNearestIndex1 = oldImNearestIndex0 + 1;
    for (int x = 0; x < lineWidth; x++) {
        quint8 oldImVal0 = origImage[(int)oldImNearestIndex0 * lineWidth + x];
        quint8 oldImVal1 = origImage[(int)oldImNearestIndex1 * lineWidth + x];
        double posDiff = realPosOldImage - oldImNearestIndex0;
        double valueDiff = (double)oldImVal1 - (double)oldImVal0;
        newImage[i] = oldImVal0 + (quint8)(posDiff * valueDiff);
        i++;
    }
}

Listing 4.2: Linear interpolation of low-resolution image data.
Figure 4.1: An image before (left) and after (right) interpolation done by Dicom-Presenter.

4.4.3 Rendering of Graphic Objects
All the code related to rendering of graphic objects needed to be replaced; the previous implementation of rendering was based on OpenGL. As noted above, OpenGL requires a procedural description of rendering, and the code works with globally accessible OpenGL objects and properties. This approach complicated extracting all the OpenGL code out of the object model for a simple replacement.
The new implementation of rendering is based on Qt classes related to image manipulation. The output bitmap of the application is represented by a Qt object (QPixmap). All graphic elements of Dicom-Presenter render to the output bitmap through a handle represented by another Qt object (QPainter). All workspaces are pre-rendered into their own bitmap (QPixmap).
The described approach simplifies the rendering process in comparison to the OpenGL code, which required manipulating globally accessible OpenGL objects such as the OpenGL context, framebuffers, and textures.
void CWorkspace::paint() {
    QPainter *handle = new QPainter((QPaintDevice *)iOutputPixmap);
    while (iImages.hasNext()) {
        CImage *image = iImages.next();
        image->paint(handle);
    }
    ...
}

void CImage::paint(QPainter *handle) {
    this->DrawContent(handle);
    this->DrawBorders(handle);
    this->DrawTexts(handle);
    ...
}

void CImage::DrawContent(QPainter *handle) {
    this->PrepareImageContent();
    handle->drawImage(this->GetPosition(), this->GetContent());
}

Listing 4.3: Rendering of graphic objects in the new implementation.
4.4.4 Brightness and Contrast
The Qt library does not provide functions for color adjustment of an image. The OpenGL library offers functions for elementary pixel arithmetic: a value can be added to all pixel values of the image, or all pixel values can be multiplied by a factor. Qt offers only adjustment of one particular pixel of the image, and using that feature for image-wide pixel adjustment does not offer sufficient performance [6].
Owing to the facts mentioned above, Dicom-Presenter implements its own function for brightness and contrast adjustment (Listing 4.4). The function modifies the image data directly in the array of pixel values.
void CImageData::imageEnhancement(QImage *img, float bias, float scale) {
    for (int y = 0; y < img->height(); y++) {
        QRgb *imageLine = (QRgb *)img->scanLine(y);
        for (int x = 0; x < img->width(); x++) {
            float originalIntensity = (float)qRed(imageLine[x]);
            int newIntensity = (int)(originalIntensity * scale + bias);
            if (newIntensity > 254) newIntensity = 254;
            if (newIntensity < 1) newIntensity = 1;
            imageLine[x] = qRgb(newIntensity, newIntensity, newIntensity);
        }
    }
}

Listing 4.4: A function for brightness and contrast adjustment.
4.5 Software Design of the Rendering Implementation

4.5.1 Encapsulating the New Rendering Implementation
After reimplementation of all the methods related to rendering, the application's source code became simpler, because the Qt library offers a significantly simpler and clearer description of the rendering process. Thus, it was finally possible to refactor the code so that the rendering-related code is separated from the rest of the application.
Extracting the code makes the remaining part of the implementation independent of the Qt rendering environment. It also opens the possibility of including multiple rendering implementations in the application if needed: Qt and OpenGL rendering might coexist and be selected according to the hardware capabilities of the hosting computer.
The solution aims to meet the following requirements:

1. All code directly using Qt classes for image processing needs to be separated from the remaining code.

2. Compiling the rest of the application should require minimal knowledge of the inner implementation of the extracted rendering code.
The first step of the solution was to extract all code related to the Qt rendering environment into separate classes (Fig. 4.4, (2)) owned by the related graphic objects (Fig. 4.4, (1)). This meets the first requirement, but not the second.
The next step is based on the Bridge design pattern [9]: the extracted classes are equipped with a shared interface, so that the application does not need to distinguish between the members of the rendering implementation.
Figure 4.4: Step 1: Code directly dependent on the rendering classes of the Qt library is extracted into separate classes (CWidgetRenderImpl, CWorkspaceRenderImpl, CPlanarWorkspaceRenderImpl, CImageRenderImpl), each owned by its graphic object through an iRenderImpl pointer.

All graphic objects of Dicom-Presenter inherit a pointer to the base class of the rendering implementation. The rendering implementation is responsible for instantiating the correct class according to the object's type. This is done by a static function:
CObjectRenderImpl *CObjectRenderImpl::CreateRenderImplementation(CObject *type) {
    if (dynamic_cast<CWidget *>(type))
        return new CWidgetRenderImpl(dynamic_cast<CWidget *>(type));
    if (dynamic_cast<CImage *>(type))
        return new CImageRenderImpl(dynamic_cast<CImage *>(type));
    ...
}

Listing 4.5: A function for instantiating objects of the encapsulated implementation.
Figure 4.5: Step 2: All the classes directly using the Qt rendering environment are equipped with a shared interface (render(), update(), SetValue(), GetValue(), plus the static CreateImplementation factory), so they can be accessed through a pointer inherited from the CObject class.
Rendering Implementation Interface
The classes derived from CObjectRenderImpl implement the following methods:

• void SetValue(TRenderValue): passes a parameter to the implementation. The type TRenderValue is described below.

• TRenderValue GetValue(TRenderType): obtains the value of a parameter from the implementation.

• void update(): a hook for the CImage class to recompute the image data according to the new position of the observed slice.

• void render(TRenderArea): triggered in order to render an element. The parameter describes the place in the output window where the element should be rendered.

The type TRenderValue is a tagged union [24]. It can carry one of several previously declared types: it consists of an enumeration declaring the type and a union holding the value.
typedef enum {
    EDicom3DTexture,
    EQString,
    ...
} TRenderType;

typedef union {
    CDicom3DTexture *texture;
    QString *string;
    ...
} TRenderData;

typedef struct {
    TRenderType type;
    TRenderData data;
} TRenderValue;

typedef enum {
    EWidget,
    EWorkspace,
    EImageExplorer,
    EWorkspaceExplorer
} TRenderArea;

Listing 4.6: The tagged union TRenderValue can deliver several types to the rendering interface.
4.5.2 Refactoring of Multi-planar Reconstruction

A class for Multi-planar Reconstruction was implemented in a previous work [6]. It was implemented as a stand-alone class without any relation to the existing workspace class (CWorkspace). Such a decision
Figure 4.2: Unified interface of the classes CWorkspace and CPlanarWorkspace, simplifying the implementation of workspaceManager: both classes inherit virtual paint() and mousePressEvent() methods from an abstract class CAbstractWorkspace, so workspaceManager can maintain a single list of workspaces instead of two.
was made because all methods of CWorkspace would have had to be reimplemented in CPlanarWorkspace. The solution met the requirements and seemed sufficient.
After extending the application, the parts of the source code related to both CWorkspace and CPlanarWorkspace began to show the attributes of refactorable code. Instead of a single list of opened workspaces, workspaceManager had to maintain two lists, one for each kind of workspace. Too many conditional statements appeared in the code to distinguish actions for CWorkspace and CPlanarWorkspace, and with each extension the complexity of the code accumulated.
Although there was no source-code duplicity in the implementations of the two classes, their similar role in the object model caused source-code duplicity in other places of the application.
The solution was the declaration of a shared interface for both classes: an abstract class CAbstractWorkspace is inherited by both workspace classes. Other classes such as workspaceManager or CWorkspaceSnapshot then generally do not need to distinguish the exact identity of the targeted workspace. Where the identity needs to be obtained, the dynamic_cast operator is used.
4.5.3 Refactoring of CImage Class

Owing to the fact that Dicom-Presenter offers multiple features of image customization, the implementation of the CImage class took approximately 2000 lines of code. The class showed the signs of unrefactored code described in Section 3.1.3.
Thus, the class was divided into five elements related to each other as shown in Fig. 4.6:
CImageSlicePosition
The class is responsible for computing the image size after image reconstruction and interpolation, and for maintaining information about the position and orientation of the displayed slice.

CImageUI
A class responsible for processing the user's input. The class resolves events received through the Observer pattern and performs the computations necessary for the image adjustments.

CImageData
A class responsible for reconstructing the displayed image slice from the set of input data. The class also performs the linear interpolation of the input data.

CImageImplementation
A class responsible for rendering the image using the Qt library's image processing environment.

CImage
The original class acts as a wrapper providing the original interface to the rest of the application.
Figure 4.6: Responsibilities of the CImage class were divided into five groups.
4.6 Conclusion
Application Performance
Since all the image processing tasks have been moved to the CPU, a significant change in the application's performance was expected. The application was tested on the following configuration: Intel Core i3 2.26 GHz, 4 GB RAM at 1066 MHz, ATI HD5470 (750 MHz GPU + 900 MHz RAM). The OpenGL version of Dicom-Presenter performed basic image operations (move, zoom, brightness & contrast) at an average refresh rate of 90 FPS. The software-rendering implementation runs at an average of 25 FPS. Based on user experience, the performance seems sufficient and is comparable to other DICOM viewers.
Application Compatibility
The previous implementation of Dicom-Presenter suffered from incompatibility issues due to the presence of the GLEW and Cg libraries. The tasks performed by both libraries are now performed by the application itself; the responsibility for them has been moved from the GPU to the CPU. Consequently, no compatibility issues were observed while testing the application. The application can now be successfully deployed to computers without even elementary support for hardware acceleration.
Chapter 5

Interface for Image Processing Extensions
Modern medical imaging methods such as MRI and CT focus on imaging internal organs and tissues. The examined objects are tightly covered by surrounding tissue. This brings image processing methods into medical imaging: algorithms are used to improve image quality and to help with the detection and quantification of the studied structures [18].
This chapter describes the development of an interface for image processing extensions. The goal is to make the application a handy and easily extensible graphic front-end for the image segmentation algorithms used in medical imaging.
Two image segmentation algorithms for medical imaging have been developed at FNSPE [12], [11]. Both have been used to determine the requirements of the image processing interface and to test the result. The documentation of image-processing extension development for the Gimp [10] and IrfanView [38] image editors served as further sources of information.
The Input of Image Processing Modules
Based on the FNSPE algorithms and the Gimp Developer Resources [10], the following set of input data should be sufficient for basic image processing tasks:

1. A set of numeric parameters.

2. An input image to be processed.

3. A bitmap with a mask of the input image, highlighting an area of interest.
Issues of Plugin Interface Development
The interface for image processing raises the following tasks, which are the subject of this chapter:

1. Obtaining information from the extension about its input parameters.

2. Adjusting the application's user interface according to the plugin's requirements.

3. Providing a user interface to enter a mask of the input image.

4. Passing input parameters to the extension.
5.1 Image Segmentation Algorithms

The following text describes the two image segmentation algorithms developed at FNSPE. A brief understanding of both algorithms will help in designing the user interface for Dicom-Presenter's extensions.
5.1.1 Image Segmentation Based on the Ford-Fulkerson Algorithm

The graph algorithm is based on the minimum cut problem from graph theory. A graph based on the image is constructed; its edges are oriented and weighted. The Ford-Fulkerson algorithm is then used to find the minimal cut, which describes the boundary of the segmented area.
There are two basic ideas of how to find the boundary of the segmented area [11]:

• It is possible to search for pairs of pixels with the highest difference in their intensities.

• If some pixels inside the segmented area are known, it is possible to search for pixels with a similar intensity.

The algorithm described in the following text combines both ideas. Based on the preceding description, the algorithm needs a set of pixels which are located inside the segmented area; let us denote this set S (seed). Besides, a set of pixels outside the area must be given; it is denoted O (outside) [11].
Creating the Graph based on the Segmented Image
Each pixel of the initial image is considered a vertex of the graph. Two more vertices are added: a source and a sink, denoted s and k.
Edges of the graph can be divided into three groups. The first group of edges connects each pixel of the graph with its closest four neighbors (denoted E_G). The second group connects the source s with all vertices from V − O (denoted E_s). The last group connects all vertices from V − S with the sink k (denoted E_k).
Setting the Edge Weights
The next step of the algorithm is setting appropriate weights for the graph edges. Finding the weights is based on the idea that all the pixels in the segmented area should have similar intensities.
The process of finding the weights (capacities) of the edges in E_s is the following [11]:

• The edges connecting pixels from S with the source s have the highest weight.

• The edges between the source s and vertices from V − O − S have a weight proportional to how similar their intensity is to that of the pixels from S.

The weights of edges from E_k are found similarly. This leads to a high flow from the source to the vertices inside the segmented area, and likewise to a high flow from the vertices outside the area to the sink.
Finding the weights of edges from E_G is based on the idea that the boundary will lie between the pairs of pixels with the greatest difference in their intensity. If the weights of the edges between those pairs of pixels are set low, then the boundary can be found as the minimum cut of the graph. Therefore, the weight of the edge between pixels p1 and p2 is set as D − (I_p1 − I_p2), where I_p1 is the intensity of the first pixel and D = I_max − I_min is the difference between the lowest and the highest intensity in the picture [11].
Figure 5.1: The construction of a graph based on the segmented image. S and O are the seed and the outside set.
5.1.2 Image Segmentation Based on Level-set Methods

The other tested algorithm is based on a curve expansion described by a partial differential equation; the algorithm uses the Level-set differential equation [21].
The main idea of the algorithm is that the expanding curve can be defined as the zero-value contour of a function Φ(t), which is known to be a solution of a differential equation. The equation describes that a point of the curve moves in the normal direction and that the expansion is slowed down according to the change of intensity in the segmented picture. The equation is discretized and solved numerically.
Level-set Equation used in the Algorithm
The following text shows the construction of the Level-set equation used in the algorithm.
Let Γ(t = 0) be a closed curve. The idea of Level-set Methods is to work with a function u(x, y, t = 0) which defines the curve Γ(t = 0) by its contour u(x, y, t = 0) = 0 [12].
The algorithm uses the function u(x, y, t = 0) defined as [12]:

    u(x⃗, t = 0) = ±dist(x⃗, Γ(t = 0))    (5.1)

where x⃗ = (x, y) ∈ R². The value is positive for points outside the curve and negative for points inside the curve.
The Level-set equation used in the algorithm can be derived from u(x, y, t = 0) = 0 in the following steps.
Firstly, the expression is differentiated with respect to t:

    0 = ∂u/∂t + (dx⃗/dt) · ∇u    (5.2)

Secondly, the normal speed of a point of the curve is defined as V = (dx⃗/dt) · n⃗, where n⃗ is a vector in the normal direction. After substitution:

    0 = ∂u/∂t + V|∇u|    (5.3)

The expression (5.3) is called the Hamilton-Jacobi equation [12].
Next, the normal speed of a point of the curve is determined using the Gibbs-Thomson relation:

    V = −κ + F    (5.4)
where κ is the local curvature at the point x⃗ and F is a term representing a force pushing the point in the normal direction to the curve. The local curvature can be expressed as

    κ = ∇ · (∇u/|∇u|)    (5.5)

After assembling (5.4) and (5.5) into the Hamilton-Jacobi equation, we get the Level-set Equation:

    ∂u/∂t = |∇u| ∇ · (∇u/|∇u|) − |∇u|F    (5.6)

The algorithm works with a modified Level-set equation:

    u_t = g⁰|∇u|_ε ∇ · (∇u/|∇u|_ε) + A∇g⁰ · ∇u − g⁰|∇u|F    (5.7)

where

• u_t = ∂u/∂t,

• g⁰ is a function used as an edge detector (based on the Perona-Malik function [12]),

• |∇u|_ε is a regularization term: since the absolute value of the gradient of u could be zero, it is replaced by |∇u|_ε = √(ε² + |∇u|²),

• A is a new parameter which allows controlling the advection of the curve.
Pre-processing of the Input Data
Before the Level-set equation is solved, the input data are pre-processed. The algorithm uses a Gaussian blur to reduce the white noise expected in the image.
Afterwards, the algorithm equalizes the histogram of the segmented image, so that the intensities of the image pixels are better distributed between the lowest and the highest value.
Lastly, the algorithm performs a thresholding of the image. The idea is based on the fact that the blood in the picture has a brighter color and the heart tissue a darker one. Assuming that the blood covers a large area and has a uniform color, and that the heart tissue also has a uniform color, the algorithm searches for two maxima in the histogram of the image. One of the maxima should represent the blood and the other the heart tissue. A local minimum between the two values is set as the threshold.
Solution of the Level-set Equation
The Level-set Equation (5.7) is solved using a combination of finite-element and finite-volume methods. The differential equation is discretized and converted to a system of linear algebraic equations, which is solved using the SOR method.
5.2 Assembling the User Interface according to the Extension's Requirements

Each plugin will require a unique user interface for entering its initial parameters. There are several ways in which the application can obtain information about the desired user interface:

Qt User Interface XML Schema
The Qt library, which is extensively used in the application, provides an XML schema for GUI description [46]. The library offers a graphic tool for form design, which could be used for Dicom-Presenter's plugin development [45].
Custom Interface Markup Standard
Another option is to use a custom XML markup for GUI description. The syntax would be based on the GUI descriptions of Qt, GTK+ [35], or other widget toolkits. Unlike the framework markups, the syntax would not be bound to the particular features of any library.
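Such a custom markup might look as follows. This is a purely hypothetical sketch: neither the element names nor the attributes are defined anywhere in this thesis.

```
<plugin name="LevelSet">
  <parameter type="double"  name="advection"  label="Advection (A)" default="1.0"/>
  <parameter type="integer" name="iterations" label="Iterations"    default="100"/>
  <input image="current-slice" mask="required"/>
</plugin>
```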
API Header File
A practice used for Gimp plugins is a development kit including the necessary definitions related to the application interface. The plugin describes its input using predefined types from the development kit, and the result is stored in a static variable accessible to the host application [10].
typedef enum
{
    T_INTEGER = 0,
    T_DOUBLE = 1,
    T_CHAR = 2
    ...
} EInputTypes;

static EInputTypes iInputTypes[2] = { T_INTEGER, T_DOUBLE };

Listing 5.1: A possible way of storing information about a plugin's interface according to the Gimp development kit [10].
Conclusion
Using the Qt XML markup language produces complex code for the GUI description. The standard is meant to be used within the graphic editor, not to be assembled manually, and it would bring an additional dependency on the Qt library into plugin development. The Qt markup is also subject to modification in upcoming releases of the Qt library. A custom XML markup would avoid dependency on any third-party library.
A benefit of the custom XML markup over a shared header file is that it requires very few modifications to an existing plugin's source code: most of the requirements on the developed plugin are moved to the attached XML file. This is the reason why the second option was preferred for plugin development.
5.3 Passing Parameters to Extensions
Presuming the application obtains information about the types of the input
parameters, the following text examines options for delivering the values of
those parameters to the plugin.
Component Object Model
Component Object Model (COM) is a standard for using classes from concurrent
processes. COM makes it possible to instantiate and use a previously
unknown class. This is done through a shared abstract interface and a registry of
all COM processes [20], [23], [14].
interface __declspec(unique_id) PluginInterface : public IUnknown
{
public:
    virtual HRESULT STDMETHODCALLTYPE runPlugin() = 0;
};

HRESULT InstantiatePlugin(PluginInterface **plugin)
{
    ThisPlugin *This = new ThisPlugin;  // ThisPlugin: the concrete class implementing the interface
    return This->QueryInterface(__uuidof(PluginInterface), (PVOID *)plugin);
}

PluginInterface *plugin;
HRESULT hr = InstantiatePlugin(&plugin);
plugin->runPlugin();
Listing 5.2: Instantiating a previously unknown class from a concurrent process
using a COM interface [20].
Plugin Framework
The plugin interface of Dicom-Presenter could be based on a specialized library.
For instance, the Boost library [28] or the Pluma library [44] offer their own
plugin interfaces. In both cases, the interface is based on similar concepts as
the Component Object Model [44], [28]:
• A shared header file with an abstract interface.
• A shared registry of accessible modules.
• A factory method for creating specific descendants of the abstract interface.
--- Shared Header File ---
class PluginInterface {
public:
    virtual void startPlugin() const = 0;
};
--- Plugin ---
class LevelSet : public PluginInterface {
public:
    void startPlugin() const { . . . };
};
pluma::Host host;
PLUMA_PROVIDER_HEADER(LevelSet, PluginInterface);
host.add(new LevelSetProvider());
--- Application ---
pluma::Pluma manager;
manager.load("levelset.dll");
std::vector<LevelSetProvider *> listOfProviders;
manager.getProviders(listOfProviders);
PluginInterface *levelSet = listOfProviders.at(0)->create();
levelSet->startPlugin();
Listing 5.3: An example of a plugin interface implementation based on the
Pluma framework.
Function Parameters
Parameters can be passed to a plugin module through exported functions of
the module. The input interface of the module needs to exactly match one
of the interfaces available in the application.
The interface can be explicit:
typedef bool inputFunction(int, float);
. . .
inputFunction *func = (inputFunction *) GetProcAddress(LoadedLibrary, functionName);
func(intValue, floatValue);
Listing 5.4: Passing parameters explicitly through an exported function.
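For the host to find the exported function by name, the plugin has to export it under an unmangled name. The following sketch shows how a plugin could declare such input functions; the function names `lambda` and `currentLambda`, the accepted range, and the cross-platform export macro are illustrative assumptions, not the application's actual code:

```cpp
// extern "C" disables C++ name mangling, so GetProcAddress can look the
// function up under exactly the name written in the plugin description.
#if defined(_WIN32)
#define PLUGIN_EXPORT extern "C" __declspec(dllexport)
#else
#define PLUGIN_EXPORT extern "C"
#endif

static int g_lambda = 0;  // parameter storage inside the plugin

// Receives one parameter value from the host; rejects out-of-range input.
PLUGIN_EXPORT bool lambda(int value) {
    if (value < 3 || value > 30)  // hypothetical range from the XML description
        return false;
    g_lambda = value;
    return true;
}

// Illustrative accessor so the stored value can be inspected.
PLUGIN_EXPORT int currentLambda(void) {
    return g_lambda;
}
```

Without the `extern "C"` linkage, a C++ compiler would mangle the symbol name and the lookup by the plain string "lambda" would fail.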
Alternatively, parameters of multiple types can be wrapped in an
abstract structure:
enum ParameterType
{
    E_INT,
    E_FLOAT
};

union ParameterData
{
    int intData;
    float floatData;
};

struct PluginParameter
{
    ParameterType type;
    ParameterData data;
};

struct ParameterList
{
    int listLength;
    PluginParameter *pluginParameters;
};
Listing 5.5: Input parameters of various types are wrapped in an abstract
structure.
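Code consuming such a tagged union has to dispatch on the type tag before reading the data member, because reading the union member that was not written last is undefined behavior in C++. A small usage sketch of the structures above; the helper names `parameterAsDouble` and `sumParameters` are illustrative:

```cpp
enum ParameterType { E_INT, E_FLOAT };
union ParameterData { int intData; float floatData; };
struct PluginParameter { ParameterType type; ParameterData data; };
struct ParameterList { int listLength; PluginParameter *pluginParameters; };

// Dispatch on the tag before touching the union: only the member that
// was written last may be read.
double parameterAsDouble(const PluginParameter &p) {
    return p.type == E_INT ? (double)p.data.intData
                           : (double)p.data.floatData;
}

// Walk an abstract parameter list regardless of the element types.
double sumParameters(const ParameterList &list) {
    double sum = 0.0;
    for (int i = 0; i < list.listLength; ++i)
        sum += parameterAsDouble(list.pluginParameters[i]);
    return sum;
}
```

The same dispatch pattern applies on the plugin side when it unpacks the received ParameterList.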
String in a Data Serialization Format
Initial parameters might be converted to strings and wrapped in some
object notation standard such as XML [32] or YAML [41]. The result can be
passed through an exported function of the plugin module.
input_parameters:
    lambda: 1.001
    sigma: 1.100

<input_parameters>
    <lambda>1.001</lambda>
    <sigma>1.100</sigma>
</input_parameters>
Listing 5.6: YAML and XML object notation.
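Delivering the serialized text then needs only a single exported function taking a string, with the plugin parsing the keys and values itself. A sketch of such a parser for the colon-separated form above, hand-rolled precisely so the plugin avoids the XML/YAML library dependency discussed below; the function name and format details are assumptions:

```cpp
#include <map>
#include <sstream>
#include <string>

// Parses lines of the form "key: value" into a name -> value map.
// A deliberately tiny substitute for a full YAML parser.
std::map<std::string, double> parseParameters(const std::string &text) {
    std::map<std::string, double> result;
    std::istringstream lines(text);
    std::string line;
    while (std::getline(lines, line)) {
        std::string::size_type colon = line.find(':');
        if (colon == std::string::npos)
            continue;                           // skip malformed lines
        std::string key = line.substr(0, colon);
        key.erase(0, key.find_first_not_of(" \t"));   // trim the key
        key.erase(key.find_last_not_of(" \t") + 1);
        std::string value = line.substr(colon + 1);
        if (value.find_first_not_of(" \t") == std::string::npos)
            continue;                           // header such as "input_parameters:"
        result[key] = std::stod(value);
    }
    return result;
}
```

A real deployment would need error reporting for malformed values; here they are simply skipped.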
Conclusion
Using a predefined plugin interface such as COM or one of the mentioned
frameworks would avoid misinterpretation of parameters during unexpected data
conversions and would also allow simultaneous use of a module. On the other hand,
one of the requirements on the selected method was simplicity and no dependency
on third-party tools, to ensure easy implementation for plugin developers.
Wrapping parameters into a data serialization text format would also
bring additional dependencies into Dicom-Presenter modules in order to parse
the input data.
The option of passing parameters through exported functions is recommended
in [15] as a light-weight solution for non-extensive plugin interfaces. Considering
the reasons above, this solution was chosen for Dicom-Presenter's plugin
interface.
5.4 Implementation of the Extensions Interface
5.4.1 Structure of the Interface
The plugin interface of Dicom-Presenter is implemented within the following
classes:
1. XMLParser loads data from the plugin's input file.
2. PluginInterface provides the user interface for the plugin.
3. DrawModule provides the input for highlighting areas on the processed image.
4. PluginInit is responsible for passing the input parameters to the plugin.
5. PluginManager is a wrapper of the classes above.
Figure 5.2: Classes of the plugin interface.
5.4.2 The Schema of the XML File with a List of Input Parameters
Information about the required user interface of a plugin is obtained from an
XML file. An example of such a file can be found in Listing 5.7.
Structure of a Plugin Input File
The XML file describing a plugin needs to contain the following elements:
Name The name under which the plugin should be displayed in Dicom-Presenter.
Category The name of the category under which the plugin should be listed
in Dicom-Presenter.
Filename The filename of the dynamic library with the compiled plugin.
ImageInput A closer specification of the PGM format for input files expected
by the plugin. Possible values are PGMP2 (plain data) and PGMP5 (compressed).
Help Instructions about using the plugin. These should include information about
the input parameters and about what to highlight on the input image (in case
those features are used).
InputArea A list of areas which are required to be highlighted on the map
of the input image. Each area is represented by a child element with an
attribute intensity, which denotes the shade of gray that the plugin will
use for identification of the area. The content of each element
should include a name or short description of the area.
PluginInput A list of all parameters required by the plugin. Accepted tags
are number and enumerate.
Number The tag represents parameters which will be specified by an
exact value. The tag includes the following attributes:
1. name The name of the parameter to be displayed to the user.
2. type The type of the input parameter: int or double.
3. min The minimum accepted value.
4. max The maximum accepted value.
5. default The default value of the parameter. This value will be pre-filled
in the provided user interface.
6. function The name of the exported function in the plugin's library to
receive the parameter's value.
Enumerate The tag represents a parameter which can accept only predefined
values. Thus, enumerate is a nested element including a list of accepted
values. The enumerate tag includes three attributes: type, name and function.
The same rules as for the number tag apply to the attributes, except that an
enumerated parameter can also accept the char type.
Each listed value of the parameter includes the attributes value and name,
denoting the value which will be passed to the appropriate function and
a short name of the value.
<name>Graph</name>
<category>Image Segmentation</category>
<filename>graph-method.dll</filename>
<imageinput>PGMP5</imageinput>
<inputarea>
    <area intensity="1">Inner Area</area>
    <area intensity="255">Outer Area</area>
</inputarea>
<help>Inner part of the segmented area needs to be ... </help>
<plugininput>
    <number type="int" min="3" max="30" default="20" name="Lambda Parameter" function="lambda"/>
    <number type="int" min="3" max="10" default="5" name="Sigma Parameter" function="sigma"/>
    <enumerate type="char" name="Algorithm" function="algorithm">
        <option value="a" name="Modified algorithm"/>
        <option value="p" name="Original algorithm"/>
    </enumerate>
</plugininput>
Listing 5.7: An example of an XML file with a description of a plugin's
interface.
XML Parsing
The XML file with the plugin data is parsed using the Qt library. Qt offers a
parser which converts the provided XML file to a Document Object Model. The
application then obtains the required information while iterating through the
elements of the DOM object.
void PluginManager::parseXML(QString fileName) {
    QDomDocument document;
    QFile file(fileName);
    file.open(QIODevice::ReadOnly);
    document.setContent(&file);
    QDomNode node = document.firstChild();
    while (node.isElement()) {
        QDomElement e = node.toElement();
        if (compare(e.tagName(), "help")) {
            setHelp(e.text());
        }
        if (compare(e.tagName(), "plugininput")) {
            parseUserInput(node);
        }
        . . .
        node = node.nextSibling();
    }
}
Listing 5.8: A method for parsing an XML file with a plugin's description.
5.4.3 Passing Parameters to Extensions
Computation parameters are passed to the plugin through exported functions
of its dynamic library. Each parameter is represented by one function.
There are three types of parameters which can be passed to the plugin,
so Dicom-Presenter maintains three lists of pointers to the plugin's input
functions (Lst. 5.9, l. 1-6).
The application obtains the addresses of the input functions while parsing
the provided XML (Lst. 5.9, l. 8-20).
All parameters are passed to the plugin when the user asks to start the
algorithm (Lst. 5.9, l. 22-27).
 1  typedef bool intFunction(int);
 2  typedef bool doubleFunction(double);
 3  typedef bool charFunction(char);
 4  QList<intFunction *> intFunctionsList;
 5  QList<doubleFunction *> doubleFunctionsList;
 6  QList<charFunction *> charFunctionsList;
 7  . . .
 8  void XMLParser::parseInputParameters(QDomNode node) {
 9      while (!node.isNull()) {
10          if (e.attribute("type")=="int" || e.attribute("type")=="double") {
11              if (e.hasAttribute("function")) {
12                  iPluginManager->AddFunc(e.attribute("function"), type);
13  . . .
14  void PluginManager::AddFunc(QString functionName, QString type) {
15      FARPROC ProcessAdress = GetProcAddress(Library, functionName);
16      if (type=="int")
17          AddIntFunction((intFunction *) ProcessAdress);
18  . . .
19  void PluginManager::AddIntFunction(intFunction *func) {
20      intFunctionsList << func;
21  . . .
22  void PluginInit::initRun() {
23      QListIterator<intFunction *> iterator(iPluginManager->GetIntFunctions());
24      while (iterator.hasNext()) {
25          intFunction *parameterFunction = iterator.next();
26          QSpinBox *controlElement = dynamic_cast<QSpinBox *>(iLoader->getControlElement(i, EINT));
27          parameterFunction(controlElement->value());
Listing 5.9: Passing parameters to an extension.
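The same bookkeeping can be sketched without the Qt and WinAPI types: the host keeps a list of typed function pointers and walks it when the user starts the computation. The names below (`addIntFunction`, `passIntParameters`, the `recordValue` stand-in for a real DLL export) are illustrative, not the application's actual identifiers:

```cpp
#include <cstddef>
#include <vector>

// Pointer type of a plugin input function, as in Listing 5.9.
typedef bool intFunction(int);

// Host-side list of resolved input functions, one entry per parameter.
static std::vector<intFunction *> intFunctionsList;

// Illustrative plugin-side receiver used in place of a real DLL export.
static int g_received = 0;
bool recordValue(int value) { g_received = value; return true; }
int lastReceived() { return g_received; }

// Registration: in the real application the pointer would come from
// GetProcAddress while parsing the XML description.
void addIntFunction(intFunction *func) {
    intFunctionsList.push_back(func);
}

// Called when the user starts the algorithm: deliver each collected
// value to the corresponding plugin function.
int passIntParameters(const std::vector<int> &values) {
    int delivered = 0;
    for (std::size_t i = 0; i < intFunctionsList.size() && i < values.size(); ++i)
        if (intFunctionsList[i](values[i]))
            ++delivered;
    return delivered;
}
```

The return value counts how many functions accepted their value, which gives the host a simple way to detect a rejected parameter.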
5.4.4 User Interface for Declaration of an Image Map
The user is given an option to highlight initial areas for the algorithm on the
input image. A simple painting interface has been implemented. The user
is offered several colors assigned to the types of input areas and can then
highlight an area by painting on the input image.
Dicom-Presenter maintains two bitmaps for the painting interface: first,
the bitmap the user can see, with the input image and colorized input areas;
second, a bitmap in a format suitable for the algorithm (a grayscale bitmap
with the highlighted areas only).
void drawLabel::changeColor(int color) {
    int val = iLoader->getColorValue(color);
    iMemoryColor.setRgb(val, val, val);
    iOutputColor.setRgb(255, 0, (255 * val) % 256);
    iLoader->changeColorPixmap(iOutputColor);
}
void drawLabel::mousePressEvent(QMouseEvent *event) {
    if (event->button() == Qt::LeftButton)
        pressedButton = leftButton;
}
void drawLabel::mouseMoveEvent(QMouseEvent *event) {
    if (pressedButton == noButton)
        return;
    QPainter painter((QPaintDevice *) iOutputPixmap);
    painter.setPen(iOutputColor);
    painter.drawEllipse(event->x(), event->y(), iWidth, iWidth);
    update();
    QPainter painter2((QPaintDevice *) iMemoryPixmap);
    painter2.setPen(iMemoryColor);
    painter2.drawEllipse(event->x(), event->y(), iWidth, iWidth);
}
void drawLabel::mouseReleaseEvent(QMouseEvent *event) {
    pressedButton = noButton;
}
Listing 5.10: Implementation of a painting interface to highlight initial areas
on the image.
5.4.5 Multi-thread Tools of Qt Library
The computation time of the tested algorithms frequently reaches several
minutes. The application cannot be unresponsive for such a time, so the
plugin computation needs to be moved to a separate thread.
The Qt library offers an object interface for thread management. The plugin
computation is started in a descendant of the QThread class (Lst. 5.11,
l. 3, 25, 27). The information that the plugin has finished is passed back to
the application through Qt's signals & slots framework (Lst. 5.11, l. 26).
 1  typedef const char * pluginStartFunction();
 2
 3  class PluginComputation : public QThread {
 4      Q_OBJECT
 5  signals:
 6      void ComputationFinished(const char *);
 7  public:
 8      void SetInputFunction(pluginStartFunction *);
 9  private:
10      void run();
11      pluginStartFunction *iPluginStartFunction;
12  };
13  void PluginComputation::SetInputFunction(pluginStartFunction *f) {
14      iPluginStartFunction = f;
15  }
16  void PluginComputation::run() {
17      const char *result = iPluginStartFunction();
18      . . .
19  }
20
21  HINSTANCE LoadedLibrary = LoadLibrary("./segm.dll");
22  FARPROC ProcessAdress = GetProcAddress(LoadedLibrary, "main");
23
24  PluginComputation *plugin = new PluginComputation();
25  plugin->SetInputFunction((pluginStartFunction *) ProcessAdress);
26  connect(plugin, SIGNAL(ComputationFinished(const char *)), this, SLOT(ComputationFinished(const char *)));
27  plugin->start();
Listing 5.11: Thread management tools of the Qt library.
Figure 5.3: The user interface assembled for the Level-set algorithm. The
right image shows the result of the segmentation.
5.5 Conclusion
Requirements on the plugin interface were estimated from the two FNSPE
plugins and also from Gimp's extensions interface. The aim of the design
was to make other algorithms easily convertible to Dicom-Presenter.
The plugin system was implemented and the FNSPE algorithms were
equipped with a compatible interface. Both algorithms are working in
Dicom-Presenter, but unfortunately there are limitations on the segmented
pictures. The graph algorithm seems to have problems handling pictures with
very high contrast, while the Level-set algorithm cannot handle pictures
with low contrast. Successful segmentation of other images, however, proves
that the extensions interface is working.
The experience with the algorithms suggests that both algorithms could be
equipped with some image pre-processing, which would convert the picture to
a more suitable form (color adjustments), or could otherwise make a better
estimation of the input parameters.
Chapter 6
Conclusion
Reimplementation of Rendering
The first objective of this thesis was a reimplementation of the rendering
part of the application. The objective was successfully achieved. All tasks
previously performed by OpenGL and the related libraries have been
implemented as a part of the application.
The current implementation of rendering was encapsulated in a separate
module accessed through a simple interface. This should ease any further
development of the rendering part of the application.
As a result of the reimplementation, the application no longer suffers
from any compatibility issues.
Image Processing Extensions Interface
The other objective was an interface for image processing extensions. The
interface generates a graphic front-end according to the extension's requirements.
The user is able to enter computation parameters and also to denote a map of
the processed image. All the parameters are transferred to the compiled
algorithm.
The interface has been tested with two image segmentation algorithms
developed at FNSPE, so the application could be used as a graphic front-end
for other image processing algorithms for medical imaging developed at the
faculty.
Application Deployment
The application has been tested on multiple configurations and went through
several debugging sessions. The application seems to be stable enough to
be released and provided to the IKEM radiology department. It was also
equipped with a user's manual and an installer to ease the application's
deployment.
Chapter 7
Appendix: Dicom-Presenter User's Guide
7.1 The User Interface
Figure 7.1: Opening an image.
The user interface of Dicom-Presenter is divided into four parts (see Fig. 7.1):
• Workspace: A space for displaying images. It is possible to view several
images at a time.
• Image-Explorer: A list of all images opened in the application.
• Workspace-Explorer: A list of all active workspaces. The Workspace-Explorer
allows switching between workspaces.
• Info-Panel: A window allowing the user to enable particular features and
view image parameters.
7.2 Opening Images
Figure 7.2: Opening an image.
To open a new image, the user needs to press the "Open Dicom Image" button
(no. 1, Fig. 7.2). A thumbnail of the image then appears in the
Image-Explorer (no. 2, Fig. 7.2). To add the image to a Workspace, the user
presses the "Create Copy" button (no. 3, Fig. 7.2). Then the user can explore
the image using the provided image manipulation functions (described later).
7.3 Image Transformations
Dicom-Presenter offers a few elementary image transformations to improve
image viewing:
• Zoom.
• Translation.
• Brightness and contrast adjustment.
The transformations can be performed using the mouse, or explicitly in
the Info-Panel. Holding the left mouse button and dragging moves the image.
Holding the right mouse button and dragging modifies the contrast and
brightness of the image. A middle-button click starts zooming.
7.4 Image Layout
Table 7.1: Different ways of grouping images on a Workspace. Left: images
grouped into a rectangular grid. Right: images placed freely by the user
without any bindings.
Dicom-Presenter allows the user to view multiple images at the same time.
There are two different ways in which images can be placed in a Workspace.
The user can use a Grid Layout, so the images are placed in a rectangular
grid; alternatively, image positions can be completely customized without any
bindings using a Free Layout.
To change the grouping of images, the user selects a Workspace and then
chooses one of the options available in the Info-Panel: "Growing grids layout
- horizontal", "Growing grids layout - vertical" and "Free Layout". The first
two options align the pictures into a rectangular grid; the last option allows
the user to freely manipulate the images.
7.5 Moving and Resizing Images
Figure 7.3: Moving and resizing an image.
Each image in a Workspace is equipped with three icons: an icon for
moving (no. 1, Fig. 7.3), an icon for resizing (no. 2, Fig. 7.3) and an icon
which allows the user to close the image. When the Free Layout is selected,
using the icons is intuitive and similar to moving and resizing windows in a
usual windowing environment. If a grid layout is selected, the resizing icon
manipulates the image's row or column in the layout. The move icon allows
the user to drag an image onto a different image and switch the two of them.
If the image is not dragged onto the position of another image, the move
action has no effect. To distinguish between these two situations, the image
is colored while being moved: if it is over a different image and can be
switched, it is colored green; if it is at a position that is not accepted,
it is colored red.
Figure 7.4: Creating a Workspace.
7.6 Managing Workspaces
A new Workspace can be created, when the Workspace-Explorer (2) is selected
(green border), by clicking the "New Workspace" (1) button or by double-clicking
the Workspace-Explorer. The Workspace-Explorer allows easy switching among
opened Workspaces.
7.7 Multi-Planar Reconstruction
Multi-Planar Reconstruction is a way of displaying an MRI image in three
different slices. For each slice, the positions of the other two slices are
indicated by two perpendicular lines. When the user clicks at some point in a
slice, the two other slices are recomputed so that both intersect the point in
two perpendicular planes.
A Multi-Planar Reconstruction can be performed by selecting an image
and then clicking the "Multi-Planar Reconstruction" button.
Three images will be located in the new Workspace. Each image contains
two perpendicular lines highlighting the positions of the two other images.
Figure 7.5: Multi-Planar Reconstruction.
7.8 Animations
Multi-frame images consisting of views of the same location at different time
intervals can be viewed using the Animation feature. When an image is selected,
there is an "Animation" check-box in the Info-Panel. If the Animation
check-box is selected, the image is iterated through its time-frames.
The time interval between two frames can be adjusted, as can the start
and end positions.
An animation can be exported to a video file by clicking "File → Save
workspace animation". The output file is an uncompressed AVI file.
7.9 Image Processing Extensions
Dicom-Presenter offers the use of external tools for image processing. When
an image is selected, there is an option "Plugins → Image Processing" in the
application top-bar. An XML file with the plugin description must be specified;
then the related GUI is prepared. Depending on the external library used, the
user might need to specify the inner part of the segmented area, the outer
part of the segmented area, and computation parameters. Instructions given by
the plugin will appear in the opened window. Once the required information is
given, the user can run the plugin. The resulting image, as well as the text
output of the library, will be shown in the GUI window. The user is able to
import the resulting image into Dicom-Presenter using the "Load Image in
presenter" button.
Figure 7.6: Creating a Multi-Planar Reconstruction Workspace.
Figure 7.7: Running an Animation.
Bibliography
[1] Badawi, S.: Computer Vision: C++ vs Java review. http://blog.samibadawi.com/2008/09/computer-vision-c-vs-java-review.html.
[2] Bankman, I. N.: Handbook of Medical Imaging. Volume 2. Medical Image Processing and Analysis. Academic Press, 2000.
[3] Benseler, J. S.: The Radiology Handbook: A Pocket Guide to Medical Imaging. Ohio University Press, 2006.
[4] Dunn, M.: Introduction to COM - What It Is and How to Use It. http://www.codeproject.com/Articles/633/Introduction-to-COM-What-It-Is-and-How-to-Use-It.
[5] Flaška, J.: Processing and Visualization of Magnetic Resonance Data. Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, 2010, Bachelor's Thesis.
[6] Flaška, J.: Software Instrument for MRI Data Manipulation and Visualization. Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, 2011, Research Project.
[7] Fowler, M.: Refactoring: Improving the Design of Existing Code. Addison-Wesley, 1999.
[8] Fowler, M.: Patterns of Enterprise Application Architecture. Addison-Wesley, 2002.
[9] Gamma, E.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
[10] GIMP developers: Plug-In Development. http://developer.gimp.org/plug-ins.html.
[11] Loucký, J.: Applications of Graph Cuts for Segmentation of Images and Data from MRI. Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, 2010, Bachelor's Thesis.
[12] Máca, R.: Degenerate Diffusion Methods in Computer Image Processing and Applications. Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, 2010, Master's Thesis.
[13] Mens, T.; Tourwe, T.: A Survey of Software Refactoring. IEEE Transactions on Software Engineering, 2004.
[14] Microsoft: Microsoft Developer Network. 2013, http://msdn.microsoft.com/en-us.
[15] Mihaescu, G.: A simple plug-in architecture pattern for C++ applications on Win32. Technology Articles, 2006, http://www.abstraction.net/ViewArticle.aspx?articleID=67.
[16] Neškudla, P.: Zpracování a vizualizace dat z MRI [Processing and visualization of MRI data]. Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, 2010, Master's Thesis.
[17] Norvig, P.: Design Patterns in Dynamic Languages. O'Reilly, 1998.
[18] Šonka, M.; Fitzpatrick, J. M.: Handbook of Medical Imaging. Volume 2. Medical Image Processing and Analysis. SPIE - The International Society for Optical Engineering, 2004.
[19] Pianykh, O. S.: Digital Imaging and Communications in Medicine (DICOM). Springer, 2008.
[20] Rae, M. E.: Microsoft: Working With COM Objects - Part I. 2010, http://www.dreamincode.net/forums/topic/165742-microsoft-working-with-component-object-model-objects-part-i/.
[21] Sethian, J. A.: Level Set Methods. Cambridge University Press, 1996.
[22] Shreiner, D.: OpenGL Reference Manual: The Official Reference Document to OpenGL, Version 1.4. Addison-Wesley, 2004.
[23] Tenouk: The Component Object Model - COM Part 1. 2010, http://www.tenouk.com/visualcplusmfc/visualcplusmfc23.html.
[24] Tibell, J.: Tagged unions. http://bytes.com/topic/c/answers/514520-tagged-unions.
[25] Vandevoorde, D.: Plugins in C++. The C++ Standards Committee, 2006, JTC1/SC22/WG21 - Papers 2006.
[26] Vyšín, L.: Zobrazování pomocí magnetické rezonance - analýza hydratace CNS u drobných hlodavců [Magnetic resonance imaging - analysis of CNS hydration in small rodents]. Faculty of Biomedical Engineering, Czech Technical University in Prague, 2006, Bachelor's Thesis.
Online References
[27] Agnosco DICOM Viewer. http://www.e-dicom.com/.
[28] Boost: C++ Libraries. http://boost-extension.redshoelace.com/.
[29] Cg toolkit. https://developer.nvidia.com/cg-toolkit.
[30] CT Scan vs MRI. http://www.diffen.com/difference/CT_Scan_vs_MRI.
[31] DICOM sample image sets. http://www.osirix-viewer.com/datasets/.
[32] Extensible Markup Language. http://www.w3.org/XML/.
[33] Ginkgo CADx DICOM Viewer. http://ginkgo-cadx.com/en/.
[34] GLEW: The OpenGL Extension Wrangler Library. http://glew.sourceforge.net/.
[35] The GTK+ Project. http://www.gtk.org/.
[36] I Do Imaging. http://idoimaging.com/.
[37] InVesalius DICOM Viewer. http://svn.softwarepublico.gov.br/trac/invesalius.
[38] IrfanView Website. http://www.irfanview.com/plugins.htm.
[39] JiveX DICOM Viewer. http://www.visus.com/cz/download/free-dicom-viewer.html.
[40] MedINRIA DICOM Viewer. http://med.inria.fr/.
[41] The Official YAML Web Site. http://www.yaml.org/.
[42] Onis DICOM Viewer. http://www.onis-viewer.com/.
[43] OpenGL Homepage. http://www.opengl.org/.
[44] Pluma Framework. http://pluma-framework.sourceforge.net/.
[45] Qt Designer Manual. http://qt-project.org/doc/qt-4.8/designer-manual.html.
[46] Qt Designer's UI File Format. http://qt-project.org/doc/qt-4.8/designer-ui-file-format.html.
[47] Qt Framework. http://qt.digia.com/.
[48] RadiAnt DICOM Viewer. http://www.radiantviewer.com/.
[49] Seg3D2 DICOM Viewer. http://www.sci.utah.edu/cibc/software/42-seg3d.html.
[50] Steve's Portable Game Library. http://plib.sourceforge.net/.
[51] Synedra View Personal DICOM Viewer. http://www.synedra.com/en/index.php.
[52] Weasis DICOM Viewer. http://www.dcm4che.org/confluence/display/WEA/Home.