Medical Image Analysis 6 (2002) 77–92
www.elsevier.com/locate/media
Automatic recognition of cortical sulci of the human brain using a
congregation of neural networks
Denis Rivière a,*, Jean-François Mangin a, Dimitri Papadopoulos-Orfanos a, Jean-Marc Martinez b, Vincent Frouin a, Jean Régis c

a Service Hospitalier Frédéric Joliot, CEA, 4 place du Général Leclerc, 91401 Orsay, France
b Service d'Étude des Réacteurs et de Mathématiques Appliquées, CEA, Saclay, France
c Service de Neurochirurgie Fonctionnelle et Stéréotaxique, La Timone, Marseille, France
Received 24 January 2001; received in revised form 18 June 2001; accepted 21 September 2001
Abstract
This paper describes a complete system allowing automatic recognition of the main sulci of the human cortex. This system relies on a
preprocessing of magnetic resonance images leading to abstract structural representations of the cortical folding patterns. The
representation nodes are cortical folds, which are given a sulcus name by a contextual pattern recognition method. This method can be
interpreted as a graph matching approach, which is driven by the minimization of a global function made up of local potentials. Each
potential is a measure of the likelihood of the labelling of a restricted area. This potential is given by a multi-layer perceptron trained on a
learning database. A base of 26 brains manually labelled by a neuroanatomist is used to validate our approach. The whole system
developed for the right hemisphere is made up of 265 neural networks. The mean recognition rate is 86% for the learning base and 76%
for a generalization base, which is very satisfying considering the current weak understanding of the variability of the cortical folding
patterns.  2002 Elsevier Science B.V. All rights reserved.
Keywords: Neural networks; Cortical sulci; Folding patterns; Automatic recognition system
1. Introduction
The development of image analysis methods dedicated
to automatic management of brain anatomy is a widely
addressed area of research. A number of works focus on
the notion of deformable atlases, which can be elastically
transformed to reflect the anatomy of new subjects. An
exhaustive bibliography of this approach initially proposed
by Bajcsy and Broit (1982) is largely beyond the scope of
this paper (see (Thompson et al., 2000) for a recent
review). The complexity and the striking inter-individual
variability of the human cortex folding patterns, however,
have led several groups to question the behaviour of the deformable atlas framework at the cortex level (Mangin et al., 1995b; Collins et al., 1998; Hellier and Barillot, 2002; Lohmann and von Cramon, 2000; Cachier et al., 2001).

*Corresponding author. Tel.: +33-1-6986-7852; fax: +33-1-6986-7868. E-mail address: riviere@shfj.cea.fr (D. Rivière); http://www-dsv.cea.fr.
Two main issues have to be addressed:
1. What are the features of the cortex folding patterns
which should be matched across individuals? While
some sulci clearly belong to this set of landmark
features because they are usually considered as
boundaries between different functional areas, nobody
knows to which extent secondary folds should play the
same role (Welker, 1989; Régis et al., 1995). Some answers to
this important issue could stem from foreseeable advances in mapping brain functional organization (Watson et al., 1993) and connectivity (Poupon et al., 2001). While
the number of reliable landmarks to be matched is today
relatively limited, comparison of deformable atlas methods at the cortex level should focus on the pairing of
these landmarks.
2. Deformable atlas methods rely on the optimization of
some function which realizes a trade-off between
similarity to the new brain and deformation cost.
Whatever the approach, the function driving the deformations is non-convex. When high-dimensional deformation fields are used, this non-convexity turns out
to be particularly problematic since standard optimization approaches are bound to lead to a local optimum.
While multi-resolution methods may guarantee that an
‘interesting optimum’ is found, the complexity of the
cortical folding patterns implies that a lot of other
similar optima exist. An important issue is raised by
this observation: is the global optimum the best one
according to the pairing of sulcal landmarks? The
answer to this issue should be taken into account when
comparing various approaches.
To overcome some of the difficulties related to the nonconvexity of the problem, several teams have proposed to
design composite similarity functions relying on manual
identifications of the main sulci (Thompson and Toga,
1996; Collins et al., 1998; Vaillant and Davatzikos, 1999).
These composite functions impose the pairing of homologous sulcal landmarks. While a lot of work remains to be
done along this line, this evolution seems required to adapt
the deformable atlas paradigm to the human cortex. This
new point of view implies a preprocessing of the data in
order to extract and identify automatically these sulcal
landmarks, which is the subject of our paper.
The various issues mentioned above have led us to
initiate a long term project aiming first at a better understanding of the cortical folding patterns (Mangin et al.,
1995a; Régis et al., 1995), and second at the automatic
identification of the main sulci (Mangin et al., 1995b).
During a feasibility study, this project led to a first
generation of image analysis tools extracting automatically
each cortical fold from a T1-weighted MR image. Then, a
sophisticated browser allowed our neuroanatomist to navigate through various 3D representations of the cortical
patterns in order to identify the main sulci. This visualization tool led to the creation of a database of brains in
which a name was given to each fold. This database was
used to train an automatic sulcus recognition system based
on a random graph model. Any cortical folding pattern was
considered as a realization of this model, which led us to
formalize the recognition process as a consistent labelling
problem. The solution was obtained from a maximum a
posteriori estimator designed in a Markovian framework.
While this first tool generation has been used for four years
for the planning of depth electrode implantation in the
context of epilepsy surgery (about 40 operations), a
number of serious flaws had to be overcome to allow a
wider use of the toolbox. This paper gives an overview of
the second tool generation with emphasis on the more
important improvement, which consists in using standard
neural nets to build a better model of the random graph
probability distribution.
Our approach may be considered as a symbolic version
of the deformable atlas approach. The framework is made
up of two stages. An abstract structural representation of
the cortical topography is extracted first from each new
T1-weighted MR image. This representation is supposed to
include all the information required to identify sulci. A
contextual pattern recognition method is then used to label
automatically cortical folds. This method can be interpreted as a graph matching approach. Hence, the usual
iconic anatomical template is replaced by an abstract
structural template. The one to many matching between the
template nodes and the nodes of one structural representation is simply a labelling operation. This labelling is driven
by the minimization of a global function made up of local
potentials. Each local potential is a measure of the
likelihood of the labelling of a restricted cortex area. This
potential is given by a virtual expert in this area made up
of a multi-layer perceptron trained on a learning database.
While the complexity of the preprocessing stage required by our method may appear as a weakness compared
to the straightforward use of continuous deformations, it
results in a fundamental difference. While the evaluation of
functions driving continuous deformations is costly in
terms of computation, the function used to drive the
symbolic recognition relies on only a few hundred labels
and can be evaluated at a low cost. Hence, stochastic
optimization algorithms can be used to deal with the
non-convexity problems. In fact, working at a higher level
of representation leads to more efficiency for the pattern
recognition process, which explains an increasing interest
in the community (Lohmann and von Cramon, 1998, 2000;
Le Goualher et al., 1998, 1999).
In the following, the second section summarizes the
main steps of the preprocessing stage. The third section
gives an overview of the building-up of a database of
manually labelled brains used to teach cortical anatomy to
the pattern recognition system. The fourth section introduces the probabilistic framework underlying the graph
matching procedure. The fifth section focuses on the
training of the artificial neural networks. The sixth section
describes the stochastic minimization heuristics and some
results. Finally, the last section highlights the fact that
improving the current system will require collaborative
work with various neuroscience teams.
2. The preprocessing stage
This section describes briefly the robust sequence of
treatments that automatically converts a T1-weighted MR
image into an abstract structural representation of the
cortical topography. The whole sequence requires about
half an hour on a conventional workstation. All the steps
have been validated with at least 50 different images, some
of them with several hundred. These images have been
acquired with 6 different scanners using various MR
sequence parameters. Several experiments have led us to
select inversion–recovery sequences as the best choice for
our purpose. Most of the treatments rely on several years
of fine tuning which assures today a robust behaviour with
non-pathological images. Further work has to be done to
deal with the pathologies that invalidate some of our
assumptions. The system should soon be complemented with an interface allowing a step-by-step check of intermediate results and proposing alternative treatments in case of
problems. The following descriptions focus on the main
ideas behind each treatment. Most of the refinements
added to get robust behaviour are beyond the scope of the
paper.
2.1. Bias correction (Fig. 1(B))
The first step aims at correcting the standard inhomogeneities in MR images. This is achieved using a smooth
multiplicative field which minimizes the entropy of the
corrected image intensity distribution. This method can be
used without adaptation with various MR sequences
because the underlying hypothesis is only low entropy of the actual distributions of each tissue class (Mangin, 2000; Likar et al., 2000).

Fig. 1. A sketch of the sequence of image analysis treatments (G and J 3D renderings represent views from inside white matter).
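The entropy criterion can be made concrete with a few lines of code. The sketch below is only an illustration of the idea (the function names and the use of a precomputed candidate field are our own assumptions, not the authors' implementation): it evaluates the histogram entropy of an image divided by a smooth multiplicative field, and the correction then amounts to searching for the field parameters that minimize this value.

```python
import numpy as np

def corrected_entropy(image, bias_field, bins=256):
    """Entropy of the intensity histogram of an image divided by a
    smooth multiplicative bias field.  A good field sharpens the tissue
    peaks and therefore lowers this entropy."""
    corrected = image / np.maximum(bias_field, 1e-6)
    hist, _ = np.histogram(corrected, bins=bins)
    p = hist[hist > 0].astype(float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

# Hypothetical usage: search over the parameters of a smooth field
# (e.g. a low-order polynomial in the voxel coordinates) for the
# parameter values minimizing corrected_entropy(image, field).
```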
2.2. Histogram analysis (Fig. 1(C))
The second step leads to estimations of the gray and
white matter mean and standard deviations. It relies on a
scale-space analysis of the histogram which is robust to
modifications of the MR sequence (Mangin et al., 1998).
2.3. Brain segmentation (Fig. 1(D))
The parameters given by the previous step are used to
segment the brain. This result is obtained following the
standard mathematical morphology sketch (erosion, selection of the largest connected component, reconstruction).
Two important refinements have been added for robustness: a regularized binarization using a standard Markov
field based model, and additional morphological treatments
to prevent morphological opening of thin gyri (Mangin et
al., 1998).
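As an illustration of the morphological sketch mentioned above (binarization, erosion, selection of the largest connected component, geodesic reconstruction), a minimal version could be written as follows. The threshold choice and parameter values are assumptions made for the sake of the example; in particular, the simple threshold stands in for the regularized Markov-field binarization used by the authors.

```python
import numpy as np
from scipy import ndimage

def brain_mask(image, gray_mean, gray_std, erosion_iterations=2):
    """Classical morphological brain extraction: binarize, erode to cut
    thin connections with surrounding tissues, keep the largest
    connected component, then reconstruct it inside the binarization."""
    # 1. Crude binarization from the histogram-derived statistics (the
    #    paper uses a regularized Markov-field binarization instead).
    binary = image > (gray_mean - 2.0 * gray_std)

    # 2. Erosion to disconnect the brain from the rest of the head.
    struct = ndimage.generate_binary_structure(image.ndim, 1)
    eroded = ndimage.binary_erosion(binary, struct, iterations=erosion_iterations)

    # 3. Largest connected component of the eroded object.
    labels, n = ndimage.label(eroded)
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(eroded, labels, index=range(1, n + 1))
    seed = labels == (int(np.argmax(sizes)) + 1)

    # 4. Geodesic reconstruction: conditional dilation of the seed
    #    inside the initial binarization, repeated until stability.
    return ndimage.binary_dilation(seed, struct, iterations=-1, mask=binary)
```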
2.4. Hemisphere separation (Fig. 1(E))
A second sequence of morphological processing is used
to separate both hemispheres from the rest of the brain.
This algorithm, which is similar to the previous one, is
applied to a regularized segmentation of white matter. A
priori knowledge on the brain orientation is used to select
the seeds which are reconstructed to get three objects: the
white matter of each hemisphere and the cerebellum / stem
white matter. A second reconstruction recovers the gray
matter of each object (Mangin et al., 1996). A standard
affine spatial normalization could be used in the future to
get a rough mask of the hemispheres that may be used to
increase the robustness of the seed selection (Friston et al.,
1995). All the following steps are applied independently to
each hemisphere.
2.5. The gray/CSF union (Fig. 1(F))
This step aims at segmenting an object with a spherical
topology. Its external interface is the hemisphere hull
defined by a morphological closing and its internal interface is the gray / white boundary. This segmentation is
achieved using a sequence of homotopic deformations of
the hemisphere bounding box (Mangin et al., 1995a). The
topological constraints assure the robustness of the following treatments. The detection of the gray / white boundary
relies on the minimization of a Markov field like global
energy including the usual regularization provided by the
Ising model.
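The paper does not write this energy out explicitly; a generic form of such an Ising-regularized Markov-field energy, in our own notation rather than the authors', would be

```latex
U(x) = \sum_{s} \phi\bigl(y_s \mid x_s\bigr)
     + \beta \sum_{(s,t) \in \mathcal{N}} \mathbf{1}\,[x_s \neq x_t]
```

where x_s is the tissue label (gray or white) of voxel s, y_s its intensity, phi a data-attachment term derived from the tissue statistics of Section 2.2, N the set of neighbouring voxel pairs, and beta the weight of the Ising regularization.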
2.6. Skeletonization (Fig. 1(G))
The gray / CSF object provided by the previous step is
skeletonized. This skeletonization is done using a
homotopic erosion that preserves the initial topology. An
important refinement relative to our previous work (Mangin et al., 1995a) is the use of a watershed like algorithm
embedded in the erosion process. The landscape driving
the water rise is the mean curvature of the MR image
isosurfaces, which is used to mark ridges corresponding to
the medial localization of cortical folds. Topologically
simple points (Malandain et al., 1993) are iteratively
removed from the initial object according to a sequence of
increasing altitudes. As soon as a point verifies the
topological characterization of surface points (Malandain
et al., 1993), it is preserved until the end of the process.
Some pruning procedures remove curves from the final
result in order to yield a skeleton made up of discrete
surfaces.
2.7. Simple surfaces (Fig. 1(H,I))
Skeleton points connected to the outside are first
gathered to represent the hemisphere hull. The remaining
part of the skeleton is then segmented into topologically
simple surfaces, which will represent cortical folds. This
algorithm relies on the topological characterization proposed by Malandain et al. (1993). Simple surfaces are
defined from an equivalence relationship defined for a set
of surface points. A refinement relative to previous work
(Malandain et al., 1993; Mangin et al., 1995a) consists of
an erosion of the initial set of skeleton surface points at the
level of junction points. This erosion aims at improving the
robustness of the split. The standard equivalence relationship then provides simple surface seeds. A morphological
reconstruction yields the complete simple surfaces.
2.8. Buried gyri (Fig. 1(J))
The previous segmentation of the skeleton is not sufficient to separate all of the cortical sulci. Indeed, some of
the simple surfaces sometimes include several sulci, which
is not tractable for our symbolic recognition process.
According to our anatomical research hypothesis (Régis et al., 1995; Manceaux-Demiau et al., 1997), this situation is
related to the fact that some gyri can be buried in the depth
of the folds. Since our recognition process is based on a
labelling using the sulcus names, we have to assure as far
as possible that the preprocessing yields an oversegmentation of the sulci. Therefore, the previous simple surfaces
are split according to a detection of putative buried gyri. In
our opinion, these gyri can be revealed by two kinds of
clues: local minima of the geodesic depth along the bottom
of the fold, and points with negative Gaussian curvature on
the gray / white boundary. This point of view, which is
related to the approach of Lohmann and von Cramon
(1998), led us to design the following algorithm, which is
inspired by the usual morphological construction of the
catchment basins dual to a watershed line. First, points of
the gray / white interface having a negative Gaussian
curvature are removed from the gray / CSF object. Then,
consistent local maxima of the distance to the hull
geodesic to the remaining gray / CSF domain are detected.
They represent the seeds of the catchment basins. The
basins are then reconstructed following the usual water rise
approach using the inverse of the previous distance for the
altitude. Finally, simple surfaces which belong to several
catchment basins are split according to the basin parcellation.
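The seed-based water rise can be sketched with off-the-shelf tools, as shown below. The depth map, the fold mask and the maxima detection are simplified placeholders for the "consistent local maxima" and hull-geodesic distance described above, so this is an illustration of the construction rather than the authors' algorithm.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def catchment_basins(geodesic_depth, fold_mask):
    """Seed-based water rise: seeds are local maxima of the geodesic
    depth to the hemisphere hull, and basins are grown by flooding the
    inverted depth, restricted to the fold domain."""
    # Local maxima of the depth map (a crude stand-in for the
    # 'consistent local maxima' detection described in the text).
    footprint = np.ones((3,) * geodesic_depth.ndim, dtype=bool)
    is_max = geodesic_depth == ndimage.maximum_filter(geodesic_depth,
                                                      footprint=footprint)
    seeds, _ = ndimage.label(is_max & fold_mask)

    # Water rise on the inverted depth: the deepest points are flooded
    # first; simple surfaces intersecting several basins can then be
    # split according to the basin parcellation.
    return watershed(-geodesic_depth, markers=seeds, mask=fold_mask)
```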
2.9. Graph construction (Fig. 2)
The objects provided by the last step are finally gathered
in a structural representation which describes their relationships. Three kinds of links are created between these nodes
(cf. Fig. 2): rT links represent splits related to the simple
surface definition; rP links represent splits related to the
presence of a putative buried gyrus (the ‘pli de passage’
anatomical notion (Régis et al., 1995)); and rC links
represent a neighborhood relationship geodesic to the
hemisphere hull. This last type of link is inferred from a
Voronoï diagram computed conditionally to the hemisphere hull using the set of junctions between hull and
nodes as seeds (Mangin et al., 1995a). The resulting graph
is enriched with numerous semantic attributes which will
be used by the recognition system. Some of these attributes
are computed relative to the well-known Talairach reference system, which is computed from the manual selection
of anterior and posterior commissures but will be inferred
automatically from virtual spatial normalization in the
future (Talairach and Tournoux, 1988). Nodes are described by their size, minimal and maximal depth, gravity
center localization, and mean normal. Links of type rT and
rP are described by their length, extremity localizations,
minimal and maximal depth, and mean direction. Links of
type rC are described by their size and the localization of
the closest points of the linked nodes. The resulting
attributed graph is supposed to include all the information
required by the sulcus recognition process.
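A minimal data structure for such an attributed relational graph might look like the sketch below; the class and field names are ours, chosen only to illustrate the attributes listed above.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class FoldNode:
    """A cortical fold (topologically simple surface) and its attributes."""
    size: float                                  # number of skeleton voxels
    depth_min: float
    depth_max: float
    gravity_center: Tuple[float, float, float]   # Talairach coordinates
    mean_normal: Tuple[float, float, float]
    label: str = "unknown"                       # sulcus name set by recognition

@dataclass
class FoldLink:
    """A relation between two folds: 'rT' (simple surface split),
    'rP' (putative buried gyrus) or 'rC' (cortical neighborhood)."""
    kind: str
    nodes: Tuple[int, int]                       # indices of the linked FoldNodes
    attributes: Dict[str, float] = field(default_factory=dict)

@dataclass
class FoldGraph:
    nodes: List[FoldNode] = field(default_factory=list)
    links: List[FoldLink] = field(default_factory=list)
```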
3. The learning database
Our preprocessing tool can be viewed as a compression
system which provides for each individual brain a synthetic description of the cortex folding patterns. A sophisticated 3D browser allows our neuroanatomist to label
manually each node with a name chosen in a list of
anatomical entities. The lack of a validated explanation of
the structural variability of the human cortex is an important problem during this labelling. Indeed, standard
sulci are often split into several folds with various connections, which leads to ambiguous configurations (Ono et
al., 1990).
It has to be understood that this situation prevents the
definition of an unquestionable gold standard to be reached
by any sulcus recognition method. Therefore, one of the
aims of our research is to favour the emergence of new
anatomical descriptions relying on smaller sulcal entities
than the usual ones. According to different arguments that
would be too long to develop in this paper, these units, the
primary cortical folds that appear on the fœtal cortex, are
stable across individuals; a functional delimitation meaning
is probably attached to them (Régis et al., 1995). During later stages of brain growth, some of these sulcal roots
merge with each other and form different patterns depending on the subjects. The more usual patterns correspond to
the usual sulci. In our opinion, some clues on these sulcal
root fusions can be found in the depth of the sulci (Fig. 5).
Fig. 2. A subset of the final structural representation.

A model of these sulcal roots derived from our anatomical research has been used to label 26 right hemispheres. This model shares striking similarities with the model recently proposed by Lohmann and von Cramon
(1998, 2000). This new type of anatomical model, however, requires further validations before being properly
used by neuroscientists. Therefore, the results described in
the following have been obtained after a conversion of this
fine grain labelling to the standard (Ono et al., 1990),
which will allow comparisons to other group’s works. This
choice leads to a list of 60 names for each hemisphere,
where each name represents one standard sulcus or one
usual sulcus branch.
The 26 right hemispheres have been randomly separated
into three bases: a learning base made up of 16 brains is
used to train the local experts leading to the inference of a
global probability distribution; a test base of five brains is
used to stop the training before over-learning; and finally, a
generalization base of five brains is used to assess the
actual recognition performance of the system. We encourage the reader to study Figs. 3 and 4, which give an idea of
the variability of the folding patterns. Of course, our
manual labelling can not be considered as a gold standard
and could be questioned by other anatomists. It has to be
noted, however, that a lot of information used to perform
the manual recognition is concealed in the depth of the
sulci.
Fig. 3. A survey of the labelled database. The first three rows present nine brains of the learning base, the fourth row presents three brains of the test base, and the last row presents three brains of the generalization base. Each color labels one entity of the anatomical model. Several hues of the same color are used to depict different roots or stable branches of one given sulcus. For instance, color codes of main frontal sulci are: 2 reds = central, 5 yellows = precentral, 3 greens = superior, 2 blues = intermediate, 4 purples = inferior, 8 blues = lateral fissure, red = orbitary, rose = marginal, yellow = transverse.
Fig. 4. A survey of the labelled database which provides an idea of inter-individual variability in areas not covered by Fig. 3.
Fig. 5. The sulcal root model in the temporal lobe. Left: a virtual representation where only sulcal roots are drawn on an adult size brain. It should be noted that this configuration can not be observed during brain growth because some sulcal root merges occur before the appearance of the whole set of roots. Right: a usual actual anatomical configuration at adult age where potentially buried gyri are indicated by a double arrow.
4. The random graph and Markovian models
The structural model underlying our pattern recognition
system is a random graph, namely a structural prototype
whose vertices and relations are random variables (Fig. 6).
In order to allow vertices and relations of the random
graph to yield sets of several nodes or several links in
individual brains, the classical definition proposed by
Wong and You (1985) is extended by substituting the
monomorphism by a homomorphism (Mangin et al.,
1995b). The recognition process can be formalized as a
labelling problem, where a label is associated with each
vertex of the random graph. Such a labelling of the nodes
of an individual graph, indeed, is equivalent to a homomorphism towards the random graph. Hence, the sulcus
recognition problem amounts to searching for the labelling
with the maximum probability. For the application to the
right hemisphere described in this paper, the random graph
is made up of 60 vertices corresponding to the 60 names
used to label the database.
Once a new brain has been virtually oriented according
to a universal frame, in our case the Talairach system, the
cortical area where one specific sulcus can be found is
relatively small. This localization information can already
lead to interesting recognition results (Le Goualher et al.,
1998, 1999). Localization, however, is largely insufficient
to perform a complete recognition. Indeed, a lot of
discriminating power only stems from contextual information. This situation has led us to introduce a Markovian
framework (Mangin et al., 1995b) to design an estimator of
the probability distribution associated with the random
graph. This framework provides us with a very flexible
model: Gibbs distributions relying on local potentials
(Geman and Geman, 1984). These potentials are inferred
from the learning base. They embed interactions between
the labels of neighboring nodes. These interactions are
related to contextual constraints that must be adhered to in
order to get anatomically plausible recognitions.
During our past experiments (Mangin et al., 1995b), the
system potentials were designed as simple ad hoc functions. Various failures of the global system rapidly led us to
the firm belief that the complex dependencies between the
pattern descriptors used to code sulcus shapes require a
more powerful approach. Neural nets represent an efficient
approach to the approximation of complex functions.
Hence, each potential of the current system is now given
by a multi-layer perceptron (MLP) (Rumelhart et al.,
1986). Each perceptron may be considered as a virtual
expert of some local anatomical feature. The choice of
MLPs mainly stems from the fact that they have led to a
lot of successful applications, which implies that a large
amount of information on their behaviour can be found in
the literature (Orr and Müller, 1998).
Two families of potentials are designed. The first family
evaluates the sulcus shapes and the second family evaluates the spatial relationships of pairs of neighboring sulci.
Hence, the first family is associated with the random graph
vertices, while the second family is associated with the
random graph relations. Each potential depends only on
the labels of a localized set of nodes, which corresponds to
the Markov field interaction clique (Geman and Geman,
1984). For a given individual graph, each clique corresponds to the set of nodes included in the field of view of
the underlying expert (Fig. 7). For sulcus experts, this field
of view is defined from the learning base as a parallelepiped of the Talairach coordinate system. The parallelepiped is the bounding box of the sulcus instances in the
learning base computed along the inertia axes of this
instance set.
For sulcus pair relationship experts, the field of view is
simply the union of the fields of view of the two related
sulcus experts. Pairs of sulci are endowed with an expert if
at least 10% of the learning base brains possess an actual
link between the two related sulci in the structural representation (cf. Fig. 2). For the model of the right hemisphere described in this paper, this rule leads to 205 relationship experts. The whole system, therefore, is made up of a congregation of 265 experts, each expert $e$ being in charge of a potential $P_e$. The expert single opinions are gathered by the Gibbs distribution $\frac{1}{Z}\exp\{-\sum_e P_e(l)\}$, which gives the likelihood of a global labelling $l$ ($Z$ is a normalization constant). Hence, the sulcus recognition amounts to minimizing the sum of all of the perceptron outputs.

Fig. 6. A small random graph (left) and one of its realizations, an attributed relational graph representing one individual cortical folding pattern (right). The a_i represent vertices of the random graph, while the b_ij represent relations. Realizations of a_i are sets of nodes (SS_i^k) representing folds, while realizations of b_ij are sets of links (r_ij^k) representing junctions, 'plis de passage' and gyri.

Fig. 7. 60 sulcus experts and 205 relationship experts are inferred from the learning base. Each expert evaluates the labelling of the nodes included in its field of view.
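In other words, the recognition criterion is simply an additive energy over expert outputs. A minimal sketch is given below; the confidence weighting and output scaling introduced in Section 6.1 are omitted, and the types are our own simplification.

```python
from typing import Callable, Dict, Sequence

# An expert maps the current labelling (one sulcus name per node index)
# to its potential P_e, evaluated on the nodes of its field of view.
Expert = Callable[[Dict[int, str]], float]

def global_energy(experts: Sequence[Expert], labelling: Dict[int, str]) -> float:
    """U(l) = sum_e P_e(l).  The Gibbs likelihood is proportional to
    exp(-U(l)), so recognition amounts to minimizing this sum."""
    return sum(expert(labelling) for expert in experts)
```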
5. Expert training
5.1. MLP topology and pattern coding
The choice of MLP topology (number of layers, number
of neurons in each layer, connectivity) is known to be a
difficult problem without general solution. For our application where a lot of different MLPs have to be designed, an
adaptive strategy may have been the best choice. In the
following, however, only two different topologies will be
used: one for sulcus experts and one for relationship
experts. The small size of our learning database, indeed,
prevents a consistent adaptive strategy to be developed.
Different experiments with a few experts have led us to
endow our perceptrons with two hidden layers and one
output neuron.
The first hidden layer is not fully connected to the input
layer, which turned out to improve the generalization
power of the networks used by our application. This first
hidden layer is split into several blocks fed by a specific
subset of inputs with a related meaning (see Fig. 7). This
sparse topology largely reduces the number of weights to
be estimated by the backpropagation algorithm used to
train the MLPs (Rumelhart et al., 1986). Some experiments beyond the scope of this paper have shown that this
choice usually leads to a restricted area of low potential
(good patterns), which was not necessarily the case with a
fully connected network. Finally, first and second layers
are fully connected, and neurons of the second layer are all
connected to the output neuron.
The numbers of neurons in each layer are the following:
(27–44–8–1) for sulcus experts and (23–32–5–1) for
relationship experts. Once again, this ad hoc choice stems
from experiments with a few experts. While smaller
networks can lead to good results for some experts in
charge of simple pattern recognition tasks, other experts
seem to require large networks to perform their task
correctly. Anyway, since our training process includes a
protection against overlearning, our system is robust to
over-proportioned networks.
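One possible reading of this topology is sketched below: a two-hidden-layer perceptron whose first hidden layer is organized in blocks, each block seeing only one group of descriptors. The block sizes follow the sulcus-expert description of Section 5.1.1 (the single empty-instance Boolean, which feeds every first-layer neuron, is omitted for brevity), and the weights are random placeholders rather than trained values; this is an illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BlockSparseMLP:
    """Forward pass of a perceptron with two hidden layers whose first
    hidden layer is split into blocks, each block fed only by one group
    of input descriptors.  Training would use standard backpropagation."""

    def __init__(self, input_blocks, hidden_blocks, n_hidden2=8, seed=0):
        rng = np.random.default_rng(seed)
        # One (weights, bias) pair per block of the first hidden layer.
        self.blocks = [
            (rng.normal(scale=0.1, size=(n_in, n_h)), np.zeros(n_h))
            for n_in, n_h in zip(input_blocks, hidden_blocks)
        ]
        n_h1 = sum(hidden_blocks)
        self.W2 = rng.normal(scale=0.1, size=(n_h1, n_hidden2))   # fully connected
        self.b2 = np.zeros(n_hidden2)
        self.w_out = rng.normal(scale=0.1, size=n_hidden2)        # single output
        self.b_out = 0.0

    def forward(self, x):
        # Split the descriptor vector into its blocks and feed each block
        # to its own subset of first hidden layer neurons.
        h1, start = [], 0
        for W, b in self.blocks:
            stop = start + W.shape[0]
            h1.append(sigmoid(x[start:stop] @ W + b))
            start = stop
        h1 = np.concatenate(h1)
        h2 = sigmoid(h1 @ self.W2 + self.b2)
        return sigmoid(h2 @ self.w_out + self.b_out)   # potential in [0, 1]

# Block sizes following the sulcus expert description of Section 5.1.1
# (localization, orientation, size, syntax); the empty-instance Boolean
# is left out, so the input has 26 descriptors instead of 27.
expert = BlockSparseMLP(input_blocks=[10, 7, 3, 6], hidden_blocks=[16, 8, 10, 10])
print(expert.forward(np.zeros(26)))
```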
Expert inputs are vectors of descriptors of the anatomical feature for which the expert is responsible. These
descriptors constitute a compressed code of sulcus shapes
and relationships. The descriptors are organized in consistent blocks which feed only one subset of the first
hidden layer. Sulcus shapes are summarized by 27 descriptors and sulcus relationships by 23 descriptors. These
descriptors are computed from a small part of the graph
corresponding to one single label (sulcus) or one pair of
labels (relationship). A few Boolean logical descriptors are
used to inform of the existence of a non-empty instance of
some anatomical entity (sulcus, junction with the hemisphere hull, actual link between two sulci, . . . ). Integer
syntactic descriptors and continuous semantic descriptors
are inferred from the attributes and the structure of the
subgraph to be analyzed. For instance, the size of a sulcus
is the sum of the sizes of all the nodes endowed with this
sulcus label. A detailed description of all the procedures
used to compute these descriptors is largely beyond the
scope of this paper (Rivière, 2000). The different blocks of descriptors are the following (the (N – N') notation means that N input neurons corresponding to N descriptors feed N' first hidden layer neurons).
5.1.1. Sulcus experts
5.1.1.1. Empty instance (1 – 36). One Boolean which
feeds all the first layer neurons informs on the existence of
an instance of the sulcus.
5.1.1.2. Localization (10 – 16). Gravity center, extremities
of the junction with brain hull, one Boolean informs on the
existence of a hull junction.
5.1.1.3. Orientation (7 – 8). Mean normal, mean direction
of the junction with brain hull, one Boolean informs on the
existence of a hull junction.
5.1.1.4. Size (3 – 10). Sulcus size, minimal and maximal
geodesic depth.
5.1.1.5. Syntax (6 – 10). Number of connected components using all links or only contact links; number of
non-contact links between contact related connected components, maximal gap between these components (continuous); number of internal links of ‘buried gyrus’ type.
5.1.2. Relationship experts
5.1.2.1. Empty instance (1 – 32). One Boolean which
feeds all the first layer neurons informs on the existence of
a link between both sulci.
5.1.2.2. First sulcus (3 – 6). Sulcus size, number of connected components, number of such components implied
in actual links between the sulci.
5.1.2.3. Second sulcus (3 – 6). Same as above for second
sulcus.
5.1.2.4. Semantic description (11 – 14). Minimal distance
between the sulci; semantic attributes of the contact link
(junction or buried gyrus): namely junction localization,
mean direction, distances between the contact point and
the closest sulcus extremities, respective localization of the
sulci, and angle between sulcus hull junctions.
5.1.2.5. Syntactic description (3 – 6). Number of contact
points, number of links of ‘buried gyrus’ type between the
sulci, minimal depth of such links (continuous).
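As a toy illustration of how such descriptors can be derived from the labelled subgraph, the function below computes two of them (the sulcus size as the sum of node sizes, and the number of connected components). The plain-dictionary encoding of the graph is our own simplification, not the representation used by the authors.

```python
from typing import Dict, List, Set, Tuple

def sulcus_descriptors(label: str,
                       node_labels: Dict[int, str],
                       node_sizes: Dict[int, float],
                       links: List[Tuple[int, int]]) -> Dict[str, float]:
    """Two descriptors of the subgraph carrying one sulcus label: total
    size (sum of node sizes) and number of connected components."""
    nodes = {n for n, l in node_labels.items() if l == label}
    total_size = sum(node_sizes[n] for n in nodes)

    # Connected components restricted to the sulcus subgraph.
    adjacency: Dict[int, Set[int]] = {n: set() for n in nodes}
    for a, b in links:
        if a in nodes and b in nodes:
            adjacency[a].add(b)
            adjacency[b].add(a)
    seen: Set[int] = set()
    n_components = 0
    for start in nodes:
        if start in seen:
            continue
        n_components += 1
        stack = [start]
        while stack:
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            stack.extend(adjacency[current] - seen)
    return {"size": total_size, "n_components": float(n_components)}
```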
5.2. Training
The supervised training of the experts relies on two
kinds of examples. Correct examples extracted from the
learning base must lead to the lowest output, namely the
null value. Counterexamples are generated from correct
examples through random modifications of some labels of
the clique nodes. For examples of a sulcus l, two random
numbers are used: n_a nodes are added to the sulcus correct pattern while n_d nodes are deleted. For examples of a
relationship (l_1, l_2), the two sulci are corrupted simultaneously. In order to obtain a good sampling of the space surrounding the correct pattern domain, the previous numbers are drawn from a distribution which favours
small numbers. For the same reason, in half of the cases,
the nodes to be added to the sulcus have to be chosen
randomly only among the nodes linked with a node of the
sulcus correct pattern. For the rest of the cases, they are
chosen randomly among all the nodes of the clique.
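A sketch of this counter-example generation is given below. Several details are not specified in the paper, so the choices made here (a geometric-like draw favouring small numbers, deleted nodes falling back to the 'unknown' label) are assumptions made for illustration only.

```python
import random
from typing import Dict, List, Set

def draw_small_number(rng: random.Random, p: float = 0.5) -> int:
    """Draw from a distribution favouring small numbers (the exact
    distribution is not specified in the paper; this geometric-like
    draw is our own choice)."""
    n = 0
    while rng.random() > p:
        n += 1
    return n

def corrupt_sulcus(labelling: Dict[int, str], sulcus: str,
                   clique_nodes: List[int], neighbours: Dict[int, Set[int]],
                   rng: random.Random) -> Dict[int, str]:
    """One counter-example for a sulcus expert: n_a nodes are added to
    the correct pattern and n_d nodes are deleted from it."""
    corrupted = dict(labelling)
    correct = [n for n in clique_nodes if labelling[n] == sulcus]
    n_a = draw_small_number(rng)
    n_d = min(draw_small_number(rng), len(correct))

    # Deleted nodes fall back to the 'unknown' label.
    for n in rng.sample(correct, n_d):
        corrupted[n] = "unknown"

    # In half of the cases, added nodes are restricted to nodes linked
    # with a node of the correct pattern; otherwise any clique node.
    if rng.random() < 0.5:
        candidates = [n for n in clique_nodes
                      if labelling[n] != sulcus
                      and any(m in correct for m in neighbours.get(n, set()))]
    else:
        candidates = [n for n in clique_nodes if labelling[n] != sulcus]
    for n in rng.sample(candidates, min(n_a, len(candidates))):
        corrupted[n] = sulcus

    return corrupted
```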
Unfortunately, the blind generation of counterexamples
sometimes yields ambiguous patterns. For instance, if a
small branch is added to a correct sulcus pattern, the
resulting example may still be considered as valid from the
anatomical point of view. If many such ambiguous examples are presented to the expert as incorrect, the result of
the training is unpredictable (like for a human expert).
This difficulty is overcome via the use of a rough
continuous distance between the correct example and the
generated counterexample. For sulcus experts, this distance
is made up of the variation of the total sulcus size added to
the variation of the number of connected components
multiplied by an ad hoc weighting factor. For relationship
experts, a similar distance is defined by the variation of the
total size of the links implied in the relationship. These
distances are used to choose the output taught to the
perceptron during the training. The ad hoc rule used to compute this output is $\text{output} = \frac{1}{1+\exp(-d/100)}$, where $d$ is this distance. Hence, small distances lead to intermediate outputs (0.5) while larger
distances lead to the highest output (1). This means that
the output taught for ambiguous examples is lower than for
the reliable counterexamples, which clarifies the situation.
Indeed, if the domain of correct examples is corrupted by
some ambiguous counterexamples, the network will lead to
an average output below 0.5, while the surrounding
domain full of reliable counterexamples will lead to an
average output largely over 0.5. Moreover, the choice of a
continuous taught output creates some slope into the
landscape of the potential provided by the expert, which
helps the final minimization used for sulcus recognition to
find its way towards a deep minimum.
The balancing of the number of counterexamples versus
the number of correct examples presented during the
training is another important point. The training is made up
of iterations over the learning base. Therefore, while new
counter-examples are generated during each iteration, the
correct examples are always the same, which may be
problematic with a small base. It should be noted, however, that the situation is not so critical because counterexamples include some anatomical knowledge. Therefore,
since few counter-examples can be located in the middle of
the correct pattern domain, a good generalization can be
obtained from only a few correct examples. We have
verified with a few experts that the crucial parameter is in
fact the ratio between correct examples and close counterexamples. Here ‘close’ refers to a threshold on the taught
output (0.75). When the ‘correct / close’ ratio is too low,
the error function driving the backpropagation algorithm
leads the network to forbid any area of low potential. When this ratio is too high, the low potential area is too large and includes a lot of incorrect patterns. The final ratio was tuned via experiments with a few experts: two close counter-examples and seven remote counter-examples for one correct example. A high number of remote counter-examples was chosen to get bounded low potential areas.

Fig. 8. A survey of the training of the central sulcus (top) and intermediate precentral sulcus (bottom) experts. The x-axis represents the number of iterations over the learning base, while the y-axis represents the perceptron output between 0 and 1. Dark (blue) points represent correct examples, light (green) points close counter-examples, and middle grey (red) points remote counter-examples. The outputs taught to the perceptrons are 0 for correct examples, about 0.75 for close counter-examples, and 1 for remote counter-examples. The first chart shows the evolution of the perceptron output for the learning base during the training. The second chart is related to the output for the test base. The third chart presents the evolution of the mean error on the test base. A consistent increase of this criterion corresponds to the beginning of overlearning.
A last point to be solved is related to counter-examples
without instance of the underlying sulci (no node with the
sulcus label). If the sulcus always exists in the learning
base, the taught output is 0.75. This output is lower than
the highest output because a missing identification is more
acceptable than a wrong answer. When the sulcus does not
exist in all the brains of the learning base, the taught output
is related to its frequency of appearance $f$: $\text{output} = 0.5 + \frac{0.25}{1+\exp(-40(f-0.9))}$. This ad hoc rule allows us first to deal
with situations where the sulcus is missing erroneously in a
few brains (f > 0.9). In that case the taught output is close to the previous situation (0.75). Second, for sulci existing only in a subset of the learning base, the taught output
tends to be 0.5, which means that the empty instance can
only be challenged by good instances.
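The two taught-output rules given above can be written directly as small functions; correct examples are simply taught the null output.

```python
import math

def taught_output_from_distance(d: float) -> float:
    """Output taught for a generated counter-example at rough distance d
    from the correct pattern: about 0.5 for ambiguous (close) examples,
    tending to 1 for clearly wrong (remote) ones."""
    return 1.0 / (1.0 + math.exp(-d / 100.0))

def taught_output_empty_instance(f: float) -> float:
    """Output taught for an empty instance of a sulcus that appears with
    frequency f in the learning base: close to 0.75 when the sulcus is
    almost always present, close to 0.5 when it is often missing."""
    return 0.5 + 0.25 / (1.0 + math.exp(-40.0 * (f - 0.9)))
```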
Finally, the backpropagation algorithm requires a criterion to stop the training when a sufficient learning has been
done and to avoid over-learning. This criterion is computed from a second base, the test base. The stop criterion
is made up of the sum of two mean errors computed,
respectively, for correct examples and for remote counterexamples of the test base. The learning is stopped when
this criterion presents a consistent increase (Fig.
8(bottom)) or after a maximum number of iterations (Fig.
8(top)).
The minimum value of the stop criterion is used to get a
measure of confidence in the expert opinion. This measure
is used to weight the output of this expert during the
recognition process. It should be noted that some experts
are endowed with a very low confidence, for instance when
the sulcus shape is so variable that its identification stems
only from the identification of the surrounding sulci.
Another explanation to the various levels of confidence is
the small size of the learning base which is not sufficient to
learn all the variations of the sulcus patterns. Base size
effects on learning are explored in Figs. 9 and 10 for the
Fig. 9. Evolution of the central sulcus expert output on the test base during training on three different bases obtained by permutations. The color code is
the same as in Fig. 8. The learning base includes 16 brains and the test base includes five brains. Left: perfect generalization. Middle and right: two brains
are problematic. This dependence on the choice of the learning base means that the learning base size is too small.
Fig. 10. Evolution of the central sulcus expert output on the test base during training on six different configurations of learning and test bases. The chart titles give the respective numbers of brains in each base. Top: The three charts show that the learning base size has to be sufficient to get good generalization.
Bottom: The three last charts show that increasing the test base size provides a quicker observation of overlearning. This effect, however, is very difficult to
predict with small learning bases.
central sulcus expert. It should be noted that since this
sulcus shape is especially stable, these base size effects are
bound to be more important for most of the other experts.
Fortunately, since the final recognition of a sulcus results
from the opinion of several experts, the global system is
already rather efficient in spite of the weaknesses of
individual experts.
6. Results
The training process for the 265 experts has been performed on a
network of ten standard workstations and lasts about 24 h.
Of course, while this high training cost was cumbersome
during the tuning of the system, it is acceptable in a
standard exploitation situation. Indeed, this training is done
only once, or more precisely each time we decide to
enlarge the learning database.
6.1. Minimization
The sulcus recognition process itself consists of the
minimization of the energy made up of the weighted sum
of the expert outputs. For practical reasons, expert outputs
are first scaled between −1 and 1 and then multiplied by a
confidence measure. During the minimization, each node
label is chosen in a subset of the sulcus list corresponding
to the expert fields of view which include this node, plus
the unknown label which has no related expert. The
minimization is performed using a stochastic algorithm
inspired by the simulated annealing principle (Geman and
Geman, 1984). This algorithm is made up of two kinds of
iterations.
While most iterations correspond to the standard approach (Geman and Geman, 1984), one in ten follows a
different algorithm dedicated to our application. These
special iterations aim at overcoming bad situations where
the minimization is lost very far from the correct labelling
area. Such situations which occur during the high temperature period are problematic because a number of node
transitions are required to reach a domain where the global
energy embeds meaningful anatomical information. A fast
annealing schedule, however, does not have enough time to find
such paths only by chance. Therefore, the standard algorithm gets trapped in a non-interesting local minimum.
This problem is solved when one considers more sophisticated transitions involving several nodes simultaneously,
which is very common in the field of stochastic minimization
(Tupin et al., 1998). The two kinds of iterations are as
follows:
• Standard iterations browse the nodes in a random order.
For each node, the energy variations $\Delta U(l)$ corresponding to transitions towards each possible label $l$ are computed. Then, the actual transition is drawn from a distribution where each label $l$ is endowed with the probability $e^{-\Delta U(l)/T} / \sum_{l'} e^{-\Delta U(l')/T}$, where $T$ is a temperature parameter. This temperature parameter is multiplied by 0.98 at the end of each global iteration, which is the usual scheduling of simulated annealing (a minimal sketch of this update is given after the list).
• Special iterations are made up of two successive loops
over the labels in a random order. For each label l, the
‘erasing loop’ computes the energy variations induced
either by replacing l by the unknown label globally, or
for only one l related connected component. Anatomically speaking, this operation aims at challenging
globally the current identification of the underlying
sulcus. Such transitions may imply a lot of nodes
simultaneously and therefore be very difficult to find
during the standard iteration process. The actual transition is drawn from a distribution similar to the standard
iteration one. The ‘identification loop’ envisages for
each label l all the transitions that replace unknown
label by l for one unknown related connected component. This loop takes advantage of the fact that
suspicious identifications have been erased by the
previous loop, which means that a whole sulcus may be
identified at a time in the unknown space even if it is
made up of a lot of nodes.
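The standard iteration and the geometric temperature schedule can be sketched as follows. Recomputing the full energy for every candidate label is a simplification (the actual system only re-evaluates the experts whose field of view contains the node), and the special iterations are not reproduced; this is an illustration under those assumptions, not the authors' implementation.

```python
import math
import random
from typing import Callable, Dict, Sequence

def standard_iteration(labelling: Dict[int, str],
                       allowed_labels: Dict[int, Sequence[str]],
                       energy: Callable[[Dict[int, str]], float],
                       T: float, rng: random.Random) -> None:
    """One standard global iteration: visit the nodes in random order
    and redraw each label from a distribution proportional to
    exp(-U(l)/T), which equals the exp(-dU(l)/T) rule above up to a
    constant factor."""
    nodes = list(labelling)
    rng.shuffle(nodes)
    for node in nodes:
        current = labelling[node]
        labels = list(allowed_labels[node])
        energies = []
        for l in labels:
            labelling[node] = l
            energies.append(energy(labelling))
        labelling[node] = current
        u0 = min(energies)                      # stabilize the exponentials
        weights = [math.exp(-(u - u0) / T) for u in energies]
        r = rng.random() * sum(weights)
        acc = 0.0
        for l, w in zip(labels, weights):
            acc += w
            if r <= acc:
                labelling[node] = l
                break

def anneal(labelling, allowed_labels, energy, n_iterations=400, T0=1.0):
    """Geometric annealing schedule: T is multiplied by 0.98 after each
    global iteration (special iterations are omitted in this sketch)."""
    rng = random.Random(0)
    T = T0
    for _ in range(n_iterations):
        standard_iteration(labelling, allowed_labels, energy, T, rng)
        T *= 0.98
```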
Our implementation of the simulated annealing principle is
beyond the framework of standard convergence proofs
(Geman and Geman, 1984). The transitions considered
during the special iterations, indeed, are not reversible
because they depend on the current graph labelling. Hence,
the usual Markov chain approach to the proof is not
directly applicable. A solution could stem from theoretical
works dedicated to sophisticated samplers used to study
Gibbs field phase transitions (Swendsen and Wang, 1987).
Indeed, these samplers are applied to study the fractal
nature of the Ising model realizations at critical temperature, which implies the use of connected component
related transitions. Anyway, theoretical proofs are usually
related to very slow annealing schedules. Therefore, our implementation, which performs only about 400 global iterations, has to be considered as a heuristic (Fig. 11). For
the following results, the minimization lasts about 2 h on a
conventional workstation. While an optimized implementation is planned in order to achieve a significant speed-up, it
should be noted that the manual labelling work is even
slower.
Because of the heuristic nature of our minimization,
the improvements resulting from the special iterations can
only be assessed on a statistical basis, using different
brains. This algorithm, indeed, is bound to be trapped in a
local minimum because of the highly non-convex nature of
the underlying energy. Implementations with or without
special iterations have been compared during a one shot
experiment on the 26 brains (Fig. 12). The implementation
including special iterations led to a lower energy for 18
brains. Further studies should be done to assess the
influence of the frequency of occurrence of special iterations.

Fig. 11. Global energy behaviour during simulated annealing. The special iterations lead to large energy decreases during the high temperature period, while their influence becomes imperceptible later.

Fig. 12. Final energy yielded by simulated annealing relative to the energy of the manual labelling. From left to right: 16 brains of the learning base, 5 brains of the test base, 5 brains of the generalization base. For each brain, the square/circle corresponds to an annealing including only standard iterations while the cross/star corresponds to the complete scheme.

This first experiment also led to the interesting observation that the nature of the global energy landscape depends on the base. Indeed, the differences between both
minimizations are larger for the learning base than for the
generalization base. This effect could be related to expert
over-learning which creates deeper local minima for the
learning base. This could predict an easier minimization in
generalization situations, which could allow us to use
faster implementations.
6.2. Recognition rate
A global measure is proposed to assess the correct
recognition rate. This measure corresponds to the proportion of cortical folds correctly identified according to the
manual labelling. The contribution of each node to this
global measure is weighted by its size (the number of
voxels of the underlying skeleton; Mangin et al., 1995a).
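This size-weighted measure is straightforward to compute from the two labellings; the sketch below assumes both are given as plain dictionaries indexed by node.

```python
from typing import Dict

def recognition_rate(automatic: Dict[int, str], manual: Dict[int, str],
                     sizes: Dict[int, float]) -> float:
    """Size-weighted proportion of folds labelled like the manual
    reference: each node contributes its size (number of skeleton
    voxels) when the automatic and manual labels agree."""
    total = sum(sizes[n] for n in manual)
    correct = sum(sizes[n] for n in manual if automatic.get(n) == manual[n])
    return correct / total if total else 0.0
```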
The mean recognition rate on each of the three bases is
proposed in Fig. 13. In order to check the reproducibility
of the recognition process, the minimization has been
repeated ten times with different initializations for one
brain of each base (Fig. 14(left)). This experiment has
shown that the recognition rate is related to the depth of
the local minimum obtained by the optimization process.
This result is confirmed by Fig. 14(right) which shows the
recognition rates for the 52 minimizations of the experiment described in Fig. 12. This result tends to prove that
the global energy corresponding to our recognition system
is anatomically meaningful, whatever the minimization
difficulties. Therefore, the recognition rate could be easily
improved if the best of several minimizations was kept as
the final result.
The recognition rate obtained for the generalization base
is 76%, which is very encouraging considering the variability of the folding patterns. As matters stand relative to
our understanding of this variability, it should be noted
that numerous ‘errors’ of the system correspond to ambiguous configurations. In fact, after a careful inspection of
the results, the neuroanatomist of our team often admits to
a preference for the automatic labelling. Moreover, the
automatic system often corrects flagrant errors due to the
cumbersome nature of the manual labelling. Such disagreements between manual and automatic labelling explain the
surprising observation that whatever the underlying base,
the final energy yielded by the minimization is lower than
the energy related to the manual labelling. The base
influence on the results calls for an enlargement of the learning base and of the test base, which was foreseeable and should improve the results.

Fig. 13. Node number, recognition rate, energy of the manual labelling (U_base), and energy of the automatic labelling (U_annealing) for each base.

Fig. 14. Left: recognition rate relative to final energy for ten different minimizations applied to one brain of each base. Right: recognition rate relative to final energy for the 52 minimizations of Fig. 12. Squares/circles denote standard annealing, while crosses/stars denote complete annealing.

We also plan to develop a
system using several experts for each anatomical entity in
order to get a better management of the coding of the
structural variability (Rivière et al., 1998). This work will
include automatic adaptation of the topology of the neural
networks to each expert.
The pattern recognition system described in this paper
includes many ad hoc solutions that are sometimes difficult
to justify. The design of a computational system actually
dealing with the problem of the sulcus recognition, however, leads necessarily to such choices. Providing a discussion for each problematic point would be too cumbersome
to be interesting. A few of them, however, have to be
addressed.
6.2.1. The oversampling requirement
We have mentioned during the description of the
preprocessing stage that a requirement to get a good
behaviour of our method was an oversampling of the
anatomical structures to be identified. While this oversampling is usually reached at the level of standard sulci, we
are not yet satisfied with the splitting of sulci into sulcal roots.
Therefore, a new segmentation related to mean curvature
of the cortical surface has been recently proposed in order
to use detection of the sulcal wall deformations induced by
buried gyri (Cachia et al., 2001). Moreover, a study of the
brain growth process from antenatal to adult age has been
triggered in order to improve the current sulcal root point
of view. Finally, we plan to add into our random graph
model a new kind of anatomical entities corresponding to
the merge of two smaller entities. This would allow us to
consistently tackle the recognition of the sulcal roots
although some of the buried gyri are not always detected.
6.2.2. The recognition rate
The choice of a global measure to assess the recognition
rate gives a very crude idea of the results. This measure,
however, is sufficient to study the behaviour of the
framework relative to the size of the databases. The
cumbersome sulcus by sulcus analysis underlying this
global measure may be found in (Rivière, 2000). In our
opinion, however, the small size of the learning base
should lead us to analyze these results with great caution.
Another weakness of our recognition rate is the fact that
the same sulcus segmentation is used both for manual and
automatic labelling. This is clearly a bias in favour of our
method. Therefore, in the future, more careful studies will
have to be performed using several segmentations for each
brain using for instance several MR scans. Considering the
cumbersome manual identifications, however, we have
decided to postpone that kind of validation study until the
discovery of a reliable detector of buried gyri.
6.2.3. The probability map
While our framework has been intentionally developed
with weak localization constraints, accurate probability
maps of the localization of the main structures in a
standard space may be used. In our opinion, however, such
constraints could lead to a much less versatile system
unable to react correctly to outlier brains. In fact, large
scale experiments will have to be performed in order to
find the good balance between localization and structural
constraints.
7. Conclusion
A number of approaches relying on the deformable atlas
paradigm consider that anatomical a priori knowledge can
be completely embedded in iconic templates. While this
point of view is very powerful for anatomical structures
presenting low inter-individual variability, it seems insufficiently versatile to deal with the human cortical
anatomy. This observation has led several teams to investigate approaches relying on higher levels of representation.
All these approaches rely on a preprocessing stage which
extracts sulcal related features describing the cortical
topography. These features can be sulcal points (Chui et
al., 1999), sulcal lines inferred from skeletons (Royackkers
et al., 1999; Caunce and Taylor, 1999), topologically
simple surfaces (Mangin et al., 1995), 2D parametric
models of sulcal median axis (Le Goualher et al., 1997;
Vaillant and Davatzikos, 1997; Zeng et al., 1999), crest
lines (Declerck et al., 1995; Manceaux-Demiau et al.,
1997) or cortex depth maxima (Lohmann and von Cramon,
1998; Rettmann et al., 1999). In our opinion, this direction
of research can lead further than the usual deformable
template approach. In fact these two types of work should
be merged in the near future. It has to be understood,
however, that some of the challenging issues about cortical
anatomy mentioned in the introduction require new neuroscience results to be obtained. As such, image analysis
teams addressing this kind of research must be responsible
for providing neuroscientists with new tools in order to
speed-up anatomical and brain mapping research. Our
system is used today to question the current understanding
of the variability and to help the emergence of better
anatomical models. Various direct applications have been
developed in the fields of epilepsy surgery planning and
brain mapping.
References
Bajcsy, R., Broit, C., 1982. Matching of deformed images. In: IEEE
Proceedings of the Sixth International Conference on Pattern Recognition, October, pp. 351–353.
Cachia, A., Mangin, J.-F., Rivière, D., Boddaert, N., Andrade, A., Kherif, F., Sonigo, P., Papadopoulos-Orfanos, D., Zilbovicius, M., Poline, J.-B., Bloch, I., Brunelle, F., Régis, J., 2001. A mean curvature based primal sketch to study the cortical folding process from antenatal to adult brain. In: Proceedings of MICCAI'01, LNCS, Utrecht. Springer, Berlin, in press.
Cachier, P., Mangin, J.F., Pennec, X., Rivière, D., Papadopoulos-Orfanos, D., Régis, J., Ayache, N., 2001. Multipatient registration of brain MRI using intensity and geometric features. In: Proceedings of MICCAI'01, LNCS, Utrecht. Springer, Berlin, in press.
Caunce, A., Taylor, C.J., 1999. Using local geometry to build 3D sulcal
models. In: Proceedings of IPMI’99, LNCS 1613. Springer, Berlin, pp.
196–209.
Chui, H., Rambo, J., Duncan, J., Schultz, R., Rangarajan, A., 1999.
Registration of cortical anatomical structures via robust 3D point
matching. In: Proceedings of IPMI’99, LNCS 1613. Springer, Berlin,
pp. 168–181.
Collins, D.L., Le Goualher, G., Evans, A.C., 1998. Non-linear cerebral
registration with sulcal constraints. In: Proceedings of MICCAI’98,
LNCS 1496, pp. 974–984.
Declerck, J., Subsol, G., Thirion, J.-P., Ayache, N., 1995. Automatic
retrieval of anatomical structures in 3D medical images. In: Proceedings of CVRMed, LNCS 905, pp. 153–162.
Friston, K.J., Ashburner, J., Frith, C.D., Poline, J.B., Heather, J.D.,
Frackowiak, R.S.J., 1995. Spatial registration and normalization of
images. Hum. Brain Mapping 2, 165–189.
Geman, S., Geman, D., 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6 (6), 721–741.
Hellier, P., Barillot, C., 2002. Cooperation between local and global
approaches to register brain images. In: Proceedings of IPMI’01,
University of California, Davis, in press.
Le Goualher, G., Barillot, C., Bizais, Y., 1997. Modeling cortical sulci
using active ribbons. Int. J. Pattern Recognit. Artific. Intell. 11 (8),
1295–1315.
Le Goualher, G., Collins, D.L., Barillot, C., Evans, A.C., 1998. Automatic
identification of cortical sulci using a 3D probabilistic atlas. In:
Proceedings of MICCAI’98, MIT, LNCS 1496. Springer, Berlin, pp.
509–518.
Le Goualher, G., Procyk, E., Collins, D.L., Venugopal, R., Barillot, C.,
Evans, A.C., 1999. Automated extraction and variability analysis of
sulcal neuroanatomy. IEEE Trans. Med. Imaging 18 (3), 206–217.
Likar, B., Viergever, M., Pernus, F., 2000. Retrospective correction of
MR intensity inhomogeneity by information minimization. In:
Proceedings of MICCAI’2000, LNCS 1935. Springer, Berlin, pp.
375–384.
Lohmann, G., von Cramon, Y., 1998. Automatic detection and labelling of
the human brain cortical folds in MR data sets. In: Proceedings of
ECCV, pp. 369–381.
Lohmann, G., von Cramon, D.Y., 2000. Automatic labelling of the human
cortical surface using sulcal basins. Medical Image Analysis 4 (3),
179–188.
Malandain, G., Bertrand, G., Ayache, N., 1993. Topological segmentation
of discrete surfaces. Int. J. Comput. Vis. 10 (2), 158–183.
Manceaux-Demiau, A., Mangin, J.-F., Régis, J., Pizzato, O., Frouin, V.,
1997. Differential features of cortical folds. In: Proceedings of
CVRMED/ MRCAS, Grenoble, LNCS-1205. Springer, Berlin, pp.
439–448.
Mangin, J.-F., 2000. Entropy minimization for automatic correction of
intensity nonuniformity. In: Proceedings of MMBIA, South Carolina,
pp. 162–169.
Mangin, J.-F., Frouin, V., Bloch, I., Régis, J., Lopez-Krahe, J., 1995a. From 3D MR images to structural representations of the cortex topography using topology preserving deformations. J. Math. Imaging Vis. 5 (4), 297–318.
Mangin, J.-F., Régis, J., Bloch, I., Frouin, V., Samson, Y., Lopez-Krahe, J., 1995b. A Markovian random field based random graph modelling the human cortical topography. In: Proceedings of CVRMed, Nice, LNCS 905. Springer, Berlin, pp. 177–183.
Mangin, J.-F., Régis, J., Frouin, V., 1996. Shape bottlenecks and conservative flow systems. In: Proceedings of MMBIA, San Francisco, pp. 319–328.
Mangin, J.-F., Coulon, O., Frouin, V., 1998. Robust brain segmentation
using histogram scale-space analysis and mathematical morphology.
In: Proceedings of MICCAI’98, MIT, LNCS-1496. Springer, Berlin,
pp. 1230–1241.
Ono, M., Kubik, S., Abernethey, C.D., 1990. Atlas of the Cerebral Sulci.
Thieme, New York.
Orr, G., Müller, K.-R., 1998. Neural Networks: Tricks of the Trade. LNCS 1524. Springer, Berlin.
Poupon, C., Mangin, J.-F., Clark, C.A., Frouin, V., Régis, J., LeBihan, D., Bloch, I., 2001. Towards inference of human brain connectivity from MR diffusion tensor data. Medical Image Analysis 5, 1–15.
Régis, J., Mangin, J.-F., Frouin, V., Sastre, F., Peragut, J.C., Samson, Y., 1995. Generic model for the localization of the cerebral cortex and preoperative multimodal integration in epilepsy surgery. Stereotactic Funct. Neurosurg. 65, 72–80.
Rettmann, M.E., Xu, C., Pham, D.L., Prince, J.L., 1999. Automated
segmentation of sulcal regions. In: Proceedings of MICCAI’99,
Cambridge, UK, LNCS-1679. Springer, Berlin, pp. 158–167.
Rivière, D., 2000. Automatic learning of the variability of the patterns of the human cortical folding. PhD thesis (in French), Evry University.
Rivière, D., Mangin, J.-F., Martinez, J.-M., Chavand, F., Frouin, V., 1998. Neural network based learning of local compatibilities for segment grouping. In: Proceedings of SSPR'98, LNCS 1451. Springer, Berlin, pp. 349–358.
Royackkers, N., Desvignes, M., Fawal, H., Renenu, M., 1999. Detection
and statistical analysis of human cortical sulci. NeuroImage 10, 625–
641.
Rumelhart, D.E., Hinton, G.E., Williams, R.J., 1986. Learning Internal
Representations By Error Backpropagation. MIT Press, Cambridge,
MA, pp. 318–362.
Swendsen, R.H., Wang, J.S., 1987. Nonuniversal critical dynamics in
Monte Carlo simulations. Phys. Rev. Lett. 58, 86–88.
Talairach, J., Tournoux, P., 1988. Co-planar Stereotaxic Atlas of the
Human Brain. Thieme, New York.
Thompson, P., Toga, A.W., 1996. Detection, visualization and animation
of abnormal anatomic structure with a deformable probabilistic brain
atlas based on random vector field transformation. Medical Image
Analysis 1 (4), 271–294.
Thompson, P.M., Woods, R.P., Mega, M.S., Toga, A.W., 2000. Mathematical / computational challenges in creating deformable and probabilistic atlases of the human brain. Hum. Brain Mapping 9, 81–92.
Tupin, F., Maitre, H., Mangin, J.-F., Nicolas, J.-M., Pechersky, E., 1998.
Linear feature detection on SAR images: application to the road
network. IEEE Geosci. Remote Sens. 36 (2), 434–453.
Vaillant, M., Davatzikos, C., 1997. Finding parametric representations of
the cortical sulci using active contour model. Medical Image Analysis
1 (4), 295–315.
Vaillant, M., Davatzikos, C., 1999. Hierarchical matching of cortical features for deformable brain image registration. In: Proceedings of IPMI'99, LNCS 1613. Springer, Berlin, pp. 182–195.
Watson, J.D.G., Myers, R., Frackowiak, R. et al., 1993. Area V5 of the
human cortex: evidence from a combined study using positron
emission tomography and magnetic resonance imaging. Cerebral
Cortex 3, 79–94.
Welker, W., 1989. Why does the cerebral cortex fissure and fold. Cerebral
Cortex 8B, 3–135.
Wong, A.K.C., You, M.L., 1985. Entropy and distance of random graphs with application to structural pattern recognition. IEEE Trans. Pattern Anal. Mach. Intell. 7, 599–609.
Zeng, X., Staib, L.H., Schultz, R.T., Tagare, H., Win, L., Duncan, J.S.,
1999. A new approach to 3D sulcal ribbon finding from MR images.
In: Proceedings of MICCAI’99, Cambridge, UK, LNCS-1679. Springer, Berlin, pp. 148–157.