Pacific Graphics 2008, T. Igarashi, N. Max, and F. Sillion (Guest Editors), Volume 27 (2008), Number 7

The Craniofacial Reconstruction from the Local Structural Diversity of Skulls

Yuru Pei1, Hongbin Zha1 and Zhongbiao Yuan2
1 Key Laboratory of Machine Perception (Ministry of Education), Peking University, China
2 Criminal Investigation Department, Guangzhou Police Station, China

Abstract
Craniofacial reconstruction is employed as an initialization for the identification from skulls in forensics. In this paper, we present a two-level craniofacial reconstruction framework based on the local structural diversity of skulls. On the low level, the holistic reconstruction is formulated as the superimposition of a selected tissue map on the novel skull. The crux is accurate map registration, which is implemented as a warping guided by 2D feature curve patterns. A curve pattern extraction under an energy minimization framework is proposed for the automatic feature labeling on the skull depth map. The feature configuration on the warped tissue map is expected to resemble that on the novel skull. In order to personalize the reconstructed faces, on the high level, the local facial features are estimated from the skull measurements via an RBF model. The RBF model is learnt from a dataset of skull and face feature pairs extracted from head volume data. The experiments demonstrate that the facial outlooks can be reconstructed feasibly and efficiently.

Categories and Subject Descriptors (according to ACM CCS): I.3.8 [Computer Graphics]: Applications; I.4.7 [Image Processing and Computer Vision]: Feature Measurement; G.3 [Probability and Statistics]: Multivariate Statistics; G.1.2 [Numerical Analysis]: Approximation of Surfaces and Contours

1. Introduction
Craniofacial reconstruction has been practiced in forensic anthropology since the late 19th century.
The goal is to estimate the face of an individual from the skull remains to give a visual outlook. The traditional plastic method relies on tedious manual work, which depends on the artist's experience and guesswork, and it is challenging to evaluate reconstructions made by different practitioners. As we can see, the procedure based on the tabulated tissue thickness at dowels is similar to 3D surface fitting. Considerable efforts have been made toward computerized craniofacial reconstruction [Arc97, BAHS06, BDBP05, CBRY06, CVG∗07, KHS03, PZY04, TBL∗07, THL∗05, VVMN00]. The template-fitting based craniofacial reconstruction systems rely on the tissue thickness at the markers. However, the facial surface contains feature details that are difficult to reconstruct from the sparse landmarks, and the statistical values at the landmarks are limited in reflecting the tissue thickness distribution. The 3D facial reconstruction based on the complete tissue map registration [PZY04, THL∗05] has been proposed. However, the manual labeling of a large set of markers for the tissue map registration is tedious and prone to the practitioner's errors. Moreover, the reconstructed face bears the facial features, e.g. the eyes and the nose, of the reference [PZY04]; the only difference from the reference is the tissue thickness distribution. The personality of the facial features has not been addressed in the process. Few computerized craniofacial reconstruction systems have dealt with the direct relationship between the local skull morphology and the facial features. For instance, what does the nose look like, given the measurements of the nasal cavity on the skull?

© 2008 The Author(s). © 2008 The Eurographics Association and Blackwell Publishing Ltd. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.
Confronted with this problem, we propose a two-level craniofacial reconstruction framework based on the local structural diversity of the skull (Figure 1). Firstly, the reference tissue map is non-rigidly warped to fit the unknown skull for the holistic reconstruction. The warping is controlled by the curve patterns on the skull maps. Secondly, the local facial features are estimated from the skull measurements via an RBF model, which is learnt from the skull and face feature pairs in the training dataset. The current training dataset includes 102 head CT volumes, which are used to extract the facial tissue maps, as well as the local skull and facial features. In the low-level module, the complete tissue map registration is achieved by an RBF model, which is learnt from the feature curve patterns of the novel and the reference skulls (Section 3). Instead of manual marker labeling, the skull feature identification in this paper is implemented under an energy minimization framework. Given a novel skull, the predefined curve pattern is warped accordingly by minimizing the objective function. The feature configuration on the warped tissue map is expected to resemble that on the novel skull. With a straightforward depth map addition, the dense candidate point clouds specific to the target face can be obtained. The map warping can be seen as a tissue registration in 3D space; however, the computation in the 2D plane is faster and more efficient. In the high-level module, three non-overlapping regions, i.e. the eye, the mouth and the nose, are identified on the skulls and on the corresponding facial surfaces extracted from the CT images. A mapping between the skull feature measurements and their counterparts on the facial surface is established by the RBF model (Section 4).
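The depth-map addition mentioned above reduces the holistic reconstruction to per-pixel arithmetic on aligned spherical depth maps. The following is a minimal sketch; the array shapes and values are illustrative stand-ins, and the fatness scale factor q is the tissue-thickness editing knob described later in Section 5.1:

```python
import numpy as np

def tissue_map(face_depth, skull_depth):
    """In-between tissue layer: face depth minus skull depth on one (theta, phi) grid."""
    return face_depth - skull_depth

def add_tissue(novel_skull_depth, warped_tissue, q=1.0):
    """Candidate facial depth map: novel skull depth plus the registered tissue
    layer, optionally scaled by a fatness factor q (cf. Section 5.1)."""
    return novel_skull_depth + q * warped_tissue

# toy 4x4 depth maps (millimeters)
face = np.full((4, 4), 10.0)
skull = np.full((4, 4), 7.5)
tissue = tissue_map(face, skull)                       # uniform 2.5 mm layer
novel_face = add_tissue(np.full((4, 4), 8.0), tissue)  # uniform 10.5 mm
```

In the real pipeline the tissue layer is first warped by the RBF registration of Section 3.2 before being added to the novel skull map.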
Given the skull measurements, the mapping can produce the facial features specific to the skull. We improve the tissue-map-based reconstruction technique and introduce an automatic feature pattern extraction method. In the proposed two-level framework, the resulting facial model bears a tissue distribution consistent with the novel skull. More importantly, the local facial features are estimated from the skull measurements. The mapping learnt from the local skull structural diversity and the facial appearances facilitates the personalized feature reconstruction.

2. Related work

Due to the ambiguity between the skull shape and the facial appearance, it is challenging to develop a 3D craniofacial reconstruction system. There are anatomy-based and tissue-map-based approaches to the skin reconstruction from skeletons. Wilhelms et al. [WG97] propose an anatomy-based technique to generate the human skin. The sub-skin components, e.g. the bones and the muscles, are defined by a few parameters, and the skin surface is extracted from the outermost component layer. Scheepers et al. [SPCM97] construct realistic muscle components attached to the skeletons. Aside from the layer-by-layer anatomy-based skin reconstruction, the template fitting constrained by the tissue depth is a frequently used method. Archer [Arc97] employs hierarchical B-splines to represent a generic facial model for the shape editing, and a prototypical system is proposed based on the spline surface fitting.

Figure 1: Illustration of the craniofacial reconstruction framework.

Vanezis et al. [VVMN00] generate a database of facial templates using a 3D scanner. An RBF-based warping constrained by the statistical tissue depth adapts the template to the target skull. Kahler et al. [KHS03] deform the generic face and muscle models to the target skull simultaneously, and an animatable head model is generated. Instead of using just one template, Claes et al.
[CVG∗07] propose a statistical facial model learnt from a set of 3D faces based on the PCA technique. The template is synthesized considering the BMI (Body Mass Index), the age and the gender information. The statistical model is similar in spirit to the 3D morphable model proposed by Blanz et al. [BV99], which has been used in forensic applications to reconstruct the facial model from witnesses' vague descriptions [BAHS06]. Similarly, Tu et al. [TBL∗07] construct a face space from head CT images. Once the alignment of the skull to the face space is done, the parameters of age and weight can be adjusted to give a set of shape variations. The reconstruction can also be formulated as a missing data problem in the PCA subspace of the face and skull volume data [BDBP05], where the subspace coordinates estimated from the skull are used to reconstruct the facial appearance. The tabulated statistical values at the landmarks are limited in reflecting the tissue thickness distribution. The 3D facial reconstruction based on the complete tissue map registration [PZY04, THL∗05] has been proposed, in which a collection of facial tissue maps is extracted from the CT images. The manual labeling in the tissue map registration is a tedious work [PZY04]. Our approach falls into the tissue-map-based category; we propose an automatic curve pattern fitting technique to avoid the manual labeling.

Figure 2: The tissue map extracted by the spherical sampling. The warm colors correspond to small depth values, while the cold colors correspond to large depth values.

The structural characteristics are crucial to face identification.
The Euclidean/geodesic distances between anthropometric fiducial points have been used in 3D face recognition [GAMB07], where a systematic procedure for selecting facial fiducial points associated with diverse structural characteristics is proposed. The EBGM-based facial feature representations [WFKM97] perform well in 2D face recognition. In this paper, we put the emphasis on the influence of the local structural diversity of the skull on the facial feature appearance, and investigate the possibility of a mapping between the skull measurements and the facial feature shapes.

3. The low-level reconstruction based on the tissue map registration

In the system, the 3D faces and the underlying skull substrates are reconstructed from the CT images by the marching cubes algorithm [LC87]. Instead of computing in the 3D shape space, the polygonal meshes are parameterized to 2D depth maps based on a spherical unwrapping. The in-between tissue layer (Figure 2) is computed by subtracting the skull depth map from the face depth map.

The facial surface can then be reconstructed as the superimposition of the registered tissue layer on the novel skull. The registration of the reference tissue map to the target skull is crucial to the reconstruction. Under the assumption that the corresponding tissue and skull maps bear the same feature configurations, the markers on the reference and the novel skulls are used to train the RBF model. It is tedious to label the markers on the skulls manually. Confronted with this problem, we introduce a curve pattern fitting technique for the automatic feature labeling on the skull maps.

3.1. Feature curve patterns on skull maps

Figure 3: The feature curve pattern fitting.

A feature curve pattern is defined as a set of correlated curves on the skull depth map. As shown in Figure 3, the pattern is composed of 14 curves. The first curve is around the nasal cavity, and the next two curves run along the orbits. Four curves are on the cheekbones and five along the mandible boundaries. There are two curves around the tooth region. We employ a non-rigid template fitting technique for the pattern extraction. The pattern is denoted as a set of points $V^S = \{v^S_{i,j} \mid i = 1, \ldots, 14,\; j = 1, \ldots, n_i\}$, where $i$ is the curve index and $n_i$ the number of points on the $i$-th curve. As we can see, the curves run along the salient boundaries of the skull features. When the 3D skull is unwrapped to a plane, it is easy to extract the points with abrupt depth variations on the feature boundaries. The candidate points $V^T_{cand}$ are those whose depth differences to their neighbors $Neib(v^T_{cand})$ exceed a predefined threshold $\zeta$:

$$\exists\, v' \in Neib(v^T_{cand}), \quad \lvert d(v^T_{cand}) - d(v') \rvert > \zeta,$$

where $d(v)$ is the depth value of the point $v$. Similar to the template-based mesh fitting [ACP03], a $2 \times 3$ affine transformation matrix $H_{i,j}$ is assigned to the point $v^S_{i,j}$. The objective function $E^{pat}$ is defined as a weighted combination of two measures, the proximity to the candidate set and the pattern smoothness:

$$E^{pat}(H_{i,j}) = \alpha E^{pat}_d + \beta E^{pat}_s,$$

where the distance term

$$E^{pat}_d = \sum_i \sum_{j=1}^{n_i} a_{i,j} \left\lVert H_{i,j}\, v^S_{i,j} - v^T \right\rVert^2$$

minimizes the difference between the pattern $V^S$ and $V^T_{cand}$; here $v^T \in V^T_{cand}$ is the candidate with the smallest Euclidean distance to the transformed point $H_{i,j} v^S_{i,j}$. An exception occurs for the curves around the tooth region, where the correspondence does not exist in the candidate set; $a_{i,j}$ is set to 0 in this situation, and to 1 otherwise. In most instances, the markers in the tooth region can still be located reasonably by the smoothness constraint alone. In our experiments, mislocations of the tooth-region markers occurred in only two skull depth maps and were rectified manually.

$E^{pat}_s$ is the smoothness term that preserves the structure of the curve pattern during the transformation. The smoothness is formulated as the minimization of the matrix differences between the point $v^S_{i,j}$ and its one-ring neighbors $Neib^{pat}(v^S_{i,j})$; a neighboring graph is constructed on the pattern by a triangulation, as shown in Figure 3:

$$E^{pat}_s = \sum_i \sum_{j=1}^{n_i} \sum_{v^S_{i',j'} \in Neib^{pat}(v^S_{i,j})} \gamma^{i',j'}_{i,j} \left\lVert H_{i,j} - H_{i',j'} \right\rVert^2_F,$$

where $\lVert \cdot \rVert_F$ is the Frobenius norm and $\gamma^{i',j'}_{i,j}$ is the weight associated with the edge $v^S_{i,j} v^S_{i',j'}$ in the neighboring graph. Comparatively large weights are assigned to the edges along the feature curves; in the experiments, the ratio between the weights of the curve edges and the other edges is set to 5. The L-BFGS-B algorithm [ZBN97] is employed to solve for the affine transformations $H_{i,j}$. The constants $\alpha$ and $\beta$ balance the distance and the smoothness terms; in our experiments, $\alpha = 1$ and $\beta = 500$. By minimizing the objective function, the automatic labeling of the skull features is achieved. The curve patterns on the skull maps are then used to train the RBF model in the tissue map registration.

3.2. Tissue map registration

Figure 4: The registration of the tissue map. (1) Use the feature patterns of the novel and the reference skulls to train an RBF model. (2) Apply the RBF model to the reference tissue map. (3) Output the tissue map specific to the target skull.

The feature configurations on the reference skull and tissue maps are equivalent because they are extracted from the head volume data of the same subject. Thus, the non-rigid registration between the reference and the novel skull maps is assumed to be equal to that between the reference tissue map and the tissue map specific to the novel skull.
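The curve-pattern fitting of Section 3.1 can be sketched as one correspondence-fixed step: with the nearest candidates held fixed, the per-point affine matrices minimizing the distance-plus-smoothness energy are found by L-BFGS-B. The toy points, edges and targets below are illustrative stand-ins; the full system re-finds the nearest candidates as the pattern moves, ICP-style.

```python
import numpy as np
from scipy.optimize import minimize

pts = np.array([[0., 0.], [1., 0.], [2., 0.]])   # pattern points v^S
targets = pts + np.array([0.5, 0.2])             # fixed nearest candidates v^T
edges = [(0, 1), (1, 2)]                         # neighboring graph on the pattern
a = np.ones(len(pts))                            # a_{i,j} = 0 would drop tooth-region terms
alpha, beta = 1.0, 500.0                         # weights used in the paper

homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous (x, y, 1)

def energy(x):
    H = x.reshape(len(pts), 2, 3)                 # one 2x3 affine matrix per point
    mapped = np.einsum('nij,nj->ni', H, homog)    # H_j * (x, y, 1)'
    e_d = np.sum(a * np.sum((mapped - targets) ** 2, axis=1))
    e_s = sum(np.sum((H[j] - H[k]) ** 2) for j, k in edges)  # squared Frobenius norm
    return alpha * e_d + beta * e_s

# start from identity affines and minimize with L-BFGS-B [ZBN97]
x0 = np.tile(np.array([[1., 0., 0.], [0., 1., 0.]]).ravel(), len(pts))
res = minimize(energy, x0, method='L-BFGS-B')
```

Here the optimum is a common translation, so both terms can vanish; in the real system the curve-edge weights $\gamma$ would additionally scale the smoothness contributions.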
The non-rigid registration is implemented by an RBF model, which is frequently used to fit multi-variable surfaces in high-dimensional spaces. We employ the thin-plate spline function as the basis function for the smooth interpolation. The RBF-based map registration can be formulated as an interpolation constrained by the feature patterns on the skull maps. In the experiments, the RBF model $f^{tis}$ is learnt from the feature patterns $V^S$ and $V^T$ of the novel and the reference skull maps (Figure 4):

$$v^T_i = f^{tis}(v^S_i) = \sum_j c_j\, \phi(\lVert v^S_i - v^S_j \rVert), \quad v^T_i \in V^T,\; v^S_i, v^S_j \in V^S,$$

where $\phi(\cdot)$ is the thin-plate spline basis function and $c_j$ are the unknown weights. A standard LU decomposition is used to solve the linear system. In the registration of the reference tissue map, the pixels of the tissue depth map are fed to the RBF model. The feature configuration of the output is expected to resemble that on the novel skull map (Figure 5). The tissue map warping constrained by the curve patterns modifies the reference tissue depth distribution according to the target skull. However, the local features, e.g. the eyes and the noses, are difficult to reconstruct directly from the sampled tissue depth values.

Figure 5: A snapshot of the tissue map registration. The lower left is the reference tissue map, and the warped result is on the lower right. The curve patterns are overlaid, where the green lines correspond to the curve pattern on the reference skull and the blue lines to the target skull. The purple point clouds on the lower right are sampled in the region with reliable tissue depth values. The upper left shows the point clouds on the 3D skull, where the blue points are the skull markers and the green points are the points after the tissue depth addition. The bar length corresponds to the extracted tissue depth. The upper right is the smooth surface satisfying the tissue depth constraints.
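A minimal sketch of this interpolation follows, using the thin-plate spline kernel and a direct (LU-based) solve, in the pure-RBF form of the equation above without an affine term. The tiny ridge term and the toy point sets are our own additions for numerical safety:

```python
import numpy as np

def tps(r):
    """Thin-plate spline basis phi(r) = r^2 * log(r), with phi(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def fit_rbf(src, dst, reg=1e-9):
    """Solve Phi c = dst for the weights c_j (one column per output dimension)."""
    phi = tps(np.linalg.norm(src[:, None] - src[None, :], axis=2))
    phi += reg * np.eye(len(src))            # tiny ridge for numerical safety
    return np.linalg.solve(phi, dst)         # LU-based direct solve

def apply_rbf(src, c, pts):
    """Warp arbitrary points (e.g. tissue-map pixels) through the fitted model."""
    phi = tps(np.linalg.norm(pts[:, None] - src[None, :], axis=2))
    return phi @ c

rng = np.random.default_rng(0)
src = rng.random((8, 2)) * 10                # curve-pattern points on the reference map
dst = src + rng.normal(0.0, 0.3, src.shape)  # matching pattern on the novel map
c = fit_rbf(src, dst)
warped = apply_rbf(src, c, src)              # reproduces dst at the pattern points
```

In the registration, `apply_rbf` would be evaluated at every pixel of the reference tissue map rather than only at the pattern points.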
The reason is that there are holes in the skull feature regions, and the related tissue depth values are not reliable there. The unreliable depth values can cause errors in the tissue depth addition. In this paper, the feature shape estimation is therefore performed by a mapping between the skull structural measurements and the facial feature appearances, as described in the next section.

4. Estimation of local facial features

The face model resulting from the tissue map addition bears the unwanted facial features of the reference. For the purpose of the personalized reconstruction, the facial feature shapes are estimated based on the measurements of the novel skull in our system. A mapping between the local skull measurements and the facial feature shapes is learnt from a training dataset. We generate the mappings for three local features, i.e. the eye, the mouth and the nose.

4.1. Local feature definitions

Figure 6: The local feature extraction. (a) The polygonal regions of the eye, the mouth and the nose are shown on the facial depth map. (b), (c), and (d) are the noses, eyes, and mouths extracted from the dataset.

Three local features are identified on the skull and on the face respectively. The skull feature is denoted as a vector of the Euclidean distances between the landmarks. As described in Section 3.1, there is a feature curve pattern associated with each skull map, and the markers in the curve pattern are assigned to each feature region. The skull feature vector $r^{reg}_i$ ($reg = eye, mouth, nose$) is composed of the distances between the pairwise landmarks. The corresponding local facial features are represented as polygonal regions on the facial depth maps, as shown in Figure 6. Each region comprises a set of discrete sampling points.
The depth values $d(\theta_j, \varphi_j)$ at the sampling points $(\theta_j, \varphi_j)$ are concatenated to represent the local facial feature:

$$D^{reg} = (\theta_1, \varphi_1, d(\theta_1, \varphi_1), \ldots, \theta_{n_{reg}}, \varphi_{n_{reg}}, d(\theta_{n_{reg}}, \varphi_{n_{reg}})),$$

where $n_{reg}$ is the number of sampling points in the region. Three polygon templates are interactively defined on one facial depth map. The templates are then adapted to the other facial depth maps in the dataset by a nonlinear optimization. In our system, the numbers of sampling points in the polygons are 2115, 1624 and 1849 for the nose, the eye and the mouth, respectively. The optimization process establishes the point-to-point correspondence between the features on different facial maps. A modified ICP algorithm, similar to that described in Section 3.1, is employed to register the template regions non-rigidly to the facial depth map. The objective function is defined as

$$E^{reg}(H_{\theta_j,\varphi_j}) = \sum_{(\theta_j,\varphi_j) \in reg} \left\lvert d_0(\theta_j, \varphi_j) - d\big(H_{\theta_j,\varphi_j} \cdot (\theta_j, \varphi_j, 1)'\big) \right\rvert^2,$$

where $d(H_{\theta_j,\varphi_j} \cdot (\theta_j, \varphi_j, 1)')$ is the depth of the transformed point $H_{\theta_j,\varphi_j} \cdot (\theta_j, \varphi_j, 1)'$ on the new facial depth map, $d_0(\cdot)$ is the depth function of the template feature region, and $H_{\theta_j,\varphi_j}$ is a $2 \times 3$ transformation matrix assigned to the point $(\theta_j, \varphi_j)$. Similar to the pattern curve extraction, a smoothness term defined by the transformation matrix differences between neighboring points is used. By minimizing the energy function, the transformations of the sampling points make the feature template coincide with the new facial depth map.

The dimensionality of the facial features, three times the number of sampling points, is comparatively high. We apply the principal component analysis [Bis96] to the depth maps to learn a linear subspace representation. The feature depth map is represented as $D^{reg} = \bar{D}^{reg} + w^{reg} \cdot U^{reg}$.
The mean vector of the feature depth maps is

$$\bar{D}^{reg} = \frac{1}{M} \sum_{i=1}^{M} D^{reg}_i.$$

$U^{reg}$ holds the eigenvectors corresponding to the largest eigenvalues of the covariance matrix computed from the depth maps, and $M$ is the number of depth maps in the dataset. An associated low-dimensional weight vector $w^{reg} \in \mathbb{R}^k$ represents the depth map $D^{reg} \in \mathbb{R}^K$ ($k < K$, $K = 3 n_{reg}$). A novel depth map can be reconstructed as a linear combination of $U^{reg}$. The local facial feature definition is similar to [BV99]; however, instead of the holistic correspondence established by the optical flow method [BV99], the local facial features are extracted by the non-rigid registration on the facial depth maps.

Figure 7: Illustration of the mapping between the local skull measurements and the facial features.

4.2. Learning local feature mappings

Once the local skull and facial features are extracted, we investigate the possibility of determining the relations between the local skull structures and the facial feature appearances (Figure 7). As described in Section 4.1, the local skull features are denoted as a vector $r^{reg}$ composed of the Euclidean distances between the markers, and the local facial features are represented as the PCA coefficients $w^{reg}$. The skull and face features in the dataset are used to learn the parameters of the RBF model $f^{reg}$:

$$w^{reg}_i = f^{reg}(r^{reg}_i), \quad i = 1, \ldots, M, \quad reg = eye, mouth, nose.$$

4.3. Local feature reconstruction

Given the mapping learnt from the training dataset, the facial features specific to the novel skull can be reconstructed. The local measurements $r^{reg}_{novel}$ of the novel skull are fed to $f^{reg}$:

$$w^{reg}_{novel} = f^{reg}(r^{reg}_{novel}).$$

The PCA coefficients $w^{reg}_{novel}$ related to the facial features are obtained, and the depth map $D^{reg}_{novel}$ can be recovered as

$$D^{reg}_{novel} = \bar{D}^{reg} + w^{reg}_{novel} \cdot U^{reg}.$$

5. Experiments

To demonstrate the validity of the method, we recreate the facial outlooks of several unknown skulls. One is a skull approximately 10,000 years old discovered in the suburbs of Beijing (the 2nd row in Figure 8). The others are the skulls of modern people involved in forensic cases. A set of reference tissue maps, as well as the local skull and facial features used to train the RBF models, have been pre-computed from the available head CT data. The training dataset in our system includes 102 faces and skulls, among which there are 72 males and 30 females. All individuals are Asian, and the ages range from 17 to 44 years. The dataset is classified according to the subjects' age and gender information. Given a novel skull, the reference tissue map is selected manually according to the estimated information, e.g. the age and the gender. A forensic expert can tell the age, the gender, and even the degree of fatness from the skull appearance; however, it is hard to determine the skull category automatically and accurately. Instead of using a statistical face template learnt by the PCA [CVG∗07], one template is selected from the dataset. The reason is that, for a limited and variable dataset, the statistical template model needs most of the components to achieve a good variance coverage, and it is not easy to determine the semantic meaning of the components and edit the coefficients according to the skull attributes. In contrast, the manually selected reference tissue map helps lower the uncertainty in the reconstruction. The feature curve pattern is adapted to the unwrapped 2D depth map of the target skull. The candidate point labeling and the feature pattern extraction based on the non-linear optimization complete within a second. The warping of the reference tissue map is illustrated in Figure 5. The tissue map warping is confined inside the region of interest (ROI), which covers the facial area.
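The high-level estimation used in these experiments (Sections 4.1-4.3) can be sketched end to end: a PCA subspace of the concatenated feature depth vectors, plus an RBF regression from the skull measurement vectors to the PCA weights. All data below are synthetic stand-ins; the Gaussian kernel is one concrete choice of RBF basis (the paper does not name the basis used for $f^{reg}$), and the 99% variance cut-off follows the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, P = 40, 60, 6            # subjects, depth-vector length, measurement length
D = rng.random((M, K))         # rows: concatenated feature depth vectors D_i^reg
r = rng.random((M, P))         # rows: pairwise landmark distance vectors r_i^reg

# --- PCA subspace: keep enough eigenvectors for 99% of the variance ---
mean = D.mean(axis=0)
_, s, Vt = np.linalg.svd(D - mean, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
U = Vt[:k]                     # principal axes U^reg, shape (k, K)
w = (D - mean) @ U.T           # training weights w_i^reg

# --- RBF regression from measurements r to PCA weights w ---
def gauss(d2, sigma=0.3):
    return np.exp(-d2 / (2.0 * sigma ** 2))

d2 = np.sum((r[:, None] - r[None, :]) ** 2, axis=2)
C = np.linalg.solve(gauss(d2) + 1e-8 * np.eye(M), w)   # kernel weights

def estimate(r_novel):
    """D_novel^reg = mean + f^reg(r_novel) * U^reg."""
    d2n = np.sum((r_novel - r) ** 2, axis=1)
    return mean + gauss(d2n) @ C @ U
```

Evaluating `estimate` at a training measurement vector reproduces that subject's subspace projection, which is the interpolation property the paper relies on.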
Ideally, each point inside the ROI of the novel skull is assigned a reasonable depth value from the warped tissue map. However, some pixels near the boundary can be mapped outside the ROI of the reference tissue map and assigned abnormal tissue thickness values. To handle this problem, a predefined depth threshold is used to exclude the invalid tissue depth values from the computation of the point clouds. For the purpose of the local facial feature estimation, a non-linear mapping based on the RBF model is learnt from the training dataset. The input of the RBF model is the Euclidean distance vector computed from the curve feature pattern; the output is the coefficients in the low-dimensional embedded shape space. For the nose, the eye and the mouth, we keep the 26, 28 and 30 largest eigenvectors, respectively, as the basis for the linear shape combination, which retain 99% of the shape variations. The time-consuming matrix decomposition for the PCA needs to be done only once, in the preprocessing. The final merging of the locally estimated features with the holistically reconstructed face is not a major focus of this article; the merging based on the cylindrical projection following the RBF deformation is similar to [NN01]. The results are illustrated in Figure 8.

Figure 8: The reconstruction results. (a) The 3D polygonal novel skulls. (b) The skull depth map and the overlaid point clouds after the tissue addition. (c) and (d) are the 3D reconstructed face models based on the tissue map addition, from two views. (e) The facial features reconstructed from the skull measurements. (f) The merging results with the reconstructed facial features.

5.1. Evaluations

The evaluation and the validation remain challenging in the craniofacial reconstruction. The evaluation of the two-level reconstruction is as follows. In the local feature estimation module, leave-one-out experiments are performed. One skull and face pair is removed from the dataset and used as the test case; the other pairs are used to learn the RBF regression parameters. The feature estimation error $e_i$ (Figure 9) is defined as the average L2 distance between the reconstructed feature vectors and the ground truth:

$$e_i = \frac{1}{\sum_{reg} n_{reg}} \sum_{reg} \left\lVert \tilde{D}^{reg}_i - D^{reg}_i \right\rVert, \quad i = 1, \ldots, M,$$

where $\tilde{D}^{reg}_i$ is the feature estimated by the mapping $f^{reg}$ learnt from the dataset $\{(r^{reg}_j, w^{reg}_j) \mid j = 1, \ldots, M,\; j \neq i\}$. The Euclidean distances (in millimeters) between the estimated feature meshes and the ground truth are plotted in Figure 10. As to the holistic reconstruction based on the tissue map registration, one head CT scan of a living person is removed from the dataset as the test case, and a tissue template is selected from the remaining dataset by an expert. The side-by-side comparison between the ground truth and the reconstruction is illustrated in Figure 10. Under the tissue-map-addition mechanism, it is easy to adjust the tissue thickness when computing the point clouds on the facial surface: a factor $q$ is introduced to scale the tissue depth according to the degree of fatness. The tissue thickness editing is confined inside the facial area.

6. Conclusions

The craniofacial reconstruction can be seen as transferring the tissue layer from the reference to the novel skull. The accurate correspondence between the reference and the novel skull is essential to the reconstruction. The current computerized reconstruction only serves as an initialization in the forensic applications.
The precise identification is conventionally obtained by radiographs and DNA. Not surprisingly, reasonable feature modeling will result in more reliable reconstructions. In this paper, we propose a two-level framework for the craniofacial reconstruction. On the low level, the global facial appearance is reconstructed based on the tissue map registration, which is guided by the feature curve patterns. On the high level, the local facial feature shapes specific to the novel skull are estimated based on a mapping system learnt from the feature dataset. We investigate the possibility of reconstructing the local facial features from the skull measurements. As we can see, the mapping system relies on the shape space spanned by the training data; the feature shape space built from a limited dataset cannot be expected to cover the likely shape variations of diverse populations. In future work, we expect to enlarge the local feature dataset to improve the shape estimation capacity. The current two-level framework has only undergone informal evaluations by forensic practitioners; in future work, the influence of the high-level feature estimation on the holistic reconstruction is to be analyzed quantitatively.

7. Acknowledgements

The authors would like to thank the anonymous referees for the useful suggestions for improving this paper. This work was supported in part by NKBRPC (No. 2004CB318000), NHTRDP 863 Grant No. 2006AA01Z302 and No. 2007AA01Z336, and the Key Grant Project of the Chinese Ministry of Education (No. 103001).

Figure 9: The feature estimation error.

Figure 10: The comparison of two actual faces with the reconstructions by our system. From left to right: the face from the CT scan, the superimposition of the actual face and the skull, the skull from the CT scan, the estimated facial features plotted with the Euclidean distance (mm) to the ground truth, the superimposition of the reconstruction with the skull, and the reconstructed facial surface.

References

[ACP03] Allen B., Curless B., Popović Z.: The space of human body shapes: reconstruction and parameterization from range scans. In Proc. SIGGRAPH '03 (2003), pp. 587–594.
[Arc97] Archer K. M.: Craniofacial reconstruction using hierarchical B-spline interpolation. Master's thesis, University of British Columbia, 1997.
[BAHS06] Blanz V., Albrecht I., Haber J., Seidel H.-P.: Creating face models from vague mental images. Computer Graphics Forum 25, 3 (2006), 645–654.
[BDBP05] Berar M., Desvignes M., Bailly G., Payan Y.: 3D statistical facial reconstruction. In Proc. Int. Symp. on Image and Signal Processing and Analysis (2005), pp. 365–370.
[Bis96] Bishop C.: Neural Networks for Pattern Recognition. Oxford University Press, 1996.
[BV99] Blanz V., Vetter T.: A morphable model for the synthesis of 3D faces. In Proc. SIGGRAPH '99 (1999), pp. 187–194.
[CBRY06] Chowdhury A., Bhandarkar S., Robinson R., Yu J.: Virtual craniofacial reconstruction from computed tomography image sequences exhibiting multiple fractures. In Proc. IEEE ICIP '06 (2006), pp. 1173–1176.
[CVG∗07] Claes P., Vandermeulen D., De Greef S., Willems G., Suetens P.: Craniofacial reconstruction using a combined statistical model of face shape and soft tissue depths: methodology and validation. Forensic Science International 159 (2007), 147–158.
[GAMB07] Gupta S., Aggarwal J., Markey M., Bovik A.: 3D face recognition founded on the structural diversity of human faces. In Proc. IEEE CVPR '07 (2007), pp. 1–7.
[KHS03] Kähler K., Haber J., Seidel H.-P.: Reanimating the dead: reconstruction of expressive faces from skull data. In Proc. SIGGRAPH '03 (2003), pp. 554–561.
[LC87] Lorensen W. E., Cline H. E.: Marching cubes: a high resolution 3D surface construction algorithm. Computer Graphics 21 (1987), 163–169.
[NN01] Noh J., Neumann U.: Expression cloning. In Proc. SIGGRAPH '01 (2001), pp. 277–288.
[PZY04] Pei Y., Zha H., Yuan Z.: Tissue map based craniofacial reconstruction and facial deformation using RBF network. In Proc. Int. Conf. on Image and Graphics (2004), pp. 398–401.
[SPCM97] Scheepers F., Parent R. E., Carlson W. E., May S. F.: Anatomy-based modeling of the human musculature. In Proc. SIGGRAPH '97 (1997), pp. 163–172.
[TBL∗07] Tu P., Book R., Liu X., Krahnstoever N., Adrian C., Williams P.: Automatic face recognition from skeletal remains. In Proc. CVPR '07 (2007), pp. 1–7.
[THL∗05] Tu P., Hartley R., Lorensen W., Alyassin M., Gupta R., Heier L.: Face reconstruction using flesh deformation modes. In Computer-Graphic Facial Reconstruction (2005), pp. 145–162.
[VVMN00] Vanezis P., Vanezis M., McCombe G., Niblett T.: Facial reconstruction using 3-D computer graphics. Forensic Science International 102, 2 (2000), 81–95.
[WFKM97] Wiskott L., Fellous J.-M., Krüger N., von der Malsburg C.: Face recognition by elastic bunch graph matching. IEEE Trans. PAMI 19, 7 (1997), 775–779.
[WG97] Wilhelms J., Van Gelder A.: Anatomically based modeling. In Proc. SIGGRAPH '97 (1997), pp. 173–180.
[ZBN97] Zhu C., Byrd R. H., Lu P., Nocedal J.: Algorithm 778: L-BFGS-B, Fortran routines for large-scale bound-constrained optimization. ACM Trans. on Mathematical Software 23, 4 (1997), 550–560.