Lecture Notes in Mechanical Engineering

Benoit Eynard · Vincenzo Nigrelli · Salvatore Massimo Oliveri · Guillermo Peris-Fajarnes · Sergio Rizzuti
Editors

Advances on Mechanics, Design Engineering and Manufacturing

Proceedings of the International Joint Conference on Mechanics, Design Engineering & Advanced Manufacturing (JCM 2016), 14–16 September 2016, Catania, Italy
Lecture Notes in Mechanical Engineering
About this Series
Lecture Notes in Mechanical Engineering (LNME) publishes the latest
developments in Mechanical Engineering—quickly, informally and with high
quality. Original research reported in proceedings and post-proceedings represents
the core of LNME. Also considered for publication are monographs, contributed
volumes and lecture notes of exceptionally high quality and interest. Volumes
published in LNME embrace all aspects, subfields and new challenges of
mechanical engineering. Topics in the series include:
• Engineering Design
• Machinery and Machine Elements
• Mechanical Structures and Stress Analysis
• Automotive Engineering
• Engine Technology
• Aerospace Technology and Astronautics
• Nanotechnology and Microengineering
• Control, Robotics, Mechatronics
• MEMS
• Theoretical and Applied Mechanics
• Dynamical Systems, Control
• Fluid Mechanics
• Engineering Thermodynamics, Heat and Mass Transfer
• Manufacturing
• Precision Engineering, Instrumentation, Measurement
• Materials Engineering
• Tribology and Surface Technology
More information about this series at http://www.springer.com/series/11236
Benoit Eynard · Vincenzo Nigrelli · Salvatore Massimo Oliveri · Guillermo Peris-Fajarnes · Sergio Rizzuti
Editors

Advances on Mechanics, Design Engineering and Manufacturing

Proceedings of the International Joint Conference on Mechanics, Design Engineering & Advanced Manufacturing (JCM 2016), 14–16 September 2016, Catania, Italy
Organizing Scientific Associations:
AIP-PRIMECA—Ateliers Inter-établissements de
Productique—Pôles de Resources Informatiques
pour la MECAnique—France
ADM—Associazione nazionale Disegno e Metodi
dell’ingegneria industriale—Italy
INGEGRAF—Asociación Española de Ingeniería
Gráfica—Spain
Editors
Benoit Eynard
Université de Technologie de Compiègne
Compiègne
France
Guillermo Peris-Fajarnes
Universidad Politecnica de Valencia
Valencia
Spain
Vincenzo Nigrelli
Dipartimento di Ingegneria Chimica,
Gestionale, Informatica, Meccanica
Università degli Studi di Palermo
Palermo
Italy
Sergio Rizzuti
Dipartimento di Ingegneria Meccanica,
Energetica e Gestionale
Università della Calabria
Rende, Cosenza
Italy
Salvatore Massimo Oliveri
Dipartimento di Ingegneria Elettrica,
Elettronica e Informatica (DIEEI)
Università degli Studi di Catania
Catania
Italy
ISSN 2195-4356
ISSN 2195-4364 (electronic)
Lecture Notes in Mechanical Engineering
ISBN 978-3-319-45780-2
ISBN 978-3-319-45781-9 (eBook)
DOI 10.1007/978-3-319-45781-9
Library of Congress Control Number: 2016950391
© Springer International Publishing AG 2017
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.
Printed on acid-free paper
This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface and Acknowledgements
The JCM Conference has reached its second edition, following JCM 2014 held in Toulouse (France). The cycle of conferences started in 2003 with biennial editions
organized by ADM (Design and Methods of Industrial Engineering Society—Italy)
and INGEGRAF (Asociación Española de Ingeniería Gráfica—Spain). In the joint conference held in Venice in June 2011 (IMProVe 2011), AIP-PRIMECA
(Ateliers Inter-établissements de Productique—Pôles de Resources Informatiques
pour la MECAnique—France) also took part in the event as an organizer.
JCM 2016 was organized by the Rapid Prototyping and Geometric Modelling
Laboratory of the University of Catania (Italy)—Department of Electronic, Electric
and Informatics Engineering (DIEEI).
JCM 2016 gathered researchers and industry experts in the domain of
“Interactive and Integrated Design and Manufacturing for Innovation” to disseminate their major recent results, studies, implementations, tools and techniques at
an international level.
A total of 404 authors were involved. JCM 2016 attracted
138 abstract submissions, which became 135 full papers. Following a peer-review process,
123 papers were selected and accepted for presentation at the conference, in
podium or poster sessions. This valuable reviewing work was made possible by the
111 people involved in the process, coordinated by 24 track chairs, who provided
no fewer than two reviews per paper, for a total of 381 reviews.
The book is organized in several parts, each corresponding to a track
of the conference. Each part is briefly introduced by the track chairs who oversaw
the review process.
We would like to personally thank all the people involved in the review process
for their strong commitment and the expertise they demonstrated in this difficult,
time-consuming and very important task. We would also like to thank the members of the
Organizing Committee who made the conference possible, and in particular
Dr. Gaetano Sequenzia for his work in all phases of conference organization and
management, his support to the Program Chair during the review process, and the
communication with authors, invited speakers, sponsors and so on.
Catania, Italy
Salvatore Massimo Oliveri

Rende, Italy
Sergio Rizzuti
Organization Committee
Conference Chair
Salvatore Massimo Oliveri, Univ. Catania
Conference Program Chair
Sergio Rizzuti, Univ. della Calabria
Conference Advisory Chairmen
Benoit Eynard, UT Compiègne
Xavier Fischer, ESTIA
Vincenzo Nigrelli, Univ. Palermo
Guillermo Peris-Fajarnes, Univ. Polit. Valencia
Scientific Committee
Angelo Oreste Andrisano, Univ. Modena e Reggio Emilia
Fabrizio Micari, Univ. Palermo
Fernando J. Aguilar, Univ. Almería
Pedro Álvarez, Univ. Oviedo
Agustín Arias, Univ. País Vasco
Sandro Barone, Univ. Pisa
Antonio Bello, Univ. Oviedo
Alain Bernard, Ecole Centrale Nantes
Jean-François Boujut, Grenoble INP
Daniel Brissaud, Grenoble INP
Fernando Brusola, Univ. Polit. Valencia
Enrique Burgos, Univ. País Vasco
Gianni Caligiana, Univ. Bologna
Monica Carfagni, Univ. Firenze
Antonio Carretero, Univ. Polit. Madrid
Pierre Castagna, Univ. Nantes
Patrick Charpentier, Univ. Lorraine
Vincent Cheutet, INSA Lyon
Gianmaria Concheri, Univ. Padova
Paolo Conti, Univ. Perugia
David Corbella, Univ. Polit. Madrid
Daniel Coutellier, ENSIAME
Alain Daidié, INSA Toulouse
Jean-Yves Dantan, ENSAM Metz
Beatriz Defez, Univ. Polit. Valencia
Paolo Di Stefano, Univ. L’Aquila
Emmanuel Duc, SIGMA Clermont
Alex Duffy, Univ. Strathclyde
Francisco Xavier Espinach, Univ. Girona
Georges Fadel, Clemson Univ.
Mercedes Farjas, Univ. Polit. Madrid
Jesùs Fèlez, Univ. Polit. Madrid
Gaspar Fernández, Univ. León
Livan Fratini, Univ. Palermo
Benoît Furet, Univ. Nantes
Mikel Garmendia, Univ. País Vasco
Philippe Girard, CNRS-IMS
Samuel Gomes, UTBM
Bernard Grabot, ENI Tarbes
Peter Hehenberger, Johannes Kepler University Linz
Francisco Hernández, Univ. Polit. Cataluña
Isidro Ladrón-de-Guevara, Univ. Málaga
Antonio Lanzotti, Univ. Napoli “Federico II”
Jesús López, Univ. Pública Navarra
Ferruccio Mandorli, Polit. Delle Marche
Mª Luisa Martínez-Muneta, Univ. Polit. Madrid
Christian Mascle, Polytechnique Montréal
Chris McMahon, Univ. of Bristol
Rochdi Merzouki, Univ. Lille
Rikardo Mínguez, Univ. País Vasco
Giuseppe Monno, Polit. Bari
Paz Morer, Univ. Navarra
Javier Muniozguren, Univ. País Vasco
Frédéric Noël, Grenoble INP
César Otero, Univ. Cantabria
Manuel Paredes, INSA Toulouse
Basilio Ramos, Univ. Burgos
Didier Rémond, INSA Lyon
Caterina Rizzi, Univ. Bergamo
Louis Rivest, ETS Montréal
José Ignacio Rojas-Sola, Univ. Jaén
Lionel Roucoules, ENSAM Aix
Carlos San-Antonio, Univ. Polit. Madrid
José Miguel Sánchez, Univ. Cantabria
Jacinto Santamaría-Peña, Univ. La Rioja
Félix Sanz-Adan, Univ. La Rioja
Irene Sentana, Univ. Alicante
Sébastien Thibaud, Univ. Franche Comté
Stefano Tornincasa, Polit. Torino
Christophe Tournier, ENS Cachan
Pedro Ubieto, Univ. Zaragoza
Mercedes Valiente Lopez, Univ. Polit. de Madrid
Jozsef Vancza, MTA SZTAKI
Bernard Yannou, CentraleSupélec
Eduardo Zurita, Univ. Santiago de Compostela
Additional Reviewers
Niccolò Becattini, Polit. Milano
Giovanni Berselli, Univ. Genova
Francesco Bianconi, Univ. Perugia
Elvio Bonisoli, Polit. Torino
Yuri Borgianni, Univ. Bolzano
Fabio Bruno, Univ. Calabria
Francesca Campana, Univ. Roma “La Sapienza”
Nicola Cappetti, Univ. Salerno
Alessandro Ceruti, Univ. Bologna
Giorgio Colombo, Polit. Milano
Filippo Cucinotta, Univ. Messina
Francesca De Crescenzio, Univ. Bologna
Luigi De Napoli, Univ. Calabria
Luca Di Angelo, Univ. L'Aquila
Francesco Ferrise, Polit. Milano
Stefano Filippi, Univ. Udine
Michele Fiorentino, Polit. Bari
Daniela Francia, Univ. Bologna
Rocco Furferi, Univ. Firenze
Salvatore Gerbino, Univ. Molise
Michele Germani, Polit. Marche
Lapo Governi, Univ. Firenze
Serena Graziosi, Polit. Milano
Tommaso Ingrassia, Univ. Palermo
Francesco Leali, Univ. Modena and Reggio Emilia
Antonio Mancuso, Univ. Palermo
Massimo Martorelli, Univ. Napoli “Federico II”
Maura Mengoni, Polit. Marche
Barbara Motyl, Univ. Udine
Maurizio Muzzupappa, Univ. Calabria
Alessandro Paoli, Univ. Pisa
Stanislao Patalano, Univ. Napoli “Federico II”
Marcello Pellicciari, Univ. Modena and Reggio Emilia
Margherita Peruzzini, Univ. Modena and Reggio Emilia
Roberto Raffaeli, Univ. eCampus
Armando Razionale, Univ. Pisa
Roberto Razzoli, Univ. Genova
Fabrizio Renno, Univ. Napoli “Federico II”
Francesco Rosa, Polit. Milano
Federico Rotini, Univ. Firenze
Davide Russo, Univ. Bergamo
Gianpaolo Savio, Univ. Padova
Domenico Speranza, Univ. Cassino
Davide Tumino, Univ. Enna Kore
Antonio E. Uva, Polit. Bari
Alberto Vergnano, Univ. Modena and Reggio Emilia
Enrico Vezzetti, Polit. Torino
Maria Grazia Violante, Polit. Torino
Organizing Committee
Salvatore Massimo Oliveri, Univ. Catania
Gaetano Sequenzia, Univ. Catania
Gabriele Fatuzzo, Univ. Catania
University of Catania (IT)—Department of Electronic, Electric and Informatics
Engineering—Rapid Prototyping and Geometric Modelling Laboratory
Viale Andrea Doria, 6, Building 3
95125, Catania, Italy
Contents
Part I
Integrated Product and Process Design
Section 1.1 Innovative Design Methods
A Systematic Methodology for Engineered Object Design:
The P-To-V Model of Functional Innovation . . . . . . . . . . . . . . . . . . . . . . . . .   5
Geoffrey S. Matthews
Influence of the evolutionary optimization parameters
on the optimal topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  15
Tommaso Ingrassia, Antonio Mancuso and Giorgio Paladino
Design of structural parts for a racing solar car . . . . . . . . . . . . . . . . . . . . . .  25
Esteban Betancur, Ricardo Mejía-Gutiérrez, Gilberto Osorio-Gómez
and Alejandro Arbelaez
Section 1.2 Integrated Product and Process Design
Some Hints for the Correct Use of the Taguchi Method
in Product Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  35
Sergio Rizzuti and Luigi De Napoli
Neuro-separated meta-model of the scavenging process
in 2-Stroke Diesel engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  45
Stéphanie Cagin and Xavier Fischer
Subassembly identification method based on CAD Data . . . . . . . . . . . . . . . .  55
Imen Belhadj, Moez Trigui and Abdelmajid Benamara
Multi-objective conceptual design: an approach to make
cost-efficient the design for manufacturing and assembly
in the development of complex products . . . . . . . . . . . . . . . . . . . . . . . . . . . .  63
Claudio Favi, Michele Germani and Marco Mandolini
Modeling of a three-axes MEMS gyroscope with feedforward
PI quadrature compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  71
D. Marano, A. Cammarata, G. Fichera, R. Sinatra and D. Prati
A disassembly Sequence Planning Approach for maintenance . . . . . . . . . . . .  81
Maroua Kheder, Moez Trigui and Nizar Aifaoui
A comparative Life Cycle Assessment of utility poles manufactured
with different materials and dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . .  91
Sandro Barone, Filippo Cucinotta and Felice Sfravara
Prevision of Complex System’s Compliance during
System Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
J-P. Gitto, M. Bosch-Mauchand, A. Ponchet Durupt, Z. Cherfi
and I. Guivarch
Framework definition for the design of a mobile
manufacturing system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Youssef Benama, Thecle Alix and Nicolas Perry
An automated manufacturing analysis of plastic parts
using faceted surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Jorge Manuel Mercado-Colmenero, José Angel Moya Muriana,
Miguel Angel Rubio-Paramio and Cristina Martín-Doñate
Applying sustainability in product development . . . . . . . . . . . . . . . . . . . . 129
Rosana Sanz, José Luis Santolaya and Enrique Lacasa
Towards a new collaborative framework supporting the design
process of industrial Product Service Systems . . . . . . . . . . . . . . . . . . . . . 139
Elaheh Maleki, Farouk Belkadi, Yicha Zhang and Alain Bernard
Information model for tracelinks building in early design stages . . . . . . 147
David Ríos-Zapata, Jérôme Pailhés and Ricardo Mejía-Gutiérrez
Section 1.3 Interactive Design
User-centered design of a Virtual Museum system: a case study . . . . . . 157
Loris Barbieri, Fabio Bruno, Fabrizio Mollo and Maurizio Muzzupappa
An integrated approach to customize the packaging of heritage
artefacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
G. Fatuzzo, G. Sequenzia, S.M. Oliveri, R. Barbagallo and M. Calì
Part II
Product Manufacturing and Additive Manufacturing
Section 2.1 Additive Manufacturing
Extraction of features for combined additive manufacturing
and machining processes in a remanufacturing context . . . . . . . . . . . . . . 181
Van Thao Le, Henri Paris and Guillaume Mandil
Comparative Study for the Metrological Characterization
of Additive Manufacturing artefacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Charyar Mehdi-Souzani, Antonio Piratelli-Filho and Nabil Anwer
Flatness, circularity and cylindricity errors in 3D printed models
associated to size and position on the working plane . . . . . . . . . . . . . . . . 201
Massimo Martorelli, Salvatore Gerbino, Antonio Lanzotti,
Stanislao Patalano and Ferdinando Vitolo
Optimization of lattice structures for Additive Manufacturing
Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Gianpaolo Savio, Roberto Meneghello and Gianmaria Concheri
Standardisation Focus on Process Planning and Operations
Management for Additive Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . 223
Jinhua Xiao, Nabil Anwer, Alexandre Durupt, Julien Le Duigou
and Benoît Eynard
Comparison of some approaches to define a CAD model from
topological optimization in design for additive manufacturing . . . . . . . . 233
Pierre-Thomas Doutre, Elodie Morretton, Thanh Hoang Vo,
Philippe Marin, Franck Pourroy, Guy Prudhomme
and Frederic Vignat
Review of Shape Deviation Modeling for Additive Manufacturing . . . . . 241
Zuowei Zhu, Safa Keimasi, Nabil Anwer, Luc Mathieu
and Lihong Qiao
Design for Additive Manufacturing of a non-assembly robotic
mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
F. De Crescenzio and F. Lucchi
Process parameters influence in additive manufacturing . . . . . . . . . . . . . 261
T. Ingrassia, Vincenzo Nigrelli, V. Ricotta and C. Tartamella
Multi-scale surface characterization in additive manufacturing
using CT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Yann Quinsat, Claire Lartigue, Christopher A. Brown
and Lamine Hattali
Testing three techniques to elicit additive manufacturing
knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Christelle Grandvallet, Franck Pourroy, Guy Prudhomme
and Frédéric Vignat
Topological Optimization in Concept Design: starting approach
and a validation case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Michele Bici, Giovanni B. Broggiato and Francesca Campana
Section 2.2 Advanced Manufacturing
Simulation of Laser-Sensor Digitizing for On-Machine
Part Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Nguyen Duy Minh Phan, Yann Quinsat and Claire Lartigue
Tool/Material Interferences Sensibility to Process
and Tool Parameters in Vibration-Assisted Drilling . . . . . . . . . . . . . . . . . 313
Vivien Bonnot, Yann Landon and Stéphane Segonds
Implementation of a new method for robotic repair operations
on composite structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Elodie Paquet, Sébastien Garnier, Mathieu Ritou, Benoît Furet
and Vincent Desfontaines
CAD-CAM integration for 3D Hybrid Manufacturing . . . . . . . . . . . . . . . 329
Gianni Caligiana, Daniela Francia and Alfredo Liverani
Section 2.3 Experimental Methods in Product Development
Mechanical steering gear internal friction: effects on the drive feel
and development of an analytic experimental model
for its prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Giovanni Gritti, Franco Peverada, Stefano Orlandi, Marco Gadola,
Stefano Uberti, Daniel Chindamo, Matteo Romano
and Andrea Olivi
Design of an electric tool for underwater archaeological restoration
based on a user centred approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Loris Barbieri, Fabio Bruno, Luigi De Napoli, Alessandro Gallo
and Maurizio Muzzupappa
Analysis and comparison of Smart City initiatives . . . . . . . . . . . . . . . . . . 363
Aranzazu Fernández-Vázquez and Ignacio López-Forniés
Involving Autism Spectrum Disorder (ASD) affected people
in design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
Stefano Filippi and Daniela Barattin
Part III
Engineering Methods in Medicine
Patient-specific 3D modelling of heart and cardiac structures
workflow: an overview of methodologies . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Monica Carfagni and Francesca Uccheddu
A new method to capture the jaw movement . . . . . . . . . . . . . . . . . . . . . . 397
Lander Barrenetxea, Eneko Solaberrieta,
Mikel Iturrate and Jokin Gorozika
Computer Aided Engineering of Auxiliary Elements
for Enhanced Orthodontic Appliances . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Roberto Savignano, Sandro Barone, Alessandro Paoli
and Armando Viviano Razionale
Finite Element Analysis of TMJ Disks Stress Level
due to Orthodontic Eruption Guidance Appliances . . . . . . . . . . . . . . . . . 415
Paolo Neri, Sandro Barone, Alessandro Paoli
and Armando Razionale
TPMS for interactive modelling of trabecular scaffolds
for Bone Tissue Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
M. Fantini, M. Curto and F. De Crescenzio
Mechanical and Geometrical Properties Assessment
of Thermoplastic Materials for Biomedical Application . . . . . . . . . . . . . . 437
Sandro Barone, Alessandro Paoli, Paolo Neri, Armando Viviano Razionale
and Michele Giannese
The design of a knee prosthesis by Finite Element Analysis . . . . . . . . . . 447
Saúl Íñiguez-Macedo, Fátima Somovilla-Gómez, Rubén Lostado-Lorza,
Marina Corral-Bobadilla, María Ángeles Martínez-Calvo
and Félix Sanz-Adán
Design and Rapid Manufacturing of a customized foot orthosis:
a first methodological study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
M. Fantini, F. De Crescenzio, L. Brognara and N. Baldini
Influence of the metaphysis positioning in a new reverse shoulder
prosthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
T. Ingrassia, L. Nalbone, Vincenzo Nigrelli, D. Pisciotta
and V. Ricotta
Digital human models for gait analysis: experimental validation
of static force analysis tools under dynamic conditions . . . . . . . . . . . . . . 479
T. Caporaso, G. Di Gironimo, A. Tarallo, G. De Martino,
M. Di Ludovico and A. Lanzotti
Using the Finite Element Method to Determine the Influence
of Age, Height and Weight on the Vertebrae
and Ligaments of the Human Spine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
Fátima Somovilla-Gómez, Rubén Lostado-Lorza, Saúl Íñiguez-Macedo,
Marina Corral-Bobadilla, María Ángeles Martínez-Calvo
and Daniel Tobalina-Baldeon
Part IV
Nautical, Aeronautics and Aerospace Design and Modelling
Numerical modelling of the cold expansion process in mechanical
stacked assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
Victor Achard, Alain Daidie, Manuel Paredes and Clément Chirol
A preliminary method for the numerical prediction of the behavior
of air bubbles in the design of Air Cavity Ships . . . . . . . . . . . . . . . . . . . . 509
Filippo Cucinotta, Vincenzo Nigrelli and Felice Sfravara
Stiffness and slip laws for threaded fasteners subjected
to a transversal load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Rémi Thanwerdas, Emmanuel Rodriguez and Alain Daidie
Refitting of an eco-friendly sailing yacht: numerical prediction
and experimental validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
A. Mancuso, G. Pitarresi, G.B. Trinca and D. Tumino
Geometric Parameterization Strategies for shape Optimization
Using RBF Mesh Morphing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
Ubaldo Cella, Corrado Groth and Marco Evangelos Biancolini
Sail Plan Parametric CAD Model for an A-Class Catamaran
Numerical Optimization Procedure Using Open Source Tools . . . . . . . . 547
Ubaldo Cella, Filippo Cucinotta and Felice Sfravara
A reverse engineering approach to measure the deformations
of a sailing yacht . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Francesco Di Paola, Tommaso Ingrassia, Mauro Lo Brutto
and Antonio Mancuso
A novel design of cubic stiffness for a Nonlinear Energy Sink (NES)
based on conical spring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
Donghai Qiu, Sébastien Seguy and Manuel Paredes
Design of the stabilization control system of a high-speed craft . . . . . . . 575
Antonio Giallanza, Luigi Cannizzaro, Mario Porretto
and Giuseppe Marannano
Dynamic spinnaker performance through digital photogrammetry,
numerical analysis and experimental tests. . . . . . . . . . . . . . . . . . . . . . . . . 585
Michele Calì, Domenico Speranza and Massimo Martorelli
GA multi-objective and experimental optimization for a tail-sitter
small UAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
Luca Piancastelli, Leonardo Frizziero and Marco Cremonini
Part V
Computer Aided Design and Virtual Simulation
Section 5.1 Simulation and Virtual Approaches
An integrated approach to design an innovative motorcycle rear
suspension with eccentric mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
R. Barbagallo, G. Sequenzia, A. Cammarata and S.M. Oliveri
Design of Active Noise Control Systems for Pulse Noise . . . . . . . . . . . . . 621
Alessandro Lapini, Massimiliano Biagini, Francesco Borchi,
Monica Carfagni and Fabrizio Argenti
Disassembly Process Simulation in Virtual Reality Environment . . . . . . 631
Peter Mitrouchev, Cheng-gang Wang and Jing-tao Chen
Development of a methodology for performance analysis and synthesis
of control strategies of multi-robot pick & place applications . . . . . . . . . 639
Gaël Humbert, Minh Tu Pham, Xavier Brun, Mady Guillemot
and Didier Noterman
3D modelling of the mechanical actions of cutting: application
to milling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
Wadii Yousfi, Olivier Cahuc, Raynald Laheurte, Philippe Darnis
and Madalina Calamaz
Engineering methods and tools enabling reconfigurable
and adaptive robotic deburring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Giovanni Berselli, Michele Gadaleta, Andrea Genovesi,
Marcello Pellicciari, Margherita Peruzzini
and Roberto Razzoli
Tolerances and uncertainties effects on interference fit
of automotive steel wheels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
Stefano Tornincasa, Elvio Bonisoli and Marco Brino
An effective model for the sliding contact forces in a multibody
environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
Michele Calì, Salvatore Massimo Oliveri, Gaetano Sequenzia
and Gabriele Fatuzzo
Systems engineering and hydroacoustic modelling applied
in simulation of hydraulic components . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
Arnaud Maillard, Eric Noppe, Benoît Eynard
and Xavier Carniel
Linde’s ice-making machine. An example of industrial archeology
study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
Belén Pérez Delgado, José R. Andrés Díaz, María L. García Ceballos
and Miguel A. Contreras López
Solder Joint Reliability: Thermo-mechanical analysis on Power Flat
Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
Alessandro Sitta, Michele Calabretta, Marco Renna
and Daniela Cavallaro
Section 5.2 Virtual and Augmented Reality
Virtual reality to assess visual impact in wind energy projects . . . . . . . . 719
Piedad Eliana Lizcano, Cristina Manchado, Valentin Gomez-Jauregui
and César Otero
Visual Aided Assembly of Scale Models with AR . . . . . . . . . . . . . . . . . . . 727
Alessandro Ceruti, Leonardo Frizziero and Alfredo Liverani
Section 5.3 Geometric Modelling and Analysis
Design and analysis of a spiral bevel gear. . . . . . . . . . . . . . . . . . . . . . . . . 739
Charly Lagresle, Jean-Pierre de Vaujany and Michèle Guingand
Three-dimensional face analysis via new geometrical descriptors . . . . . . 747
Federica Marcolin, Maria Grazia Violante, Sandro Moos, Enrico Vezzetti,
Stefano Tornincasa, Nicole Dagnes and Domenico Speranza
Agustin de Betancourt’s plunger lock: Approach to its geometric
modeling with Autodesk Inventor Professional . . . . . . . . . . . . . . . . . . . . . 757
José Ignacio Rojas-Sola and Eduardo De La Morena-De La Fuente
Designing a Stirling engine prototype . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
Fernando Fadon, Enrique Ceron, Delfin Silio and Laida Fadon
Design and analysis of tissue engineering scaffolds based on open
porous non-stochastic cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
R. Ambu and A.E. Morabito
Geometric Shape Optimization of Organic Solar Cells for Efficiency
Enhancement by Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
Grazia Lo Sciuto, Giacomo Capizzi, Salvatore Coco
and Raphael Shikler
Section 5.4 Reverse Engineering
A survey of methods to detect and represent the human symmetry
line from 3D scanned human back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
Nicola Cappetti and Alessandro Naddeo
Semiautomatic Surface Reconstruction in Forging Dies . . . . . . . . . . . . . . 811
Rikardo Minguez, Olatz Etxaniz, Agustin Arias, Nestor Goikoetxea
and Inaki Zuazo
A RGB-D based instant body-scanning solution for compact box
installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
Rocco Furferi, Lapo Governi, Francesca Uccheddu and Yary Volpe
Machine Learning Techniques to address classification issues
in Reverse Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
Jonathan Dekhtiar, Alexandre Durupt, Dimitris Kiritsis,
Matthieu Bricogne, Harvey Rowson and Benoit Eynard
Recent strategies for 3D reconstruction using Reverse Engineering:
a bird’s eye view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
Francesco Buonamici, Monica Carfagni and Yary Volpe
Section 5.5 Product Data Exchange and Management
Data aggregation architecture “Smart-Hub” for heterogeneous
systems in industrial environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
Ahmed Ahmed, Lionel Roucoules, Rémy Gaudy and Bertrand Larat
Preparation of CAD model for collaborative design meetings:
proposition of a CAD add-on . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
Ahmad Al Khatib, Damien Fleche, Morad Mahdjoub,
Jean-Bernard Bluntzer and Jean-Claude Sagot
Applying PLM approach for supporting collaborations in medical
sector: case of prosthesis implantation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
Thanh-Nghi Ngo, Farouk Belkadi and Alain Bernard
Section 5.6 Surveying, Mapping and GIS Techniques
3D Coastal Monitoring from very dense UAV-Based Photogrammetric
Point Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
Fernando J. Aguilar, Ismael Fernández, Juan A. Casanova,
Francisco J. Ramos, Manuel A. Aguilar, José L. Blanco
and José C. Moreno
Section 5.7 Building Information Modelling
BiMov: BIM-Based Indoor Path Planning . . . . . . . . . . . . . . . . . . . . . . . . 891
Ahmed Hamieh, Dominique Deneux and Christian Tahon
Part VI
Education and Representation Techniques
Section 6.1 Teaching Engineering Drawing
Best practices in teaching technical drawing: experiences
of collaboration in three Italian Universities . . . . . . . . . . . . . . . . . . . . . . . 905
Domenico Speranza, Gabriele Baronio, Barbara Motyl, Stefano Filippi
and Valerio Villa
Gamification in a Graphical Engineering course - Learning
by playing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
Valentín Gómez-Jáuregui, Cristina Manchado and César Otero
Reliable low-cost alternative for modeling and rendering 3D Objects
in Engineering Graphics Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923
J. Santamaría-Peña, M. A. Benito-Martín, F. Sanz-Adán, D. Arancón
and M. A. Martinez-Calvo
Section 6.2 Teaching Product Design and Drawing History
How to teach interdisciplinary: case study for Product Design
in Assistive Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
G. Thomann, Fabio Morais and Christine Werba
Learning engineering drawing and design through the study
of machinery and tools from Malaga’s industrial heritage . . . . . . . . . . . 941
M. Carmen Ladrón de Guevara Muñoz, Francisco Montes Tubio,
E. Beatriz Blázquez Parra and Francisca Castillo Rueda
Developing students’ skills through real projects
and service learning methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
Anna Biedermann, Natalia Muñoz López and Ana Serrano Tierz
Integration of marketing activities in the mechanical design process . . . 961
Cristina Martin-Doñate, Fermín Lucena-Muñoz
and Javier Gallego-Alvarez
Section 6.3 Representation Techniques
Geometric locus associated with thriedra axonometric projections.
Intrinsic curve associated with the ellipse generated . . . . . . . . . . . . . . . . 973
Pedro Gonzaga, Faustino Gimena, Lázaro Gimena and Mikel Goñi
Pohlke Theorem: Demonstration and Graphical Solution . . . . . . . . . . . . 981
Faustino Gimena, Lázaro Gimena, Mikel Goñi and Pedro Gonzaga
Part VII
Geometric Product Specification and Tolerancing
Section 7.1 Geometric Product Specification and Tolerancing
ISO Tolerancing of hyperstatic mechanical systems with deformation
control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
Oussama Rouetbi, Laurent Pierre, Bernard Anselmetti
and Henri Denoix
How to trace the significant information in tolerance analysis
with polytopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
Vincent Delos, Denis Teissandier and Santiago Arroyave-Tobón
Integrated design method for optimal tolerance stack evaluation
for top class automotive chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
Davide Panari, Cristina Renzi, Alberto Vergnano, Enrico Bonazzi
and Francesco Leali
Development of virtual metrology laboratory based on skin model
shape simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
Xingyu Yan, Alex Ballu, Antoine Blanchard, Serge Mouton
and Halidou Niandou
Product model for Dimensioning, Tolerancing and Inspection . . . . . . . . 1033
L. Di Angelo, P. Di Stefano and A.E. Morabito
Section 7.2 Geometric and Functional Characterization
of Products
Segmentation of secondary features from high-density acquired
surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
L. Di Angelo, P. Di Stefano and A.E. Morabito
Comparison of mode decomposition methods tested on simulated
surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
Alex Ballu, Rui Gomes, Pedro Mimoso, Claudia Cristovao
and Nuno Correia
Analysis of deformations induced by manufacturing processes
of fine porcelain whiteware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
Luca Puggelli, Yary Volpe and Stefano Giurgola
Characterization of a Composite Material Reinforced
with Vulcanized Rubber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
D. Tobalina, F. Sanz-Adan, R. Lostado-Lorza, M. Martínez-Calvo,
J. Santamaría-Peña, I. Sanz-Peña and F. Somovilla-Gómez
Definition of geometry and graphics applications on existing
cosmetic packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1083
Anna Maria Biedermann, Aranzazu Fernández-Vázquez
and María Elipe
Part VIII
Innovative Design
Section 8.1 Knowledge Based Engineering
A design methodology to predict the product energy efficiency
through a configuration tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
Paolo Cicconi, Michele Germani, Daniele Landi
and Anna Costanza Russo
Design knowledge formalization to shorten the time
to generate offers for Engineer To Order products . . . . . . . . . . . . . . . . . 1107
Roberto Raffaeli, Andrea Savoretti and Michele Germani
Customer/Supplier Relationship: reducing Uncertainties
in Commercial Offers thanks to Readiness, Risk and Confidence
Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
A. Sylla, E. Vareilles, M. Aldanondo, T. Coudert, L. Geneste
and K. Kirytopoulos
Collaborative Design and Supervision Processes Meta-Model
for Rationale Capitalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1123
Widad Es-Soufi, Esma Yahia and Lionel Roucoules
Design Archetype of Gears for Knowledge Based Engineering . . . . . . . . 1131
Mariele Peroni, Alberto Vergnano, Francesco Leali
and Andrea Brentegani
The Role of Knowledge Based Engineering
in Product Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
Giorgio Colombo, Francesco Furini and Marco Rossoni
Section 8.2 Industrial Design and Ergonomics
Safety of Manufacturing Equipment: Methodology Based on a Work
Situation Model and Need Functional Analysis . . . . . . . . . . . . . . . . . . . . 1151
Mahenina Remiel Feno, Patrick Martin, Bruno Daille-Lefevre,
Alain Etienne, Jacques Marsot and Ali Siadat
Identifying sequence maps or locus to represent the genetic structure
or genome standard of styling DNA in automotive design . . . . . . . . . . . . 1159
Shahriman Zainal Abidin, Azlan Othman, Zafruddin Shamsuddin,
Zaidi Samsudin, Halim Hassan and Wan Asri Wan Mohamed
Generating a user manual in the early design phase
to guide the design activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1167
Xiaoguang Sun, Rémy Houssin, Jean Renaud and Mickaël Gardoni
Robust Ergonomic Optimization of Car Packaging in Virtual
Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
Antonio Lanzotti, Amalia Vanacore and Chiara Percuoco
Human-centred design of ergonomic workstations on interactive
digital mock-ups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1187
Margherita Peruzzini, Stefano Carassai, Marcello Pellicciari
and Angelo Oreste Andrisano
Ergonomic-driven redesign of existing work cells: the “Oerlikon
Friction System” case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
Alessandro Naddeo, Mariarosaria Vallone, Nicola Cappetti,
Rosaria Califano and Fiorentino Di Napoli
Section 8.3 Image Processing and Analysis
Error control in UAV image acquisitions for 3D reconstruction
of extensive architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
Michele Calì, Salvatore Massimo Oliveri, Gabriele Fatuzzo
and Gaetano Sequenzia
Accurate 3D reconstruction of a rubber membrane inflated during
a Bulge Test to evaluate anisotropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
Michele Calì and Fabio Lo Savio
B-Scan image analysis for position and shape defect definition
in plates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
Donatella Cerniglia, Tommaso Ingrassia, Vincenzo Nigrelli
and Michele Scafidi
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
Part I
Integrated Product and Process Design
Designing and developing a new product is a complex and multidisciplinary task.
In the case of very complex products, it can involve a large number of specialists
and a great deal of equipment. The main goals of the process must always be to
satisfy customer demands while preserving the performance of the company or
project team. To ensure the success of the process, a large number of methods,
methodologies and tools have been developed. These methodologies are subject to
continuous improvement and adaptation to specific cases. In this sense, the
framework of integrated product and process design was initially developed for
large companies, but in recent years its use in medium-sized or even small
companies has also been reported. Another detectable trend is the growing interest
in proposing greener products and more environmentally friendly processes.
Some of the papers presented in this chapter propose, adapt or enhance
methods, tools and methodologies for integrated product and process design. Other
papers present case studies that could help increase knowledge of, and ease the
implantation of, similar processes in other cases. All these articles could be of
interest to researchers and practitioners wishing to deepen their knowledge of the
state of the art of integrated product and process design.
Francisco X. Espinach - Univ. Girona
Roberto Razzoli - Univ. Genova
Lionel Roucoules - ENSAM Aix
Section 1.1
Innovative Design Methods
A SYSTEMATIC METHODOLOGY FOR
ENGINEERED OBJECT DESIGN:
THE P-TO-V MODEL OF FUNCTIONAL
INNOVATION
Geoffrey S Matthews
ABC Optimal Ltd, Botley Mills, Southampton, SO30 2GB, United Kingdom
Geoffrey S Matthews. Tel.: +44 756 8589569.
E-mail address: geoffrey.s.matthews@abcoptimal.uk.com
Abstract
This paper seeks to establish the foundations of a methodology offering
practical guidance to aid the innovative design of Engineered Object functionality.
The methodology is set in a P-To-V framework. The concept of the framework is
borrowed from an earlier work, but constituent elements are new.
Much recent work focuses on different aspects of innovation. However, there
seems to be a gap for an overarching framework guiding the process of innovative
design but with a clear focus on the technical aspects of the object to be
engineered. In other words, ‘A Systematic Methodology For Engineered Object
Design’.
The term ‘Engineered Object’ rather than ‘Product’ has been used, to make the
scope as wide as possible. Three Innovation Groups are proposed – Elemental,
Application and Combination.
From a case study review, factors are identified which provided a ‘spark of
imagination’ leading to technical problem resolution. The term Influencing Factor
is defined along with the concept of Innovation Groups. The Influencing Factor
Matrix is generated to highlight patterns linking Innovation Group and Influencing
Factor(s).
The final step in the construction of the P-To-V Model is the generation of an
overarching Model Operating Chart, which aggregates the various elements of the
model.
Keywords: Design, Methodology, Model, Operating Chart
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_1
5
6
G.S. Matthews
1 Introduction
1.1 Background
In ‘Winning At Innovation’ (1), Trías De Bes and Kotler propose a
model to enable a structured approach to navigating the multitudinous
phases, steps and activities involved in innovation. Their work encompasses all
aspects from a company-wide perspective. It does address technical elements by
reviewing established tools and techniques, but that is not the focus of the work.
Parraquez, P., 2015, in his thesis ‘A Networked Perspective on the Engineering
Design Process’(2), seeks to provide a framework to evaluate the efficiency and
effectiveness of the process, but the work is not intended to, and does not try to
address technical elements of the output from the process. Financial issues are
dealt with in the paper by Ripperda S, and Krause D., 2015, ‘Cost Prognosis of
Modular Product Structure Concepts’(3). Specific technical problems are dealt
with in theses such as the one by Gmeiner T., 2015, ‘Automatic Fixture Design
Based on Formal Knowledge Representation, Design Synthesis and
Verification’(4).
This paper, however, constructs a model specifically related to the functional
aspects of design, which leads to an integrated approach to innovation.
1.2 Terminology
The title of this work contains the description ‘Engineered Object’ rather than
the more commonly occurring ‘Product’. The term ‘Product’ is mostly associated
with items which result from some kind of factory-based process and are often
purchased directly by the consumer, e.g. a car, a washing machine, an electric
toothbrush. It is considered that this scope is too narrow for the intended purposes
of the methodology. Is the composite material wing of an aircraft or the space
frame roof support of an exhibition hall a ‘Product’? It is with this in mind that the
term ‘Engineered Object’ has been used to make the scope as wide and
generalised as possible.
For the purposes of this paper, ‘Innovation’ is taken to mean developing
something new which is based on, or has some linkage with, what already
exists. To enable practical application of the methodology, three groupings are
proposed, which are defined as follows:

Elemental Innovation – The complete Engineered Object remains in its
well-established form, but there is an ‘internal’ change to an element of the object
which improves its function.
A Systematic Methodology for Engineered …
7
Application Innovation – The Engineered Object itself is not fundamentally
altered, but its use and application are changed in terms of positioning or orientation.
Combination Innovation – The Engineered Object in this case is new but in
itself, has no innovative elements. It is rather the bringing together of various
existing elements in a new combination or aggregation which provides advantages
hitherto unavailable.
Innovative designs require a ‘spark of imagination’ and from a case study
review, the paper identifies various examples. To enable classification, further
reference and eventually use as tools in the process of innovative design, the
‘sparks of imagination’ are described using an adjective / noun format. These
descriptions are defined as Influencing Factors.
The Influencing Factor Matrix links Innovation Group and Influencing
Factor(s).
1.3 Model Structure: P-To-V
This section establishes the structure of the model. Passing reference is made to
typical phases of the design process and established innovation methodologies but
as these are well researched and documented subjects, the attention is brief. The
paper’s subject is the provision of a tool which can be used at various stages by all
of the participants in the innovation process.
Previous works have well illustrated the point that innovative design does not
proceed in an orderly time sequence and this paper sets out the interactions
between the roles in a multi-nodal display showing frequent reverse interchange of
ideas and information, but at all times coordinated by the Innovation Project
Leader.
The final step in the construction of the P-To-V Model is the generation of an
overarching Model Operating Chart. It is here that the various elements of the
model are aggregated.
1.4 Limitations and Further Work
This short section brings the paper to a conclusion and establishes next steps.
2 Influencing Factor Concept
2.1 Concept Origin
The idea for the use of ‘Influencing Factors’ originates in the author’s work
experience in the analysis and improvement of various processes – mainly
industrial but also administrative – using the Methods-Time Measurement (MTM)
methodology (5). The focus there is on the time required by human operatives to
perform certain tasks, and the basis is an analysis of the movements undertaken
(mainly) by the arms, hands and fingers. The time required depends on
several variables, e.g. distance, visibility of the reach-to point, frequency of the
motion, etc. Clearly it takes longer to reach 40 cm than it does to reach 10 cm.
These variables are called Influencing Factors.
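This variable-to-time dependence can be caricatured in a short sketch. The function below is purely illustrative: the numeric constants are invented for this example and are not actual MTM time values, which come from standardised tables.

```python
def reach_time_tmu(distance_cm, target_in_view=True):
    """Toy Influencing-Factor model of an MTM-style Reach motion.

    Time grows with distance and increases when the reach-to point is
    not visible. Constants are invented for illustration only; real MTM
    analyses use standardised time data tables.
    """
    base = 5.0 + 0.3 * distance_cm      # longer reaches take longer
    return base if target_in_view else base * 1.2   # obscured target costs extra
```

With these placeholder constants, reaching 40 cm costs more time than reaching 10 cm, and an obscured target costs more still, which is the sense in which each variable "influences" the result.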
2.2 Concept Application
The idea in this paper is to apply this Influencing Factor concept, albeit in a
modified form, to provide structure to the identification, evaluation and listing of
many varied and different events which have an impact on the initial phases of the
functional innovation process.
2.3 Out Of The Box
Within the context of a technical paper it is natural to align the analysis with
engineering characteristics – mass, force, elasticity, statistical validity, etc.
However, there is a deliberate attempt in this paper to seek examples of
Influencing Factors detached from the world of scientific method. Further
explanation will follow, but it is initially surprising to find the inspiration for an
innovative solution to a construction site challenge while preparing entertainment
for a child’s birthday party.
3 Brief Case Study Review
Space does not permit a full description of the case studies reviewed. The target
was to identify what type of events caused the initial ‘spark of imagination’ which
then led on to the development of innovative designs.
Of particular interest were examples where the initial spark was not found
through a ‘classic’ engineering procedure. One such example was a road sign
stating simply ‘Bridge May Be Icy’ – Figure 1 below, which led to conductive
concrete(10). A further example was an innovative method of providing the
temporary support work for the construction of a concrete dome – Figure 2 below.
This borrowed the idea of inflated bouncy castles used for children’s
entertainment purposes(6).
Fig. 1. Bridge May Be Icy
Fig. 2. Concrete Dome Support
A summary of the case studies along with the allocation to one or other of the
Influencing Factors is shown in Figure 3 below. References to the case study
sources are provided in the figure.
Case Study Summary

New Object Description | New Object Basis | Ref | Influencing Factor
Concrete Dome Formwork (inflated temporary dome using 1 mm thick rubber-based flexible fabric) | Observation of ‘Bouncy Castle’ erection while preparing for children’s entertainment activities | 6 | Family Activity
Low Cost Micro-Hydro Scheme (Peru) Intake Structure | Masonry construction | 7 | Traditional Expertise
Rubik’s Cube as Toy/Game | Designed as a teaching aid to explain spatial relationships | 8 | Alternative Perspective
BMC Mini | Transverse mounted engine / in-sump gearbox | 9 | Limited Space
Conductive Concrete | Addition of steel shavings and carbon particles to an otherwise standard concrete | 10 | Natural Observation
Bitumen Emulsion Binders used for highway maintenance purposes | Short-term viscosity reduction through emulsion technology | 11 | Changed Regulations
F1 Hydro-Pneumatic Suspension | Micro-Filter technology | 12 | Changed Regulations
Virtual Fencing for Cattle Control | Satellite technology linked to programmable cattle collars | 13 | Conference Proceedings
Dyson Dual Cyclone Household Vacuum Cleaner | Industrial cyclone extraction system | 14 | Adjacent Functionality
Sony Walkman Stereo Cassette Player | Transportable cassette recorder/player | 15 | Individual Requests
Higgs Boson Paper (Physics Letters) | Addition of model explanation | 16 | Rejected Ideas
High Pressure Gasoline Fuel Pump Inlet Valve | Spiral blade spring | 17 | Technical Reflection
Snowboard | Skateboard and surfboard | 18 | Group Input
Basestone Collaboration Tool | Linkage between office and construction site digital information via tablet app | 19 | Personal Frustration
Smartphone / Tablet | Mobile phone, digital camera, computer | 20 | New Technology

Fig. 3. Case Study summary
4 The P-To-V Model
4.1 Design and Innovation Methodologies
There are many proven design methodologies and models. Cross (22) notes, in
ascending order of complexity, those proposed by French, Archer, Pahl and Beitz,
VDI (Verein Deutscher Ingenieure) and March. Bürdek (23) reproduces his
previously proposed feedback loop model, but with the explanatory comment that
‘the repertoire of methods to be applied depends on the complexity of the
problems posed’.
Methodologies for innovation are also well established. Brainstorming is
clearly a classical starting point with progression using techniques such as
Innovation Funneling, Technique Mapping and many others.
All of the above models seem to lack a linkage between the idea generation
activities and the roles / responsibilities of those involved in the process.
‘Winning At Innovation’(1), has the subtitle of ‘The A-To-F Model’. It
identifies six roles - Activators, Browsers, Creators, Developers, Executors,
Facilitators. Innovation is addressed at the strategic level, covering general
business aspects of the process. Some attention is given to Product Design but that
is not the focus of that work.
The A-To-F concept gave rise to the idea of a similar model based
methodology, but with more focus on the functional aspects of the Engineered
Object. Hence the P-To-V Model.
4.2 P-To-V Characteristics
The P-To-V Model has the following roles – Provokers, Quantifiers,
Researchers, Specifiers, Transformers, Utilisers, Validators.
The intended features of this model, though, are fluidity and flexibility. It should not
be applied by following common lines of demarcation between organizational
departments. The roles should be thought of as ‘task’ focused and not ‘function’
focused. Of course, such fluidity and flexibility, if not properly managed, would
lead to chaos and failure. This requires a certain amount of oversight and control.
The responsibility for this lies in the role of the Provoker, and here there must be
an element of continuity. At the strategic level, this person will be the sponsor for
any particular project and at the operational level the recommendation is the
appointment of an Innovation Project Leader (IPL).
4.3 Influencing Factors - Linkage to Roles
The next step is to connect the Innovation Groups, the Influencing Factors and
the Roles. In order to do that it is necessary to explore a little the function of the
Influencing Factors. They are not intended to be technical formulae giving a
definite and precise answer to a specific question. Rather, they are intended to be
Signposts suggesting where inspiration may be found. A potential but not
exhaustive linkage is provided in the Influencing Factor Matrix in Figure 4 below.
Influencing Factor Matrix

Innovation Group | Influencing Factor
Application | Family Activity
Application | Traditional Expertise
Application | Alternative Perspective
Application | Limited Space
Elemental | Natural Observation
Elemental | Changed Regulations
Elemental | Conference Proceedings
Elemental | Adjacent Functionality
Elemental | Individual Requests
Elemental | Rejected Ideas
Elemental | Technical Reflection
Combination | Group Input
Combination | Personal Frustration
Combination | New Technology

In the original figure, additional columns mark with an ‘x’ which of the Participant Roles (P, Q, R, S, T, U, V) each Influencing Factor applies to.

Fig. 4. Influencing Factor Matrix
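One way to read the matrix is as a simple lookup from Innovation Group to candidate Signposts. The sketch below encodes only the group-to-factor pairings of Figure 4 (the role columns are omitted); the function name is ours, not part of the model.

```python
# Group-to-factor pairings taken from the Influencing Factor Matrix (Fig. 4).
INFLUENCING_FACTORS = {
    "Application": ["Family Activity", "Traditional Expertise",
                    "Alternative Perspective", "Limited Space"],
    "Elemental": ["Natural Observation", "Changed Regulations",
                  "Conference Proceedings", "Adjacent Functionality",
                  "Individual Requests", "Rejected Ideas",
                  "Technical Reflection"],
    "Combination": ["Group Input", "Personal Frustration",
                    "New Technology"],
}

def signposts(innovation_group):
    """Return the Influencing Factors suggested for an Innovation Group."""
    return INFLUENCING_FACTORS.get(innovation_group, [])
```

A role holder starting a task for, say, an Application-type project would review the four factors returned by `signposts("Application")` before deciding how to proceed.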
4.4 Model Operating Chart (MOC)
The Model Operating Chart (MOC) can be seen in Figure 5 below. A short
explanatory description follows:

Overall Layout – This is basically in circular format, indicating that innovation
is an iterative process with several loops.
Provokers – The role of the Provoker lies outside the circle because this is an
‘oversight’ role rather than a ‘task’ role. Real-world practice demands that there is a
managerial role providing continuity, and this is indicated by the existence of the
Innovation Project Leader, who has a double function: firstly, to be the
representative of the Provoker on a day-to-day, week-to-week basis, and secondly
to manage and co-ordinate the activities of the other role holders. This role
therefore sits at the center of the MOC.
Other Roles – These are located round the circumference of the circle, with
one-way arrows leading from one role to the next. These arrows indicate how
innovation projects should ideally (and on odd occasions do actually) flow.
Fig. 5. Model Operating Chart
Interface with IPL – It is seen that there is a two-way arrow connecting each of
the circumferential roles to the IPL. This recognizes two things. Firstly, that the
IPL has an overall co-ordination responsibility and secondly that the project
activities may not, and in fact often do not, flow in a laminar fashion. Turbulence
does occur and a Systematic Methodology needs to recognize that and have an
appropriate mechanism.
Innovation Group / Influencing Factor – This short loop provides a roadmap
suggesting that following each new task allocation, the person discharging the role
should undertake a short review of the Influencing Factors to aid the decision
about how the task should be discharged.
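The ideal circulation can also be stated compactly in code. The ring below is inferred from the role list in Section 4.2, with the Provoker outside the circle as described above; it is a sketch of the ideal one-way flow, not the authors' tool.

```python
# Circumferential roles of the MOC in ideal hand-off order; the Provoker
# (represented day to day by the IPL) sits outside the ring.
RING = ["Quantifiers", "Researchers", "Specifiers",
        "Transformers", "Utilisers", "Validators"]

def next_role(role):
    """Ideal one-way hand-off along the circumference. Every role can
    additionally exchange two-way with the IPL at the centre, which is
    how 'turbulent', non-laminar project flow is accommodated."""
    i = RING.index(role)
    return RING[(i + 1) % len(RING)]
```

The modular wrap-around from Validators back to Quantifiers reflects the several iterative loops indicated by the circular layout.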
5 Further Work
The P-To-V Model is a work in progress. The next steps will be to extend the
range of Influencing Factors and add algorithmic analysis using, for example,
weightings against the various Influencing Factors depending on which type of
Innovation Group is relevant.
Acknowledgments My thanks go to the following individuals who responded personally to
questions about ‘sparks of imagination’. Professor C Tuan – University of Nebraska-Lincoln, Ms
S Selvakumaran – Cambridge University, Ingenieur L Mancini, Magneti Marelli S.p.a.,
Professor T Waterhouse – Scotland’s Rural College.
References
1. Trías De Bes, T. and Kotler P., ‘Winning At Innovation – The A-to-F Model’, 2011,
Palgrave Macmillan, England
2. Parraquez, P., 2015, ‘A Networked Perspective on the Engineering Design Process’.
3. Ripperda, S. and Krause, D., 2015, ‘Cost Prognosis of Modular Product Structure
Concepts’, 20th International Conference on Engineering Design, ICED15, Milan
(2015)
4. Gmeiner T., 2015, ‘Automatic Fixture Design Based on Formal Knowledge
Representation, Design Synthesis and Verification’.
5. Bokranz R. and Landau K., Handbuch Industrial Engineering: Produktivitätsmanagement mit MTM, 2012, Schäffer-Poeschel Verlag für Wirtschaft, Steuern, Recht
6. Priestly A. Engineering The Domes. Magazine of The Institution Of Civil Engineers,
2016, March, P28.
7. Selvakumaran S. Making low-cost micro-hydro schemes a sustainable reality.
Proceedings of The Institution Of Civil Engineers, Volume 165, Issue CE1, Paper
1100012.
8. Smith N. Classic Project. Magazine of The Institution Of Engineering And Technology,
2016, March, P95.
9. Bardsley G. Issigonis: The Official Biography, Icon Books, ISBN 1-84046-687-1.
10. Tuan, C. (2008). "Roca Spur Bridge: The Implementation of an Innovative De-icing
Technology." J. Cold Reg. Eng., 10.1061/(ASCE)0887381X
11. Heslop M.W. and Elborn M.J. Surface Treatment Engineering, Journal of The
Institution Of Highways And Transportation, 1986, Aug/Sept, P19.
12. Cross N. Design Thinking, London/New York, Bloomsbury Academic, P37
13. Umstätter C. The evolution of virtual fences: A review, Computers and Electronics in
Agriculture, Volume 75, Issue 1, January 2011, Pages 10–22.
14. Adair J. Effective Innovation, London, Pan Macmillan, P225
15. Cross N. Engineering Design Methods, Chichester, John Wiley and Sons, P208
16. Carroll S. The Particle At The End Of The Universe, London, Oneworld Publications,
P223.
17. Mancini L. Email-28 April 2016, Magneti Marelli S.p.a.
18. Schmidt M. Innovative Design Functions Only In Teams. VDI Nachrichten, 2011,
Nr43, S26.
19. Siljanovski A. Sharing Network. Magazine of The Institution Of Civil Engineers, 2016,
March, P46.
20. Norman D.A. The Design Of Everyday Things, New York, Basic Books, P265.
21. Norman D.A. The Design Of Everyday Things, New York, Basic Books, P279-280.
22. Cross N. Engineering Design Methods – Strategies For Product Design, Chichester,
John Wiley and Sons, P29-42.
23. Bürdek B.E. History, Theory and Practice of Product Design, Basel, Birkhäuser, P113.
Influence of the evolutionary optimization
parameters on the optimal topology
Tommaso Ingrassia*, Antonio Mancuso, Giorgio Paladino
DICGIM, Università degli Studi di Palermo, viale delle Scienze, 90128 Palermo, Italy
* Corresponding author. Tel.: +3909123897263; E-mail address: tommaso.ingrassia@unipa.it
Abstract Topological optimization can be considered one of the most general
types of structural optimization. Among all known topological optimization
techniques, Evolutionary Structural Optimization represents one of the most
efficient and most easily implemented approaches. Evolutionary topological
optimization is based on a general heuristic principle which states that, by
gradually removing portions of inefficient material from an assigned domain, the
resulting structure will evolve towards an optimal configuration. Usually, the
initial continuum domain is divided into finite elements that may or may not be
removed according to the chosen efficiency criteria and other parameters, such as
the speed of the evolutionary process, the constraints on displacements and/or
stresses, the desired volume reduction, etc. All these variables may significantly
influence the final topology.
The main goal of this work is to study the influence of both the different
optimization parameters and the chosen efficiency criteria on the optimized
topology. In particular, two different evolutionary approaches, based on the von
Mises stress and the Strain Energy criteria, have been implemented and analyzed.
Both approaches have been investigated in depth by means of a systematic
simulation campaign aimed at better understanding how the final topology can be
influenced by different optimization parameters (e.g. rejection ratio, evolutionary
rate, convergence criterion, etc.). A simple case study (a clamped beam) has been
developed and simulated, and the related results have been compared. Despite the
simplicity of the object, it can be observed that the evolved topology is strictly
related to the selected parameters and criteria.
Keywords: Topology optimization, Evolutionary optimization, Rejection ratio, FEM,
Efficiency criteria.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_2
15
16
T. Ingrassia et al.
1 Introduction
Improvements in the design of structural components are often reached by an
iterative approach driven by the designer's experience. Even if this represents a key
aspect of the design process, an approach that is completely based on experience
can usually lead only to marginal improvements, and it takes quite a long time.
A complementary approach is one that makes use of structural optimization methods
[1,2] to determine the optimal characteristics, topology and/or shape of an object.
In recent years, structural optimization has developed considerably, and
interest in its practical applications is steadily growing in many
engineering fields [3-8]. Of course, improvements in information
technology tools have strongly contributed to the spread of numerical
analysis methods, like FEM or BEM, which can be effectively used during the
optimization process of a structure. In the past, many research activities related to
optimization methods focused primarily on the mathematical aspects of the
problem, trying to adapt the available analytical and numerical methods to solve
particular structural problems. These kinds of problems, in fact, are quite difficult
to solve, involving non-convex functions with several variables (continuous
and discrete). Practical application of these optimization methods usually forces
the designer to simplify the problem, often dramatically, with a consequent loss of
reliability.
Therefore, in the engineering field, the need for new optimization procedures
(alternative to classic mathematical approaches) has arisen over the years. These
alternative approaches should maintain some generality and accuracy in
the description of real, complex problems while leading to solutions reasonably
close to those considered rigorously optimal. Consequently, since the early
1990s, several new optimization methodologies based on numerical approaches
[3, 8, 9] have been proposed. In this scenario, Evolutionary Structural
Optimization (ESO) has become one of the most interesting and best-known
techniques [6, 10, 11]. Following the ESO approach, the optimal solution is sought
on the basis of heuristic rules. Unlike traditional methods, the evolutionary strategy
has shown a high degree of efficiency for different typologies of structural problems [11].
The solutions found using the ESO approach, however, might be influenced by the
chosen optimization parameters [10, 11]. Although several papers concerning the
ESO approach can be found in the literature, to the authors' knowledge, very little
information is available regarding the effect of these parameters on the optimal
solution.
In this work, it has been investigated how the main control parameters used in an
evolutionary optimization process can affect the result. One of the main
contributions of the proposed approach is the comparison between two of the
most commonly used efficiency criteria. The goal is to provide useful guidelines
that can lead designers to obtain the best result for each particular optimization
problem.
Influence of the evolutionary optimization ...
2 Evolutionary Structural Optimization
The ESO method is one of the most efficient and most easily implemented
approaches. The working principle of the evolutionary technique is to
gradually eliminate inefficient material from an assigned domain. In this
way, the topology of the structure evolves toward an optimal configuration. The
initial domain is typically divided into Finite Elements (FE), and the removal of
material is based on particular efficiency criteria. An evolutionary optimization
procedure is generally structured as follows [12-14]. At first, the whole domain is
meshed with finite elements; then the boundary conditions (loads and constraints)
are imposed and a numerical FEM analysis is performed. As soon as the solution
is found, the numerical results are sorted on the basis of the chosen
efficiency criterion (e.g. von Mises stress, strain energy, displacement, etc.). The
value of the chosen parameter for each finite element is then compared with a
reference value; if the FE value is lower than the reference one, the finite element
is removed. The reference value is usually a percentage of the maximum
parameter value found in the structure. As an example, if the von Mises stress
efficiency criterion is used, the following inequality is checked for each finite
element:
$$\sigma_j^{VM} \le RR_i \cdot \sigma_{max}^{VM} \qquad (1)$$

where:
- $\sigma_j^{VM}$ is the von Mises stress of the j-th element;
- $RR_i$, with $RR_0 < RR_i < RR_f$, is the Rejection Ratio during the i-th iteration;
- $RR_0$ and $RR_f$ are, respectively, the initial and final Rejection Ratios;
- $\sigma_{max}^{VM}$ is the maximum value of the von Mises stress calculated in the structure at the i-th iteration.
As soon as all the elements verifying inequality (1) at the i-th iteration have been
removed, a steady state is reached. Consequently, the rejection ratio must be
increased to further improve the structure. This can be done according to the
following formulation [12, 14]:

$$RR_{i+1} = RR_i + ER$$
where ER represents the Evolutionary Rate.
Then a new FEM analysis is performed, the von Mises stress values are updated,
and all the finite elements verifying the efficiency criterion (1) are removed. The
procedure is repeated recursively and stops as soon as the convergence criterion
[12, 15] is verified (e.g. when the final value of the rejection ratio, RRf, is reached
or the Maximum Reduction of Volume, MRV, is obtained). The initial rejection
ratio is usually defined in the range 0 < RR0 < 1% but, in some cases, values higher
than 1% can be considered to avoid having no elements to remove (because
inequality (1) would never be verified). The end values (initial and final) of the
rejection ratio are usually defined empirically, based on the experience of the
designer. A suitable choice of these values [11, 15] can ensure a progressive
removal process of the elements.
3 Implementation of the procedure
In this study, two different efficiency criteria, based respectively on the von Mises
(VM) stress and on the Strain Energy (SE) [9-11], were investigated. In the first
case, as described in the previous section, the removal of elements is based on the
von Mises stress value of each element, compared with a percentage of the
maximum stress value, $\sigma_{max}^{VM}$, calculated in the domain. Through this approach, a
structure with a homogeneous equivalent stress level can be obtained (a
uniform-strength structure).
The approach based on the second efficiency criterion, instead, removes the
elements having the lowest values of strain energy.
Both optimization procedures have been implemented using the Ansys Parametric
Design Language (APDL), with Ansys as the finite element code.
In order to ensure a more gradual evolutionary process, a new control parameter,
called RER (Removed Element Rate), has been introduced. The RER parameter
limits the number of elements removed at each iteration. In particular,
if, before reaching the steady state of the i-th iteration, the number of removed
elements exceeds the value RER, the iteration is interrupted, the rejection ratio is
updated and a new iteration starts. If the rejection ratio value is erroneously too
large, the new parameter avoids removing too much material during a
single iteration and, consequently, ensures more accurate and reliable results.
Independently of the efficiency criterion, the optimization procedure is
structured [16-17] as shown in Figure 1.
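The removal loop of Sections 2 and 3, including the RER cap, can be sketched in a few lines. The following is a hypothetical Python illustration, not the authors' APDL implementation: `eso_optimize` and the toy exponential stress field are invented stand-ins for the real FEM solve.

```python
import numpy as np

def eso_optimize(compute_stress, n_elements, rr0=0.01, rr_f=0.25,
                 er=0.01, mrv=0.60, rer=None):
    """Minimal ESO loop: remove elements whose stress is below RR * max."""
    active = np.ones(n_elements, dtype=bool)
    rr = rr0
    # Stop when RRf is exceeded or the Maximum Reduction of Volume is reached.
    while rr <= rr_f and active.sum() > (1.0 - mrv) * n_elements:
        stress = compute_stress(active)             # stands in for a FEM run
        ref = rr * stress[active].max()             # reference value, Eq. (1)
        idx = np.flatnonzero(active & (stress <= ref))
        if rer is not None and idx.size > rer:      # RER: cap removals/iteration
            idx = idx[np.argsort(stress[idx])[:rer]]
        if idx.size == 0:                           # steady state reached:
            rr += er                                # RR_{i+1} = RR_i + ER
        else:
            active[idx] = False
    return active

# Toy 1-D "stress field": stress decays away from the loaded end.
x = np.linspace(0.0, 1.0, 200)
toy_stress = lambda act: np.where(act, np.exp(-3.0 * x), 0.0)
kept = eso_optimize(toy_stress, 200, rer=15)
print(int(kept.sum()), "of 200 elements kept")
```

With the RER cap active, lightly stressed elements are removed a few at a time instead of in one large batch, which mirrors the more gradual evolution the parameter is meant to enforce.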
Fig. 1 Workflow of the implemented ESO procedure
4 Case Study
In order to better understand the influence of the described parameters on the final
topology, a clamped steel beam has been used as a case study. A vertical load of
100 N has been applied to the free end. The main dimensions and the FEM model
(meshed with 8-node brick elements) of the beam are shown in Figure 2.
Fig. 2 – Dimensions (left) and FEM model (right) of the case study.
Table 1 – Ranges of the main optimization parameters.

| Parameter | von Mises Stress | Strain Energy |
| Initial Rejection Ratio – RR0 | 1% ÷ 6% | 1% ÷ 6% |
| Final Rejection Ratio – RRf | 5% ÷ 30% | 5% ÷ 30% |
| Evolutionary Rate – ER | 0.5% ÷ 2% | 0.5% ÷ 2% |
| Maximum Reduction of Volume – MRV | 60% | 60% |
| Removed Elements Rate – RER | 10 ÷ 20 | 10 ÷ 20 |
| Number of Finite Elements | 1500 - 3920 | 1500 - 3920 |
Table 1 shows the ranges of values of the main parameters for a given MRV of 60%.
According to Table 1, a thorough investigation has been carried out, aimed at
finding the influence of the described parameters on the final topology. In the
following, the most interesting results are highlighted and discussed.
5 Results
Figure 3 shows the results obtained using the von Mises efficiency criterion with
different values of ER (1% and 2%) and without any check on the number of
elements removed at each iteration (no RER control imposed).
Fig. 3 – VM criterion: influence of the ER parameter on the optimized solution.
By introducing the RER parameter in the VM efficiency criterion, for a given
constant value of ER (equal to 1%), different results have been obtained. In
particular, Figure 4 shows how the optimal topology is remarkably affected when
the RER parameter changes from 10 to 20.
Fig. 4 – VM criterion: influence of the RER parameter on the optimized solution
Figure 5, instead, shows that, using the VM efficiency criterion, the final topology
changes only slightly when the mesh size is varied.
Fig. 5 – VM criterion: influence of the mesh size on the optimized solution
Finally, the plot in Figure 6 shows that the final rejection ratio (RRf) does not
considerably affect the maximum von Mises stress value while, on the contrary, it
has a significant influence on the minimum value in the optimized structure.
Fig. 6 – VM criterion: influence of the RRf on the von Mises stresses
The results of the optimization process based on the strain energy criterion are
influenced in a similar way as those of the von Mises stress criterion. Figure 7
shows how the RER parameter affects the optimal topology obtained using the SE
efficiency criterion. These results have been obtained considering constant values
of RR0 (1%) and ER (0.5%).
Fig. 7 – SE criterion: influence of the RER parameter on the optimized solution
Moreover, a comparison of the optimized structures obtained with the two criteria
is shown in Figure 8. One can note many details that differentiate the optimal
topologies.
Fig. 8 – Optimized structures using the von Mises (on the left) and the Strain Energy (on the
right) criteria
Finally, as can be noticed from Figure 9, the strain energy criterion allows higher
volume reductions to be obtained than the von Mises stress criterion for a given
value of the final rejection ratio.
Fig. 9 – (final volume/initial volume) vs final rejection ratio.
6 Conclusions
Topology optimization methods make it possible to obtain high-performance
structures with significant reductions in overall dimensions and mass. In this
scenario, the ESO method represents one of the most effective approaches to
solving large-scale topological optimization problems. The designer, however, is
not always able to choose a priori the most suitable set of parameters for the ESO
optimization process in order to obtain the best result in the shortest time. In this
work, two different efficiency criteria commonly used in evolutionary optimization
processes have been investigated. In particular, the von Mises stress and the strain
energy criteria have been implemented, and a systematic numerical campaign has
been performed to better understand how the optimization parameters can affect
the ESO-based solutions. In this context, a new parameter, called RER (Removed
Elements Rate), has been introduced by the authors for the first time. The results
obtained have shown the remarkable influence of the efficiency criteria on the
optimal topology in terms of material distribution and volume reduction.
Moreover, the new parameter RER allows a more accurate control of the element
removal process and a better solution of the optimization problem. The study can
provide useful guidelines for better understanding and foreseeing the results
of an ESO-based optimization process, thus contributing to a wider spread and
use of this methodology in the design of high-performance structures.
References
1. Vanderplaats, Garret N., Numerical optimization techniques for
engineering design: with applications. Vol. 1. New York: McGraw-Hill,
1984
2. Ingrassia, T., Nigrelli, V., Design optimization and analysis of a new rear
underrun protective device for truck, (2010) Proceedings of the 8th
International Symposium on Tools and Methods of Competitive
Engineering, TMCE 2010, 2, pp. 713-725
3. Tromme, E., Tortorelli, D., Brüls, O., Duysinx, P., Structural
optimization of multibody system components described using level set
techniques, (2015) Structural and Multidisciplinary Optimization, 52 (5),
pp. 959-971
4. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., A multi-technique
simultaneous approach for the design of a sailing yacht, (2015)
International Journal on Interactive Design and Manufacturing, DOI:
10.1007/s12008-015-0267-2
5. Cappello, F., Ingrassia, T., Mancuso, A., Nigrelli, V., Methodical
redesign of a semitrailer, (2005) WIT Transactions on the Built
Environment, 80, pp. 359-369
6. Nalbone, L., et al., Optimal positioning of the humeral component in the
reverse shoulder prosthesis, 2014, Musculoskeletal Surgery, 98 (2), pp.
135-142.
7. Cerniglia, D., Montinaro, N., Nigrelli, V., Detection of disbonds in multilayer structures by laser-based ultrasonic technique, 2008, Journal of
Adhesion, 84 (10), pp. 811-829
8. Savas, S., Evolutionary Topological Design of Two Dimensional
Composite Structures American International Journal of Contemporary
Research, 2012, Vol.2 No.3, pp-76-88
9. Ingrassia, T., Nigrelli, V., Buttitta, R., A comparison of simplex and
simulated annealing for optimization of a new rear underrun protective
device, (2013), Engineering with Computers, 29 (3), pp. 345-358
10. Nha Chu, D., Xie, Y.M., Hira, A., Steven, G.P., On various aspects of
evolutionary structural optimization for problems with stiffness
constraints, Finite Elements in Analysis and Design, vol. 24, 1997, pp. 197-212
11. Deaton, J. D., Grandhi, R. V., A survey of structural and
multidisciplinary continuum topology optimization: post 2000, 2014,
Struct Multidisc Optim 49:1-38
12. Yildiz, A. R., Comparison of evolutionary-based optimization algorithms
for structural design optimization, Engineering Applications of Artificial
Intelligence, 2013, 26.1, pp. 327-333
13. Li, Q., Steven, G.P., Xie, Y.M., Evolutionary structural optimization for
connection topology design of multi-component systems, Engineering
Computations, 2001, Vol. 18 No. 3/4, pp. 460-479
14. Garcia, M.J., Ruiz, O.E., Steven, G.P., Engineering design using
evolutionary structural optimisation based on iso-stress-driven smooth
geometry removal, NAFEMS World Congress, NAFEMS, Milan, Italy,
April 2001; pp. 349-360
15. Xie, Y.M., Steven, G.P., Optimal design of multiple load case structures
using an evolutionary procedure, Engineering Computations, 1994, Vol.
11, pp. 295-302
16. Leu, L.J., Lee, C.H., Optimal design system using finite element package
as the analysis engine, Advances in Structural Engineering, 2007, 10.6,
pp. 713-725
17. Zhang, D., Liang, S., Yang, Y., A constraint and algorithm for stress-based
evolutionary structural optimization of the tie-beam problem, 2015,
Engineering Computations, 32:6, pp. 1753-1778
Design of structural parts for a racing solar car
Esteban BETANCUR1*, Ricardo MEJÍA-GUTIÉRREZ1, Gilberto OSORIO-GÓMEZ1 and Alejandro ARBELAEZ1
1 Universidad EAFIT, Medellín, Colombia
*Corresponding author. Tel.: +57-3136726555; E-mail address: ebetanc2@eafit.edu.co
Abstract Racing solar cars are characterized by the constant pursuit of energy
efficiency. The tight balance between energy input and consumption is the main
reason to seek optimization in different areas. The vehicle weight is directly related
to the energy consumption via the rolling resistance of the tires, and this relation
is quantified here. Structural optimization techniques are studied and a series of
rules is obtained to iteratively improve the shape of structural parts, reducing their
weight. The implementation is carried out in a practical case and satisfactory
results are achieved.
Keywords: Solar car, structural optimization, weight reduction, energy efficiency, innovative design.
1 Introduction
A solar car is an electric vehicle that has solar panels as its energy input. For three
decades, solar car competitions have been held in order to compare different
solar car concepts. The most important competition is the World Solar
Challenge (WSC), which is carried out every two years in Australia. The challenge is
to travel with the solar car from Darwin to Adelaide (3022 km) using only solar
energy. At every event, new strict rules are imposed to force the optimization of
the cars. At the last event, the principal regulations for the challenger class were:
6 m2 of silicon solar panels (approx. 1000 W), 20 kg of lithium batteries (approx.
5 kWh), one driver and 4 wheels. With these constraints, every team must design
and build the most efficient car possible to achieve the challenge. Among the teams
that complete the challenge, a race-time classification is made; therefore the overall
objective is to finish the race in the least possible time.
Solar car development involves innovation in multiple engineering areas such
as aerodynamics, control, mechanics, electronics, solar panels, and others. These
multidisciplinary projects have been studied by different research groups and
universities all over the world, and they are nowadays becoming increasingly
popular. Some of the main studies on this topic were carried out in [1] and [2].
The need for weight reduction of components first arose in the aerospace industry;
today it is also present in the automotive, mobility and mechanical industries,
driven by the need for cost and energy efficiency. In solar car applications, the
weight is proportional to the energy consumption and is a major topic to consider
from the start of the design.
2 Solar car efficiency
Solar car challenges have the objective of testing the energy efficiency
of the vehicles. In general, they consist in covering a distance in the shortest
possible time using only solar energy. To optimize the vehicle's energy efficiency,
one has to maximize the energy input and storage and minimize the energy output
at high speeds. The energy input depends on the solar panel properties, the storage
is defined by the battery cell properties and array topology, and the energy output
is defined by the traction system and the loss forces. Assuming that the vehicle is
on a flat road and travelling at constant speed, the loss forces are mainly the
aerodynamic drag and the rolling resistance force, defined by equations (1) and (2)
respectively.
$$F_{aero} = \frac{1}{2}\, \rho\, C_d\, A\, v^2 \qquad (1)$$

$$F_{roll} = C_{rr}\, W \qquad (2)$$
Where:
- Faero: aerodynamic drag force [N]
- ρ: air density [kg/m3]
- Cd: drag coefficient [Adim]
- A: vehicle frontal area [m2]
- v: vehicle velocity [m/s]
- Froll: rolling resistance force [N]
- Crr: rolling resistance coefficient [Adim]
- W: vehicle weight [N]
Then, the total energy consumption can be obtained by calculating the work exerted
by the forces over the complete race distance (see Eq. (3)). The WSC has a distance
of 3022 km and the tires used have a rolling resistance coefficient of approximately
Crr = 0.005. Therefore, for this specific case, every kilogram of mass on the vehicle
means an energy consumption of 41.2 Wh over the race (0.8% of the total battery
capacity). There is a direct dependency between the energy efficiency and the
car weight, so it is mandatory to reduce this weight as much as possible.
$$E_{race} = \int_{x_0}^{x_f} \left(F_{aero} + F_{roll}\right)\, dx \qquad (3)$$
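The 41.2 Wh figure can be checked directly from Eqs. (2) and (3). A quick back-of-the-envelope sketch using only the numbers quoted above (g = 9.81 m/s² assumed):

```python
g = 9.81            # gravitational acceleration [m/s^2] (assumed)
crr = 0.005         # rolling resistance coefficient (from the text)
distance = 3022e3   # WSC race distance [m]
mass = 1.0          # one kilogram of added vehicle mass [kg]

# Work done against Froll over the race distance, per Eqs. (2)-(3)
energy_j = crr * mass * g * distance
energy_wh = energy_j / 3600.0
print(round(energy_wh, 1))             # → 41.2 (Wh per kilogram)

battery_wh = 5000.0                    # approx. 5 kWh battery (from the text)
print(round(100.0 * energy_wh / battery_wh, 1))  # → 0.8 (% of battery capacity)
```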
3 Mechanical components design
The solar car suspension system has the main function of supporting the car at all
times and under any load condition that the race can cause. Bump damping and
low weight are secondary properties of this system. For the design and manufacture
of the suspension components, the objective is to perform the main function with
the lowest possible weight, including the bump damping needed for vehicle
stability on the road.
The design constraints involve loading conditions, geometric limits on the vehicle,
manufacturing capabilities, available materials and components, and wheel
diameter, among others. Therefore, the suspension design process can be defined
as an optimization problem in which the weight should be minimized by changing
the shape of the components, subject to all the listed design constraints. To solve
this problem, structural optimization techniques are studied.
3.1 Structural Optimization
Structural optimization using computational methods can be divided into two
main categories: topology optimization and shape optimization. Although both can
have the same objective functions, their mathematical definitions differ.
For topology optimization, the most common method is to define a design space
and a density function (ρ) on it. Then, one can have regions with density values of
almost zero (ρ = 0, no material) or one (ρ = 1, structural material) (see [3]). This way,
any shape that fits the design space can be obtained. Since the late eighties, these
techniques have been studied and new variants have been proposed; the most
popular methods are SIMP [4], Evolutionary Structural Optimization (ESO) [5]
and Bidirectional ESO (BESO) [6].
Shape optimization consists, in general terms, in modifying only the boundary of
the body in order to optimize an objective function. In contrast to topology
optimization, these techniques do not naturally produce topological changes
in the domain (i.e. new holes) and are useful for small shape variations. Shape
sensitivity and shape derivatives are the main topics of this discipline (see [7]).
Level set boundary representation (see [8] and [9]) proposes a macro-geometrical
implicit boundary representation by iso-contours of a level set function to control
the shape and topology variations during the optimization process.
Different combinations of the two optimization families (topology and shape
optimization) have been proposed and studied. In 2000, Sethian and Wiegmann [9]
developed an optimization procedure using level set boundary representation,
defined by the following steps:
1. Initialize; find the stresses in the initial design.
2. While the termination criteria are not satisfied, do:
3. Cut new holes according to the stress distribution and the removal rate.
4. Move the boundaries via the level set method, according to the stress
distribution and the removal rate.
5. Find displacements, stresses, etc.
6. If the constraints are violated, reduce the removal rate and revert to the
previous iteration.
7. Update the removal rate.
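These seven steps can be arranged as a control-flow skeleton. Everything below is a hypothetical toy: `solve`, `cut_holes`, `move_boundary` and `constraints_ok` are placeholder callables standing in for a real FEM solver and the level set machinery of [9].

```python
def optimize(design, solve, cut_holes, move_boundary, constraints_ok,
             removal_rate=0.05, max_iters=50):
    state = solve(design)                                       # step 1
    history = [(design, state)]
    for _ in range(max_iters):                                  # step 2
        candidate = cut_holes(design, state, removal_rate)      # step 3
        candidate = move_boundary(candidate, state, removal_rate)  # step 4
        new_state = solve(candidate)                            # step 5
        if not constraints_ok(new_state):                       # step 6: revert
            removal_rate *= 0.5
            design, state = history[-1]
            continue
        design, state = candidate, new_state
        history.append((design, state))
        removal_rate = min(removal_rate * 1.1, 0.25)            # step 7
    return design

# Toy instantiation: a 1-D strip whose "stress" decays along its length;
# the constraint forbids removing more than half of the material.
n = 100
solve = lambda d: [(1.0 - i / n) if d[i] else 0.0 for i in range(n)]

def cut_holes(d, s, rate):
    active = [i for i in range(n) if d[i]]
    k = max(1, int(rate * len(active)))
    victims = set(sorted(active, key=lambda i: s[i])[:k])  # lowest stress first
    return [0 if i in victims else d[i] for i in range(n)]

move_boundary = lambda d, s, rate: d      # boundary motion omitted in this toy
constraints_ok = lambda s: sum(1 for v in s if v > 0) >= n // 2
final = optimize([1] * n, solve, cut_holes, move_boundary, constraints_ok)
print(sum(final), "of", n, "elements kept")
```

The revert-and-halve branch reproduces the key safeguard of step 6: whenever a removal violates the constraints, the design rolls back and the removal rate slows down, so the procedure converges onto the constraint boundary instead of overshooting it.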
With this process, topological changes are made through the creation of new holes,
and shape variations through the boundary movement.
These optimization methods, and the available commercial programs for
structural optimization, do not take into account manufacturing methods and
capabilities. The obtained shapes are usually only reachable using 3D printers, and
standard CNC manufacturing processes turn out to be impractical or non-viable.
4 Design process
The vehicle suspension design begins with the definition of basic shapes that fulfill
the geometric constraints, the designation of the maximum loading condition
that can occur during the race, and the safety factor for the design. The loads on each
component are then defined, and the stress (σ) and strain conditions are found using
finite element simulation.
An iterative shape improvement process is carried out for each component. The
process steps are listed below; the redesign rules are derived from the structural
optimization literature review.
1. Redesign the element following these rules:
   a. Remove low-stressed material by making holes, if possible
   b. Remove low-stressed material by moving the shape boundary inwards
   c. Add material (move the boundary outwards) where the stress exceeds the
   maximum
   d. Design the new shape according to the manufacturing capabilities
2. Find the weight, stress and strain distribution of the new design
3. Stop if (weight < desired weight) and (σmax < material yield stress), or if the
maximum number of iterations is reached; else go to step 1.
With this design process, the optimal shape is not reached, but a significant weight
reduction is achieved. The resulting structural component has an improved
material distribution with respect to the initial one, and the stress distribution is
homogenized.
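The iterative process above can likewise be written as a small driver loop. The helper names (`improve`, `redesign`, `evaluate`) and all numeric values below are hypothetical toys standing in for the CAD modification and FEM evaluation steps:

```python
def improve(part, redesign, evaluate, desired_weight, yield_stress,
            max_iterations=10):
    """Steps 1-3 above: redesign, re-evaluate, check the stop condition."""
    weight, max_stress = evaluate(part)
    it = 0
    for it in range(1, max_iterations + 1):
        part = redesign(part, max_stress, yield_stress)   # step 1 (rules a-d)
        weight, max_stress = evaluate(part)               # step 2 (FEM stand-in)
        if weight < desired_weight and max_stress < yield_stress:
            break                                         # step 3
    return part, weight, max_stress, it

# Toy model: each redesign sheds 5% of the weight and raises peak stress by 4%.
evaluate = lambda p: (p["w"], p["s"])
redesign = lambda p, s, limit: {"w": p["w"] * 0.95, "s": p["s"] * 1.04}
part0 = {"w": 0.565, "s": 200e6}          # invented starting point [kg, Pa]
part, w, s, iters = improve(part0, redesign, evaluate,
                            desired_weight=0.452, yield_stress=503e6)
print(iters, round(w, 3))                 # → 5 0.437
```

As in the paper, the loop terminates either when both the weight target and the stress limit are satisfied or when the iteration budget runs out.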
6 Results
The proposed method was used to design the structural components of the
EAFIT-EPM solar car participating in the WSC 2015. A single-sided swing-arm
suspension (see Fig. 1) was defined for the 4 wheels; a preliminary design was
created and the components were improved one by one. The CAD software CREO
Parametric 3.0 was used for the design of the components, and the CAE software
ANSYS Workbench 15.0 for the structural FEM simulation. Since designer and
manufacturer judgment was needed for the modifications, the improvement
iterations were not performed automatically.
Fig. 1. Single-sided swing-arm suspension diagram.
The knuckle is a 3-link component that is articulated to the suspension arm,
supports the wheel axis, and is attached to the suspension shock (see Fig. 1). This
component was improved over 10 iterations using the proposed design process; it
was to be manufactured on a CNC milling machine from 7075 aluminum. For the
design of the components, a maximum state of loads was defined as the critical
instant in which the vehicle is turning, braking and bumping with 3.5 G, 1.2 G and
3.5 G accelerations respectively (G: acceleration of gravity).
Figure 2 illustrates the loads for the complete suspension system. Using rigid-body
approximations, the loads were calculated on all the components and the boundary
conditions for the knuckle simulation were defined (see Fig. 3).
Fig. 2. System state of loads: Front view (left) and side view (right) based on the car orientation.
Fig. 3. Knuckle state of loads.
An initial CAD design was created with parametric dimensions based on the design
space. The boundary conditions for the structural simulation were defined as: a
displacement constraint on the arm link, force and moment reactions on the wheel
axis link, and a force reaction on the shock link (see Fig. 3). Based on the stress
distribution (see Fig. 4), material is removed or added as defined in Section 4. The
maximum stress occurs at the suspension arm support for all the proposed designs
(see Fig. 4), so material was added in this region. The weight and maximum stress
evolution of the improvement process are presented in Figure 5; a saving of 113 g
with respect to the initial design was obtained while the structural safety of the
component is ensured. The initial, intermediate and final shapes are shown in
Fig. 6, and the manufactured component in Fig. 7. For the fatigue resistance of the
component, the cyclic loads were found (significantly lower than the design loads
used) and the stress distribution was calculated, finding the maximum stress below
the endurance limit of the material.
Fig. 4. Final knuckle displacement (left) and von Mises stress distribution (right).
Fig. 5. Maximum von Mises stress and weight over the design iterations.
Fig. 6. Knuckle evolution: initial design (left), intermediate design (center), final design (right).
Fig. 7. Knuckle manufactured on a CNC milling machine.
7 Conclusion
Every gram saved in a solar car means a direct reduction in energy consumption;
therefore, a main objective in these projects is the minimization of the total weight.
The proposed manual shape improvement method lists a series of rules to modify
the shape design in order to reduce the weight of structural components. Although
the resulting shape can be far from the global optimum, the improvement is
noticeable.
Classical numerical shape and topology optimization methods do not take into
account the manufacturability of the components, and in most cases the resulting
shapes can only be achieved with 3D printers. With this method, designers have
parameters to vary shapes and topology manually so as to guarantee the
manufacturing constraints.
The application of the method to the design of the suspension components has a
direct influence on the energy efficiency of the vehicle. The reduction of 113 g
(20%) on each of the 4 knuckles means a total weight reduction of 452 g and
an energy saving of 18.6 Wh over the race. This procedure is recommended for the
design of all the suspension components.
References
[1] D. Roche, Speed of Light: The 1996 World Solar Challenge, University of New South
Wales Press, 1997.
[2] G. Tamai, The Leading Edge: Aerodynamic Design of Ultra-streamlined Land Vehicles,
Robert Bentley, 1999.
[3] M. P. Bendsoe and O. Sigmund, Topology Optimization: Theory, Methods, and
Applications, Springer Science & Business Media, 2003.
[4] G. I. N. Rozvany, M. Zhou and T. Birker, "Generalized shape optimization without
homogenization," Structural Optimization, vol. 4, no. 3, pp. 250-252, 1992.
[5] Y. Xie and G. P. Steven, "A simple evolutionary procedure for structural optimization,"
Computers & Structures, vol. 49, no. 5, pp. 885-896, 1993.
[6] X. Yang, Y. M. Xie, G. Steven and O. Querin, "Bidirectional evolutionary method for
stiffness optimization," AIAA Journal, vol. 37, no. 11, pp. 1483-1488, 1999.
[7] J. Sokolowski and J.-P. Zolesio, Introduction to Shape Optimization, Springer, 1992.
[8] G. Allaire, F. Jouve and A.-M. Toader, "Structural optimization using sensitivity
analysis and a level-set method," Journal of Computational Physics, vol. 194, no. 1, pp.
363-393, 2004.
[9] J. A. Sethian and A. Wiegmann, "Structural boundary design via level set and immersed
interface methods," Journal of Computational Physics, vol. 163, no. 2, pp. 489-528,
2000.
Section 1.2
Integrated Product and Process Design
Some Hints for the Correct Use of the Taguchi
Method in Product Design
Sergio RIZZUTI* and Luigi DE NAPOLI
Università della Calabria, Dipartimento di Ingegneria Meccanica, Energetica e Gestionale –
DIMEG, Ponte Pietro Bucci 46/C, 87030 Rende (CS), Italy
* Corresponding author. Tel.: +39-0984-494601; fax: +39-0984-494673. E-mail address:
sergio.rizzuti@unical.it
Abstract. The paper discusses the problem of the correct identification of the
Objective Function and the associated SNR function that designers must choose
when employing the Taguchi method in product design, considering this step as
the basic element to quantify the uncertainty of the device performance prediction.
During product design, when many design aspects must still be understood by the
design team, it is important to identify the most suitable "loss function" that can
be associated with the characteristic function. The second step considers the
variability of the characteristic function. The Taguchi method considers many
Signal to Noise Ratio functions, whereas in this paper the use of a unique function
is suggested for all kinds of loss function. The discussion is argued in the context of
so-called parameter design, with the perspective of identifying the best ranges of
variation of the parameters that designers have identified as influential on the
characteristic function, and also of adjusting those ranges in order to obtain a
twofold result: reducing the bias between the mean value of the characteristic
function response and the target value, and obtaining less variability of the
characteristic function. The discussion of a case study will point out the approach
and the use of a unique Noise Reduction function.
Keywords: Taguchi method; Loss Function; Signal to Noise ratio; Noise Reduction.
1 Introduction
One of the most important phases of product design is the moment when the
design team tries to evaluate the efficiency of the design solution. Considering
that the solution under development was selected among several alternatives, and
that customers have also expressed their desiderata, the designer must verify
whether the solution is sufficiently robust to behave as intended in the initial
design phase, when a set of requirements was stated.
The methodology of Robust Design by means of the Taguchi method is
particularly suited to being applied during design. Even if the methodology has a
wide range of uses, its employment during product design allows designers to gain
better insight into the device behavior in the scenario in which it will be used.
Basically, designers have to identify the characteristics that must be checked. For
each characteristic, an Objective Function must be identified so that it can be
verified by the Taguchi method. This is the first problem the team must solve: it is
not simple to translate the behavior of the device into an Objective Function that
represents it in mathematical terms. The second step consists in setting up an
experimental plan by which to investigate the influence of a set of design
parameters on the Objective Function. The discussion of the results obtained at
this point leads designers to perform the so-called "parameter design". The
parameter ranges must be chosen so as to adhere to the essence of the Objective
Function, and this must be done in strict relation to the behavior of the "loss
function" introduced by Taguchi. At the same time, it is important to reduce the
influence of noise on the device behavior: the variability. For this purpose, Taguchi
introduced a set of functions, called SNR (Signal to Noise Ratio), which allow the
quantification of the dispersion of the device behavior when it is imagined
operating in a set of altered scenarios simulating real operational conditions. This
latter step must be handled in a suitable manner, because it can be responsible for
misunderstanding. The theme of SNR is one of the most critical points in the
never-ending debate between the statistics and engineering communities.
The paper discusses the problem of the correct identification of the Objective
Function and the associated SNR function, considering this step as the basic element to quantify the uncertainty of the device performance prediction. During
product design, when many design aspects must still be understood by the design
team, it is important to evaluate the effects the design parameters have on the
mean value of the Objective Function and on the corresponding trend of the SNR,
and to identify which of them can be adjusted in order: to reduce the bias between
the behavior of the Objective Function and the target value assigned in the list of
requirements; to reduce the variance of the Objective Function dispersion.
The uncertainty quantification during product design can be pursued by
CAE simulation. The authors believe that the right connection between the
Taguchi method and computer simulation, also in Multiphysics, can provide the
right suggestions towards the identification of the best solution in product design.
The right choice of the Objective Function and of the law by which to measure the
dispersion of the results can affect the reasoning on the design solution. For this
reason, the paper will point out the nature of the loss function of the “target the
better” approach, and will suggest a Noise Reduction function that allows variability to be quantified.
Some Hints for the Correct Use of the Taguchi ...
2 Relevant problems
During the 1980s, when the Taguchi Method was successfully employed in western countries [1], a serious debate emerged involving statisticians and industrial engineers, regarding some relevant problems with the method. The most famous instance was the round table published by Technometrics [2] in 1992. Statisticians
revealed a set of flaws in the process, even though they recognized the validity of
the Taguchi method in opening the world of the statistical approach to otherwise skeptical people. Industrial engineers were extremely interested in how it might be addressed to the investigation of processes or products, by means of the so-called
Taguchi philosophy.
In the following, only two questions will be considered that are particularly relevant to users of the method, because in many statistical
software programs the Taguchi method is presented in the “classical” terms with which it became popular in the nineties. In fact, the principal flaws, which are discussed in the supplementary material of the textbook by Montgomery [3], have still not been resolved,
in the sense that several of the proposed solutions are not, or cannot be, generally applicable.
The essential elements of the methodology derived from Design of Experiments
(DOE) will not be discussed here; for these parts the reader is referred to the book of Phadke [4].
Briefly, these are: the identification of the objective function associated with the
main characteristic; the construction of the plan of experiments, or the adoption of
an orthogonal array; the identification of design parameters and their ranges of
variation; the identification of noise effects and their ranges of variation; and the discussion of the results, comparing simultaneously the Main Effect and the Signal-to-Noise Ratio [5-6].
2.1 Problem 1: Definition of the characteristic function
At the basis of the Taguchi philosophy there is the concept of the so-called “loss
function L”: a cost that society must sustain because of the inconsistencies of
product/process characteristics. The main definition of Quality is then translated
from the verification of the standard condition declared for the product to the reduction to a minimum of the bias between the characteristic function (mean value) and its target value, in conjunction with its low variation. Taguchi introduced
the Mean Squared Deviation (MSD), which measures variation around the ideal target μ:

MSD = σ² + (ȳ − μ)²   (1)

where σ represents the standard deviation of the characteristic function and ȳ its
mean value.
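As a minimal illustration of eq. (1), the MSD can be computed from a sample of responses; the sample values below are invented for this sketch.

```python
import statistics

def msd(samples, target):
    """Mean Squared Deviation around the target (eq. 1):
    population variance plus the squared bias of the mean."""
    y_bar = statistics.fmean(samples)
    sigma2 = statistics.pvariance(samples)
    return sigma2 + (y_bar - target) ** 2

# hypothetical responses of a characteristic function with target 2.0
y = [1.8, 2.1, 2.3, 1.9]
print(round(msd(y, 2.0), 4))  # → 0.0375
```

Note that the decomposition of eq. (1) equals the plain mean of the squared deviations (yᵢ − μ)², which is a quick consistency check when implementing it.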
In order to reduce the bias it is important to choose the best ranges of variation
for each design parameter that influences the characteristic function. Operating in
conjunction with the loss function, the designer has to vary each parameter range, or
move it in the direction indicated by the results obtained with the plan of experiments.
While the laws “the smaller the better” (where μ = 0) and “the larger the better”
(where μ = ∞) immediately identify a direction (reduction or increase)
of the characteristic function along which the ranges of variation of each design parameter must be tuned, the law “target the better”, which incidentally appears
the easiest to learn, hides the problem of not suggesting to the designer how to orient the design parameter ranges, whether by reduction or increase. Figure 1
shows a qualitative description of these laws, in which the loss function (in red)
and the pdf (in blue) of the characteristic function are superimposed:
a) Smaller the better [y² min]
b) Larger the better [y² max, or (1/y)² min]
c) Target μ the better [(y − μ)² min]
Fig. 1. The loss functions L and the pdf of the characteristic function.
2.2 Problem 2: Variance of noise effects
The concept associated with SNR (Signal to Noise Ratio) takes into consideration
the mean value and the variance of the characteristic function at the same time, in a
set of scenarios in which the device under development can operate. It is derived as
the reciprocal of the coefficient of variation. Equation (2) represents the most
common SNR, generally employed in the “target the better” approach:

SNR = 10 log₁₀ (ȳ² / σ²)   (2)
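Eq. (2) can be evaluated directly; a minimal Python sketch with invented response values (we use the population variance here, while some Taguchi texts use the sample variance):

```python
import math
import statistics

def snr_target(samples):
    """'Target the better' SNR of eq. (2), in decibels."""
    y_bar = statistics.fmean(samples)
    sigma2 = statistics.pvariance(samples)  # population variance
    return 10 * math.log10(y_bar ** 2 / sigma2)

# hypothetical responses of the characteristic function over the scenarios
print(round(snr_target([1.8, 2.1, 2.3, 1.9]), 2))
```

A larger value indicates a mean that is large relative to the dispersion, which is why the SNR is maximized.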
At first sight, this element is appealing for its strength in taking into consideration both peculiar terms of the problem. However, a certain difficulty emerges, and
at least three different formulas were defined by Taguchi. This is the second element that should be clarified.
It has been stated that the SNR must be maximized, because what really must be
pursued is convergence towards a design solution that has minimum variance, in
the sense that, whatever the noises affecting the device may be, it will perform almost uninfluenced by them. Figure 2 illustrates the convergence towards the more
valid design solution through the pdf with smaller variance (the bell curve shown with a
blue line). This is in agreement with the DFSS approach (Design for Six Sigma).
Fig. 2. Reduction of noise effect.
3 New set of evaluation criteria
Considering the Taguchi Method as fundamental in product design, because it allows the designer to investigate his/her design solution from many angles,
it is important to point out the flaws that might compromise the adoption of the
right decisions.
The suggestions derive from the following two subsections.
3.1 Reducing Bias of characteristic function
In many applications of the Taguchi method in product design, the problem of
making the characteristic function converge towards a target must be managed. In this kind
of application, the third case described by Taguchi cannot be applied in a straightforward way, because the target cannot be considered as the mean value of the
characteristic; rather, it assumes the meaning of a lower bound or an upper
bound.
The image reported in Figure 1c should then be modified as reported in Figure 3.
In this case the problem must be clearly identified, verifying that the distribution
of the characteristic function does not pass the target value. We can separate this
case into two different occurrences, which can be named: target the better with a
lower bound, and target the better with an upper bound.
Fig. 3. The loss function L and the target μ identified as the extreme value of a lower or upper bound condition.
The management of the condition target the better with a lower bound (see
Figure 4a) requires that the characteristic function be written in such a way that it
corresponds to a coordinate translation (ȳ − μ). The condition that should be satisfied is the smaller the better problem with the following law: (ȳ − μ)² min.
Figure 4b describes this kind of situation.
Fig. 4. a) The loss function L, target the better with a lower bound; b) the coordinate translation.
The management of the condition target the better with an upper bound (see
Figure 5a) requires that the characteristic function be written in such a way that it corresponds to a coordinate translation (μ − ȳ). The condition that should be satisfied
is the smaller the better problem with the following law: (μ − ȳ)² min.
Figure 5b describes this kind of situation.
Fig. 5. a) The loss function L, target the better with an upper bound; b) the coordinate translation.
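The two translated losses can be sketched in a few lines; the function name, the bound check and the numerical values are our illustration, not the paper's notation:

```python
def translated_loss(y_mean, mu, bound="lower"):
    """'Target the better' with a one-sided bound, recast as a
    'smaller the better' problem on the translated coordinate:
    (y_bar - mu) for a lower bound, (mu - y_bar) for an upper bound."""
    shift = (y_mean - mu) if bound == "lower" else (mu - y_mean)
    if shift < 0:
        # the characteristic has crossed the bound: not an admissible design
        raise ValueError("characteristic function passes the bound")
    return shift ** 2

# a hypothetical mean FoS of 2.4 against a lower bound of 2.0
print(round(translated_loss(2.4, 2.0, bound="lower"), 3))  # → 0.16
```

The explicit check makes the difference from the plain “target the better” case visible: a design whose distribution crosses the bound is rejected rather than merely penalized.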
3.2 Reducing variance of noise effect
The unique rule that can be followed, as reported in Figure 2, is the reduction
of variance. To maintain the convention that the Noise Reduction must be maximized, a formula can be written as:

NR = −10 log₁₀ σ²   (3)

similar to those invented by Taguchi, where σ is the standard deviation of the distribution.
Actually, this is already contained in the general SNR formula (see eq. 2) if we rewrite it as

SNR = 10 log₁₀ ȳ² − 10 log₁₀ σ²   (4)

and the effect of the mean value ȳ is eliminated.
The Noise Reduction reported in eq. 3 is the unique formula that can be used in all
kinds of problem.
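The decomposition of eq. (4) can be checked numerically; a small sketch with invented responses, showing that the SNR of eq. (2) splits into a mean term plus the NR of eq. (3):

```python
import math
import statistics

def noise_reduction(samples):
    """Noise Reduction of eq. (3): a pure dispersion measure."""
    return -10 * math.log10(statistics.pvariance(samples))

def snr(samples):
    """Classical 'target the better' SNR of eq. (2)."""
    y_bar = statistics.fmean(samples)
    return 10 * math.log10(y_bar ** 2 / statistics.pvariance(samples))

# hypothetical responses over the noise scenarios
y = [1.8, 2.1, 2.3, 1.9]
mean_term = 10 * math.log10(statistics.fmean(y) ** 2)
# eq. (4): SNR = mean term + NR, so NR is the SNR with the mean effect removed
assert abs(snr(y) - (mean_term + noise_reduction(y))) < 1e-9
print(round(noise_reduction(y), 2))
```

Because NR depends only on σ², maximizing it ranks design-parameter levels purely by variability, which is the behavior the paper argues for.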
4 Case study
One of the most frequent characteristic functions to be analyzed in product design
is the Factor of Safety (FoS) of a structure. It is generally assumed to have a value of 1
or greater, in relation to the model employed and the conditions of use.
Considering the Taguchi philosophy on Robust Design, the FoS cannot be identified
as the target of an experiment; rather, it is really a lower limit that cannot be violated. Employing the third law of the loss function in a straightforward manner in
this case, designers might end up accepting combinations of design parameters
that give an inadmissible condition.
The case study proposed in this section is the first dimensioning of an E-bike
frame (see Figure 6). The E-bike was designed with the electric motor integrated
in the rear wheel, and the battery housed on the main tube in order to balance the
weights.
Fig. 6. a) The first drawing of the E-bike; b) The geometric model of the frame.
The dimensioning of the main components of the frame was performed by the
Taguchi method as a “target the better” problem. An L8 (2⁷) orthogonal array was
used and a set of seven design parameters was identified. In order to evaluate the
response of the structure, made of 5086 aluminium alloy, three different load
conditions were modeled, which allowed designers to assess the variability of the
characteristic function. The constraints were applied at the base of the head tube
and at the rear dropouts. In Table 1 the three load conditions (in N) are summarized.
Table 1. Load conditions (in N) in the three scenarios.

Scenario          Load on left  Load on right  Load on     Load on      Load on
                  handlebar     handlebar      left pedal  right pedal  the saddle
1 Be seated       200           200            200         200          700
2 Out of saddle   250           250            500         500          0
3 Accidental      0             0              0           0            1500
In Figure 7 the set of design parameters selected for the investigation is reported. They are:

A – Slope of the seat stays = 32 ÷ 37 degrees
B – Joint of the down tube to the head tube = 40 ÷ 55 mm
C – Joint of the top tube to the head tube = 35 ÷ 48 mm
D – Diameter of the stays = 12 ÷ 17 mm
E – Thickness of the stays = 1.5 ÷ 2 mm
F – Down tube minor semi axis = 25 ÷ 33 mm
G – Down tube major semi axis = 45 ÷ 52 mm
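The plan above can be sketched with the standard L8 (2⁷) array and a main-effects computation; the FoS responses below are invented placeholders, not the paper's results:

```python
# Standard L8(2^7) orthogonal array (levels coded 1/2), one column per
# frame parameter A..G.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

def main_effects(array, response):
    """Mean response at level 1 and level 2 of each factor (column)."""
    effects = []
    for col in range(len(array[0])):
        level1 = [r for row, r in zip(array, response) if row[col] == 1]
        level2 = [r for row, r in zip(array, response) if row[col] == 2]
        effects.append((sum(level1) / len(level1), sum(level2) / len(level2)))
    return effects

fos = [2.6, 2.1, 1.9, 2.4, 2.2, 2.0, 2.5, 1.8]  # hypothetical FoS per run
for name, (m1, m2) in zip("ABCDEFG", main_effects(L8, fos)):
    print(f"{name}: level 1 -> {m1:.3f}, level 2 -> {m2:.3f}")
```

Because every column of the array is balanced (four runs at each level), each pair of means isolates the average effect of one factor, which is what the Mean Effect graphs of Tables 2 and 4 report.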
In the following, the results of the investigation are reported according to two approaches: the first reports the “classical” elaboration of the “target the better” case
(see Tables 2 and 3); the second reports the elaboration performed with the present
proposal (see Tables 4 and 5).
Fig. 7. a) The set of design parameters employed during the investigation; b) CAE simulation of
the loaded frame with the response in terms of FoS in false colors.
Table 2. Mean value of the Objective Function FoS, to be guided to the target value F̄oS = 2.
Table 3. SNR performed by eq. 4.
Table 4. Mean value of the Objective Function FoS, to be minimized, revisited by the law “target the better” with lower bound: (FoS − F̄oS) = 0.
Table 5. NR performed by eq. 3.
As can be seen, no differences appear between Tables 2 and 4. The results obviously
have the same trends because they have been subjected to a translation. The
graphs of Table 4 give direct information on the characteristic function, and the designer can immediately judge the state of stress in comparison with the target value.
Table 5 differs in many parts with respect to Table 3. This is an interesting fact.
In detail, factor B now has a higher variability, and D, F and G have changed their slopes.
The comparison of the Mean Value and NR graphs (Tables 4 and 5) allows the designer to identify a large number of design parameter levels that better respond to
the problem. In fact, the levels that simultaneously give the smallest value of the
characteristic function and the maximum Noise Reduction are A2-B2-D1-E1-F1-G1.
Further investigation must be made on C, because ambiguity remains. Following
the procedure discussed in [6], a second step can be pursued employing a plan of
experiments at three levels. In any case, the range of factor C must be moved towards the lower level, in accordance with the Mean Value graph. The new parameter range can be (30; 35; 40). The other parameters to be examined in an L9 (3⁴)
orthogonal array are A, D and F, identified by ANOVA as the most influential on
the Mean Value of the characteristic function, and their initial ranges can be subdivided.
5 Conclusion
The main task of the present paper is to underline the extreme importance of the
Taguchi method in product design. During the first step of the embodiment phase,
designers need to understand and evaluate the points of strength and weakness of
the solution under development. Projecting a plan of experiments, together with the
identification of the possible design parameters and their ranges and the identification
of the possible noise sources, allows designers to become familiar with the device behavior and to decide on eventual changes of product architecture.
The Taguchi method can be applied with the help of statistical
software, even though the SNR functions that can be employed are the same ones that have been subjected to criticism. Therefore the researcher must be alert to the dangers of their straightforward
use. At the same time it appears that many researchers do not worry about this,
and indeed seem to ignore the discussion on the Taguchi method.
The Taguchi method does not require specialized software, because it can be
used with the help of a spreadsheet. Designers are invited to implement it with
the awareness underlined in this paper.
Acknowledgments The authors would like to thank Dr. Alessandro Burgio (PhD), President of
CRETA – Regional Consortium for Energy and Environment Protection, and the students Raso
A., Tropeano E.F., Sergi A. and Mirabelli F. for the materials produced and presented in the paper.
References
[1] Taguchi G. and Wu Y. Introduction to Off-line Quality Control, Central Japan Quality Control Association, Nagoya, Japan, 1985.
[2] Nair V.N. editor. Taguchi’s Parameter Design: A Panel Discussion. Technometrics, 34 (2),
1992.
[3] Montgomery D.C., Design and Analysis of Experiments, 8th Ed., 2012, John Wiley & Sons.
[4] Phadke M.S., Quality engineering using Robust Design, 1989, Prentice-Hall International.
[5] Rizzuti S., The Taguchi method as a means to verify the satisfaction of the information axiom
in axiomatic design, Smart Innovation, Systems and Technologies, 2015, 34, pp. 121-131
[6] Rizzuti S., A procedure based on Robust Design to orient towards reduction of information
content, Procedia CIRP, 2015, 34, 37-43.
Neuro-separated meta-model of the scavenging
process in 2-Stroke Diesel engine
Stéphanie Cagin¹*, Xavier Fischer¹
¹ ESTIA Recherche (France)
* Corresponding author. E-mail address: s.cagin@estia.fr
Abstract The complexity of the flow inside the cylinder has led to the development of new accurate
and specific models. The scavenging process influences 2-stroke engine efficiency and is particularly dependent on the cylinder design. To improve engine
performance, the enhancement of the chamber geometry is necessary. The development of a new neuro-separated meta-model is required to represent the scavenging process depending on the cylinder configuration. Two general approaches
were used to establish the meta-model: neural networks and NTF (Non-negative
Tensor Factorization) separation of variables. To fully describe the scavenging
process, the meta-model is composed of four static neural models (representing
the Heywood parameters), two dynamic neural models (representing the evolution
of the gas composition through the ports) and one separated model (the mapping of
the flow path during the process). With low reduction errors, these two methods
ensure the accuracy and the relevance of the meta-model results. The establishment of this new meta-model is presented step by step in this article.
Keywords: Neuro-separated meta-model; model reduction; neural network; NTF
variables separation; scavenging; 2-stroke engine; ports.
1 Introduction
Due to drastic emissions standards and fuel consumption constraints, internal
combustion engines used for transportation are widely studied to improve efficiency and obtain better performance. Despite its lower efficiency, the automotive industry has recently shown a renewed interest in the 2-stroke engine thanks to its advantages. Smaller and lighter than 4-stroke engines, 2-stroke engines fire once
every revolution, offering higher power and greater, smoother torque (Mattarelli
[1]; Trescher [2]). The interest is also linked to the development of new technological approaches which increase their efficiency.
The complexity of engines leads studies to focus on one particular moment of
the engine cycle. Less studied but just as important, the scavenging process has a
considerable influence on engine pollution, especially in 2-stroke engines with
ports. Scavenging is the process by which the fresh gases come into the combustion
chamber, pushing the burnt gases out through the exhaust ports. This process is highly
dependent on the cylinder geometry. To improve the scavenging efficiency, it is
necessary to model it and to observe the influence of geometric parameters on it.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_5
Some scavenging models have already been developed, such as the perfect displacement and perfect mixing models proposed by Hopkinson [3], the Maekawa
“three-zone” model [4] or the Sher “S-shape” model [5]. These models evaluate the exchange of gas masses during scavenging. Their use requires the evaluation of
some empirical parameters, and they do not explicitly integrate cylinder parameters,
which makes them unsuitable in a design optimization process. Other models have
been established since then, but they are specific to the engine studied.
To represent the scavenging process in a 2-stroke engine, we needed to develop
new models perfectly adapted to our Diesel engine (2-stroke with ports) and to our
use (finding an optimized cylinder design).
2 Scavenging process
2.1 The new model
During the engine cycle, scavenging occurs between the expansion and the compression phases (Blair [6]). Only ports are used for the gaseous exchanges.
Three phenomena are usually associated with scavenging by ports: backflow (Mattarelli [1]), short-cutting and mixing (Lamas [7]). Backflow
is observed when the burnt gases flow through the intake ports, stopping fresh gases from entering. Short-cutting characterizes the exit of fresh gases before the
end of the scavenging process. Less problematic than short-cutting, mixing
implies that a part of the fresh gases goes out of the cylinder, entraining some burnt
gases with them. To enhance the scavenging process, both backflow and short-cutting should be reduced. The two phenomena can be evaluated by knowing the composition of the gases going through the ports.
To fully characterize the scavenging efficiency, it is also necessary to evaluate
the masses of gases exchanged: the aim of scavenging is to replace burnt gases by
fresh gases as far as possible. The gaseous exchanges are determined thanks to the
four Heywood parameters [8]: the delivery ratio Λ, the trapping efficiency ηtr, the
scavenging efficiency ηsc and the charging efficiency ηch.
Finally, the model is completed by the evolution of the gas distribution inside
the cylinder during the process. The observation of the flow path is also useful in a
design optimization process to determine the most suitable cylinder parameters.
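Following the usual definitions in Heywood [8], the four parameters can be computed from the gas masses; the variable names and sample values in this sketch are ours, not the paper's:

```python
def heywood(m_delivered, m_retained, m_trapped, m_ref):
    """Scavenging parameters following the usual Heywood definitions:
      m_delivered: fresh charge supplied through the intake ports
      m_retained:  fresh charge still in the cylinder at port closure
      m_trapped:   total trapped cylinder mass (fresh + residual gas)
      m_ref:       reference mass (displaced volume x ambient density)
    """
    delivery_ratio = m_delivered / m_ref      # Lambda
    trapping_eff = m_retained / m_delivered   # eta_tr
    scavenging_eff = m_retained / m_trapped   # eta_sc
    charging_eff = m_retained / m_ref         # eta_ch
    return delivery_ratio, trapping_eff, scavenging_eff, charging_eff

# hypothetical masses in kg; note the identity eta_ch = Lambda * eta_tr
lam, tr, sc, ch = heywood(1.2e-3, 0.9e-3, 1.0e-3, 1.1e-3)
assert abs(ch - lam * tr) < 1e-12
print(round(tr, 3), round(sc, 3))
```

The identity ηch = Λ · ηtr is a convenient cross-check when the four quantities are produced by separate neural sub-models, as in section 3.2.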
2.2 The cylinder variables
To optimize a design, the parameters of the cylinder directly influencing the scavenging process have been defined (Cagin [9]): the angles βin and βexh, the height of
the exhaust port θend_exh_port, the advance of opening of the intake and exhaust ports θin_advance and θexh_advance, the boost pressure Pboost and the difference between intake and exhaust pressures ΔP. Their ranges of values have been determined to respect the engine constraints (mechanical constraints, duration of the power phase…).
Then, a database of behavior has been built from 2D CFD results [10]. The different configurations of the cylinder (combinations of parameters) were determined
using a design of experiments [11]. The developed models ensue from the CFD results.
3. Neuro-separated meta-model
The newly developed model is called a “neuro-separated meta-model” because of the
techniques used to establish it. Indeed, to express the cylinder variables explicitly,
neural networks have been used to develop two sub-models: one for the Heywood parameters and one for the evolution of the gas composition going
through the ports. The flow path model is defined thanks to a separated variables
method, the β-NTF approach. All the sub-models are brought together to form the
neuro-separated meta-model of the scavenging process.
3.1. Neural network
Neural networks (NN) can be seen as complex mathematical functions that accept any numerical inputs and generate associated numerical outputs. After the
training phase, the NN is able to approximate the system’s behavior. Its ability to
handle any input combination makes it very useful in an optimization process: successive combinations of variables can be tested to converge towards an optimized
design. The NN analytical model of the scavenging is established with multiple interests. First, the analytical model runs faster than CFD calculations (a few seconds vs. 10 h). Second, the NN model size is far smaller than a database of CFD
results. Third, any parameter combination can be tested if the scavenging behavior
is considered continuous and linear outside the learning points.
3.2. Heywood neural sub-model
The neural networks were developed using Python code and the PyBrain library.
The networks are used to evaluate the Heywood parameters (Λ, ηtr, ηsc, ηch) at the end
of the process, depending on the cylinder design.
After several tests with different structures (changing the number of neurons
and the activation function type), the results illustrated the NN’s inability to reduce the relative error of all four outputs at the same time. So, one NN for each
parameter is used. Depending on the number of hidden neurons and the activation
functions selected, the relative error for the learning data and the relative error for
all the data are calculated. The weights and biases of the networks are randomly initialized. The results for each Heywood parameter NN are presented in Table 1.
Table 1. Neural network results

Output   Activation   Number of        Relative error     Relative error
         function     hidden neurons   (training data)    (all data)
Λ        Sigmoid      10               2.56%              7.9%
ηtr      Sigmoid      8                0.65%              2.5%
ηch      Tanh         12               0.36%              7.1%
ηsc      Tanh         8                3.02%              5.2%
The first striking result is that the best structure for each output is different from
the others, which confirms the need to develop one NN per parameter.
These four outputs indicate the influence of each input on the scavenging process, in order to optimize the cylinder design. However, the relative errors of the delivery ratio and the charging efficiency are over 7%, which makes the NNs’ relevance
questionable. To reduce the errors, the database can be completed with other CFD
results. Another way is to change the NN structure to something more appropriate
and/or to initialize the weights and biases with better values.
In any case, NNs are very flexible with respect to the input values, which is very useful in
optimization problems and justifies their widespread use in various sectors. The
four neural networks of Table 1 are integrated into the neuro-separated meta-model.
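The per-parameter networks can be sketched generically; the one-hidden-layer network below, trained by gradient descent on an invented smooth response, is our illustration (the paper used PyBrain; structure, data and hyperparameters here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyNet:
    """One-hidden-layer regression network, a generic stand-in for the
    per-parameter NNs of Table 1."""

    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)  # hidden activations
        return self.h @ self.W2 + self.b2        # linear output

    def train_step(self, X, y, lr=0.3):
        out = self.forward(X)
        err = out - y[:, None]
        # backpropagation for the mean squared loss
        dW2 = self.h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * self.h * (1.0 - self.h)
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        self.W1 -= lr * dW1; self.b1 -= lr * db1
        self.W2 -= lr * dW2; self.b2 -= lr * db2
        return float((err ** 2).mean())

# toy surrogate: learn a smooth response of two "design parameters"
X = rng.uniform(-1.0, 1.0, (200, 2))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] ** 2
net = TinyNet(n_in=2, n_hidden=8)
losses = [net.train_step(X, y, lr=0.5) for _ in range(2000)]
print(round(losses[-1], 4))
```

Once trained, `net.forward` plays the role of the fast surrogate: any parameter combination can be evaluated in microseconds, which is the point made above about CFD versus the analytical model.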
3.3. Mass fractions neural sub-model
The goal of the model developed in this section is to describe the evolution
of the composition of the gases flowing through the ports. The composition is evaluated
every 0.99 crankshaft degrees. The mass fraction of burnt gases is the output of the NN.
The function the NN has to approximate (the blue curve in Fig. 1) models the Behavior of a Design Configurations Family (the BDCF model). All the inputs were
numbered from 1 to 2429 (intake data) or to 4077 (exhaust data); they correspond
to the abscissa axis. On the ordinate are the outputs (mass fraction of burnt gases).
The number of layers is chosen to be as small as possible. The number of nodes on
the hidden layer arbitrarily varies from 6 to 12. To compare the performance of
the different NNs, the training duration and the relative error are computed:
(1)
And to obtain the average error, equation (2) was used:
(2)
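The formulas of eqs. (1) and (2) did not survive extraction; the sketch below is one plausible reading (per-sample relative error, then its mean) and is an assumption on our part, not the authors' exact definition:

```python
def relative_error(predicted, expected):
    """Per-sample relative error; a plausible reading of eq. (1),
    whose exact formula was lost in extraction."""
    return abs(predicted - expected) / abs(expected)

def average_error(predictions, expectations):
    """Mean of the per-sample relative errors (a reading of eq. 2)."""
    errors = [relative_error(p, e) for p, e in zip(predictions, expectations)]
    return sum(errors) / len(errors)

print(round(average_error([0.95, 1.10], [1.0, 1.0]), 3))  # → 0.075
```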
Matlab® software was used to generate and train the NNs. The linear activation
function was selected for the input layer, whereas the tangent sigmoid was used for
the hidden and output layers [11]. The results of the NNs are provided in
Table 2.
Table 2. Network efficiencies depending on the number of nodes on the hidden layer

Number of nodes   Training   Average   Absolute error
(hidden layer)    duration             Minimum   Maximum
6                 5:48       3.67 %    ~10⁻⁶     26.9 %
8                 6:03       3.33 %    ~10⁻⁷     24.6 %
10                6:19       3.15 %    ~10⁻⁶     28.8 %
12                6:41       3.14 %    ~10⁻⁵     27.0 %
The influence of the number of neurons is quite low: all the errors are almost the
same. With average relative errors of less than 4%, whatever the number of nodes in
the hidden layer, the neural networks appear to be very efficient for modelling the
scavenging process. Only the maximum relative error remains unsatisfying. To
decrease the maximum relative error, other hidden layers can be added. In this
case, a compromise between error reduction and the increased complexity of the
analytical model has to be found.
Fig. 1. Expected outputs vs the BDCF model
The NN with 8 nodes seems to be the best compromise between training duration and relative errors. The general shape of the NN outputs is close to the
shape of the real outputs, as shown in Fig. 1. The use of the sigmoid function for
the output layer is validated. This NN was selected to compute the composition
through the exhaust ports. The same NN structure is used for the intake ports. The two
NNs are also integrated into the neuro-separated meta-model.
3.4. Gases distribution sub-model
The Non-negative Tensor Factorization (NTF) algorithm is very attractive because
of its ability to take into account spatial and temporal correlations between variables more accurately than 2D Non-negative Matrix Factorization (NMF). First
proposed by Shashua and Hazan [12], NTF is a generalization of NMF (Lee
and Seung [13]). Based on a PARAFAC (PARAllel FACtors) analysis, the particularity of the NTF method is to impose nonnegativity constraints on the tensor and factor matrices. NTF also provides greater stability and a unique solution, as well as
meaningful latent (hidden) components or features with physical or physiological
meaning and interpretation [14]. Finally, the NTF algorithm provides a powerful
implementation with multi-array data. The principle of the NTF decomposition is
illustrated by Fig. 2.
Fig. 2. NTF Concept
The 3D matrix form of the separated model is particularly suitable for data storage and data mapping: the separated variable model is used to visualize the flow
path during the process. To use the NTF algorithm with the β divergence expressed by Cichocki [14], the data are organized to obtain the Y model. The three dimensions were respectively associated with space, time and combinations of cylinder parameters (cf. Fig. 2). The dimensions of the Y matrix in this study are 174
x 6561 x 27: 174 crankshaft angles for the scavenging duration, 6561 mesh nodes
(after normalization of the data) and 27 configurations of the engine modeled.
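The decomposition can be sketched with multiplicative updates for the Frobenius (β = 2) case; this is a generic illustration of nonnegative CP factorization on a tiny synthetic tensor, not the authors' β-NTF implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def khatri_rao(B, C):
    """Column-wise Kronecker product of two factor matrices."""
    return np.stack([np.kron(B[:, r], C[:, r]) for r in range(B.shape[1])],
                    axis=1)

def ntf(X, rank, n_iter=200, eps=1e-9):
    """Nonnegative CP factorization of a 3-way tensor by multiplicative
    updates (Frobenius / beta = 2 case)."""
    dims = X.shape
    factors = [rng.uniform(0.1, 1.0, (d, rank)) for d in dims]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            KR = khatri_rao(others[0], others[1])
            # mode-n unfolding of X, rows indexed by the current mode
            Xn = np.moveaxis(X, mode, 0).reshape(dims[mode], -1)
            factors[mode] *= (Xn @ KR) / (factors[mode] @ (KR.T @ KR) + eps)
    return factors

# tiny synthetic nonnegative tensor: space x time x configuration
X = rng.uniform(0.0, 1.0, (6, 5, 4))
factors = ntf(X, rank=3)
approx = np.einsum('ir,jr,kr->ijk', *factors)
rel_err = np.linalg.norm(X - approx) / np.linalg.norm(X)
print(round(float(rel_err), 3))
```

The multiplicative form keeps every factor entry nonnegative by construction, which is the property that makes the latent components physically interpretable.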
First, the influence of the number of design configurations on the average relative error was tested (Fig. 3). The number of design configurations directly impacts the model size and the dimension of the Y matrix. The relative error was
calculated with (1) and the average error with (2).
Fig. 3. Evolution of the relative error depending on the number of configurations
Fig. 3 shows that, for the same number of modes, the reduction error rises when
the number of configurations increases. However, it also highlights that the influence of the model size on the error decreases when the size of the model increases.
Beyond 14 configurations, the error increase is almost negligible. Any new configuration results added to the CFD model will not impact the accuracy of the reduced model.
As seen in Table 3, the number of modes should be chosen carefully. This
number influences both the reduction error and the time needed to establish the
reduced model. The duration of the model development increases proportionally
with the number of modes. With 160 modes, more than 5 days are needed to obtain
the complete reduced model.
Table 3. Results of NTF reduced models

Modes | Average rel. error | Normalized model size | Reduced model size | % of reduction | Time
10    | 7.12%              | 30,823,578            | 67,620             | 99.78%         | 18.6 h
80    | 3.45%              | 30,823,578            | 540,960            | 98.24%         | 71.2 h
120   | 2.80%              | 30,823,578            | 811,440            | 97.37%         | 99.2 h
160   | 2.41%              | 30,823,578            | 1,081,920          | 96.49%         | 123.2 h
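The sizes in Table 3 are consistent with storing, for the reduced model, only the three factor matrices — modes × (174 + 6561 + 27) values — instead of the full 174 × 6561 × 27 tensor; a quick check:

```python
# Reproduce the model sizes and reduction percentages of Table 3.
dims = (174, 6561, 27)                 # crank angles x mesh nodes x configurations
full_size = dims[0] * dims[1] * dims[2]

rows = {}
for modes in (10, 80, 120, 160):
    reduced_size = modes * sum(dims)   # one column per mode in each factor matrix
    reduction = 100.0 * (1.0 - reduced_size / full_size)
    rows[modes] = (reduced_size, round(reduction, 2))
```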
Fig. 4. Average relative error depending on the number of modes
Conversely, the error decreases when the number of modes rises (Fig. 4). In addition, above 10 modes, the error is always under 8%, which confirms that the NTF method is well adapted to extensive models: it compresses the data by 99.8% while introducing less than 8% error. An error of 2% seems to be the minimum that can be achieved (Fig. 4). The differences between the CFD and NTF models are presented in Fig. 5.
Fig. 5. Comparison between CFD and NTF models (for 60 and 160 modes)
Due to its matrix form, the NTF reduced model is only usable for the 27 configurations tested. It cannot directly provide results for any other design configuration but, associated with a kriging method, the separated-variable model forecasts the evolution of the distribution of gases inside the cylinder during the whole scavenging process, whatever the configuration, thanks to data interpolation [11]. The NTF reduced model completes the neuro-separated meta-model, which already integrates six neural models (four neural networks for the Heywood parameters and two for the gas compositions).
52
S. Cagin and X. Fischer
5. Conclusion
Thanks to neural networks and the NTF separation-of-variables method, a neuro-separated meta-model of the scavenging process has been developed. This new meta-model characterizes the whole process by integrating seven reduced models: six neural models and one separated-variable model. The meta-model combines the static aspects of scavenging, through the Heywood parameters evaluated at the end of the process, and its dynamic aspects, through the evolution of the gas composition at the ports and the mapping of the flow path during the process. In addition to being perfectly adapted to our specific engine, the meta-model has the advantage of explicitly integrating the design variables. It can easily be used in a design optimization process; the low reduction errors ensure the accuracy and the relevance of the meta-model results. This kind of new meta-model, composed of several accurate reduced sub-models, will be used more and more: each sub-model represents one phenomenon or one aspect of a global process, whereas their combination provides a complete description of the process.
References
[1] Mattarelli E. "Virtual design of a novel two-stroke high-speed direct-injection Diesel engine". International Journal of Engine Research. 10, 3, 175–93, 2009;
[2] Trescher D. "Development of an Efficient 3-D CFD Software to Simulate and Visualize the Scavenging of a Two-Stroke Engine". Archives of Computational Methods in Engineering. 15, 1, 67–111, 2008;
[3] Hopkinson B. "The Charging of Two-Cycle Internal Combustion Engines". Journal of the American Society for Naval Engineers. 26, 3, 974–85, 1914;
[4] Maekawa M. "Text of Course". JSME G36. 23, 1957;
[5] Sher E. "A new practical model for the scavenging process in a two-stroke cycle engine". SAE Technical Paper 850085. 1985;
[6] Blair G.P. "Design and Simulation of Two-Stroke Engines". Society of Automotive Engineers, Inc.; 1996.
[7] Lamas Galdo M.I., Rodríguez Vidal C.G. "Computational Fluid Dynamics Analysis of the Scavenging Process in the MAN B&W 7S50MC Two-Stroke Marine Diesel Engine". Journal of Ship Research. 56, 3, 154–61, 2012;
[8] Heywood J.B. "Internal Combustion Engine Fundamentals". McGraw-Hill; 1988.
[9] Cagin S., Fischer X., Bourabaa N., Delacourt E., Morin C., Coutellier D. "A Methodology for a New Qualified Numerical Model of a 2-Stroke Diesel Engine Design". The International Conference on Advances in Civil, Structural and Mechanical Engineering - CSME 2014. Hong Kong; 2014.
[10] Cagin S., Bourabaa N., Delacourt E., et al. "Scavenging Process Analysis in a 2-Stroke Engine by CFD Approach for a Parametric 0D Model Development". 7th International Exergy, Energy and Environment Symposium. Valenciennes; 2015.
[11] Cagin S., Fischer X., Delacourt E., et al. "A new reduced model of scavenging to optimize cylinder design". Simulation: Transactions of the Society for Modeling and Simulation International. 2016;
[12] Shashua A., Hazan T. "Non-negative tensor factorization with applications to statistics and computer vision". ICML '05 Proceedings of the 22nd International Conference on Machine Learning. New York; p. 792–9, 2005.
[13] Lee D.D., Seung H.S. "Learning the parts of objects by non-negative matrix factorization". Nature. 401, 788–91, 1999;
[14] Cichocki A., Zdunek R., Choi S., Plemmons R., Amari S.-I. "Non-Negative Tensor Factorization using Alpha and Beta Divergences". IEEE International Conference on Acoustics, Speech and Signal Processing. Honolulu, HI; 2007.
SUBASSEMBLY IDENTIFICATION METHOD BASED
ON CAD DATA
Imen BELHADJ1, Moez TRIGUI1 and Abdelmajid BENAMARA1
1
Mechanical Engineering Laboratory, National Engineering School of Monastir, University of Monastir, Av. Ibn Eljazzar, 5019, Monastir, Tunisia.
* Corresponding author. Tel.: +216-95 296 671. E-mail address: imenne.belhadj@gmail.com
Abstract Given the significant number of parts constituting a mechanism, assembly or disassembly sequence planning has become a very hard problem. The subassembly identification concept constitutes the most original way to solve this problem, particularly for complex products. This concept aims to break down the multipart assembly product into a particular number of subassemblies, each constituted by a small number of parts. Consequently, the generation of assembly or disassembly sequence planning can be carried out relatively easily, because it takes place between the subassemblies constituting the product. Then, each subassembly is assembled or disassembled using the same approach. In the literature, subassembly identification from the CAD model is not very developed and remains a relevant research subject to be improved. In this paper, a novel subassembly identification approach is presented. The proposed approach starts with the exploration of the CAD assembly data to get an adjacency matrix. Then, the extracted matrix is enriched by adding the contacts in all directions in order to determine and classify the base parts, the initiators of each subassembly. The next step is to identify subassemblies using a new matrix, called the sum matrix, obtained from the contact-in-all-directions matrix and the fit matrix. To better discuss and explain the stages of the proposed approach, an example of a CAD assembly product is treated throughout this paper.
Keywords: CAD data extraction, Subassembly concept, Base parts
1 Introduction
Generating an Assembly or Disassembly Sequence Planning (ASP/DSP) for a product is a very complex task, with an impact not only on the design cycle but also on the product cost. This complexity is proportional to the number of parts constituting an assembly product. To reduce this difficulty, a concept such as Subassembly Identification (SI) was proposed by many researchers [1]. This concept aims to break down the multipart assembly product into a particular number of subassemblies, each constituted by a small number of parts. Then, the ASP/DSP of the subassemblies can be realized easily. Despite the
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_6
55
56
I. Belhadj et al.
efficiency of the subassembly method, the automatic identification of subassemblies from the CAD model is still a research subject to be improved.
Due to the wide variety of assembly constraints, the result of SI is not unique for the same assembly product, and erroneous results are possible. Consequently, many criteria to evaluate the correctness or the feasibility of the SI were introduced according to the various engineering applications.
In the literature, many SI techniques are presented. The most important ones are summarized below.
Some researchers classified SI into two principal categories, based on topological and geometrical assembly constraints [2]. Other investigators proposed graph-theory methods, which represent contacts or connections between assembly parts: the nodes represent the components constituting the assembly and the lines between nodes represent the contacts [3]. This method has been improved by taking assembly precedence into account with various techniques. The "ask–answer" method is used by Kara et al. [4] to identify subassemblies and generate the DSP of the product; the arrowed lines between parts represent assembly precedence. Wang et al. [5] introduced the concept of base parts, which is very practical for SI. The base parts and precedences are judged by experts, which represents a limitation of this approach. Wang et al. [3] used the assembly tree to break down a complex CAD model into subassemblies, each of which can enclose simpler subassemblies. Despite the importance of the assembly structure tree, the SI cannot be determined automatically because the assembly structure tree is not unique, depending on how the planner realizes the assembly [6, 7]. Sugato [8] proposed an SI approach based on assembly structure knowledge; the obtained subassemblies are then taken as typical cases to extract other similar subassemblies. In the same way, other researchers used a case-based reasoning method to identify subassemblies, based on existing cases or experiences [9]. Santochi et al. [10] used a set of rules to construct an incidence matrix which represents the assembly mating features; the subassemblies are recognized with respect to the incidence matrix and the inherent information. From the literature overview detailed previously, some relevant points are identified:
- There is no unique or standard approach to identify subassemblies.
- SI is a combinatorial problem: the search space of SI increases with the number of parts.
- The intervention of the planner is necessary in most of the previous research works; consequently, the automation of SI is not satisfactory.
- Few works use CAD data to identify subassemblies automatically.
- The use of base parts can provide an advanced way to identify subassemblies from a complex product.
This paper presents an SI approach based on the extracted CAD assembly constraints. The remainder of this paper is organized as follows: first, the data extraction and relationship method from the CAD assembly model is detailed. Then, an adjacency matrix is constructed, to be enriched later by mounting parameters, and the SI procedure can be started. An industrial example is treated throughout this paper in order to explain the different stages of the proposed approach.
2. Proposed approach to identify subassemblies
Figure 1 describes the strategy adopted for the SI approach, which is composed of two principal steps:
- CAD assembly data extraction: it aims to extract all assembly constraints and all part attributes;
- Subassembly identification process: it aims to identify the set of subassemblies based on the assembly constraints and part attributes.
Figure 1. Strategy of the proposed subassembly identification approach (STEP 1: CAD assembly data extraction; STEP 2: subassembly identification procedures)
Figure 2. Flowchart of the proposed approach: (a) CAD assembly data extraction, (b) identification and suppression of the connector parts, (c) identification of the base parts set (computation of the fitness scores Fni), (d) identification of the subassemblies list
2.1. CAD assembly data extraction
The use of an Application Programming Interface (API) facilitates the extraction of information related to the assembly model created in the CAD system. This information can be exploited to generate the subassemblies from the CAD assembly model (figure 2 (a)). The data extraction step is composed of two stages:
- Extraction of the geometrical and topological information related to the part (vertex (coordinates); edge (associated vertices, length); wire (associated edges, orientation, length); face (associated wires, normal and area));
- Extraction of the data associated with the assembly constraints between parts (mate type, mate entities, mate parameters, concerned parts).
These data are stored in a database related to the part and the assembly.
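The stored records can be sketched as simple data structures; the field names below are illustrative choices for this sketch, not the schema of any particular CAD API:

```python
from dataclasses import dataclass, field

@dataclass
class Mate:
    # One assembly-constraint record extracted through the CAD API
    # (hypothetical field names, for illustration only).
    mate_type: str                    # e.g. "coincident", "concentric"
    parts: tuple                      # the pair of concerned part numbers
    entities: tuple = ()              # mate entities (faces, edges, ...)
    parameters: dict = field(default_factory=dict)  # mate parameters

@dataclass
class PartRecord:
    # Geometrical/topological attributes of one part.
    number: int
    surface: float = 0.0              # boundary surface area
    volume: float = 0.0
```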
The proposed approach is presented by the flowchart in figure 2. In the
following section, each step will be described.
2.2 Subassembly identification process
A demonstrative CAD model, a Reduction Gear (figure 3), is introduced to explain the different stages of this approach. This mechanism is composed of 23 parts.
Figure 3. Illustrative example: Reduction Gear (parts numbered 1 to 23).
2.2.1 Adjacency matrix of CAD model
The first result of the data extraction step is the adjacency matrix [Adj]. [Adj] is a symmetric square matrix whose size is N×N, where N represents the total number of parts. The element Adj(i,j) represents an existing contact between two parts i and j and can take the following values:
- Adj(i,j) = 1 if there is a relationship between i and j;
- Adj(i,j) = 0 if there is no relationship between i and j;
- Adj(i,j) = 0 if i = j.
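The construction of [Adj] from a list of contacting pairs can be sketched as follows (1-based part numbers as in figure 3; the contact list below is a hypothetical fragment, not the full Reduction Gear data):

```python
import numpy as np

def adjacency(n_parts, contacts):
    # Symmetric (N x N) adjacency matrix: Adj(i,j) = 1 if parts i and j
    # are in contact, 0 otherwise (and 0 on the diagonal).
    adj = np.zeros((n_parts, n_parts), dtype=int)
    for i, j in contacts:
        if i != j:
            adj[i - 1, j - 1] = adj[j - 1, i - 1] = 1
    return adj

# Hypothetical fragment of contact pairs, for illustration only.
adj = adjacency(4, [(1, 3), (2, 3), (2, 4)])
totals = adj.sum(axis=1)   # the "Total Contact" column of equation (1)
```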
The adjacency matrix of the treated example is given by equation (1). As can be noticed from the size of the [Adj] matrix, the number of connector elements (screws, bolts) is significant compared with the total number of parts. Moreover, these parts are generally used to link together all the subassemblies of a mechanism. Therefore, to minimize the solution space of SI, the size of the [Adj] matrix is reduced by removing all connector elements (figure 2 (b)). These connectors are identified directly from the hierarchical assembly tree. In this example, all parts from item 18 to item 23 are removed from the adjacency matrix to get a reduced one.
[Adj] = (23 × 23 symmetric adjacency matrix of the Reduction Gear, with a Total Contact column giving the number of contacts of each part; not reproduced here)   (1)
2.2.2 Determination of the base part set
The identification of the Base parts (Bp), which represents a key step of SI, can be started once the reduced adjacency matrix is obtained. A base part is the initiator of a subassembly; it can be defined as a central part on which most parts will be mounted. As can be noticed from the adjacency matrix of the illustrative example (Eq. 1), the Input wheel (15), despite its relatively large volume, has four contacts and cannot be a base part, as it must be assembled on the intermediate shaft (7). Moreover, the Main cover (1), with a contact number equal to 3, smaller than that of the input wheel, could constitute a base part. To circumvent this problem and support the determination of the base parts, two steps are proposed: the [Adj] matrix is transformed by adding the contacts according to the three directions, and a fitness function involving other criteria is introduced. The two aspects are detailed in the following section.
- Transformed matrix [Cad]
The transformation of the [Adj] matrix consists in considering the contacts between parts in the three directions (x, y, z). The new Contact-in-all-directions matrix [Cad] is calculated as follows:

[Cad] = [Cx] & [Cy] & [Cz]   (2)

where [Cx], [Cy] and [Cz] represent the three contact matrices according to the directions (x, y, z).
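The combination in equation (2) can be sketched as follows; since the '&' operator is not defined in this excerpt, it is read here as an element-wise accumulation of the per-direction contact matrices (an assumption of this sketch):

```python
import numpy as np

def contact_all_directions(cx, cy, cz):
    # [Cad] from equation (2), interpreted as accumulating the contacts
    # detected along x, y and z (assumption: '&' = element-wise combination).
    return np.asarray(cx) + np.asarray(cy) + np.asarray(cz)

# Toy 2-part example: the two parts touch in both the x and y directions.
cx = [[0, 1], [1, 0]]
cy = [[0, 1], [1, 0]]
cz = [[0, 0], [0, 0]]
cad = contact_all_directions(cx, cy, cz)
new_totals = cad.sum(axis=1)   # direction-aware contact counts per part
```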
The new total contact of the Input wheel (15) becomes 7, while for the Main cover (1) it is 9; this result shows the importance of this stage in the detection procedure of the base parts.
In the second step, a fitness function considering more than the contact criterion is introduced. The considered criteria are: the largest boundary surface, the highest volume and the maximal number of relationships (identified from the adjacency matrix). Figure 2 (c) details the flowchart of this step. The Bp is
identified according to a fitness function Fn. As a result, for a part i, the score of the fitness function is calculated by equation (3):

Fni = α (Si/St) + β Nr + γ (Vi/Vt)   (3)
Where
- Si represents the boundary surface of part Pi;
- St represents the total surface of the parts existing in the assembly;
- Nr represents the total number of relationships between part Pi and the other parts in the assembly, calculated from the adjacency matrix [Adj];
- Vi represents the volume of part Pi;
- Vt represents the total volume of the parts existing in the assembly;
- α, β, γ represent the weighting coefficients introduced by the planner.
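Equation (3) translates directly into code; the default weight values below are one of the settings reported for figure 4:

```python
def fitness(si, st, nr, vi, vt, alpha=0.3, beta=0.4, gamma=0.3):
    # Score of equation (3): Fn_i = alpha*(Si/St) + beta*Nr + gamma*(Vi/Vt).
    return alpha * si / st + beta * nr + gamma * vi / vt

# Example: a part covering half the total surface and half the total volume,
# with one relationship in [Adj] (hypothetical values, for illustration).
score = fitness(si=1.0, st=2.0, nr=1, vi=1.0, vt=2.0)
```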
Figure 4 illustrates the evolution of the fitness function score of each part of the Reduction Gear for different values of α, β and γ (α = 0.3, β = 0.4, γ = 0.3; α = 0.3, β = 0.2, γ = 0.5; α = 0.5, β = 0.2, γ = 0.3). It has been found that the obtained base parts list is {1, 2, 7, 8, 12, 13} for all the weight values used.
Figure 4. Evolution of the fitness function score with different values of the weight coefficients
2.2.3 Subassembly research
Once the base parts list is established, the SI algorithm begins (Figure 2 (d)). The SI algorithm starts by browsing each base part (Bpi) and its relationships with the other parts, and by removing all connections with other base parts. Figure 5 shows the liaison graph of the treated example before and after this suppression. When analyzing the obtained graph, two particular cases (illustrated in figure 6) can arise.
Figure 5. Liaison graph of the Reduction Gear: (a) before and (b) after suppression of the connections between base parts.
Figure 6. Mechanism liaison graphs. (a): Situation 1, (b): Situation 2.
Case 1: in figure 6 (a), all parts Pm, Pn and Pk belong to the set of the considered base part BpM.
Case 2: if the situation of figure 6 (b) occurs (in the illustrative example, part (9) has two connections, with Bp2 and Bp8), the SI algorithm decides by considering the weight (We) of each connection as follows:
- If We(BpM, Pl) > We(Pl, BpK), Pl belongs to the set of BpM;
- Else, Pl belongs to the set of BpK.
The weight (We) is calculated by formula (4):

We(Pi, Pj) = S(i, j)   (4)

[S] = [Cad] & [Fit]   (5)

where [S] is the sum matrix calculated using formula (5) and [Fit] is a symmetric square matrix whose size is N×N, where N represents the total number of parts. The element Fit(i,j) of [Fit], which represents an existing fitting contact between two parts Pi and Pj, can take the following values:
- Fit(i,j) = 1 if the contact between i and j is a tight fit;
- Fit(i,j) = 0 if i = j or if the contact between i and j is a clearance fit.

[S] = (17 × 17 sum matrix of the reduced Reduction Gear model; not reproduced here)   (6)
In the Fit matrix of the Reduction Gear, there are 11 fitting contacts, for example between bearing 1 (5) and the output shaft (2). The sum matrix of the mechanism is presented by equation (6). This procedure is repeated for each part Pi in the assembly, the base parts excluded. This stage completes the treatment of the first base part, and the output is the first subassembly set. The SI algorithm repeats this procedure for all base parts (figure 2 (d)). The output of the SI algorithm is a set of subassemblies. For the treated example, the list of the identified subassemblies is represented by figure 7.
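The Case 2 decision rule can be sketched as follows (0-based indices; the sum matrix below is a hypothetical toy example, not the actual [S] of equation (6)):

```python
import numpy as np

def attach(part, bp_m, bp_k, s):
    # A part connected to two base parts joins the one with the larger
    # connection weight We(i, j) = S(i, j) (Case 2 of the SI algorithm).
    return bp_m if s[bp_m, part] > s[part, bp_k] else bp_k

# Toy symmetric sum matrix: part 3 is linked to base parts 0 and 1.
s = np.zeros((4, 4))
s[0, 3] = s[3, 0] = 2   # stronger (e.g. tight-fit) connection to base part 0
s[1, 3] = s[3, 1] = 1   # weaker connection to base part 1
chosen = attach(3, 0, 1, s)
```

Since [S] is symmetric, the choice does not depend on which base part is passed first.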
Sub1: {7, 6, 10, 15, 16, 17}; Sub2: {8}; Sub3: {2, 3, 4, 5, 9}; Sub4: {1}; Sub5: {12, 11, 14}; Sub6: {13}
Figure 7. The identified subassemblies of the Reduction Gear.
3. Conclusion
In this paper, an SI approach composed of two main steps is proposed. It starts with the exploration of the CAD assembly data to generate three matrices (the adjacency matrix, the contact-in-all-directions matrix and the sum matrix). Then, the extracted matrices are enriched by mounting parameters in order to extract the base parts and identify the subassemblies. To highlight the efficiency of the SI approach, SolidWorks© and Matlab© were used to perform the numerical implementation, and an example of a CAD assembly mechanism was tested.
4 References
[1] Hyoung RL., Gemmill DD., Improved methods of assembly sequence determination for
automatic assembly systems. Eur J Oper Res 131(3):611–621. 2001.
[2] Laperrière L., ElMaraghy HA., Assembly sequences planning for simultaneous engineering applications. Int J Adv Manuf Technol 9(4):231–244. 1994.
[3] Lai HY., Huang CT., A systematic approach for automatic assembly sequence plan
generation. Int J Adv Manuf Technol 24 (9/10):752–763. 2004.
[4] Kara S., Pornprasitpol P., Kaebernick H., Selective disassembly sequencing: a methodology for the disassembly of end-of-life products. Annals of the CIRP 55(1):37–40. 2006.
[5] Wang JF., Liu JH., Zhong YF., Integrated approach to assembly sequence planning of
complex products. Chin J Mech Eng 17 (2):181–184. 2004.
[6] Moez Trigui, Riadh BenHadj, Nizar Aifaoui, An interoperability CAD assembly sequence
plan approach. Int J Adv Manuf Technol (2015) 79:1465–1476. 2015.
[7] Imen Belhadj, Moez Trigui, Abdelmajid Benamara, Subassembly generation algorithm from
a CAD model. Int J Adv Manuf Technol (2016):1–12. 2016.
[8] Sugato C., A hierarchical assembly planning system. Texas A&M University, Austin. 1994.
[9] Swaminathan A., Barber KS., An experience-based assembly sequence planner for
mechanical assemblies. IEEE Trans Robot Autom 12(2):252–266.1996
[10] Santochi M., Dini G., Computer-aided planning of assembly operations: the selection of
assembly sequences. Robot Comput-Integrated Manuf 9(6):439–446. 1992.
Multi-objective conceptual design: an approach
to make cost-efficient the design for
manufacturing and assembly in the
development of complex products
Claudio FAVI1*, Michele GERMANI1 and Marco MANDOLINI1
1
Università Politecnica delle Marche, via brecce bianche 12, 60131, Ancona (IT)
*Tel.: +39-071-220-4880; fax: +39-071-220-4801. E-mail address: c.favi@univpm.it
Abstract: Conceptual design is a central phase for the generation of the best product configurations. The design freedom allows optimal solutions in terms of assembly, manufacturing, cost and material selection, but a guided decision-making approach based on multi-objective criteria is missing. The goal of this work is to define a framework and a detailed approach for the definition of feasible design options and for the selection of the best one, considering the combination of several production constraints and attributes. The approach is grounded on the concepts of functional basis and module heuristics, used for the definition of product modules, and on the theory of Multi Criteria Decision Making (MCDM) for a mathematical assessment of the best design option. A complex product (the tool-holder carousel of a machine tool) is used as a case study to validate the approach. Product modules have been re-designed and prototyped to efficiently assess the gain in terms of assembly time, manufacturability and costs.
Keywords: Conceptual Design, Multi-objective Design, Multi Criteria Decision
Making, Design to Cost, Design for Manufacturing and Assembly.
1 Introduction
Design-for-X (DfX) methods have been developed in recent years to aid designers
during the design/engineering process for the maximization of specific aspects.
Methods for efficient Design-for-Assembly (DfA) are well-known techniques and
widely used throughout many large industries. DfA can support the reduction of product manufacturing costs and provides much greater benefits than a simple reduction in assembly time [1, 2]. However, these methods are rather laborious and, in most cases, they require a detailed product design or an existing product/prototype. Other approaches investigate the product assemblability starting
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_7
63
from the product functional structure [3, 4]. In this way, the DfA technique can be applied during the conceptual design phase, when decisions greatly affect production costs. Even so, conceptual DfA does not consider manufacturability aspects such as the material selection or the most appropriate process to build components and parts. Furthermore, product design and optimization is a multi-objective activity, not limited to the assembly aspects alone.
In this context, this paper proposes an improvement to overcome the above-mentioned weak points and to optimize the product assemblability as well as the parts manufacturability by taking into account the most cost-effective technical solutions. The main goal of this work is to define a multi-objective design approach which aims to provide a comprehensive analysis of the manufacturing aspects. This is particularly important to avoid design solutions which can be excellent, for example, from the assembly point of view but not cost-efficient in terms of manufacturing costs and investments.
In the following sections, the proposed approach is reported in detail after a brief review of the research background. The general workflow of the proposed approach and its application to a real case study (a tool-holder carousel) are analysed, including a discussion of the results and future improvements.
2 State of the art and research background
The design stage is a long and iterative process for the development of certain
products. Design stage activities can be divided into four main phases: (i) Problem
definition and customer needs analysis, (ii) Conceptual design, (iii) Embodiment
design, and (iv) Detail design. In the first phase, customer requirements are collected and analysed, then, the requirements are translated into product features,
and finally, concepts that can satisfy the requirements are generated and modelled
[5]. It is well-known that, although design costs consume approx. 10% of the total
budget for a new project, typically 80% of manufacturing costs are determined by
the product design [6, 7]. Manufacturing/assembly cost is decided during the design stage and its definition tends to affect the selection of materials, machines and
human resources that are being used in the production process [8].
DfA is an approach which gives the designer a thought process and guidance so
that the product may be developed in a way which favors the assembly process
[9]. In industrial practice, the Boothroyd and Dewhurst (B&D) method is one of the most widespread DfA approaches [2]. Different design solutions can be compared by evaluating the elimination or combination of parts in the assembly and the time needed to execute the assembly operations [10]. The main drawback of this approach is that DfA is applied in the detailed design phase, when much of the design solution has already been identified. Stone et al. [3] define a conceptual DfA method in order to support designers during the early stages of the design process. The approach uses two concepts: the functional basis and the module heuristics [11]. The
functional basis is used to derive a functional model of a product in a standard
formalism, and the module heuristics are applied to the functional model to identify a modular product architecture [12]. The approach has two weak points: (i) the identification of the best manufacturing process for part production and (ii) the selection of the related cost-efficient material.
The selection of the most appropriate manufacturing process depends on a large number of factors, but the most important considerations are shape complexity and material properties [13]. According to Das et al. [14], Design-for-Manufacturing (DfM) is defined as an approach for designing a product in which: (i) the design is quickly transitioned into production, (ii) the product is manufactured at a minimum cost, (iii) the product is manufactured with a minimum effort in terms of processing and handling requirements, and (iv) the manufactured product attains its designed level of quality. DfA and DfM are hardly ever integrated together, and the Design-for-Manufacturing-and-Assembly (DfMA) procedure can typically be broken down into two stages. Initially, DfA is conducted, leading to a simplification of the product structure and an economic selection of materials and processes. After iterating the process, the best design concept is taken forward to DfM, leading to the detailed design of the components for minimum manufacturing costs [15].
Cost estimation is concerned with the prediction of the costs related to a set of activities before they have actually been executed. Cost estimating or Design-to-Cost (DtC) approaches can be broadly classified as intuitive methods, parametric
techniques, variant-based models, and generative cost estimating models [16].
However, the most accurate cost estimates are made using an iterative approach
during the detail design phase [17]. Although DtC is usually applied in the embodiment design phase, or even worse in the detail design phase, to be effective it should
be applied at the same time as DfMA, i.e. in the conceptual design phase [18, 19]. Otherwise, DtC is only an optimization of an already selected design solution.
The only way to overcome the aforementioned issues is a multi-objective approach which takes into account all the production aspects (assemblability, manufacturability, materials, costs, etc.) at the same time. Different mathematical models can be used as solvers for the multi-objective problem; MCDM is one of the
most common approaches for multi-objective problems [20]. The novelty of the proposed approach lies in the application of MCDM in the conceptual design phase to
account for multiple production aspects in the development of complex products.
3 Multi-objective conceptual design approach
In order to describe the proposed multi-objective design approach, some concepts
need to be introduced. The first one is to set out the product modules and properties considering the functional basis and the module heuristics. Then, grounded on
the concept of the morphological matrix it is necessary to define feasible design solutions. Finally, considering the multi-objective approach based on the MCDM theory, suggestions for the product structure simplification and for the selection of
economic materials and manufacturing processes are stated.
Fig. 1 shows the workflow of the proposed multi-objective design approach.
Different target design methodologies (DfX) can be applied early in the product
design concept. In particular, the focus of this research work is related to the production (assembly, manufacturing, material selection and cost) aspects.
Fig. 1: Flow diagram of the proposed multi-objective conceptual design approach
3.1 Product modules, properties definition and design solutions
Through functional analysis and the module heuristic approach, it is possible to determine the number of functions which identify a product and the related flows
(energy, material and signal). As a first step of the conceptual design, the functional analysis breaks the product up into its constituent functions, helping designers and engineers in the definition of the
product functions as well as in the identification of the overall product structure.
The module heuristic identifies the in/out flows of each function. By using this
approach, it is possible to translate the product functions into functional modules.
Functional modules define a conceptual framework of the product and the initial
product configuration. A one-to-one mapping between product functions and
modules is expected, but it is possible that several functions are fulfilled by a single physical module.
Furthermore, the heuristics allow the specific properties of each functional module to be determined. Attributes and properties need to be defined for each module in order to identify the technical and functional aspects which must be guaranteed, and they provide a basis for distinguishing feasible from non-feasible design solutions.
The transition from product modules to potential design solutions (components
or sub-assemblies) is based on the knowledge of specific properties identified during the generation of the product modules. A very helpful tool at this step is the
morphological matrix which can improve the effectiveness of the conceptual analysis and translates functional modules to physical modules such as sub-assemblies
or components. A morphological matrix is traditionally created by listing the identified product modules as rows and, for each module, the possible design options as columns [20]. In a manual engineering design context, the morphological matrix is limited
to the concepts generated by the engineer, although the morphological matrix is
one technique that can be used in conjunction with other design activities (brainstorming processes, knowledge repository analysis, etc.) [21].
In particular, the alternative design options are developed and analyzed based
on the concepts of DfA, DfM and DtC to retrieve, at conceptual level, the best
configuration in terms of costs and productivity. Designer skills, supplier and
stakeholder surveys as well as well-structured and updated knowledge repositories
can help in the definition of the design options suitable to implement the module
under investigation and for the population of the morphological matrix. The morphological matrix finally shows the existing design options for each functional module of a complex system and permits a rapid configuration of the product with the
selection of the best option for a specific module. Design options must be reliable
and compliant with the properties defined in the module assessment.
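The row/column structure described above can be sketched as a simple enumeration: one option per module yields one candidate product configuration. The module names and design options below are hypothetical, loosely echoing the carousel case study:

```python
from itertools import product

# Hypothetical morphological matrix: keys are functional modules (rows),
# values are the candidate design options for each module (columns).
morphological_matrix = {
    "support bracket": ["welded steel structure", "injection-molded plastic piece"],
    "tool gripper":    ["spring clamp", "pneumatic clamp"],
    "rotation drive":  ["stepper motor", "geared DC motor"],
}

# Every product configuration picks one option per module (Cartesian product).
configurations = [
    dict(zip(morphological_matrix, combo))
    for combo in product(*morphological_matrix.values())
]

print(len(configurations))  # 2 * 2 * 2 = 8 candidate concepts
```

In practice most of these combinations are then pruned against the module properties, so only feasible options remain for the multi-objective ranking.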
3.2 Multi-objective approach
The multi-objective approach is the core of the proposed workflow and aims to
balance different aspects of industrial production, such as assembly, materials and
manufacturing processes, taking into account the overall cost as the driver of the design optimization process. The multi-objective approach follows the product
module definition and the classification of design solutions, but it is still part of
the conceptual design phase. In fact, only general information is available in this phase, not specific details about geometry, shape, manufacturing parameters, material designation, etc. The selection of the best design options is made using an MCDM method called TOPSIS (Technique for Order of Preference by
Similarity to Ideal Solution). TOPSIS was first developed by Hwang & Yoon,
and it is attractive in that it requires limited subjective input (the only subjective input needed
from decision makers is the weights) [22]. According to this technique, the best alternative is the one that is nearest to the positive-ideal solution and farthest
from the negative-ideal solution, where the positive-ideal solution maximizes the benefit criteria and minimizes the cost criteria [23]. Using the TOPSIS
method, the different design options are ranked. The TOPSIS method is not time
consuming, as it is easy to implement in a common spreadsheet or in a dedicated software tool. The only required inputs are: (i) the attribute weights (based on company targets and requirements) and (ii) the scores for each design option in relation to
the selected attributes. Obviously, a sensitivity analysis of the results is recommended because of their dependency on the scores and weights assigned during the
evaluation. This issue does not limit the applicability of the approach, but it encourages setting the weights based on the specific targets and implementing a sensitivity
analysis to investigate the influence of each attribute.
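A minimal numerical sketch of the TOPSIS steps just described (vector normalization, weighting, positive/negative ideal solutions, closeness coefficient). The three design options, their scores and the weights are invented for illustration and are not taken from the paper; only the cost-driven weighting mirrors the setup described in Sect. 4:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives with TOPSIS.
    scores : (n_alternatives, n_criteria) decision matrix
    weights: criteria weights (summing to 1)
    benefit: True where higher is better, False for cost-type criteria
    """
    norm = scores / np.linalg.norm(scores, axis=0)      # vector normalization
    v = norm * weights                                   # weighted normalized matrix
    ideal_pos = np.where(benefit, v.max(axis=0), v.min(axis=0))
    ideal_neg = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal_pos, axis=1)        # distance to positive ideal
    d_neg = np.linalg.norm(v - ideal_neg, axis=1)        # distance to negative ideal
    return d_neg / (d_pos + d_neg)                       # closeness: higher is better

# Invented design options scored on assembly, material, manufacturing
# (benefit criteria) and cost (cost-type); the cost weight dominates.
scores = np.array([[7.0, 6.0, 8.0, 30.0],    # option A
                   [5.0, 8.0, 6.0, 22.0],    # option B
                   [8.0, 5.0, 7.0, 35.0]])   # option C
weights = np.array([0.2, 0.2, 0.2, 0.4])
benefit = np.array([True, True, True, False])

closeness = topsis(scores, weights, benefit)
best = int(np.argmax(closeness))   # option B (cheapest) wins under these weights
```

The same computation fits naturally in a spreadsheet, as noted above; a sensitivity analysis amounts to re-running it over perturbed weights.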
4 Case study: A tool-holder carousel of a machine tool
A tool-holder carousel of a machine tool for wood processing and machining has
been analysed. This system is responsible for feeding the tool head with different tools
for specific manufacturing operations (cutting, milling, drilling, etc.). Through functional analysis and the modular approach, several product modules have been
identified in the conceptual design stage. The overall function of this complex system is "feed the machine head with a specific tool". Different design options have
been pointed out for each product module by the use of the morphological matrix.
Alternative design solutions have been analyzed following the multi-objective approach and the TOPSIS methodology. An overview of the implementation of the
approach for the Bracket module is presented in Fig. 2.
Fig. 2: TOPSIS implementation for the ranking of the Bracket module options
Different design options and a rating for each aspect of production (Assembly,
Material, Manufacturing and Cost) have been assessed using the corresponding target design methodologies. Weights for each attribute have been assigned based on
the company targets and requirements. The approach is cost-driven, and for this
reason the maximum weight has been assigned to the cost attribute.
As an educational example, a complete re-design process has been carried out to
accurately compare design alternatives after the conceptual design, i.e. in the
detail design phase. Complete 3D CAD models have been built up for a comprehensive and detailed analysis as well as for method validation. Fig. 3 highlights the results obtained for the Bracket module (Welded structure vs. Plastic piece). The production volume has been roughly estimated at approx. 2500 pieces over 10 years, according to
the average production rate of the machine tool.
Fig. 3: CAD models and features of Bracket module options (Welded structure vs. Plastic piece)
5 Results discussion and concluding remarks
The proposed work aims to develop a multi-objective design approach for a comprehensive analysis of the manufacturing aspects in the conceptual design phase.
The approach is able to support the engineering team in the selection of the optimal
design solution. An overview of the results obtained for the proposed case study
(tool-holder carousel) is presented in Table 1.
Table 1. Main attributes comparison for the tool holder carousel before and after re-design.
                                             Original design    After re-design
Components                                   325 pcs.           123 pcs.
Assembly time                                88 min.            33 min.
Total Cost (material + manuf. + assembly)    359.73             225.74
In particular, the application of this approach yields a cost saving of more than
35% and a reduction of approx. 60% in both assembly time and number of
components. Another important outcome has been the easy integration of the
proposed approach into the traditional design workflow of the company.
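The percentages quoted here can be checked directly from the values in Table 1 (the currency unit of the cost figures is not stated in the paper):

```python
# Sanity check of the savings reported in Table 1 (values from the paper).
original = {"components": 325, "assembly_min": 88, "cost": 359.73}
redesign = {"components": 123, "assembly_min": 33, "cost": 225.74}

saving = {k: (original[k] - redesign[k]) / original[k] for k in original}

# Cost drops by ~37% (consistent with "more than 35%"); components and
# assembly time each drop by ~62% (consistent with "approx. 60%").
print({k: round(v * 100, 1) for k, v in saving.items()})
```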
Future work will include a deeper validation of the method on other case studies
as well as the definition of a framework for implementing the approach in a design tool.
A further step will be to include other production aspects such as
environmental impacts, energy consumption, etc.
References
1. De Fazio T.L., Rhee S.J. and Whitney D.E. Design specific approach to design for assembly (DFA) for complex mechanical assemblies. In IEEE Robotics and Automation, 1999, pp. 869-881.
2. Boothroyd G., Dewhurst P. and Knight W. Product design for manufacture and assembly, 2nd edition, 2002 (Marcel Dekker).
3. Stone R.B. and McAdams D.A. A product architecture-based conceptual DFA technique. Design Studies, 2004, 25, pp. 301-325.
4. Favi C. and Germani M. A method to optimize assemblability of industrial product in early design phase: from product architecture to assembly sequence. International Journal on Interactive Design and Manufacturing, 2012, 6(3), pp. 155-169.
5. Pahl G. and Beitz W. Engineering design: a systematic approach, 2nd edition, 1996 (Springer).
6. Ulrich K.T. and Eppinger S.D. Product design and development, 3rd edition, 2003 (McGraw-Hill Inc.).
7. Huang Q. Design for X: concurrent engineering imperatives, 1996 (Chapman and Hall).
8. Nitesh-Prakash W., Sridhar V.G. and Annamalai K. New product development by DfMA and rapid prototyping. Journal of Engineering and Applied Sciences, 2014, 9, pp. 274-279.
9. Otto K. and Wood K. Product design: techniques in reverse engineering and new product development, 2001 (Prentice Hall).
10. Samy S.N. and ElMaraghy H.A. A model for measuring products assembly complexity. International Journal of Computer Integrated Manufacturing, 2010, 23(11), pp. 1015-1027.
11. Stone R.B., Wood K.L. and Crawford R.H. A heuristic method for identifying modules for product architectures. Design Studies, 2000, 21, pp. 5-31.
12. Dahmus J.B., Gonzalez-Zugasti J.P. and Otto K.N. Modular product architecture. Design Studies, 2001, 22(5), pp. 409-424.
13. Estorilio C. and Simião M.C. Cost reduction of a diesel engine using the DFMA method. Product Management & Development, 2006, 4, pp. 95-103.
14. Das S.K., Datla V. and Samir G. DFQM - An approach for improving the quality of assembled products. International Journal of Production Research, 2000, 38(2), pp. 457-477.
15. Annamalai K., Naiju C.D., Karthik S. and Mohan-Prashanth M. Early cost estimate of product during design stage using design for manufacturing and assembly (DFMA) principles. Advanced Materials Research, 2013, pp. 540-544.
16. Nepal B., Monplaisir L., Singh N. and Yaprak A. Product modularization considering cost and manufacturability of modules. International Journal of Industrial Engineering, 2008, 15(2), pp. 132-142.
17. Hoque A.S.M., Halder P.K., Parvez M.S. and Szecsi T. Integrated manufacturing features and design-for-manufacture guidelines for reducing product cost under CAD/CAM environment. Computers & Industrial Engineering, 2013, 66, pp. 988-1003.
18. Shehab E.M. and Abdalla H.S. Manufacturing cost modelling for concurrent product development. Robotics and Computer Integrated Manufacturing, 2001, 17, pp. 341-353.
19. Durga Prasad K.G., Subbaiah K.W. and Rao K.N. Multi-objective optimization approach for cost management during product design at the conceptual phase. Journal of Industrial Engineering International, 2014, 10(48).
20. Ölvander J., Lundén B. and Gavel H. A computerized optimization framework for the morphological matrix applied to aircraft conceptual design. CAD, 2009, 41, pp. 187-196.
21. Bryant Arnold C.R., Stone R.B. and McAdams D.A. MEMIC: An interactive morphological matrix tool for automated concept generation. In Proceedings of the Industrial Engineering Research Conference, 2008.
22. Hwang C.L. and Yoon K. Multiple attribute decision making: methods and applications, 1981 (Springer-Verlag).
23. Wang Y.J. and Lee H.S. Generalizing TOPSIS for fuzzy multiple-criteria group decision-making. Computers & Mathematics with Applications, 2007, 53, pp. 1762-1772.
Modeling of a three-axes MEMS gyroscope with
feedforward PI quadrature compensation
D. Marano1, A. Cammarata2,∗, G. Fichera2, R. Sinatra2, D. Prati3
1 Department of Engineering "Enzo Ferrari", University of Modena and Reggio Emilia, Italy. E-mail: acamma@diim.unict.it
2 Dipartimento Ingegneria Civile e Architettura, University of Catania, Italy. E-mail: acamma@diim.unict.it, gabriele.fichera@dii.unict.it, rsinatra@dii.unict.it
3 ST Microelectronics, Catania, Italy. E-mail: daniele.prati@st.com
∗ Corresponding author. Tel.: +39-095-738-2403; fax: +39 0931469642. E-mail address: acamma@diim.unict.it
Abstract: The present paper is focused on the theoretical and experimental analysis
of a three-axes MEMS gyroscope, developed by ST Microelectronics, implementing an innovative feedforward PI quadrature compensation architecture. The gyroscope's structure is explained and the equations of motion are written; modal shapes
and frequencies are obtained by finite element simulations. The electrostatic quadrature
compensation strategy is explained, focusing on the design of the quadrature cancellation electrodes. A new quadrature compensation strategy based on a feedforward
PI architecture is introduced in this device to take into account variations of the device parameters during its lifetime. The obtained results show a significant reduction of the
quadrature error, resulting in an improved performance of the device. Fabrication and
test results conclude the work.
Keywords: Quadrature error, MEMS, Gyroscope, FEM modeling, Electrostatic
quadrature compensation, Feedforward PI.
1 Introduction
Gyroscopes are physical sensors that detect and measure the angular rotations of
an object relative to an inertial reference frame. MEMS gyroscopes are typically
employed for motion detection (e.g. in consumer electronics and automotive control systems), motion stabilization and control (e.g. antenna stabilization systems,
3-axis gimbals for UAV cameras) [1]. Combining MEMS gyroscopes, accelerometers and magnetometers on all three axes yields an inertial measurement unit (IMU);
the addition of an on-board processing system computing attitude and heading leads
to an AHRS (attitude and heading reference system), a highly reliable device in common use in commercial and business aircraft. Measurement of the angular position
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_8
in rate gyroscopes can be achieved by numerical integration of the gyroscope’s output; the time integration of the output signal, together with the associated errors and
noise, leads to orientation angle drifts [2]-[4]. Among all the major error sources, the
undesired sense-mode vibration resulting from the coupling of drive-mode displacement and sense mode of the gyroscope is the mechanical quadrature signal [5]-[11].
Since its magnitude can reach thousands of degrees per second, the measurement of
the low electric signal generated by the very small Coriolis force in the presence of a much
bigger electric signal becomes a difficult problem [12]. Several techniques, based either on mechanical or electronic principles, have been proposed for quadrature error
compensation; among all, an efficient approach able to provide a complete quadrature error cancellation is the electrostatic quadrature compensation. This approach is
based on the electromechanical interaction between properly designed mechanical
electrodes and the moving mass of the gyroscope: electrostatic forces, mechanically
balancing quadrature forces, are generated biasing electrodes with differential dc
voltages [13]-[18]. In most devices, the magnitude of biasing dc voltages is determined in order to nullify an experimentally measured quadrature error. In this way,
however, it is not possible to change the dc voltages during the lifetime of the device to accommodate variations of the structural device properties. A possible solution to
this problem is addressed in the present paper, where an innovative feed-forward PI
quadrature compensation architecture implemented on a novel three-axes MEMS
gyroscope, manufactured by ST Microelectronics, is discussed.
2 Gyroscope Structure And Dynamics
2.1 Structure
The three-axes Coriolis Vibrating Gyroscope presented in the following is a compact
device, manufactured by ST Microelectronics, combining a triple tuning-fork structure with a single vibrating element. The device is fabricated using ThELMA-ISOX
(Thick Epipoly Layer for Microactuators and Accelerometers) technology platform,
a surface micromachining process proprietary of ST Microelectronics. This platform
makes it possible to obtain suspended seismic masses that are electrically isolated but mechanically
coupled, with a high and controlled vacuum inside the cavity of the device. The structure (Fig. 1) is composed of four suspended plates (M1,2,3,4) coupled by four folded
springs, elastically connected to a central anchor by coupling springs. The fundamental vibration mode (driving mode) consists of a planar oscillatory radial motion
of the plates: globally, the structure periodically expands and contracts, similarly to
a ”beating heart”. Plates M1,2 are actuated by a set of comb-finger electrodes and
the motion is transmitted to the secondary plates M3,4 by the folded springs at the
corners. The sensing modes of the device consist of two out-of-plane modes (Roll
and Pitch) characterized by counter-phase oscillation of plates M1,2 (M3,4 ) and one
in-plane counter-phase motion of the yaw plates (M3,4 ) (Yaw mode). Rotation of
yaw plates (M3,4 ) is measured by a set of parallel-plate electrodes, P P1,2 , located
on the yaw plates. Pitch and roll angular rotations are measured sensing the capacitive variations between each plate and an electrode placed below (respectively
R1,2 and P1,2 for roll and pitch masses); the driving mode vibration is measured by
additional comb-finger electrodes SD1,2 . Electrostatic quadrature compensation is
implemented on Roll (Quadrature Compensation Roll, QCR) and Pitch axis (QCP)
by means of electrodes placed under each moving mass. The yaw-axis quadrature compensation electrodes (QCY) are slightly different from those of the other axes, since
they are not placed underneath the moving mass and their height is equal to that of the gyroscope's rotor mass.
Fig. 1: Case-study gyroscope layout
2.2 Dynamics
The gyroscope’s equations of motion are derived in the general case in [4, 19]. The
coordinate-system model shown in Fig. 2 consists of three coordinate frames respectively defined by their unit vectors Σi = [X, Y, Z]; Σp = [x, y, z]; Σ =
[x̂, ŷ, ẑ]. The frame Σi represents the inertial reference system, Σp is the inertial
platform frame, Σ is a body-frame with origin at a point P of a moving body (for
a 3-axes gyroscope the considered body is one of the four moving suspended plates
and the platform frame is usually assigned to the fixed silicon substrate). For a decoupled three axes gyroscope simplifying assumptions (constant angular rate inputs,
operating frequency of the gyroscope much higher than angular rate frequencies)
can be made [19, 20], and the equations of motion (EoM) become:
mr̈x + cx ṙx + kx rx = −2mΩy ṙz + 2mΩz ṙy + FDx    (1a)
mr̈y + cy ṙy + ky ry = 2mΩx ṙz − 2mΩz ṙx + FDy    (1b)
mr̈z + cz ṙz + kz rz = −2mΩx ṙy + 2mΩy ṙx + FDz    (1c)
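A minimal time-stepping sketch of Eqs. (1a)-(1c): with a drive force on x and an angular rate about z, the Coriolis terms transfer energy into the sense (y) axis. All parameter values are illustrative (real MEMS parameters differ by many orders of magnitude), and semi-implicit Euler stands in for a proper ODE solver:

```python
import numpy as np

m = 1.0                                 # Coriolis mass (illustrative units)
c = np.array([0.1, 0.1, 0.1])           # damping coefficients cx, cy, cz
k = np.array([1.0, 1.0, 1.0])           # stiffness coefficients kx, ky, kz
omega = np.array([0.0, 0.0, 0.5])       # angular rate input (Omega_x, y, z)

def accel(r, v, t):
    drive = np.array([np.sin(t), 0.0, 0.0])      # drive force on x only
    coriolis = -2.0 * m * np.cross(omega, v)     # -2 m (Omega x v), as in (1a)-(1c)
    return (drive + coriolis - c * v - k * r) / m

dt = 1e-3
r = np.zeros(3)                         # position [rx, ry, rz]
v = np.zeros(3)                         # velocity
for step in range(10000):
    a = accel(r, v, step * dt)
    v = v + a * dt                      # semi-implicit Euler step
    r = r + v * dt

# With Omega_z only, drive motion couples into y; z stays at rest.
```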
Fig. 2: Coordinate system model for the derivation of kinematic equations
2.2.1 Modal analysis
The device eigenfrequencies are determined by FEM simulation (Fig. 3). As imposed by the mechanical design, the fundamental mode of vibration consists of an in-plane inward/outward radial motion of the plates in which the structure cyclically
expands and contracts. Several spurious modes at higher frequencies, not reported
here for brevity, have been also identified.
3 Electrostatic quadrature cancellation
3.1 Quadrature force
The dynamics equations of a linear yaw vibrating gyroscope can be expressed, considering the off-diagonal entries of the mechanical stiffness matrix, as
Fig. 3: Fundamental vibration modes (drive, pitch, yaw, roll)
[m 0; 0 m] p̈(t) + [dx 0; 0 dy] ṗ(t) + [kx kxy; kyx ky] p(t) = [Fd; FC]    (2)
where p(t) = [x(t), y(t)]T is the position vector of the mass in drive and sense
direction, m represents the Coriolis mass, dx (dy ) and kx (ky ) represent the damping
and stiffness along the X-axis (Y-axis); kxy (kyx ) are the cross coupling stiffness
terms bringing the quadrature vibration response; Fd is the driving force and FC is
the Coriolis force. The dynamic equation in sense direction can be expressed as
mÿ + dy ẏ + ky y = FC + Fq    (3)
where FC = −2mΩz ẋ is the Coriolis force and Fq = −kyx x is the quadrature force. The Coriolis mass is usually actuated into resonant vibration with constant amplitude in drive direction, thus the drive-mode position can be expressed
by x(t) = Ax sin(ωx t). Introducing the sinusoidal drive movement, Coriolis and
quadrature force can be expressed as
FC = 2mΩz ωx Ax cos(ωx t),    Fq = −kyx Ax sin(ωx t)    (4)
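Equation (4) shows that the Coriolis force is in phase with the drive velocity (cosine) while the quadrature force is in phase with the drive position (sine). The sketch below, with invented parameter values, uses synchronous demodulation (mixing with in-phase and quadrature references, then averaging over whole drive cycles) to recover the two amplitudes separately:

```python
import numpy as np

# Invented values, not device parameters.
m, omega_z, A_x, k_yx = 1e-9, 1.0, 1e-6, 1e-3
f_drive = 20e3                          # drive frequency [Hz] (illustrative)
omega_x = 2 * np.pi * f_drive
t = np.arange(64 * 200) / (64 * f_drive)   # 200 drive cycles, 64 samples each

F_C = 2*m*omega_z*omega_x*A_x * np.cos(omega_x*t)   # Coriolis force, Eq. (4)
F_q = -k_yx*A_x * np.sin(omega_x*t)                 # quadrature force, Eq. (4)
sense = F_C + F_q                                   # total sense-axis force

# Mixing with cos/sin references and averaging over whole cycles
# separates the in-phase (Coriolis) and quadrature amplitudes.
coriolis_amp   = 2 * np.mean(sense * np.cos(omega_x*t))   # ~ 2 m Omega_z w_x A_x
quadrature_amp = 2 * np.mean(sense * np.sin(omega_x*t))   # ~ -k_yx A_x
```

This is the separation that makes a large quadrature signal tolerable in principle, although, as noted in the introduction, phase errors in real demodulators still motivate cancelling the quadrature at the source.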
3.2 Quadrature cancellation electrodes design
Quadrature compensation electrodes for out-of-plane Roll (Pitch) motion are shown
in Fig. 4; the electrostatic force generated by the i-th electrode is given by
FR,Pi = ± (1/2) ε0 (H0/2 ± Ax sin(ωx t)) L0 (V ± ΔV)² / g²    (5)
where Ax sin(ωx t) = x(t) is the drive movement, H0 and L0 are respectively width
and length of quadrature compensation electrodes and g is the air gap. The voltage
sign is chosen either positive (V + ΔV ) or negative (V − ΔV ) according to the
electrode biasing, whereas the x sign is chosen according to the overlap variation
between the proof mass and the quadrature compensation electrodes (QCE), as shown in
Fig. 4. The total force is obtained by multiplying the force generated by a single
electrode by the number n of electrodes: Ftot = Fi · n.
Fig. 4: Roll (Pitch) quadrature compensation electrode; detail of Fig. 1 (QCR and QCP electrodes)
The quadrature force FQ (Eq. (4)) is balanced by the drive dependent component
of the electrostatic force, properly tuning the ΔV potential applied to the pitch (roll)
quadrature compensation electrodes:
kyx Ax sin(ωx t) = (1/2) ε0 (Ax sin(ωx t) L0 / g²) (V ± ΔV)²    (6)
Quadrature compensation electrodes for the in-plane yaw motion are shown in Fig.
5. The electrostatic force generated by the i-th electrode is given by
FYi = ± (1/2) ε0 h (LOV ± x) (V ± ΔV)² / (g ± y)²    (7)
where h denotes the electrodes height and g the air gap between the moving mass
and the quadrature compensation electrode.
Design parameters of quadrature cancellation electrodes for the three-axes gyro
are reported in Tab. 1 respectively for roll (pitch) and yaw electrodes. Quadrature
compensation forces are regulated by tuning the differential voltage ΔV such that the
Fig. 5: Yaw quadrature compensation electrode; detail of QCY2,3 electrodes in Fig. 1
Table 1: Quadrature compensation electrodes parameters
Axis           g [μm]    H0 [μm]    L0 [μm]    LOV [μm]    h [μm]
Roll (Pitch)   1.2       20         1200       -           -
Yaw            1.1       -          -          25          24
residual quadrature is canceled out; the ΔV value corresponding to the minimum
residual quadrature is denoted by ΔVOpt . Residual quadrature signals are reported
in Tab. 2.
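Under the reconstruction of Eq. (6), the Ax sin(ωx t) factors cancel and the balancing bias can be solved in closed form. The sketch below uses the roll-electrode geometry from Tab. 1, while kyx and V are invented, and balancing a single electrode is a simplification of the differential biasing scheme:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def delta_v_for_balance(k_yx, V, L0, g):
    # Solve (1/2) * eps0 * L0 * (V + dV)^2 / g^2 = k_yx for dV, i.e. make
    # the drive-dependent electrostatic stiffness cancel the mechanical
    # cross-coupling stiffness k_yx (single-electrode balance, Eq. (6)).
    return g * math.sqrt(2.0 * k_yx / (EPS0 * L0)) - V

# Geometry from Tab. 1 (roll electrodes); k_yx [N/m] and V are invented.
dV = delta_v_for_balance(k_yx=1e-3, V=5.0, L0=1200e-6, g=1.2e-6)
```

In the actual device, ΔVOpt is instead found experimentally by sweeping the bias and interpolating to minimum residual quadrature, as described in Sect. 4.1.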
3.3 Feedforward PI architecture
Quadrature is measured for each device during the electric wafer sorting test, and the voltage variation ΔVOpt is set for each device during the calibration phase. A serious limitation of this approach is that the structural parameters of a device can change unpredictably during its lifetime, causing variations of the quadrature error. The value of ΔVOpt
is therefore no longer optimal for the new operating conditions. A proposed solution to this problem is to adopt a closed-loop architecture, based on a feedforward PI scheme in which the optimal ΔVOpt is the feedforward action and a PI controller
compensates for lifetime quadrature variations. This procedure results in a further
optimization of residual quadrature values, as shown in Tab. 2.
Table 2: Residual quadrature results
Axis     Residual quadrature OL [Nm]    Residual quadrature CL [Nm]
Pitch    6.46 · 10−12                   2.04 · 10−16
Roll     9.09 · 10−12                   2.87 · 10−16
Yaw      3.66 · 10−13                   1.15 · 10−17
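A toy discrete-time sketch of the feedforward-plus-PI idea: the calibrated ΔVOpt acts as the feedforward term, and a PI correction trims the residual quadrature as the optimum drifts over the device lifetime. The linear quadrature model, the drift value, and the gains are all invented:

```python
def make_pi(kp, ki, dt):
    """Discrete PI controller as a closure (illustrative gains only)."""
    state = {"i": 0.0}
    def pi(error):
        state["i"] += error * dt          # integral of the error
        return kp * error + ki * state["i"]
    return pi

dV_opt = 2.0        # feedforward term from EWS calibration (arbitrary)
drifted_opt = 2.3   # optimum after parameter drift during lifetime (arbitrary)

def quadrature(dV):
    # Toy linear model of residual quadrature vs. bias voltage [Nm].
    return 1e-12 * (drifted_opt - dV)

pi = make_pi(kp=1e10, ki=1e13, dt=1e-3)   # gains scaled to the 1e-12 toy model
dV = dV_opt
for _ in range(2000):
    dV = dV_opt + pi(quadrature(dV))      # feedforward + PI correction
# dV settles near the drifted optimum, driving the residual toward zero.
```

The feedforward term keeps the initial error small (the loop starts from the calibrated point), while the integral action removes the steady-state residual, matching the closed-loop columns of Tab. 2 in spirit.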
4 Fabrication and test results
All individual devices present on the wafer are tested for functional defects by electric wafer sorting (EWS). The quadrature amplitude is evaluated for each gyroscope
of the wafer, as shown in Fig. 6.
Fig. 6: EWS Testing: quadrature distribution (Yaw axis) on wafer
4.1 Experimental quadrature cancellation
The quadrature compensation strategy has been electrically simulated for an isolated
device on the wafer. By applying a differential dc voltage to the quadrature compensation electrodes, the variation of the quadrature error is observed and the ΔVOpt value is obtained
by interpolation; results for the roll axis are shown in Fig. 7.
Fig. 7: Residual quadrature amplitude (Roll axis) for different voltages applied to Roll quadrature
cancellation electrodes
5 Conclusion
In this paper a theoretical and experimental analysis of a three-axes MEMS gyroscope, developed by ST Microelectronics, has been presented. Exploiting the equations of motion for a 3-DoF gyroscope structure provided an estimation of the
drive and sense motion amplitudes. Natural mode shapes and frequencies of the device have been obtained by finite element simulations to characterize the device.
Equations for the design of the quadrature compensation electrodes have been derived,
and the residual quadrature calculated with the open-loop architecture. A new quadrature
compensation strategy, based on an innovative feedforward PI architecture, accommodating changes of the device parameters during the device lifetime, has been introduced and the results discussed. Finally, fabrication details and measurement results of
test devices have been reported.
References
1. V. Kaajakari, Practical MEMS, Small Gear Publishing, Las Vegas, Nevada, 2009.
2. M. Saukoski, L. Aaltonen, K.A.I. Halonen, Zero-rate output and quadrature compensation in vibratory MEMS gyroscopes, IEEE Sensors Journal, vol. 7, no. 12, December 2007.
3. B.R. Johnson, E. Cabuz, H.B. French, and R. Supino, Development of a MEMS gyroscope for northfinding applications, in Proc. PLANS, Indian Wells, CA, May 2010, pp. 168-170.
4. V. Kempe, Inertial MEMS, Principles and Practice, Cambridge University Press, 2011.
5. A.S. Phani, A.A. Seshia, M. Palaniapan, R.T. Howe, and J.A. Yasaitis, Modal coupling in micromechanical vibratory rate gyroscopes, IEEE Sensors J., vol. 6, no. 5, pp. 1144-1152, Oct. 2006.
6. H. Xie and G.K. Fedder, Integrated microelectromechanical gyroscopes, J. Aerosp. Eng., vol. 16, no. 2, pp. 65-75, Apr. 2003.
7. W.A. Clark, R.T. Howe, and R. Horowitz, Surface micromachined Z-axis vibratory rate gyroscope, in Tech. Dig. Solid-State Sensor and Actuator Workshop, Hilton Head Island, SC, USA, Jun. 1996, pp. 283-287.
8. A. Cammarata and G. Petrone, Coupled fluid-dynamical and structural analysis of a monoaxial MEMS accelerometer, The International Journal of Multiphysics, 7.2 (2013): 115-124.
9. S. Pirrotta, R. Sinatra, and A. Meschini, A novel simulation model for ring type ultrasonic motor, Meccanica, 42.2 (2007): 127-139.
10. M.S. Weinberg and A. Kourepenis, Error sources in in-plane silicon tuning fork MEMS gyroscopes, J. Microelectromech. Syst., vol. 15, no. 3, pp. 479-491, Jun. 2006.
11. M. Saukoski, System and circuit design for a capacitive MEMS gyroscope, Doctoral Dissertation, Helsinki University of Technology.
12. R. Antonello, R. Oboe, L. Prandi, C. Caminada, and F. Biganzoli, Open loop compensation of the quadrature error in MEMS vibrating gyroscopes, IEEE Sens. J., vol. 7, no. 12, pp. 1639-1652, Dec. 2007.
13. Y. Ni, H. Li, and L. Huang, Design and application of quadrature compensation patterns in bulk silicon micro-gyroscopes, Sensors, 14.11 (2014): 20419-20438.
14. W.A. Clark and R.T. Howe, Surface micromachined z-axis vibratory rate gyroscope, in Proc. Solid-State Sens., Actuators, Microsyst. Workshop, Hilton Head Island, SC, Jun. 1996, pp. 283-287.
15. E. Tatar, S.E. Alper and T. Akin, Quadrature error compensation and corresponding effects on the performance of fully decoupled MEMS gyroscopes, IEEE J. of Microelectromechanical Systems, vol. 21, no. 3, June 2012.
16. A. Sharma, M.F. Zaman, and F. Ayazi, A sub 0.2°/hr bias drift micromechanical gyroscope with automatic CMOS mode-matching, IEEE J. of Solid-State Circuits, vol. 44, no. 5, pp. 1593-1608, May 2009.
17. B. Chaumet, B. Leverrier, C. Rougeot, and S. Bouyat, A new silicon tuning fork gyroscope for aerospace applications, in Proc. Symp. Gyro Technol., Karlsruhe, Germany, Sep. 2009, pp. 1.1-1.13.
18. M.S. Weinberg and A. Kourepenis, Error sources in in-plane silicon tuning-fork MEMS gyroscopes, Journal of Microelectromechanical Systems, vol. 15, no. 3, June 2006, pp. 479-491.
19. C. Acar and A. Shkel, MEMS Vibratory Gyroscopes, Structural Approaches to Improve Robustness, Springer, 2008.
20. C. Acar and A.M. Shkel, Nonresonant micromachined gyroscopes with structural mode-decoupling, IEEE Sensors Journal, 3(4), 497-506, 2003.
A disassembly Sequence Planning Approach for
maintenance
Maroua Kheder1,*, Moez Trigui1 and Nizar Aifaoui1
Mechanical Engineering Laboratory, National Engineering School of Monastir, University of
Monastir, Av Ibn El jazzar Monastir, Tunisia
* Corresponding author. Tel. 0021658398409; Fax: 0021673500514. E-mail address:
maroua.kheder@gmail.com
Abstract: In recent years, more and more research has been conducted in close collaboration with manufacturers to design robust and profitable dismantling systems. Thus, engineers and designers of new products have to consider disassembly constraints and specifications during the design phase, not only in the context of the end of life but, more generally, across the whole product life cycle. Consequently, the optimization of the disassembly process of complex products is essential in the case of preventive maintenance. In fact, the Disassembly Sequence Plan (DSP), which is among the combinatorial problems with hard constraints in practical engineering, is an NP-hard problem. In this research work, an automated DSP process based on a metaheuristic method named "Ant Colony Optimization" is developed. Beginning with a Computer Aided Design (CAD) model, a collision analysis is performed to identify all possible interferences during the components' motion; an interference matrix is then generated to identify dynamically the disassemblable parts and to ensure the feasibility of the disassembly operations. The novelty of the developed approach lies in the introduction of new criteria, such as the maintainability of the worn component, alongside several other criteria such as part volume, tool changes and disassembly direction changes. Finally, to highlight the performance of the developed approach, a software tool is implemented and an industrial case is studied. The obtained results prove the ability of these criteria to identify a feasible DSP in a short computation time.
Keywords: Disassembly Sequence Plan, Computer Aided Design, Interference
Analysis, Optimization, Ant Colony algorithm, Maintenance.
1
Introduction
Maintenance requires the replacement of failed components; the removal and reassembly of these components take up a large proportion of the time and cost of a maintenance task. Indeed, Preventive Maintenance (PM) refers to the work carried out to restore the degraded performance of a system and to lessen the likelihood of it failing. It is important to note that the removal or dismantling of parts requires
maintenance engineers to identify a feasible and near optimal disassembly sequence before carrying out the disassembly operations.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_9
For this reason, in a manufacturing system, DSP takes an important place in the life phase of a product; in this way it has gained a great deal of attention from designers and researchers [1].
Chung et al. treated the problem of selective DSP based on the Wave Propagation (WP) method, which focuses on the topological disassemblability of parts [2]. Moreover, the capability to automatically generate an efficient and feasible DSP is still a topic to be improved. In the classification of optimization problems, DSP is considered among the NP-hard combinatorial optimization problems with hard constraints in practical engineering. The increasing number of components enlarges the space of disassembly solutions, which becomes more and more complex [3]. To surmount this difficulty, metaheuristic methods seem to be the most suitable for the DSP problem, especially the family of swarm intelligence methods, such as Ant Colony Optimization (ACO).
ACO is inspired by the foraging behavior of natural ants; these algorithms are characterized by a type of indirect coordination that relies on the environment to stimulate the performance of subsequent actions by the same or other agents, which is called stigmergy [4]. Wang et al. applied ACO to selective disassembly planning by appointing the target list of components to be repaired [5]. Indeed, ACO has been used in recent years as a powerful technique to solve complex NP-hard combinatorial optimization problems [6-7]. In this work, we are particularly concerned with the ACO method to optimize the DSP of a CAD assembly in a context of preventive maintenance. The remainder of this paper is organized as follows. First, the optimal DSP by ACO is formulated. Then, beginning with a CAD model, the approach of geometric precedence feasibility in disassembly sequences is presented. The treated example explains the novel criteria used to generate an optimum DSP for preventive maintenance. The developed approach considers several criteria such as part volume, tool changes, disassembly direction changes and the maintainability of the worn component. Finally, an academic case study is presented to illustrate the efficiency of the proposed method.
2
Ant Colony Research for DSP
2.1
Flow chart
The main goal of this approach is to exploit historic and heuristic information to
construct candidate solutions and fold the information learned from constructing
solutions into the history. The stages of ACO can be structured in a flow chart presented in Figure 1.
2.2
Free part search
The assembly model created by CAD systems encloses much appropriate data which can be useful for the DSP problem, such as the data related to the parts and the data associated with the assembly constraints between parts. Based on the work of Ben Hadj et al. [8], which proposed a Mate and Mate In Place Data Extraction (MMIDE) method, an exploration of the CAD data leads to the elaboration of the interference matrix [I]k along the +k-axis direction, with k ∈ {+X, +Y, +Z}. In order to explain all the algorithm stages, an illustrative example of a belt tightener is treated. Figure 2 and Table 1 present the treated mechanism, which is composed of 7 parts and needs three tools G1, G2 and G3 to be disassembled.
[Flow chart: Begin → Initialize the ACO parameters → Locate ants randomly at primary parts → Determine probabilistically which part is to be selected next → Move to the next part and delete it from the interference matrix → (loop until all parts have been selected) → Evaluate all solutions → Update the pheromone → (loop until the termination condition is satisfied) → Optimal solution found → End]
Fig. 1. Flow chart of the Ant Colony Algorithm for DSP.
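The stages of Fig. 1 map onto a generic ACO loop. The sketch below is an illustrative skeleton only, not the authors' Matlab implementation; the callback names (candidates, probability, evaluate, update_pheromone) are assumptions introduced here:

```python
import random

def aco_skeleton(parts, n_ants, n_iterations,
                 candidates, probability, evaluate, update_pheromone):
    """ACO loop mirroring Fig. 1: each ant builds a complete sequence part by
    part, all solutions are evaluated, and the pheromone is updated every
    iteration until the termination condition (here, an iteration budget)."""
    best_seq, best_score = None, float("-inf")
    for _ in range(n_iterations):
        solutions = []
        for _ in range(n_ants):
            remaining = list(parts)
            seq = []
            while remaining:
                # determine probabilistically which part is selected next
                cands = candidates(seq, remaining)
                weights = [probability(seq, c) for c in cands]
                nxt = random.choices(cands, weights=weights)[0]
                # move to the next part and delete it from the candidates
                seq.append(nxt)
                remaining.remove(nxt)
            solutions.append(seq)
        for s in solutions:               # evaluate all solutions
            score = evaluate(s)
            if score > best_score:
                best_seq, best_score = s, score
        update_pheromone(solutions)       # pheromone update
    return best_seq, best_score
```

With problem-specific callbacks (free-part candidate generation, the state-transition rule of Eq. (2), the objective function of Eq. (4)), this skeleton reproduces the structure of the flow chart.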
[CAD model with callout labels for parts 1-7]
Fig. 2. The CAD model of belt tightener.
Table 1. Component list of belt tightener and its characteristics.

Component   Name              Maintainability   Tool   Volume (mm3) x 10^5
1           Shaft             2                 G1     1.05
2           Frame             1                 G1     4.11
3           Pad               3                 G2     0.327
4           Bearing spacing   1                 G1     0.253
5           Pulley            2                 G1     2.97
6           Nut HEX           1                 G3     0.177
7           Screw HEX         2                 G3     0.167
For the illustrative example, the interference matrices in the three directions (+X, +Y, +Z) are given by:
[I]+z =
     P1 P2 P3 P4 P5 P6 P7
P1 [ 0  0  0  0  0  0  0 ]
P2 [ 1  0  1  1  1  0  1 ]
P3 [ 1  0  0  0  1  0  0 ]
P4 [ 1  0  1  0  1  0  0 ]
P5 [ 1  0  0  0  0  0  0 ]
P6 [ 1  1  1  1  1  0  1 ]
P7 [ 1  1  1  1  1  0  0 ]

[I]+x =
     P1 P2 P3 P4 P5 P6 P7
P1 [ 0  1  1  1  1  1  1 ]
P2 [ 1  0  0  1  0  1  1 ]
P3 [ 1  0  0  0  1  0  0 ]
P4 [ 1  0  0  0  0  0  0 ]
P5 [ 1  0  1  0  0  0  0 ]
P6 [ 1  0  0  0  0  0  0 ]
P7 [ 1  1  0  0  0  0  0 ]

[I]+y =
     P1 P2 P3 P4 P5 P6 P7
P1 [ 0  1  1  1  1  1  1 ]
P2 [ 1  0  0  0  0  0  1 ]
P3 [ 1  0  0  0  1  0  0 ]
P4 [ 1  0  0  0  0  0  0 ]
P5 [ 1  0  1  0  0  0  0 ]
P6 [ 1  0  0  0  0  0  0 ]
P7 [ 0  0  0  0  0  0  0 ]
(1)
where
- P1...PN represent the N parts of the assembly;
- I_ml is equal to 1 if there is interference between part m and part l when disassembling along the +k-axis direction, otherwise it is equal to 0.
The hard task of DSP is how to detect a possible sequence without any collision among the disassembly operations. In this work, the generation of a feasible DSP is essentially based on the free part concept, which consists of checking the elements of the [I]k matrices and identifying a Free Part (FPm) along the +k-axis or the -k-axis direction. In fact, if a component Pm of an assembly does not interfere with another component Pl in the direction of the +k-axis, the component Pm can be disassembled freely in the direction of the +k-axis. If [I]k is the interference matrix in the direction of +k, the transpose matrix [I]kT represents the interferences along the opposite direction -k. This interesting property allows limiting the component translations to 3 main directions during the CAD stage, although the information regarding the 6 directions of disassembly is obtained. By using the approach described above, we note that part 1, part 6 and part 7 present no interference with any other component along their disassembly directions, respectively +Z, -Z and +Y. Consequently, the free parts detected for the illustrative example are (1, +Z), (6, -Z) and (7, +Y); they correspond to an all-zero row or column in the interference matrices.
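The free-part rule can be sketched directly: a part is free along +k when its row in [I]k is all zeros over the remaining parts, and free along -k when its column is (the transpose property). The matrices below are the belt-tightener matrices of Eq. (1) as reconstructed here, so treat the individual entries as assumptions:

```python
# Interference matrices of Eq. (1) (rows/columns = parts 1..7);
# entry [m-1][l-1] = 1 if part m collides with part l when moved along +k.
I = {
    "+Z": [[0,0,0,0,0,0,0], [1,0,1,1,1,0,1], [1,0,0,0,1,0,0],
           [1,0,1,0,1,0,0], [1,0,0,0,0,0,0], [1,1,1,1,1,0,1],
           [1,1,1,1,1,0,0]],
    "+X": [[0,1,1,1,1,1,1], [1,0,0,1,0,1,1], [1,0,0,0,1,0,0],
           [1,0,0,0,0,0,0], [1,0,1,0,0,0,0], [1,0,0,0,0,0,0],
           [1,1,0,0,0,0,0]],
    "+Y": [[0,1,1,1,1,1,1], [1,0,0,0,0,0,1], [1,0,0,0,1,0,0],
           [1,0,0,0,0,0,0], [1,0,1,0,0,0,0], [1,0,0,0,0,0,0],
           [0,0,0,0,0,0,0]],
}

def free_parts(I, remaining):
    """Return {(part, direction)}: part m is free along +k if row m of [I]k is
    all zeros over the remaining parts, and free along -k if column m is
    (the transpose property of the interference matrix)."""
    free = set()
    for k, M in I.items():
        for m in remaining:
            if all(M[m-1][l-1] == 0 for l in remaining if l != m):
                free.add((m, k))
            if all(M[l-1][m-1] == 0 for l in remaining if l != m):
                free.add((m, "-" + k[1]))
    return free
```

With all seven parts present, this yields exactly the free parts quoted in the text: (1, +Z), (6, -Z) and (7, +Y).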
2.3
Feature selection with ACO and solution construction
As mentioned in the introduction of ACO, the main objective of the ant is finding the shortest path, which in our case is equivalent to the optimal disassembly sequence plan with a minimal cost. To construct its solution, ant k is summoned to select from part m the next part l to visit, based on the probabilistic state-transition rule Pk(m, l):
Pk(m,l) = [τ(m,l)]^α · [η(m,l)]^β / Σ(u ∈ Jk(m)) [τ(m,u)]^α · [η(m,u)]^β   if l ∈ Jk(m), and Pk(m,l) = 0 otherwise   (2)
The probability depends, firstly, on the pheromone concentration τ(m,l) of the path, which corresponds to the positive feedback of the track; in this study, the pheromone matrix has size (6n x 6n). Secondly, it depends on the heuristic information η(m,l), which combines the criteria of disassembly direction change and tool change from part m to part l. The heuristic information matrix [η] also has size (6n x 6n), and its expression is computed as follows:
η(m,l) = w1·d(m,l) + w2·t(m,l)   (3)
where:
d(m,l) is an integer representing the direction change between part m and part l, which can take the following values:
- 2: if there is no change between two consecutive parts;
- 1: if there is a change of 90° between two consecutive parts;
- 0: if there is a change of 180° between two consecutive parts.
t(m,l) is an integer corresponding to the tool change between part m and part l, which can take the following values:
- 0: if there is no change of tools between two successive parts;
- 1: if a change of tools is needed between two successive parts.
w1 and w2 represent two weight coefficients, and α and β are two parameters which determine respectively the relative influence of the pheromone trail and of the heuristic information. Jk(m) is the complete candidate list generated dynamically using the interference matrix after part m has been removed. The transition from part m to part l is based on roulette wheel selection to avoid premature convergence.
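Equations (2) and (3) and the roulette-wheel selection can be sketched as follows; τ and η are passed as lookup callables, and the default weights w1 = w2 = 1.0 are placeholders, not values from the paper:

```python
import random

def transition_probabilities(m, candidates, tau, eta_fn, alpha, beta):
    """State-transition rule of Eq. (2): the probability of moving from part m
    to part l is proportional to tau(m,l)^alpha * eta(m,l)^beta over the
    candidate list Jk(m)."""
    weights = {l: (tau(m, l) ** alpha) * (eta_fn(m, l) ** beta)
               for l in candidates}
    total = sum(weights.values())
    return {l: w / total for l, w in weights.items()}

def roulette_select(probs):
    """Roulette-wheel selection, used to avoid premature convergence."""
    parts = list(probs)
    return random.choices(parts, weights=[probs[p] for p in parts])[0]

def eta(d, t, w1=1.0, w2=1.0):
    """Heuristic information of Eq. (3): eta(m,l) = w1*d(m,l) + w2*t(m,l),
    with d in {0, 1, 2} (direction change) and t in {0, 1} (tool change)."""
    return w1 * d + w2 * t
```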
3
Optimization of DSP
The optimization of DSP is a multi-objective problem, so it is necessary to introduce and integrate more objectives that can be automatically quantified. Thus, the optimal disassembly sequence considers four objectives: the maintenance of worn parts, the disassembly direction change, the disassembly tool change, and the part volume. The purpose is to obtain an optimum DSP by disassembling the smaller parts first, disassembling the maximum number of parts in the same direction without changing the tool, and easing the access to remove the defective components [9]. OF is the objective function which represents the quality of the DSP, given as follows:
OF = max N·(γ·D + δ/(1 + T) + μ·M + ψ·V)   (4)
where

M = Σ(k=1..N) [m_k / Σ_i m_i] · (N − k + 1)        V = Σ(i=1..N) [v_i / Σ_j v_j] · (N − i + 1)

D = Σ(i=1..N−1) d(p_i, p_i+1)                      T = Σ(i=1..N−1) t(p_i, p_i+1)
- N is the number of parts in the mechanism;
- M is the relative maintenance factor of the sequence, where m_i can take the following values:
  - 1: if no maintenance of component i is needed;
  - 2: if a corrective maintenance of component i is needed;
  - 3: if a preventive maintenance of component i is needed;
- V is the relative volume of each component in the mechanism;
- γ, δ, μ and ψ represent weight coefficients that can be chosen according to the objectives of the designer;
- D and T represent respectively the total value of the direction changes and the total value of the tool changes of the disassembly sequence, where p_i and p_i+1 are two successively disassembled parts.
In the treated example, the values of the weight coefficients are γ = δ = ψ = 0.2, while particular attention is paid to the coefficient μ = 0.4.
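The four criteria can be computed directly from Table 1 and a candidate sequence. The sketch below evaluates the optimal sequence later reported in Table 2; the final combination into OF follows the reconstructed reading of Eq. (4) given above, which is an assumption:

```python
def d_change(a, b):
    """Direction-change score: 2 = no change, 1 = 90 deg, 0 = 180 deg."""
    if a == b:
        return 2
    if a[1] == b[1]:          # same axis, opposite sense -> 180 deg
        return 0
    return 1                  # different axes -> 90 deg

def criteria(seq, dirs, tools, m_factor, volume):
    """M, V, D, T of a sequence: seq lists parts in removal order, dirs the
    direction used for each removal; earlier positions get weight N - k + 1."""
    N = len(seq)
    sm, sv = sum(m_factor.values()), sum(volume.values())
    M = sum(m_factor[p] * (N - k) / sm for k, p in enumerate(seq))
    V = sum(volume[p] * (N - k) / sv for k, p in enumerate(seq))
    D = sum(d_change(dirs[i], dirs[i + 1]) for i in range(N - 1))
    T = sum(0 if tools[seq[i]] == tools[seq[i + 1]] else 1
            for i in range(N - 1))
    return M, V, D, T

# Data from Table 1 (maintainability, tool, volume x 10^5 mm^3)
m_factor = {1: 2, 2: 1, 3: 3, 4: 1, 5: 2, 6: 1, 7: 2}
volume = {1: 1.05, 2: 4.11, 3: 0.327, 4: 0.253, 5: 2.97, 6: 0.177, 7: 0.167}
tools = {1: "G1", 2: "G1", 3: "G2", 4: "G1", 5: "G1", 6: "G3", 7: "G3"}
seq = [7, 6, 2, 4, 1, 5, 3]
dirs = ["+Y", "-Z", "-Z", "-Z", "+Z", "+Z", "+Z"]
M, V, D, T = criteria(seq, dirs, tools, m_factor, volume)
# Hedged reconstruction of Eq. (4), with gamma = delta = psi = 0.2, mu = 0.4:
OF = len(seq) * (0.2 * D + 0.2 / (1 + T) + 0.4 * M + 0.2 * V)
```

For this sequence the criteria come out as M = 3.5, D = 9 (only one 90° and one 180° change) and T = 2 (two tool changes).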
3.1
Pheromone trail
Once all ants have finished their tasks and built their sequences completely, the pheromone update occurs:
τ_ij(t+1) = (1 − ρ)·τ_ij(t) + Q   (5)

Q = Σ(k=1..ma) Δτ_ij^k   if (i,j) belongs to the sequence S of ant k; Q = 0 otherwise   (6)
where t represents the iteration counter of the ant colony algorithm, Q represents the sum of the contributions of all ants that used the transition from part m to part l in constructing their solutions, and ma is the number of ants that found the iteration-best sequence. The extra amount of pheromone is quantified by:
Δτ_ml^k = δ × OF_s^k   (7)
δ (δ > 0) is a parameter that defines the weight given to the best solution, and ρ ∈ [0,1] is the evaporation rate. The evaporation mechanism helps ants to progressively forget what happened before and to extend their search towards new directions without being overly constrained by their past decisions.
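A minimal sketch of the update of Eqs. (5)-(7); the ρ and δ values used below are placeholders, since the paper does not report them:

```python
def update_pheromone(tau, best_solutions, of_scores, rho=0.1, delta=1.0):
    """Pheromone update of Eqs. (5)-(7): every trail first evaporates by a
    factor (1 - rho); then each iteration-best ant k deposits delta * OF_k on
    the transitions (i, j) that appear in its sequence."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)              # evaporation, Eq. (5)
    for s, of in zip(best_solutions, of_scores):
        for i, j in zip(s, s[1:]):            # deposit, Eqs. (6)-(7)
            tau[(i, j)] = tau.get((i, j), 0.0) + delta * of
    return tau
```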
4
Implementation and Case study
The data processing implementation of the proposed approach has been performed using Matlab R2013b (Matrix Laboratory), the SolidWorks CAD system and its API (Application Programming Interface). The output of the ACO shown in Figure 3 presents the evolution performance of the algorithm, i.e. the objective function OF versus the generation number for the related example (Figure 2), together with the optimal sequence and the associated directions.
[Screenshot: evolution of the objective function OF versus the generation number, and the optimal sequence 7, 6, 2, 4, 1, 5, 3 with the associated directions +Y, -Z, -Z, -Z, +Z, +Z, +Z]
Fig. 3. The output of the implemented ACO Disassembly Sequence Tool.
The output of the implemented tool exposed in Fig. 3 highlights the five steps:
- the import of an assembly in CAD format,
- the extraction of assembly data,
- the generation of the interference matrix,
- the entry of both the objective function and the ACO parameters,
- the generation of the DSP.
The optimal DSP and the associated disassembly directions of the treated example are presented in Table 2. The computation time is 5.03 s, which proves the efficiency of the proposed approach.
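As a sanity check, the reported sequence can be replayed against the interference matrices of Eq. (1): at every step the removed part must be collision-free, among the parts still assembled, along its stated direction. The matrices are hardcoded below as reconstructed here, so treat the individual entries as assumptions:

```python
# Interference matrices of Eq. (1) (rows/columns = parts 1..7)
I = {
    "+Z": [[0,0,0,0,0,0,0], [1,0,1,1,1,0,1], [1,0,0,0,1,0,0],
           [1,0,1,0,1,0,0], [1,0,0,0,0,0,0], [1,1,1,1,1,0,1],
           [1,1,1,1,1,0,0]],
    "+X": [[0,1,1,1,1,1,1], [1,0,0,1,0,1,1], [1,0,0,0,1,0,0],
           [1,0,0,0,0,0,0], [1,0,1,0,0,0,0], [1,0,0,0,0,0,0],
           [1,1,0,0,0,0,0]],
    "+Y": [[0,1,1,1,1,1,1], [1,0,0,0,0,0,1], [1,0,0,0,1,0,0],
           [1,0,0,0,0,0,0], [1,0,1,0,0,0,0], [1,0,0,0,0,0,0],
           [0,0,0,0,0,0,0]],
}

def is_feasible(seq, dirs, I):
    """Replay a disassembly sequence: a part moved along +k must have an
    all-zero row of [I]k over the remaining parts, and along -k an all-zero
    column (the transpose property)."""
    remaining = set(range(1, 8))
    for p, d in zip(seq, dirs):
        axis = "+" + d[1]
        others = remaining - {p}
        if d[0] == "+":
            ok = all(I[axis][p - 1][l - 1] == 0 for l in others)
        else:
            ok = all(I[axis][l - 1][p - 1] == 0 for l in others)
        if not ok:
            return False
        remaining.discard(p)
    return True
```

Under these matrices, the sequence 7, 6, 2, 4, 1, 5, 3 with directions +Y, -Z, -Z, -Z, +Z, +Z, +Z passes the check, while, for instance, removing part 3 first along +Z does not.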
Table 2. Best disassembly sequence and its associated direction.

Optimal DSP   7    6    2    4    1    5    3
Direction     +Y   -Z   -Z   -Z   +Z   +Z   +Z

5
Conclusion
In this paper, an optimization of DSP based on an ant colony approach for preventive maintenance is proposed. The precedence relationships between parts were considered using a free-part process which permits the generation of feasible DSPs. A computer-based tool was implemented, permitting the generation of an optimal feasible DSP from a CAD model. The obtained results, shown on an industrial example, reveal the credibility of the proposed approach.
References
1. Moore K. E., Gungor A. and Gupta M. S. Petri net approach to disassembly process planning for products with complex AND/OR precedence relationships. Computers and Industrial Engineering, Vol 35, 1998, pp. 165-168.
2. Chung C.H. and Peng Q.J. An integrated approach to selective-disassembly sequence planning. Robotics & Computer-Integrated Manufacturing, Vol. 21, No. 4, 2005, pp. 475-85.
3. Lambert A. J. D. Optimizing disassembly processes subjected to sequence-dependent cost.
Computers and Operations Research, Vol 34 (2), 2007, pp. 536-55.
4. Grassé, P. P. La reconstruction du nid et les coordinations interindividuelles chez Bellicositermes natalensis et Cubitermes sp. La théorie de la stigmergie: Essai d'interprétation du comportement des termites constructeurs. Insectes Sociaux, Vol. 6, 1959, pp. 41-81.
5. Wang J F., Liu J H and Zhong Y. F. Intelligent selective disassembly using the ant colony algorithm. Artificial intelligence for engineering design, analysis and manufacturing, Vol 17, 2003,
pp. 325-333.
6. Mullen R. J., Monekosso D.; Barman S. and Remagnino P. A review of ant algorithms. Expert
Systems with Application, Vol 36, 2009, pp 9608-9617.
7. Aghaie A. and Mokhtari H. Ant colony optimization algorithm for stochastic project crashing
problem in PERT networks using MC simulation. International Journal of Advance Manufacturing Technology, Vol 45, 2009, pp. 1051–1067.
8. Ben Hadj R., Trigui M., and Aifaoui N. Toward an integrated CAD assembly sequence planning solution. Journal of Mechanical Engineering Science, Vol 229, 2014, pp. 2987-3001.
9. Kheder M., Trigui M., and Aifaoui N. Disassembly sequence planning based on a genetic algorithm. Journal of Mechanical Engineering Science, Vol 229, 2015, pp. 2281-2290.
A comparative Life Cycle Assessment of utility
poles manufactured with different materials and
dimensions
Sandro Barone1, Filippo Cucinotta2, Felice Sfravara2
1 University of Pisa
2 University of Messina
* Corresponding author. Tel.: +39-090-3977292. E-mail address: filippo.cucinotta@unime.it
Abstract In the production of utility poles, used for transmission, telephony, telecommunications or lighting support, steel has for many years almost entirely replaced wood. In recent years, however, new composite materials have become a great alternative to steel. The questions are: is the production of composites better in terms of environmental impact? Is the lifecycle of a composite pole more eco-sustainable than the lifecycle of a steel pole? Where is the peak of pollution inside the lifecycle of each technology? In the last years, in order to deal with new European policies in the environmental field, a new approach for impact assessment has been developed: the Life Cycle Assessment. It involves a cradle-to-grave consideration of all stages of a product system. Stages include the extraction of raw material, the provision of energy for transportation and processing, material processing and fabrication, product manufacture and distribution, use, recycling and disposal of the wastes and of the product itself. A great potentiality of the Life Cycle Assessment approach is to compare two different technologies designed for the same purpose, with the same functional unit, in order to understand which of the two is better in terms of environmental impact. In this study, the goal is to evaluate the difference in environmental terms between two different technologies used for the production of poles for illumination support.
Keywords: Life Cycle Assessment, Green Design, manufacturing optimization,
utility poles
1 Introduction
In the last years, the need to reduce the environmental impact of a product has led
to define a new regulatory framework for the assessment of the lifecycle in term of
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_10
eco-sustainability. The Society of Environmental Toxicology and Chemistry (SETAC) Europe, in the years between 1990 and 1993, produced the development of a tool called Life Cycle Assessment, described by Fava et al. [1]. This approach allows investigating the environmental impact of each single stage of production, from raw material extraction to the disposal phase. In the last 30 years many developments have been carried out, and a summary of these is reported by Klöpffer [2] and Finnveden et al. [3]. The principal parts of LCA are goal and scope definition, Life Cycle Inventory (LCI), impact assessment and the interpretation of results. Vigon and Jensen [4] have conducted a study on how the results are influenced by the quality of the data collected in the life cycle inventory phase. Baumann and Rydberg [5] carried out an evaluation of results obtained by using different databases in the inventory stage of a life-cycle assessment. Another important aspect is the uncertainty of data in the inventory phase, on which Maurice et al. have conducted studies [6]. In addition to the theoretical aspects of LCA, researchers have focused on practical aspects, defining the parts of LCA that need more specific study for achieving a more solid conclusion [7]. Thanks to LCA it is possible to understand if, in addition to a change of mechanical behavior, materials also differ in environmental terms. An example of this kind of study is reported in [8].
The aim of this study is the assessment of the cradle-to-grave life cycle environmental impacts related to two different manufacture types of utility poles: the steel galvanized pole and the fiberglass pole. The assessment concerns the interpretation and comparison of impact indicator values of greenhouse gas (GHG) emissions, abiotic depletion (AD) and abiotic depletion fossil (ADF), eutrophication potential (EP), acidification potential (AP), global warming potential (GWP), freshwater aquatic ecotoxicity potential (FAETP), human toxicity potential (HTP), marine aquatic ecotoxicity potential (MAETP), ozone layer depletion potential (ODP), photochemical ozone creation potential (POCP) and terrestrial ecotoxicity potential (TETP).
2 Material and methods
2.1 Goal definition and functional unit
The aim of this study is the application of life cycle assessment methodologies, with respect to the guidance provided by the International Organization for Standardization (ISO) in standards ISO 14040 [9] and 14044 [10], for the comparison of the environmental impact of two different manufacturing processes of utility poles. In this study, the following aspects are included: goal and scope definition, inventory analysis, impact assessment and interpretation. The environmental inputs and outputs that concern the steel galvanized utility pole and the fiberglass utility pole are reported, and an assessment of impact indicators for each product and a comparison between them is conducted. The purpose of these poles is to ensure a lighting support at a height from the ground of 6 and 8 meters, for a period of 60 years; the poles are installed at equivalent spacing. The use life of a steel pole is about 60 years [11], so one pole is sufficient for the period under investigation. In the case of the fiberglass pole the use life is estimated at about 20 years, so three poles are needed to cover the entire period. The system boundary is previously defined (from extraction of raw material to the disposal phase) and the geographic areas of production of these poles are two different plants in Southern Italy. The assessment of impact factors is carried out with the GaBi Educational software, in respect of standard ISO 14044. For all processes not directly performed inside the plants (production of glass fibers, production of steel sheet, galvanization process) a gate-to-gate evaluation is conducted thanks to the use of the databases inside the software (GaBi Databases), which are fully refreshed every year and developed in accordance with the LCA standard.
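The functional unit above translates into a simple pole count per technology over the 60-year study period:

```python
import math

def poles_needed(study_period_years, service_life_years):
    """Number of poles required to cover the functional-unit period."""
    return math.ceil(study_period_years / service_life_years)

# 60-year period: steel lasts about 60 years, fiberglass about 20 years
steel = poles_needed(60, 60)       # one steel pole
fiberglass = poles_needed(60, 20)  # three fiberglass poles
```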
2.2 Fiber-Glass composite utility pole inventory
The fiberglass composite pole under study is manufactured in two principal distinct stages: a first phase that concerns the weaving of the fabric (a particular condition, because usually the fabric is bought from third parties), and a second phase that concerns centrifugal casting. The final fabric is unidirectional fiberglass with the addition of random chopped fibers. The fiberglass material is bought from another company, so its provenance and type of transportation were evaluated. Several machines operate in this phase: the cutting machine for chopped production, where the continuous filaments are cut into random fibers of length equal to 5 cm; the weaving machine for the manufacturing of the final fabric; the length control machine; and the wrapping and cutting machine. The second phase is the centrifugal casting: the fabrics are transferred onto a worktop, a first layer of polyester (with the purpose of protecting the fiberglass from external agents) is laid out, and on it different layers of fabric are disposed so that the final shape of the pole is trunk-conical. All layers are wrapped and put inside the centrifugal casting machine, where the resin with catalyst is injected. The angular velocity of a permanent mold inside the centrifugal casting machine (about 800 rpm) pushes the stack of layers against the walls in a cylindrical shape and also makes the resin flow along the whole height of the pole. The velocity is reduced to about 300 rpm when every part of the pole is wetted by the resin. The total process of centrifugal casting is completed in about 20-30 minutes. The geometrical characteristics of the fiberglass pole are reported in Table 1; the reported diameter is at the base of the pole. The materials bought are reported in Table 2, together with their provenance, type of transport and type of material.
Table 1. Characteristics of Fiberglass pole (6 meter and 8 meter height)

Characteristics   6 meter pole    8 meter pole
Shape             Trunk-conical   Trunk-conical
Height            6000 [mm]       8000 [mm]
Diameter          174.5 [mm]      213 [mm]
Thickness         5 [mm]          5 [mm]
Material          Fiberglass      Fiberglass
Final weight      20.24 [kg]      35 [kg]
The total energy absorption of every single phase involved in the process is reported in Table 3; the energy used in this plant is self-produced with photovoltaic panels. The process with the highest consumption of energy is the centrifugal casting phase. The entire process is modelled inside the GaBi software; the cradle-to-grave life cycle stages considered in the LCA of the fiberglass pole are illustrated in Figure 1b, where the distribution of mass can be immediately grasped from the thickness of the arrows.
Table 2. Type of material, distance of provenance and type of transport

Material      Type & Producer      Transport Type & km
Roving 1200   E-Glass OCV          Truck 1400
Roving 2400   E-Glass OCV          Truck 1400
Chopped       E-Glass OCV          Truck 1400
Subbi         Polyester Alphatex   Truck 2654
Film          Polyester Nontex     Truck 1400
Resin         Polyester COIM       Truck 1400
Accelerant    Cobalt Bromochim     Truck 1400
Dye           Grey Comast          Truck 1400
Catalyst      Retic C Oxido        Truck 800
Table 3. Energy absorbed for every process

Weaving department
Machine                   Power    Working time   Type of energy
Cutting chopped machine   2 kW     1 h 20 min     Photovoltaic
Loom machine              3 kW     1 h 20 min     Photovoltaic
Control machine           0.5 kW   1 h 20 min     Photovoltaic
Winder machine            1.5 kW   1 h 20 min     Photovoltaic

Resin handling
Machine   Power   Working time   Type of energy
Pump      1 kW    10 min         Photovoltaic

Centrifugal Casting department
Machine               Power   Working time   Type of energy
Mixer                 1 kW    5 min          Photovoltaic
Centrifugal Casting   12 kW   30 min         Photovoltaic
Packing machine       2 kW    1 min          Photovoltaic
2.3 Steel galvanized pole inventory
A cradle-to-grave life cycle inventories are not available for steel utility poles so a
life cycle inventory ad-hoc of this process has been conducted. In the production
of steel galvanized pole, the fabric buys the non-galvanized steel sheet. On these
sheets, different holes are executed by a punching machine and successively a
bending of sheet is performed by a press machine. When the process of bending is
concluded, the welding process begins through a submerged arc-welding machine.
When the pole is completed, all accessories are applied on it for the final implementation with an arc welding. The pole, after initial manufacture, is hot-dip galvanized with zinc for the protection from external corrosive agent. Subsequently,
there are the use of a grinder for polishing of the pole and of an overhead crane for
its movement and a drill for mounting accessories. Every process inside the plant
is quantified in terms of energy and mass, but the production of metal sheet and
the final process of galvanization are made by other plants. In these two cases, the
standard processes are evaluated thanks to the Gabi Software database. The steel
pole in the disposal phase is modeled as if the 100 % is recycled as scrap steel. A
summary of principal characteristics of steel pole is shown in Table 4.
Table 4. Characteristics of Steel pole (6 meter and 8 meter height)

Characteristics          6 meter pole       8 meter pole
Shape                    Conical            Conical
Height                   6000 [mm]          8000 [mm]
Diameter                 200 [mm]           270 [mm]
Thickness                5 [mm]             5.5 [mm]
Material                 Galvanized steel   Galvanized steel
Final weight             20.2 [kg]          35.0 [kg]
Sheet surface            3.77 [m2]          6.78 [m2]
Weight of sheet          148 [kg]           293 [kg]
Weight galvanized pole   155.0 [kg]         307.0 [kg]
A summary of the selected inventory inputs and outputs, that is the total energy absorption of every single phase involved in the process, is reported in Table 5; the energy used in this plant is that of the national grid. The only material bought is the non-galvanized sheet metal, which is sent from 735 km away by truck; the galvanization process is done in another plant about 60 km away, and the transportation of the pole is done by truck.
Table 5. Energy absorption in every process

Machine                 Power [kW]   Working time       Working time
                                     6 meter pole [s]   8 meter pole [s]
Punching machine        4            60                 60
Press machine           44           600                600
Submerged arc welding   14           424                424
Arc welding machine     4            300                300
Grinding machine        2.2          300                300
Drill                   100          120                120
Overhead crane          18           300                300
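From Tables 3 and 5, the in-plant electrical energy per 6 m pole follows directly as E = Σ P·t over the machines; the totals below are derived arithmetic, not figures reported in the paper:

```python
# (power_kW, working_time_s) per machine, from Table 5 (steel, 6 m pole)
steel_ops = [(4, 60), (44, 600), (14, 424), (4, 300),
             (2.2, 300), (100, 120), (18, 300)]
# From Table 3 (fiberglass): each weaving machine runs 1 h 20 min = 4800 s
fiber_ops = [(2, 4800), (3, 4800), (0.5, 4800), (1.5, 4800),
             (1, 600), (1, 300), (12, 1800), (2, 60)]

def energy_kwh(ops):
    """Sum P*t over all machines; kW * s = kJ, divided by 3600 gives kWh."""
    return sum(p * t for p, t in ops) / 3600.0

steel_kwh = energy_kwh(steel_ops)   # about 14.4 kWh, from the national grid
fiber_kwh = energy_kwh(fiber_ops)   # about 15.6 kWh, self-produced photovoltaic
```

The per-pole totals are comparable, which is consistent with the text's observation that the steel pole's LCA is dominated by the mass flows (sheet production and galvanization performed elsewhere) rather than by in-plant energy.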
The entire process is modelled inside Gabi Software; the cradle-to-grave life cycle
stages considered in the LCA of steel pole are illustrated in Figure 1a.
Figure 1. Flow modelled inside Gabi Software for Steel pole (a – upper figure) and Fiber glass
pole (b – lower figure)
3 Results and Conclusions
According to Standard ISO, the results are normalized dividing them by a reference value. It is clear that there are different possibility of normalization sets, depending on region and year. The normalization set chosen in this study is the CML
2001; the factors in this type of normalization are described in [12]. The normalized results are shown in Figure 2, in which Steel pole is set as unit.
Figure 2. Comparison of impact indicators for 6 m (a – upper figure) and 8 m (b – lower figure)
between Fiberglass pole and Steel pole. Steel pole is set as unit.
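The comparison plotted in Figure 2 simply divides each impact indicator by the steel pole's value, so the steel pole reads as 1. A sketch with purely hypothetical indicator values (not data from the study):

```python
def normalize(indicators, reference):
    """Set the reference product to unit: divide each impact indicator of a
    product by the corresponding indicator of the reference product."""
    return {k: v / reference[k] for k, v in indicators.items()}

# Purely hypothetical illustration values, NOT results from the paper
steel = {"GWP": 250.0, "EP": 0.12, "AP": 1.5}
fiber = {"GWP": 150.0, "EP": 0.15, "AP": 0.9}
ratios = normalize(fiber, steel)   # fiberglass relative to steel (= 1)
```

A ratio below 1 means the fiberglass pole performs better on that indicator; a ratio above 1 (as for EP in the 6 m case discussed below) means it performs worse.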
The impact indicators of the Fiberglass pole are almost always better than those of the Steel pole. For the 6 meter pole, the only impact indicator that is higher for the Fiberglass pole than for the Steel pole is the Eutrophication Potential (EP), because of the disposal treatment of the composite material (a lack of regulations) and the production of the glass fiber material. For the 8 meter pole, the Freshwater Aquatic Ecotoxicity is higher for the Fiberglass pole. This indicates the non-linearity of behavior of the impact indicators with respect to the length of the pole. The principal difference between the two manufacturing processes is the weight of the material used. The LCA of the Steel pole is strongly influenced by the mass and energy introduced in the process. An important quantity of environmental impact is relative to the extraction of raw material for the Steel pole with respect to the Fiberglass one. The results show that in Southern Italy the choice of the composite pole is the better solution in environmental terms with respect to the steel pole. To perform this quantification, the methodology that is well established over the years as an effective tool for assessing environmental performance is LCA. The paper also shows the mass and energy inputs and outputs of every single process inside a production plant of composite poles and of steel poles. The division into sub-processes allows intervening, in those with higher environmental impact, in an optimization loop focused on the environmental impact improvement of the entire product. The purpose of this article is then to quantify the difference between the two products in order to have, in the phase of choice, an additional criterion beyond the classic ones (structural and costs), according to the new European policies. Thanks to the LCA method, the environmental impact becomes a quantified and measured variable that can be used like any other technical variable in the design phase.
Acknowledgments The research work reported here was made possible thanks to Eng. G. Cirrone of NTET Company SpA, Belpasso (CT), Italy, who furnished the data for the inventory analysis.
References
1. Fava, J., Denison, R., Curran, M., Vigon, B.W., Selke, S., Barnum, J.: A technical framework for life-cycle assessment. Pensacola, Florida (1991).
2. Klöpffer, W.: Life cycle assessment. Environ. Sci. Pollut. Res. 4, 223–228 (1997).
3. Finnveden, G., Hauschild, M.Z., Ekvall, T., Guinée, J., Heijungs, R., Hellweg, S., Koehler,
A., Pennington, D., Suh, S.: Recent developments in Life Cycle Assessment. J. Environ.
Manage. 91, 1–21 (2009).
4. Vigon, B.W., Jensen, A.A.: Life cycle assessment: data quality and databases practitioner survey. J. Clean. Prod. 3, 135–141 (1995).
5. Baumann, H., Rydberg, T.: Life cycle assessment. J. Clean. Prod. 2, 13–20 (1994).
6. Maurice, B., Frischknecht, R., Coelho-Schwirtz, V., Hungerbühler, K.: Uncertainty analysis
in life cycle inventory. Application to the production of electricity with French coal power
plants. J. Clean. Prod. 8, 95–108 (2000).
7. Heijungs, R.: Identification of key issues for further investigation in improving the reliability
of life-cycle assessments. J. Clean. Prod. 4, 159–166 (1996).
8. Puri, P., Compston, P., Pantano, V.: Life cycle assessment of Australian automotive door
skins. Int. J. Life Cycle Assess. 14, 420–428 (2009).
9. European Standard ISO: Environmental management - Life cycle assessment - Principles and
framework. (2006).
10. European Standard ISO: Environmental management - Life cycle assessment - Requirements and guidelines. (2006).
11. Bolin, C.A., Smith, S.T.: LCA of pentachlorophenol-treated wooden utility poles with
comparisons to steel and concrete utility poles. Renew. Sustain. Energy Rev. 15, 2475–2486
(2011).
12. CML - Department of Industrial Ecology: CML-IA Characterisation Factors,
universiteitleiden.nl/en/research/research-output/science/cml-ia-characterisation-factors
Prevision of Complex System’s Compliance
during System Lifecycle
J-P. Gitto1,2,*, M. Bosch-Mauchand2, A. Ponchet Durupt2, Z. Cherfi2,
I. Guivarch1
1 MBDA, 1 avenue Réaumur, 92358 Le Plessis-Robinson Cedex, France
2 Sorbonne Universités, Université de Technologie de Compiègne, CNRS, Laboratoire Roberval, Centre Pierre Guillaumat, CS60319, 60203 Compiègne Cedex, France
* Corresponding author. Tel.: +33 1 71 54 36 09. E-mail address: jean-philippe.gitto@utc.fr
Abstract: In this paper, we propose a methodology to define a predictive model of complex systems' quality. This methodology is based on a definition of the system's quality through factors and allows the specificities of the company to be taken into account. The model obtained with this methodology helps quality practitioners to have an objective view of the system's quality and to predict the future quality of the system all along its lifecycle. This approach is illustrated through its application to the design of a model for compliance prediction in an aeronautic and defense group, MBDA.
Keywords: Product Compliance; Compliance Forecasting; Product Design;
Product Quality; Decision Making.
1 Introduction
In a changing world, with global competition, it is essential for companies to satisfy their customers' requirements. For all companies, and in particular those that produce complex systems, a quality management system is essential to organize the work and to ensure customer satisfaction. A complex system consists of electronic components, mechanical parts and software. Its lifecycle extends over several years and involves many engineering fields, which increases the complexity of project organization and monitoring [1, 2].
In the scope of our study, the complex systems are produced in small series and have a lifespan of several decades. Their development is based on a contract with the customers, who define their requirements at the beginning of the project. These requirements are translated into technical definitions early in the development process and can evolve all along the system lifecycle.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_11
Thereby, it is difficult for quality practitioners to have a comprehensive and objective view of the quality of the future system in use (i.e. at completion). Several methods and tools have been developed to manage the quality of products and processes [3], but they do not answer all needs arising from company specificities [4, 5] (such as industrial sector, product complexity, internal organization) and are not based on direct measurements on the product. So, there is a real need to connect quality measurements on the system during its lifecycle with its ability to satisfy customers' requirements when the system is in use. The system lifecycle is often divided into several phases. The transition from one phase to another corresponds to a gate (G0 to G4 in Figure 1) and is the best moment to assess the work done and to compare it with the planned advancement. To help quality practitioners perform this assessment and compare development maturity with customers' requirements, a quality predictive model is necessary.
Fig. 1. System lifecycle phases and gates: the phases Develop Concept, Design, Qualify, Produce and Sustain are separated by the gates G0 to G4, with Delivery to the Customer marked on the timeline.
The proposed model allows the system quality at completion to be predicted. It helps quality practitioners to identify the cause of a deviation and to decide whether corrective actions are required (Figure 2). This model has to be used at each gate of the system lifecycle to update the prediction of the system's quality according to the latest design improvements.
Fig. 2. Use of Predictive Model at each gate: during Phase i, indicators produced by the quality processes feed the predictive model, which provides quality practitioners with a quality assessment and a prediction of quality at completion; at gate Gi,i+1, if the goals are achieved the project proceeds to Phase i+1, otherwise corrective actions are taken.
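The gate loop of Figure 2 can be sketched as follows; the metric names, weights, gate values and the goal threshold are all hypothetical, and the weighted-sum evaluation is a simplified stand-in for the full predictive model described in Section 3.

```python
# Sketch of the gate-based use of the predictive model (Figure 2): at each
# gate the metric values are fed to the model, the prediction is compared to
# the goal, and corrective actions are flagged if the goal is not achieved.
# Weights, metric values and the goal threshold are hypothetical.

def predict(metrics, weights):
    """Weighted sum of metric values (each on a 0-100 scale)."""
    return sum(weights[name] * value for name, value in metrics.items())

GOAL = 90.0  # hypothetical quality goal at each gate

weights = {"need_coverage": 0.3, "trs_maturity": 0.45, "published_trs": 0.25}

for gate, metrics in [
    ("G0", {"need_coverage": 70, "trs_maturity": 60, "published_trs": 50}),
    ("G1", {"need_coverage": 85, "trs_maturity": 80, "published_trs": 90}),
    ("G2", {"need_coverage": 95, "trs_maturity": 92, "published_trs": 100}),
]:
    quality = predict(metrics, weights)
    status = "proceed to next phase" if quality >= GOAL else "corrective actions"
    print(f"{gate}: predicted quality {quality:.1f} -> {status}")
```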
Section 2 is dedicated to a review of some existing models, with their strengths and weaknesses with respect to the problem exposed above. Then, in Section 3, the proposed methodology to build the predictive model is defined and illustrated with an application to an industrial case study.
2 Review of Existing Methods and Models
To address the need for a quality management system, several methods and tools have been developed, classified here into two categories.
On the one hand, there are Quality Assurance (QA) methods, spread in companies since the 1980s [6], which implement appropriate techniques and practices in order to
provide high-quality products [3]. To take into account the voice of the customer, Total Quality Management (TQM) [7] has been defined. TQM principles have been translated into the ISO 9000 standards, which offer a guide to ensure the quality of the future system by controlling the development and production methods [8]. QA methodologies ensure quality achievement through control of the process, not of the product itself [9].
On the other hand, Systems Engineering methods based on indicators [10] are
used to monitor future performances of a system [11]. The indicators of Systems
Engineering are defined for a specific product’s feature or a specific technology
[12]. All the product subsystems and components have various paces of development and validation, which may overlap. This makes it problematic to forecast system compliance before the final validation of series production.
Besides, the use of these two categories of methods does not make it possible to link the performance of the development process with the future satisfaction of customers' requirements [13]. To make this link, the engineers and the quality practitioners must have a clear overview of the project and long experience in development. Otherwise, the risk is to miss a part of the customer's needs or to detect a compliance problem too late in the development process, which is generally expensive to correct in the final stages of product development [14].
Some models to predict product quality already exist for software. They are based
on the Factor, Criteria, Metrics model (FCM) [15]. The proposed methodology is
an adaptation of the FCM to build a predictive model of complex systems’ quality
at completion and to identify early any deviation from the target.
3 Methodology and associated Model
In the context of this study, complex systems are produced in too small quantities
to apply statistical analysis. Hence, the identification of factors, criteria and
metrics, and the model structure are based on expert knowledge elicitation [16].
The proposed methodology allows experts' experience and heuristics to be formalized. It is a way to deal with uncertainty [17] that is adapted to this context.
In the FCM model, quality factors are characteristics, reflecting the customers' point of view, which actively contribute to the quality of the system. These factors are broken down into criteria, which characterize the quality of the company's processes from the internal point of view. The model's metrics provide mechanisms to quantify the level of the quality criteria. Thus, the model's inputs ("indicators" in Figure 2) correspond to the metrics values during the project. Based on these values, the model gives at each gate indications about the quality of the criteria and outputs a forecast of the system's quality at completion. Several product quality factors have been identified to characterize our systems' quality at completion, for instance compliance, reliability, safety, usability and maintainability.
In this paper, only the compliance factor is considered in applying the proposed methodology.
Fig. 3. Synoptic of the proposed methodology to build the predictive quality model: Step 1, Factor's Goal Definition, produces the factors and their goal at completion, drawing on the context, the company's processes and the company's strategy; Step 2, Criteria Definition, produces the criteria, drawing on the company's processes and expert elicitation; Step 3, Metric Definition, produces the metrics through expert elicitation; Step 4, Model Setting, produces the validated predictive model through expert elicitation.
Each step of the methodology (Figure 3) is illustrated by its application for
MBDA, a European aeronautic and defense group, to the product quality factor
Compliance. To ease the reading of the methodology, a paragraph at the end of
each step explains how it was applied to the MBDA case.
3.1 Step 1: Factor's Goal Definition
A goal must be defined for the system quality factor to forecast its quality level at
completion. An associated means of measurement must also be chosen to assess the factor's quality level when the system is in use. This goal is based on existing
industrial practices and takes into account the company’s organization and the
context of use of the product.
Application to Compliance Factor:
During production and use of the system, each non-compliance (NC) is recorded in a database to be treated. The objective is to deliver the system to the customers without NCs and not to discover new NCs during the use phase. So, the compliance quality level at completion is characterized by the quantity of NCs recorded, knowing that the goal is to have none.
3.2 Step 2: Criteria Definition
To evaluate a factor during the system's lifecycle, the company must identify the processes which influence the future level of the factor at completion. Criteria are defined to characterize the quality or performance of those influencing processes in the predictive model. Whereas factors derive from the customers' point of view, criteria derive from the company's interest. To identify relevant criteria in the company, a literature review can help to define the most common ones, but criteria are highly dependent on the company's organization, and an audit inside the company is essential. It is then necessary to determine how the company's processes involved in the development of the system impact the future quality. People involved in the factor's development must be asked to identify suitable criteria. To avoid self-censorship, it is preferable to interview employees individually. Furthermore, it is easier to plan individual interviews than a single meeting with all participants.
Application to Compliance Factor:
In the MBDA case, and regarding the compliance quality factor, five criteria have been identified from the processes of the company (Figure 4):
Fig. 4. Compliance predictive model's criteria: Requirements Quality, Design Quality, Design's Justification Quality, Production System Quality and Supply Chain Quality, all feeding the Compliance factor.
The first three criteria concern the compliance of the system's definition with the customer's requirements. The last two concern the compliance of the system with its definition.
3.3 Step 3: Metric Definition
For each criterion, metrics must be defined to assess the criterion's quality level all along the system lifecycle. Quality practitioners and experts who work on the processes concerned by the criteria are asked which metrics are suitable to characterize the criteria levels. When the model is used to prepare the passage of a product lifecycle gate, the value of each metric is calculated in order to be processed by the model. The metrics can be based on the company's databases, prototype characteristics, documentation, subjective evaluation, etc. To be treated in the predictive model, the metrics' values are expressed on a numerical scale.
Application to Compliance Factor:
In the case of system’s compliance model for MBDA, 17 metrics have been
defined to characterize all criteria previously identified; the chosen scale to
express metric’s value is from 0 to 100. Metrics for the compliance of the
system’s definition to customer’s requirements are given in Figure 5.
Fig. 5. Compliance of definition predictive model: the metrics Need Coverage, System TRS Maturity, Sub-system TRS Maturity and % Published TRS feed the Requirements Quality criterion; Design Evolution Rate, TRL and Class of Difficulty feed the Design Quality criterion; Justification Coverage and Justification Relevancy feed the Design's Justification Quality criterion; the three criteria feed the Compliance of Definition factor.
After the metrics definition, the whole model can be structured. The selected formalization consists in building a network where the factor, criteria and metrics are placed on nodes. All the criteria identified for a factor have an influence on it; thus, all the criteria are connected to the factor in the network and each metric is related to at least one criterion. The structured network has the layout shown in Figure 5. Once the results of the interviews are analyzed and the model is structured, it is necessary to plan a global meeting with all participants to validate the proposed selection of criteria and metrics. This review allows participants to judge whether their answers have been correctly translated into the model and to discuss some points if they have different opinions.
3.4 Step 4: Model Setting
The network architecture having been defined, the model must be parameterized to establish the relations which make the evaluation of a criterion from its metrics possible.
For each arc of the network, a weight p is defined to characterize the influence of one parent node on its child. The chosen convention is that the sum of the weights of all arcs between a child node and its parents is equal to 1. The value of a child node (a criterion node or the factor node) is calculated by adding the values of all its parent nodes, weighted by their arcs. For a criterion Cj and its metrics Mi, this can be expressed by equation (1), where cj is the value of the criterion Cj, mi the value of the metric Mi, and pi,j the weight of the arc between Mi and Cj:
$$c_j = \sum_{i=1}^{I} p_{i,j} \times m_i \qquad (1)$$
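Equation (1) can be applied level by level through the network: metric values roll up to criterion values, which roll up to the factor value. A minimal sketch, with hypothetical weights and metric values (only the structure follows the paper):

```python
# Two-level propagation of equation (1): metric values roll up to criterion
# values, which roll up to the factor value. Arc weights on each child sum
# to 1 by convention; all weights and values here are hypothetical.

def node_value(parent_values, weights):
    """c_j = sum_i p_ij * m_i, with the weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(p * m for p, m in zip(weights, parent_values))

# Metrics -> criteria (e.g. the three criteria of Figure 5)
c_requirements = node_value([90, 80, 70, 100], [0.3, 0.2, 0.25, 0.25])
c_design = node_value([85, 75, 95], [0.5, 0.3, 0.2])
c_justification = node_value([80, 90], [0.6, 0.4])

# Criteria -> factor (compliance of definition)
compliance = node_value(
    [c_requirements, c_design, c_justification], [0.4, 0.35, 0.25]
)
print(round(compliance, 1))
```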
In general, arcs’ weight can be defined by statistical analysis on data record.
But in the context of complex system development, set of recorded data don’t
include enough development case to be analysed. Consequently the elicitationbased method has been chosen to define those weights. Experts having
experiences about a criterion are questioned for this purpose. In simple cases, the
questioned expert can directly give the weight of each arc, either by its value, or
by positioning each parent node on a scale of importance. The weights are
distributed among all arcs proportionally of their importance.
If the combination of metrics or criteria is too complicated to be directly
assessed by experts, arcs’ weight can be defined by an indirect method. The
principle is to ask the participant to give a value for each parent node in two cases,
for a standard development and for a minimum admissible development. The
procedure is illustrated through its application to the compliance factor.
Application to Compliance Factor:
For example, the criterion C1 “Requirements Quality” has 4 metrics: M1 “need
coverage”, M2 “system technical requirements (TRS) maturity”, M3 “sub-system
TRS maturity” and M4 “percentage of published TRS”. According to (1):
$$c_1 = p_1 m_1 + p_2 m_2 + p_3 m_3 + p_4 m_4 \qquad (2)$$
Each participant is questioned to give the value of the child node in the expected case and in all possible cases where only one parent node is at its minimum admissible value. The experts give the minimum admissible value for each metric: 90% for m1 and m4, 60% for m2 and m3; and they give the expected level for those metrics: 100% for each. Then the participants evaluate the C1 value in each case shown in Table 1:
Table 1. Metrics and criterion values estimated by experts
m1     m2     m3     m4     c1
90%    100%   100%   100%   97%
100%   60%    100%   100%   92%
100%   100%   60%    100%   90%
100%   100%   100%   90%    98%
We deduce that p1 = 0.3, p2 = 0.2, p3 = 0.25 and p4 = 0.25. Each node of the model can be determined by this method if necessary. The weights taken into account in the model are the mean of the participants' answers.
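The indirect elicitation step can be sketched as follows: each raw weight is the drop in the criterion value observed when one metric alone is at its minimum admissible level, divided by the drop in that metric. This is a sketch of the procedure, not the authors' exact computation; the weights quoted in the text are means over several participants, so a single table need not reproduce them exactly, and the raw weights are rescaled here to satisfy the sum-to-one convention.

```python
# Sketch of the indirect weight-elicitation method: for each metric, the
# expert lowers that metric alone to its minimum admissible value and
# re-evaluates the criterion. The arc weight is the criterion drop divided
# by the metric drop. Values are those of Table 1.

expected_m = [100, 100, 100, 100]   # expected metric levels
minimum_m = [90, 60, 60, 90]        # minimum admissible levels
expected_c = 100                    # criterion value when all metrics are expected
observed_c = [97, 92, 90, 98]       # criterion value with metric i at its minimum

raw = [(expected_c - c) / (m_exp - m_min)
       for c, m_exp, m_min in zip(observed_c, expected_m, minimum_m)]
print(raw)  # [0.3, 0.2, 0.25, 0.2]

# Rescale so the weights sum to 1, per the model's convention.
total = sum(raw)
weights = [p / total for p in raw]
assert abs(sum(weights) - 1.0) < 1e-9
```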
The model has been tested on two systems in development, called A and B. The model has been used for system A at the first three gates (from the start of the development to the start of qualification), and for system B at two gates. To assess the factor's quality level, the values of the model's metrics are evaluated and entered into the model.
The calculated compliance levels are shown in Figure 6:
Fig. 6. Compliance level given by the model for Systems A and B.
For system A, the compliance levels calculated with the model are under the expected levels for the three gates. The progression between the first two gates is too low and the gap between the expected level and the actual level grows; at gate G2, part of the gap is filled. For system B, the gap is smaller at G1 and the goal is reached at G2. These results are consistent with an a posteriori analysis of the quality of the development. Evaluating the compliance at G0 would have helped to correct the gap before G1. The model helps to identify the roots of the problem through the metric values. For system A, the gap is explained by poor coverage of the customer's needs and low maturity of the requirements at the beginning of the development; for system B, quality is lower because of some delays.
3.5 Discussion
The model's consistency has been tested on several development scenarios and reviewed with quality practitioners. This assessment can be used to alert the project manager as early as possible to a risk of non-conformities at the end of the development phase, and the model gives indications to identify the root causes of the problem. To understand why and to take corrective actions, experts working on the affected criteria must be involved in the treatment of the anomaly. Thereby, the project manager can take decisions knowing their impact on the future system's compliance. The model has been tested on two systems in development, and the reliability factor was also treated. First results confirm the consistency of the model. However, complex system development lasts several years, and the system's compliance in use will be known only after years of use. The limited number of development cases available to test the model limits the validation possibilities. Work on validation tests is in progress, and the model should evolve gradually as new experience is gained.
Conclusions
Existing tools to monitor complex system development are oriented toward process performance and do not make the link with quality at completion from the customers' point of view. The proposed methodology allows an industrial company to build its own predictive model for system quality, usable all along the system lifecycle. The methodology is based on the FCM method, and the main steps described here are the definition of a goal for a complex system's quality factor, the definition of criteria and metrics for a product quality factor, and the setting of the model, based on expert elicitation. This method requires many participants and is time consuming to implement in a company. Further research can be done to improve the elicitation strategy and expert selection.
References
1. Fellows, R. and Liu, A.M.M. Managing organizational interfaces in engineering construction projects: addressing fragmentation and boundary issues across multiple interfaces. Construction Management and Economics, 2012, 30(8), pp. 653–671.
2. Hoegl, M. and Weinkauf, K. Managing Task Interdependencies in Multi-Team Projects: A Longitudinal Study. Journal of Management Studies, 2005, 42(6), pp. 1287–1308.
3. Oakland, J. S. TQM and operational excellence: text with cases, 2014 (Routledge)
4. Powell, T.C. Total quality management as competitive advantage: A review and empirical
study. Strategic Management Journal, 1995, 16, pp. 15–37.
5. Söderlund, J. Pluralism in Project Management Navigating the Crossroads of Specialization
and Fragmentation. International Journal of Management Reviews, 2011, 13, pp. 153–176.
6. Prajogo, D. and Sohal, A.S. TQM and innovation: a literature review and research framework. Technovation, 2001, 21(9), pp.539–558.
7. Cua, K.O., et al. Relationships between implementation of TQM, JIT, and TPM and manufacturing performance. Journal of Operations Management, 2001, 19, pp. 675–694.
8. Tari, J.J. & Vicente, S. Quality tools and techniques: Are they necessary for quality management? International Journal of Production Economics, 2004, 92(3), pp. 267–280.
9. Kitchenham, B. Towards a constructive quality model Part I: Software quality modeling,
measurement and prediction. Software Engineering Journal, 1987, 2(4).
10. Lead, C.M. et al. Systems Engineering Measurement Primer A Basic Introduction to Measurement Concepts and Use for Systems Engineering, 2010 (INCOSE).
11. Orlowski, C. et al. A Framework for Implementing Systems Engineering Leading Indicators
for Technical Reviews and Audits. Computer Science, 2015, 61, pp.293–300.
12. Sauser, B.J. et al. A system maturity index for the systems engineering life cycle. Int. J. Industrial and Systems, 2008, 3(6), pp.673–691.
13. Zairi, M. Measuring performance for business results, 2012 (Springer).
14. Azizian, N. et al. A framework for evaluating technology readiness, system quality, and program performance of US DoD acquisitions. Systems Engineering, 2011,14(4), pp.410–426.
15. McCall, J.A., Richards, P.K. and Walters, G.F. Factors in Software Quality Concept and Definitions of Software Quality, 1977 (Rome air development center).
16. Ayyub, B.M. Elicitation of expert opinions for uncertainty and risk, 2001 (CRC Press).
17. Herrmann, J.W. Engineering Decision Making and Risk Management, 2015 (John Wiley & Sons).
Framework definition for the design of a mobile
manufacturing system
Youssef BENAMA1, Thecle ALIX2, Nicolas PERRY3*
1 Université de Bordeaux, I2M UMR5295, 33400 Talence, France
2 Université de Bordeaux, IMS UMR5218, 33400 Talence, France
3 Arts et Métiers ParisTech, I2M UMR5295, 33400 Talence, France
* Corresponding author. Tel.: +33-556845327; E-mail address: nicolas.perry@ensam.eu
Abstract The concept of mobile manufacturing systems is presented in the literature as an enabler for improving company competitiveness through cost reduction, respect of delivery times and quality control. In comparison with classical sedentary systems, additional characteristics should be taken into consideration, such as the system life phases, the dependency on the production location, human qualification and supply constraints. Such considerations should be addressed as early as possible in the design process. This paper presents a contribution to the design of mobile manufacturing systems based on three analyses: (1) an analysis of the mobile manufacturing system features, (2) an identification of the attributes enabling the assessment of system mobility, and (3) the proposal of a framework for mobile production system design considering new context-specific decision criteria.
Keywords: production system, mobile manufacturing system, design of manufacturing plant.
1 Introduction
Ensuring the shipment of bulky and fragile products can be economically and technically challenging. One solution is to conduct production activities close to the end client. In the case of a one-time demand, implanting a permanent production plant may be unrealistic, and the concept of the Mobile Manufacturing System (MMS), which consists in using the same production system to satisfy successively several geographically dispersed customer orders directly at the end client's location, can be a good alternative.
Mobile production systems have been encountered in many industries: the construction industry [1], the shipyard industry, etc. As interesting as it seems, the concept has rarely been discussed in the literature. The few existing
definitions of mobility depend on authors and contexts [2].
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_12
Mobility is also defined at different levels for a manufacturing system. There is an internal mobility
concerning manufacturing system modules (machinery, material handling modules, etc.) and a global or external mobility concerning the movement of the whole
manufacturing system. This last level is analyzed across geographic areas and
underpins strategic considerations with medium to long-term implications.
In order to facilitate the movement of the manufacturing system to a new geographical location, Rösiö [3] identifies three required characteristics: the mobility of modules, modularity [4] and the integrability of modules.
In this paper, a holistic view of the manufacturing system is adopted. The mobility of the manufacturing system is defined as the ability of a manufacturing system, defined by its technical, human and information components, to move and produce at a number of successive geographical locations. The definition includes two aspects:
• Transportability: the manufacturing system must be transportable and must be able to adapt to the requirements of the different transportation modes (road, sea, etc.).
• Operationality: the system must be able to become quickly operational at the different locations for which it is designed.
The following section proposes to discuss how a Mobile Manufacturing System
(MMS) may differ from a sedentary manufacturing system.
2 Requirements for manufacturing system mobility
The concept of mobility implies considering some additional system life-phases compared with traditional manufacturing systems: mobility of modules, on-site maintenance management, organizational aspects and training needs, and energy supply.
2.1 Manufacturing system design
The production system design process is based on four macro phases [7]: (1) initialization, (2) preliminary design, (3) embodiment design and (4) detailed design. Each of these phases consists of selection, evaluation and decision activities. The characteristics of mobility must be taken into account throughout each phase of the MMS design process. Obviously, in a context of mobility, the production system environment changes from one implementation location to another, and the analysis of the system's environment is of great importance.
A production system is currently seen as a system composed of several subsystems, generally analyzed through external and internal views coupled with physical, decisional and informational views. The production system design then depends on the design (or selection, when solutions exist on the market) of each system component, but also on the connections between these subsystems for their integration into the overall system.
A production system can be defined as a system of systems to the extent that
on the one hand, it is composed of a set of subsystems. Each one has its own life
cycle and each one may be defined independently from the others. On the other
hand, interactions between these subsystems define constraints for the system of
systems, affecting the performance of the overall system [8].
Systems engineering adopts two complementary points of view for system analysis [9]:
• An external view, or "black box" approach, defining the system boundaries, used to identify the elements of the external environment that constrain the system and that the system must respond to by providing the expected services. The environment is defined by all factors that might influence or be influenced by the system [9].
• An internal view, or "white box" approach, that considers the interacting internal elements of the system, which define its organization (architecture) and its operation.
2.2 Additional system life-phases
During its operation, the MMS is first put into service on its implantation site before being used for production. Throughout this phase, maintenance and configuration operations are carried out in order to adapt its behavior to best meet the expected performance. However, unlike sedentary manufacturing systems, mobility requires additional operational phases:
x Transportation phase (a): the MMS is packaged and transported to its implantation location.
x On-site installation phase (b): the MMS arriving on site is composed of independent modules and components that are integrated and lead to the plant installation. Upstream, operations to prepare the site are performed. Downstream, the
factory is installed and operations of verification and commissioning are carried
out.
x On-site production phase (c): the plant is used to produce locally. In parallel,
maintenance operations are necessary to maintain high system performance.
x Diagnosis and control phase (d): at the end of the production phase, a diagnosis of all modules is carried out to ensure that the mobile plant will be operational
for the next production run. The modules requiring heavy maintenance or replacement are identified. Replacement and procurement orders are launched during this phase.
x Dismantling phase (e): the plant is dismantled. Various modules and components are conditioned and prepared for the transportation phase.
• Transportation phase (f): the modules are placed in the transportation configuration. Two scenarios are then possible, depending on the business strategy of the company:
  o A new order arrives and a new site is identified: the MMS is routed to the new location and the operational cycle resumes at phase (b);
  o No new order, and thus no new implantation location, is identified: the MMS is then routed to its storage location, which corresponds to phase (g). Depending on negotiations with the manager (client, institution, etc.) of the site where the system has been used, the MMS storage phase could take place at the former location in the expectation of a new order.
• Storage phase (g): during the MMS inoperability period, the modules have to be stored until a new order arrives. Storage can take place at a stationary home base, or at the latest operating location in order to stay closer to a potential market. During this phase, heavy maintenance operations can be conducted, such as maintenance or replacement of machines, module reconfiguration, etc.
The identification of the life-phases is important as evaluating the overall performance (cost, delay, etc.) of the system depends on it.
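As an illustration, the operational cycle described above can be sketched as a simple transition map; the phase names and the two scenarios at transport phase (f) follow the enumeration (a)–(g) in the text, while the Python structure and function names are illustrative assumptions only:

```python
# Hypothetical sketch of the MMS operational life-cycle (phases a-g)
# as a transition map between named phases.
LIFECYCLE = {
    "a_transport_to_site": ["b_on_site_installation"],
    "b_on_site_installation": ["c_on_site_production"],
    "c_on_site_production": ["d_diagnosis_and_control"],
    "d_diagnosis_and_control": ["e_dismantling"],
    "e_dismantling": ["f_transport_from_site"],
    # Scenario 1: a new order and site exist -> cycle resumes at phase (b).
    # Scenario 2: no new order -> the MMS is routed to storage, phase (g).
    "f_transport_from_site": ["b_on_site_installation", "g_storage"],
    "g_storage": ["a_transport_to_site"],
}

def next_phase(current, new_order_available):
    """Return the following life-cycle phase; the branch only matters
    after the transport phase (f)."""
    if current == "f_transport_from_site":
        return "b_on_site_installation" if new_order_available else "g_storage"
    return LIFECYCLE[current][0]
```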
2.3 Organizational aspects and training needs
Geographic mobility of the manufacturing system requires adapting the automation level to the qualification level of the personnel available on-site. To make the production system independent of on-site operator qualification, one could imagine a highly automated system. However, too much automation leads to a complexity that requires expertise to ensure MMS maintenance operations. A trade-off must therefore be achieved between the required automation level and the qualification available on-site. An on-site operator training offer facilitates this trade-off.
System mobility means that a new team is involved each time the system moves to a new implantation location [5]. Hence, providing operator training for running the manufacturing system is crucial. Moreover, Fox recalls the need for qualified local middle management to make the link between foreign personnel and the local population, which could also be responsible for applying best practices [6].
2.4 Mobility of modules
Mobility of the manufacturing system modules implies that each module must be transportable and operational on site. Modularity is an enabler of component mobility. The weight and volume of each module must be compatible with the transportation modes. In addition, the modules must withstand the various transportation constraints (mechanical shock, tightness constraints, etc.). Finally, on-site operability requires the equipment to adapt to the energy sources available on-site, and the equipment must be easy to integrate and commission.
2.5 On-site maintenance management
On the one hand, maintaining system performance during the operation phase implies adopting a comprehensive strategy that takes into account the duration of the manufacturing system's presence at a specific implantation location, in order to minimize the need for shutdowns. On the other hand, in order to carry out on-site
Framework definition for the design …
interventions, spare parts supply chain management must be adapted according to
the manufacturing system mobility.
2.6 Energy supply
Depending on the characteristics of each implantation location, the energy supply issue arises anew each time. The MMS autonomy depends on its ability to independently supply the energy required for the operation of its resources [1]. The energy supply system can be based on diesel generators or on solar panels providing the necessary power [6]. The issue of energy consumption (nature and quantity) can thus be a determining factor when choosing the MMS constituent resources.
After reviewing the requirements to be taken into account in a mobile manufacturing system analysis, we discuss the system design issue in the next section.
3 A design framework adapted for a single implantation location
The sequence of the key steps in the MMS design process [10] (Fig. 1) starts with (1) a refinement of the requirements specification, (2) the determination of what is to be carried out in-house and what is to be outsourced, and (3) the proposal of technical solutions (MMS configuration design). These three steps are discussed hereafter.
3.1 Requirements specification refinement
The design activity starts from the requirements specification that contains a description of the product to be manufactured (BOM) and details the client’s request
(production volume, delays, requirements, etc.). The initial requirements specification will be supplemented with information and details obtained after MMS and
implantation location environment analysis. This first enhanced specifications
version (noted Requirement_1 in figure 1) allows imagining a first configuration
of MMS. This MMS configuration, not economically efficient, represents a generic definition able to satisfy the demand on the proposed location.
3.2 Manufacturing strategy analysis
The generic MMS configuration is then refined through an analysis of what is relevant to produce on site and what needs to be outsourced. This analysis involves several criteria and requires the establishment of an evaluation process and decision support [11]. The analysis of the make-or-buy strategy makes it possible to decide the MMS functionalities, i.e. the operations that the MMS should be able to carry out on the implantation location. The description of the necessary MMS functionalities
supplements the previous requirements specification (noted Final Requirement in Fig. 1). The MMS design activity can now be conducted.
Fig. 1. Mobile manufacturing system design framework adapted for a single implantation location
3.4 Design of MMS configuration
This activity takes as input the latest version of the requirements specification and the technical data about all the resources that will be integrated into the MMS configuration, as well as production management information and assumptions. The choice of the MMS configuration is based on several decision criteria.
In addition to the typical cost, quality and delay requirements, the proposed approach incorporates new criteria that are specific to the context of mobility [10]: the mobility index, the integrability index and the criterion of on-site resources availability.
3.4.1 Mobility index
Analyzing mobility during the embodiment design phase concerns the whole production system defined by all its components. These components can be classified
into two categories: technical equipment and human modules. The assessment of
technical equipment and human modules mobility is based on different approaches
involving several criteria. It is therefore necessary to evaluate each category and
then aggregate the results to give a unique appreciation of the whole manufacturing system mobility [10]. This appreciation can be expressed by a quantitative
value between 0 and 1 that indicates a satisfaction index. The index construction
approach is based on a multi-criteria analysis. Two important concepts are used:
the expression of preference and the criteria aggregation.
On the one hand, the mobility of an MMS technical module has to be satisfied throughout all its life phases: to be mobile, a technical module must be transportable, mountable on site, operable on site and dismountable. On the other hand, the human system operates by providing flexible working ability to carry out simple or complex operations contributing to the functioning of the MMS. This requires skills acquired or developed during the on-site production phase. Human system mobility can thus be understood as the mobility of one or more skills necessary for the manufacturing system operation.
3.4.2 Integrability index
Generating an MMS configuration consists in integrating various independent modules (machines, operators, conveyors, etc.). In order to obtain feasible configurations, it is necessary to ensure that the selected modules can be integrated with each other. Each module has one or more interfaces used to bind to other modules. The integrability evaluation process of an MMS configuration combines two approaches [10]:
• A decomposition analysis approach (top-down): the MMS configuration is broken down into individual modules, each of which shares common interfaces with one or more other MMS modules. The integrability analysis is carried out at the level of each elementary module of the MMS configuration.
• An assessment approach based on integration (bottom-up): it is based on the definition and evaluation of all nodes in the system configuration. The individual measurements are then aggregated to give a single measure of the MMS configuration's integrability.
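A minimal sketch of the bottom-up assessment, assuming each node (a shared interface between two modules) has already been given a compatibility score in [0, 1] and that a min-aggregation is used, so that the weakest interface limits the whole configuration; the module names and scores are hypothetical:

```python
def integrability_index(nodes):
    """nodes: dict mapping (module_i, module_j) -> interface score in [0, 1].
    Returns the configuration-level index (minimum over all nodes)."""
    if not nodes:
        return 1.0  # nothing to integrate
    return min(nodes.values())

nodes = {
    ("machine_1", "conveyor"): 1.0,    # standard mechanical interface
    ("conveyor", "machine_2"): 0.8,    # adapter required
    ("machine_2", "power_unit"): 0.6,  # on-site wiring needed
}
score = integrability_index(nodes)  # limited by the weakest interface
```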
3.4.3 Criterion of on-site resource availability
For a given MMS configuration, the evaluation of competence availability starts with the assessment of the skills required by this configuration. Thus, for each entity of the configuration, the required skills are identified based on the attribute “needed skills” contained in the description of each resource. This attribute is matched against the competences available on the implantation location. An evaluation method is proposed to ensure that the resources required by the suggested MMS configuration are available on the implantation location [10]. The assessment of skills availability is split into three stages: identification of the required skills, identification of relevant actor profiles, and assessment of the availability of these profiles on the implantation location.
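The three-stage assessment could be sketched as a simple set-based matching of the “needed skills” attribute against the skills of on-site actor profiles; the data structures and field names below are illustrative assumptions, not the evaluation method of [10]:

```python
def skills_availability(configuration, site_profiles):
    """Three-stage check: (1) collect the skills required by the resources
    of the configuration, (2) identify relevant actor profiles, (3) verify
    coverage of the required skills on the implantation site."""
    required = set()
    for resource in configuration:
        required |= set(resource["needed_skills"])        # stage 1
    covered = set()
    for profile in site_profiles:
        covered |= required & set(profile["skills"])      # stages 2-3
    return required <= covered, sorted(required - covered)

config = [{"name": "welding_cell", "needed_skills": ["welding", "maintenance"]},
          {"name": "cnc_machine", "needed_skills": ["machining"]}]
profiles = [{"name": "technician_A", "skills": ["welding", "machining"]}]
ok, missing = skills_availability(config, profiles)
```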
4 Conclusions
In this communication, the concept of a mobile manufacturing system has been discussed. The mobility requirements were addressed and a mobile manufacturing system design framework was presented. The design process is based on a set of decision criteria: in addition to the typical cost, quality and delay criteria, three other decision criteria are proposed, namely the mobility index, the integrability index and a criterion of on-site resources availability. The proposed design approach is limited to the consideration of a single implantation location. However, the concept of successive mobility requires that the same production system be operated successively on several implantation locations. The design approach must be adapted to the multi-site context by integrating the concept of reconfigurability. A first analysis of this issue is presented in [10]. This issue of successive multi-site mobility will be addressed in future communications.
References
1. Rauch E. and Dallasega P. Mobile on-site factories: scalable and distributed manufacturing systems for the construction industry. 2015.
2. Stillström C. and Jackson M. The concept of mobile manufacturing. Journal of Manufacturing Systems 26 (3-4), 2007, pp. 188-193.
3. Rösiö C. Supporting the design of reconfigurable production systems. 2012. Jönköping University.
4. Flores A.J. Contribution aux méthodes de conception modulaire de produits et processus industriels. 2005. Institut National Polytechnique de Grenoble.
5. Olsson E., Mikael H. and Mobeyen U.A. Experience reuse between mobile production modules: an enabler for the factory-in-a-box concept. Gothenburg, Sweden, 2007.
6. Fox S. Moveable factories: How to enable sustainable widespread manufacturing by local people in regions without manufacturing skills and infrastructure. Technology in Society 42, 2015, pp. 49-60.
7. Pahl G., Beitz W., Feldhusen J. and Grote K.-H. Engineering Design: A Systematic Approach. Springer Science & Business Media. 2007.
8. Alfieri A., Cantamessa M., Montagna F. and Raguseo E. Usage of SoS Methodologies in Production System Design. Computers & Industrial Engineering 64 (2), 2013, pp. 562-572.
9. Fiorèse S. and Meinadier J.P. Découvrir et comprendre l'ingénierie système. AFIS. Cépaduès Éditions. 2012.
10. Benama Y. Formalisation de la démarche de conception de système de production mobile : intégration des concepts de mobilité et de reconfigurabilité. Thèse de doctorat. 2016. Université de Bordeaux.
11. Benama Y., Alix T. and Perry N. Supporting make or buy decision for reconfigurable manufacturing system, in multi-site context. APMS, Ajaccio, September 2014, pp.
An automated manufacturing analysis of plastic
parts using faceted surfaces
Jorge Manuel Mercado-Colmenero a, José Angel Moya Muriana b, Miguel Angel Rubio-Paramio a, Cristina Martín-Doñate a,*
a Department of Engineering Graphics Design and Projects, University of Jaen, Campus Las Lagunillas, s/n, 23071 Jaen, Spain
b ANDALTEC Plastic Technological Center, C/Vilches s/n, 23600 Martos-Jaen, Spain
* Corresponding author. Tel.: +34 953212821; fax: +34 953212334. E-mail address: cdonate@ujaen.es
Abstract
In this paper a new methodology for the automated demoldability analysis of parts manufactured via plastic injection molding is presented. The proposed algorithm takes as geometric input the faceted surface mesh of the plastic part and the parting direction. The demoldability analysis is based on a sequential model that catalogs the nodes and facets of the given mesh. First, the demoldability of the nodes is analyzed; subsequently, from the results of the node analysis, the facets of the mesh are cataloged as: demoldable (facets belonging to the cavity and core plates), semi-demoldable (plastic part manufacturable by mobile mechanisms, i.e. side cores) and non-demoldable (plastic part not manufacturable). This methodology uses a discrete model of the plastic part, which provides an additional advantage since the algorithm works independently of the modelling software and creates a new virtual geometry providing information on its manufacture, exactly like CAE software. All elements of the mesh (nodes and facets) are stored in arrays, according to their demoldability category, together with information about their manufacture for possible use in other CAD/CAE applications related to the design, machining and cost analysis of injection molds.
Keywords: Manufacturing analysis; mesh analysis; injection molding; CAD.
1 Introduction and Background
Injection molding is the industrial method most commonly used for producing plastic parts that require finishing details with tight tolerances and dimensional control. Currently, the plastic industry demands graphical and computational tools to reduce the design time of plastic parts and of the plastic injection molds that produce
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_13
them. Currently, CAD and CAE systems enable design engineers to reduce the time spent on design tasks, simulation, manufacturability analysis and cost estimation.
The demoldability analysis of the plastic part, including the detection of slides and internal undercuts, has become an important area of research within the field of injection mold design, because these features directly affect the design and its final cost. Different methodologies have addressed the demoldability analysis of the plastic part by means of visibility techniques along the parting direction. Authors such as Chen et al. [1], who proposed addressing visibility and the estimation of the optimal parting direction through the concept of pockets, or Manoochehri [2], were pioneers of this technique.
Other authors have focused their research on the recognition of features of the plastic part in CAD format. A feature is defined as a discrete region of the part that carries information about its modeling and manufacturing. The feature extraction methodology makes the part information available and enters it as the input of a structured algorithm. Fu et al. developed a set of algorithms for solving the demoldability analysis by means of feature recognition, including the recognition of undercut features [3], the definition of the parting direction [4], the parting line and parting surface [5], the recognition of the upper and lower cavities [6], and the design of side cores [7]. Yin et al. [8] proposed a methodology to recognize undercut features for near-net shapes. Ye et al. [9] provided a hybrid method for undercut feature recognition and [10] extended their work to side core design. Other methods combine feature recognition with a visibility algorithm, given a parting direction, by discretizing the plastic part. Singh et al. [11] describe the automated identification, classification, division and determination of complex undercut features of die-cast parts.
Nee et al. [12, 13] proposed to solve the demoldability analysis by classifying the plastic part surfaces according to their orientation relative to the parting direction and the connections between them. This method uses the dot product between the parting direction and the surface normal vectors in order to determine the demoldability of the surfaces.
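The orientation test used in this family of methods can be illustrated with a short sketch: the sign of the dot product between a facet normal and the parting direction indicates which mold half can release it. The thresholds and category names are assumptions of this sketch, not taken from [12, 13]:

```python
def classify_by_normal(normal, parting_direction, eps=1e-9):
    """Classify a facet by the dot product of its (unit) normal
    with the parting direction Dz."""
    d = sum(n * p for n, p in zip(normal, parting_direction))
    if d > eps:
        return "cavity"    # faces the +Dz mold half
    if d < -eps:
        return "core"      # faces the -Dz mold half
    return "vertical"      # normal perpendicular to Dz

Dz = (0.0, 0.0, 1.0)
half = classify_by_normal((0.0, 0.3, 0.95), Dz)  # a facet tilted towards +Dz
```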
Huang et al. [14] and Priyadarshi et al. [15] focused their research on the application of demoldability and visibility analysis to multi-piece molds. There, a facet of the discretized geometry of the plastic part is demoldable if it is accessible along the parting direction and not obstructed by any other facet of the rest of the part. The applicability of this type of mold is largely limited to prototyping.
Rubio et al. [16] and Martin et al. [17, 18] based their demoldability analysis on algorithms that discretize the model by means of sections defined by cutting planes, which are crossed by straight lines. A set of intersection points on the workpiece is generated and analyzed according to their demoldability. Nevertheless, the precision obtained is far from that of other methods (e.g. feature recognition). Finally, other authors have used GPUs as a tool for detecting undercuts in the plastic part. Khardekar et al. [19] limited the use of GPUs to recognizing the possible parting directions that do not generate any undercut. This
paper proposes a new method of automated demoldability analysis based on the geometry of the discretized plastic part (a set of mesh nodes and facets). It is independent of the CAD modeler and is valid for any type of surface mesh of any plastic part. After the analysis, a new virtual geometry incorporating manufacturing information of the plastic part is generated.
2 Methodology
2.1 XOY Planes Beam Generation, Preprocessing
Starting from a 3D plastic part to be manufactured, a three-dimensional mesh formed by a set of nodes (N) and facets (F) is generated. The facets that make up the mesh are triangular; hence a facet Fi ∈ ℝ³ has 3 unique nodes Nij ∈ ℝ³ associated with it.
The presented methodology is based, first, on an arrangement of the nodes Nij ∈ ℝ³ according to their Z coordinate (the parting direction, Fig. 1). Then, a sheet set of XOY analysis planes πp is built, such that each node Nij ∈ ℝ³ of the mesh belongs to a plane πpk ∈ ℝ³ (equation 1). Each XOY plane is associated with a node of the mesh and therefore also with the facet to which the node belongs.
(1)
This arrangement of the mesh elements is performed downwardly along the parting direction. Note that a facet Fi ∈ ℝ³ belongs to only one plane πpk ∈ ℝ³, defined by the node of the facet with the greatest z coordinate along the parting direction.
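A minimal sketch of this preprocessing step, assuming a simple list-based mesh representation (the data structures and function name are illustrative, not the paper's implementation):

```python
def build_analysis_planes(nodes, facets):
    """nodes: list of (x, y, z) tuples; facets: list of 3-tuples of node
    indices. Returns a dict z_level -> list of facet indices, ordered from
    the highest plane downward (the sweep order along the parting direction).
    Each facet is assigned to the plane of its node with greatest z."""
    planes = {}
    for fi, facet in enumerate(facets):
        z_top = max(nodes[ni][2] for ni in facet)
        planes.setdefault(z_top, []).append(fi)
    return dict(sorted(planes.items(), reverse=True))

nodes = [(0, 0, 2.0), (1, 0, 2.0), (0, 1, 1.0), (1, 1, 0.0), (0.5, 0.5, 0.5)]
facets = [(0, 1, 2), (2, 3, 4)]
planes = build_analysis_planes(nodes, facets)
```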
2.2 Recognition algorithm of demoldable facets along the parting
direction, Processing
Before describing the set of logical operations that make up the algorithm, a set of initial premises that complement it should be established:
• The demoldability analysis is performed along the parting direction, Dz (Fig. 1), which is established as an input of the present algorithm.
• For the reclassification of facets into cavity and core plates, a double sweep is performed along the parting direction, in the positive and negative senses.
• Vertical and non-vertical facets are analyzed independently.
2.2.1 Non-vertical facets
The algorithm begins with the facets belonging to the first plane πp1 ∈ ℝ³, which are classified as demoldable; therefore both these facets and the nodes that compose them are classified as demoldable by means of the cavity plate and belong to βf ∈ ℝ³.
(2)
Where βf ∈ ℝ³ (Fig. 1) represents the array of facets that are demoldable by means of the cavity plate. For the following analysis levels [2, m], this algorithm proposes to assess facet demoldability by projecting the facet's nodes and control points PGauss along the parting direction. Given the analysis plane πpk ∈ ℝ³, the demoldability of a facet Fi ∈ ℝ³ associated with it is analyzed using as a reference the information of all facets analyzed in the previous planes. Based on the above premise, the facets belonging to the first level are classified as demoldable. Thus, a facet Fi is considered demoldable if the projection of its associated nodes Nij and PGauss,i does not intersect the facets assigned as demoldable (belonging to βf ∈ ℝ³) or not-demoldable (belonging to ηfcav ∈ ℝ³) in the immediately preceding planes.
(3)
(4)
Where ηfcav ∈ ℝ³ (Fig. 1) represents the array of facets of the mesh that are not demoldable by means of the cavity plate or, as described in subsequent sections, semi-demoldable facets.
Fig. 1. Location of the facets belonging to βf (green, demoldable facets) and to ηfcav (red, not-demoldable facets).
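The per-plane test for non-vertical facets can be illustrated with a simplified sketch in which the full projection/intersection check is reduced to testing whether the projected nodes of a candidate facet fall inside any previously classified facet; the helper names and this point-in-triangle reduction are assumptions of the sketch, not the paper's exact criterion:

```python
def point_in_triangle(p, a, b, c):
    """2D point-in-triangle test using signed areas."""
    def sign(p1, p2, p3):
        return (p1[0]-p3[0])*(p2[1]-p3[1]) - (p2[0]-p3[0])*(p1[1]-p3[1])
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def facet_is_demoldable(facet_xy, classified_facets_xy):
    """facet_xy: 3 projected (x, y) nodes of the candidate facet;
    classified_facets_xy: projected facets from the planes already swept."""
    return not any(point_in_triangle(p, *tri)
                   for p in facet_xy for tri in classified_facets_xy)

upper = [((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))]          # facet already classified above
free = facet_is_demoldable([(10.0, 10.0), (11.0, 10.0), (10.0, 11.0)], upper)
blocked = facet_is_demoldable([(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)], upper)
```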
2.2.2 Vertical facets
Vertical facets (facets whose normals are perpendicular to the parting direction Dz) are cataloged from the set of facets previously assigned as non-vertical and demoldable by means of the cavity plate. To do so, a border contour (equation 5) is established from the facets belonging to βf ∈ ℝ³.
(5)
Thus, a vertical facet Fi ∈ ℝ³ is demoldable if the projections of its nodes Nij and Gauss points PGauss,i belong to the border contour Fr(βf). If so, these facets are stored in the array of facets demoldable by means of the cavity plate, βf ∈ ℝ³; in the opposite case they are stored in the array of facets not-demoldable by means of the cavity plate or semi-demoldable, ηfcav ∈ ℝ³ (both arrays previously established).
(6)
(7)
Fig. 2. Location of vertical facets belonging to βf and to ηfcav.
2.2.3 Reallocating demoldable facets to the core plate
Once the sweep along the positive parting direction (+Dz) is performed, the algorithms defined in sections 2.2.1 and 2.2.2 are run again, reorienting the part in the negative parting direction (-Dz). This yields the set of facets demoldable by means of the core plate, which is stored in the array γf ∈ ℝ³, and the set of facets not-demoldable by means of the core plate, which is stored in the array ηfcor ∈ ℝ³.
To do this, a set of unification requirements for facets with duplicated results must be established:
• Facets demoldable by means of both the cavity and core plates (duplicated results) are stored in the array γf ∈ ℝ³ (core plate) and removed from βf ∈ ℝ³ (cavity plate).
(8)
• Facets classified as demoldable by means of the core plate (second analysis, -Dz) but not-demoldable by means of the cavity plate (first analysis, +Dz) are stored in the array γf ∈ ℝ³ (core plate) and removed from ηfcav ∈ ℝ³ (facets not-demoldable by means of the cavity plate or semi-demoldable).
(9)
• Similarly, facets classified as demoldable by means of the cavity plate (first analysis, +Dz) but not-demoldable by means of the core plate (second analysis, -Dz) are stored in the array βf ∈ ℝ³ (cavity plate) and removed from ηfcor ∈ ℝ³ (facets not-demoldable by means of the core plate or semi-demoldable).
(10)
Fig. 3. Demoldability analysis along +Dz and -Dz. Unification of results, boundary conditions.
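The three unification rules can be summarized with set operations over facet index sets; the set names mirror the arrays of the text (beta_f: cavity, gamma_f: core, eta_cav/eta_cor: not demoldable by cavity/core), while the function itself is only an illustrative sketch:

```python
def unify(beta_f, gamma_f, eta_cav, eta_cor):
    """Apply the three unification rules to facet index sets."""
    both = beta_f & gamma_f
    gamma_f |= both; beta_f -= both               # rule 1: duplicates go to core
    moved = gamma_f & eta_cav; eta_cav -= moved   # rule 2: core removes from eta_cav
    kept = beta_f & eta_cor; eta_cor -= kept      # rule 3: cavity removes from eta_cor
    return beta_f, gamma_f, eta_cav, eta_cor

# Facet 3 is demoldable by both plates, facet 4 only by the core plate,
# facet 2 only by the cavity plate.
beta, gamma, ecav, ecor = unify({1, 2, 3}, {3, 4}, {4, 5}, {2, 6})
```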
2.3 Reallocation algorithm for not-demoldable facets: lateral slides or not-demoldable undercuts
This section describes the algorithm for the reclassification of the facets Fi ∈ [ηfcav ∪ ηfcor]. As shown in Fig. 4, these facets can have their domain divided, creating new virtual polygonal facets. The automatic division of these facets allows evaluating inner regions that may or may not be demoldable, depending on the presence of overlap between these facets and the facets defined above as demoldable. By means of a comparative facet-to-facet process, it can be determined whether not-demoldable or semi-demoldable facets ([ηfcav ∪ ηfcor]) are entirely or partially overlapped by demoldable facets ([βf ∪ γf]). To check for overlap between a pair of facets, both facets are projected onto a plane perpendicular to the parting direction and a Boolean logic operation checks whether there is contact between them. A facet Fi ∈ [ηfcav ∪ ηfcor] is semi-demoldable if its nodes have z coordinates along the parting direction below those of the nodes of the reference facet (belonging to [βf ∪ γf]) and if the intersection between these facets is not empty. Otherwise, it is reclassified as demoldable.
(11)
Where δf ∈ ℝ³ represents the set of all semi-demoldable facets and Fref ∈ ℝ³ represents a reference facet used to check for overlap. Once the semi-demoldable facets are defined, they are fragmented, finding for each one the region demoldable by means of the upper or lower cavity and the not-demoldable region. The division of semi-demoldable facets is performed by applying a methodology of subtraction and intersection between each semi-demoldable facet and the closed set of reference facets.
(12)
Fig. 4. Example of resolution of semi-demoldable facets. Boolean operation.
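A simplified sketch of the semi-demoldability condition, replacing the exact Boolean intersection of the projections with an axis-aligned bounding-box overlap test (an assumption of this sketch, chosen for brevity):

```python
def boxes_overlap(a, b):
    """a, b: lists of projected (x, y) points; bounding-box overlap test."""
    ax = [p[0] for p in a]; ay = [p[1] for p in a]
    bx = [p[0] for p in b]; by = [p[1] for p in b]
    return min(ax) <= max(bx) and min(bx) <= max(ax) and \
           min(ay) <= max(by) and min(by) <= max(ay)

def is_semi_demoldable(facet, reference):
    """facet, reference: lists of (x, y, z) nodes; reference is a demoldable
    facet. Semi-demoldable = below the reference along z AND overlapping
    it in the XY projection."""
    below = max(p[2] for p in facet) < min(p[2] for p in reference)
    xy = lambda f: [(p[0], p[1]) for p in f]
    return below and boxes_overlap(xy(facet), xy(reference))

ref = [(0, 0, 5.0), (4, 0, 5.0), (0, 4, 5.0)]  # demoldable reference facet
shadowed = is_semi_demoldable([(1, 1, 1.0), (2, 1, 1.0), (1, 2, 1.0)], ref)
clear = is_semi_demoldable([(10, 10, 1.0), (11, 10, 1.0), (10, 11, 1.0)], ref)
```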
Finally, the set of facets classified as not-demoldable is analyzed again to check their demoldability by means of side cores. Thus, the part is reoriented by turning 90° around the X axis and then around the Y axis (checking the demoldability in new parting directions, D'z, perpendicular to the main parting direction), Fig. 5. For each turn, the algorithms presented in the previous sections are run, excluding those facets already classified as demoldable in this phase.
Fig. 5. Side Core.
3 Implementation and Results
In order to validate this new methodology of automated demoldability analysis, three cases of plastic injection parts were analyzed. All analyses were performed with the same mesh precision (angle and deviation). The algorithm was implemented in the numerical computing software Matlab R2013a®. In contrast to other methods, this algorithm has the advantage of being adaptable to other programming languages, and its application extends to any type of surface mesh. The results of the algorithm are presented below; as shown, the proposed cases are grouped according to their degree of demoldability into: demoldable, demoldable via side core and non-demoldable.
First, Case A (Fig. 6) is completely demoldable: all facets of the plastic part mesh are demoldable in the parting direction Dz := Z. As shown in Fig. 6, Case A is composed (Table 1) of 406 facets demoldable through the cavity plate and 554 facets demoldable through the core plate. Its manufacture is therefore trivial and does not require any slide mechanism.
Then, Case B (Fig. 6) is demoldable by using side cores. In contrast to the previous case, this plastic part requires a side core for manufacturing which, as shown in Fig. 6, is defined in the direction DSide := Y, perpendicular to the parting direction. Case B is composed (Table 1) of 120 facets demoldable through the cavity plate, 358 facets demoldable through the core plate and 20 facets demoldable through a slide mechanism or side core.
Finally, Case C is non-demoldable. Case C is composed (Table 1) of 2428 facets demoldable through the main parting direction Dz := Z. Like Case B, it possesses 80 facets which require a sliding mechanism in order to be demoldable in the side direction DSide := X (perpendicular to the parting direction). Nevertheless, as shown in Fig. 6, the core of the plastic part is not demoldable in any direction, so 518 facets are categorized as non-demoldable. This implies either the need to modify the geometry of the plastic part to make it demoldable, or the impossibility of manufacturing it by the plastic injection molding technique.
Table 1. Demoldability results for the plastic parts A, B and C.

Case | Parting direction | Cavity facets | Core facets | Side core facets | Side core direction | Non-demold. facets | Manufacturable
A    | Z                 | 406           | 554         | -                | -                   | -                  | Yes
B    | Z                 | 120           | 358         | 50               | Y                   | -                  | Yes, through side core
C    | Z                 | 1214          | 1214        | 80               | X                   | 518                | No
Fig. 6. Demoldability results for the plastic parts A, B and C.
4 Conclusions
In this paper a new methodology for evaluating demoldability for a given parting direction has been proposed. The method performs a discrete analysis of the geometry of the plastic part, examining the demoldability of the facets and nodes belonging to its mesh. The developed algorithm takes as input the discretized surface of the plastic part and, after analyzing it, generates a new virtual geometry that incorporates information about the manufacture of the part.
The algorithm detects facets that are demoldable through the cavity and core plates and facets that are non-demoldable. In the second case, the demoldability of said facets is evaluated in directions perpendicular to the parting direction, allowing the geometry and direction of side cores to be defined. Finally, the designer of the plastic part can adapt and modify the geometry of the regions cataloged as non-demoldable. This reduces the time and costs associated with the initial phases of the design and manufacturing of the injection mold.
The proposed method improves on the methods developed so far, since it performs the demoldability analysis independently of the CAD modeler, is valid for any plastic part geometry, and does not need access to internal information of the part. The geometry of the solid remains stored in arrays for later use in other CAD/CAE applications related to injection mold design, machining of the cavity and core plates, etc.
Future work includes the implementation of the proposed algorithm in an automated mold design system.
Acknowledgments This work has been supported by the Consejeria de Economía, Ciencia y Empleo (Junta de Andalucia, Spain) through the project titled "A vertical design software for integrating operations of automated demoldability, tooling design and cost estimation in injection molded plastic parts (CELERMOLD)" (Project Code TI-12 TIC-1623).
J.M. Mercado-Colmenero et al.
References
1. Chen L.L., Woo T.C. Computational geometry on the sphere with application to automated machining. ASME Transactions, Journal of Mechanical Design; 114:288-295.
2. Weinstein M., Manoochehri S. Optimal parting direction of molded and cast parts for manufacturability. Journal of Manufacturing Systems 1997; 16(1):1-12.
3. Fu M.W., Fuh J.Y.H., Nee A.Y.C. Undercut feature recognition in an injection mould design system. Computer Aided Design 1999; 31(12):777-790.
4. Fu M.W., Fuh J.Y.H., Nee A.Y.C. Generation of optimal parting direction based on undercut features in injection molded parts. IIE Transactions 1999; 31:947-955.
5. Fu M.W., Nee A.Y.C., Fuh J.Y.H. The application of surface visibility and moldability to parting line generation. Computer Aided Design 2002; 34(6):469-480.
6. Fu M.W., Nee A.Y.C., Fuh J.Y.H. A core and cavity generation method in injection mold design. International Journal of Production Research 2001; 39:121-138.
7. Fu M.W. The application of surface demoldability and moldability to side core design in die and mold CAD. Computer Aided Design 2008; 40(5):567-575.
8. Yin Z.P., Ding H., Xiong Y.L. Virtual prototyping of mold design: geometric mouldability analysis for near-net-shape manufactured parts by feature recognition and geometric reasoning. Computer-Aided Design 2001; 33(2):137-154.
9. Ye X.G., Fuh J.Y.H., Lee K.S. A hybrid method for recognition of undercut features from moulded parts. Computer-Aided Design 2001; 33(14):1023-1034.
10. Ye X.G., Fuh J.Y.H., Lee K.S. Automatic undercut feature recognition for side core design of injection molds. Journal of Mechanical Design 2004; 126:519-526.
11. Singh R., Madan J., Kumar R. Automated identification of complex undercut features for side core design for die casting parts. Journal of Engineering Manufacture 2014; 228(9):1138-1152.
12. Nee A.Y.C., Fu M.W., Fuh J.Y.H., Lee K.S., Zhang Y.F. Determination of optimal parting direction in plastic injection mould design. Annals of the CIRP 1997; 46(1):429-432.
13. Nee A.Y.C., Fu M.W., Fuh J.Y.H., Lee K.S., Zhang Y.F. Automatic determination of 3D parting lines and surfaces in plastic injection mould design. Annals of the CIRP 1998; 47(1):95-99.
14. Huang J., Gupta S.K., Stoppel K. Generating sacrificial multi-piece molds using accessibility driven spatial partitioning. Computer Aided Design 2003; 35(3):1147-1160.
15. Priyadarshi A.K., Gupta S.K. Geometric algorithms for automated design of multi-piece permanent molds. Computer Aided Design 2004; 36(3):241-260.
16. Rubio M.A., Pérez J.M., Rios J. A procedure for plastic parts demoldability analysis. Robotics and Computer-Integrated Manufacturing 2006; 22(1):81-92.
17. Martin Doñate C., Rubio Paramio M.A. New methodology for demoldability analysis based on volume discretization algorithms. Computer Aided Design 2013; 45(2):229-240.
18. Martin Doñate C., Rubio Paramio M.A., Mesa Villar A. Automated validation method for the manufacturability of three-dimensional object designs based on their geometry (Método de validación automatizada de la fabricabilidad de diseños de objetos tridimensionales en base a su geometría). Patent ES 2512940.
19. Khardekar R., McMains S. Finding mold removal directions using graphics hardware. In: ACM Workshop on General Purpose Computing on Graphics Processors; 2004, pp. C-19 (abstract).
Applying sustainability in product development
Rosana Sanz*, José Luis Santolaya, Enrique Lacasa
Department of Design and Manufacturing Engineering, EINA, University of Zaragoza, C/ Maria de Luna 3, Zaragoza 50018, Spain
* Corresponding author. Tel.: +34-976-761-900; fax: +34-976-762-235. E-mail address: rsanz@unizar.es
Abstract
Sustainable product development initiatives have been evolving for some time to support companies in improving the efficiency of current production and the design of new products and services through supply chain management. This work aims at integrating environmental criteria in product development projects while traditional product criteria are still fulfilled. The manufacturing process of an airbrush was studied. Different strategies were applied: optimization of raw materials and energy consumption along the manufacturing operations, identification of the product components that could be modified according to a DFA analysis, evaluation of the recyclability rate of the materials making up the product, and identification of the materials with the highest environmental impact. An approach based on two main strategies, optimization of materials and optimization of processes, is proposed for use by engineering designers as a progressive education in eco-design practice.
Keywords: Sustainability; product development; design guidelines
1 Introduction
The progress toward sustainability implies maintaining, and preferably improving, both human and ecosystem well-being [1]. Achieving sustainable development in industry will require changes in organizational models and production processes in order to balance the efficiency of operations with responsibilities for environmental and social actions [2].
Driven by stimuli such as the opportunities for innovation, the expected increase of product quality and the potential market opportunities [3], sustainable product development initiatives have been evolving for some time to support companies in improving the design of new products and services through supply chain management.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering and Manufacturing, Lecture Notes in Mechanical Engineering, DOI 10.1007/978-3-319-45781-9_14
Several authors have contributed to the development of methods and tools that consider environmental criteria in the same way as conventional design criteria, through an Eco-design approach. Using Eco-design or Design for the Environment (DfE), all environmental impacts of a product are addressed throughout its complete life cycle, without unduly compromising other criteria and specifications such as function, quality, cost and appearance. As shown in Fig. 1, a whole product system life cycle includes five different stages: materials obtaining, production process, distribution, use and final disposition.
Fig. 1. Stages of the product life cycle.
Eco-design integrates Design for X (DfX) strategies of all life span phases into
one [4]. It can benefit from techniques such as design for disassembly, design for
end-of-life and design for recycling. This methodology is inspired by the concurrent engineering and integrated design, which imply the incorporation of downstream factors, such as manufacturing, assembly, maintenance and end-of-life at
the very beginning of the design project [5].
Specific tools for eco-design can be classified in environmental assessment of
products and environmental improvement tools. Environmental assessment tools
are generally based on a life cycle assessment (LCA) method. The well-known
structure of goal definition and scoping, inventory analysis, impact assessment and
interpretation was developed during the harmonization-standardization work by
SETAC and ISO 14040 [6, 7]. Environmental impact is usually expressed by
means of indicators based on LCA evaluation methods. On the other hand, environmental improvement tools provide guidelines and rules for helping designers to
identify potential actions to improve the environmental performance of products.
Brezet and van Hemel [8] developed the Life Cycle Design Strategies (LiDS) wheel, which identifies different strategies to achieve sustainability around the product life cycle. The LiDS wheel can be used to estimate the environmental profile of an existing product or to evaluate the action plan for a new product.
This work focuses on the production stage of the product life cycle. The methodology applied and the results obtained for a case study are presented in the following sections.
2 Methodology
In order to achieve a more sustainable product, the following operative method
is proposed (Fig. 2):
Fig. 2. Methodology for a more sustainable product development.
The identification, classification and proper characterization of the different product components is a preliminary required task. The study of the production process covers all operations needed for the manufacture, assembly and finishing of each product component.
A set of indicators is used to assess the sustainability of the production process: the global warming indicator (GW), which represents the mass of CO2 emitted to the atmosphere, the energy consumption and the percentage of material removed. The EuP Eco-profiler tool is proposed to evaluate the global warming indicator. The database and calculation procedure of this tool are defined in the MEEuP methodology [9]. Input data take into account the mass of each material making up the product and the energy consumed along the manufacturing process; output data are different eco-indicators. Energy consumption and waste percentage are calculated by means of the elementary flows exchanged by the industrial installation.
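The structure of such a GW aggregation can be sketched as follows: the mass of each material times a CO2 emission factor, plus electricity-related emissions. The factors below are back-calculated from Table 1 (GW divided by raw-material mass) so the example reproduces the table's total; the real MEEuP database, and the assumed grid factor of 0.43 kg CO2/kWh, are more detailed than this sketch.

```python
# Emission factors (kg CO2 per kg of raw material), back-calculated
# from Table 1 of this paper; illustrative only.
EMISSION_FACTORS = {"steel": 2.7, "brass": 1.0, "ptfe": 14.3, "chromium": 41.8}

def global_warming_indicator(bill_of_materials, process_energy_kwh=0.0,
                             grid_factor=0.43):
    """Total kg CO2 from raw materials plus manufacturing electricity.

    bill_of_materials: {material: mass in kg}
    grid_factor: assumed kg CO2 per kWh of electricity
    """
    material_co2 = sum(EMISSION_FACTORS[mat] * kg
                       for mat, kg in bill_of_materials.items())
    return material_co2 + grid_factor * process_energy_kwh

# Raw-material masses of the airbrush (Table 1), in kg
bom = {"steel": 0.360, "brass": 0.010, "ptfe": 0.00007, "chromium": 0.022}
print(round(global_warming_indicator(bom), 2))  # → 1.9, matching Table 1
```

Adding a nonzero `process_energy_kwh` would fold the manufacturing electricity into the same indicator.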
Furthermore, a design for assembly (DFA) indicator is obtained. The most common methods are based on measuring the ease or difficulty with which parts can be handled and assembled into a given product. An analytical design-for-assembly procedure is followed in which the problems associated with component design are detected and quantitatively assessed [10]. The manual assembly process is divided into two separate areas: handling (acquiring, orienting and moving a part) and insertion-fastening (mating a part to another part or group of parts). The result of this analysis is a DFA indicator, obtained by dividing the theoretical minimum assembly time by the actual assembly time.
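This ratio can be sketched as below, assuming Boothroyd-style timings in which an "ideal" part takes about 3 s to assemble; the part timings in the example are invented for illustration.

```python
def dfa_index(parts, time_per_ideal_part=3.0):
    """DFA indicator: theoretical minimum assembly time divided by the
    actual assembly time.

    parts: (handling_time_s, insertion_time_s, essential) per part,
    where essential is 1 if the part must exist separately (e.g. it is
    of a different material, or must be separable from the rest) and 0
    if it could in principle be combined with another part.
    """
    actual_time = sum(h + i for h, i, _ in parts)
    n_min = sum(e for _, _, e in parts)  # theoretical minimum part count
    return (n_min * time_per_ideal_part) / actual_time

# Toy assembly: three parts, only two of them theoretically essential
parts = [(1.5, 2.5, 1), (1.8, 3.2, 1), (1.2, 1.8, 0)]
print(dfa_index(parts))  # → 0.5, i.e. 50 % design efficiency
```

A higher value after a redesign indicates that the assembly moved closer to its theoretical minimum.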
On the other hand, recyclability was analyzed through an indicator that represents the percentage of material that can be recovered by manual separation or trituration. Recyclability can be calculated once the following aspects are known: the material type and mass of each product component and the rate of recyclability (RCR) of each material [11].
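A sketch of this mass-weighted calculation, using the airbrush material masses from Table 1; the RCR values below are assumed for illustration only (real rates come from tables such as those of [11]).

```python
def recyclability_rate(components, rcr):
    """Mass-weighted percentage of product material that can be recovered.

    components: (material, mass_g) pairs
    rcr: recyclability rate per material, as a fraction in [0, 1]
    """
    total_mass = sum(mass for _, mass in components)
    recovered = sum(mass * rcr[mat] for mat, mass in components)
    return 100.0 * recovered / total_mass

# Airbrush masses from Table 1; RCR values are ASSUMED placeholders
rcr = {"steel": 0.98, "brass": 0.95, "ptfe": 0.30, "chromium": 0.0}
components = [("steel", 160), ("brass", 4.17),
              ("ptfe", 0.023), ("chromium", 7.4)]
print(round(recyclability_rate(components, rcr), 1))  # → 93.7 with these rates
```

With the real RCR values the paper reports that about 95% of the product mass is recoverable by manual separation.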
The last stage of the operative method is the product redesign. Strategies such as the reduction of materials, the selection of low-impact and recyclable materials, and the easy insertion, manipulation and assembly of components are proposed, while the design specifications are preserved.
3 Case study
The product studied in this work is a professional dual-action airbrush (depressing the trigger delivers air and drawing back on the trigger releases paint). The paint is drawn in from a reservoir mounted on top of the airbrush (gravity feed) and is atomized outside the airbrush tip. The components of this mechanism are shown in Fig. 3.
Fig. 3. Airbrush components.
Materials to manufacture the airbrush essentially include stainless steel, brass,
Teflon and chromium. The mass of each one and the resulting GW indicator are
indicated in Table 1.
It should be noted that each material contributes differently to the global warming indicator. Chromium, which is used in the surface finishing process, represents only 4.3% of the product mass but accounts for 48.4% of the GW indicator.
Table 1. Distribution of mass and environmental impact.

Materials | Mass (g) | GW (kg CO2) | Raw materials (g)
Steel (AISI 304) | 160 | 0.97 | 360
Brass (CW 614N) | 4.17 | 0.01 | 10
Teflon (PTFE) | 0.023 | 0.001 | 0.07
Chromium | 7.4 | 0.92 | 22
Total | 171.5 | 1.9 | 392.1
The study of the manufacturing process reveals that the material removed in drilling and machining operations represents a high percentage of the raw materials acquired (Table 1). Following the previous methodology, the manufacturing, assembly and finishing operations were reviewed in order to propose a more sustainable product development. The following sustainability strategies were applied: reducing the amount of material removed, reducing the number of parts, changing materials and changing the surface finishing process.
Some changes in the raw materials selected for each component of the airbrush were carried out. The use of calibrated bars and tubes was proposed; thus, the waste percentage was reduced and several operations, such as drilling and turning processes, were also avoided. Results are shown in Fig. 4, which gives the following information for some components, for both the initial design (Di) and the redesign alternative (A): size of raw materials, manufacturing operations simplified in each alternative, energy consumption and amount of material removed along the manufacturing process. In the case of the first component (needle cup), the operations of drilling and contour turning were eliminated by proper selection of the raw material size. Overall, a significant reduction in material removed (24.8%) and energy consumption (18.2%) was achieved.
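The 24.8% and 18.2% overall reductions can be cross-checked by summing the per-component values behind Fig. 4; the short sketch below does exactly that, with the values transcribed from the figure.

```python
# Per-component energy (W·h) and material removed (g) for the initial
# design (Di) and the redesign (A), transcribed from Fig. 4.
energy_di = [0.39, 0.76, 0.26, 0.03, 3.22, 5.29, 0.18, 0.66,
             0.5, 0.36, 6.03, 1.67, 1.45, 0.12, 0.8, 7.92]
energy_a = [0.26, 0.64, 0.26, 0.004, 2.94, 4.95, 0.14, 0.49,
            0.42, 0.35, 4.48, 1.25, 1.1, 0.12, 0.4, 6.44]
material_di = [2, 5, 2, 0.05, 30, 15, 3, 5, 4, 2, 44, 13, 11, 1, 6, 48]
material_a = [0.6, 4, 2, 0.02, 27, 13, 1, 4, 4, 2, 29, 9, 8, 1, 3, 36]

def total_reduction(di, a):
    """Overall percentage reduction across all components."""
    return 100.0 * (sum(di) - sum(a)) / sum(di)

print(round(total_reduction(energy_di, energy_a), 1))      # → 18.2 (%)
print(round(total_reduction(material_di, material_a), 1))  # → 24.8 (%)
```

The totals confirm that the quoted percentages refer to the whole set of components, not to a single part.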
The sequence of operations required to assemble each part of the airbrush, in terms of alignment, insertion and manipulation, was studied in detail. Each act of retrieving, handling and mating a component is called an assembly operation. This analysis is shown in Fig. 5. Column 1 shows the part identification (sorted by assembly steps) and column 2 the number of times the operation is carried out consecutively. The remaining columns correspond to the two separate areas, handling and insertion-fastening; they provide two manual codes and their corresponding time per part, from which operation time and cost are obtained. This coding can be found in the time estimation tables of [10]. The last column marks each part as avoidable (0) or essential (1) in order to obtain the theoretical minimum number of parts: separate parts could in principle be combined into one unless, as each part is added to the assembly, it must be of a different material than the parts already assembled, or it must be separable from all other assembled parts to allow the assembly of parts meeting one of these criteria. The DFA indicator can then be analyzed whenever changes are made to the product design, both as a comparative measure and as a tool to identify which components could be modified or redesigned to optimize the product life cycle.
Fig. 4. Airbrush components. Reduction of the amount of material removed.

Component (material) | Raw size, Di → A (mm) | Energy, Di → A (W·h) | Material removed, Di → A (g)
1. Needle cup (AISI 304) | Ø8x6.2 → Ø7x1.5x6.2 | 0.39 → 0.26 | 2 → 0.6
2. Nozzle body (AISI 304) | Ø10x9.8 → Ø9x9.8 | 0.76 → 0.64 | 5 → 4
5. Needle (AISI 304) | Ø2x131.7 → Ø2x130.9 | 0.26 → 0.26 | 2 → 2
6. Packing washer (PTFE) | Ø4x2.5 → Ø3x2.5 | 0.03 → 0.004 | 0.05 → 0.02
8. Reservoir cup (AISI 304) | Ø28x8 → Ø27x8 | 3.22 → 2.94 | 30 → 27
9. Trigger (AISI 304) | Ø12x17.7 → Ø11x17.7 | 5.29 → 4.95 | 15 → 13
11. Sleeve limit (CW614N) | Ø10x5.7 → Ø10x2.5x5.4 | 0.18 → 0.14 | 3 → 1
12. Spring shaft (AISI 304) | Ø5.5x42.7 → Ø5x42.7 | 0.66 → 0.49 | 5 → 4
14. Needle sleeve (AISI 304) | Ø10x19.7 → Ø10x4x19.2 | 0.5 → 0.42 | 4 → 4
15. Needle fitting (AISI 304) | Ø7x11.2 → Ø7x10.9 | 0.36 → 0.35 | 2 → 2
16. Handle (AISI 304) | Ø13x59.7 → Ø12x4x58.7 | 6.03 → 4.48 | 44 → 29
17. Fitting screw (AISI 304) | Ø9x36.7 → Ø8x35.9 | 1.67 → 1.25 | 13 → 9
21. Valve body (AISI 304) | Ø11x21.7 → Ø10x20.9 | 1.45 → 1.1 | 11 → 8
23. Plunger valve (CW614N) | Ø4x21.7 → Ø4x21.4 | 0.12 → 0.12 | 1 → 1
26. Nut (AISI 304) | Ø11x10.7 → Ø11x2x10.2 | 0.8 → 0.4 | 6 → 3
29. Body (AISI 304) | Ø13x82.7 → Ø12x82.3 | 7.92 → 6.44 | 48 → 36

The figure also lists, per component, the machining passes eliminated or simplified by each redesign: drilling (2-7 mm), contour turning (0.5-0.75 mm) and facing (0.3-1 mm).
The percentages of recyclability of the product according to the end-of-life treatment were estimated as shown in Fig. 6. This analysis reveals that 95% of the material can be recovered by manual separation, which is always more thorough than trituration. The majority of components are made of steel, which presents a high rate of recyclability (RCR), so no changes are proposed. To assess whether or not the manual separation process is worthwhile, the economic value of the recovered materials can be compared with the cost of the operator time needed to perform the separation.
Finally, it was proposed to substitute the chromed layer by a polishing process of the stainless steel components. The product specifications were practically unmodified, because a high corrosion resistance was preserved, while a substantial reduction of 48% could be obtained for the GW indicator.
Fig. 5. Airbrush components. DFA analysis.
Fig. 6. Airbrush components. Percentage of recyclability.
4 Conclusions
This work aims at integrating sustainability in product development projects. The manufacturing, assembly and finishing processes of a case study, an airbrush, were analyzed in detail and different strategies were applied.
First, the optimization of raw materials allowed the energy consumption to be reduced by 18.2% and the amount of material removed along the manufacturing process by 24.8%. Next, the DFA method was used to identify the components likely to be modified and to detect which of them might cause problems at some point of product life-cycle management. It allowed a comparative analysis and provided an estimation of how much easier it is to mount a design with certain characteristics than another design with different features. The recyclability analysis identified the percentage of material that could be recovered and estimated its future value, for a more effective final phase of product life-cycle management. In this case, the materials were preserved because they present a high RCR. Finally, the chromed layer applied in the finishing process of the airbrush showed a relatively high environmental impact; thus, its substitution by a polishing process was proposed.
Acknowledgments The research work reported here was made possible by the work developed on the Advanced Product Design programme (Master in Product Design Engineering) at the University of Zaragoza.
References
1. UNCED. Agenda 21. United Nations Conference on Environment and Development, Rio de Janeiro, June 1992.
2. Garner A., Keoleian G.A. Industrial ecology: an introduction. University of Michigan's National Pollution Prevention Center for Higher Education, Ann Arbor, MI, 1995.
3. Van Hemel C., Cramer J. Barriers and stimuli for ecodesign in SMEs. Journal of Cleaner Production 2002; 10:439-453.
4. Holt R., Barnes C. Towards an integral approach to 'Design for X': an agenda for decision-based DFX research. Research in Engineering Design 2010; 21(2):123-126.
5. Boothroyd G., Dewhurst P., Knight W.A. Product design for manufacture and assembly (3rd ed.). CRC Press, Taylor and Francis Group, Florida, USA, 2011.
6. ISO 14040:2006. Environmental management - Life cycle assessment - Principles and framework. Requirements and guidelines. International Organization for Standardization, Geneva, Switzerland.
8. Brezet J.C., Van Hemel C.G. Ecodesign: a promising approach to sustainable production and consumption. UNEP, United Nations Publications, Paris, 1997.
9. Kemna R., van Elburg M., Li W., van Holsteijn R. MEEuP Methodology Report, 2005.
10. Boothroyd G. Product design for manufacture and assembly. Marcel Dekker, New York, 1994.
11. IEC/TR 62635. Guidelines for end-of-life information provided by manufacturers and recyclers and for recyclability rate calculation of electrical and electronic equipment, 2012.
Towards a new collaborative framework
supporting the design process of industrial
Product Service Systems
Elaheh Maleki*, Farouk Belkadi, Yicha Zhang, Alain Bernard
IRCCYN - Ecole Centrale de Nantes, 1 rue de la Noë, BP 92101, 44321 Nantes Cédex 03, France
* Corresponding author. Tel.: +33-240-376-925; fax: +33-240-376-930. E-mail address:
Elaheh.Maleki@irccyn.ec-nantes.fr
Abstract The main idea of this paper is to present a collaborative framework for the PSS development process. Focusing on the engineering phase, this research uses modular ontologies to support the management of the interfaces between the various kinds of engineering knowledge involved in the PSS development process. The supporting platform is developed as part of a collaborative framework that aims to manage the whole PSS lifecycle.
Keywords: Product-Service System, PSS Design, Collaborative Platform,
Knowledge repository
1. Introduction
The Product-Service System (PSS) was presented in 1999 as a promising solution for "sustainable economic growth" in the face of hard competitiveness in challenging markets [1]. Afterward, numerous economic, social, technological and environmental incentives for PSS adoption were discussed by different researchers [2, 3, 4]. Being the most "feasible dematerialization strategy" [5], PSS has been the subject of several works and innovations supporting the above-mentioned incentives.
To move towards the adoption of the PSS business model, industries need to create a new system of solution providing [6] by rethinking their current design and production processes as well as their business relationships with both customers and the supply chain. The interdisciplinary nature of this new phenomenon increases the number of disciplines involved in the development process [7] and implies the need for robust coordination and collaboration efforts [8]. These efforts should provide proper communication interfaces and facilitate knowledge sharing among product, sensor and service experts [9].
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_15
The success of the collaborative process is strongly linked to the need for knowledge sharing between actors, which ensures a common representation of the problem to be solved. This representation is an integration of a set of knowledge fragments created separately according to each expert's skills and point of view on the problem.
The role of the collaborative tools is to ensure the consistency of interconnected
data and knowledge created by various activities and managed by several information systems, including legacy CAx tools. During the last decades several
Computer Supported Collaborative Work (CSCW) frameworks have been developed with the aim of assisting actors in their design activity [10]. Although the PSS design process exploits classical CAD tools (mechanical, software, etc.), the current collaborative tools fail to consider the specific integration constraints and activities of the PSS development process [11]. In this context, developing a collaborative framework to support the whole lifecycle of industrial PSS is crucial.
With the above target in mind, the purpose of this paper is to propose the main foundations of a collaborative framework supporting the engineering design process of product-oriented PSS. To this end, a literature review is presented in the next section to clarify the PSS concept. The third section discusses the development process of PSS as well as the main functions to be ensured by an ideal collaborative framework supporting the PSS design process. The last section describes our framework for PSS semantic modeling and the global structure of the knowledge repository supporting the proposed platform.
2. PSS definition and characteristics
Despite the different vocabularies used to describe PSS [8], some entities are common across the literature. The first general definition of PSS was given by Goedkoop et al. [1] in 1999. Vasantha et al. [12] reviewed different definitions of
PSS used in different methodologies and concluded that “PSS development should
focus on integrating business models, products and services together throughout
the lifecycle stages, creating innovative value addition for the system.”
Meier et al. [3] characterized Industrial PSS by ‘‘the integrated and mutually
determined planning, development, provision and use of product and service
shares including its immanent software components in Business-to-Business applications and represents a knowledge-intensive socio-technical system’’.
Baines et al. [13] describe PSS as the convergence of "servitization" of products and "productization" of services, while Tukker's typology [2], the "most accepted classification" in the literature [14], distinguishes three main categories of PSS: "Product-Oriented Services, Use-Oriented Services, and Result-Oriented Services". Refining the previous models, Adrodegari et al. [14] proposed a new form of PSS typology that relies on ownership concepts and the "building blocks of the business model framework": Ownership-Oriented (Product-Focused, Product and Processes-Focused) and Service-Oriented (Access-Focused, Use-Focused, Outcome-Focused). There are numerous PSS development methods which focus on the integrated lifecycle management of product and service in PSS [16, 17, 18].
Inspired by the various definitions of PSS in the literature [19, 2, 11, 12, 4], the PSS concept is considered in this work as "a system of value co-creation based on technical interfaces between product and service components as well as collaborative interactions between involved actors". Based on this definition, the PSS development process is highly interactive and dependent on supportive collaborative infrastructures.
3. Towards a collaborative framework for PSS design support
Design is a complex iterative process that aims to progressively define a complete, robust, optimal and efficient solution to answer a set of heterogeneous requirements provided by various stakeholders. The classical product design process
starts with the identification of product functions for each requirement, followed
by the identification of principles of solution and types of components for each
function and ends by the detailed definition of features and interfaces of product
components.
An industrial PSS design process could follow the same main steps as the above process for the identification of the physical components necessary to achieve the product functions and service shares. But this is not enough: the outcomes of the PSS design process are more complex and concern the detailed definition of additional components and features, as well as the technical solutions implementing the links between product and service components (Fig. 1).
Given the positive impact of ICT tools on PSS performance [21], developing collaborative frameworks that support a specific part of, or the whole set of, PSS lifecycle activities is considered a big challenge for the factory of the future. Building an integrated collaborative framework to manage the whole PSS lifecycle requires integrating different points of view, such as those of customers, engineers and production.
Service definition requires the identification of all information to be managed for the realization and exploitation of the PSS. The service features concern the identification of all material and human resources requested in the PSS usage stage, taking into account resource availability and working environment constraints. These resources are necessary to maintain permanent relationships between the customer and the PSS provider during the whole contractual transaction after delivering the PSS; this is one of the main differences between product-based and PSS-based business models. The product features concern the identification of physical components considered as a specific category of material resource connected to some product components. The physical components can be sensors needed to support the collection of real-time service data, or additional equipment for communication between service resources and smart components of the product.
Fig. 1. The PSS design process. [The figure shows the PSS design process with inputs: product functions, production constraints, customer needs, service type, environment constraints, resources, resource availability and usage constraints; and outputs: product features, product-service links and service features.]
3.1. From PSS design process to PSS design support framework
There is a breadth of related research on modular product-service development
methods which focus on the modular engineering of product, service, actors and
ICT infrastructure in PSS [19]. The knowledge and data required for the integrated solution formed by the components of a supportive platform should be managed in a common repository, structured according to a set of modular ontologies covering all PSS aspects.
According to Tukker et al. [15], "companies use a formal or informal approach to PSS development and they also use their own tools and procedures. The companies which are more active in products prefer to develop services in accordance with product development". Proposing computer-supported work facilities is a crucial task to improve and harmonize current development practices.
This paper focuses on the design support system which manages the multi-disciplinary engineering process of product-oriented PSS and the related models. Respecting this multi-disciplinary essence of the PSS engineering process, the design support system should assist collaboration between four main actors:
1) The project leader, who fixes the PSS project objectives and validates the final result according to a set of pre-defined requirements.
2) Sensor engineers, in charge of the creation and management of sensor data.
3) Mechanical engineers, in charge of the creation and management of product data with legacy CAD tools and the collaborative framework.
4) PSS engineers, in charge of defining the new PSS solution as a combination of pre-defined product components and sensors. They interact with the mechanical and sensor engineers through the collaborative platform to fix the final integration solution of the PSS.
The minimum, though not comprehensive, set of functions to be considered in the collaborative framework is:
1) Service definition facility, handling the creation of service features (information, resources, sensors, etc.).
2) Sensor management, helping the declaration of sensor data and the search for the optimal sensor for the defined service.
3) Integration solution configurator, helping the creation and evaluation of physical links between pre-defined product and service components.
4) PSS lifecycle modeler, for the classification and analysis of different PSS working situations. This helps the PSS engineer make decisions about the best sensor and the optimal integration solution.
5) CAx tools connection, to support the management of CAD files and the generation of light 3D representations of the PSS structure.
3.2. Knowledge repository structure
Several product models have been proposed and used in recent years [22]. These models should be extended to integrate the new concepts necessary for the definition of the associated service.
The architecture of the proposed PSS design support framework is based on a
central knowledge repository as a kernel component through which different business applications are interconnected to provide technical assistance and collaboration facilities to users (Fig. 2).
To define and implement the structure of this knowledge repository, domain
ontologies for PSS will be defined and connected to form the whole semantic model. This relies on a concurrent process combining a top-down approach, based on
recent findings from the literature survey, with a bottom-up approach capturing the
pragmatic point of view gathered from industrial practices (Fig. 3).
Based on the analysis of the main functionalities of PSS design in the engineering phase, we have identified the main concepts of the semantic model.
E. Maleki et al.
Fig. 2. Global architecture of the proposed design support framework
Fig. 3. Methodological approach for Semantic model building
Considering the industrial context of PSS, the domain ontologies are as follows:
1) Product ontology: supports the classification of the main categories and features of products (domestic appliances, machines, transport facilities, etc.).
This helps identify standard technical constraints to be respected in the
definition of the technical solution.
2) Service ontology: supports the classification of the main service categories, with a
list of the standard information and KPIs necessary to describe each service
type. For example, monitoring machine health requires environmental
data such as humidity, temperature and dust.
3) Sensor ontology: includes a classification of technical sensors according to
a set of standard indicators useful for the search and selection of the optimal sensor to implement a specific service.
4) Connector ontology: proposes a classification of the main connection possibilities and constraints according to sensor and product types. This helps
define the integration solution between PSS items.
5) PSS lifecycle taxonomy: used to classify all possible standard working
conditions for each PSS life stage, connected to product and service features.
4. Conclusion
Considering the complexity and multi-disciplinary nature of PSS development,
a collaborative IT tool is critical for both provider and customer in industrial projects. In this context, providing a common language to manage the
interfaces between the various actors is the most difficult initial step. As a result
of this research, we propose to break the system down into modules, not only in the engineering process but also in software design projects. The modular ontology concept appears to be a feasible way to handle the massive amount of knowledge involved in
PSS development.
This paper presents a summary of the first specification results for the future
design architecture, a component of the collaborative framework. Future work
concerns the specification of the proposed functions and the construction of the different ontology models. These developments will be connected to the whole collaborative framework and its related semantic model.
Acknowledgments The presented work was conducted within the project “ICP4Life”, entitled “An Integrated Collaborative Platform for Managing the Product-Service Engineering
Lifecycle”. This project has received funding from the European Union’s Horizon 2020 research
and innovation programme. The authors would like to thank the academic and industrial partners
involved in this research.
References
1. Goedkoop, M.J., van Halen, C.J.G., et al. (1999), Product Service Systems: Ecological and Economic Basics, Economic Affairs, 1999
2. Tukker, A. (2004), Eight types of product-service system: eight ways to sustainability? Experience from SusProNet, Business Strategy and the Environment 13, 246–260
3. Meier, H., Roy, R., et al. (2010), Industrial Product-Service Systems—IPS2, CIRP Annals – Manufacturing Technology 59, 607–627
4. Vezzoli, C., et al. (2014), Product-Service System Design for Sustainability, Learning Network on Sustainability, Greenleaf Publishing
5. Mont, O.K. (2002), Clarifying the concept of product–service system, Journal of Cleaner Production 10, 237–245
6. Schnürmacher, C., Hayka, H., et al. (2015), Providing Product-Service-Systems: The Long Way from a Product OEM towards an Original Solution Provider (OSP), Procedia CIRP 30, 233–238
7. Schenkl, S.A. (2014), A Technology-centered Framework for Product-Service Systems, Procedia CIRP 16, 295–300
8. Reim, W., et al. (2015), Product-Service Systems (PSS) business models and tactics: a systematic literature review, Journal of Cleaner Production 97, 61–75
9. Trevisan, L., Brissaud, D. (2016), Engineering models to support product–service system integrated design, CIRP Journal of Manufacturing Science and Technology, available online 8 April 2016
10. Linfu, S., Weizhi, L. (2005), Engineering Knowledge Application in Collaborative Design, 9th International Conference on Computer Supported Cooperative Work in Design, Coventry, 722–727
11. Cavalieri, S. (2012), Product–Service Systems Engineering: State of the art and research challenges, Computers in Industry 63, 278–288
12. Vasantha, G., et al. (2012), A review of product–service systems design methodology, Journal of Engineering Design 23(9), 635
13. Baines, T.S., et al. (2007), State-of-the-art in product-service systems, Proc. IMechE Vol. 221 Part B: J. Engineering Manufacture
14. Adrodegari, F., et al. (2015), From ownership to service-oriented business models: a survey in capital goods companies and a PSS typology, Procedia CIRP 30, 245–250
15. Tukker, A., et al. (2006), New Business for Old Europe: Product-Service Development, Competitiveness and Sustainability, Greenleaf Publishing
16. Aurich, J.C., et al. (2006), Life cycle oriented design of technical Product-Service Systems, Journal of Cleaner Production 14, 1480–1494
17. Tran, T.A., et al. (2014), Development of integrated design methodology for various types of product–service systems, Journal of Computational Design and Engineering 1(1), 37–47
18. Wiesner, S. (2015), Interactions between Service and Product Lifecycle Management, Procedia CIRP 30, 36–41
19. Wang, P.P., Ming, X.G., et al. (2011), Status review and research strategies on product-service systems, International Journal of Production Research 49(22), 6863–6883
20. Manzini, E. (2003), A strategic design approach to develop sustainable product service systems: examples taken from the ‘environmentally friendly innovation’ Italian prize, Journal of Cleaner Production 11, 851–857
21. Belvedere, V., et al. (2013), A quantitative investigation of the role of information and communication technologies in the implementation of a product-service system, International Journal of Production Research 51(2), 410–426
22. Sudarsan, R., Fenves, S.J., et al. (2005), A product information modeling framework for product lifecycle management, Computer-Aided Design 37, 1399–1411
Information model for tracelinks building in
early design stages
David RÍOS-ZAPATA1,2,*, Jérôme PAILHÈS2 and Ricardo MEJÍA-GUTIÉRREZ1
1 Universidad EAFIT, Design Engineering Research Group (GRID), Carrera 49 # 7 Sur - 50, Medellín, Colombia
2 Arts et Métiers ParisTech, I2M-IMC, UMR 5295, F-33400 Talence, France
* Corresponding author. Tel.: (+57) 4 261-9500, Ext. 9059. E-mail: drioszap@eafit.edu.co
Abstract. Over the last decades, many efforts have been made to create better products and to improve processes, generating ever more information, while the question of how to manage the information that already exists and use it to improve decision making is usually left behind. This article centres on the development of an information model that enables multilevel traceability in early design stages, through the definition of tracelinks between pieces of design information as it evolves from linguistic requirements into design variables. Regarding the information to be analysed, the research focuses on setting up a graphic environment that makes it possible to determine relationships between the different variables that exist in conceptual design, granting design teams the opportunity to use that information in decision-making situations, in terms of knowing how changing one variable affects each requirement. Finally, the article presents a case study on the design of a portable cooler in order to clarify the usage and opportunities offered by the traceability model.
Key words: Early design processes, traceability in design, information management model, decision-making in design
1 Introduction
The success of a design solution is normally grounded in the quality of the justification behind decision-making processes, in which different elements participate: the time needed to make a decision, the level of satisfaction with the solution and,
inevitably, the experience of the designer who makes the decision. These factors have a
significant influence on the result of the developed product.
To support these decision-making processes, different tools and methodologies help design teams in the transformation of the design need
into a concrete and successful final product. Over the last 30 years, with the increase
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_16
D. Ríos-Zapata et al.
in computer usage, many new digital tools have arrived; at the same time, many tasks have been automated with the aid of these tools [1].
The use of these computer aids has a positive impact at both the technical and the
organisational level. Through the use of these digital tools, coupled with
simultaneous working methodologies, new product development time decreased by
60% [2]; likewise, the success rate of new products increased from 31% in the late 1980s
[3] to 60% in the last decade, which ultimately saves time
and money [?]. Even so, it is important to recall the lack of computer tools at early
design stages [4] and the importance of developing tools that can track changes at
those stages [5].
This research centres on studying how design activities in early design stages
can be supported by an information model that helps monitor information through the creation of a traceability model between requirements and variables.
The article is structured as follows: Section 2 presents the state of the art; Section 3
explains the proposed information model; and Section 4 illustrates its
usage in the early design of a portable cooler.
2 State of the art
Product design processes can be divided into four principal phases: clarification of
the task, conceptual design, embodiment design and detail design [6], where the first
three can be considered early design stages. In terms of knowledge management
in early design, many design approaches have failed to fully support conceptual design.
The reason is the lack of connection between the external requirements and the
design variables [7].
Early design normally starts with a need analysis, followed by a functional analysis that allows specifications to be written in terms of the relationship of the product
with the environment [8]. Afterwards, design can be conducted by following the
FBS (Function-Behaviour-Structure) framework, which allows specifications (in terms of functions) to be transformed into equations [9].
It is also important to recall that about 80% of design decisions are made at early design stages, even though the support of computer tools at those stages is quite low
[4]. These decisions include the evolution of the information. For instance, in
the design process of a glass for hot beverages, the first step is the study of the
user's needs, where requirements such as the glass must be big
and light may be found. The design team starts to analyse this information and makes decisions,
whether based on their experience or on further information (such as benchmarking).
Eventually, the designers define the specifications that fulfil those requirements, i.e.,
they write specifications in terms of the volume and the weight of the product.
Afterwards, the designers define several technical aspects of the product in terms
of the behaviour, which determines the aspects that can be characterised as design
variables (equations that determine the volume and weight of the product). This
leads to the appearance of secondary variables (diameter, height and thickness).
These variables allow the product to fulfil the design specifications. Finally,
by weighing the different possibilities, the designers determine the final values and assign them to different geometric features. This marks the end of the early design
stages and the beginning of detailed design. As the information evolves from linguistic to numeric data, the imprecision of the
design decreases, which finally allows designers to arrive at consistent solutions
[10]. The whole information evolution process is shown in detail in Figure 1.
Fig. 1 Information evolution in early design processes: requirements (“to be big”, “to be light”) become specifications (a volume of 0.5 l, a weight below 0.05 kg), then behaviour equations (V = Fn(D, h), W = Fn(V, t, material)) and, finally, design variables (V, D, h, t)
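The glass example above can be sketched numerically. The dimensions, wall thickness and material density below are illustrative assumptions, not values from the paper; the two functions stand for the behaviour equations V = Fn(D, h) and W = Fn(V, t, material).

```python
import math

# Illustrative sketch of the information evolution in Fig. 1 for a
# cylindrical glass. All numeric values are assumptions for demonstration.

def volume(D: float, h: float) -> float:
    """Inner volume of a cylindrical glass, in m^3 (V = pi * (D/2)^2 * h)."""
    return math.pi * (D / 2) ** 2 * h

def weight(D: float, h: float, t: float, density: float) -> float:
    """Mass of the lateral wall plus bottom, in kg (thin-wall approximation)."""
    material_volume = math.pi * D * h * t + math.pi * (D / 2) ** 2 * t
    return density * material_volume

D, h, t = 0.08, 0.10, 0.001           # diameter, height, wall thickness (m)
print(volume(D, h))                   # ~5.03e-4 m^3, i.e. roughly 0.5 l
print(weight(D, h, t, density=1000))  # ~0.030 kg for a plastic-like material
```

The point of the sketch is the tracelink structure: the secondary variables D, h and t appear only because the specifications on volume and weight are expanded into behaviour equations.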
This information evolution process in early design raises several questions:
how is all this information stored? Are any links between
these kinds of information stored? And how did the designers make their decisions?
It can be inferred that some relationship must exist between these deliverables,
a relationship that helps trace the evolution of the product between the task and
the final solution; according to the IEEE, the degree to which relationships between
two or more items can be established, especially when one item is the predecessor
of the others, is called traceability [11].
The level of detail of the information determines the level
of integration of the traceability model, which in turn sets the granularity of the
relationships between the different kinds of information [5]. Accordingly, a
traceability tool must identify the items that are potentially affected by a change as a
function of their tracelinks. Finally, it is important to underline how dependence
is determined in design, which is measured along three dimensions. Variability: how are
the requirements set? Sensitivity: what is the risk to the design when a change
occurs? Integrity: what knowledge is required to achieve the task? [12].
Regarding traceability models in product design, CATIA V6's RFLP1 module
is able to store all the information on the same platform. Nevertheless, the
way the information is stored and processed is not interactive [13]; requirements
and logical inputs are not necessarily connected to the CAD model, but merely stored in the same
file [14]. It is also important to recall that many product management models suffer
from poor data traceability, especially in the exploration of the requirements definition
[15].
Finally, traceability models support knowledge reuse in early design
stages. For instance, Baxter et al. defined a traceability framework centred on
the performance analysis of specific requirements and the use of that information to optimise design solutions [16].
1 RFLP: Requirements, Functional, Logical, Physical.
3 Traceability model proposal
The model is centred on answering the question of how traceability information is
stored and exploited in early design stages, so it is important to consider the storage
of information linked to requirements, specifications, equations and variables.
During the need analysis, the most important goal is to determine a list of requirements (see “I want the product to be big” in Fig. 1). This list is usually an input
to any design engineering process; nevertheless, the analysis is not necessarily performed only by user specialists. The model presented here takes the list of
requirements as input and does not cover techniques for eliciting those requirements.
For the functional analysis, the designers analyse the interaction of the
product with the environment in order to identify functions that allow the
Product Design Specifications to be written. A link between requirements and specifications can then be established by using a correlation matrix (e.g. the correlation matrix of
QFD, Quality Function Deployment). Figure 2 presents an example where
the relations in such a matrix are extracted to build a graphic relation between requirements (Rq) and specifications (Sp).
Fig. 2 Requirements to specifications: relations from the correlation matrix between requirements (Rq1–Rq3) and specifications (Sp1–Sp3) are extracted to build a graph of Rq–Sp links
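Extracting Rq–Sp tracelinks from a correlation matrix can be sketched as follows. The matrix entries below are hypothetical (the exact marks in Fig. 2 are not recoverable from the text); only the extraction step itself is the point.

```python
# Hypothetical QFD-style correlation matrix: rows are requirements, columns
# are specifications; True marks a correlation. The Rq-Sp tracelinks are the
# (requirement, specification) pairs where the matrix has a mark.
requirements = ["Rq1", "Rq2", "Rq3"]
specifications = ["Sp1", "Sp2", "Sp3"]
matrix = [
    [False, True,  False],  # Rq1 correlates with Sp2
    [True,  False, False],  # Rq2 correlates with Sp1
    [False, False, True ],  # Rq3 correlates with Sp3
]

def extract_links(reqs, specs, m):
    """Return the (requirement, specification) tracelink pairs."""
    return [(r, s) for i, r in enumerate(reqs)
                   for j, s in enumerate(specs) if m[i][j]]

print(extract_links(requirements, specifications, matrix))
# [('Rq1', 'Sp2'), ('Rq2', 'Sp1'), ('Rq3', 'Sp3')]
```

These pairs form the first level of the tracelinks tree; the later levels (specifications to equations, equations to variables) are built the same way from the FBS analysis.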
At this point, the FBS framework is used to manage the evolution of the information. In the formulation stage, the main function is defined and divided into functional blocks (FBD). Since the functional approach
defined the functions alongside the relationship of the product with the environment,
those functions represent the fluxes that enter the system. The analysis of
those fluxes (matter, energy and information), whether internal or external, allows
the physical behaviour that rules each product part to be determined. This defines the
equations of the product, and the connection between equations and specifications.
In order to develop these links, CPM/PDD (Characteristics-Properties Modelling;
Property-Driven Development) models are used. These models allow designers
to establish connections between pieces of information, while also focusing on controlling the
design parameters Ci [17, 18].
Finally, at the synthesis stage, the designers select a suitable solution for each function box of the FBD. Here, the designers complete the set of equations in
terms of the final solutions. At this point the early design process ends, and the team
may proceed to detailed design, where values are assigned to each variable.
Figure 3 presents the tracelinks model at the different levels of early design;
it also shows how the model is connected with the FBS framework and how the
model extends its boundaries to requirements (semantic variables).
Fig. 3 Tracelinks representation: integration at level 0, linking requirements (Rq1–Rq3) and specifications (Sp1–Sp3) through the function level, and equations and variables (e.g. V = D²h, q = (Text − Tint)/(t/KA)) through the behaviour and structure levels
4 Case study
In order to validate the model and to anticipate possible pitfalls, a portable cooler
design process was conducted. From the need analysis, the input was defined as 9 requirements. Regarding the functional analysis, 5 functions were written for the
product to accomplish: the product must be easily carried by the user; the product must resist solar radiation; the product must isolate food from the external air;
the product must incorporate ice; the product must isolate food from solar heat.
These functions were interpreted as 11 specifications. The construction of the
QFD correlation matrix then allowed the connections between requirements
and specifications to be determined. For instance, requirement 1, keep things cool, is associated with 8
specifications, including the wall thickness, but it is also related to the cooler volume.
After the definition of the specifications, the design process continues with the formulation stage and the construction of the FBD shown in Figure 4.
This figure also represents the analysis of a selected function: hold. This function
represents the wall of the container, whose role is to hold back the flux of heat
heading into the cooler; the behaviour of this wall can be described as a thermal
conduction process.
At the synthesis stage, an insulation principle is assigned to the wall in order to
be implemented in the design. The system is described as a sandwich wall:
External Wall A - Thermal Insulation B - Air C - Internal Wall D. The
equation that represents this insulating system is given in Equation 1 and is the
design parameter Ci to be implemented.
Fig. 4 Function block diagram analysis and structure definition: fluxes of food, ice, water, heat, solar radiation, human force and information pass through the Integrate, Stock, Hold and Allow function blocks; the hold function is mapped to a thermal conduction behaviour, q = (Text − Tint)/(t/KA), across the sandwich wall structure A–B–C–D (insulation thickness LB)
$$Q_{conv} = \frac{T_{ext} - T_{int}}{\dfrac{L_A}{K_A A_A} + \dfrac{L_B}{K_B A_B} + \dfrac{L_C}{K_C A_C} + \dfrac{L_D}{K_D A_D}} \qquad (1)$$
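Equation 1 is a series thermal-resistance model, and its behaviour can be sketched numerically. The layer thicknesses, conductivities and areas below are assumptions for illustration, not values from the paper; the sketch shows the property exploited later in the case study, namely that increasing the insulation thickness LB reduces the heat flux.

```python
# Heat flux through the sandwich wall A-B-C-D of Equation 1, modelled as
# thermal resistances in series. All layer data (L, K, A) are assumptions.
def heat_flux(T_ext, T_int, layers):
    """Q = (T_ext - T_int) / sum(L / (K * A)) over the wall layers."""
    resistance = sum(L / (K * A) for (L, K, A) in layers)
    return (T_ext - T_int) / resistance

def cooler_layers(L_B):
    # (thickness m, conductivity W/mK, area m^2) for walls A, B (insulation),
    # C (air gap) and D; only the insulation thickness L_B varies.
    return [(0.003, 0.2, 0.5), (L_B, 0.035, 0.5),
            (0.01, 0.026, 0.5), (0.003, 0.2, 0.5)]

q_thin = heat_flux(35.0, 5.0, cooler_layers(L_B=0.01))
q_thick = heat_flux(35.0, 5.0, cooler_layers(L_B=0.05))
print(q_thin, q_thick)  # thicker insulation -> lower heat flux
assert q_thick < q_thin
```

Looking only at this equation, increasing LB seems free of side effects; the traceability tree below shows why that conclusion is incomplete.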
Finally, an entire traceability map can be built. In Figure 5a, the traceability tree
is shown. Here the requirements are represented at the bottom; on the second level are
the specifications, which are connected to the equations; finally, the equations are
connected to the variables on the top level of the tree. For display purposes,
only the links that connect Equation 1 (Eq1: heat flux in the wall) and the requirement keep things cool are shown.
As an example of how this traceability tree can support designers' decision-making, suppose the development team realises that the cooling capability of the cooler is too limited, so the designers propose to
increase the thickness of the thermal insulation LB (see Structure in Figure 4). The
solution domain for this variable is defined as D(LB) = [0.01, 0.1], and the designers
decide to set it to its maximal value in order to increase the thermal insulation of
the cooler.
From the analysis of the equations alone, it seems that no further specifications are
affected by modifying LB to reduce the heat flux, but the analysis of the
traceability tree reveals a different picture. There, it is found that the LB thickness,
related to the requirement of keeping things cool, is also related to the volume of
the cooler. Changing the thickness therefore affects the volume, so both
variables are correlated. The graph that connects both variables can be seen in Figure 5b.
This kind of traceability information model offers designers the list of variables that are correlated with each other. This allows designers to make better decisions
when they change the design, but it also leads to new challenges. In this
situation, the design team finds that a correlation affects two variables and,
considering the limits established for the volume, the solution domain is redefined as D(LB) = [0.01, 0.05]. The new constraint, revealed by the traceability
tree, led the team to optimise the cooling capability without affecting the volume of
the cooler.

Fig. 5 “Keep things cool” specification relationships: (a) traceability tree linking the requirement to its specifications, equations (Eq1, Eq2) and variables; (b) correlation graph between wall thickness and volume
Certainly, a tool of this nature can empower the decision-making process to be
performed using all the information in the product life-cycle, rather than relying only on the
experience of the design team, especially when correlations are not obvious. The
tool can also alert designers with early warnings when the modification of one
variable might affect other variables.
5 Conclusion and further research
One of the strongest contributions of this research is a model that allows the
interconnection of information at early design stages, precisely linking information
that is in linguistic form to design variables and to further information in detailed
design. In the presented example, it was found that the correlation between the two
variables lay at the requirements level (linguistic level). This gives a wider
view of early design, because the model makes it possible to find correlations of variables all the way
back to the requirements list.
Furthermore, in contrast with other traceability models such as RFLP, the presented model allows the information to be analysed interactively (with graphs);
rather than competing with RFLP models, this kind of solution can work as
a complement to them, and will allow requirements to be connected with CAD models, extending the analysis across the whole product life-cycle.
Finally, the exploitation of the information collected by the presented model reduces uncertainty in how decisions are made. Nevertheless, two important items remain for future
models: i) developing a mechanism that defines the level of correlation between each pair of variables, including degrees of correlation at different stages; ii) developing a graph-theory model that allows the
correlation between the design variables to be analysed automatically.
References
1. BF Robertson and DF Radcliffe. Impact of CAD tools on creative problem solving in engineering design. Computer-Aided Design, 41(3):136–146, 2009.
2. B. Prasad. Concurrent engineering fundamentals- Integrated product and process organization. Upper Saddle River, NJ: Prentice Hall PT, 1996.
3. Elko J Kleinschmidt and Robert G Cooper. The impact of product innovativeness on performance. Journal of product innovation management, 8(4):240–251, 1991.
4. L. Wang, W. Shen, H. Xie, J. Neelamkavil, and A. Pardasani. Collaborative conceptual design–
state of the art and future trends. Computer-Aided Design, 34(43):981–996, 2002.
5. Simon Frederick Königs, Grischa Beier, Asmus Figge, and Rainer Stark. Traceability in systems engineering–review of industrial practices, state-of-the-art technologies and new research
solutions. Advanced Engineering Informatics, 26(4):924–940, 2012.
6. G. Pahl, W. Beitz, J. Feldhusen, and K.-H. Grote. Engineering design: A systematic approach.
Springer Verlag, 2007.
7. John S Gero and Udo Kannengiesser. The situated function–behaviour–structure framework.
Design studies, 25(4):373–391, 2004.
8. Dominique Scaravetti, Jean-Pierre Nadeau, Jérôme Pailhès, and Patrick Sebastian. Structuring of embodiment design problem based on the product lifecycle. International Journal of
Product Development, 2(1):47–70, 2005.
9. John S Gero. Design prototypes: a knowledge representation schema for design. AI magazine,
11(4):26, 1990.
10. Ronald E Giachetti and Robert E Young. A parametric representation of fuzzy numbers and
their arithmetic operators. Fuzzy sets and systems, 91(2):185–202, 1997.
11. IEEE standard glossary of software engineering terminology. IEEE Std 610.12-1990, 1990.
12. Mohamed-Zied Ouertani, Salah Baïna, Lilia Gzara, and Gérard Morel. Traceability and management of dispersed product knowledge during design and manufacturing. Computer-Aided
Design, 43(5):546–562, 2011.
13. Ricardo Carvajal-Arango, Daniel Zuluaga-Holguín, and Ricardo Mejía-Gutiérrez. A systems-engineering approach for virtual/real analysis and validation of an automated greenhouse irrigation system. International Journal on Interactive Design and Manufacturing (IJIDeM),
pages 1–13, 2014.
14. Chen Zheng, Matthieu Bricogne, Julien Le Duigou, and Benoît Eynard. Survey on mechatronic engineering: A focus on design methods and product models. Advanced Engineering
Informatics, 28(3):241–257, 2014.
15. Joel Igba, Kazem Alemzadeh, Paul Martin Gibbons, and Keld Henningsen. A framework
for optimising product performance through feedback and reuse of in-service experience.
Robotics and Computer-Integrated Manufacturing, 36:2–12, 2015.
16. David Baxter, James Gao, Keith Case, Jenny Harding, Bob Young, Sean Cochrane, and Shilpa
Dani. A framework to integrate design knowledge reuse and requirements management in
engineering design. Robotics and Computer-Integrated Manufacturing, 24(4):585–593, 2008.
17. Christian Weber. CPM/PDD – an extended theoretical approach to modelling products and product development processes. In Proceedings of the 2nd German-Israeli Symposium on Advances in Methods and Systems for Development of Products and Processes, pages 159–179,
2005.
18. Chr. Weber. Looking at DFX and product maturity from the perspective of a new approach to
modelling products and product development processes. In The Future of Product Development, pages 85–104. Springer, 2007.
Section 1.3
Interactive Design
User-centered design of a Virtual Museum
system: a case study
Loris BARBIERI1*, Fabio BRUNO1, Fabrizio MOLLO2 and Maurizio MUZZUPAPPA1
1 Università della Calabria - Dipartimento di Meccanica, Energetica e Gestionale (DIMEG)
2 Università di Messina
* Corresponding author. Tel.: +39-0984-494976; fax: +39-0984-0494673. E-mail address: loris.barbieri@unical.it
Abstract
The paper describes a user-centered design (UCD) approach adopted to develop and build a virtual museum (VM) system for the “Museum of
the Bruttians and the Sea” of Cetraro (Italy). The main goal of the system is to
enrich the museum with a virtual exhibition that lets visitors enjoy an
immersive and attractive experience, allowing them to observe 3D archaeological
finds in their original context. The paper deals with several technical and technological issues commonly related to the design of virtual museum exhibits. The
proposed solutions, based on a UCD approach, can be adopted as
guidelines for the development of similar VM systems, especially when a very low
budget and little free space are unavoidable design requirements.
Keywords: User-centered design, user interfaces design, human-computer interaction, virtual museum systems.
1 Introduction
Nowadays, museums need to combine their educational purpose [1] with the capability to engage their visitors through emotions [2]. In order to achieve these goals
and move beyond the old principles of traditional museology, emerging
technologies such as Virtual Reality, Augmented Reality and Web applications
are increasingly popular in museums. This union has led to the development of
a large number of instruments and systems that allow users to enjoy a culturally
vivid and attractive experience. There are many examples of such systems that
have been effectively applied in the museum field: projection systems that can
turn any surface into an interactive visual experience; multi-touch displays; devices for gesture-based experiences; Head Mounted Displays (HMDs) or 3D displays
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_17
L. Barbieri et al.
that turn the visit into an immersive and attractive experience [3-7]. Although all these systems are appealing and much appreciated by their users, many devices present limitations due to expensive installation or maintenance, the large amount of work involved, or poor user-system interaction caused by the incomplete maturity of the specific technology in museum applications. Starting from these considerations, and taking into account that 90% of museums are small and have low budgets, there is an unmet need for more affordable systems able to offer a fascinating and memorable experience to museum visitors. Since Virtual Museum (VM) systems aim to be immediate and easy to use, enjoyable and educational, these applications represent a typical case study that should be addressed through a user-centered design (UCD) approach. Such an approach can be used effectively in museums [8, 9], but no published work specifically addresses the UCD development of VM systems.
Therefore, this paper represents a first attempt to describe a UCD approach for the development of low-cost VM systems that rely on off-the-shelf technologies to create immersive 3D user experiences. Furthermore, the paper gives some guidelines for choosing the key technical devices and presents a case study: the development of the Virtual Museum system installed in "the Museum of the Bruttians and of the Sea" of Cetraro (Italy).
2 Virtual Museum system design
Prior to the design phase, it is fundamental to take into account the requirements that are often specified by museum directors and are generally driven by budget constraints. In fact, the great majority of museums are small, with fewer than 10,000 visitors per year, and can rely on only a very low budget [10]. These economic concerns severely constrain the development and modernisation plans through which, in the era of the "experience economy" [1], all museums have to remain competitive and attract more visitors. Starting from these considerations, there are two fundamental requirements that must be met: low cost and usability. A VM system should therefore be designed to be cheap and, at the same time, to inspire the visitor.
For these reasons, on the one hand it is almost impossible to adopt very expensive technologies such as HMDs and CAVEs (Cave Automatic Virtual Environments) for visualization, or wearable haptics and gesture-recognition devices for interaction. On the other hand, usability, understood as both affordance and user satisfaction, should be the key quality of the system. In addition, museum curators usually dictate further requirements that can affect the overall dimensions of the system and its aesthetics. Once all these data have been acquired, the design process can start in accordance with the recommendations for a UCD project (ISO 13407), which can be summarized in the following flow chart (fig. 1):
Fig. 1. Main steps of the VM system development process based on a UCD approach.
3 Guidelines for selecting the visualization and interaction
device
This section defines some guidelines for selecting the hardware to be adopted for the VM system, considering both economic constraints and the types of information to be offered to visitors.
Among the various commercial devices, projectors and high-definition (HD) monitors were considered as alternatives for the visualization of the VM exhibit. HD monitors can deliver 4K resolution with high brightness and contrast; projectors, on the contrary, achieve only full-HD resolution and entail higher maintenance costs. Among the device controllers most commonly included in a low-cost VM system, trackballs, touch-screen consoles and gesture-recognition devices (e.g. MS Kinect or Leap Motion) were analyzed. Table 1 summarizes this analysis.
Table 1. Device controllers.

                         Trackball/mouse   Touch screen     Gesture recognition devices
Costs                    low               high             medium
Quality of interaction   unattractive      very intuitive   intuitive
Devices' integration     low               medium           high
Training required        no                no               yes
For the touch-screen consoles there are two design solutions: the adoption of a touch-screen console for controlling the objects and data visualized on an HD monitor (fig. 2a), or the adoption of a single touch-screen monitor used both for visualization of and interaction with the virtual exhibit (fig. 2b).
The pros and cons of the two solutions depicted in figure 2 have been analysed, in order to define some guidelines, taking into account the ergonomic requirements that are fundamental in a UCD approach. To get the optimum immersive HD visual experience, viewers should be located at the theoretical spot known as the optimum HD monitor viewing distance [11]. This requirement can be satisfied only in the first case (fig. 2a): since the controls are placed separately, viewers can stand at whatever distance gives them the best viewing experience. On the contrary, the adoption of a touch-screen monitor (fig. 2b) implies a viewer distance that depends on anthropometric measurements [12] and is lower than the recommended viewing distance. Based on the experience of 3D industry professionals, the optimum seating distance for 3D monitors does not appear to be much different from the optimum range for regular HD monitors. The viewing distance is, however, affected by the type of stereoscopic projection adopted. In fact, passive 3D projection uses glasses that cut the monitor's 1080p resolution in half, delivering 540p to each eye. This means that the optimum viewing distance increases, so that touch-screen monitors (fig. 2b) turn out to be inappropriate for the visualization of and interaction with 3D scenarios.
Fig. 2. System composed of an HD monitor and a touch-screen controller (a); touch-screen monitor-based system (b).
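The resolution argument above can be made quantitative with a standard visual-acuity estimate (an illustrative sketch, not part of the original study; the 1-arcminute acuity figure and the 46" screen height are assumptions): the closest distance at which the pixel grid becomes invisible scales with the pixel pitch, so halving the per-eye vertical resolution doubles it.

```python
import math

ARCMIN = math.radians(1.0 / 60.0)  # ~1 arcminute: angular resolution of 20/20 vision

def min_viewing_distance(screen_height_m, vertical_pixels):
    """Distance (m) at which one pixel subtends 1 arcminute, i.e. the
    closest distance where the pixel grid is no longer visible."""
    pixel_pitch = screen_height_m / vertical_pixels
    return pixel_pitch / math.tan(ARCMIN)

height = 0.573  # approximate height (m) of a 46" 16:9 screen
d_1080 = min_viewing_distance(height, 1080)  # full 1080p image
d_540 = min_viewing_distance(height, 540)    # passive 3D: 540p per eye
print(f"1080p: {d_1080:.2f} m, 540p per eye: {d_540:.2f} m")
```

Halving the vertical resolution exactly doubles the minimum distance, which illustrates why a touch-screen monitor used for passive 3D cannot be operated from arm's length.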
To sum up, the adoption of a touch-screen for both the visualization of and interaction with the 3D virtual exhibit (fig. 2b) should be excluded. A further consideration is that the touch-screen remote control for interacting with the VM system could be either a handheld device, e.g. a tablet, or fixed in a specific position. The first solution can be adopted when an operator supervises the system and hands the controls to visitors who want to enjoy the virtual exhibit. The second solution can instead be employed when the system is intended for unattended operation; since the console cannot be moved, it is possible to increase the screen size of the touch screen in order to enhance its legibility.
4 The Case Study
The VM system described in this paper was intended to be installed in a small archaeological museum, the "Museum of the Bruttians and the Sea", hosted in the beautiful setting of the Palazzo del Trono in Cetraro (Italy). The VM system will be surrounded by archaeological pieces, found in a small group of necropolises and housing facilities built by the Bruttian people. Among the archaeological finds there are bronze and iron weapons, ceremonial vases, drinking cups, eating dishes, pins and jewellery.
4.1 First prototype
As clearly expressed by ISO 9241-210:2010 (the standard for human-centred design of interactive systems), in a UCD approach the design and evaluation stages should be preceded by the gathering of requirements and specifications, to better define the context of use and the user requirements.
The VM exhibition should allow users to engage in an educational and fun experience. In particular, as requested by the museum director, the VM system should permit visitors to experience two different 3D scenarios that realistically reproduce: a tomb belonging to the necropolis of Treselle, discovered in the territory of Cetraro, and an underwater archaeological deposit located 20 km from Cetraro, a few metres from the shore at a depth of 2-4 m. In the first scenario, visitors should be able to visit the virtual tomb, with its Bruttian burials, and visualize and manipulate its contents, such as bronze and iron weapons (bronze belts, spearheads, a javelin), pottery, drinking cups (skyphoi, kylikes, bowls, cups) and eating dishes (plates, paterae). In the second scenario, visitors can interact with some remains and fragments of amphorae dating back to the middle of the III century BC.
4.2 Selection of the visualization and interaction device
The configuration with an HD monitor and a touch-screen remote control was chosen in accordance with the volume that the VM system can occupy in the museum and with the specifications described in the previous section.
The volume requirements led us to select a 46" HD monitor which, according to THX [13] standards, has an optimum viewing-distance range of 1.5-2.5 metres. The minimum viewing distance corresponds to a view angle of approximately 40° (considering average human vision, the upper limit for the maximum field of view is around 70°, which corresponds to the maximum field of view inclusive of peripheral vision), and the maximum viewing distance corresponds to approximately 28°. This range allows us to satisfy both the constraints on the volume and the minimum distance necessary to perceive the stereoscopic experience, commonly considered to be 1.5 metres. It is worth noticing that, owing to many objective and subjective factors, the user experience provided by the virtual exhibit changes from person to person [14,15]. For example, age affects 3D perception: children have a smaller interocular distance than adults. This means that, when placed at the same distance from the monitor, children have a more immersive 3D viewing experience than adults.
In this case, since the presence of a supervisor is not always assured, we preferred to fix a 23" touch-screen console in a specific position.
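The 1.5-2.5 m range quoted above can be approximated from the two view angles (a rough sketch; the screen-width formula, the 16:9 aspect ratio and the rounding are our assumptions, and THX's exact method may differ): a screen of width w spans a horizontal field of view θ at distance d = w / (2 tan(θ/2)).

```python
import math

def viewing_distance(diagonal_in, fov_deg, aspect=(16, 9)):
    """Distance (m) at which a screen with the given diagonal (inches)
    spans the given horizontal field of view (degrees)."""
    a, b = aspect
    width_m = diagonal_in * 0.0254 * a / math.hypot(a, b)  # screen width in metres
    return width_m / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

d_near = viewing_distance(46, 40)  # closest seat: ~40 deg field of view
d_far = viewing_distance(46, 28)   # farthest seat: ~28 deg field of view
print(f'46 inch monitor: {d_near:.2f} m to {d_far:.2f} m')
```

The result (roughly 1.4-2.0 m) is close to the 1.5-2.5 m range cited from THX [13] for a 46" monitor.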
4.3 System architecture development
Once the devices for user interaction and visualization of the virtual museum exhibition had been defined, the next step was defining the position of these devices in space. In particular, the relative positions and distances of the HD monitor and the touch-screen console had to be identified, taking into account ergonomic standards for a better experience of the VM system.
Since the virtual exhibit is intended to be used by many different audiences,
such as middle and high school students, college students, tourists, etc., ergonomic
studies have been performed in order to find the optimal positioning of the visualization device and its control system. The inclination of the touch-screen console has also been studied. For a comfortable experience of the VM system, we tried to
keep users’ movements as natural as possible, with particular attention to the most
repetitive ones, i.e. the neck and shoulder extension movements. As detailed in the
previous section, a 46" HD monitor allows for an optimum viewing-distance range of
of 1.5-2.5 meters. Therefore, the touch-screen console has been placed at a distance of 1.5 meters from the HD monitor, in order to take advantage of the full
range and enjoy an optimal immersion and visualization of the 3D contents.
Once the relative positioning of the monitor and controller had been defined, we focused on the design of the structure. As depicted in figure 3a, various design alternatives have been evaluated. As recommended by UCD standards, several virtual prototypes of the VM system architecture were designed, differing in materials, dimensions and aesthetics. These prototypes were subjected to an iterative design process that allowed us to improve each version, but also to discard those that performed worse in terms of ergonomic and technical requirements. Figure 3b shows the final virtual prototype, realized with white and orange folded panels made of PMMA (polymethyl methacrylate). Aluminium built-in elements were adopted to support and fasten the monitors.
Fig. 3. Alternative design solutions and rendering in the context of use (a). Final virtual prototype of the VM system architecture (b).
4.4 User interfaces design
Since the VM system will be used by a large variety of visitors, the user interfaces (UIs) should clearly communicate their purpose, so that even users with no experience of technological devices can immediately understand what to do. For this reason, the UI design process first focused on a minimalistic UI design, keeping the layout and graphic features of the VM system as simple as possible. In the composition of the graphical elements as a whole, the UIs should provide users with all the essential features to manipulate virtual objects, but also give access to a database of media contents, such as images, texts and sounds, so that the interaction also has an educational value. This approach allowed us to define a first low-fidelity prototype (paper prototype) of the UIs. Before proceeding with the development of fully operational software for the management of the VM system, the first UI prototypes were submitted to a user-centered evaluation in order to drive and refine their design. The evaluation was performed by means of the Cognitive Walkthrough (CW) [16] usability inspection method. According to the CW standards and recommendations [17], a group of experts performed a UI inspection, going through a set of tasks and evaluating UI understandability and ease of learning.
The result of the UI design and CW analyses was a "three-level" user interface. The first level is the "home screen" (fig. 4a), where visitors can choose their preferred language and, most importantly, select the experience. Once users have selected the desired option, they access the second level.
Depending on the selected scenario, the second interface that appears could be the Tomb of Treselle (fig. 4b) or the underwater environment (fig. 4c). Most of the screen area is reserved for the visualization of the 3D scenario, while the rest of the screen is organized as follows: on the left side, some basic information explains to visitors how to navigate through the 3D environment
and manipulate its 3D contents; on the lower section of the screen there is a text
field that gives historical and cultural information about what the user is going to
experience. In particular, the tomb of Treselle (fig. 4b) features a Bruttian burial dating back to the IV century BC and contains: weapons (bronze belts, iron spearheads and a javelin); pottery, such as ceremonial vases, drinking cups (skyphoi, kylikes, bowls, cups) and eating dishes (plates, paterae); and a lead set used in meat banquets, consisting of skewers, a grill and a pair of andirons made of iron or lead. The underwater site (fig. 4c), instead, contains a residual archaeological deposit, concreted to the seabed and to large rocky blocks, consisting of a merchant vessel carrying a load of transport amphorae of the MGS V and VI types, dating back to the middle of the III century BC. When the user selects one of the virtual objects present in the two environments, he/she enters the third level (fig. 4d), in which it is
possible to manipulate, zoom-in and get specific information about the artwork.
Fig. 4. First interface of the VM system (a). Second-level UIs that allow users to experience a 3D immersive reconstruction of the tomb of Treselle (b) or an underwater environment (c). 3D models accessible through the third UI level (d).
4.5 VM system evaluation
The final stage of UI development consists of assessing their usability. The user studies carried out were very important for the design of the final VM system, because they provided much information on the user experience and on the interaction with different alternatives of the virtual exhibition. In particular, we noticed that when the monitor is controlled through a touch-screen remote control, users may become confused, inattentive and annoyed by the way information is arranged across the two screens. We therefore
tested two different solutions. In the first solution, both the HD monitor and the
touch-screen console display the same kind of information and contents. In the
second solution, the HD monitor visualizes only the 3D contents, while all the text
data and information are accessible only by the touch-screen console.
Traditional metrics, such as completion time and number of errors, together with questionnaires that capture the cognitive aspects of the user experience, were used to interpret the outcomes of the user study. The results of the comparative testing show that, although from an objective point of view there is no statistically significant difference between the two configurations, from a subjective point of view the satisfaction questionnaires demonstrate a preference for the second solution. In particular, when the touch screen duplicates the information present on the main monitor, it reduces misunderstanding, since it spares the user from searching both screens for the desired information, but it also diminishes the perceived user experience of the virtual exhibition. On the contrary, a full-screen visualization of the 3D contents on the main monitor, with all the menus and texts on the touch-screen device, increases the user's immersion, and the contents appear more pleasant and attractive from an aesthetic point of view.
Fig. 5. Visitors while experiencing the VM system.
On the basis of these results, we decided to adopt the second solution for the
VM system interaction, as shown in fig.5. While the main monitor is dedicated to
a 3D visualization of the archaeological finds, the touch-screen console is used both to control the 3D objects and to display information and educational contents.
5 Conclusions
In this paper a user-centered design approach has been adopted for the development of a VM system that has been realized for the “Museum of the Bruttians and
the Sea” of Cetraro.
The paper gives much technical and technological advice and many suggestions,
which can be adopted to overcome several typical and recurrent problems related
to the development of VM systems, especially when low budgets and space constraints are among the design requirements.
The results of user testing and the opinions gathered from visitors demonstrated that the adoption of a UCD approach can effectively improve VM system development, yielding a product that offers a more efficient, satisfying and user-friendly experience.
References
1. Pine II B.J., Gilmore J.H. The Experience Economy: Work is Theatre & Every Business a
Stage. Harvard. 2000.
2. Vergo P. New Museology. Reaktion books. London. 1989.
3. Blanchard E.G., Zanciu A.N., Mahmoud H., and Molloy J.S. Enhancing In-Museum Informal
Learning by Augmenting Artworks with Gesture Interactions and AIED Paradigms. In Artificial Intelligence in Education (pp. 649-652). Springer Berlin Heidelberg. 2013.
4. Pescarin S., Pietroni E., Rescic L., Wallergård M., Omar K., and Rufa C. NICH: a preliminary
theoretical study on Natural Interaction applied to Cultural Heritage contexts. Digital Heritage Inter. Congress, Marseille, V.2, p.355, 2013.
5. Wang C.S., Chiang D.J., Wei Y.C. Intuitional 3D Museum Navigation System Using Kinect.
In Information Technology Convergence, pp. 587-596. Springer Netherlands, 2013.
6. Bruno F., Bruno S., De Sensi G., Librandi C., Luchi M.L., Mancuso S., Muzzupappa M., Pina
M. MNEME: A transportable virtual exhibition system for Cultural Heritage. 36th Annual
Conf. on CAA 2008, Budapest, 2008.
7. Bruno F., Angilica A., Cosco F., Barbieri L., Muzzupappa M. Comparing Different Visuo-Haptic Environments for Virtual Prototyping Applications. In ASME 2011 World Conference on Innovative Virtual Reality, pp. 183-191.
8. Barbieri L., Angilica A., Bruno F., Muzzupappa M. An Interactive Tool for the Participatory
Design of Product Interface. In IDETC/CIE 2012 Chicago (pp. 1437-1447). 2012.
9. Petrelli D., Not E. UCD of flexible hypermedia for a mobile guide: Reflections on the
HyperAudio experience. User Modeling and User-Adapted Interaction, 15(3-4), 303-338.
2005.
10. IFEL-Fondazione ANCI e Federculture. Le forme di PPP e il fondo per la progettualità in
campo culturale. 2013.
11. Craig J.C., Johnson K.O. The Two-Point Threshold: Not a Measure of Tactile Spatial Resolution. Current Directions in Psychological Science, 9(1), 29-32. 2000.
12. Woodson W.E., Tillman B., Tillman P. Human factors design handbook, 2nd Ed. Woodson,
1992.
13. http://www.thx.com/
14. Barbieri L., Bruno F., Cosco F., Muzzupappa M. Effects of device obtrusion and tool-hand
misalignment on user performance and stiffness perception in visuo-haptic mixed reality. International Journal of Human-Computer Studies, 72(12), 846-859, 2014.
15. Barbieri L., Angilica A., Bruno F., Muzzupappa, M. Mixed prototyping with configurable
physical archetype for usability evaluation of product interfaces. Computers in Industry,
64(3), 310-323, 2013.
16. Lewis C., Polson P., Wharton C., Rieman J. Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. ACM CHI'90, Seattle, WA, 235-242, 1990.
17. Wharton C., Rieman J., Lewis C., Polson P. The cognitive walkthrough method: A practitioner’s guide. Usability Inspection Methods, John Wiley & Sons, New York, 79-104, 1994.
An integrated approach to customize the
packaging of heritage artefacts
G. Fatuzzo1, G. Sequenzia1, S. M. Oliveri1, R. Barbagallo1* and M. Calì1
1 University of Catania, Catania, Italy
* Corresponding author – email: rbarbaga@dii.unict.it
Abstract The shipment of heritage artefacts for restoration or for temporary/travelling exhibitions has hitherto largely lacked customised packaging. Packaging has been empirical and intuitive, which has unnecessarily put the artefacts at risk. This research therefore arises from the need to identify a way of designing and creating packaging for artefacts which takes into account their structural criticalities, deterioration, particular morphology, constituent materials and manufacturing techniques. The proposed methodology for semi-automatically designing packaging for heritage artefacts includes the integrated and interactive use of Reverse Engineering (RE), Finite Element Analysis (FEA) and Rapid Prototyping (RP). The methodology has been applied to create customised packaging for a small III century BC bronze statue of Heracles (Museo Civico "F.L. Belgiorno" di Modica, Italy). It shows how the risk of damage to heritage artefacts can be reduced during shipping. Furthermore, this approach can identify each safety factor and the corresponding risk parameter to stipulate in the insurance policy.
Keywords: Packaging; cultural heritage; laser scanning; FEM; rapid prototyping
1 Introduction
Planning the transportation of heritage artefacts (HA) and designing appropriate packaging for them are issues often faced by museums. Traditionally [1], the approach has been manual, neither systematic nor scientific, and has wasted time and money.
Given the complexity and irregular shapes of the artefacts, universal packaging solutions are inappropriate. Ideal packaging should be able to provide certain prerequisites: correct artefact position, zone interface choice, materials choice, ease
of assembly/disassembly and, finally, recyclability. These requisites require a preliminary study of the morphology, materials and conservation state, together with an analysis of the criticality of each artefact.
The ever more widespread use of laser scanning in the field of cultural heritage [2-3] invites the use of more integrated methodologies, already widely used in industrial engineering, to cut time and costs as well as to improve the safety of HA. A recent study [4] proposed Generative Modelling Technology (GMT) to design packaging appropriate to the size and shape of a specific artefact; it also provided for the use of rapid prototyping, which is increasingly used in archaeology [5]. In another very recent study [6], an approach based on 3D acquisition was proposed, together with an interactive algorithm, to produce customised packaging for fragile archaeological artefacts using a low-cost milling machine. To date, there are no studies in the literature which integrate laser scanning with finite element analysis to verify the packaging/artefact interaction. However, various studies [7-9] have aimed at structurally verifying large statues using finite elements.
An integrated methodology based on laser scanning, Finite Element Analysis (FEA) and Rapid Prototyping (RP) is proposed in this work to design and create customised packaging for a small bronze sculpture. Unlike previous studies, this approach includes a preliminary morphological and structural analysis of the statue, as well as a study of the statue-packaging interaction, to verify analytically how secure the artefact is during handling and transit. The flow chart below (Fig. 1) summarises the approach of this research. Future developments might apply the methodology to medium-large sculptures.
Fig. 1. Flow diagram of the proposed approach.
1.1 Case study
For this packaging project, the bronze statue of the 'Heracles of Cafeo', kept at the 'F.L. Belgiorno' public museum in Modica, was chosen. Dating back to 300 BC, the cast-bronze statue is 220 mm high with a volume of 257 × 10³ mm³. It is a rare small Hellenistic bronze sculpture discovered in Sicily. It had recently been restored to inhibit the effects of carbonation and copper chloride. Likewise, there was evidence of a much earlier restoration to reconstruct its right arm, which is larger than the left. As shown in Figure 2, Heracles wears an imposing lion-skin cloak from his head down along his left side. His upright body leans on the extended left leg, while the relaxed right leg is slightly forward. The left hand holds a bow and arrows, the bow-strings between his fingers. The right hand rests on a club [10], as per the most common iconography.
Fig. 2. The ‘Heracles of Cafeo’ statue
2 Methods
Digitising the surfaces of archaeological objects relies on non-invasive methodologies in order to ensure their integrity. Computerised tomography (CT) is one of the most versatile techniques for lathe-produced work, because it provides the dimensions even of non-visible parts [11]. Since the Heracles statue was made by bronze casting, laser scanning was chosen, using a NextEngine Desktop 3D scanner, which is particularly versatile for contactless acquisition of the geometry of small objects and, with suitable adjustments, is also precise for large objects. The scanning took place in the museum where the statue is on display (Fig. 3a). The sensor was set up to deal with the complex morphology and surface finish of the statue, as well as the unalterable environmental factors of the display space: the limited work area and the rather dim artificial lighting [12]. Fifty-five acquisitions were made in two sessions, so as to obtain the greatest possible number of surface samples, covering nearly 90% of the surface. The files (2.54 GB in total) were saved with the .WRL extension.
In post-processing, Inus Technology's RapidForm software was used to align the point clouds obtained from 15 shells and reconstruct an overall representation (Fig. 3b). Through data merging and data reduction, the scans were stored in a single 3D model, filtered to 185,741 polygons from the initial 1,406,250; the outliers, though few, generated by the high redundancy of the overlapping zones, were eliminated (Fig. 3c). Finally, the small unsampled areas were reconstructed automatically.
Fig. 3. Digitising the statue: (a) surface scans of the sculpture; (b) storing the shells; (c) final 3D model.
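The outlier-elimination step mentioned above can be sketched as a statistical filter on the point cloud (an illustrative example, not the actual RapidForm procedure; the neighbour count and threshold are arbitrary choices): points whose mean distance to their nearest neighbours lies far above the cloud-wide average are discarded.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds the global mean by more than std_ratio standard deviations.
    points: (N, 3) array. Returns the filtered (M, 3) array."""
    # Full pairwise distances (fine for small clouds; a KD-tree scales better)
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    # Mean distance to the k nearest neighbours (column 0 is the point itself)
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    mean_knn = knn.mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

# A dense cluster plus one far-away stray point
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 1.0, size=(200, 3)), [[50.0, 50.0, 50.0]]])
filtered = remove_statistical_outliers(cloud)
```

The stray point's neighbour distances dwarf the cluster average, so only it is removed; redundant overlap zones in a real scan behave analogously.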
The larger gaps, due to laser inaccessibility caused by the internal shape of the cloak and by the residual parts of the arrows held in the left hand (Fig. 4), were reconstructed by converting the RapidForm® .MDL format (14,435 kB) to .STL until all the ASCII data corresponded to the original geometry. This data was then converted to .3DM (284 kB) for the NURBS modelling (Rhinoceros® software) of the specific reconstructions (Fig. 4).
Fig. 4. Reconstructing the unsampled surfaces.
The packaging procedure was preceded by a morphological analysis to define the optimum orientation during handling and transit. This first evaluation was followed by an FE analysis to highlight the highly critical zones to be protected, as opposed to the stronger zones which the packaging can touch directly. Given Heracles' upright position and certain fragilities both longitudinal and transverse, it was decided to package the statue lengthwise. A static structural study was carried out using well-established FEA, which has already been used to identify critical zones in monuments.
To characterise the statue's material composition, bibliographic searches revealed no scientifically certain data. The few available chemical analyses of such alloys revealed that in the late Greek period bronze was made up of copper, tin and lead, with lead content growing up to 30-40% to facilitate casting [13]. Because of the difficulty of establishing the statue's mechanical properties, this study refers to a work on the bronze statue of the 'Giraldillo' [8].
To unequivocally characterise the statue structurally, FEM analyses were carried out in the MARC® environment, subjecting the statue to a hydrostatic pressure of 0.1 MPa in order to qualitatively evaluate the zones of greatest and least criticality. This type of simulation is well suited to cases where the load conditions cannot be established a priori; moreover, since the load acts uniformly over the statue's surface, it provides an overall view of the stress state and therefore of the critical zones. Given the complexity of the statue's surface, a mesh of 63,518 tetrahedral elements was created in Hypermesh®. From the FEM study of the model, the zones at greatest risk of breakage were the protruding parts with the smallest cross-sections. Figure 5 shows the statue's morphology and, in particular, the parts to be excluded from contact with the packaging: the left hand, right elbow, right foot and the cloak. Analogously, the analysis identified the strongest areas, i.e. those with the lowest Von Mises stress values, from which the sections suitable for contact with the packaging could be extrapolated. Moreover, the analysis highlighted that a solution with distributed supports would not prevent contact between the critical zones and the packaging.
Fig. 5. Results of FEA on sculpture.
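The ranking of critical zones in Fig. 5 is based on the von Mises equivalent stress that the FEM solver reports per element. As a reminder (a generic textbook formula, independent of the MARC® model), it can be computed from the six Cauchy stress components:

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components
    (normal stresses sx, sy, sz and shear stresses txy, tyz, tzx)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Uniaxial tension: the equivalent stress equals the applied stress.
print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 100.0
# A purely hydrostatic state (e.g. the 0.1 MPa pressure on a uniformly
# stressed element) has zero deviatoric part, so its equivalent stress is 0;
# in the statue it is the geometry that turns the pressure into local peaks.
print(von_mises(-0.1, -0.1, -0.1, 0.0, 0.0, 0.0))  # 0.0
```

This is why thin protruding parts, where geometry concentrates the deviatoric stress, emerge as the zones at greatest risk.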
Once the statue/packaging interface zones had been defined, and having divided the statue into 48 sections perpendicular to the main axis, a morphological study was carried out on the sections most suited to packaging contact, as shown in fig. 6. In particular, four sections transverse to the statue's axis were identified at different intervals. Section A-A, at 25.5 mm from the tip of the sculpture's head, coincides with Heracles' forehead and has a surface area of 868 mm². The profile of the frontal section is 50.57 mm long and is more regular than the rear profile, which includes the added complexity of his cloak and is 49.6 mm long. Section B-B, at 33 mm from section A-A, coincides with the shoulder to which the cloak is attached and has a surface area of 2193 mm². The profile of the frontal section is 83.97 mm long. Even though the cloak's knot is sharp, it is an irregularity similar to the cloak's fold at the back, whose profile, at 78.66 mm, is slightly shorter. Section C-C, at 63 mm from section B-B, coincides with the statue's pelvis, hand L and the cloak. Excluding the hand and drape from touching the packaging because they are protruding morphologies (weaker parts), the pelvis area has a surface area of 1155 mm². Morphologically, front and back are on the whole quite similar: the front profile is 58.61 mm long, whereas the back is 50.68 mm. Section D-D, at 90 mm from section C-C, coincides with the statue's ankle and has a surface area of 246 mm². Both ankle contours have the same shape; the front profile is 42.08 mm long and the back 35.37 mm. The section analyses show that the front profiles are smoother than the rear ones, as well as overall longer (235.23 mm versus 214.31 mm). Given that the packaging should touch either the front or the back of the four sections, the more regular and longer front profiles were identified as supports. Thus, with the best transit position being horizontal, the prone position was chosen.
Fig. 6. Morphologically/geometrically identifying and analysing the statue-packaging interface.
From these data, the Heracles statue's packaging was designed as a 170 × 150 × 300 mm parallelepiped, with its longest side parallel to the statue's axis. Internally, perpendicular to the axis, eight sliding ribs were created at the four levels identified in the morphological analysis. The ribs slide on guides so the statue can be inserted or removed easily.
To simulate statue-packaging interaction, FEM analyses were carried out by applying accelerations in the contact zones, considering the statue's weight and hypothesising that the supports are infinitely rigid (fig. 7). To evaluate real transit accelerations for works of art, reference was made to the literature [14] on the monitoring and experimental measurement of the shock/vibration values to which the packaging of paintings is subjected during actual overland, air and sea shipping. From these data an acceleration of 9 g was extrapolated, a value recorded while flying the painting 'The Consecration of Saint Nicholas' by Paolo Veronese from the Chrysler Museum (Norfolk, Virginia) to the National Gallery.
The simulation results highlighted that the hypothesised packaging protects the critical, more fragile zones (hand L, elbow R, foot R and cloak): where the packaging touches the statue, the von Mises stress values are such that the safety factor is never less than 10. The hypothesised packaging would therefore provide ample safety margins in transit.
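As a rough order-of-magnitude check of the load case, the equivalent static force at the contacts under a 9 g transit acceleration follows from F = m·a. The statue's mass is not reported in the text, so the mass below is purely illustrative:

```python
G = 9.81        # gravitational acceleration, m/s^2
mass_kg = 2.0   # hypothetical statue mass; NOT from the paper
accel_g = 9     # peak transit acceleration from the literature [14]

# Equivalent static force applied at the statue-packaging contacts.
force_n = mass_kg * accel_g * G
print(f"{force_n:.1f} N")  # 176.6 N for the assumed 2 kg mass
```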
Fig. 7. FEA of statue-packaging interaction (a, b).
Having carried out the virtual tests described above, Rapid Prototyping (RP) techniques were used to create prototypes of the sculpture and packaging. A Stratasys 3D printer (Dimension 1200es model) was used to produce ABS prototypes (fig. 8) by way of FDM (Fused Deposition Modelling). The sculpture and packaging prototypes facilitated assembly/disassembly tests which could not have been done on the original statue. To construct the prototypes on a 1:1 scale, 206.7 cm³ of ABS was used for the statue and 76.2 cm³ for the packaging, and printing took about 18 h. The ABS packaging prototype can be considered functional and thus usable in the future for transporting the original Heracles of Cafeo.
Fig. 8. ABS prototypes by additive manufacturing.
3 Conclusions
This work has presented an integrated methodology based on laser scanning, finite element analysis and rapid prototyping to design and build customised packaging for a small bronze sculpture. The methodology may be applied to goods of various sizes, materials and shapes. Unlike previous studies in the literature, this approach carries out a preliminary study of the item in terms of shape and structure, together with a study of the item-packaging interaction, to virtually verify the degree of safety during handling and transit. The FEM results confirm that the chosen variables provide ample safety margins for transit and, furthermore, provide a risk parameter for insurance policies. The sculpture and packaging prototypes produced by additive manufacturing allowed aesthetic, functional and assembly evaluations. Future developments concern procedures based on automatic algorithms for choosing the orientation and the sections which interface the artefact and the packaging.
References
[1] Stolow, N. (1981). Procedures and conservation standards for museum collections in transit
and on exhibition. Unesco.
[2] Fatuzzo, G., Mussumeci, G., Oliveri, S. M., & Sequenzia, G. (2011). The “Guerriero di
Castiglione”: reconstructing missing elements with integrated non-destructive 3D modelling
techniques. Journal of Archaeological Science, 38 (12), 3533-3540.
[3] Fatuzzo, G., Mangiameli, M., Mussumeci, G., Zito, S., (2014). Laser scanner data processing
and 3D modeling using a free and open source software. In Proceedings of the International
Conference On Numerical Analysis And Applied Mathematics 2014 (ICNAAM-2014), Vol.
1648. AIP Publishing.
[4] Sá, A. M., Rodriguez-Echavarria, K., Griffin, M., Covill, D., Kaminski, J., & Arnold, D. B.
(2012, November). Parametric 3D-fitted Frames for Packaging Heritage Artefacts. In VAST
(pp. 105-112).
[5] Scopigno, R., Cignoni, P., Pietroni, N., Callieri, M., & Dellepiane, M. (2015, November).
Digital Fabrication Techniques for Cultural Heritage: A Survey. In Computer Graphics Forum.
[6] Sánchez-Belenguer, C., Vendrell-Vidal, E., Sánchez-López, M., Díaz-Marín, C., & Aura-Castro, E. (2015). Automatic production of tailored packaging for fragile archaeological artifacts. Journal on Computing and Cultural Heritage (JOCCH), 8(3), 17.
[7] Borri, A., & Grazini, A. (2006). Diagnostic analysis of the lesions and stability of Michelangelo's David. Journal of Cultural Heritage, 7(4), 273-285.
[8] Solís, M., Domínguez, J., & Pérez, L. (2012). Structural Analysis of La Giralda's 16th-Century Sculpture/Weather Vane. International Journal of Architectural Heritage, 6(2), 147-171.
[9] Berto, L., Favaretto, T., Saetta, A., Antonelli, F., & Lazzarini, L. (2012). Assessment of
seismic vulnerability of art objects: The “Galleria dei Prigioni” sculptures at the Accademia
Gallery in Florence. Journal of Cultural Heritage, 13(1), 7-21.
[10] Rizzone, V. G., Sammito, A. M., & Sirugo, S. (2009). Il museo civico di Modica "F.L. Belgiorno": guida delle collezioni archeologiche (Vol. 2). Polimetrica sas.
[11] Bouzakis, K. D., Pantermalis, D., Efstathiou, K., Varitis, E., Paradisiadis, G., & Mavroudis,
I. (2011). An investigation of ceramic forming method using reverse engineering techniques:
the case of Oinochoai from Dion, Macedonia, Greece. Journal of Archaeological Method and
Theory, 18(2), 111-124.
[12] Gerbino, S., Del Giudice, D. M., Staiano, G., Lanzotti, A., & Martorelli, M. (2015). On the
influence of scanning factors on the laser scanner-based 3D inspection process. The International Journal of Advanced Manufacturing Technology, 1-13.
[13] Giardino, C. (1998). I metalli nel mondo antico: introduzione all'archeometallurgia. Laterza.
[14] Saunders, D. (1998). Monitoring shock and vibration during the transportation of paintings.
National Gallery Technical Bulletin, 19, 64-73.
Part II
Product Manufacturing and Additive Manufacturing
This track focuses on the methods of Additive Manufacturing, a technology that has enabled the building of parts with new shapes and geometrical features. As this technology modifies established practices, new knowledge is required for proper design and manufacturing. Papers in this topic deal with the optimization of lattice structures and the use of topological optimization as a concept design tool.
In this track some interesting experimental methods in product development are also introduced. Various user-centered design approaches are presented in detail. The authors try to overcome the lack of detailed user requirements and the lack of norms and guidelines for the ergonomic assessment of different kinds of tools and interactive digital mock-ups.
Finally, the Advanced Manufacturing topic covers very specific manufacturing techniques, such as the use of a collaborative robot for fast, low-price, automated and reproducible repair of high-performance fibre composite structures.
Antonio Bello - Univ. Oviedo
Emmanuel Duc – IFMA
Massimo Martorelli - Univ. Napoli ‘Federico II’
Section 2.1
Additive Manufacturing
Extraction of features for combined additive
manufacturing and machining processes in a
remanufacturing context
Van Thao LE1*, Henri PARIS1 and Guillaume MANDIL1
1 G-SCOP Laboratory, Grenoble-Alpes University, 46 avenue Félix Viallet, 38031 Grenoble Cedex 1, France
* Corresponding author. Tel.: +33-476-575-055; E-mail address: Van-Thao.Le@g-scop.eu
Abstract The emergence of additive manufacturing (AM) techniques in the last 30 years has made it possible to build complex parts by adding material in a layer-based fashion, or by spraying material directly onto a part or substrate. Taking the performance of these techniques into account in a 'new remanufacturing strategy' can open new ways to transform an end-of-life (EoL) part into a new part intended for another product. The strategy may allow a considerable proportion of the material of existing parts to be reused directly for producing new parts, without passing through the recycling stage. In this work, the strategy enabling the transformation of existing parts into desired parts is first presented. The strategy uses an adequate sequence of additive and subtractive operations, as well as inspection operations, to achieve the geometry and quality of the final part. This sequence is designed from a set of AM features and machining features, which are extracted from available technical information and from the CAD models of the existing part and the final part. The core of the paper focuses on the feature extraction approach, whose development is based on the knowledge of AM and machining processes, as well as the specifications of the final part.
Keywords: Feature extraction; Additive manufacturing feature; Machining feature; Additive manufacturing; Remanufacturing.
1 Introduction
To address the issues raised by end-of-life (EoL) products, industrial manufacturers are looking for efficient strategies to recover EoL products. Generally, used products are separated and recycled into raw material, which is then used to produce workpieces. However, the energy consumption of recycling systems remains significant. Moreover, the added value and a considerable amount of
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_19
energy used to produce original products are generally lost during the recycling
process [1]. Nowadays, remanufacturing is considered a pertinent solution for EoL products [1, 2]. Indeed, remanufacturing is an industrial process that converts worn-out/EoL products into products in like-new condition (including warranty) [3, 4]. This process can potentially reduce the cost of product manufacturing while minimizing environmental impacts through reduced resource consumption and waste [1, 5, 6].
In the last two decades, the emergence of additive manufacturing (AM) techniques has allowed complex parts to be built directly from a CAD model without special fixtures and cutting tools [7]. In comparison to conventional manufacturing processes, such as machining, casting and forging, AM technologies are interesting in that they have great potential for improving material use efficiency, reducing energy consumption, and cutting scrap generation and greenhouse gas emissions [8]. Today, these techniques are used effectively in the automotive and aerospace industries, as well as in biomedical engineering [7].
Literature shows that the use of AM technologies (e.g., direct metal deposition (DMD), construction laser additive deposition (CLAD) and fused deposition modelling (FDM)) can be highly effective in the remanufacturing field. Wilson et al. stated that laser direct deposition was efficient for remanufacturing turbine blades [9]. Nan et al. presented a remanufacturing system based on the integration of reverse engineering and laser cladding; this method was able to extend the lifetime of ageing dies, aircraft and vehicle components [10]. However, these works only focused on remanufacturing components, namely returning EoL parts/components to like-new condition and extending their lifetime. Zhu et al. proposed different feasible strategies to produce new plastic parts from existing plastic parts. The strategy uses CNC machining, an additive manufacturing process (i.e., FDM) and an inspection process interchangeably [11]. Nevertheless, the strategy was only efficient for producing prismatic plastic parts; in some cases it is not time-effective and reduces the tensile strength of the obtained parts. Recently, Navrotsky et al. showed that the SLM technique has significant potential for creating new features on existing components [12]. Terrazas et al. presented a method which allows the fabrication of multi-material components using discrete runs of an electron beam melting (EBM) system [13]. In this work, the authors successfully built a copper entity on top of an existing titanium part. Their results open the perspective of using EBM for remanufacturing. The investigation of building a new feature on an existing part using the EBM process, presented in our recent work [14], also confirms that the EBM technique allows a new part to be obtained from an existing part.
In this work, the performance of AM techniques is integrated into a 'new remanufacturing strategy' which can give a new life to EoL parts by transforming them into new parts intended for another product. The strategy consists of combining a machining process, AM processes and an inspection process. Namely, the desired part is obtained from existing parts by a manufacturing sequence comprising subtractive and additive operations, as well as inspection operations. The scope of this work focuses on a feature extraction approach which allows AM features and machining features to be obtained from the technical information and the CAD models of the existing part and the final part. These features will be used as input data for designing the manufacturing sequence compatible with the proposed strategy. This paper is organized as follows: Section 2 presents the new remanufacturing strategy; the novel feature extraction approach is described in Section 3; conclusions and future work are presented in Section 4.
2 New remanufacturing strategy
The objective of the new remanufacturing strategy is to give a new life to an EoL or existing part by transforming it into a new part intended for another product. The strategy consists of combining a machining process (i.e., CNC machining), metallic AM processes and an inspection process, and possibly a heat treatment process [15]. This combination exploits the advantages and performance of AM and machining processes (e.g., obtaining a complex part by AM techniques and achieving high precision by CNC machining), while minimizing their disadvantages (for example, the poor dimensional and surface quality generated by AM processes and the limited tool accessibility in machining).
Fig. 1. General process consistent with the proposed strategy.
The generation of a process consistent with the proposed strategy involves three major steps (Figure 1): the pre-processing of the existing EoL part, the processing, and the post-processing. First, the existing part is cleaned and evaluated; then, its actual shape and dimensions are captured by a measurement and scanning system to generate the CAD model. The processing step defines a manufacturing sequence containing subtractive and additive operations, inspection operations and possibly heat treatment. The post-processing step consists of final inspection operations and additional operations such as labeling.
The major issue to solve is: how is such a manufacturing sequence defined? In the next section, we present an approach to extract both machining features and AM features from the available technological information and the CAD models of the existing part and the final part. These extracted features will be used as input data to design the manufacturing sequence.
3 Novel approach of feature extraction
3.1 Definition of manufacturing features
In the following, manufacturing features refer to machining features and AM features. The machining feature was defined by the GAMA group [16]: "A machining feature is defined by a geometrical form and a set of specifications for which a machining process is known. This machining process is quasi-independent from the processes of other machining features" [16]. A machining process is an ordered sequence of machining operations. Following this definition, the major attributes of a machining feature are its geometrical characteristics, the intrinsic tolerances on form and dimensions, the machining directions, and the estimated material to remove from the rough state [17]. Recently, Zhang et al. also proposed a definition of AM features based on shape features and consistent with the characteristics of AM processing [18]. Their definition plays an important role in optimizing the build direction in AM, since the build direction affects the roughness of the obtained surfaces, the mechanical properties, and the support volume. However, in the current study the choice of build direction depends on the starting surface on the existing part (the build direction is the normal vector of the starting surface). Hence, the major attributes of AM features of particular interest here are the geometrical characteristics of the expected shape, the build direction and the starting surface, the estimated material volume to be added by AM processes, and the roughness quality. In this paper, all entities to be added to the part are considered as AM features. Thus, an AM feature is defined as follows: an AM feature is a geometrical form and associated technological attributes for which at least one AM process exists. The AM feature is then built by adding material from a starting surface on the existing part.
3.2 Approach proposition
Many works in the literature focus on automatic manufacturing feature extraction methods, in particular for machining features, as shown in [19, 20]. These methods are based on the information of designed parts and the knowledge of the machining process. The extracted features are then used for manufacturing process planning [17, 21]. However, these methods are only efficient in the machining field. In our work, an existing part is transformed into a desired part using a sequence of additive, subtractive and inspection operations. This process is totally different from machining, which generally removes material from a cylindrical or rectangular workpiece to achieve the geometry and quality of the final part. Consequently, the previous methods are not effective in this case. Hence, we propose an extended feature extraction approach based on the knowledge of AM and machining processes, as well as the specifications of the final part. The available technological information and the CAD models of the existing part and the final part are considered as input data of the approach.
3.3 Knowledge of manufacturing processes
In this section, the knowledge of AM and machining processes is exploited to identify and extract manufacturing features. In this study, we focus on two types of metal-based AM techniques: powder bed fusion (e.g., EBM and SLM) and directed energy deposition (e.g., CLAD and DMD) [22]. The machining process is performed on a CNC machine; today's CNC machines have sufficient performance to achieve the expected quality. The knowledge of AM processes is outlined as follows:
Capabilities and limitations of AM processes: In the EBM and SLM processes, parts are built in the vertical direction by depositing metallic powder layer by layer on a flat surface. Hence, a machining stage should be performed on the existing part to obtain a flat surface for the material deposition stages. In some cases, the existing part should also be clamped on the build table by a fixture system to achieve such a configuration. Moreover, these processes are limited by the build envelope and by a single material per build. In comparison, the CLAD and DMD processes offer a larger build envelope and flexible build directions thanks to a 5-axis CNC machine configuration. These techniques can also deposit multiple materials in a single build. However, their ability to build internal and overhanging structures is limited.
Part accuracy and surface roughness of AM-built parts: Generally, the quality and roughness of AM-built surfaces are not always adequate for the quality of the final part [15, 23]. Hence, machining stages are performed afterwards to ensure the expected quality (of course, only the surfaces generated by an AM process whose roughness is incompatible with the expected precision are further machined). The surface roughness of AM-built surfaces, as well as the geometric errors due to thermal distortion and residual stresses in AM processes, should be taken into account when generating AM features.
Collision constraints: Collision constraints are very important to take into account in feature identification and extraction. For the CLAD and DMD processes, collision between the nozzle and the part during the material deposition stages must be avoided. In the EBM and SLM processes, to avoid collision between the rake and the existing part, it is essential to start the build from a flat build surface.
In machining, the accessibility of cutting tools is one of the major constraints to be taken into account during the identification and extraction of manufacturing features. If the build of an AM feature causes inaccessibility of the cutting tool in a subsequent machining operation, it has to be built after that machining operation. The constraints of part clamping in machining stages should also be considered.
3.4 Development of feature extraction process
The proposed feature extraction process contains five major steps, as shown in figure 2, and is demonstrated using the case study presented in figure 3. For this purpose, all steps were performed manually using CAD software. The pocket (P), the hole (H) and the surfaces (fS1 to fS7) of the final part require high surface precision. The roughness of the surfaces (eS1, eS2 and eS3) of the existing part satisfies the quality of the final surfaces (fS1, fS2 and fS3). The steps of the process are outlined as follows:
Fig. 2. Major steps of feature extraction process.
Local coordinate system definition and positioning: The first step consists of defining a local coordinate system for each CAD model of the existing part and the final part. Afterwards, the two local coordinate systems, and hence the two parts, are positioned so that the common volume between the existing part and the final part is as big as possible (figure 4a). Moreover, for the functional surfaces of the final part (e.g., surfaces fS4, fS5 and fS7), it is necessary to leave a sufficient over-thickness for the finishing operations. This over-thickness should be integrated in the generation of the common volume.
Fig. 3. Test parts: existing part and final part.
Extraction of the common volume, the removed volumes and the added volumes: After step A01, the two parts are positioned respecting the constraint that the common volume is as big as possible. From there, three volumes are extracted using Boolean operations (figure 4b). The common volume of the two parts is obtained as (Existing part) AND (Final part). The added volumes are obtained by subtracting the common volume from the final part. Finally, the removed volumes are obtained by subtracting the common volume from the existing part. In the following, the common volume is referred to as the common part.
Fig. 4. Illustrating the step A01 (a), the step A02 (b), and the step A03 (c).
Modification of the common part geometry by taking into account the manufacturing process constraints: The common part geometry is generally not adequate for AM processes. Indeed, in the EBM and SLM processes the build surface must be flat to avoid collision between the rake and the part, and in the CLAD and DMD processes it is also very important to avoid collisions between the nozzle and the part. Hence, it is necessary to modify the common part geometry. For example, in figure 4c, taking the EBM or SLM process constraints into account, the volume located on the plane (S1) must be removed, and the hole (H), which does not exist on the existing part, will be machined after the AM stage. Moreover, for the surfaces of the common part requiring machining (e.g., the contour surface S2, and the surfaces S3 and S4 of figure 4c), a sufficient over-thickness should also be left for the finishing operations. The over-thickness is estimated based on the expected quality. The new geometry of the common part after modification, denoted CF, is then used to extract the AMFs and MFs. The CF is considered as an intermediate part in the processing.
Extraction of machining features from the existing part: From the CF and the existing part, the volumes to be removed from the existing part to obtain the CF are extracted using Boolean operations. These extracted volumes and the associated attributes of the geometrical form of the CF formulate machining features, denoted MFe. Figure 5a illustrates this step: here, two machining features are extracted from the existing part, MFe_1 and MFe_2. The machining of these features produces the top flat surface feature, on which the AM features will be built, and the 'irregular step' feature.
Fig. 5. Illustrating the step A04 (a), and the step A05 (b).
Extraction of the AMFs and the MFs after AM stages: Similarly to the previous step (A04), the volumes to be added to the common part to achieve the geometry of the final part are extracted from the final part and the CF (figure 5b). The AM features are defined from these extracted volumes, the specifications of the final part, and the associated technological attributes of the AM processes. An AM feature can be either a final feature of the final part (e.g., AMF_1 and AMF_3) or the rough state of a machining feature after AM processing (e.g., AMF_2). The relations between AM features can be classified into three categories: independent, dependent, and grouped. For example, AMF_1 is independent from AMF_2 and AMF_3; AMF_3 is dependent on AMF_2; and AMF_2 and AMF_3 can be considered as grouped.
Obviously, independent AM features are built in different build directions, whereas dependent AM features are generally built in the same build direction. Grouped AM features can be built either in the same direction (for the EBM and SLM processes) or in different build directions (for the CLAD and DMD processes).
To identify AM features, it is also essential to take into account the machining constraints, such as collision constraints. In certain cases, dependent AM features should be decomposed into different independent features and built in different AM stages, to avoid collisions between the cutting tools and the part in the next machining stage. For example, if AMF_2 and AMF_3 were built in the same AM stage, the drilling of the hole (H) or the finishing of the pocket (P) on AMF_2 might cause a collision between the cutting tools and AMF_3. Thus, AMF_3 must be built after the machining of the hole (H) and the pocket (P) on the AMF_2 feature.
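The precedence constraint in this example (AMF_3 only after machining H and P on AMF_2) can be expressed as a small dependency graph and resolved by topological sorting. This is a sketch; the operation names follow the case study, and the graph encoding is ours:

```python
from graphlib import TopologicalSorter

# op -> set of operations that must be completed first
precedence = {
    "build AMF_2": set(),
    "drill hole H": {"build AMF_2"},
    "finish pocket P": {"build AMF_2"},
    "build AMF_3": {"drill hole H", "finish pocket P"},  # avoids tool collision
}

# Any valid order builds AMF_2 first and AMF_3 last.
order = list(TopologicalSorter(precedence).static_order())
assert order.index("build AMF_3") > order.index("drill hole H")
print(order)
```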
Moreover, to ensure the quality of the final part, functional surfaces have to be machined after AM processing. Thus, a sufficient over-thickness for the finishing stages, left on these surfaces, is taken into account when generating the CAD model of the AM features (e.g., AMF_2). It is estimated as a function of the roughness of the surfaces generated by the AM process, the required quality of the final surfaces, and the surface quality achievable by machining. The over-thickness becomes the rough-state attribute of the machining features after the AM process.
The machining features after the AM process, denoted MFa, are determined from the functional features of the final part (for example, the functional surfaces fS4 to fS7, the hole (H), and the pocket (P) in figure 3). The rough-state attributes of the MFa features are defined by the over-thickness integrated in the AM features, or by a plain material state (particularly in the case of drilled holes). In figure 5b, the machining features MFa_{1, 2, 3, 4, 6} correspond to functional surfaces of the final part, and MFa_5 corresponds to the hole feature (H).
4 Conclusion and future work
This research focused on the feature extraction process in a remanufacturing context. The proposed approach allows an effective extraction of both MF and AMF features from the CAD models of the existing part and the final part, and from the knowledge of the constraints of the AM and machining processes. This has been illustrated using a case study.
Future work will consist in designing a manufacturing operation sequence compatible with the new remanufacturing strategy using the extracted features.
Acknowledgments The authors would like to thank Rhône-Alpes Region of France for its support in this project.
References
1. King A. M., Burgess S. C., Ijomah W., et al., Reducing Waste: Repair, Recondition, Remanufacture or Recycle?, Sustainable Development, 2006, 267, 257–267.
2. Bashkite V., Karaulova T., and Starodubtseva O., Framework for innovation-oriented product end-of-life strategies development, Procedia Engineering, 2014, 69, 526–535.
3. Gehin A., Zwolinski P., and Brissaud D., A tool to implement sustainable end-of-life strategies in the product development phase, Journal of Cleaner Production, 2008, 16, 566–576.
4. Aksoy H. K. and Gupta S. M., Buffer allocation plan for a remanufacturing cell, Computers and Industrial Engineering, 2005, 48(3), 657–677.
5. Goodall P., Rosamond E., and Harding J., A review of the state of the art in tools and techniques used to evaluate remanufacturing feasibility, Journal of Cleaner Production, Oct. 2014, 81, 1–15.
6. Östlin J., Sundin E., and Björkman M., Product life-cycle implications for remanufacturing strategies, Journal of Cleaner Production, Jul. 2009, 17(11), 999–1009.
7. Guo N. and Leu M., Additive manufacturing: technology, applications and research needs, Frontiers of Mechanical Engineering, 2013, 8(3), 215–243.
8. Huang R., Riddle M., Graziano D., et al., Energy and emissions saving potential of additive manufacturing: the case of lightweight aircraft components, Journal of Cleaner Production, May 2015.
9. Wilson J. M., Piya C., Shin Y. C., et al., Remanufacturing of turbine blades by laser direct deposition with its energy and environmental impact analysis, Journal of Cleaner Production, 2014, 80, 170–178.
10. Nan L., Liu W., and Zhang K., Laser remanufacturing based on the integration of reverse engineering and laser cladding, International Journal of Computer Applications in Technology, 2010, 40(4), 254–262.
11. Zhu Z., Dhokia V., and Newman S. T., A novel decision-making logic for hybrid manufacture of prismatic components based on existing parts, Journal of Intelligent Manufacturing, Sep. 2014, 1–18.
12. Navrotsky V., Graichen A., and Brodin H., Industrialisation of 3D printing (additive manufacturing) for gas turbine components repair and manufacturing, VGB PowerTech 12, 2015, 48–52.
13. Terrazas C. A., Gaytan S. M., Rodriguez E., et al., Multi-material metallic structure fabrication using electron beam melting, The International Journal of Advanced Manufacturing Technology, Mar. 2014, 71, 33–45.
14. Mandil G., Le V. T., Paris H., and Saurd M., Building new entities from existing titanium part by electron beam melting: microstructures and mechanical properties, The International Journal of Advanced Manufacturing Technology, 2015.
15. Le V. T., Paris H., and Mandil G., Using additive and subtractive manufacturing technologies in a new remanufacturing strategy to produce new parts from end-of-life parts, 22ème Congrès Français de Mécanique, 24–28 August 2015, Lyon, France.
16. Groupe GAMA, La gamme automatique en usinage, Editions Hermès, Paris, 1990.
17. Paris H. and Brissaud D., Modelling for process planning: The links between process planning entities, Robotics and Computer-Integrated Manufacturing, 2000, 16(4), 259–266.
18. Zhang Y., Bernard A., Gupta R. K., et al., Feature Based Building Orientation Optimization for Additive Manufacturing, Rapid Prototyping Journal, 2016, 22(2).
19. Harik R. F., Derigent W. J. E., and Ris G., Computer aided process planning in aircraft manufacturing, Computer-Aided Design and Applications, 2008, 5(6), 953–962.
20. Harik R., Capponi V., and Derigent W., Enhanced B-Rep Graph-based Feature Sequences Recognition using Manufacturing Constraints, in The Future of Product Development: Proceedings of the 17th CIRP Design Conference, F.-L. Krause, Ed., Berlin, Heidelberg: Springer, 2007, 617–628.
21. Liu Z. and Wang L., Sequencing of interacting prismatic machining features for process planning, Computers in Industry, 2007, 58(4), 295–303.
22. Vayre B., Vignat F., and Villeneuve F., Metallic additive manufacturing: state-of-the-art review and prospects, Mechanics & Industry, 2012, 13, 89–96.
23. Vayre B., Vignat F., and Villeneuve F., Designing for additive manufacturing, Procedia CIRP, 2012, 3(1), 632–637.
Comparative Study for the Metrological
Characterization of Additive Manufacturing
Artefacts
Charyar MEHDI-SOUZANIa*, Antonio PIRATELLI-FILHOb, Nabil ANWERa
a Université Paris 13, Sorbonne Paris Cité, LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris-Saclay, 94235 Cachan, France
b Universidade de Brasilia, UnB, Faculdade de Tecnologia, Depto. Engenharia Mecânica, 70910-900, Brasilia, DF, Brazil
* Corresponding author. Tel.: +33 1 47 40 22 12; E-mail address: souzani@lurpa.ens-cachan.fr
Abstract Additive Manufacturing (AM), also known as 3D printing, was introduced in the mid-1990s but has only come into broader use over the last ten years. The first uses of AM processes were rapid prototyping and 3D sample illustration, owing to the weak mechanical characteristics of the materials then available. However, even where this technology can now meet mechanical requirements, it will be widely adopted only if the geometrical and dimensional characteristics of the generated parts also reach the required level. In this context, it is necessary to investigate and identify the common dimensional and/or geometrical specifications of the parts generated by AM processes. Highlighting the singularities of AM systems should be based on the fabrication and measurement of standardized artefacts. Even though such test parts allow assessing some important characteristics of AM systems, characterizing the capacity to generate freeform surfaces and features remains a challenge. None of the test parts proposed in the literature includes this kind of feature, even though the generation of freeform surfaces is a significant benefit of AM systems. In this context, the aim of this paper is to provide a comparative metrological study, based on a dedicated artefact, of the capacity of an AM system to generate freeform parts.

Keywords: Additive manufacturing; measurement artefact; freeform characterization; dimensional metrology
1 Introduction
Additive Manufacturing (AM) is the process used to build a physical part layer
by layer directly from a 3D model [1]. The first uses of AM process were for rapid
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_20
prototyping and 3D sample illustration [2], owing to the weak mechanical characteristics of the materials then available. Recent developments, in particular the use of metal and ceramic powders, have considerably broadened the field of use of AM. It is now reasonable to consider using parts fabricated by this process in industries such as aerospace or automotive. This technology will be widely used only if the geometrical and dimensional characteristics of the generated parts also reach the required level [3]. In this context, we believe an investigation is necessary to identify the common dimensional and geometrical specifications of parts generated by AM processes. Knowledge of the capacity of an AM process to generate parts meeting dimensional and/or geometrical requirements would make it possible to apply a correction factor at the design step, and thereby improve the conformity of printed parts to their specifications. Such a study can be based on the design of an artefact. The artefact should be representative of the complex forms and geometries that an AM system can build, but it must also lend itself to metrological characterization.
In the literature, only a few studies focus on these topics. Moylan et al. from NIST start from the observation that, even though various test parts have been introduced in the past, there is currently no standard test part for AM systems [4]. They summarized the existing parts by studying the important features and characteristics found on them, and proposed a new artefact intended for standardization. The part is composed of various canonical geometries: staircases, holes, pins, fine pins and holes, negative and positive cubes, vertical surfaces, ramps and cylinders. Yang et al. assessed the design efficiency of the test artefact introduced by the NIST team and, based on their analysis, provided a redesigned artefact [5]. They analysed seven characteristics in more detail: straightness, parallelism, perpendicularity, roundness, concentricity, true position for the z plane and true position for pins. They concluded that some geometrical characteristics are redundant and that some dimensions have relevant effects on the built parts. Based on these conclusions, they introduced a new part using the same kinds of geometrical forms but with different orientations and feature dimensions, in order to analyse the capacity of an AM system to generate the same features in different sizes and directions. Islam et al. [6] provide an experimental investigation to quantify the dimensional error of a powder-binder 3D printer. They use a test part defined by the superposition of concentric cylinders, with radii decreasing from bottom to top, and a central cylindrical hole.
In this context, we provide an experimental comparative study of the capacity of an AM system to generate freeform parts. A complex-geometry artefact was designed and produced and, in order to make the study independent of the measurement means, three different measuring instruments were used to characterize the dimensions and geometry of the test part. The conclusions of this study and future work are also highlighted.
2 Artefact design and experimental context
In the literature, many artefacts have been used to study AM systems, but they are designed with regular surfaces only [4, 5, 6]. In this context we introduce a new artefact designed with both freeform and regular surfaces. The NPL (National Physical Laboratory, UK) provides a freeform artefact called the "FreeForm Reference Standard". However, it was designed to aid the assessment of contactless coordinate measurement systems such as laser scanners [7, 8], not to assess the dimensional and geometrical characteristics of parts manufactured by AM systems. The NPL artefact is a single part built by blending several geometrical forms. Our analysis of this part led us to conclude that it is not well suited to characterizing an AM system, although some of its forms can be reused. Based on this conclusion, a new artefact was designed with the following regular geometries: plane, cylinder, sphere, extruded ellipse, cone and torus; and with an axisymmetric aspherical shape (lens) and a Bézier surface as freeform geometries. A Computer-Aided Design (CAD) model was generated using CATIA V5 software, with base dimensions of 240 x 240 mm. Figure 1 presents the designed artefact with the respective geometries.
Fig. 1. Free-form artefact designed to evaluate the AM system.
The part was manufactured with a ZPrinter 450 from Z Corporation, a powder-binder process machine [9], with part tolerances of ±1% or ±130 μm according to the manufacturer [10]. The CAD model was loaded into this machine and the artefact was produced in zp150 (gypsum) material.
The artefact was measured with three different instruments: a cantilever-type Coordinate Measuring Machine (CMM), an Articulated Arm CMM (AACMM) and a laser scanner. The cantilever CMM is a Mitutoyo machine with a work volume of 300 x 400 x 500 mm and a standard combined uncertainty of 0.003 mm. The AACMM is a Romer arm with a spherical work volume of 2.5 m in diameter and a standard combined uncertainty of 0.03 mm. The laser scanner is a NextEngine system with an accuracy of 0.26 mm. Figure 2 presents the measuring instruments. As part of the study process, the measurement system itself can introduce variations and influence the study's conclusions. This is why three different systems were used: to take into account this potential source of variation, which is not related to the AM system.
Fig. 2. Measuring instruments: a) laser scanner; b) Articulated Arm CMM; c) cantilever-type CMM.
Each characteristic has been measured five times in order to compute the average, the standard deviation and other statistical characteristics. Dimensional characteristics have been measured for the regular surfaces: diameters and heights (distances between two nominal surfaces), as well as flatness, parallelism and perpendicularity between situation features. For the freeform surfaces, the deviation of the geometries with respect to the theoretical CAD model has been measured. A graphical analysis with the means and error bars, determined with the Student's t distribution at 95% probability, completes the study.
3 Results and discussion
3.1 Dimension characteristics of regular surfaces
For the measurement of the regular surfaces, the CMM and the AACMM with two different contact probes have been used: a point-contact stylus probe with a 0 mm ball diameter (AACMM0) and a stylus with a 6 mm ball diameter (AACMM6). Table 1 presents the data analysis resulting from the measurements: deviation (d), standard deviation (s) and standard deviation of the mean (sm95).
sm95 = (t · s) / √n (1)
where t = 2.776 is the Student's t parameter for 95% probability and n = 5 is the sample size.
sm95 expresses the standard deviation of the mean, associating a 95% probability with the result.
In Table 1, "D" denotes a diameter, "H" the height of the given feature and "L" the distance between two given plane surfaces.
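As a minimal sketch, Eq. (1) can be computed as follows; the five measurement values below are illustrative, not taken from the study's data:

```python
import math
import statistics

def sm95(measurements, t=2.776):
    """Half-width of the 95% confidence interval of the mean:
    sm95 = t * s / sqrt(n), with t the Student's t parameter
    (t = 2.776 for n = 5, i.e. 4 degrees of freedom)."""
    n = len(measurements)
    s = statistics.stdev(measurements)  # sample standard deviation
    return t * s / math.sqrt(n)

# Five repeated measurements of one feature (illustrative values, in mm)
values = [19.82, 19.80, 19.84, 19.81, 19.83]
print(round(sm95(values), 4))
```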
Table 1. Data analysis of the regular surface measurements, in mm.

                    AACMM Ø=6 mm             AACMM Ø=0                CMM
                    d       s      sm95      d       s      sm95     d       s      sm95
1  D cylinder      -0.186   0.048  0.060    -0.490   0.029  0.036   -0.198   0.030  0.038
2  H cylinder       0.112   0.015  0.018     0.124   0.053  0.065    0.114   0.006  0.007
3  D sphere        -0.382   0.398  0.494    -0.656   0.081  0.101   -0.044   0.097  0.121
4  L plane 5-9     -0.284   0.015  0.019    -0.422   0.019  0.024    0.188   0.356  0.441
5  L plane 3-7     -0.028   0.013  0.016    -0.198   0.020  0.025    0.254   0.465  0.578
6  L plane 6-10     0.058   0.011  0.014    -0.246   0.019  0.023    0.828   0.285  0.354
7  L plane 4-8      0.352   0.086  0.107    -0.118   0.151  0.188    0.407   0.104  0.129
8  L plane 1-2      0.040   0.012  0.015     0.010   0.010  0.012    0.048   0.012  0.015
9  H Bézier         0.098   0.048  0.059     0.244   0.048  0.060    0.203   0.019  0.024
10 H ellipse        0.146   0.050  0.062     0.136   0.059  0.073    0.100   0.096  0.119
Figure 3 presents a graphical analysis of the deviation values summarized in Table 1. For instance, the fourth column on the x-axis of figure 3 represents the fourth line of Table 1, namely the deviation d computed from the data of each measurement system. This graphical analysis shows that for half of the features (1, 2, 8, 9 and 10) the deviation values are similar regardless of the measurement system. For the other half the values depend on the measurement system used, but a consistent pattern can be noticed across all systems: the CMM gives a positive deviation, the AACMM0 a negative deviation, and the AACMM6 an approximately constant gap. The values summarized in Table 1 do not allow concluding on a general trend of oversizing or undersizing. A complementary study is needed to explain this variation.
Fig. 3. Graphical analysis of the deviations presented in Table 1.
3.2 Free-form surfaces and features
For the measurement of the freeform surfaces, the CMM, the scanner and the AACMM0 have been used (the AACMM6 does not allow freeform measurement). All the features in this paragraph have been measured as clouds of points, without any geometry association process or criterion. In a second step, the sets of points have been processed in Rhinoceros software [11], as illustrated in figure 4.
Fig. 4. Analysis of the deviations between the data points and the CAD model in Rhinoceros.
Table 2 presents the deviations of the points from the CAD model in the same terms as Table 1: d, s and sm95.
Table 2. Data analysis of the freeform geometry measurements, in mm.

            AACMM Ø=0                CMM                     Scanner
            d      s      sm95      d      s      sm95      d      s      sm95
1 Bézier    0.331  0.224  0.278    0.541  0.427  0.530    0.714  0.619  0.768
2 Torus     0.617  0.434  0.539    0.731  0.500  0.621    0.155  0.102  0.127
3 Lens      0.258  0.135  0.168    0.416  0.363  0.451    0.158  0.115  0.143
4 Ellipse   1.107  0.492  0.611    0.563  0.380  0.472    0.885  0.638  0.792
5 Cone      0.649  0.459  0.570    0.487  0.360  0.447    0.944  0.519  0.644
Figure 5 presents a graphical analysis of the deviation for each line of Table 2. As shown in figure 5, the values are more scattered for the freeform features, but the analysis shows that all the deviations are positive. In other terms, a volumetric expansion has been identified for these freeform features. This expansion is coherent with the literature, especially when the material used is taken into account [12]. This conclusion may be related to some previous work [6], although that work concerned dimensional errors on regular forms. Even if this seems to contradict the previous section, no conclusion can be drawn, since the computation methods used in the two sections are different.
Fig. 5. Graphical analysis of the deviations summarized in Table 2.
Applying the same computation method while studying the influence of size variation on the deviation of a given feature could bring an answer. However, it seems reasonable to conclude that in this case a correction parameter could be applied to the CAD model so as to generate a manufactured part in concordance with the nominal dimensional requirements.
3.3 Geometric deviations
Figure 6 shows the parallelism deviation, in mm, between planes 1 and 2, planes 4 and 8, and planes 6 and 10 (please refer to figure 1 for the surface numbering).
Fig. 6. Parallelism deviation.
Figure 7 shows the perpendicularity deviation, in mm, between planes 3 and 5, planes 3 and 9, planes 5 and 7, and planes 5 and 9 (please refer to figure 1 for the surface numbering).
Fig. 7. Perpendicularity deviation.
Figure 8 shows the flatness of the plane surfaces of the artefact. Note that "Bézier", "Ellipse" and "Cylinder" refer to the planes at the top of the corresponding features: the top plane of the Bézier feature, the top plane of the ellipse feature and the top plane of the cylinder.
Fig. 8. Flatness of the features.
According to figure 6, the parallelism deviations in all major directions are similar, even though the maximum deviation (0.21 mm, between planes 1 and 2) is twice the minimum deviation (0.11 mm, between planes 5 and 9). At this stage no explanation can be given.
For perpendicularity, we can also observe (figure 7) a similar deviation in all major directions, except between planes 5 and 9, where the deviation is almost three times higher than in the other cases.
For flatness, according to figure 8, we can conclude that in most cases, when the planes have the same orientation, the flatness is similar: planes 1 and 2; planes 3 and 7; planes 4 and 8. When the planes have different orientations, the flatness also differs, for instance between planes 6 and 9. We can assume that the orientation of the generated surface in the AM build space has an influence on the flatness of the generated parts.
4 Conclusions
There are only a few works on the dimensional accuracy assessment of AM systems for manufacturing freeform shapes, even though the generation of such surfaces is one of the major advantages of AM processes. To address this gap, we developed a new geometric artefact designed to characterize the dimensional and geometrical capabilities of an AM system in generating freeform parts. The artefact was built using a powder-binder AM system and a comparative measurement study was performed. Based on the measurements, we can conclude that the volumetric expansion of the freeform features has a considerable impact on the geometrical characteristics. As a perspective of this work, it will be interesting to study the possibility of introducing a correction factor here. A second conclusion can be drawn regarding the variation of the orientation and its influence on flatness, while parallelism and perpendicularity seem independent of orientation. Future research efforts will concentrate on establishing more knowledge about correction parameters when considering features of size and the relative positioning of surfaces with respect to the build direction. Another issue is the measurement of internal features using a CT scanner.
5 References
1. M. N. Islam, B. Boswell, A. Pramanik, "An Investigation of Dimensional Accuracy of Parts Produced by Three-Dimensional Printing", Proceedings of the World Congress on Engineering 2013, Vol. I, WCE 2013, July 3-5, 2013, London, U.K.
2. P. F. Jacobs, "Rapid Prototyping and Manufacturing: Fundamentals of Stereolithography", Society of Manufacturing Engineers, Dearborn, MI, 1992.
3. NIST, "Measurement Science Roadmap for Metal-Based Additive Manufacturing", Additive Manufacturing Final Report, 2013.
4. S. Moylan, J. Slotwinski, A. Cooke, K. Jurrens, M. A. Donmez, "Proposal for a Standardized Test Artefact for Additive Manufacturing Machines and Processes", Proceedings of the Solid Freeform Fabrication Symposium, August 6-8, 2012, Austin, Texas, USA.
5. Li Yang, Md Ashabul Anam, "An investigation of standard test part design for additive manufacturing", Proceedings of the Solid Freeform Fabrication Symposium, August 2014, Austin, Texas, USA.
6. M. N. Islam, S. Sacks, "An experimental investigation into the dimensional error of powder-binder three-dimensional printing", The International Journal of Advanced Manufacturing Technology, February 2016, Volume 82, Issue 5, pp. 1371-1380.
7. M. B. McCarthy, S. B. Brown, A. Evenden, A. D. Robinson, "NPL freeform artefact for verification of non-contact measuring systems", Proc. SPIE 7864, Three-Dimensional Imaging, Interaction, and Measurement, 78640K (27 January 2011); doi: 10.1117/12.876705.
8. http://www.npl.co.uk/news/new-freeform-standards-to-support-scanning-cmms
9. Gibson I, Rosen D, Stucker B (2015) Additive Manufacturing Technologies, chapter 8: Binder Jetting, 2nd ed., ISBN 978-1-4939-2112-6, New York: Springer Science and Business Media.
10. 3D Systems, ZPrinter 450, technical specifications: http://www.zcorp.com/fr/Products/3DPrinters/ZPrinter-450/spage.aspx
11. https://www.rhino3d.com/fr/
12. Michalakis KX, Stratos A, Hirayama H, Pissiotis AL, Touloumi F (2009) Delayed setting and hygroscopic linear expansion of three gypsum products used for cast articulation. J Prosthet Dent 102(5): 313-318.
Flatness, circularity and cylindricity errors in
3D printed models associated to size and
position on the working plane
Massimo MARTORELLI1*, Salvatore GERBINO2, Antonio LANZOTTI1, Stanislao PATALANO1 and Ferdinando VITOLO1
1 Fraunhofer JL IDEAS - Dept. of Industrial Engineering, University of Naples Federico II, P.le Tecchio, 80 - 80125 Naples, Italy
2 DiBT Dept. - Engineering Division, Univ. of Molise, Via De Sanctis snc, 86100 Campobasso (CB), Italy
* Corresponding author. Tel.: +390817682470; fax: +390817682470. E-mail address: massimo.martorelli@unina.it
Abstract The purpose of this paper is to assess the main effects on geometric errors, in terms of flatness, circularity and cylindricity, of the size of the printed benchmarks and of their position on the working plane of the 3D printer. Three benchmark models of different sizes, each composed of a parallelepiped and a cylinder, placed in five different positions on the working plane, are considered. The sizes of the models are chosen from the Renard series R40. The benchmark models are fabricated in ABS (Acrylonitrile Butadiene Styrene) using a Zortrax M200 3D printer. A sample of five parts for each geometric category, as defined from the R40 geometric series of numbers, is printed close to each corner of the plate and in the plate centre position. A Mitutoyo Absolute Digimatic Height Gauge 0-450 mm, with an accuracy of ±0.03 mm, is used to perform all measurements: flatness on the box faces, and circularity/cylindricity on the cylinders. Results show that the best performances, in terms of form accuracy, are reached in the centre of the printable area and that they decrease with the sample size. Quality being a critical factor for a successful industrial application of AM processes, the results discussed in this paper can provide the AM community with additional scientific data useful for understanding how to improve the quality of parts obtainable through new generations of 3D printers.
Keywords: Additive Manufacturing, Fused Deposition Modelling, Geometric Errors.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_21
1 Introduction
According to ISO/ASTM 52915 [1], Additive Manufacturing (AM) is defined as
the process of joining materials to make objects from 3D model data, usually layer
upon layer, as opposed to subtractive manufacturing methodologies.
Until a few years ago, manufacturing physical parts required very expensive AM
processes and investments in tooling and sophisticated specific software. This
posed a barrier to the widespread deployment of such techniques.
Today a new generation of AM techniques has rapidly become available to the
public, due to the expiration of some AM patents and to open-source movements,
which allowed significant cost reductions. Nowadays, there are many low-cost 3D
printers available on the market (< €2000).
AM processes offer several technical and economic benefits compared to traditional manufacturing processes. They have the capability to produce complex and
intricate shapes that are not feasible with traditional manufacturing processes.
The geometric freedoms associated with AM provide new possibilities for the part
design. Combined with topology optimization techniques and other methods able to generate complex shapes, AM processes potentially allow saving time, material and cost. In economic terms, AM permits decoupling manufacturing costs from component complexity (Fig. 1).
Fig. 1. Comparison between AM (dashed line) and traditional (continuous line) manufacturing
techniques
In order to profit from the benefits offered by AM, it is necessary to consider its manufacturing limits and restrictions. This applies in particular to geometrical accuracy, quality being a critical factor for a successful industrial application of AM techniques [2]. Therefore, the implications of AM processes for current geometric dimensioning and tolerancing (GD&T) practices need to be investigated, in particular for the new generations of low-cost 3D printers, for which there is a significant lack of scientific data on their performance.
In this paper, considering a low-cost 3D printer, the main effects on the geometric errors of flatness, circularity (or roundness) and cylindricity, as functions of the size of the printed benchmarks and of their position on the working plane, are described. Flatness and cylindricity errors, in fact, induce substantial effects on system functionality in relevant applications [3, 4]. The study was carried out at the Fraunhofer Joint Lab IDEAS-CREAMI (Interactive DEsign and Simulation - Center of Reverse Engineering and Additive Manufacturing Innovation) of the University of Naples Federico II.
2 GD&T and Additive Manufacturing
GD&T standards, although rigorous, have been developed based on the capabilities of traditional manufacturing processes, and they contain no specific references to AM processes.
Although the current increasing interest of industry in AM processes has led, through ASTM International and ISO, to the development of new standards [1, 5-8], standard methods for the assessment of the geometric accuracy of AM systems have not actually been defined yet.
Dimensional and micro- and macro-geometric errors in the manufacturing of an AM part depend on several factors:
- Machine resolution – dependent upon machine design and control.
Every AM system has inherent capabilities due to its design and control (e.g. the resolution of the stepper motors used to move the print head and platform in Fused Deposition Modeling systems, or the diameter of the laser spot in laser-based systems).
- Material resolution – dependent upon the material format used.
The material is delivered in several different formats in AM: sheet, powder, extruded bead, liquid vat. The extruded bead width determines the minimum resolution in the X and Y directions, the sheet thickness determines the minimum resolution in the Z direction, and the powder particle size affects dimensional accuracy in the X, Y and Z directions.
- Distortion – usually caused by thermal gradients.
Distortion is usually a result of internal stresses caused by different rates of cooling in 3D printed parts (thermal gradients). This can happen during the build process or when the part cools after removal from the machine, and it can occur with both metals and polymers. The impact on accuracy can be very severe, with several millimetres of distortion sometimes observed.
- Process parameters.
The process parameters play an important role in defining the final quality and accuracy of a part [8, 9]. Layer thickness, build orientation, hatching pattern and support structures are the main AM parameters that directly cause dimensional and micro- and macro-geometric errors in the manufacturing of an AM part [10-13]. The layered nature of AM introduces a staircase effect in a part [14-16]. An increased layer thickness results in a more pronounced staircase error, as shown in Fig. 2.
Fig. 2. Effect of layer thickness on staircase error in a spherical part: a) layer thickness of 0.1 mm; b) layer thickness of 0.05 mm.
The build orientation of the part being manufactured has to be decided in advance, according to the quality to be achieved (specifically on the functional surfaces) and also taking into account the placement of support structures [17]. A support structure is additional material attached to a part during the build process in order to support features, such as overhangs and cavities, that have insufficient strength in a partially manufactured state. After the manufacturing of the part is completed, support structures can be manually removed or dissolved away. It is essential to minimize the use of these supports, as a reduced contact area between the part and these structures results in better part quality and also reduces the post-processing effort [18].
The effect of build orientation on flatness error was investigated in [19]; the authors concluded that the staircase error due to layer thickness and build orientation is the cause of the flatness error on the part, and they established a mathematical relation between them.
Fig. 3 shows the effect of build orientation (the angle between the surface and the horizontal direction [20]) on the staircase error of a flat face manufactured using an AM process.
Fig. 3. Effect of build orientation on staircase error
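As an illustration of this effect, a common first-order model approximates the peak-to-valley staircase (cusp) height as h = t·cos θ, where t is the layer thickness and θ is the angle between the surface normal and the build direction. This simplified model is an assumption used here for illustration, not the exact relation established in [19]:

```python
import math

def cusp_height(layer_thickness, normal_angle_deg):
    """Approximate peak-to-valley staircase error (cusp height) for a
    face whose normal makes normal_angle_deg degrees with the build (Z)
    axis, using the first-order model h = t * cos(theta)."""
    return layer_thickness * math.cos(math.radians(normal_angle_deg))

# Compare the two layer thicknesses of Fig. 2 at several orientations
for t in (0.10, 0.05):          # layer thickness, mm
    for angle in (0, 30, 60, 90):
        print(f"t = {t} mm, theta = {angle} deg -> "
              f"cusp ~ {cusp_height(t, angle):.3f} mm")
```

Halving the layer thickness halves the cusp height at every orientation, which matches the qualitative trend shown in Fig. 2.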
The effect of build orientation on cylindricity error has been investigated in [21], where an optimization model was developed to obtain the part orientation that minimizes support structures and form errors.
3 Materials and Methods
For this study, three benchmark models of three different sizes (small, medium and large), each made of one parallelepiped and one cylinder (in vertical position), placed in five different positions on the working plane, are considered. The nominal diameters of the cylinders are 20, 30 and 40 mm; the same values apply to the nominal sizes of the cubes' sides. All workpieces have the same height, equal to 20 mm. We chose a simple geometry in order to make the subsequent measurements easier.
The Zortrax M200 3D printer is used to fabricate the benchmark models in ABS (Acrylonitrile Butadiene Styrene) with a layer thickness of 0.14 mm.
A sample of five parts for each of the three geometric categories, as defined from the R40 geometric series of numbers, is printed at each corner of the plate and at the centre of the plate. Each model is identified with a number from 1 to 5, which matches it to its printing position, as shown in Fig. 4. The X and Y printing directions are also reported on each benchmark.
Fig. 4. Sample of five parts printed and identified with a number from 1 to 5
3.1 Errors measurement
Flatness, circularity and cylindricity are measured using a Mitutoyo Absolute Digimatic Height Gauge 0-450 mm with an accuracy of ±0.03 mm (Fig. 5).
3.1.1 Flatness error
Flatness error is measured on the top surface and on two lateral surfaces of the workpieces, in the XZ and YZ directions, as depicted in Fig. 6.
Fig. 5. Measurement equipment
Fig. 6. Layout of the workpieces for flatness measurement. The bold edges highlight the measured surfaces along XZ and YZ, together with the measurement grid, referred to the large size (40 mm) workpiece.
Firstly, the height gauge is zeroed by touching the pointer to the support table on which the workpiece rests; it is then raised and placed on the opposite side, yielding the digital measurement. For each face the measurement is repeated at several positions. In order to obtain a representative set of points of the workpieces, a rectangular grid is drawn on the surfaces (according to ISO 12781-2). A 5×5 mm grid is used, so, for example, a data set of 8×8 measurements is collected for the top face of the "large" (40 mm) workpiece (Fig. 5). The same procedure is applied to the other faces and to the "medium" (30 mm) and "small" (20 mm) workpieces. According to ISO 12781-1, the least squares reference plane (LSPL) method is adopted to generate the flatness tolerance range. Starting from
the LSPL plane, the maximum positive local flatness deviation (FLTp) and the maximum negative local flatness deviation (FLTv) are measured to calculate the peak-to-valley flatness deviation (FLTt).
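The LSPL evaluation described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it approximates the local deviations by vertical residuals from the fitted plane (adequate for near-horizontal faces), and the grid readings are synthetic.

```python
# Illustrative sketch of the LSPL flatness evaluation: fit the least-squares
# reference plane z = a + b*x + c*y to the grid of height-gauge readings,
# then take the peak-to-valley residual as FLTt.

def solve3(m, v):
    """Solve a 3x3 linear system m*x = v by Gauss-Jordan elimination."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))  # partial pivot
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][3] / a[i][i] for i in range(3)]

def flatness(points):
    """points: (x, y, z) readings; returns (FLTp, FLTv, FLTt)."""
    n = float(len(points))
    sx = sum(x for x, _, _ in points); sy = sum(y for _, y, _ in points)
    sz = sum(z for _, _, z in points)
    sxx = sum(x * x for x, _, _ in points); syy = sum(y * y for _, y, _ in points)
    sxy = sum(x * y for x, y, _ in points)
    sxz = sum(x * z for x, _, z in points); syz = sum(y * z for _, y, z in points)
    # Normal equations of the least-squares plane
    a, b, c = solve3([[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]],
                     [sz, sxz, syz])
    dev = [z - (a + b * x + c * y) for x, y, z in points]
    return max(dev), min(dev), max(dev) - min(dev)   # FLTp, FLTv, FLTt

# 8x8 readings on a 5 mm pitch for the 40 mm top face (synthetic values)
grid = [(5.0 * i, 5.0 * j, 0.002 * i * j) for i in range(8) for j in range(8)]
fltp, fltv, fltt = flatness(grid)
```

Note that a tilted but perfectly flat face yields FLTt close to zero, since the reference plane absorbs the tilt: only the residual form error contributes to the peak-to-valley value.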
Fig. 7. XY Flatness – tolerance range
3.1.2 Circularity error
Circularity (or roundness) measurements are carried out using the same height gauge used for flatness, together with a magnetic V-block in which the cylinder is clamped. The magnetic base ensures that the block does not move during the measurements. The height gauge is then zeroed when the pointer touches the workpiece surface. After that, three 90° clockwise rotations are applied to the cylinder, measuring the variation each time.
The circularity error value is calculated using the least squares circle (LSCi) method, which evaluates the best-fit circle by minimizing the squared error. The LSCi is the reference against which circularity is evaluated, calculated as the difference between the maximum and the minimum distance between the LSCi and the real profile. The maximum positive local circularity deviation (RONp) and the maximum negative local circularity deviation (RONv) are measured to calculate the peak-to-valley circularity deviation (RONt). Then, mean and standard deviation are computed over eight different sections per cylinder. For a perfectly round part the pointer of the height gauge will not move. This V-block (3-point) method is the simplest way to measure circularity. For a more accurate measurement, also able to capture the spacing and phase of profile irregularities, a spindle, which provides a circular datum, should be adopted.
Fig. 8. XZ Flatness – tolerance range
Fig. 9. YZ Flatness – tolerance range
3.1.3 Cylindricity error
Cylindricity error is measured by extending the circularity measurement to the whole surface of the cylinder. Once the pointer of the height gauge is set to 0 as in the previous measurement, it is moved along the cylinder axis, measuring variations of the radius at eight different points, as in the circularity measurements, for multiple sections.
Following the method adopted for calculating the circularity error, the least squares cylinder (LSCy) is evaluated by best-fitting a cylinder to the measured data, after providing an initial guess for the axis direction, the axis center and the cylinder radius. Then, deviations of points from that cylinder are calculated and the maximum positive and maximum negative deviations are recorded; they correspond to the peak deviation and valley deviation, respectively. The peak-to-valley cylindricity deviation is the measure of the cylindricity error.
The same considerations about the limits of the V-block measurement method made for circularity apply to cylindricity error evaluation.
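A full LSCy fit requires non-linear optimization of the axis. The sketch below makes the simplifying assumption of a known, fixed axis (a reasonable idealization for the V-block setup described above), for which the least-squares radius reduces to the mean reading and the cylindricity error is the overall peak-to-valley deviation. The readings are synthetic.

```python
# Simplified cylindricity sketch under a fixed-axis assumption: pool the
# radial readings of all sections, fit the least-squares radius (the mean),
# and report the peak-to-valley deviation CYLt.
from statistics import mean

def cylindricity_fixed_axis(readings):
    """readings[i][j]: radial distance (mm) of point j on section i, measured
    from an assumed fixed cylinder axis.  Returns (R, CYLp, CYLv, CYLt)."""
    d = [r for section in readings for r in section]   # pool all sections
    radius = mean(d)            # least-squares radius when the axis is fixed
    cyl_p = max(d) - radius     # maximum positive deviation (peak)
    cyl_v = min(d) - radius     # maximum negative deviation (valley)
    return radius, cyl_p, cyl_v, cyl_p - cyl_v         # peak-to-valley CYLt

# Eight points per section over three sections of a slightly tapered cylinder
readings = [
    [10.00, 10.01, 10.00, 9.99, 10.00, 10.01, 10.00, 9.99],
    [10.02, 10.03, 10.02, 10.01, 10.02, 10.03, 10.02, 10.01],
    [10.04, 10.05, 10.04, 10.03, 10.04, 10.05, 10.04, 10.03],
]
radius, cyl_p, cyl_v, cyl_t = cylindricity_fixed_axis(readings)
```

Note how the taper inflates CYLt (here 0.06 mm) beyond the roundness error of any single section: cylindricity captures form error along the axis as well as around it. The real LSCy evaluation would additionally optimize the axis direction and position, which can only reduce this value.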
Fig. 10. Circularity – tolerance range
Fig. 11. Cylindricity – tolerance range
4 Results and discussion
Figures 7, 8 and 9 show the FLTt flatness errors expressed in terms of the LSPL (least squares reference plane) mean value (black dot) and the FLTp and FLTv peak and valley values (red dots), respectively, for the five positions on the working plane of the printer and the different sizes (small, medium, large) of parallelepipeds. The measurements refer to planes XY, XZ and YZ, respectively. Figures 10 and 11 show the results for the circularity and cylindricity errors, respectively.
Flatness
The results show no significant differences with respect to the position on the working plane, as the flatness error is very similar, except for a locally larger variability measured in particular in the YZ plane at positions 2 and 5. Generally speaking, the XY top face presents very similar variability for small and medium workpieces at positions 1, 3 and 4, whereas workpieces of medium and large size present a larger flatness error at positions 2 and 5. The latter consideration applies to all measured faces. Workpieces of small and medium size are the ones with the lowest flatness variability.
Circularity and cylindricity
The tolerance ranges are comparable for each sample size and for each position. The analysis does not show a clear pattern for the standard deviation, even if the average error seems to increase with the sample size. We can generally claim that the best printer performance, in terms of form accuracy, is reached in the center of the printable area (position 3).
5 Conclusions
Today low-cost 3D printers are considered systems with great potential for the future of manufacturing. However, there is currently a significant lack of scientific data for these systems.
In this paper a preliminary study of the main effects on the geometric errors, in terms of flatness, circularity and cylindricity, of the size of the printed benchmarks and of their position on the working plane of the 3D printer was carried out.
Taking into account the limits of the present investigation, the results show that workpiece size and position on the working plane make no difference for the flatness error; in terms of circularity and cylindricity errors, instead, the best performance is reached in the central area of the plate, and performance decreases with the sample size. Some locally larger variabilities can be ascribed to the manufacturing process and the measurement procedure.
The results discussed in this paper provide useful additional scientific data for understanding how to improve the quality of AM parts obtained with new generations of 3D printers. Further tests and measurements, carried out on multiple samples and through several benchmark prototypes, could allow a better evaluation of the statistical variations from both ideal forms and positions, in order to provide a series of charts to be used when designing for rapid manufacturing systems.
Acknowledgments The authors gratefully acknowledge the "Costruzioni Meccaniche s.n.c." factory in Sant'Anastasia (NA).
References
1. ISO/ASTM 52921, 2013, Standard Terminology for Additive Manufacturing-Coordinate Systems and Test Methodologies.
2. ISO 17296-1, 2014, Additive Manufacturing—General—Part 1: Terminology.
3. Calì M., et al. Meshing angles evaluation of silent chain drive by numerical analysis and experimental test, Meccanica, 51(3), 2016, pp. 475-489.
4. Sequenzia G., Oliveri S.M., Calì M., Experimental methodology for the tappet characterization of timing system in ICE, Meccanica 48(3), 2013, pp. 753-764.
5. ISO 17296-4, 2014, Additive Manufacturing—General Principles—Part 4: Overview of Data
Processing Technologies, ASTM Fact Sheet.
6. ISO 17296-3, 2014, Additive Manufacturing—General Principles—Part 3: Main Characteristics and Corresponding Test Methods.
7. ISO 17296-2, 2015, Additive Manufacturing—General Principles—Part 2: Overview of Process Categories and Feedstock.
8. Lanzotti A., Martorelli M., Staiano G., Understanding Process Parameter Effects of RepRap
Open-Source Three-Dimensional Printers through a Design of Experiments Approach, Journal of Manufacturing Science and Engineering, 2015, 137(1), pp. 1-7, ISSN: 1087-1357,
Transactions of the ASME.
9. Lanzotti A., Del Giudice D.M., Lepore A., Staiano G., Martorelli M., On the geometric accuracy of RepRap open-source three-dimensional printer, Journal of Mechanical Design, Transactions of the ASME, 2015, 137(10).
10. Ratnadeep P., Anand S., Optimal part orientation in Rapid Manufacturing process for achieving geometric tolerances, Journal of Manufacturing Systems, 2011, 30(4), pp. 214-222.
11. Paul R., Anand S., Optimal part orientation in Rapid Manufacturing process for achieving
geometric tolerances, Journal of Manufacturing Systems, 2011, 30(4), pp. 214-222.
12. Taufik M., Jain P. K., Role of build orientation in layered manufacturing: a review, Int. J.
Manufacturing Technology and Management, Volume 27, 2013.
13. Lieneke T., Adam G.A.O., Leuders S., Knoop F., Josupeit S., Delfs P., Funke N., Zimmer D.,
Systematical Determination of Tolerances for Additive Manufacturing by Measuring Linear
Dimensions, 26th Annual International Solid Freeform Fabrication Symposium, Austin, August 10-12, 2015.
14. Masood S. H., Rattanawong W., A generic part orientation system based on volumetric error
in rapid prototyping, The International Journal of Advanced Manufacturing Technology
2002, 19(3), pp. 209-216.
15. Pandey, Pulak Mohan, N. Venkata Reddy, and Sanjay G. Dhande, Slicing procedures in layered manufacturing: a review, Rapid Prototyping Journal, 2003, 9(5), pp. 274-288.
16. Paul, Ratnadeep and Sam Anand, Optimal part orientation in Rapid Manufacturing process for achieving geometric tolerances, Journal of Manufacturing Systems, 2011, 30(4), pp. 214-222.
17. Kulkarni, Prashant, Anne Marsan, and Debasish Dutta., A review of process planning techniques in layered manufacturing, Rapid Prototyping Journal, 2000, 6(1), pp. 18-35.
18. Das P., Chandran R., Samant R., Anand S., Optimum Part Build Orientation in Additive
Manufacturing for Minimizing Part Errors and Support Structures, 43rd Proceedings of the
North American Manufacturing Research Institution of SME, Procedia Manufacturing, 2015.
19. Arni R., Gupta S.K., Manufacturability analysis of flatness tolerances in solid freeform fabrication, Journal of Mechanical Design, 2001, 123(1), pp. 148-156.
20. Campbell R.I., Martorelli M., Lee H.S., Surface Roughness Visualisation for Rapid Prototyping Models, Computer Aided Design, 2002, 34(10), pp. 717-725, ISSN 0010-4485.
21. Paul R., Anand S., Optimization of layered manufacturing process for reducing form errors with minimal support structures, Journal of Manufacturing Systems, 2014. doi:10.1016/j.jmsy.2014.06.014.
Optimization of lattice structures
for Additive Manufacturing Technologies
Gianpaolo SAVIO1,*, Roberto MENEGHELLO2 and Gianmaria CONCHERI1
1 University of Padova - Department of Civil, Environmental and Architectural Engineering - Laboratory of Design Tools and Methods in Industrial Engineering
2 University of Padova - Department of Management and Engineering - Laboratory of Design Tools and Methods in Industrial Engineering
* Corresponding author. Tel.: +39-049-827-6735; fax: +39-049-827-6738. E-mail address: gianpaolo.savio@unipd.it
Abstract Additive manufacturing technologies enable the fabrication of parts characterized by shape complexity and therefore allow the design of optimized components based on minimal material usage and weight. In the literature two approaches are available to reach this goal: the adoption of lattice structures and topology optimization. In a recent work a computer-aided method for the generative design and optimization of regular lattice structures was proposed. The method was investigated in a few configurations of a cantilever beam, considering six different cell types and two load conditions. In order to strengthen the method, in this paper a number of test cases have been carried out. The results explain the behavior of the method during the iterations and the effects of the load and of the cell dimension. Moreover, a visual comparison between the proposed method and the results achieved by topology optimization is shown.
Keywords: Cellular Structure, Lattice Structures, Additive Manufacturing, Design Methods, Computer-Aided Design (CAD).
1 Introduction
Additive manufacturing (AM) technologies enable the fabrication of innovative
parts not achievable by other technologies, characterized by shape complexity,
multiscale structures and material complexity. Moreover, fully functional assemblies and mechanisms can be directly fabricated [1]. These technologies need specific design tools and methods to take full advantage of their unique capabilities,
which currently have only limited support by commercial CAD software.
Reduction in material usage and weight could be a fundamental step in the diffusion of AM as demonstrated in industrial applications (e.g. in design of brackets
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_22
213
214
G. Savio et al.
for the aerospace industry). To reach this goal, commercial CAD software applications exist that are able to create a skin model and an internal lattice structure. Unfortunately, it is very difficult to perform structural analysis on cellular geometric models. Alternatively, other commercial tools support topology optimization, which modifies the material layout within a given design space, for a given set of loads and boundary conditions, such that the resulting layout meets a prescribed set of performance targets, obtaining an optimized concept design.
Today, interest in cellular materials is driven by the transport industry, aimed at new vehicles, which need to be lighter than ever (to reduce fuel usage and inertia) but also stiff, strong and capable of absorbing mechanical energy (e.g. in vehicle collisions or in helmet design) [2-3]. This explains the number of papers dealing with homogeneous lattice structures and their mechanical properties. Conformal or random cellular structures were also studied in the literature, and optimization criteria were proposed. For instance, recent research proposed methods for optimizing cellular structures where the goal is to reach an established deflection and a target volume while ensuring structural strength [4]. The approach was extended to conformal lattice structures, in which the cellular structures are not regular but follow the shape of curved surfaces in order to increase their stiffness or strength [5].
Another optimization method for conformal lattice structures uses the relative density available from topology optimization to assign a thickness to the beams [6]. A Bidirectional Evolutionary Structural Optimization approach based on topology optimization was recently proposed. This method takes into account the orientation of cells in the design stage, and considers solid volumes and skins in addition to beam elements [7].
In a recent work the authors [8] proposed a computer-aided method for the generative design and optimization of regular cellular structures, obtained by repeating a unit cell inside a volume, where the elements are cylinders having different radii. The approach is based on the iterative variation of the radius of each element in order to obtain the optimal design. The target of the optimization is the achievement of a required level of utilization, which specifies the level of usage of the material for each element (utilization is equal to zero when the maximum stress inside an element is null, and equal to 1 when the maximum stress equals the maximum admissible stress, e.g. the yield stress).
The method was investigated in a few configurations of a cantilever beam, considering six different cell types and two load conditions. As a result, cell types were classified as a function of relative density and compliance/stiffness in the different load conditions. The main limit of the study concerns the limited number of tests performed and the absence of case studies and experimental tests.
In this work, a number of test cases have been assessed in order to evaluate the behavior of the method during the iterations and the effects of different loads and cell dimensions. These results will be the basis for the development of guidelines for parameter setup as a function of the load/constraint configuration and compliance/stiffness requirements. Finally, a visual comparison between the proposed method and the topology optimization approach is shown.
2 Design Method
The proposed design method (fig. 1) is aimed at the substitution of a solid model
with cellular structures, obtaining a wire model computed by a generative modeling approach [9]. A finite element (FE) model is built on the wire model and then
analyzed [10]. A dedicated iterative optimization procedure was developed in Python [11] in order to obtain an optimized geometric model.
By repeating a regular unit cell of specified dimension side by side, a wire model is obtained. Each type of unit cell is defined by a number of edges, and consequently the wire model is a collection of lines connected at vertices called nodes. Each edge of the wire model is a beam with circular section in the FE model. The initial radius is the same for all beams, and is computed in order to ensure a desired value of the utilization index for the most stressed beam. This index specifies the level of usage of the material for an element according to EN 1993-1-1 [12]. To complete the FE model, material, loads and constraints must be defined according to the functional requirements of the solid model.
The most important result of the FE analysis is the computation of the utilization of each beam (Ui = utilization of the i-th beam), needed in the optimization step. The goal of the optimization is to obtain a Ui close to a target utilization Ut for all beams. In order to take the AM process features into account, a minimum radius (Rmin) for each beam must be defined; moreover, a maximum radius (Rmax) is computed considering the cell dimension.
More in detail, the optimization procedure consists of an iterative modification of the radius Ri of each beam (therefore defining a new FE model) and involves new results of the FE analysis. Each new radius Rni is defined as:

Rni = Ri · (Ui / Ut)    (1)

if Rni > Rmax then Rni = Rmax    (2)

if Rni < Rmin then Rni = Rmin    (3)

The iterative procedure continues until the Ui of each beam satisfies the following equation:

Ut − x·Ut < Ui < Ut + x·Ut    (Ut > 0, x < 1),    (4)

where x defines the range of admissible utilizations Ui (e.g. x = 0.1 means Ut ± 10%).
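The update loop of Eqs. (1)-(4) can be sketched as below. The FE analysis is replaced here by a hypothetical stub in which the utilization of beam i scales as k_i / R_i^1.5; the exponent and the k_i values are illustrative assumptions chosen so that the simple proportional update converges, and a real FE solve (e.g. via Karamba) would supply the utilizations instead.

```python
# Sketch of the iterative radius update of Eqs. (1)-(4), with a stub
# standing in for the FE analysis (hypothetical model U_i = k_i / R_i**1.5).

R_MIN, R_MAX = 0.25, 5.0        # mm, process-driven radius bounds
U_T, X = 0.5, 0.10              # target utilization and admissible band

def utilizations(radii, k):
    """Stub for the FE step: one utilization value per beam."""
    return [ki / r**1.5 for ki, r in zip(k, radii)]

def optimize(radii, k, max_iter=100):
    for it in range(max_iter):
        u = utilizations(radii, k)
        if all(U_T * (1 - X) < ui < U_T * (1 + X) for ui in u):   # Eq. (4)
            return radii, it
        # Eq. (1) with the clamps of Eqs. (2) and (3)
        radii = [min(R_MAX, max(R_MIN, r * ui / U_T))
                 for r, ui in zip(radii, u)]
    return radii, max_iter

k = [0.1, 0.3, 0.8, 1.5]                      # hypothetical per-beam factors
radii, iters = optimize([1.0, 1.0, 1.0, 1.0], k)
```

Two features of the real procedure show up even in this toy version: under-loaded beams are pushed onto the Rmin bound before recovering, and each beam must sit inside the admissible band simultaneously for the loop to stop, which is why convergence can be irregular for some cell types.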
Finally, the optimized geometrical model is computed: a cylinder having the
optimized radius and spherical caps is constructed around each line of the wire
model. Then, a Boolean union is carried out over all cylinders. Spherical caps are
adopted in order to reduce stress concentrations and to avoid non-manifold entities
at the nodes, where several beams having different radii converge. A similar approach was proposed by Wang et al. [13].
This modeling procedure shows limits, especially in the Boolean operations, file size and fillets. To overcome these restrictions, a specific modeling procedure was developed for the cubic cell. Starting from the results of the optimization procedure, a simple mesh was modeled, and the Catmull-Clark subdivision surface [14] was then adopted to obtain a smooth mesh using Weaverbird [15]. This approach can be extended to other cell types by defining specific methods for creating a simple mesh model of the cell.
[Flowchart: solid model → wire model (cell type, cell dimension) → FE model (cross-sections, material, loads, constraints) → FE analysis → optimized? → if no, new radii and back to the FE model; if yes, optimized model → mesh model]
Fig. 1. The proposed method for modeling and optimizing lattice structures.
3 Test cases
A cantilever beam with dimensions 30×30×80 mm was studied, with six types of cells (fig. 2): simple cubic (SC) [16], body-centered cubic (BCC) [16], reinforced body-centered cubic (RBCC) [16], octet truss (OT) [17], modified Gibson-Ashby (GAM) [18] and modified Wallach-Gibson (WG) [19]. The mechanical properties of polyamide 12 (PA 2200 by EOS GmbH) were adopted: tensile modulus E = 1700 MPa, yield strength 48 MPa, shear modulus G = 630 MPa, density 930 kg/m3 (Amado-Becker et al. 2008).
Fig. 2. Cell types: a) SC, b) BCC, c) RBCC, d) OT, e) GAM, f) WG.
The behavior of the method during the iterations has been investigated on the 6
cell type, adopting 5 mm of cell dimension and 50 N of flexural load.
The effect of the load has been studied on a 5 mm BCC subjected to a flexural
load ranging between 10 N to 200 N, with step of 10 N. The cell dimension effect
has been investigated on a BCC cell with edge length 2.5 mm,5 mm,10 mm.
Comparison between our method and topological optimization has been performed on SC cell with edge length 2.5 mm and 5 mm on 50 N of flexural load.
The topology optimization problem has been solved using Millipede, an add-on
for Grasshopper [20].
The convergence conditions adopted are: Ut = 0.5, x = 0.10 (0.45 < Ui < 0.55), Rmin = 0.25 mm, Rmax = 5 mm.
The relative density ρ is defined as:

ρ = Vo / Vc    (5)

where Vo is the volume of the optimized structure and Vc is the volume of the cantilever (Vc = 30×30×80 = 72000 mm3). The volume has been computed without considering the overlapping of the beam ends (i.e. without performing any Boolean union).
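Under the stated convention (no Boolean union), Vo is simply the sum of the individual beam-cylinder volumes, so ρ can be computed directly from the wire model. The beam list below is a hypothetical example, not data from the paper.

```python
# Relative density per Eq. (5), with Vo taken as the sum of the beam
# cylinder volumes (beam-end overlaps are deliberately not subtracted).
import math

V_CANTILEVER = 30 * 30 * 80      # Vc in mm^3

def relative_density(beams):
    """beams: (radius_mm, length_mm) pairs of the optimized wire model."""
    v_o = sum(math.pi * r * r * l for r, l in beams)
    return v_o / V_CANTILEVER

# Hypothetical optimized model: 600 beams clamped at the minimum radius plus
# 400 thicker load-bearing beams, all with a 5 mm edge length
beams = [(0.25, 5.0)] * 600 + [(1.0, 5.0)] * 400
rho = relative_density(beams)
```

The split between minimum-radius and thicker beams in the example mirrors the observation below that part of the relative density is contributed by beams stuck at Rmin.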
4 Results and discussion
The behavior of the method during the iterations is shown in fig. 3 and summarized in tab. 1 for the convergence conditions. The maximum and minimum utilizations of the beams show that convergence can be obtained with a low number of iterations for the BCC and RBCC cells. These two cells show a clear trend in convergence, while the other cells have an irregular trend in the maximum and minimum utilization values and consequently in the convergence of the method (fig. 3a). The numbers of beams and nodes show the problem complexity: the simplest cells are SC, WG and BCC (tab. 1). BCC shows the lowest relative density in the studied conditions,
while the GAM shows the highest (fig. 3b). It should be underlined that part of the relative density is contributed by the beams with minimum radius; consequently, in other configurations, different types of cell may produce a lower relative density. WG, RBCC and BCC show the highest stiffness, while the GAM has the highest compliance (fig. 3c).
Generally, a higher density is related to a lower compliance in the same topological configuration (fig. 3b, c). This aspect is evident for the WG cell, in which two configurations can be seen: the first around the 20th iteration and the second beyond the 60th iteration.
Other convergence criteria could be adopted in order to obtain a lower number of iterations, with no significant difference in relative density and displacement. For example, using as convergence criteria Ui < 0.55 and a variation of displacement between two consecutive iterations of less than 0.1%, the GAM cell converges within 12 iterations. Similarly, using as convergence criteria Ui < 0.55 and a variation of relative density between two consecutive iterations of less than 0.1%, the WG cell converges within 24 iterations.
The results relevant to the load variation, studied on a BCC cell, are summarized in fig. 4. The proposed method found a solution up to a load of 140 N. In order to increase the maximum load value, it is possible to change the convergence criteria or modify the approach for the computation of the new radii. In this range the relative density is almost proportional to the load, while the displacement has a quadratic behavior for loads ranging between 20 and 140 N. The number of iterations for convergence is between 16 and 25 for loads less than or equal to 120 N and increases to 40-50 iterations for higher loads.
Adopting different cell dimensions (fig. 5), a higher stiffness could be obtained with smaller cell dimensions. This could be related to the increased number of beams with minimum radius. For the same reason, increasing the cell dimension, the relative density shows a decreasing trend. For the given conditions, the iterations needed increase together with the cell dimension. Finally, a strong reduction of the problem complexity (number of beams) can be seen when increasing the cell dimension.
Fig. 6 shows a visual comparison between our approach and topology optimization. A similar behavior can be seen particularly in the portions close to the constraint (left) and to the loaded surface (right). These similarities could be related to the homogenization procedure that occurs in the topology optimization problem [21], in which the design space is filled with an artificial composite material made of cells with holes.
In brief, the results shown in this paper and in [8] can be used to derive guidelines for cell selection and parameter setup: when stiffness is the design target, RBCC and BCC cell structures are recommended. SC shows the lowest complexity. Increasing the load, the relative density and displacement increase. Reducing the cell dimension, the relative density, stiffness and number of beams increase. BCC is suggested when the goal is a low relative density and few iterations. Higher values of the range of admissible utilizations allow faster convergence.
Fig. 3. Behavior of the proposed method: a) utilization, b) relative density and c) displacement as a function of the iterations under flexural load.
Fig. 4. Relative density as a function of the flexural load for the BCC cell.
Fig. 5. Behavior of the proposed method: a) relative density and displacement and b) number of beams and iterations as a function of the cell dimension under flexural load for the BCC cell.
Future work will address the evaluation of further configurations, the investigation of methods for simplifying the geometric modeling procedure, and the experimental testing of the method on components of practical interest. Moreover, different optimization criteria will be studied.
Table 1. Model configuration and convergence conditions under flexural load.

Cell type            SC      BCC     RBCC    OT      GAM     WG
Beams                2212    6820    10276   14736   17280   4890
Nodes                833     1409    3365    2789    12324   1193
Iterations           57      19      18      77      98      71
Displacement [mm]    4.422   3.311   3.22    4.196   6.474   2.685
Relative density     0.0766  0.0473  0.0547  0.0816  0.1005  0.0624

Fig. 6. Proposed method on a cubic cell (a, b) vs topology optimization (c).
References
1. Gibson I. Rosen D. and Stucker B. Additive Manufacturing Technologies: 3D Printing, Rapid
Prototyping, and Direct Digital Manufacturing, 2015 (Springer-Verlag New York).
2. Gibson L.J. and Ashby M.F. Cellular solids: structure and properties, 1997 (Cambridge University Press).
3. Ultralight Cellular Materials,
http://www.virginia.edu/ms/research/wadley/celluar-materials.html (access 2016/04/26)
4. Chu J. Engelbrecht S. Graf G. and Rosen D.W. A comparison of synthesis methods for cellular structures with application to additive Manufacturing. Rapid Prototyping Journal, 2010,
16(4), 275-283.
5. Nguyen J. Park S.I. Rosen D.W. Folgar L. and Williams J. Conformal Lattice Structure Design and Fabrication. In International Solid Freeform Fabrication Symposium, Austin, August
2012, 138-161.
6. M. Alzahrani, S.K. Choi and D. W. Rosen. Design of Truss-like Cellular Structures Using
Relative Density Mapping Method. Materials and Design, 2015, 85, 349-360.
7. Tang Y. Kurtz A. and Zhao Y.F. Bidirectional Evolutionary Structural Optimization (BESO)
based design method for lattice structure to be fabricated by additive manufacturing. Computer-Aided Design, 2015, 69, 91-101.
8. Savio G. Gaggi F. Meneghello R. and Concheri G. (2015). Design method and taxonomy of
optimized regular cellular structures for additive manufacturing technologies. In International
Conference on Engineering Design, ICED’15, Vol 4, Milan, July 2015, pp.235-244 (Design
Society, Glasgow, Scotland).
9. Grasshopper, http://www.grasshopper3d.com/ (access 2016/04/26).
10. Karamba, http://www.karamba3d.com/ (access 2016/04/26).
11. Rhino Developer Docs,
http://developer.rhino3d.com/guides/rhinopython/what_is_rhinopython/ (access 2016/04/26).
12. Preisinger C. Karamba User Manual for Version 1.1.0. 2015.
13. Wang H. Cheng Y. and Rosen D.W. A Hybrid Geometric Modeling Method for Large Scale
Conformal Cellular Structures. In ASME Computers and Information in Engineering Conference, Long Beach, California, September 2005, pp. 421-427.
14. Catmull E. and Clark J. Recursively generated B-spline surfaces on arbitrary topological
meshes. Computer-Aided Design, 1978, 10(6), 350-355.
15. Piacentino G. Weaverbird Beta 0.9.0.1. http://www.giuliopiacentino.com/weaverbird/ (access
26-04-2016).
16. Luxner M.H. Stampfl J. and Pettermann H.E. Finite element modeling concepts and linear analyses of 3D regular open cell structures. Journal of Materials Science, 2005, 40, 5859-5866.
17. Deshpande V.S. Fleck N.A. and Ashby M.F. Effective properties of the octet-truss lattice material. Journal of the Mechanics and Physics of Solids, 2001, 49, 1747-1769.
18. Roberts A.P. and Garboczi E.J. Elastic properties of model random three-dimensional open-cell solids. Journal of the Mechanics and Physics of Solids, 2002, 50, 33-50.
19. Wallach J.C. and Gibson L.J. Mechanical behavior of a three-dimensional truss material. International Journal of Solids and Structures, 2001, 38(40-41), 7181-7196.
20. Panagiotis M. and Sawako K. Millipede http://www.sawapan.eu/ (access 2016/05/02).
21. Hassani B. and Hinton E. Homogenization and structural topology optimization: theory,
practice and software, 1999 (Springer-Verlag London).
Standardisation Focus on Process Planning and
Operations Management for Additive
Manufacturing
Jinhua XIAO1, Nabil ANWER2, Alexandre DURUPT1, Julien LE DUIGOU1 and Benoît EYNARD1,*
1 Sorbonne Universités, Université de Technologie de Compiègne, Department of Mechanical Systems Engineering, UMR UTC/CNRS 7337 Roberval, CS 60319, 60203 Compiègne Cedex, France
2 LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris-Saclay, 94235 Cachan, France
* Corresponding author. Tel.: +33 (0)3 44 23 79 67; fax: +33 (0)3 44 23 52 29. E-mail address: benoit.eynard@utc.fr
Abstract The work presented in this paper focuses on process planning and operations management for Additive Manufacturing (AM) through standards such as ISO 10303, ISO 14649, ISO 15531 and ISO/CD 18828, and the Unified Manufacturing Resource Model (UMRM). We have combined these standards to integrate process implementations, manufacturing management and control, and information flows. The objective of this work is to standardize the manufacturing process for AM. In addition, the UMRM is introduced to develop a unified manufacturing resource service platform, which can provide the required information regarding machine tools to automate decision making in process planning and operations management.
Keywords: Additive Manufacturing, Process Planning, Operations Management,
Unified Manufacturing Resource Model, Standardisation
1 Introduction
Additive Manufacturing (AM) processes differ from traditional manufacturing processes. The differences can be summarized in two main points: the first is related to the material's processing (additive versus subtractive), and the second relies on the integrated digital chain and the related data models, which include not only geometric data but also manufacturing information, such as process planning and operations management. These still remain poor in standardized information for process specification and planning that would allow the development of advanced AM operations management [1, 2]. Standards for process planning and operations management enable efficient data exchange, with better verification and validation, simulation, optimization and richer feedback [3, 4]. Hence, the work presented in this paper is based on three important sets of ISO standards whose primary objective is to enhance data exchange for process planning and operations management: ISO 10303 (STEP) [5] and ISO 14649 (STEP-NC) [4], ISO 15531 (MANDATE) [6], and ISO/CD 18828 [7].
Combining international standards is a good way of enhancing the interoperability of the information systems used in manufacturing, making information exchange easier and more efficient. ISO 10303 is a standard for computer-interpretable representation that covers a wide variety of product types and describes standardised data models in several application protocols. The ISO 15531 MANDATE standard for the exchange of manufacturing management data has been developed to support information and knowledge management throughout the product's life cycle. ISO/CD 18828, also known as the Standardized Procedure for Production Systems Engineering, is an emerging international standard for industrial data and manufacturing interfaces. Together, these standards mainly cover manufacturing engineering, numerically controlled machines, and process measurement and control. Moreover, AM processes and operations management can be analysed with the unified manufacturing resource model (UMRM), which provides the required information on AM machine tools and auxiliary devices to automate process planning decisions. This work thus addresses process management by standardising integrated manufacturing processes and operations management.
The next sections highlight process standards for manufacturing information systems, mainly ISO 10303 and ISO 14649, ISO 15531, and ISO 18828. Section 3 then analyses the unified manufacturing resource model (UMRM) in order to enhance the operations management of AM machine tools. Section 4 gives conclusions and perspectives.
2 Process standards for manufacturing information systems
In manufacturing information systems, enhancing interoperability between computer-aided applications is a compelling goal, because these applications share and exchange manufacturing data throughout the additive manufacturing process [8, 9]. When a computer-aided process planning (CAPP) application creates a process plan, it ideally must match machining operations with an appropriate machine tool. To elaborate the process plan, the relevant information about machine tools should be known before processing instructions are sent for execution. Standards from the International Organization for Standardization (ISO) are one of the current solutions to facilitate data sharing and exchange for the manufacturing process.
Standardisation Focus on Process Planning ...
Our research is set in the context of product lifecycle management. The objective of process planning is to generate process sequences and parameters that minimise hazards. Significant work on data sharing and exchange is therefore being done by ISO, which has launched a number of standardised data models. Table 1 presents the standards for process planning in manufacturing systems. Although each standard has a specific usage, all of them use the EXPRESS modelling language and permit building neutral data repositories [10].
Table 1. Standards for process planning in manufacturing system

ISO standard      Acronym   Objectives                                          Usage
10303 and 14649   STEP-NC   To develop standardized data models                 Process implementation
15531             MANDATE   To create normalized data models for improving      Manufacturing management
                            manufacturing management and information exchange   and control
18828             (none)    To standardize the procedure for production         Information flows for
                            systems engineering                                 planning processes
2.1 ISO 10303-238 and ISO 14649 (STEP-NC)
ISO 10303-238 and ISO 14649 define a new data model that links CAD/CAM systems and CNC machines, remedying the shortcomings of ISO 6983 by specifying machining processes rather than machine tool motions [4]. As ISO 14649 provides a comprehensive model of the manufacturing process, it can also be used for multi-directional data exchange with other information technology systems. An advantage of ISO 14649 is that it supersedes the reduction of data to simple switching instructions, such as linear and circular movements: it can describe the machining operations to execute on the workpiece and run on different machine tools or controllers [8]. The standard is used to implement manufacturing processes in complex environments, and it has been extended by other standards for representing geometry, features, process plans and manufacturing data for NC machine tools. ISO and ASME standards are emerging as the result of an international effort to represent manufacturing resources, and STEP-NC provides a full description of both the part and the manufacturing process [11, 12, 13].
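As a rough, hypothetical illustration of this shift from motion commands to machining operations, a STEP-NC-style process plan can be sketched in Python; all class and attribute names below are invented for the example and do not reproduce the actual ISO 14649 EXPRESS entities:

```python
from dataclasses import dataclass, field

# Simplified, invented stand-ins for ISO 14649 concepts: a plan is a
# sequence of workingsteps, each tying a manufacturing feature to an
# operation -- not a list of low-level axis movements as in ISO 6983.
@dataclass
class Feature:
    name: str          # e.g. "hole1"
    kind: str          # e.g. "round_hole"

@dataclass
class Operation:
    name: str          # e.g. "drilling"
    tool: str          # e.g. "twist_drill_5mm"
    params: dict = field(default_factory=dict)

@dataclass
class Workingstep:
    feature: Feature
    operation: Operation

@dataclass
class ProcessPlan:
    workpiece: str
    steps: list = field(default_factory=list)

    def summary(self):
        # A controller (or another IT system) interprets the plan at the
        # level of features and operations, independent of machine kinematics.
        return [f"{s.operation.name} {s.feature.name} with {s.operation.tool}"
                for s in self.steps]

plan = ProcessPlan("bracket")
plan.steps.append(Workingstep(Feature("hole1", "round_hole"),
                              Operation("drilling", "twist_drill_5mm",
                                        {"depth_mm": 12.0})))
print(plan.summary())
```

The point of the sketch is that the plan stays at the level of features and operations, which different machine tools or controllers can each interpret, instead of fixed switching instructions.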
2.2 ISO 15531 (MANDATE)
ISO 15531 is an international standard for the computer-interpretable representation and exchange of industrial manufacturing management data. Its objective is to provide a neutral mechanism capable of describing industrial manufacturing management data throughout the production process. This description makes it suitable not only for neutral file exchange but also as a basis for implementing and sharing manufacturing management databases and for archiving [14, 15].
ISO 15531 does not standardise the manufacturing process itself; its aim is to provide standardised data models. The scope of the MANDATE model is the manufacturing system: machines, workers, tools, etc. MANDATE therefore includes neither the modelling of the resource shape nor the description of its usage.
Fig. 1 MANDATE process information model in manufacturing planning
Ideally, it addresses operational planning and has a simulation capability. It is made up of a variety of processes and links together business planning, production planning, production scheduling, material requirements planning, capacity requirements planning, and the execution support systems for capacity and material. Fig. 1 presents the MANDATE process information model in manufacturing planning. MANDATE specifies the information structure and the data exchange at the interfaces between the different functions.
2.3 ISO 18828
ISO 18828 makes use of the following terms defined in ISO 15531: manufacturing, process, process planning, production control and resource. ISO 18828-3 provides additional information that focuses on information flows. The main information flow for an operation or a process plan includes two information objects. The first object, referred to as the 'preliminary information for the operation list', includes the prerequisites for process planning. The second information object is referred to as the 'operation list'. As shown in Fig. 2, based on preliminary information compiled from various relevant sources, the concept planning phase generates an initial conceptual operation list; various pieces of preliminary information are required to start this phase and define the operation list. During the rough planning phase, the conceptual operation list is detailed further in accordance with the relevant continuing planning process steps, ultimately resulting in a rough operation list. During the detailed planning phase, the rough operation list is fully detailed and developed into an elaborated, complete operation list defining the manufacturing process steps and the work content.
Fig. 2 Information flow of planning process and operation [7]
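The staged refinement described above can be sketched as a minimal pipeline; the function names and example operations are illustrative and are not taken from the standard:

```python
# Illustrative sketch of the ISO 18828-3 information flow: preliminary
# information -> conceptual operation list -> rough list -> complete list.
# All names and example data are invented for this sketch.

def concept_planning(preliminary_info):
    # Derive an initial conceptual operation list from the prerequisites.
    return [f"concept:{op}" for op in preliminary_info["required_operations"]]

def rough_planning(conceptual):
    # Detail each conceptual operation in accordance with the continuing
    # planning process steps.
    return [op.replace("concept:", "rough:") for op in conceptual]

def detailed_planning(rough):
    # Develop the rough list into a complete, elaborated operation list
    # with process steps and work content.
    return [op.replace("rough:", "detailed:") + "/steps+work_content"
            for op in rough]

preliminary = {"required_operations": ["deposit_layer", "heat_treat"]}
operation_list = detailed_planning(rough_planning(concept_planning(preliminary)))
print(operation_list)
```

Each phase only refines the output of the previous one, which is exactly the property that makes the two information objects (preliminary information and operation list) sufficient to describe the flow.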
3 Operations management of AM machines
By analysing the functionality and usage of the three standards above, high integration and interoperability of manufacturing processes and operations for additive manufacturing (AM) can be fundamentally realised, even in complex environments. Current AM processes fall into many categories, as shown in Table 2 [16]. Industry has predominantly focused on powder bed fusion (PBF) and directed energy deposition (DED). These technologies use highly localised temperatures to melt and deposit materials on the build surface [17, 18, 19].
The additive manufacturing machining system representation can be classified into various resource domains, as shown in Fig. 3. One important objective of this representation is to capture process capability. A unified manufacturing resource model (UMRM) is therefore introduced to provide the required information on AM machine tools and auxiliary devices to automate process planning decisions [20].
Table 2. Standard terminology for additive manufacturing processes [16]

ASTM additive manufacturing process   Description of operations
Material Extrusion                    Material is selectively dispensed through a nozzle or orifice.
Material Jetting                      Droplets of material are selectively deposited.
Binder Jetting                        A liquid bonding agent is selectively deposited to join powder materials.
Sheet Lamination                      Material sheets are bonded to form an object.
Vat Photopolymerisation               Liquid photopolymer in a vat is selectively cured by light-activated polymerization.
Powder Bed Fusion                     Thermal energy selectively fuses regions of a powder bed.
Directed Energy Deposition            Focused thermal energy fuses materials by melting as the material is deposited.
Fig. 3 Additive manufacturing machining system resource
3.1 AM machine elements
An important point of modern design and manufacturing is to ensure that effective manufacturing decisions are made as early as possible in process planning, in order to eliminate or reduce decision mistakes. Many efforts have been made towards an integrated computer-aided design (CAD), computer-aided process planning (CAPP) and computer-aided manufacturing (CAM) platform [21]. Automatic process planning methodologies and manufacturing resource modelling have largely advanced computer-integrated manufacturing, which includes the determination of raw materials, the available machine tools and the selection of the operation sequence. The machine tool model describes the configuration of the overall machine tool structure, the geometric shape of the mechanical units and the kinematic relationships between them [22]. However, the representation of the machine tool cannot be supported by the STEP-NC information model, so it is necessary to propose a modified UMRM to describe the machine tool functionality and process capability of AM machines.
3.2 AM operations
AM operations can be considered as an accumulation of operations that add the desired material to form the final product. Many similar operations exist for fabricating a part by depositing material, such as welding [23]. The AM machine tool executes the desired manufacturing operations in the manufacturing process. The unified manufacturing resource model (UMRM) provides a data model with machine-specific data in the form of an EXPRESS schema, acting as a complementary part to represent various machine tools in a standardised form. Fig. 4 presents the UMRM of the machine tool (a) and the mechanical machine elements (b) for AM systems. The machine tool model is a conceptual representation of the machine tool and provides a logical framework for representing its functionality and sequential process operations in manufacturing systems [24]. This representation can therefore be used to describe machine tool functionality, the planning process and sequential operations. The machine tool is considered as an assembly of various mechanical machine elements and auxiliary devices linked to each other. The purpose of modelling each mechanical element of the machine tool is to capture machine capability and to give the entire manufacturing system higher integration and interoperability by standardising process planning and operations.
Fig. 4 The UMRM of AM system for machine tool (a) and mechanical machine element (b)
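As a rough stand-in for such a resource model (the UMRM itself is defined as an EXPRESS schema; every name below is invented for illustration), the machine tool can be sketched as an assembly of mechanical elements and auxiliary devices whose capabilities are aggregated for process planning queries:

```python
from dataclasses import dataclass, field

# Invented, simplified stand-in for a UMRM-style machine tool model:
# the machine is an assembly of elements and auxiliary devices, each
# declaring capabilities.
@dataclass
class MachineElement:
    name: str                        # e.g. "laser_unit"
    capabilities: set = field(default_factory=set)

@dataclass
class AMMachineTool:
    name: str
    elements: list = field(default_factory=list)
    auxiliary: list = field(default_factory=list)

    def process_capability(self):
        # Aggregate capability of the machine = union of the capabilities
        # of its mechanical elements and auxiliary devices; a CAPP system
        # could query this to match operations to machines.
        caps = set()
        for e in self.elements + self.auxiliary:
            caps |= e.capabilities
        return caps

machine = AMMachineTool("pbf_machine_1")
machine.elements.append(MachineElement("laser_unit", {"powder_bed_fusion"}))
machine.auxiliary.append(MachineElement("recoater", {"powder_spreading"}))
print(sorted(machine.process_capability()))
```

The design choice mirrored here is the one the paper attributes to UMRM: the machine tool is not a monolith but an assembly whose overall process capability emerges from its elements.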
4 Conclusions and future work
This paper has focused on research issues in process planning and operations management dedicated to additive manufacturing. The work presented is based on three sets of ISO standards whose primary objective is to enhance data exchange for process planning and operations management: ISO 10303 and ISO 14649, ISO 15531, and ISO/CD 18828. We analysed the fundamental characteristics of these standards to address process implementation, manufacturing management and control, and information flows for the planning process. The unified manufacturing resource model is then used to provide machine-specific data in the form of an EXPRESS schema and to represent various machine tools in a standardised form, so that this representation can describe machine tool functionality, the planning process and sequential operations. Future developments and applications of this work will promote the integration and standardisation of hybrid (additive/subtractive) manufacturing systems in an industrial context.
Acknowledgments This work has been supported by the Doctoral Program of the China Scholarship Council.
References
1. Krishnan, N., and Sheng, P. S. Environmental versus conventional planning for machined
components. CIRP Annals-Manufacturing Technology, 2000, 49(1), 363-366.
2. Kim, D. B., Witherell, P., Lipman, R., and Feng, S. C. Streamlining the additive manufacturing digital spectrum: A systems approach. Additive Manufacturing, 2015, 5(1), 20-30.
3. Danjou, C., Le Duigou, J., and Eynard, B. Closed-loop Manufacturing, a STEP-NC Process
for Data Feedback: A Case Study. Procedia CIRP, 2016, 41(1), 852-857.
4. Danjou, C., Le Duigou, J., and Eynard, B. Closed-loop manufacturing process based on STEP-NC. International Journal on Interactive Design and Manufacturing, 2015, 1(1), 1-13.
5. Pratt, M. J. Introduction to ISO 10303—the STEP standard for product data exchange. Journal
of Computing and Information Science in Engineering, 2001, 1(1), 102-103.
6. Cutting-Decelle, A. F., Young, R. I., Michel, J. J., Grangel, R., Le Cardinal, J., and Bourey, J.
P. ISO 15531 MANDATE: a product-process-resource based approach for managing modularity in production management. Concurrent Engineering, 2007, 15(2), 217-235.
7. Industrial automation systems and integration — Standardized procedure for production systems engineering — Part 2: Reference process for seamless production planning, Draft International Standard ISO/DIS 18828-2, 2016 (ISO).
8. Eynard B., Bosch-Mauchand M., Integrated Design and Smart Manufacturing, Concurrent
Engineering: Research and Applications, 2015, 23(4), 281-283.
9. Cutting-Decelle, A. F., Barraud, J. L., Veenendaal, B., and Young, R. I. Production information interoperability over the Internet: A standardised data acquisition tool developed for industrial enterprises. Computers in Industry, 2012, 63(8), 824-834.
10. López-Ortega, O., and Moramay, R. A STEP-based manufacturing information system to
share flexible manufacturing resources data. Journal of Intelligent Manufacturing, 2005,
16(3), 287-301.
11. Rauch, M., Laguionie, R., Hascoet, J. Y., and Suh, S. H. An advanced STEP-NC controller
for intelligent machining processes. Robotics and Computer-Integrated Manufacturing, 2012,
28(3), 375-384.
12. Amaitik, S. M., and Kiliç, S. E. An intelligent process planning system for prismatic parts using STEP features. The International Journal of Advanced Manufacturing Technology, 2007,
31(9-10), 978-993.
13. Srinivasan, V. Standardizing the specification, verification, and exchange of product geometry: Research, status and trends. Computer-Aided Design, 2008, 40(7), 738-749.
14. Industrial Automation Systems and Integration — Industrial Manufacturing Management Data — General Overview: Part 1, ISO TC184/SC4, ISO IS 15531-1, 2004 (ISO).
15. Ray, S. R., and Jones, A. T. Manufacturing interoperability. Journal of Intelligent Manufacturing, 2006, 17(6), 681-688.
16. ASTM Standard. Standard terminology for additive manufacturing technologies. ASTM F2792, December 2015, pp. 1-3 (ASTM International, West Conshohocken, PA).
17. Flynn, J. M., Shokrani, A., Newman, S. T., and Dhokia, V. Hybrid additive and subtractive
machine tools–Research and industrial developments. International Journal of Machine Tools
and Manufacture, 2016, 101(1), 79-101.
18. Simchi, A., Petzoldt, F., and Pohl, H. On the development of direct metal laser sintering for
rapid tooling. Journal of Materials Processing Technology, 2003, 141(3), 319-328.
19. Brown, C., Lubell, J., and Lipman, R. Additive manufacturing technical workshop summary
report. NIST Technical Note 1823, 2013.
20. Vichare, P., Nassehi, A., Kumar, S., and Newman, S. T. A unified manufacturing resource
model for representing CNC machining systems. Robotics and Computer-Integrated Manufacturing, 2009, 25(6), 999-1007.
21. Xu, X. W., and He, Q. Striving for a total integration of CAD, CAPP, CAM and CNC. Robotics and Computer-Integrated Manufacturing, 2004, 20(2), 101-109.
22. Amaitik, S. M., and Kiliç, S. E. An intelligent process planning system for prismatic parts using STEP features. The International Journal of Advanced Manufacturing Technology, 2007,
31(9-10), 978-993.
23. Eiamsa-ard, K., Nair, H. J., Ren, L., Ruan, J., Sparks, T., and Liou, F. W. Part repair using a
hybrid manufacturing system. In Proceedings of the Sixteenth Annual Solid Freeform Fabrication Symposium, vol. 1, Missouri, August 2005, pp.425-433 (SFF Symposium Proceedings, Austin).
24. Liu, Y., Guo, X., Li, W., Yamazaki, K., Kashihara, K., and Fujishima, M. An intelligent NC
program processor for CNC system of machine tool. Robotics and Computer-Integrated
Manufacturing, 2007, 23(2), 160-169.
Comparison of some approaches to define a
CAD model from topological optimization in
design for additive manufacturing.
DOUTRE Pierre-Thomas1,2, MORRETTON Elodie1,3, VO Thanh Hoang1, MARIN Philippe1*, POURROY Franck1, PRUDHOMME Guy1, VIGNAT Frederic1
1 Univ. Grenoble Alpes, G-SCOP, F-38000 Grenoble, France; CNRS, G-SCOP, F-38000 Grenoble, France
2 POLY-SHAPE, 235 rue des Canesteu, ZI La Gandonne, 13300 Salon de Provence, France
3 Zodiac Seat France, ZI La Limoise, Rue Robert Maréchal, 36100 Issoudun, France
* Corresponding author. E-mail address: philippe.marin@grenoble-inp.fr
Abstract: Topological optimization is often used in the design of lightweight structures. Additive manufacturing makes it possible to produce complex shapes and thus to exploit the full potential of this tool. However, the topology optimization result is a discrete representation of the optimal topology, requiring designers to 'manually' create a CAD model. This process can be very time consuming and heavily penalizes the efficiency of the design process. In this paper, several possible approaches to obtain a CAD model from topological optimization results are proposed. Based on case studies, the benefits and drawbacks of these approaches are discussed in order to help engineers choose an approach.
Keywords: Surface reconstruction; additive manufacturing; topological optimization; polygonal model.
1 Introduction
Topological optimization is a method for finding the best distribution of material within a defined spatial design domain, under specified constraints and with respect to a specific objective [1]. It often results in very complex geometries which are hard to produce by conventional manufacturing processes. At the same time, the recent development of additive manufacturing technologies has led to radical changes in design activities: it is often claimed that manufacturing complex shapes becomes possible at virtually no extra cost. As a consequence, topological optimization can be considered a tool of major interest in design for additive manufacturing [2].
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering and Manufacturing, Lecture Notes in Mechanical Engineering, DOI 10.1007/978-3-319-45781-9_24
Fig.1. Topological optimization process, (a) The initial model, (b) The density map, (c) The topology after threshold choice.
Since topological optimization involves computations on a spatially discretized geometrical domain, the resulting optimal geometry is produced as a polygonal model (Figure 1). To meet customers' requirements and facilitate manufacturing, this tessellated model must be transformed into a new geometric model (referred to in this paper as the "CAD model") with higher surface continuity. Currently, few CAD systems incorporate such a conversion tool.
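The threshold step of Figure 1 (from density map (b) to topology (c)) can be illustrated on a discretized density field: each cell of the design domain carries a material density in [0, 1], and a chosen cut-off decides which cells keep material. A minimal sketch with NumPy, where the field values and the threshold are invented for the example:

```python
import numpy as np

# Toy 1D "density map" over a discretized design domain (values in [0, 1]),
# as produced by a density-based topology optimization.
density = np.array([0.05, 0.92, 0.70, 0.31, 0.88, 0.10])

# Threshold choice: cells above the cut-off keep material; the result is a
# discrete (binary) topology from which the polygonal model is extracted.
threshold = 0.5
keep = density > threshold
print(keep.astype(int))       # 1 = material, 0 = void
print(float(keep.mean()))     # retained volume fraction
```

The binary field is what makes the optimum inherently discrete: the faceted polygonal boundary is extracted from these cells, which is why a separate reconstruction step toward a continuous CAD model is needed.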
The aim of this paper is to compare different possible approaches to generate a
CAD model from topological optimization results.
To answer this question, a state-of-the-art study was first conducted in order to identify the main possible approaches from the scientific literature and from engineering practices. Section 2 presents a short synthesis of this study. These approaches are then compared through a case study in Section 3, and Section 4 compares the approaches against several criteria, showing the benefits and drawbacks of each approach for designers.
2 State of the art
CAD model reconstruction has been the focus of many research works in different domains. Main examples are reverse engineering, which typically involves generating a parametric CAD model from a set of points obtained by 3D scanning of an existing part, and finite element analysis, which may sometimes also require generating a CAD model of a deformed mesh. Hence, a variety of approaches and tools have been identified, which can be classified into two philosophies. In the first, the designer uses the polygonal model issued from the optimization software as a background layer: the CAD model is generated by hand, tracing the background layer with different CAD features. In the second, the user works with automatic surface or solid generation tools applied directly to the polygonal model.
In the first philosophy, there are two main approaches. The first uses conventional CAD features based on classical extrusion operations. The created geometries are quite simple, so creating a complex shape becomes a time-consuming process [3]. The second uses freeform surface manipulations to create the CAD model [4]. These operations rely on NURBS technology, and some CAD software natively based on this mathematical model has recently emerged (e.g. Evolve1 and SpaceClaim [3]). Using these manual approaches, designers can freely interpret the computed optimal topology and make any adjustments they want in order to integrate additional design or manufacturing constraints. But these approaches are very time consuming, which is their major drawback, even if this highly depends on the designer's skill with the CAD tools.
1 www.altair.com
Considering the second philosophy, three main methods were identified, each of them involving automatic tools. For geometry reconstruction from 3D scanning, several steps are necessary: cleaning the point cloud, creating the polygonal model and automatically fitting surfaces to it. To create the polygonal model, the "shrink wrapping" algorithm [5] can be used. This is a way to build a new polyhedral shell as a skin around the original mesh; the technique simultaneously smooths the shape and closes small holes. It uses the principle of a plastic membrane composed of triangles that is wrapped around the original object and then deformed to come close to the object while reducing the local distortion energy. Starting from the mesh, [6], [7] and [8] propose to cut it into surface patches; an algorithm of surface creation is then applied to each patch, and these surfaces are finally merged. Volpin [9] follows the same process, but performs a mesh simplification beforehand. On the other hand, this reconstruction can be done with other methods, as suggested by [10], which performs a surface interpolation for each patch and then fits NURBS surfaces. This method was also applied to CAD reconstruction from topological optimization results by [11]. An interesting idea comes from the work of Koch [12], who suggests manually creating the functional surfaces afterwards.
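As a much simplified, two-dimensional stand-in for such smoothing (this is plain Laplacian smoothing, not the actual shrink-wrapping algorithm of [5]), each vertex can be moved toward the average of its neighbours, which reduces local roughness at the price of some shrinkage:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Move each vertex a fraction `lam` toward the centroid of its
    neighbours; repeated passes smooth the polygonal shape, somewhat like
    relaxing a wrapped elastic membrane."""
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        centroids = np.array([v[nb].mean(axis=0) for nb in neighbors])
        v += lam * (centroids - v)
    return v

# Noisy closed polyline: a 2D stand-in for a rough triangle mesh boundary.
pts = np.array([[0, 0], [1, 0.4], [2, -0.3], [3, 0.2], [2, 2], [1, 2.3], [0, 2]])
n = len(pts)
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
smooth = laplacian_smooth(pts, ring)

# Smoothing reduces the total squared deviation of each vertex from its
# neighbours' midpoint (a crude "roughness" energy).
def roughness(v):
    return sum(np.sum((v[nb].mean(axis=0) - v[i]) ** 2)
               for i, nb in enumerate(ring))

print(roughness(pts) > roughness(smooth))
```

The shrinkage visible with this naive scheme is precisely why dedicated algorithms such as shrink wrapping control the membrane energy rather than simply averaging positions.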
In addition to these different approaches, this literature review raised some important criteria for comparing them, especially execution speed and the capability to produce a complex shape [4], as well as geometric accuracy, ease of use and repeatability of the result [13].
Through this state-of-the-art analysis, we can see that most articles describe a general approach using different examples, which does not allow a comparison between the approaches. Moreover, only algorithms and methods are described; there is no development on approach performance, use and implementation in the design phase, or on the associated tools. So, in order to draw a more explicit comparison between these methods, they will now be implemented within a common case study: the CAD reconstruction of a mounting bracket from topology optimization results.
3 Case study
The part on which the different reconstruction methods are applied is a mounting bracket [14]. A 10 kN uniformly distributed load is applied normal to the surface (Figure 2.a). Topology optimization was performed with the INSPIRE2 software, with the objective of maximizing stiffness while respecting a maximum mass constraint. The bracket dimensions are 95 x 29 x 27 mm.
2 www.altair.com
3.1 First approach: taking inspiration from topology
The optimization result (Figure 2.b) is used in the proposed approaches as a background layer: the shape coming from the topological computation is considered only as an inspiration source, indicating where the designer should put material when designing the reconstructed part geometry.
• Using Standard CAD features (App. 1a)
In this approach, a conventional CAD tool (Catia3) is used to redesign the part. First, an assembly is created to enable the superposition of the topological result and the new geometry. Then the designer uses classical protrusion and cut operations to build up the model. Finally, sharp edges are rounded to improve the connections between the various volumes that constitute the part. Big differences between the topological result and the final geometry can be observed in Figure 2.c and Figure 3. If the designer wants to stay closer to the polygonal model, more conventional operations and more time are needed, and the redesign may become a really complex and time-consuming task.
Fig.2. (a) Loads and constraints applied on the bracket, (b) Topological optimization result, (c)
Superposition of STL and CAD model
The result can differ depending on the designer and the design strategy. Figure 4 shows the part model obtained with a different strategy by a different designer using the same software.
Fig.3. CAD model with CATIA
Fig.4. CAD model with CATIA Generative shape design
In this last case, a surface sweep function is used to build the part model, and the designer decided to integrate manufacturing constraints such as a minimum diameter and a minimum angle, so that the part can be manufactured without supports. The two CAD models are very different from each other and from the topology optimization result. With this design method it is difficult to generate complex shapes, because conventional tools are not appropriate.
3 http://www.3ds.com/fr/produits-et-services/catia/
• Using Direct Modeling (App. 1b)
In this approach, PolyNURBS technology is used to design a CAD part from the polygonal model. PolyNURBS is a "direct modeling" approach that allows the user to "push and pull" a three-dimensional surface in order to virtually sculpt the shape of the part without having to manage parameters or the feature tree. The principle of this tool is to move points of the surface's control polygon by handling vertices, edges and faces (Figure 5).
The resulting CAD model (Figure 6) is very different from the previous ones. Since the tool allows more flexibility in shape design, it gives the designer more freedom in shape interpretation. Such a tool significantly changes the way mechanical engineers design parts: the part is redesigned with fewer basic features, which leads to uncommon shapes.
Fig.5. Manipulation in Evolve
Fig.6. CAD Model with Evolve 2015
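The "push and pull" principle, where control points are moved instead of feature parameters being edited, can be illustrated on a tiny cubic Bezier curve (a one-dimensional analogue of a PolyNURBS surface); the control points and values below are invented for the example:

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a cubic Bezier curve by de Casteljau's algorithm at
    parameters t; ctrl is a (4, 2) array of control points."""
    p = np.repeat(np.asarray(ctrl, float)[None, :, :], len(t), axis=0)
    for _ in range(3):
        # Repeated linear interpolation between consecutive points.
        p = (1 - t)[:, None, None] * p[:, :-1] + t[:, None, None] * p[:, 1:]
    return p[:, 0]

ctrl = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float)
t = np.linspace(0, 1, 5)
flat = bezier(ctrl, t)

# "Pull" one control point upward: the whole span follows smoothly,
# without any feature-tree parameter being edited.
ctrl[2] = [2, 1.5]
pulled = bezier(ctrl, t)
print(flat[2], pulled[2])   # the curve midpoint follows the pulled point
```

Moving a single control point deforms the curve smoothly over its span, which is the behaviour the designer exploits when sculpting a PolyNURBS surface by its vertices, edges and faces.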
• Synthesis of the first approach
This approach uses the topological optimization result as a background layer. Depending on the designer's interpretation and the tools used, the final CAD shape can be more or less close to the topological optimization result. Manufacturing constraints can also be integrated and, after reconstruction, finite element analysis on the STEP or native CAD file can be performed easily. After this shape reconstruction, a shape or size optimization can be carried out.
3.2 Second approach: polyhedron smoothing and automatic
surface reconstruction
In this second approach, the final shape of the part is no longer built "freely" by the designer. The polygonal model from topological optimization is used as the basis for automatic algorithms that build a set of surface patches. These surfaces are fitted to the point cloud built from the triangle nodes of the previous steps. In the present case study, this surface fitting is performed automatically with a surface reconstruction algorithm (here, in the CATIA software). But as the polyhedral model coming from topological optimization is generally very rough and coarse, it cannot be used directly by a CAD modeller to automatically build a fitting surface. In this approach, a variety of mesh manipulation tools can be used to decrease the roughness of the polygonal model. Three variants are presented.
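The automatic surface-fitting step can be illustrated in miniature by fitting a low-order polynomial patch z = f(x, y) to the points carried by the triangle nodes of one patch, in the least-squares sense; real tools fit NURBS patches, and the quadric form and sample points below are invented for the example:

```python
import numpy as np

def fit_quadric_patch(points):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to
    (x, y, z) samples from one mesh patch; returns [a, b, c, d, e, f]."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Sample nodes from a patch that lies exactly on z = 1 + 0.5*x^2 (no noise),
# so the fit should recover the surface almost exactly.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
zs = 1.0 + 0.5 * xs**2
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
c = fit_quadric_patch(pts)
print(np.round(c, 6))   # close to [1, 0, 0, 0.5, 0, 0]
```

On a rough mesh the residual of such a fit never vanishes, which is why the variants below first smooth or remesh the polygonal model before any automatic reconstruction.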
In the first variant (App. 2a), the shrink wrapping approach [5] is used (here with the Magics software4). The new CAD model (Figure 7 (a)) is very close to the topology optimization result, but it is not smooth enough from a perception point of view.
The idea of the second variant (App. 2b) is to generate a uniform-density point cloud from the STL file, and then to remesh this cloud using an appropriate smoothing algorithm. In our example, these manipulations were carried out in MeshLab5 using the Poisson algorithm (Figure 7 (b)).
In the third variant (App. 2c), smoothing is performed with the STL handling module of the CAD software (here Catia), driven by a deviation parameter (Figure 7 (c)).
Fig.7. Final CAD model (a) with App. 2a (b) with App. 2b (c) with App. 2c
• Synthesis of the second approach
Automatic surface reconstruction does not allow shape interpretation by the designer. The time for the overall process depends on algorithm efficiency more than on the strategy chosen by the designer. In the case study presented here, the result is manufacturable, but integrating manufacturing constraints would require additional operations. Also, the large number of elementary surfaces generated by the automatic reconstruction makes finite element analysis difficult. Moreover, a parametric optimization is not possible, since no parameters are available from this approach.
4 Analysis and discussion
The previous case study makes it possible to compare the 5 implemented approaches against several criteria (see Table 1). Some of those criteria were suggested by the literature (identified by an asterisk in Table 1), and some others emerged from the case study.

4 http://software.materialise.com/magics
5 http://meshlab.sourceforge.net/
Table 1. Comparison of 5 CAD reconstruction approaches from the previous case study.

Criteria                                              | App.1a   | App.1b | App.2a | App.2b | App.2c
Execution time (in hours) *                           | 1.5 to 5 | 1.5    | 1.5    | 2.5    | 2
Sensitivity on the execution time (in hours)          | +/- 5    | +/- 3  | +/- 1  | +/- 1  | +/- 1
Time control                                          | ++       | ++     |        |        |
Result shape quality                                  | ++       | ++     | -      | +      |
Repeatability of the result *                         | -        | -      | +      | ++     | ++
Capability to create complex surfaces easily *        | +        | ++     | ++     | ++     |
Capability to create basic surfaces (plane, cylinder) | ++       | +      | -      | -      | -
Capability to be accurate in the part dimensions *    | ++       |        | -      | -      | -
Capability to integrate client requirements easily    | ++       | +      | -      | -      | -
Capability to integrate manufacturing constraints     | ++       |        | -      | -      | -
Number of needed software                             | 1        | 1      | 2      | 3      | 2
Number of elementary surfaces created *               | 795      | 520    | 6900   | 7200   | 6200
File size (MB)                                        | 3        | 8      | 64     | 66     | 70
We can point out that the execution time highly depends on the designer's expertise with the CAD or mesh manipulation software. Choosing one of the approaches depends on the client requirements and on the project time. Regarding the part shape, the approaches enable more or less complex surfaces, but with different quality levels. Approach 1b can create complex and smooth surfaces, but this is not the case for the type 2 approaches, which create complex shapes with a lower quality and a larger number of surfaces. Moreover, these type 2 approaches, while easily creating complex surfaces, have difficulties creating more basic surfaces such as planes or cylinders when needed, especially on functional surfaces. In such cases, we recommend manually merging basic volumes with the semi-automatically created geometry. Another point is that when a part is designed, the aim is to meet client requirements; integrating them requires steering the model creation, which is possible with the type 1 approaches rather than the type 2 ones. The two last criteria, the number of elementary surface patches and the generated file size, are both related to the ease of handling the resulting CAD model. Obviously, the smaller the CAD file, the easier the handling. It should also be noticed that too many surfaces often lead to a more expensive FEA simulation phase, since auto-meshing will generate unnecessary elements. Type 1 approaches prove to be highly preferable to type 2 ones against these last criteria.
5 Conclusion and perspectives
In this paper, several possible approaches to generate a CAD model from topological optimization results have been identified, formalized, and tested on the typical case study of designing a lightweight structure. There is no universal approach for CAD part reconstruction. Moreover, this operation is time consuming, even if the effort depends on the designer's expertise with the different tools and on the part complexity. However, although many topological optimization loops are often necessary in the design process for additive manufacturing of a part, this geometry reconstruction task does not have to be performed after each of these loops. A single reconstruction at the end of the optimization process may be sufficient, and our results can be used as a first guideline for the designer in order to choose and implement a reconstruction approach.
Finally, these results raise a fundamental question: knowing that additive manufacturing machines currently take STL files (polyhedral geometry) as input data, can we avoid this time-consuming step of building CAD surfaces? While from a technical point of view the answer is probably yes in most situations, emancipating design processes from a traditional CAD model would require a considerable change in mindsets and in industrial practices.
References

[1] D. Brackett, I. Ashcroft, and R. Hague, Topology optimization for additive manufacturing, in Proceedings of the Solid Freeform Fabrication Symposium, 2011, 348–362.
[2] P. T. Doutre et al., Optimisation topologique : outil clé pour la conception des pièces produites par fabrication additive, in AIP PRIMECA La Plagne conference, 2015, 73, 1–6.
[3] S. Yang and Y. F. Zhao, Additive manufacturing-enabled design theory and methodology: a critical review, Int J Adv Manuf Technol, 2015, 80, 327–342.
[4] H. K. Ault and A. D. Phillips, Direct Modeling: Easy Changes in CAD?, in 70th Midyear Conference, 2016, 99–106.
[5] L. P. Kobbelt, A Shrink Wrapping Approach to Remeshing Polygonal Surfaces, 1999, 18(3).
[6] R. R. Martin, Reverse engineering geometric models - an introduction, Comput. Des., 1997, 29(4), 255–268.
[7] M. Vieira and K. Shimada, Surface mesh segmentation and smooth surface extraction through region growing, Comput. Aided Geom. Des., 2005, 22, 771–792.
[8] R. R. Martin, T. Varady, and P. Benko, Algorithms for reverse engineering boundary representation models, Comput. Des., 2001, 33, 839–851.
[9] O. Volpin, A. Sheffer, M. Bercovier, and L. Joskowicz, Mesh simplification with smooth surface reconstruction, Comput. Aided Geom. Des., 1998, 30(11), 875–882.
[10] W. Ma and P. He, B-spline surface local updating with unorganized points, 1998, 30(11), 853–862.
[11] P. Tang and K. Chang, Integration of topology and shape optimization for design of structural components, Struct Multidisc Optim, 2001, 22, 65–82.
[12] P. R. Koch, FE-optimization and design of additive manufactured structural metallic parts for telecommunication satellites, in Paris Space Week conference, 2015.
[13] M. Berger et al., State of the Art in Surface Reconstruction from Point Clouds, State of the Art Reports, 2014.
[14] B. Vayre, "Conception pour la fabrication additive, application à la technologie EBM," PhD thesis, Université Grenoble Alpes, 2014.
Review of Shape Deviation Modeling for
Additive Manufacturing
Zuowei ZHU1, Safa KEIMASI1, Nabil ANWER1*, Luc MATHIEU1 and Lihong QIAO2

1 LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris-Saclay, 94235 Cachan, France
2 School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China

*Corresponding author. Tel.: +33-(0)147402413; fax: +33-(0)147402220. E-mail address: anwer@lurpa.ens-cachan.fr
Abstract Additive Manufacturing (AM) is becoming a promising technology capable of building complex customized parts with internal geometries and graded material by stacking up thin individual layers. However, a comprehensive geometric model for Additive Manufacturing is not yet mature. Dimensional and form accuracy and surface finish are still a bottleneck for AM regarding quality control. In this paper, an up-to-date review is given of the methods and approaches that have been developed to model and predict shape deviations in AM and to improve the geometric quality of AM processes. A number of concluding remarks are made, and the Skin Model Shapes paradigm is introduced as a promising framework for the integration of shape deviations in product development and in the digital thread of AM.
Keywords: Additive manufacturing, Geometric deviation, Skin model shapes,
Geometric modeling.
1 Introduction
Additive Manufacturing (AM), as a most frequently used method for rapid prototyping nowadays, was first explored and applied in the automotive, aerospace and
medical industries and it is considered to be one of the pillars of the fourth industrial revolution. Different from traditional machining, in which parts are made by
removing materials from a larger stock through different processes, AM fabricates
volumes layer by layer from their three-dimensional CAD model data.
Additive Manufacturing and its different technologies have been reviewed comprehensively by many authors. A classification of additive manufacturing technologies is shown in Table 1 according to their characteristics.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_25
241
Table 1. Process categories of additive manufacturing as classified by ASTM [1].

Process type               | Description                                               | Related technologies
Binder jetting             | Liquid bonding agent selectively deposited to join powder | Powder bed and inkjet head (PBIH), plaster-based 3D printing (PP)
Material jetting           | Droplets of build material selectively deposited          | Multi-jet modeling (MJM)
Powder bed fusion          | Thermal energy selectively fuses regions of powder bed    | Electron beam melting (EBM), selective laser sintering (SLS)
Directed energy deposition | Focused thermal energy melts materials as deposited       | Laser metal deposition (LMD)
Sheet lamination           | Sheets of material bonded together                        | Laminated object manufacturing (LOM), ultrasonic consolidation (UC)
Vat photopolymerization    | Liquid photopolymer selectively cured by light activation | Stereolithography (SLA), digital light processing (DLP)
Material extrusion         | Material selectively dispensed through nozzle or orifice  | Fused deposition modeling (FDM)
A brief illustration of the digital chain of an AM process is shown in Figure 1. The process can be divided into an input phase, a build phase and an output phase. In the input phase, the CAD model of the part is designed and converted into the STereoLithography (STL) file format, which is readable by AM machines and provides the geometry information. The STL file is basically a triangulated approximation of the designed part, which introduces deviations into the final part. In the build phase, process parameters such as the energy source, layer thickness, build direction, supports and material constraints are set in the machine, and the part is fabricated layer by layer. The output phase is an essential part of the process, in which procedures such as support removal, cleaning, heat treatment and NC machining are executed to ensure the final quality of the part.
Fig. 1. Typical digital chain of an AM process: input phase (CAD model, conversion to STL, file transfer to machine), build phase (machine setup, build) and output phase (removal, post-processing, application).
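For reference, the STL format at the center of this chain is simply a list of triangular facets. A minimal ASCII serializer can be sketched as follows (an illustration only; binary STL is more common in practice, and the zero normals written here are recomputed by most slicers from the vertex order):

```python
def write_ascii_stl(name, triangles):
    """Serialize triangles as a minimal ASCII STL string.

    `triangles` is a list of ((x,y,z), (x,y,z), (x,y,z)) vertex tuples.
    Normals are written as 0 0 0; many tools recompute them from the
    counter-clockwise vertex order.
    """
    lines = [f"solid {name}"]
    for a, b, c in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for v in (a, b, c):
            lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```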
Factors arising in each phase may introduce geometric deviations into the final part, including the quality of the input file, machine errors, build orientation, process parameters, material shrinkage and staircase effects due to the layer thickness. The control of geometrical accuracy remains a major bottleneck in the application of AM.
The Skin Model Shapes (SMS) paradigm, which stems from the theoretical foundations of Geometrical Product Specification and Verification, is proposed as a
comprehensive framework to capture product shape variability in different phases
of product lifecycle [2]. It enables the consideration of geometric deviations that
are expected, predicted or already observed in real manufacturing processes. In-depth research into the integration of thermal effects in its application to tolerance analysis has paved the way for its adaptation to deviation modeling in the context of additive manufacturing [3].
In this paper, a review is given of the methods developed to model and predict shape deviations in AM and to improve the geometric quality of AM processes. Some concluding remarks are made, and the Skin Model Shapes paradigm is introduced as a promising framework for modeling shape deviations in AM.
2 Review of shape deviation modeling for AM
In order to model shape deviations and improve the geometrical accuracy of AM processes, a number of researchers have in recent years proposed models and approaches targeting specific error sources as well as specific AM processes. Since the AM process itself is largely automatic, most of these studies have been devoted to improving the design, and two major categories of approaches can be distinguished among them. One category focuses on changing the CAD design by compensating the shape deviations based on predictive deviation models. The other category focuses on modifying the input files or improving the slicing techniques of AM processes.
2.1 Compensation of shape deviations
Research on deviation compensation has been conducted for different purposes. The compensation models can be briefly classified into machine error compensation and part shrinkage compensation.
Tong et al. [4,5] propose a parametric error model, which describes all the repeatable errors of an SLA (Stereolithography) machine and an FDM (Fused Deposition Modeling) machine with generic parametric error functions. The coefficients of these functions can be estimated by regression from measurement data gathered at given points. The model can then be used to analyze individual deviations in the Cartesian Coordinate System (CCS), based on which appropriate compensations can be applied to the input files to minimize the shape deviations of the final products. However, since the deviations are estimated at individual points, the continuity of the product geometry is not considered, and the accuracy of the estimation depends largely on the selected AM machine and measurement points, so the approach lacks generality.
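The regression step of such a parametric error model can be sketched generically: a low-degree polynomial error function is fitted to deviations measured at probe points by least squares, and the nominal coordinate is then shifted against the predicted error. This is an illustration of the principle only, not the specific error functions of [4,5]:

```python
def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (adequate for the low degrees used in parametric error models)."""
    n = degree + 1
    # Normal equations A^T A c = A^T y for the Vandermonde system.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (aty[i] - sum(ata[i][j] * coef[j]
                                for j in range(i + 1, n))) / ata[i][i]
    return coef  # coef[i] multiplies x**i

def compensate(x_nominal, coef):
    """Shift the nominal coordinate against the predicted error e(x)."""
    e = sum(c * x_nominal ** i for i, c in enumerate(coef))
    return x_nominal - e
```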
A series of studies conducted by Huang and his research team [6,7,8,9,10] has been dedicated to developing a predictive model of geometric deviations that is able to learn from the deviation information obtained from a certain number of tested product shapes and to derive compensation plans for new and untested products. They aim at establishing a generic methodology that is independent of shape
complexity and specific AM processes. Their first attempt [6,7] starts by modeling the in-plane shape deviations induced by specific influential factors in the MIP-SLA and FDM processes; on this basis they develop a generic approach to model and predict the in-plane deviation caused by shape shrinkage along the product boundary and to derive optimal compensation plans [8]. The proposed shrinkage model consists of two main parts: a systematic shrinkage, which is considered constant, and a random shrinkage, which can be predicted from experimented products using statistical approaches. To develop this model, the Polar Coordinate System (PCS) is first used to represent the shape, and the shrinkage is defined as a parametric function denoting the difference between the nominal shape and the actual shape at different angles. Secondly, experiments are conducted and deformations are observed to derive the statistical distribution of the parameters of the function, based on which the shrinkage function can be defined and compensation plans can be made. Figure 2 illustrates the shrinkage model: a point P on the nominal shape is represented in the PCS as r(θ, r0, z), with r and θ denoting its radius and angle, and z denoting the z-coordinate of the 2D plane in which the PCS lies. P′, the final position of P after shrinkage, can then easily be represented by reducing r by a certain Δr, a quantity that is quite difficult to identify in the CCS. Compared with the work of Tong et al., this model reduces the complexity of deviation modeling by transforming the in-plane geometric deviations in the CCS into a functional profile defined in the PCS.
Fig. 2. In-plane shrinkage model in the Polar Coordinate System: the nominal point P = r(θ, r0, z) shrinks by Δr to its actual position P′.
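A toy version of this polar-coordinate model can be written down directly. The shrinkage profile below (a constant systematic term plus a small periodic fluctuation) is illustrative only, not the function fitted in [8]:

```python
import math

def shrinkage(theta, r0, alpha, beta):
    """Toy in-plane shrinkage Delta_r at polar angle theta: a constant
    systematic part (alpha * r0) plus a small angle-dependent fluctuation
    (beta * cos 2*theta). The functional form is an assumption for
    illustration, not the statistically fitted model."""
    return alpha * r0 + beta * math.cos(2.0 * theta)

def actual_radius(theta, r0, alpha, beta):
    # P' is obtained from the nominal point P by reducing r by Delta_r.
    return r0 - shrinkage(theta, r0, alpha, beta)

def compensated_radius(theta, r0, alpha, beta):
    """First-order compensation plan: offset the CAD profile outward by
    the predicted shrinkage so the printed part lands near the nominal."""
    return r0 + shrinkage(theta, r0, alpha, beta)
```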
In a closely related study, Huang et al. [11] extend this approach from cylindrical shapes to polyhedrons. They propose to treat an in-plane polyhedron as a portion carved out of its circumscribed circle. A novel cookie-cutter function, integrated into the cylindrical basis model, determines how the polyhedron is trimmed from its circumscribed circle. This function is defined as a periodic waveform that can be modified according to the polyhedron shape. Later, this model was further extended to freeform shapes [12]. The freeform shape is approximated either by a polygon, using the Polygon Approximation with Local Compensation (PALC) strategy, or by an addition of circular sectors, using the Circular Approximation with Selective Cornering (CASC) strategy. Both strategies can
be easily implemented on top of the previous models. Moreover, in [13] they propose a novel spatial deviation model in the Spherical Coordinate System (SCS), in which both in-plane and out-of-plane errors are incorporated into a consistent mathematical framework. Based on the above-mentioned shape deviation models, compensation plans are derived through experimentation using a stereolithography process. Reviewing all the publications of Huang and his co-authors, we can conclude that their research on shape deviation modeling for AM processes is generic and comprehensive, covering both 2D and 3D deviations and shapes of different complexities. The methodologies have also been validated through extensive experiments, showing effective predictability of the shape shrinkage deviations. However, in order to deduce the exact functions of these models for compensation, measurements have to be made on a certain number of experimental parts to obtain the shape deviation information. This means that these models are only applicable when manufactured products are available. The models lack a consideration of the overall digital chain of the AM process and the ability to predict possible deviations without experimental data.
2.2 Modification of input files
In contrast to the error compensation approaches, which seek to reduce geometric errors by learning from experimental data, a number of researchers have proposed to eliminate the errors in the input files of AM processes. The modification of STL files and the improvement of slicing techniques are the two mainstream research topics. This research is motivated by the fact that AM does not work on the original CAD model but on the STL file, in which the nominal part surface is approximated by a triangular mesh representation. A "chordal error", defined as the Euclidean distance between an STL facet and the CAD surface, is introduced during the translation from CAD to STL, as can be seen in Figure 3(a). Besides, a "staircase error" occurs due to the slicing of the STL file when building the part layer by layer, as shown in Figure 3(b). The maximum cusp height has been adopted as an accuracy measurement parameter for AM processes.
Fig. 3. (a) 2D illustration of the chordal error between the design surface (point PCAD) and the STL surface (point PSTL); (b) 2D illustration of the staircase effect (cusp height) between the CAD model boundary and the additively manufactured part boundary.
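Both error measures have simple closed forms in special cases. The sketch below gives the sagitta formula for the chordal error of a chord against a circular arc, and the common cusp-height estimate t·|cos θ|, with θ the angle between the surface normal and the build direction:

```python
import math

def chordal_error_circle(radius, chord_length):
    """Maximum deviation between a chord and a circular arc of the given
    radius (the sagitta): a closed-form special case of the STL chordal
    error for a circular design profile."""
    half = chord_length / 2.0
    return radius - math.sqrt(radius ** 2 - half ** 2)

def cusp_height(layer_thickness, normal_angle_deg):
    """Staircase cusp height t * |cos(theta)|, where theta is the angle
    between the surface normal and the build direction: worst for nearly
    horizontal surfaces, zero for vertical walls."""
    return layer_thickness * abs(math.cos(math.radians(normal_angle_deg)))
```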
246
Z. Zhu et al.
A notable breakthrough in reducing the chordal error is the Vertex Translation Algorithm (VTA) proposed by Navangul et al. [14], in which multiple points are selected on an STL facet and the chordal error is computed as the distance between each point and its corresponding point on the NURBS patch of the CAD surface. The point with the maximum chordal error is then identified and translated to the CAD surface; three new facets are generated by connecting the translated point with the vertices of the facet and added to the STL file, while the original facet is deleted. Figure 4 gives an illustration of this algorithm. A facet isolation algorithm (FAI) is also introduced to determine the points to be modified, by extracting the STL facets corresponding to the features of the part. This algorithm improves the STL file quality by iteratively modifying the STL facets until the chordal errors are minimized. However, a number of iterations is usually required to satisfy the specified tolerance parameters, and each iteration consumes a significant amount of computation time and enlarges the file size. Similarly, the Surface-based Modification Algorithm (SMA) proposed by Zha et al. [15] modifies the STL file by adaptively and locally increasing the facet density until the geometrical accuracy is satisfied. The individual part surfaces to be modified are selected by estimating the average chordal error and cusp height error of each surface. The modification is then applied to certain points of each STL facet of the selected surface according to predefined rules. However, SMA is likely to increase the STL file size of a surface exponentially, so, given the tradeoff between smaller file size and higher accuracy, it is only preferable for high-accuracy part models with complex features and multiple surfaces.
Fig. 4. Illustration of the VTA algorithm: the facet point with the maximum chordal error is translated onto the design surface, the original triangular facet is deleted, and three new facets connecting the translated point to the facet vertices are added.
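One VTA-style refinement step can be sketched as follows. Here `project` is an assumed helper returning the closest point on the design surface (in the original algorithm this is a query on the NURBS patch), and `samples` are candidate points supplied by the caller:

```python
def split_facet(facet, samples, project):
    """One vertex-translation step (sketch): among candidate points on the
    facet, move the one with the largest chordal error onto the design
    surface and replace the facet by three facets sharing the moved point.

    `facet` is a (a, b, c) vertex triple; `project(p)` maps a point to its
    closest point on the design surface (an assumed helper)."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

    # Chordal error of each sample = distance to its surface projection.
    worst = max(samples, key=lambda p: dist(p, project(p)))
    moved = project(worst)
    a, b, c = facet
    # The original facet is replaced by three facets around the moved point.
    return [(a, b, moved), (b, c, moved), (c, a, moved)]
```

Iterating this step over the facets flagged by the isolation stage drives the chordal errors down, at the cost of a growing facet count.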
Instead of modifying the whole STL file, Sharma [16] proposes to minimize the errors by modifying each 2D slice of the STL file. The STL contour and the designed NURBS contour of each slice are captured, and new points are generated on each triangle chord of the STL contour to check whether the chordal error threshold is exceeded on this chord. If so, the corresponding chord points are translated to the NURBS contour until the chordal error falls below the threshold, and a new STL contour is formed by connecting the new points. This approach can be seen as a 2D version of the VTA algorithm that alters the part at the manufacturing level. Since the modification is done on every slice, it calls for a large computation effort. Moreover, if any changes are made to the slicing plan, the whole process has to be repeated, which reduces its generality and constrains its application.
To reduce the staircase error, research on adaptive slicing has been conducted. Instead of uniform slicing, which slices the STL file with a constant slice thickness, adaptive slicing seeks to slice the file with variable slice thicknesses to achieve the desired surface quality while ensuring a reduced build time. An octree data structure has been adopted by Siraskar et al. [17] to accomplish the adaptive slicing of the object. A method termed the Modified Boundary Octree Data Structure is used to convert the STL file of an object into an octree, by iteratively subdividing a universal cube enclosing the STL file into small node cubes according to defined subdivision conditions. The height values of the final cubes can then be taken as the slice thicknesses. This approach has been shown, through virtual manufacturing, to ensure the geometrical accuracy of the manufactured part, but it is quite limited in real practice due to the lack of proper support for adaptive slicing in mainstream AM machines. To overcome this limitation, a clustered slice thickness approach has been introduced [18], in which clustered strips of varying layer thicknesses are calculated manually from the minimum slice thickness, with each clustered band of uniform slices considered as a separate part built on top of the others along the build direction. A kd-tree data structure is adopted in this study to subdivide the bounding box of the STL file and determine the slice thicknesses, with the cusp error threshold used to decide whether a cube should be further subdivided. The adaptive slicing approaches are theoretically useful in reducing build time and increasing part accuracy, but the main challenge remains the lack of direct support in AM machines.
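A greedy variant of cusp-height-driven adaptive slicing can be sketched as follows (an illustration of the principle, not the octree or kd-tree schemes of [17,18]); `normal_angle_at` is an assumed query on the model returning the local surface inclination:

```python
import math

def adaptive_thicknesses(height, t_min, t_max, cusp_max, normal_angle_at):
    """Greedy adaptive slicing (sketch): at each level pick the largest
    thickness whose staircase cusp t*|cos(theta)| stays below cusp_max,
    clamped to the machine limits [t_min, t_max].

    `normal_angle_at(z)` returns the angle (radians) between the surface
    normal and the build direction at height z -- an assumed model query."""
    z, layers = 0.0, []
    while height - z > 1e-9:
        cos_t = abs(math.cos(normal_angle_at(z)))
        # Vertical walls (cos ~ 0) produce no cusp: use the maximum thickness.
        t = t_max if cos_t < 1e-9 else min(t_max, cusp_max / cos_t)
        t = max(t_min, t)
        t = min(t, height - z)  # make the last layer fit exactly
        layers.append(t)
        z += t
    return layers
```

Flat regions get thin layers and steep regions get thick ones, which is exactly the build-time saving the uniform slicer cannot offer.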
2.3 Evaluation and tolerance verification of shape deviations
The deviations resulting from the proposed models need to be verified against tolerance specifications, so as to provide meaningful feedback for design modification. Current studies focus on evaluating the deviations through an analysis of the STL file: the file is sliced and the points on each slice contour are sampled for the evaluation. Moroni et al. [19] propose verification methods for both dimensional and geometric tolerances in AM processes. For dimensional tolerances, a Least Squares (LSQ) fitting method is used to derive a substitute geometry from the extracted points, the dimension of which is then measured and verified. For geometric tolerances, a Minimum Zone (MZ) fitting method is used to derive the two separating nominal features that enclose all the extracted points with a minimum distance, and their distance is compared with the tolerance value. However, this study overlooks the staircase effect, and the MZ method is not realistic for parts with complex geometries. In a series of works [14,15,16], similar virtual manufacturing methods based on the STL file are adopted to evaluate shape deviations: the contour points of each slice are offset in the build direction by one layer thickness to form a virtual layer, thus taking the staircase effect into account. The profile error of complex shapes is evaluated by calculating the maximum distance between the contour points and their closest corresponding points on the design surface.
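The two checks can be illustrated on a nominally circular slice contour. The sketch below assumes, for simplicity, that the section is centered at the origin; real LSQ and MZ fitting also optimize the position of the fitted feature:

```python
def verify_circular_section(points, nominal_d, dim_tol, form_tol):
    """Toy tolerance check on sampled slice-contour points of a nominally
    circular section centered at the origin (a deliberate simplification).

    Dimensional check: least-squares diameter vs. nominal +/- dim_tol.
    Form check: width of the radial zone (max r - min r), i.e. the gap
    between two concentric circles enclosing all points, vs. form_tol."""
    radii = [(x * x + y * y) ** 0.5 for x, y in points]
    d_lsq = 2.0 * sum(radii) / len(radii)  # LSQ radius for a fixed center
    zone = max(radii) - min(radii)
    return abs(d_lsq - nominal_d) <= dim_tol, zone <= form_tol
```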
2.4 Discussion
In this section, two major categories of shape deviation modeling methods have been reviewed; a comprehensive overview of these methods is given in Table 2. The validity of these methods has been proved through experiments and simulations. However, each of them covers only certain phases of the AM process and lacks the ability to model deviations from an overall view of the product lifecycle.
Table 2. Overview of the reviewed methods.

References | Dimensionality | Geometric model | Main characteristics
[4,5]      | 2D, 3D         | discrete        | Machine errors of FDM and SLA
[6,7]      | 2D             | continuous      | Modeling with PCS
[8-12]     | 2D             | continuous      | Shape shrinkage, freeform shapes
[13]       | 2D, 3D         | continuous      | Modeling with SCS
[14,15]    | 3D             | discrete        | Modifying STL facets
[16]       | 2D             | discrete        | Modifying 2D slice contours
[17,18]    | 3D             | discrete        | Adaptive slicing
3 The Skin Model Shapes paradigm for AM
The main contributions of Skin Model Shapes have been highlighted recently in different applications, such as assembly, tolerance analysis and motion tolerancing. The generation of Skin Model Shapes can be divided into a prediction stage and an observation stage, depending on the information available in the product design and manufacturing processes [2]. Geometric deviations can be classified into systematic and random deviations: systematic deviations originate from characteristic errors of the manufacturing process, while random deviations occur due to inevitable fluctuations of material and environmental conditions. Notably, in the observation stage, geometric deviations are modeled by extracting the statistical information of the deviations from a training set of observations gathered from manufacturing process simulations or measurements, so that possible deviations in newly manufactured parts can be effectively predicted [20]. Figure 5 shows the creation of Skin Model Shapes in both the prediction and observation stages.
Fig. 5. The creation of Skin Model Shapes in the prediction and observation stages [19]: systematic deviations derived from the nominal model, random deviations obtained by applying (K)PCA to observations and sampling scores, and a final check against specifications.
Compared with the above-mentioned approaches, the Skin Model Shapes paradigm can model both 2D and 3D deviations, either by prediction based on assumptions and virtual or real experiments, or by learning from observation data gathered from manufactured samples. It covers the overall digital chain of the AM process; although specific methodologies for its application to AM remain to be developed, it is a suitable and promising modeling framework for AM processes.
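The observation stage can be sketched in miniature: given a training set of deviation vectors (one deviation value per surface point), a dominant deviation mode is extracted and new plausible shapes are sampled from it. The one-mode power iteration below is a simplified stand-in for the full (K)PCA step of the Skin Model Shapes generation:

```python
import math
import random

def principal_mode(deviation_samples, iters=200):
    """Extract the mean (systematic) deviation and the dominant random
    deviation mode from a training set by power iteration on the sample
    covariance matrix -- a one-mode sketch of the (K)PCA step."""
    m = len(deviation_samples)
    n = len(deviation_samples[0])
    mean = [sum(s[i] for s in deviation_samples) / m for i in range(n)]
    centered = [[s[i] - mean[i] for i in range(n)] for s in deviation_samples]
    cov = [[sum(c[i] * c[j] for c in centered) / (m - 1) for j in range(n)]
           for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mean, v

def sample_shape(mean, mode, sigma, rng):
    """Generate one plausible skin model shape: the systematic part (mean)
    plus a Gaussian random score on the dominant mode."""
    score = rng.gauss(0.0, sigma)
    return [mu + score * e for mu, e in zip(mean, mode)]
```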
4 Conclusion
Geometrical accuracy is an important concern for AM processes. In this paper, the current research status of shape deviation modeling for AM processes has been reviewed. The major modeling methods, categorized as deviation compensation methods and input file modification methods, have been discussed, and their advantages and limitations have been highlighted. The Skin Model Shapes paradigm has been introduced as a promising modeling framework for AM. In further studies, the authors will focus on adapting the Skin Model Shapes paradigm to the AM process and on developing a comprehensive deviation modeling framework to support the whole digital chain.
Acknowledgments This research has benefited from the financial support of the China Scholarship Council (first author).
References
1. ASTM F2792-12a, Standard Terminology for Additive Manufacturing Technologies, ASTM International, West Conshohocken, PA, 2012.
2. Anwer N., Ballu A., and Mathieu L. The skin model, a comprehensive geometric model for engineering design. CIRP Annals-Manufacturing Technology, 2013, 62(1), 143-146.
3. Garaizar O. R., Qiao L., Anwer N., and Mathieu L. Integration of Thermal Effects into Tolerancing Using Skin Model Shapes. Procedia CIRP, 2016, 43, 196-201.
4. Tong K., Amine Lehtihet E., and Joshi S. Parametric error modeling and software error compensation for rapid prototyping. Rapid Prototyping Journal, 2003, 9(5), 301-313.
5. Tong K., Joshi S., and Amine Lehtihet E. Error compensation for fused deposition modeling (FDM) machine by correcting slice files. Rapid Prototyping Journal, 2008, 14(1), 4-14.
6. Xu L., Huang Q., Sabbaghi A., and Dasgupta T. Shape deviation modeling for dimensional quality control in additive manufacturing. In ASME 2013 International Mechanical Engineering Congress and Exposition, IMECE 2013, San Diego, November 2013, p. V02AT02A018.
7. Song S., Wang A., Huang Q., and Tsung F. Shape deviation modeling for fused deposition modeling processes. In IEEE International Conference on Automation Science and Engineering, CASE 2014, Taipei, August 2014, pp. 758-763.
8. Huang Q., Zhang J., Sabbaghi A., and Dasgupta T. Optimal offline compensation of shape shrinkage for three-dimensional printing processes. IIE Transactions, 2015, 47(5), 431-441.
9. Huang Q., Nouri H., Xu K., Chen Y., Sosina S., and Dasgupta T. Statistical Predictive Modeling and Compensation of Geometric Deviations of Three-Dimensional Printed Products. Journal of Manufacturing Science and Engineering, 2014, 136(6), 061008.
10. Sabbaghi A., Dasgupta T., Huang Q., and Zhang J. Inference for deformation and interference in 3D printing. The Annals of Applied Statistics, 2014, 8(3), 1395-1415.
11. Huang Q., Nouri H., Xu K., Chen Y., Sosina S., and Dasgupta T. Predictive modeling of geometric deviations of 3D printed products: a unified modeling approach for cylindrical and polygon shapes. In IEEE International Conference on Automation Science and Engineering, CASE 2014, Taipei, August 2014, pp. 25-30.
12. Luan H. and Huang Q. Predictive modeling of in-plane geometric deviation for 3D printed freeform products. In IEEE International Conference on Automation Science and Engineering, CASE 2015, Gothenburg, August 2015, pp. 912-917.
13. Jin Y., Qin S.J., and Huang Q. Out-of-plane geometric error prediction for additive manufacturing. In IEEE International Conference on Automation Science and Engineering, CASE 2015, Gothenburg, August 2015, pp. 918-923.
14. Navangul G., Paul R., and Anand S. Error minimization in layered manufacturing parts by stereolithography file modification using a vertex translation algorithm. Journal of Manufacturing Science and Engineering, 2013, 135(3), 031006.
15. Zha W. and Anand S. Geometric approaches to input file modification for part quality improvement in additive manufacturing. Journal of Manufacturing Processes, 2015, 20, 465-477.
16. Sharma K. Slice Contour Modification in Additive Manufacturing for Minimizing Part Errors. Electronic thesis, University of Cincinnati, 2014.
17. Siraskar N., Paul R., and Anand S. Adaptive Slicing in Additive Manufacturing Process Using a Modified Boundary Octree Data Structure. Journal of Manufacturing Science and Engineering, 2015, 137(1), 011007.
18. Panhalkar N., Paul R., and Anand S. Increasing Part Accuracy in Additive Manufacturing Processes Using a kd Tree Based Clustered Adaptive Layering. Journal of Manufacturing Science and Engineering, 2014, 136(6), 061017.
19. Moroni G., Syam W.P., and Petrò S. Towards early estimation of part accuracy in additive manufacturing. Procedia CIRP, 2014, 21, 300-305.
20. Anwer N., Schleich B., Mathieu L., and Wartzack S. From solid modelling to skin model shapes: Shifting paradigms in computer-aided tolerancing. CIRP Annals-Manufacturing Technology, 2014, 63(1), 137-140.
Design for Additive Manufacturing of a nonassembly robotic mechanism
F. De Crescenzio and F. Lucchi
Department of Industrial Engineering, University of Bologna
Corresponding author. Tel.: +39 0543 374447;
E-mail address: francesca.decrescenzio@unibo.it, f.lucchi@unibo.it
Abstract
The growing potential of additive manufacturing technologies is currently being boosted by their introduction in the direct manufacturing of ready-to-use products or components, regardless of their shape complexity. Taking advantage of this capability, a whole set of new solutions to be explored is related to the possibility of directly manufacturing joints or mechanisms as a unibody structure.
In this paper, the preliminary design of a robotic mechanism is presented. The component is designed to be manufactured as a unibody structure by means of an Additive Manufacturing technology. The Fused Deposition Modelling technique is used to print the mechanical arm as a single component, composed of different functional parts already assembled in the CAD model. Soluble support material is commonly used to support undercuts; in this case it is also deposited in the space between two adjacent parts of the same component, in order to allow the relative motion and the kinematic connection between them. The design process considers component optimization in relation both to the specific manufacturing technique and to the interaction between the different parts of the same component, in order to guarantee the proper relative motions.
The conceived mechanism consists of a robotic structure in which the mechanical arm is bound to a base and connected to a plier on the opposite side. The effect of clearance between all the kinematic parts is evaluated in order to assess the mechanism's degree of mobility in relation to the manufacturing process and to component tolerances and geometry.
Keywords: Design Methods, Additive Manufacturing, unibody mechanism manufacturing, clearance assessment, Fused Deposition Modeling technique
1 Introduction
Additive Manufacturing (AM) and Rapid Prototyping (RP) technologies and techniques concern layer-based additive fabrication of components and prototypes [1].
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_26
Such techniques are mainly used in product development and manufacturing with
the aim of reducing time to market, optimizing costs and the whole design and
fabrication chain. 3D printing allows making parts, components, sub-components and tools in a wide variety of materials in a simple way, manufacturing without tooling, assembly lines or supply chains. In this perspective, AM changes the paradigms of classical industrial manufacturing and development, as well as the design processes and procedures [2].
On the other hand, high customization of products and services, delivery time and green design are currently the new challenges to take into consideration in innovative product development. In this context, 3D printers are being used to economically create custom, improved, ready-to-use products, independently of geometry complexity. Such features make AM techniques particularly attractive for small fabrication batches or low-volume production.
3D printing technology is evolving rapidly with regard to manufacturing time, improved materials and reduced costs. The flexibility to build a wide range of products and shapes implies a serious change to supply chains and business models.
The design procedures and requirements are changing as well, since innovations in product design are introduced in relation to AM paradigms. First of all, tolerances and dimensions depend on the layer thickness, the building machine properties, the specific technology used and the process parameters. Furthermore, the possibility to design and 3D print the internal structure is a key feature for optimizing the balance between the volume of material used, component weight, final product resistance and performance. In this perspective, AM gives the designer precise control of the deposition of the material, with the possibility to model component internal structures layer by layer.
In this paper the authors explore the effect of clearance parameters in the design and manufacturing of a case study for which the unibody mechanism is particularly relevant, due to the number and complexity of the joints needed. If no further assembly phases are required, production times and costs can be reduced. Furthermore, the mechanism's performance can be improved, reducing errors in part positioning and fixing. In the proposed case study the clearance is assessed in order to determine the optimal play between the parts that makes each couple posable and/or movable up to the force applied to the joint.
1.1 Joint Design for 3D Additive Manufacturing
Design For Manufacturing (DFM) approaches aim to integrate manufacturing rules and aspects within product design, optimizing object performance and features according not only to product requirements, but also to the selected manufacturing process. Many research activities aim to combine DFM principles with AM techniques, defining new design approaches, methodologies and manufacturability evaluations to appreciably improve final product quality [3].
One of the most challenging innovations brought by the use of AM in robotics and product manufacturing is the removal of the need for component assembly [4]. This has been demonstrated to reduce both time and costs in product manufacturing, even if the design phases have to be improved in some respects, such as increasing joint accuracy and reducing coupling clearance, which in a 3D printed mechanism could be too large, reducing movement precision. A kinematic pair clearance assessment is proposed in [5], where empirical tests are performed on revolute, prismatic and cylindrical pairs. Functional relationships between extraction force and clearance, and between moment and clearance, are developed. Many studies explore novel joint designs, reducing pair clearance and increasing stability. In this respect, in [6] a novel joint has been designed with the introduction of add-on markers on the surfaces, reducing joint clearance and improving stability and rotation performance. Some authors test the impact of drum-shaped rollers on the dynamic behavior of mechanisms, reducing instability effects [7].
Unibody 3D printed mechanical joints can find application as functional articulations for biomedical case studies. Such articulated models should be posable: the joints have to hold a defined position, and the internal friction must withstand gravity, without the parts fusing during the deposition process. In this respect, [8] proposes a design workflow for 3D printed posable articulated models without any assembly requirements.
2 Preliminary Design of a Robotic Mechanism
The proposed case study is related to the conceptual design of the main body of a
flexible robotic arm printed by means of the FDM technique. In this paper we do not yet consider the electronic actuation. The arm's purpose is to assist in moving small objects, simulating the shoulder and elbow mechanism. The design requirements are mainly related to the small dimensions of the arm, and the coupling parts should give the arm extremity the capability to reach every point within the volume defined by the arm length. Furthermore, the slenderness S (1) is considered as the ratio between the sum of the lengths of all the arm components (li) and the average area of the arm sections (Am):
S = (Σ li) / Am        (1)
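Equation (1) is simple enough to evaluate directly. The sketch below computes the slenderness for an illustrative set of segment lengths and section area; the numeric values and the function name are our own, not taken from the paper.

```python
# Slenderness S from Eq. (1): sum of the component lengths l_i divided
# by the average section area A_m. Values below are illustrative only.

def slenderness(lengths_mm, avg_section_area_mm2):
    """S = sum(l_i) / A_m  (units: 1/mm)."""
    return sum(lengths_mm) / avg_section_area_mm2

# Three hypothetical arm segments and an average section area of 150 mm^2:
print(slenderness([60.0, 45.0, 30.0], 150.0))  # 0.9
```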
The robotic arm is divided into two parts whose movements permit the motion of an object in 3D space: the mechanical arm and the pliers. The focus of the activity is to determine the optimal design of the structure of both parts, considering manufacturing them as unibody components. The optimal joint play is assessed as a balance between the friction necessary to make a position stable and the clearance needed to allow movements.
Robotic mechanism movements are divided into sub-tasks: mechanical arm movements, pliers movements, and pliers opening and closing. A kinematic chain has been defined as follows: 3 rotating pairs with parallel lines of action are used for the mechanical arm movements; 2 rotating pairs with perpendicular lines of action are used for the pliers movements; and 2 gearwheels are used for pliers opening and closing. The main selection parameters are precision and ease of positioning, component resistance, reduced space requirements and a simple architecture.
The preliminary design of the mechanical arm and of the pliers has been developed, introducing the selected kinematic chains (Figures 1 and 2).
The aim of the research activity is to optimize the sizing of the rotating pairs to be manufactured as unibody structures, in order to allow movement within the joint (clearance effect), with a proper positioning between all the parts and with the possibility to make every position of the arm extremity a stable equilibrium position (posable effect).
The final solution is prototyped with the FDM (Fused Deposition Modeling) technique as a demonstration of the developed concept.
Fig. 1. Mechanical arm structure.
Fig. 2. Pliers structure.
3 Clearance Assessments
The preliminary design phase of the robotic mechanism leads to the general dimensioning of the components, defining the coupling types and their connections. All the rotating pairs are sized by a nominal value corresponding to the shaft diameter; hole diameters are selected by assessing the clearance effect between the surfaces.
An experimental test plan is defined and investigations are performed on the basis of the reference joints. The nominal diameter of the shaft is taken as the reference for each test, and the diameter of the hole is increased by discrete values, in order to determine the minimum play that allows the posable effect, in relation to the specific manufacturing technology.
All the test specimens are manufactured with a Stratasys Fortus 250 3D printer, based on the Fused Deposition Modeling (FDM) technique, which builds parts up layer by layer by heating and extruding a thermoplastic (ABS) filament. Each tested joint (a shaft/hole coupling) is treated and prototyped as a single part; the support material is used to position the two sub-components during the manufacturing phase and is then removed with a basic solvent at 70 °C.
The hole diameter is increased considering the minimum clearance necessary to obtain two separate movable parts after the manufacturing and post-processing procedures.
Table 1. Pliers joints: test set-up

Rotating pair   Nominal shaft diameter   Hole diameters (mm)                Clearance (mm)
RP1             13 mm                    13.2 / 13.3 / 13.4 / 13.6 / 13.8   0.1 / 0.15 / 0.2 / 0.3 / 0.4
RP2             15 mm                    15.2 / 15.3 / 15.4 / 15.6 / 15.8   0.1 / 0.15 / 0.2 / 0.3 / 0.4
RP3             3 mm                     3.2 / 3.3 / 3.4 / 3.6 / 3.8        0.1 / 0.15 / 0.2 / 0.3 / 0.4
The test plan is conceived as represented in Table 1 and Table 2, considering the pliers and mechanical arm dimensions respectively. Five specimens are designed and manufactured for each reference nominal shaft diameter; the respective hole diameters are increased by 0.2 mm, 0.3 mm, 0.4 mm, 0.6 mm and 0.8 mm. In total, 20 sample joints are designed and prototyped. The selected geometry and the prototyped models are represented in Figure 3.
Table 2. Mechanical arm joints: test set-up

Rotating pair   Design shaft diameter   Hole diameters (mm)                Clearance (mm)
RA1             15 mm                   15.2 / 15.3 / 15.4 / 15.6 / 15.8   0.1 / 0.15 / 0.2 / 0.3 / 0.4
RA2             13 mm                   13.2 / 13.3 / 13.4 / 13.6 / 13.8   0.1 / 0.15 / 0.2 / 0.3 / 0.4
RA3             7 mm                    7.2 / 7.3 / 7.4 / 7.6 / 7.8        0.1 / 0.15 / 0.2 / 0.3 / 0.4
Fig. 3. Designed and prototyped joint samples.
The optimal clearance values are determined as a balance between the possibility to move the parts of each joint and the posable factor. Since the case study has only a research purpose, the assessment is performed manually; for further development on a real case study, the actual forces and strains have to be considered and applied to each specimen.
The selected clearance values are then used to optimize the sizing of the robotic mechanism.
4 Results
Table 3. Clearance assessment: results

Rotating pair (design shaft diameter)   Hole diameter   Clearance   Joint movable   Joint posable
RA1 - RP2 (15 mm)                       15.2 mm         0.1 mm      No              -
                                        15.3 mm         0.15 mm     No              -
                                        15.4 mm         0.2 mm      Yes             Yes
                                        15.6 mm         0.3 mm      Yes             No
                                        15.8 mm         0.4 mm      Yes             No
RA2 - RP1 (13 mm)                       13.2 mm         0.1 mm      No              -
                                        13.3 mm         0.15 mm     No              -
                                        13.4 mm         0.2 mm      Yes             Yes
                                        13.6 mm         0.3 mm      Yes             No
                                        13.8 mm         0.4 mm      Yes             No
RA3 (7 mm)                              7.2 mm          0.1 mm      No              -
                                        7.3 mm          0.15 mm     Yes             Yes
                                        7.4 mm          0.2 mm      Yes             No
                                        7.6 mm          0.3 mm      Yes             No
                                        7.8 mm          0.4 mm      Yes             No
RP3 (3 mm)                              3.2 mm          0.1 mm      No              -
                                        3.3 mm          0.15 mm     Yes             Yes
                                        3.4 mm          0.2 mm      Yes             No
                                        3.6 mm          0.3 mm      Yes             No
                                        3.8 mm          0.4 mm      Yes             No
Fig. 4. Final prototyped model.
The manufactured sample joints are used to assess the necessary clearance. The optimal values are selected as the reference to design and manufacture the resulting robotic mechanism. The results are represented in Table 3.
The selected hole diameters of the joints are 15.4 mm, 13.4 mm, 7.3 mm and 3.3 mm. The preliminary design of the mechanical arm is optimized with the defined values and prototyped (Figure 4).
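The selection rule implied by Table 3 (the smallest hole diameter for which the joint is both movable and posable) can be expressed as a short script. The rows below transcribe two of the joints of Table 3, with the blank posable entries of non-movable joints recorded as False; the function name is our own.

```python
# Pick the optimal hole diameter per joint from the movable/posable
# assessment: the smallest hole, hence the smallest clearance, for which
# the joint is both movable and posable. Data transcribed from Table 3.
RESULTS = {
    "RA1-RP2": [(15.2, False, False), (15.3, False, False),
                (15.4, True, True), (15.6, True, False), (15.8, True, False)],
    "RA3":     [(7.2, False, False), (7.3, True, True),
                (7.4, True, False), (7.6, True, False), (7.8, True, False)],
}

def optimal_hole(rows):
    """First (hole, movable, posable) row with both flags set."""
    for hole, movable, posable in rows:
        if movable and posable:
            return hole
    return None

print(optimal_hole(RESULTS["RA1-RP2"]))  # 15.4
print(optimal_hole(RESULTS["RA3"]))      # 7.3
```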
5 Discussion and further developments
In this paper, an experimental procedure is proposed to obtain posable structures from parts assembled in a Computer Aided Design environment and manufactured as unibody structures by means of Additive Manufacturing. The described approach is based on the use of a reference shaft diameter and a progressive increase of the hole diameter for each joint of the designed structure. The experiments have been conducted on a Fused Deposition Modelling machine with soluble support technology. Soluble or removable support is the main assumption for the implementation of the proposed workflow. The same approach could be implemented with powder-based AM technologies, such as powder-bed 3D printers or SLS (Selective Laser Sintering). Further developments are related to a deeper investigation of the connection between a joint's capability to sustain an active force and its clearance. The results show different behavior in relation to the shaft nominal diameter: further investigations will address this aspect and the manufacturing process parameters too.
This study is intended to contribute to the knowledge related to the creation of non-assembly structures, as this kind of structure may open the way toward several new and challenging applications of additive manufacturing, such as robotics and the reconfiguration of layouts in product development.
References

1. Wong, K.V. and Hernandez, A. A review of additive manufacturing. ISRN Mechanical Engineering, 2012.
2. Gao, W., Zhang, Y., Ramanujan, D., Ramani, K., Chen, Y., Williams, C.B., Wang, C.C.L., Shin, Y.C., Zhang, S., and Zavattieri, P.D. The status, challenges, and future of additive manufacturing in engineering. Computer-Aided Design, 2015, 69, 65-89. http://dx.doi.org/10.1016/j.cad.2015.04.001
3. Kerbrat, O., Mognol, P., and Hascoët, J.-Y. A new DFM approach to combine machining and additive manufacturing. Computers in Industry, 2011, 684-692. doi:10.1016/j.compind.2011.04.003
4. Bruyas, A., Geiskopf, F., and Renaud, P. Toward unibody robotic structures with integrated functions using multimaterial additive manufacturing: case study of an MRI-compatible interventional device. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, Hamburg, 2015, pp. 1744-1750. doi:10.1109/IROS.2015.7353603
5. Pareek, S., Sharma, V., and Rai, R. Design for additive manufacturing of kinematic pairs. In The International Solid Freeform Fabrication Symposium, The University of Texas at Austin, Texas, USA, August 2014.
6. Song, X. and Chen, Y. Joint design for 3-D printing non-assembly mechanisms. In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 5: 6th International Conference on Micro- and Nanosystems; 17th Design for Manufacturing and the Life Cycle Conference. Chicago, Illinois, USA, August 12-15, 2012. ISBN: 978-0-7918-4504-2. doi:10.1115/DETC2012-71528
7. Chen, Y. and Zhezheng, C. Joint analysis in rapid fabrication of non-assembly mechanisms. Rapid Prototyping Journal, 2011, 17(6), 408-417. doi:10.1108/13552541111184134
8. 3D-printing of non-assembly, articulated models. ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH Asia 2012, 31(6), November 2012, Article No. 130. doi:10.1145/2366145.2366149
Process parameters influence in additive
manufacturing
T. Ingrassia*, V. Nigrelli, V. Ricotta, C. Tartamella
Università degli Studi di Palermo, Dipartimento di Ingegneria Chimica, Gestionale, Informatica, Meccanica, Viale delle Scienze, 90128 Palermo, Italy
* Corresponding author. Tel.: +39 091 23897263; E-mail address: tommaso.ingrassia@unipa.it
Abstract Additive manufacturing is a rapidly expanding technology. It allows the creation of very complex 3D objects by adding layers of material, in contrast to traditional production systems based on the removal of material. The development of additive technology initially produced a generation of additive manufacturing techniques restricted to industrial applications, but their extraordinary degree of innovation has allowed the spread of household systems. Nowadays, the most common domestic systems produce 3D parts through a fused deposition modeling process. Such systems have low productivity and usually make objects with limited accuracy and unreliable mechanical properties. These shortcomings can depend on the process parameters.
The aim of this work is to study the influence of some typical parameters of the additive manufacturing process on the characteristics of the prototypes. In particular, the influence of the layer thickness on the shape and dimensional accuracy has been studied. Cylindrical specimens have been created with a 3D printer, the Da Vinci 1.0A by XYZprinting, using ABS filaments.
Dimensional and shape inspection of the printed components has been performed following a typical reverse engineering approach. In particular, the point clouds of the surfaces of the different specimens have been acquired through a 3D laser scanner. Afterwards, the acquired point clouds have been post-processed, converted into 3D models and analysed to detect any shape or dimensional difference from the initial CAD models. The obtained results may constitute a useful guideline to choose the best set of process parameters to obtain printed components of good quality in a reasonable time while minimizing the waste of material.
Keywords: Additive Manufacturing, Reverse Engineering, Process Parameters,
3D Printing.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_27
1 Introduction
The term Additive Manufacturing (AM) groups all the technologies that allow 3D objects to be obtained by sequentially adding very thin layers of material.
The development of additive technology initially produced a generation of AM systems restricted to industrial applications, but their extraordinary degree of innovation has allowed them to spread to a large public.
Nowadays, the terms Additive Manufacturing and 3D Printing have become common among very different users who are, quite often, not specialized in the engineering field. The spread of low-cost 3D printers, in fact, has made this technology accessible to domestic users who, with no particular technical skill, are able to produce 3D objects at home in a short time and at reduced cost.
The most common domestic systems produce 3D parts through a Fused Deposition Modeling (FDM) [1-3] process. This technology is based on the concept that any 3D object can theoretically be subdivided into multiple sections or thin layers through a slicing procedure. Consequently, any 3D object can be created by superimposing many physical layers, one after the other. Using the FDM methodology, a filament of thermoplastic material is brought to the plastic state through a heated extruder and, immediately after, it is deposited on a support plane following a path that defines one of the multiple layers into which the object to be produced has been virtually subdivided. To achieve the correct shape of each single layer, the extruder usually moves along a horizontal plane. The superimposition of the different layers is obtained through vertical movements of the support plane or of the extruder. Unfortunately, low-cost FDM 3D printers have low productivity and usually make objects with low dimensional and geometric accuracy [3-6]. These limits are of course due to the hardware and software solutions but also to the mechanical components which, because of the reduced cost of the printer, cannot be high-performance. Anyway, some of these side effects can be effectively limited by choosing suitable printing process parameters [7-10].
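The slicing idea described above can be sketched in a few lines: given an object height and a constant layer thickness, compute the z level of each deposited layer. This is an illustrative sketch, not the actual slicer of any of the printers discussed here; the 30 mm height is an arbitrary example.

```python
# Minimal illustration of slicing: an object of height h is subdivided
# into layers of constant thickness t, and material is deposited at each
# resulting z level.
import math

def slice_levels(height_mm, layer_thickness_mm):
    """z coordinate of the top of each deposited layer."""
    # round() guards against floating-point noise in the division
    n_layers = math.ceil(round(height_mm / layer_thickness_mm, 9))
    return [min((i + 1) * layer_thickness_mm, height_mm)
            for i in range(n_layers)]

print(len(slice_levels(30.0, 0.2)))    # 150
print(len(slice_levels(30.0, 0.05)))   # 600
```

Halving the layer thickness doubles the number of layers, which is why a 0.05 mm setting trades printing time for (potentially) better accuracy.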
The aim of this work is to investigate the dimensional and geometric accuracy of a common low-cost 3D printer and to study how some process parameters affect the geometrical and dimensional characteristics of the prototypes. In particular, accuracy analyses have been performed on cylindrical specimens obtained using different values of the layer thickness. The inspection of the physical prototypes has been performed through a typical reverse engineering approach [11-13], using a 3D laser scanner.
2 Materials and Methods
The analysed prototypes have been built using a Da Vinci 1.0A 3D printer (Fig. 1). The Da Vinci 1.0A can use both ABS and PLA filaments, has a maximum build volume of 20x20x20 cm and is managed by the proprietary "XYZware" software. Unfortunately, this software does not allow the printing parameters to be edited individually and with accuracy. For this purpose, the "Slic3r" and "Simplify3D" software packages have been used. The ".gcode" files created with these programs, containing the imposed set of printing parameters, have been imported into the XYZware software to launch the printing phase.
Fig. 1. The Da Vinci 1.0A 3D printer
As said previously, the quality of AM prototypes can strongly depend on a series of process parameters [7,9].
This paper presents some results of a more extensive research activity aimed at finding a correlation between the main printing parameters (layer thickness, extrusion width, printing speed, slicing direction and extrusion temperature) and the shape and dimensional accuracy of printed objects.
At this stage, preliminary results regarding the layer thickness, which represents the thickness of each slice, are presented.
The survey has been conducted on the cylindrical feature shown in figure 2.
Fig. 2. Dimensions of the analysed cylinder and two prototypes printed with 0.2 mm (on the left)
and 0.05 mm (on the right) layer thickness values
Prototypes made using two different values (0.2 mm and 0.05 mm) of the layer thickness (fig. 2) have been tested. As regards the other printing parameters, the default values of the Da Vinci printer have been used (table 1), and the slicing direction has been imposed along the z-axis.
Table 1. Main printing parameters

Parameter            Value
Nozzle diameter      0.4 mm
Print speed          5 mm/s
Bed temperature      90 °C
Nozzle temperature   210 °C
Fill density         25%
Wall thickness       0.7 mm
Some attempts have also been made to evaluate different slicing directions. Unfortunately, in no case has an acceptable prototype been obtained (fig. 3). These printing failures reveal a remarkable technical limit of the printer.
Fig. 3. Cylindrical specimens obtained with the slicing direction along the x-axis
2.1 3D acquisition and CAD modelling
The dimensional and shape accuracy inspection of the printed components has been performed following a typical reverse engineering approach [14-15]. All the prototypes have been acquired with a triangulation-based 3D laser scanner [16-18] by Hexagon Metrology (Fig. 4). This system has adjustable line lengths and is able to acquire point clouds at high speed (150,000 points per second) with a good accuracy level (0.013 mm).
Fig. 4. Hexagon metrology 3D laser scanner HP-L-20.8
The acquired point clouds (fig. 5,a) have been post-processed and converted
into NURBS surfaces (fig. 5,b) obtaining models that differ less than 0.035 mm
from the point clouds. In the final step of the process, the NURBS surfaces have
been converted into CAD solid models (fig. 5,c).
Subsequently, each prototype CAD model has been aligned with the nominal CAD model (fig. 6).
Fig. 5. a) Point cloud, b) NURBS surfaces and c) CAD model of a printed cylinder
The alignment has been obtained using iterative algorithms [7,18] to minimize the distance between the comparison points. To this aim, two different approaches have been used sequentially: at first, using a reduced number of comparison points, the models have been roughly aligned; afterwards, during the fine adjustment phase, the alignment has been optimized using a higher number of comparison points. Once all the CAD model alignments had been performed, dimensional and shape inspections of the printed prototypes were carried out.
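Iterative alignment of this kind typically alternates a correspondence search with a rigid (rotation plus translation) fit between the matched point sets. The closed-form SVD solution of that core fitting step (the Kabsch algorithm) is sketched below; this is a generic sketch, not the specific algorithm of the inspection software used in the paper, and a full ICP-style loop would re-estimate the correspondences at every iteration, first on a coarse and then on a dense point set.

```python
# Rigid registration core step: best-fit rotation R and translation t
# mapping src points onto dst points (Kabsch / SVD solution).
import numpy as np

def rigid_fit(src, dst):
    """Return (R, t) minimizing sum ||R @ src_i + t - dst_i||^2."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Check: recover a pure translation of a small synthetic cloud.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
offset = np.array([0.5, -0.2, 0.1])
R, t = rigid_fit(pts, pts + offset)
print(np.allclose(pts @ R.T + t, pts + offset))  # True
```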
Fig. 6. Aligned (acquired and nominal) CAD models.
3 Dimensional inspection
The dimensional inspection has been implemented by measuring the deviation values, considered as the shortest distance from the acquired model to any point on the nominal CAD model. Root Mean Square Errors (RMSE) have also been estimated, in order to gather information about the dimensional accuracy of the printed object.
The obtained results are summarized in table 2.
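As a sketch of how the two figures of merit in Table 2 can be computed from a set of signed point-to-nominal distances (the sample distances below are illustrative, not measured data):

```python
# Maximum (signed) deviation and RMSE over point-to-nominal distances.
import math

def max_deviation(signed_distances_mm):
    """Signed deviation with the largest absolute value."""
    return max(signed_distances_mm, key=abs)

def rmse(signed_distances_mm):
    """Root mean square of the point-to-nominal distances."""
    return math.sqrt(sum(d * d for d in signed_distances_mm)
                     / len(signed_distances_mm))

devs = [-0.10, -0.35, 0.05, -0.50, -0.20]   # illustrative sample (mm)
print(max_deviation(devs))        # -0.5
print(round(rmse(devs), 3))       # 0.292
```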
Table 2. Layer thickness values and comparison results of the prototypes

Prototype   Layer thickness (mm)   Deviation (mm)   RMSE (mm)
1           0.2                    -0.589           0.324
2           0.2                    -0.550           0.254
3           0.2                    -0.641           0.318
4           0.05                   -0.524           0.298
5           0.05                   -0.663           0.297
6           0.05                   -0.609           0.324
Figure 7 shows coloured maps of the deviation value distributions over the CAD models of the prototypes. From these maps it can be noticed that large differences between the acquired models and the nominal CAD model occur along the z-axis. All the prototypes, in fact, are shorter than the nominal cylinder.
Fig. 7. Deviation (mm) maps of printed cylinders
The obtained results show that the maximum (absolute) deviation value ranges from 0.524 mm to 0.663 mm, while the RMSE varies between 0.254 mm and 0.324 mm. Prototype 4 has the lowest deviation value, whereas the lowest RMSE has been estimated for prototype 2.
Afterwards, following the previously described alignment procedure, a comparative survey among the prototypes has also been carried out to evaluate the repeatability of the results. In particular, prototypes 1-2-3 (created using the same set of printing parameters) have been compared two by two. The comparison results are summarized in the deviation maps of figure 8.
Fig. 8. Comparison between printed cylinders: deviations (mm) maps
Similar values (about 0.280 mm) of the maximum deviation have been measured for the three comparative analyses. The largest maximum deviation value, equal to 0.288 mm, has been found between prototypes 1 and 2.
These results demonstrate a moderate quality of the printer with regard to the
repeatability of the produced objects.
4 Circularity and cylindricity control
Circularity and cylindricity controls of the specimens have also been performed.
As regards circularity, a qualitative analysis has been made. In particular, cross sections (along the x-y plane of fig. 2) at half the height of the cylinders have been extracted and compared with the nominal circular section.
Figure 9 shows the obtained results. It can be noticed that the cross sections of all the prototypes are not perfectly circular but have a pseudo-elliptical shape, demonstrating a repetitive circularity error.
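A quantitative counterpart to this qualitative check can be obtained by comparing each cross-section point's distance from the section centroid with the nominal radius: an elliptical section shows up as a systematic spread between minimum and maximum radius. The sketch below uses a synthetic pseudo-elliptical section; it is not the inspection procedure actually used in the paper.

```python
# Radial deviation range of a cross-section with respect to the nominal
# circle: (min radius - R_nom, max radius - R_nom).
import math

def radial_range(points, nominal_radius):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    return min(radii) - nominal_radius, max(radii) - nominal_radius

# Synthetic ellipse with semi-axes 10.2 mm and 9.8 mm vs a 10 mm circle:
pts = [(10.2 * math.cos(a), 9.8 * math.sin(a))
       for a in (2 * math.pi * k / 360 for k in range(360))]
lo, hi = radial_range(pts, 10.0)
print(round(lo, 2), round(hi, 2))  # -0.2 0.2
```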
Fig. 9. Comparison of cross sections (x-y plane)
The cylindricity control has been made by finding, for each specimen, the
tolerance region, that is the zone bounded by two concentric cylinders (“inner”
and “outer”) within which all the points of the cylindrical surface lie (fig. 10).
Fig. 10. Cylindricity tolerance zone
For each prototype, the inner and outer cylinders have been found and their diameters evaluated to calculate the cylindricity tolerance zone as Tc = (Douter - Dinner) / 2.
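Applied to the data of Table 3, the tolerance computation is a one-liner; the sketch below checks it against the values reported for prototypes 4 and 5.

```python
# Cylindricity tolerance as defined in the text: half the difference
# between the diameters of the two concentric bounding cylinders.

def cylindricity_tolerance(d_outer_mm, d_inner_mm):
    """T_c = (D_outer - D_inner) / 2."""
    return (d_outer_mm - d_inner_mm) / 2

# Prototypes 4 and 5 of Table 3:
print(round(cylindricity_tolerance(20.180, 18.834), 3))  # 0.673
print(round(cylindricity_tolerance(20.238, 18.908), 3))  # 0.665
```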
Cylindricity control data are summarized in table 3. The best results, in terms of
lowest tolerance, have been found for the prototypes 4 and 5, printed using a 0.05
mm layer thickness. All the obtained data demonstrate a poor quality of the printer
concerning the geometric accuracy.
Table 3. Cylindricity control data (nominal diameter = 20 mm)

Prototype   Outer cylinder diameter (mm)   Inner cylinder diameter (mm)   Cylindricity tolerance Tc (mm)
1           20.130                         18.727                         0.702
2           20.370                         18.808                         0.781
3           20.242                         18.673                         0.785
4           20.180                         18.834                         0.673
5           20.238                         18.908                         0.665
6           20.238                         18.804                         0.717
5 Conclusions
In this work, the results of a study on the influence of some parameters of AM processes have been presented. A systematic survey has been performed to evaluate the geometric and dimensional accuracy of a low-cost 3D printer. In particular, some cylindrical specimens have been printed using different values of the layer thickness. These prototypes have been acquired through a classical reverse engineering approach and converted into CAD models. Afterwards, these models have been compared with the nominal CAD model to evaluate the maximum deviation values and the root mean square errors. Circularity and cylindricity controls have also been performed.
Analysing the obtained results, it can be noticed that, as regards the dimensional accuracy, comparable values of the maximum deviation have been calculated for the prototypes printed with 0.2 mm and 0.05 mm layer thickness.
Also for the RMSE, similar values have been found for the 0.05 mm and 0.2 mm
layer thickness prototypes.
No remarkable influence of the layer thickness on the circularity has been
highlighted. All the specimens, in fact, have shown some irregularities of the cross
sections, which have a pseudo-elliptical shape.
With regard to the cylindricity control, prototypes printed with a 0.05 mm layer
thickness have obtained, on average, better results than the 0.2 mm ones.
As a general rule, considering the used printer, the selected printing parameters
and the geometrical feature analysed, it can be stated that the best arrangement
can be obtained using a 0.05 mm layer thickness.
The discussed results have demonstrated that, using low-cost printers, the
achievable geometric and dimensional accuracies are quite poor. Moreover, it has
been observed that a suitable choice of the printing parameters can significantly
modify the quality of the printed object.
References
1. Bikas, H., Stavropoulos, P., Chryssolouris, G., Additive manufacturing methods and modeling approaches: A critical review, 2016, International Journal of Advanced Manufacturing Technology, 83 (1-4), pp. 389-405
2. Guo, N., Leu, M.C., Additive manufacturing: Technology, applications and research needs, 2013, Frontiers of Mechanical Engineering, 8 (3), pp. 215-243
3. Lanzotti, A., Del Giudice, D.M., Lepore, A., Staiano, G., Martorelli, M., On the geometric accuracy of RepRap open-source three-dimensional printer, 2015, Journal of Mechanical Design, Transactions of the ASME, 137 (10), art. no. 101703
4. Cerniglia, D., Montinaro, N., Nigrelli, V., Detection of disbonds in multilayer structures by laser-based ultrasonic technique, 2008, Journal of Adhesion, 84 (10), pp. 811-829
5. Boschetto, A., Bottini, L., Accuracy prediction in fused deposition modelling, 2014, International Journal of Advanced Manufacturing Technology, 73 (5-8), pp. 913-928
6. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., Numerical study of the components positioning influence on the stability of a reverse shoulder prosthesis, 2014, International Journal on Interactive Design and Manufacturing, 8 (3), pp. 187-197
7. Lanzotti, A., Martorelli, M., Staiano, G., Understanding process parameter effects of RepRap open-source three-dimensional printers through a design of experiments approach, 2015, Journal of Manufacturing Science and Engineering, Transactions of the ASME, 137 (1), art. no. 011017
8. Boschetto, A., Bottini, L., Design for manufacturing of surfaces to improve accuracy in Fused Deposition Modeling, 2016, Robotics and Computer-Integrated Manufacturing, 37, art. no. 1357, pp. 103-114
9. Vaezi, M., Chua, C.K., Effects of layer thickness and binder saturation level parameters on 3D printing process, 2011, International Journal of Advanced Manufacturing Technology, 53 (1-4), pp. 275-284
10. Dai, X., Xie, H., Constitutive parameter identification of 3D printing material based on the virtual fields method, 2015, Measurement: Journal of the International Measurement Confederation, 59, pp. 38-43
11. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., A multi-technique simultaneous approach for the design of a sailing yacht, 2015, International Journal on Interactive Design and Manufacturing, DOI: 10.1007/s12008-015-0267-2
12. Cappello, F., Ingrassia, T., Mancuso, A., Nigrelli, V., Methodical redesign of a semitrailer, 2005, WIT Transactions on the Built Environment, 80, pp. 359-369
13. Nalbone, L., et al., Optimal positioning of the humeral component in the reverse shoulder prosthesis, 2014, Musculoskeletal Surgery, 98 (2), pp. 135-142
14. Cerniglia, D., Ingrassia, T., D'Acquisto, L., Saporito, M., Tumino, D., Contact between the components of a knee prosthesis: Numerical and experimental study, 2012, Frattura ed Integrita Strutturale, 22, pp. 56-68
15. Ingrassia, T., Nigrelli, V., Design optimization and analysis of a new rear underrun protective device for truck, 2010, Proceedings of the 8th International Symposium on Tools and Methods of Competitive Engineering, TMCE 2010, 2, pp. 713-725
16. Ingrassia, T., Mancuso, A., Virtual prototyping of a new intramedullary nail for tibial fractures, 2013, International Journal on Interactive Design and Manufacturing, 7 (3), pp. 159-169
17. Jecić, S., Drvar, N., The assessment of structured light and laser scanning methods in 3D shape measurements, 2003, 4th International Congress of Croatian Society of Mechanics, September 18-20, 2003
18. Tóth, T., Živčák, J., A comparison of the outputs of 3D scanners, 2014, Procedia Engineering, 69, pp. 393-401
Multi-scale surface characterization in additive
manufacturing using CT
Yann QUINSAT1*, Claire LARTIGUE1, Christopher A. BROWN2, Lamine
HATTALI3
1 LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris Saclay, 94235 Cachan, France
2 Surface Metrology Lab, WPI, USA
3 FAST, Univ. Paris-Sud, Université Paris Saclay, 91405 Orsay, France
* Corresponding author. Tel.: +33 147 402 213; fax: +33 147 402 000. E-mail address:
yann.quinsat@lurpa.ens-cachan.fr
Abstract In additive manufacturing, the part geometry, including its internal
structure, can be optimized to meet functional requirements by tuning process parameters. This can be performed by linking process parameters to the resulting manufactured geometry. This paper deals with an original method for surface geometry characterization of printed parts (obtained by Fused Filament Fabrication, FFF) based on 3D Computed Tomography (CT) measurements. From the 3D measured data, surface extraction is performed, giving a set of skin voxels corresponding to the internal and external part surfaces. A multi-scale analysis method is proposed to analyse the relative internal area of the total surface obtained at different
scales (from sub-voxel to super-voxel scales) with different process parameters.
This analysis turns out to be relevant for filling strategy discrimination.
Keywords: CT measurement, additive manufacturing, multi-scale analysis
1 Introduction
Fig. 1. Example of internal structure
In additive manufacturing, the weight of parts can be optimized to support a
given load by designing an internal structure, including porosity, using a given
filling strategy (figure 1). Relations between strength and filling strategies are required for product and process design optimization. Our preliminary study has
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_28
Y. Quinsat et al.
shown the significant influence of FFF (Fused Filament Fabrication) process parameters on both the mechanical and fracture properties of the manufactured parts.
Within the context of the fracture study, a series of bending samples has been manufactured considering variable filling parameters such as layer thickness, filling
mode, and density. All the specimens presented a small crack directly resulting
from manufacturing (figure 2(a)). Following crack initiation, a 3-point bending
test was performed with a view to determining Kc, the critical stress intensity factor
corresponding to fracture initiation. Results displayed in Table 1 clearly highlight
the great influence of the process parameters on the value of Kc. Therefore, the relations between strength and filling strategies are necessary for product and process design optimization.
Table 1. Values of Kc (from the DOE defined in Table 3)

Test           1     2     3     4     5     6     7     8     9
Kc (MPa√m)     0.54  1.27  1.68  0.99  2.03  0.78  0.55  0.68  0.61
In this paper, we propose a first approach to determine the relation between the
filling strategy and the resulting surface topography, both internal and external,
which can be linked to strength. Surface topography is thus considered the key link between the manufacturing process and the part function.
Fig. 2. Description of the cracking tests: (a) 3-point bending setup; (b) typical load evolution vs. displacement
The study of 3D surface topography obtained by FFF is a problem well-addressed in the literature. The most classical approximation consists in modeling
the external surface as a juxtaposition of elliptical pipes (or tubes) [1]. The influence of scanning and deposition directions on external topography during filling
has been studied in [2,3]. Zeng et al. show the influence of the process on local
curvatures in a multi-scale analysis [4]. An approach based on a finite element
analysis is proposed in [5] in order to predict 3D surface topography issued from
FFF based on a few measured points. Meanwhile, Pupo et al. [6] establish the link
between process parameters and the obtained topography in the case of sintering
of metallic powders. Hence, most studies address the influence of process parameters on external surface topography, but few works have studied the effect of process parameters on porosity. One difficulty lies in measuring the internal part surface. Computed Tomography (CT) now allows non-destructive 3D measurements
of internal and external part geometry with uncertainties down to micrometers [7,
8, 9].
Fig. 3. The 3 different filling strategies: (a) Hilbert curve, (b) Honey curve, (c) ZigZag curve
Data issued from tomography represent the different levels of X-ray attenuation
of the part for each point of the measured volume. The information is stored as a
numerical value (gray level) in a volume element called voxel [10]. Surface extraction from these 3D data (also referred to as 3D edge detection) is the first necessary stage for metrology applications [11, 12], and is strongly linked to the
voxel size which represents the tomography resolution [13]. Some studies show
that uncertainty in surface extraction can be reduced by making use of a sub-voxel
resolution method which numerically improves CT resolution [12, 10]. Uncertainty can reach 1/10 of the voxel size when sub-voxel resolution is used [14],
which makes CT suitable for dimensional metrology applications.
Within this context, this paper deals with a method for surface characterization
in additive manufacturing to discriminate the filling strategy based on CT measurements. The objective is to propose a tool to discriminate the internal geometry
based on the relative internal area evaluation. The originality of this approach lies
in the use of a multi-scale discrimination method, which analyses the relative internal area of the total surface, including the internal surfaces of pores, from CT measurements. For the experimentation, parts were made using a 3D printer (Ultimaker
2) with three different filling strategies, as displayed in figure 3. For each specimen, the same layer thickness ep = 0.25 mm was used. This paper is organized as
follows. Section 2 is dedicated to the presentation of our method for CT data
treatment. In section 3, the surface area, including that of the pores, is characterized as a function of scale. The method application is proposed in section 4.
(a) Example of one measured part
(b) Data representation
Fig. 4. Measurement from CT
2 CT data treatment
Data obtained by CT measurements correspond to a set of images defined in 3
perpendicular directions (see figure 4(b)) which constitute a set of voxels. For metrology applications, the part geometry is defined from the contours of its surface
(both external and internal). Ontiveros et al. [15] propose a review of surface extraction methods in relation to metrology applications. Among all the methods,
threshold methods are commonly used for industrial applications: a gray level value is defined as a limit to differentiate the air from the material [15].
To conduct a surface characterization by multi-scale analysis, an automatic CT
data treatment method is required. Hence, a threshold method, close to the algorithm proposed in [17], and using the Matlab© Image Processing Toolbox, has been
developed. The first step is image binarization according to a threshold value
determined with the well-known Otsu's method [16]. Otsu's thresholding chooses
the threshold that minimizes the intra-class variance of the thresholded black and
white pixels. This step is followed by a morphological operation on the binary image which removes interior pixels and keeps the outline of the shapes. As surface
extraction is the key point of surface characterization, and the basis for the evaluation of metrological quantities, we propose to assess our method.
For this purpose, a gauge block made of ZrO2 is measured. This gauge ensures
a flatness defect of 0.10 μm and a parallelism defect between its two faces of 0.10
μm. The calibrated distance between the 2 faces is 1.3 mm with an uncertainty of
± 0.12 μm at 20 °C. CT measurements are performed with a Zeiss Metrotom for
which the voxel size is 5.9 μm × 5.9 μm × 5.9 μm. Finally, the measured volume consists of 200 × 300 × 1132 voxels. Our extraction method is applied, leading to 2 sets
of points, each one corresponding to one of the 2 gauge faces (figure 5). To test
the interest of the sub-voxel technique, the extraction method is also applied considering a smaller voxel size, 30% of the original one. This is simply obtained
by a sub-voxel linear interpolation.
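The idea of sub-voxel interpolation can be illustrated on a single 1D gray-value profile crossing the air/material boundary (a hypothetical minimal sketch; the function name and the threshold convention are ours):

```python
def subvoxel_edge(profile, level):
    """Position of the boundary along a line of gray values, with sub-voxel
    precision: linear interpolation between the two samples bracketing `level`."""
    for i in range(1, len(profile)):
        g0, g1 = profile[i - 1], profile[i]
        if (g0 - level) * (g1 - level) <= 0 and g0 != g1:
            return (i - 1) + (level - g0) / (g1 - g0)
    raise ValueError("threshold level never crossed")
```

For the profile [0, 0, 0.2, 0.8, 1.0] with a 0.5 threshold, the edge is located at 2.5, i.e. halfway between voxels 2 and 3 rather than snapped to either one.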
Assessment is thus performed considering two quality indicators, as proposed
in [18]: noise and trueness. Noise accounts for the measurement dispersion around
each extracted face (a plane in practice), and trueness measures the difference between the calibrated distance and the measured distance obtained after face extraction (plane-to-plane distance). Results, displayed in Table 2, are consistent with the
gauge quality, which illustrates the efficiency of our surface extraction method. It
can also be observed that both the noise and the trueness decrease when using a smaller voxel size. This interesting result shows that sub-voxel refinement, as
suggested by numerous studies [19, 15, 10], actually improves surface extraction.
Table 2. Assessment of the surface extraction method

Voxel size              Noise (Face 1) (μm)   Noise (Face 2) (μm)   Trueness (μm)
original size           3                     3.3                   25.5
30% of original size    2.4                   2.7                   21.2
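The two indicators can be sketched as follows, reducing each extracted face to its z-coordinates (a simplification of the plane fit actually performed; the function name and the z = const assumption are ours):

```python
from statistics import mean, pstdev

def assess_extraction(face1_z, face2_z, calibrated=1.3):
    """Noise: dispersion of the extracted points around each face (mm).
    Trueness: |measured face-to-face distance - calibrated distance| (mm)."""
    noise = (pstdev(face1_z), pstdev(face2_z))
    trueness = abs((mean(face2_z) - mean(face1_z)) - calibrated)
    return noise, trueness
```

With two synthetic faces of identical spread centred 1.3 mm apart, both noise values coincide and the trueness vanishes, which is the ideal-extraction baseline against which the measured values of Table 2 can be read.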
Fig. 5. Contour extraction for the gauge block: (left) gauge measurement, (middle) contour extraction (original voxel size), (right) contour extraction (30% of the original voxel size)
3 Surface characterization by multi-scale analysis
To link process parameters with the resulting surface, a multi-scale discrimination method is proposed to analyze the relative internal area of the total surface.
The surface area, including that of the pores, is characterized as a function of
scale. Prior to the description of our multi-scale analysis method, the calculation
of the surface area based on the identification of the skin voxels is first detailed.
3.1 Relative area calculation
The area of the part surface can be defined from the 3D extracted contours, these contours defining a kind of skeleton structure. Hence, the skeleton is a 3D element. To simplify the representation, an example of the skeleton's construction in
2D is reported in figure 6.
Fig. 6. Surface extraction in the XY-plane: (a) measurement, (b) image binarization, (c) skeleton structure
Indeed, the operation of surface extraction leads to the identification of all the
voxels containing a portion of surface (internal or external) referred to as skin
voxels, Vskin (figure 7(b)). The relative internal area is calculated as the sum of all
the areas of the skin voxel faces:
A = Σ(i ∈ Vskin) Σ(j = 1..6) A_i,j · δ_i,j

where δ_i,j = 1 if face j of voxel i is in contact with the air, and δ_i,j = 0 if it is not.
The area so calculated depends on the dimensions of the studied zone. To avoid
this problem, the relative internal area Ar is introduced, defined by Ar = A/Ab,
where Ab is the area of the big voxel bounding the studied zone (figure 7(a)). Considering a 2D representation of a specimen measurement, the set of skin voxels
corresponds to the white voxels in figure 7(c).
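With a set-based voxel encoding, the double sum reduces to counting material-voxel faces whose neighbour is air (an illustrative sketch; the coordinate encoding and names are ours):

```python
# The six face directions (j = 1..6) of a voxel.
FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def skin_area(material, face_area=1.0):
    """A = sum over voxels i and faces j of A_ij * delta_ij: a face
    contributes when the neighbouring voxel across it is air (delta = 1)."""
    return face_area * sum(
        (x + dx, y + dy, z + dz) not in material
        for (x, y, z) in material
        for (dx, dy, dz) in FACES
    )
```

A single voxel exposes 6 faces; two face-adjacent voxels expose 10, the two touching faces (delta = 0) being excluded.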
Fig. 7. Voxel representation of surface extraction: (a) original voxel map, (b) skin voxel representation, (c) skin voxel extraction
3.2 Multi-scale analysis
In order to conduct a multi-scale analysis, the surface area is characterized as a
function of scale [20]. The voxel size defines the scale, and we propose to vary
this value to have scales smaller and larger than the original dimension related to
the measurement scale. The value of the material density belonging to the considered voxel is obtained by linear interpolation. The influence of such a variation on
the surface extraction is illustrated in figure 9.
Fig. 9. Surface extraction as a function of scale (defined from the initial voxel size); from left to
right: measurement at scale 0.3, scale 1, and scale 3; surface extraction at scale 0.3, scale 1 and scale 3
The parameter Ar can thus be calculated as a function of scale. The scale is chosen
as the area of the voxel surface for a considered voxel size. The presented method
proposes to vary this voxel size (i.e. the scale) and to evaluate the corresponding
internal surface area, the scale varying from sub-voxel to meta-voxel. The relative
internal area evolves, increasing linearly on a decreasing logarithmic scale to a
plateau (see figure 10). The plateau begins at the scale of the original measured voxel (identified by a vertical blue line in the figure). Results highlight that, in this
study, the numerical sub-voxel treatment (scales < 10^-2 in the present case) does
not provide additional information. In the same way, meta-voxels are too far from
reality. When the size of the voxel is close to the size of the test part, the relative
internal area becomes less than 1, which is not realistic. The study can thus be
restricted to the zone in which the relative internal area is above 1.
Fig. 10. Evolution of the relative internal area as a function of the scale
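The scale sweep can be sketched by coarsening the voxel set and re-evaluating Ar at each scale (illustrative code; block counting with a 0.5 density threshold stands in for the linear interpolation used above, and all names are ours):

```python
from collections import Counter

FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def skin_area(material, face_area=1.0):
    """Total area of material-voxel faces in contact with air."""
    return face_area * sum(
        (x + dx, y + dy, z + dz) not in material
        for (x, y, z) in material for (dx, dy, dz) in FACES)

def coarsen(material, f):
    """Merge f x f x f sub-voxels into one coarse voxel; the coarse voxel is
    material when at least half of its sub-voxels are (density >= 0.5)."""
    counts = Counter((x // f, y // f, z // f) for (x, y, z) in material)
    return {v for v, n in counts.items() if 2 * n >= f ** 3}

def relative_area_vs_scale(material, bounding_area, factors=(1, 2, 4)):
    """Ar = A / Ab evaluated while the voxel size (the scale) varies."""
    return {
        f: skin_area(coarsen(material, f), face_area=float(f * f)) / bounding_area
        for f in factors
    }
```

For a solid cube that exactly fills its bounding voxel, Ar stays at 1 at every scale; pores raise Ar above 1 at fine scales, which is the signature exploited in figure 10.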
4 Application and discussion
The interest of our surface characterization method is illustrated through 2 applications. The first one concerns the use of such a method to discriminate filling
modes, and the second one more specifically addresses the influence of process
parameters on the resulting printed surfaces.
4.1 Filling mode discrimination
For this application, 3 different filling modes are used, as described in figure 3,
with the same process parameters: the layer thickness is set to 0.25 mm, and the
filling rate to 60%. The original voxel size is 37.8 μm × 37.8 μm × 37.8 μm, and
the bounding voxel size is set to 5 mm × 5 mm × 5 mm. The multi-scale analysis is
applied to the CT measurement obtained for the 3 specimens. Results, displayed in
figure 11, show that the evolution of the relative internal area is different depending on the filling mode. This is particularly distinct at low scales.
The linear decrease with the increase of the scale is in turn greater when the
original relative internal area is the largest, corresponding to the zig-zag strategy.
This analysis turns out to be relevant for filling strategy discrimination: multi-scale analysis of the relative internal area enables printed surface characterization.
Fig. 11. Filling mode comparison using multi-scale analysis
4.2 Study of process parameters
The second illustration aims at studying the influence of process parameters on the
manufactured surface. The zig-zag mode is considered for this study. The process
parameters retained are the layer thickness, the direction of filling, and the filling rate. A design of experiments is proposed according to an L9 orthogonal array with 3 factors at 3 levels
(Table 3).
Table 3. Design of experiments

Factor            Level 1   Level 2   Level 3
Thickness (mm)    0.15      0.2       0.25
Direction (°)     0         30        45
Filling rate (%)  60        80        100
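For reference, a standard L9(3^3) orthogonal array can be written down directly (the actual run-to-level assignment used for the 9 specimens is not given in the paper, so the array below is the textbook one and purely illustrative):

```python
# Standard L9 orthogonal array: 9 runs, 3 factors, level indices 0..2.
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

THICKNESS = (0.15, 0.2, 0.25)  # mm      (Table 3)
DIRECTION = (0, 30, 45)        # degrees (Table 3)
FILL_RATE = (60, 80, 100)      # percent (Table 3)

runs = [(THICKNESS[a], DIRECTION[b], FILL_RATE[c]) for (a, b, c) in L9]
```

Each level of each factor appears in exactly 3 of the 9 runs, so main effects can be estimated with 9 specimens instead of the 27 a full factorial would need.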
Fig. 12. Multi-scale analysis for different process parameters
The 9 manufactured specimens are measured using CT, and a multi-scale analysis
of the relative internal area is afterwards performed, leading to the results reported
in figure 12. As displayed in the figure, the value of the relative internal area at the
low scales differs depending on the considered specimen. Furthermore, in the
linear zone, the slope also varies for each specimen. This study confirms the relevance of using the relative internal area to discriminate surface topographies obtained using different process parameters. It is interesting to notice that the specimens for which the filling rate is 100% present some pores,
which explains why the relative internal area is not constant for these specimens;
this is likely due to process uncertainties.
5 Conclusion
The set of skin voxels defines the 3D surface topography, both internal and external. To link process parameters to the manufactured surface, a multi-scale discrimination method is proposed which analyses the relative internal area of the total surface. The surface area (area of all the skin voxels), including that of the
pores, is characterized as a function of scale. To show the relevance of the multi-scale
relative internal area analysis, 2 different applications are proposed. The first one
clearly highlights the interest of our approach to discriminate filling modes when
using FFF. For the second application, the filling mode is the same (zig-zag
mode), and the study focuses on the influence of some process parameters (layer
thickness, filling direction, and filling rate) on the printed surface. This study confirms the relevance of using the relative internal area to discriminate surface topographies obtained with different process parameters. Hence, the geometry and
the structure of the part can be analyzed according to process parameters to fulfil a given part function. This work is a first approach to determine the relationship between mechanical properties and surface topography. We propose to use
the relative internal surface area to discriminate the internal geometry. Based on the multi-scale analysis, further work will focus on linking strength and internal relative area. A functional correlation could be found by regressing the relative internal areas at each scale against the mechanical properties.
Acknowledgments The authors would like to thank Zeiss Industrial Metrology, LLC, which provided all the CT measurements.
References
1. Ahn, D., Kweon, J.H., Kwon, S., Song, J., Lee, S.: Representation of surface roughness in
fused deposition modeling. Journal of Materials Processing Technology 209(15-16), 5593 –
5600 (2009)
2. Galantucci, L., Lavecchia, F., Percoco, G.: Experimental study aiming to enhance the surface
finish of fused deposition modeled parts. CIRP Annals - Manufacturing Technology 58(1),
189 – 192 (2009)
3. Pandey, P.M., Reddy, N.V., Dhande, S.G.: Improvement of surface finish by staircase machining in fused deposition modeling. Journal of Materials Processing Technology 132(1-3), 323
– 331 (2003)
4. Zeng, Y., Wang, K., Wang, B., Brown, C.: Multi-scale evaluations of the roughness of surfaces made by additive manufacturing. In: ASPE - 2014 Spring Topical Meeting (2014)
5. Jamiolahmadi, S., Barari, A.: Surface topography of additive manufacturing parts using a finite difference approach. Journal of Manufacturing Science in Engineering 136(4), 1–8
(2014)
6. Pupo, Y., Monroy, K.P., Ciurana, J.: Influence of process parameters on surface quality of
CoCrMo produced by selective laser melting. The International Journal of Advanced Manufacturing Technology 80(5-8), 985–995 (2015)
7. Wang, J., Leach, R.K., Jiang, X.: Review of the mathematical foundations of data fusion techniques in surface metrology. Surface Topography: Metrology and Properties 3(2), 023,001
(2015)
8. Chiffre, L.D., Carmignato, S., Kruth, J.P., Schmitt, R., Weckenmann, A.: Industrial applications of computed tomography. CIRP Annals - Manufacturing Technology 63(2), 655 – 677
(2014)
9. Bartscher, M., Hilpert, U., Goebbels, J., Weidemann, G.: Enhancement and proof of accuracy
of industrial computed tomography (ct) measurements. CIRP Annals - Manufacturing Technology 56(1), 495 – 498 (2007)
10. Yagüe-Fabra, J.A., Ontiveros, S., Jiménez, R., Chitchian, S., Tosello, G., Carmignato, S.: A 3D
edge detection technique for surface extraction in computed tomography for dimensional metrology applications. CIRP Annals - Manufacturing Technology 62(1), 531 – 534 (2013)
11. Dewulf, W., Kiekens, K., Tan, Y., Welkenhuyzen, F., Kruth, J.P.: Uncertainty determination
and quantification for dimensional measurements with industrial computed tomography.
CIRP Annals - Manufacturing Technology 62(1), 535 – 538 (2013)
12. Lifton, J.J., Malcolm, A.A., McBride, J.W.: On the uncertainty of surface determination in X-ray computed tomography for dimensional metrology. Measurement Science and Technology
26(3), 035,003 (2015)
13. Kruth, J., Bartscher, M., Carmignato, S., Schmitt, R., Chiffre, L.D., Weckenmann, A.: Computed tomography for dimensional metrology. CIRP Annals - Manufacturing Technology
60(2), 821 – 842 (2011)
14. Carmignato, S.: Accuracy of industrial computed tomography measurements: Experimental
results from an international comparison. CIRP Annals - Manufacturing Technology 61(1),
491 – 494 (2012)
15. Ontiveros, S., Yagüe, J., Jiménez, R., Brosed, F.: Computer tomography 3D edge detection
comparative for metrology applications. Procedia Engineering 63, 710 – 719 (2013). The
Manufacturing Engineering Society International Conference, MESIC 2013
16. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Transactions on
Systems, Man, and Cybernetics 9(1), 62–66 (1979)
17. Shahabi, H., Ratnam, M.: Simulation and measurement of surface roughness via grey scale
image of tool in finish turning. Precision Engineering 43, pp.146 – 153(2016)
18. Mehdi-Souzani, C., Quinsat, Y., Lartigue, C., Bourdet, P.: A knowledge database of qualified
digitizing systems for the selection of the best system according to the application. CIRP
Journal of Manufacturing Science and Technology (2016)
19. Jiménez, R., Comps, C., Yagüe, J.: An optimized segmentation algorithm for the surface extraction in computed tomography for metrology applications. Procedia Engineering 132, 804 –
810 (2015). MESIC Manufacturing Engineering Society International Conference 2015
20. Brown, C.A., Johnsen, W.A., Hult, K.M.: Scale-sensitivity, fractal analysis and simulations.
International Journal of Machine Tools and Manufacture 38(5-6), 633 – 637 (1998)
Testing three techniques to elicit additive
manufacturing knowledge
Christelle GRANDVALLET*, Franck POURROY, Guy PRUDHOMME,
Frédéric VIGNAT
G-SCOP Laboratory, 46 avenue Félix Viallet, 38031 Grenoble, FRANCE
*Corresponding author. Tel.: +33 (0)4 76 82 52 79; fax: +33 (0)4 76 57 46 95. E-mail
address: Christelle.Grandvallet@grenoble-inp.fr
Abstract Additive manufacturing (AM) has enabled the building of parts with
new shapes and geometrical features. As this technology modifies practices,
new knowledge is required for designing and manufacturing properly. To help experts create and share this knowledge through formalization, our paper focuses on
testing three knowledge elicitation techniques. After defining knowledge concepts,
we present the state of the art in knowledge elicitation and a methodology. A case
study about support creation for AM points out: the assets and limits of the techniques; the different types of knowledge elements per technique; some contradictions between experts. We finally propose collective tools for a better elicitation
and formalization of AM knowledge.
Keywords: Additive manufacturing; Knowledge management; Elicitation methods; Knowledge elicitation tools; Individual elicitation.
1 Introduction
Additive Manufacturing (AM) technologies have modified the practices of engineers and researchers. Among them, recent metallic AM technologies have developed
rapidly, but design and manufacturing rules are still under formalization. This paper focuses on testing three techniques with a limited number of experts in order
to check if they can facilitate the elicitation of AM knowledge not yet formalized.
It is part of our ongoing research work, which aims to make AM knowledge accessible and, as a knowledge intermediary, to assist the knowledge producers [1] in the
elicitation, structuration and sharing of AM knowledge within a larger community.
A first section about the State of the Art (SoA) explains some concepts of
knowledge management and elicitation methods. A case study in the next section
presents the test of three elicitation methods - unstructured interview, semi-structured interview and limited information task - with an AM activity. It is followed by a deep analysis of the results, a conclusion and a proposition of progression toward a new method and perspectives.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_29
2 State of the art review and research challenges
2.1 About knowledge representation and AM
The few research works about AM knowledge that can be found in the literature
are more related to knowledge management systems (KMS) than to elicitation techniques
(e.g. Gardan [2]). Although the KMS approach proves to be a promising solution
to specific engineering problems, it relies on a set of explicit and well defined design and manufacturing rules, and such rules still remain fuzzy, even unknown in
the young field of AM. The knowledge elicitation approach could contribute to
make AM knowledge more visible, and to foster emergence of design and manufacturing rules. To start with the basics, one can make a distinction between
knowledge, information and data. Wilson [3] defines data as “everything outside
the mind that can be manipulated in any way”, and it becomes information once
“the data are embedded in a context of relevance to the recipient”. Our position is
to consider that information stands outside the individual and that knowledge is rooted
in people’s heads. This position is shared by Nonaka and Takeuchi [4], who wrote
“in a strict sense, knowledge is created by individuals”, which highlights the dynamic aspect of knowledge. Knowledge is also characterized by two opposite but
coexisting dimensions, namely tacit and explicit knowledge. Citing Polanyi [5],
Nonaka and Takeuchi explain that “whereas explicit or "codified" knowledge refers to knowledge transmittable in formal, systematic language, tacit knowledge is
personal, context-specific, and hard to formalize and communicate.” Mougin [6]
studied the structuration of information in knowledge transfer between the
workers of an organization. His structure emphasizes the progression of
knowledge elements from tacit to explicit states. The first one is a “knowledge
fact”, defined as “an observable element that can be captured by video-recording or
direct observation of professional situations”. The second, a “knowledge footprint”, is intentional and can be an oral or written element expressed by a person.
The next structure is a “knowledge object”, more formalized, such as for example an
internal report. Then it could become a “packaged knowledge object” once it is
processed and codified into a repository. In an AM context, our challenge is then
to observe dynamically-constructed knowledge and use this structuration model to
identify tacit and explicit knowledge elements thanks to elicitation techniques.
2.2 A knowledge elicitation methodology
Knowledge elicitation is “the process of collecting from a human source of
knowledge, information that is thought to be relevant to that knowledge” [7].
Elicitation methods are used to help experts express their knowledge, in order to
structure, share and transfer it for reuse within a community of workers. The
elicitation step is thus at the beginning of the knowledge cycle. One method for
knowledge elicitation was developed in 2007 by Milton [8] for knowledge
acquisition. His guide aims to capture experts’ knowledge in 47 steps and 4 phases
for formalization into a KMS. A great number of elicitation techniques are proposed
for capturing knowledge (Fig. 1). For this purpose, knowledge is structured from
basic/explicit to deep/tacit knowledge, as well as conceptual and procedural.
Fig. 1. Milton’s techniques for capturing different knowledge types.
In an AM domain, conceptual explicit knowledge could be expressed for example
by “I know that supports have a thermal impact onto the heat flow during part
production”. Procedural knowledge would refer to “I know how to create a support structure within my CAD/CAM system”.
Among the techniques, those on the left side are interviews that tackle more
explicit knowledge. Unstructured and semi-structured interviews are the
first tools to use at the beginning of a project. Toward the right side, deeper
knowledge should be captured. In our AM context we expect to get a
mixture of knowledge, whether basic or deep, because the field is young.
Our idea is also to consider any trace of knowledge as defined by Mougin. Based
on these assumptions and on the SoA, we define our research question as follows:
what would be the benefits and limits of using such elicitation techniques in the
new domain of AM, where knowledge is under construction? To answer
this question we implemented the following case study.
C. Grandvallet et al.
3 Case Study
3.1 The activity of support structure creation
The scope of our analysis is the design and creation of support structures for metallic parts built with EBM (Electron Beam Melting) technology. An example of supports is provided in Fig. 2.
Fig. 2. Example of a turbine built with supports using EBM technology.
This activity is critical because it is closely interlinked with the characteristics of the parts as well as with fabrication parameters, and hence influences the final
quality of parts. Supports are indeed used for thermal and/or mechanical reasons:
they remove heat from the part and support overhanging surfaces. It is therefore important to capture where and how to place them on the part, which
relates to both procedural and conceptual knowledge.
3.2 Approach for elicitation of AM knowledge
We identified three researchers in our laboratory, acknowledged for their expertise
in support creation. Each of them has a specific profile, summarized in
Table 1. The number of experts was limited because the objective of this experiment was to test the usability and relevance of elicitation in a field under construction.
Table 1. Panel of experts.
Code   Type of specialization
E1     Designer using CAD software for AM
E2     Engineer specialized in support structure creation
E3     Engineer more experienced with the AM process
Three techniques have been chosen among Milton’s list, namely: Unstructured Interview (UI); Semi-Structured Interview (SSI); Limited Information Task (LIT).
(See Table 2.) According to Milton, it is better to start elicitation with UI and SSI
to get basic and explicit knowledge. The LIT technique was specifically chosen as
it prompts the expert to provide three pieces of information that he considers
crucial for performing a defined task.
Table 2. Presentation of selected elicitation techniques.

UI – Unstructured interview. Characteristics according to Milton: a freeform chat with the expert to get basic knowledge of the domain. Example question asked in the case study: “Tell me, how do you create supports?”

SSI – Semi-structured interview. Characteristics: based on a predefined questionnaire sent beforehand to the expert; written responses are reviewed and completed during the interview. Example questions: “What are supports used for? What are the main steps to create supports? Which problems do you encounter? Why do you undertake these activities? What still remains difficult or confusing?”

LIT – Limited information task. Characteristics: based on one request, repeated until the expert does not have anything to add: to give orally 3 pieces of information essential to the execution of a complex task. Example questions: “If you had to create supports with only 3 pieces of information, which ones would you choose first to execute your task?”, followed by: “Now if you have 3 more pieces of information, which ones would you use?”
The techniques were tested individually with the researchers (see Table 3). At the beginning of each elicitation session, the rules of the exercise as well
as its objectives were presented to the participant.
Table 3. Experts and individual elicitation techniques.

        E1   E2   E3
UI      X    X
SSI     X    X
LIT          X    X
During the UI, questions were open enough to start a free discussion. Each session
lasted at least one hour. We took notes on a laptop and made tape recordings.
Initial notes were then completed based on knowledge footprint forms. For the SSI, a
questionnaire asked the expert to explain the activity of support creation, its objectives,
the steps to achieve them, as well as the problems identified and the proposed solutions. Once filled in, the questionnaire was reviewed during the interview and
completed by a question-and-answer session of around one hour. The LIT was used
with E2 and E3, as they were more experienced than E1, and repeated until the person did not have anything to add. Its duration ranged from 20 to 40
minutes.
4 Results
Together with the notes and tape recordings taken during elicitation, the study
of the three elicitation techniques revealed interesting knowledge elements (see the synthesis of results in Table 4). Most notably, our analysis highlighted parameters and
criteria connected with part quality. They were encapsulated in knowledge
facts and footprints.
Table 4. Knowledge footprints captured during the elicitation sessions
Tool
Elicited knowledge
UI
- Support typology choice
- Support teeth length to facilitate removal
- Offset selection between support and surface
- Influence of manufacturing and cooling time on grain microstructure
SSI
- Supports help to stiffen the part, dissipate heat, remove the part from
the start plate
- Where to place the supports and how
- Supports depend on part orientation, part balancing in the chamber,
melting behavior of the powder, machine parameters
LIT
- Part orientation
- Angle between plate and overhanging surface
- Length of supports
- Tolerance of the surfaces to support
- Collision between supports
- Roughness induced by the machine parameters
If we go into detail, the unstructured interview enabled us to get an overview of the
overall AM context and to understand who does what in this collaborative work.
References to scientific articles and the SoA were mentioned. The elicited knowledge
came from various scientific domains (EBM materials, design, manufacturing).
The knowledge footprints were captured in the form of parameters and criteria,
mostly expressed in terms of problems still unsolved: which typology of support to
add (contour, web, etc.); which teeth length to choose to facilitate support
removal; which offset to select between the support and the surface… Other parameters were discussed that were not directly linked to support creation but rather related to part quality: e.g. the influence of manufacturing and cooling time on grain microstructure, or the difficulty of removing non-consolidated
powder from a part containing supports. Moreover, a 3D file of a part and its supports was shown by one expert in the Magics software to back up his explanations. Such a demo, which we can typically define as a knowledge fact, was a
means to express less explicit knowledge. It could be similar to a scenario, although not set up by the knowledge engineer.
As far as the SSI is concerned, it had the advantage of asking the expert to clarify his
activity objectives, the sequential steps required in support creation, the
inputs and outputs of specific tasks, the problems and solutions, problems not yet
solved, and links with other activities in the whole AM process. Attempted definitions were provided as knowledge footprints. To our question
“what is your objective?”, replies were “to know the functionality of supports,
where and how to place them”. Answers to the question “what are supports used
for?” were less straightforward, as supports may have several functions: they help
to stiffen the part, to remove heat, to unstick the part from the start plate… but
these functions are interdependent with many elements, e.g. part orientation
and balancing in the chamber, powder melting behavior, machine parameters… These oral and written knowledge elements were transcribed afterwards
and gathered under a series of parameters that completed the keywords
and footprints captured during the other types of interviews. Experts concluded in
the interview that they must still find “some operating rules through more trials and
experiences”. This shows that the experts’ level of knowledge maturity is low.
Lastly, the LIT technique allowed us to acquire concepts and precise knowledge elements related to support creation and to have them sorted by priority. The main
footprints captured about support creation were the following: part orientation;
angle between the start plate and the overhanging surface; dimension and thickness of the various surfaces to support; length of the supports; tolerance of the surfaces to support; risk of collision between supports; possible roughness induced by
the manufacturing parameters. These elements are considered critical
knowledge footprints, as they were identified by the experts as crucial for the
knowledge creation process. This tool revealed hesitation and highlighted some
contradictions and disagreements between the experts when comparing which pieces
of information are more important than others. The LIT helped to elicit more detailed
knowledge related to the support creation process. It had the advantage of ranking the
above-mentioned support parameters according to the experts’ priority, i.e.
avoiding part defects. It appeared indeed of highest importance to take care of the part
surface quality and to obtain good geometrical quality, the right dimensions, the
right mechanical behavior and a good microstructure. These knowledge elements did not, however, always intersect between experts: they either converged
or diverged. This reveals that the researchers’ knowledge is still under construction and
that they have different levels of maturity.
5 Conclusion
Our research focused on the individual elicitation of the knowledge of experts
specialized in support structure creation for AM metallic parts. To do so, three
tools were selected and tested: the unstructured and semi-structured interviews,
and the limited information task. The question was where and how to place the
supports. A first analysis of the results led to the conclusion that many knowledge elements were elicited, although experts could not provide comprehensive or straight answers due to the immature stage of AM knowledge. Knowledge facts arose
when a scenario was proposed by an expert, which can be explored as a potential
tool. More importantly, our work brought to light a list of support parameters and
criteria which emerged on the fly as knowledge footprints. Support parameters
were defined in terms of density, form and placement. Criteria concerned part
quality (defined by surface quality, geometry, dimensions, microstructure or mechanical behavior) as well as process duration and cost. This
conclusion leads us to propose, as a perspective, to consider support parameters according to their degree of influence on part quality. We therefore recommend complementing our elicitation techniques and opening our research to more collective elicitation tools such as the one proposed by Stenzel et al. [9]. A matrix
crossing the critical process parameters with performance criteria would serve as
an elicitation tool at the heart of the debate. This would have the advantage of
bringing together people with different levels of expertise and different opinions. The idea
would be to elicit knowledge elements during this collective session so as to
get “shared knowledge footprints” and thus foster the knowledge maturing process.
References
1. Markus LM. Toward a Theory of Knowledge Reuse: Types of Knowledge Reuse Situations
and Factors in Reuse Success. J Manag Inf Syst, 2001, 18:57–93.
2. Gardan N. Knowledge Management for Topological Optimization Integration in Additive
Manufacturing. Int J Manuf Eng, 2014.
3. Wilson TD. The nonsense of knowledge management. Inf Res, 2002, 8: paper 144.
4. Nonaka I, Takeuchi H. The Knowledge-Creating Company: How Japanese Companies Create
the Dynamics of Innovation. Oxford University Press, 1995.
5. Polanyi M. The Tacit Dimension. London, Routledge & K. Paul, 1967.
6. Mougin J, Boujut J-F, Pourroy F, Poussier G. Modelling knowledge transfer: A knowledge
dynamics perspective. Concurrent Engineering, 2015, 23(4), 308-319.
7. Cooke NJ. Varieties of knowledge elicitation techniques. Int J Hum-Comput Stud 1994,
41:801–49
8. Milton NR. Knowledge Acquisition in Practice: A Step-by-step Guide. Springer Science &
Business Media, 2007.
9. Stenzel I, Pourroy F. Integration of experimental and computational analysis in the product
development and proposals for the sharing of technical knowledge. Int. J. Interact. Des.
Manuf, 2008, 2(1), 1–8.
Topological Optimization in Concept Design:
starting approach and a validation case study
Michele BICI1*, Giovanni B. BROGGIATO1 and Francesca CAMPANA1
1 Dip. Ingegneria Meccanica e Aerospaziale – Sapienza Università di Roma
* Corresponding author. Tel.: +39-06-44585253; fax: +39-06-4881250. E-mail address:
michele.bici@uniroma1.it
Abstract Nowadays, the most up-to-date CAE systems include structural optimization toolboxes. This demonstrates that topological optimization is a mature technique, although it is not yet a well-established design practice. It can be applied to increase performance in lightweight design, but also to explore new topological
arrangements. This is done through a proper definition of the problem domain, which
means defining functional surfaces (interface surfaces with specific contact conditions), preliminary external dimensions, and geometrical conditions related to possible
manufacturing constraints. In this sense, it is applicable to all kinds of
manufacturing, although in Additive Manufacturing its most extreme solutions can be
exploited. In this paper, we aim to present the general applicability of topological
optimization in the design workflow, together with a case study explored according to two design intents: the lightweight criterion and the conceptual definition of
an enhanced topology. It demonstrates that this method may help to decrease
design effort, which, especially in the case of additive manufacturing, can be reallocated to other kinds of product optimization.
Keywords: Topological Optimization; Additive Manufacturing; Design Intent; Conceptual Design; Lightweight Design
1 Introduction
Design in engineering is the ability to plan a product that fulfills its function while respecting various constraints. Currently, the development of design principles, methods and technologies has caused this concept to evolve into the search
for the best compromise among possible designs, in compliance with constraints
and demands. Every field, every part of the product lifecycle and every “design
for” defines an “optimum”, and the final product is an ideal merge of all these optimized solutions and configurations. The design phase and product process planning
determine the “time to market”. This means a large amount of cost without immediate profit.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_30
However, there is not a linear distribution of design and production
costs: during the first phases of “time to market”, i.e. in the design phase, more
relevant design modifications are possible with lower impact in terms of cost, as long as
no manufacturing investments have been planned or made. The technological process
severely constrains the design in terms of cost and technical feasibility, so it is
necessary to speak of product-process integrated design and manufacturability-oriented design. In general, it is recommended to apply the principles of Concurrent Design [1].
In such a context, the latest developments of Additive Manufacturing (AM)
technologies have increasing relevance. In fact, additive technologies
generally allow adding material where necessary, layer by layer, obtaining forms
not otherwise producible [2]. Obviously, AM also has its own technological constraints, so that it is impossible to produce everything with every AM technology
[3]. Nevertheless, it is important to understand that the design dynamics change. In
the AM field, the possibility of producing complex shapes and various features,
regardless of process, reduces the effort of component modification during the
design phase. AM allows designers to shift their focus from executive design
to concept design, with a strong reduction of the “design for” rules related to the
process technology. This is why Topological Optimization (TO) procedures assume a substantial role, allowing, already in the concept phase, a component whose volume is distributed as a function of its load and
use conditions. This can be achieved through a rigorous mathematical formulation that
is able to explore a very generic design space for a given set of loads and boundary conditions, so that the resulting layout satisfies a prescribed set of performance
targets.
In this paper, we discuss TO, presenting its definition and application
during design (Sections 2 and 3) and highlighting, through the case study described in
Section 4, its suitability not only for lightweight design but also as a new tool for
conceptual design according to a specific design intent.
2 Topological Optimization in brief
TO is a structural optimization technique whose evolution started in the second
half of the last century. According to Bendsøe [4], structural optimization pertains
to the topology, shape and size of a component. Topology extends the concepts of
shape or geometry by including the capability of adding and deleting volumes, which
basically means changing the space connectivity via opening/closing operations
(Figure 1). Shape and size optimization look for mathematical conditions able to
minimize (or maximize) a structural design objective through geometrical parameters such as feature sizes (thickness, length, perimeter, …) or geometric shape, without
changes in the topology (that is, without opening/closing connections). In this
sense, a rectangular plate with a circular hole has different shape from a triangular
one with an elliptical or hexagonal hole, but they are topologically equivalent,
since they can be transformed one into the other through a transformation map
(homeomorphism) [5].
Fig. 1. Space connectivity examples (left); example of shape modification and topology change via the “opening operator” (right; clockwise from the upper-left shape, deformations in blue and the space reduction of the first object in cyan).
Initially, the structural problem was investigated by looking for
mathematical conditions able to delete or maintain material in each volume element of the assumed design space [4]. This leads to the microstructural or material
approach which, together with the macrostructural approach, represents the most general and well-posed definition of the problem [6]. SIMP, which stands for Solid Isotropic Material with Penalization, and the Level Set Method are two relevant examples of the micro- and macrostructural approaches [4,7]. Other approaches with
very intuitive formulations (e.g. hard-kill methods like Evolutionary Structural
Optimization) can be seen as heuristic, despite the many efforts made to extend or
strictly define their field of applicability [8].
Microstructural methods are also called density-based methods. Under the hypothesis of linear elastic behavior and assuming a FEM discretization, they are defined as:

\begin{cases}
\min\; f(\rho, \mathbf{u}) \\
\mathbf{K}(\rho)\,\mathbf{u} = \mathbf{F}(\rho) \\
g_i(\rho, \mathbf{u}) \le 0, \quad i = 1, \dots, m \\
\text{with } \rho \in [0, 1]
\end{cases} \qquad (1)

where the objective function f can be the total design space mass, the natural frequencies, or a cost function depending on stresses or compliance; u represents the nodal displacement vector; K(ρ) is the stiffness matrix as a function of the density factor ρ, which is defined between 0 (void) and 1 (bulk); F represents the applied nodal loads; and g_i is the set of m constraints.
The design variables, which seek to minimize the material inside the
design space, concern the element stiffness matrices, generically named K_el,
which, suitably assembled, define the stiffness matrix K. Each of them is a function of ρ through:

K_{el}(\rho) = \rho^{p}\, K_{el}^{0} \qquad (2)

where K_{el}^{0} is the bulk (ρ = 1) element stiffness matrix and p is the penalty weight. The factor ρ^p transforms the problem from
discrete to continuous, since p > 1 penalizes ρ values far from 1. SIMP
sets p = 3. Each element of the mesh contributes to the component stiffness via (1),
thus, if the element is not effective, the penalization decreases its relevance until its deletion.
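To make Eqs. (1) and (2) concrete, here is a minimal, self-contained sketch (our own illustration, not the authors’ code) of a density-based optimization on a one-dimensional bar: the element stiffness is penalized by ρ^p with p = 3, and a simple optimality-criteria update enforces a 50% mass fraction. The bar size, loads and iteration counts are arbitrary choices.

```python
import numpy as np

n, p, volfrac = 20, 3.0, 0.5      # elements, SIMP penalty, mass fraction
k0 = 1.0 * n                       # bulk element stiffness E*A/L with L = 1/n

def solve(rho):
    """Assemble K(rho) via the rho^p penalization of Eq. (2) and solve K u = F."""
    k = k0 * rho**p
    K = np.zeros((n + 1, n + 1))
    for e in range(n):
        K[e:e + 2, e:e + 2] += k[e] * np.array([[1.0, -1.0], [-1.0, 1.0]])
    F = np.ones(n + 1)             # unit load at every node
    u = np.zeros(n + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], F[1:])  # node 0 clamped
    return u

rho = np.full(n, volfrac)
for _ in range(50):                # optimality-criteria iterations
    du = np.diff(solve(rho))
    dc = p * rho**(p - 1) * k0 * du**2  # compliance sensitivity (positive)
    lo, hi = 1e-9, 1e9             # bisection on the volume multiplier
    while hi - lo > 1e-10 * hi:
        lam = 0.5 * (lo + hi)
        rho_new = np.clip(rho * np.sqrt(dc / lam),
                          np.maximum(rho - 0.2, 1e-3),
                          np.minimum(rho + 0.2, 1.0))
        lo, hi = (lam, hi) if rho_new.mean() > volfrac else (lo, lam)
    rho = rho_new
# Material concentrates near the clamped end, where the axial force is highest.
```

The update illustrates the mechanism described above: elements whose removal costs little compliance are driven toward the void bound, while the ρ^p penalization makes intermediate densities unattractive.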
As reported in the literature, checkerboard patterns and mesh dependency are the
major mathematical drawbacks of this formulation. Filtering techniques and mathematical relaxation of the optimization problem are two possible remedies
[9]. Both contribute to achieving a “well-posed” optimization
problem, yielding a reliable approach to TO via CAE. Filtering techniques reduce the set of possible solutions by excluding, via filters, unphysical solutions. Many filters can be applied, as described, for example, in [10]. Mathematical relaxation of the minimization problem consists of adding new design
variables. This is achieved by putting aside the concept of a solid isotropic distribution
of material and defining an assigned microstructure of voids for each element. The
new design variables can be represented by the sizes of the void areas (hole-in-cell
approach) or by the configuration of a layered structure (layered structures of different ranks). The solution is then found by the so-called homogenization techniques
[4].
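As an illustration of the filtering idea, the sketch below applies a generic linear (weighted-average) density filter to a checkerboard-like density field. This is a textbook-style filter of our own choosing, not necessarily one of those in [10], and the grid size and radius are arbitrary.

```python
import numpy as np

def density_filter(rho, rmin=1.5):
    """Weighted average of each element's density over neighbors within rmin."""
    ny, nx = rho.shape
    out = np.zeros_like(rho)
    r = int(np.ceil(rmin))
    for i in range(ny):
        for j in range(nx):
            wsum, val = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        w = max(0.0, rmin - np.hypot(di, dj))  # cone weight
                        wsum += w
                        val += w * rho[ii, jj]
            out[i, j] = val / wsum
    return out

# A perfect checkerboard (the classic unphysical pattern) is smoothed
# toward intermediate densities, suppressing the artifact:
checker = np.indices((8, 8)).sum(axis=0) % 2.0
smoothed = density_filter(checker)
```

The filter leaves the mean density roughly unchanged while removing the element-to-element oscillation, which is exactly why checkerboard patterns cannot survive it.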
The macrostructural approach consists of boundary variation methods. They are
based on implicit functions able to describe what happens on the edge of the design space [8]. In this way, changes of topology are linked to the distribution of the
contours of the implicit function Φ(x). In the conventional Level Set Method, this
problem is described by a Hamilton–Jacobi-type equation that can be solved
through sensitivity analysis on an assigned grid in the design-space domain [10].
As already mentioned, the original formulation of these methods assumes linear elastic behavior. Generalizations to non-linear problems have been
made, see for example [8]. This reference also contains an interesting overview of the
possible fields of application, which range from MEMS to biomedical, civil structure and multiphysics applications.
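For reference, the Hamilton–Jacobi-type evolution of the conventional Level Set Method can be written in its standard textbook form (our notation, not quoted from the paper) as:

```latex
% The design boundary is the zero contour of the level set function \Phi:
%   \partial\Omega(t) = \{\, x : \Phi(x, t) = 0 \,\}
% and it is advected with normal velocity V_n obtained from sensitivity analysis:
\frac{\partial \Phi}{\partial t} + V_n \,\lvert \nabla \Phi \rvert = 0
```

Material regions and voids correspond to opposite signs of Φ, so topology changes (holes merging or nucleating) emerge naturally as the zero contour evolves.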
3 Practical use and impact on the CAD modeling workflow
TO is available in many well-known FEM packages (e.g. Ansys, Optistruct, Inspire, Nastran). Their practical use starts from a proper definition of the problem
domain, in terms of a preliminary envelope of the volume and its dimensions, functional
surfaces (interface surfaces with specific contact conditions), and geometrical conditions related to possible manufacturing constraints (e.g. symmetry, draft angle). In
this sense, it is applicable to all kinds of manufacturing, although in
AM TO solutions reached without specific manufacturing constraints can also be
exploited.
The preliminary envelope of the volume can be a geometrical entity or a dense
FEM mesh, which has to be divided into design space and fixed volumes (e.g. functional surfaces or manufacturing constraints). Although this distinction is a quite
natural concept, it can be implemented in different ways according to the adopted
software. Starting from a mesh, it requires selecting two sets of elements. CAD-based software (e.g. Inspire) may require splitting the volume into more
than one set. As a consequence, contact conditions must be given not only among
different parts of an assembly, according to the FEM procedure, but also among different volumes of the same component. Contacts are part of the boundary conditions,
which may be split into loads and Degree Of Freedom (DOF) constraints. Also in
this case, CAD-based software tends to apply loads not directly to mesh
elements but to geometry entities, hiding the resulting mesh loading. This may reduce
awareness of the effective load/constraint conditions that are applied, so that
careful checks are required to evaluate the correctness of the model. Concerning
loads, more than one operating condition can be analyzed and studied. The optimization can be applied to different load cases, looking for a compromise solution.
From the computational point of view, few inputs are required: the objective
function and the constraints, usually chosen among compliance, mass, natural
frequency or a combination of them. The mesh size must be rather uniform and
dense enough to reach the proper sensitivity to element deletion; generally it
must be equal to or smaller than that used in a good structural analysis. In
many cases a preliminary run is required to check the soundness of the model.
The optimal solution is provided in terms of a final volume or a density factor contour
plot. It must be checked against the safety factor or other design requirements. If it succeeds, other CAD activities are necessary: surface smoothing (since the computational mesh is rather rough after the optimization, due to its constant element length), small
area deletion, or pocket closure if other manufacturing constraints are considered.
In this scenario, new CAD technologies (e.g. curve and surface modeling, synchronous modeling) represent a relevant aid to reduce time; nevertheless, the need for robust data exchange and a common user interface among
CAD/CAE/CAM tools may represent one of the major drawbacks of the practical use of
TO.
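The inputs listed in this section can be gathered in a tool-agnostic sketch. The structure below is hypothetical and of our own design (the field names and the numeric load value are illustrative; this is not the API of Inspire, Optistruct, Ansys or Nastran):

```python
from dataclasses import dataclass, field

@dataclass
class LoadCase:
    name: str
    loads: dict        # geometry entity -> applied force vector [N] (illustrative)
    fixed_dofs: dict   # geometry entity -> constrained DOFs

@dataclass
class TopOptSetup:
    design_space: str          # volume/mesh region to be optimized
    non_design: list           # fixed volumes (functional surfaces, interfaces)
    load_cases: list
    objective: str = "min_compliance"
    constraints: dict = field(default_factory=lambda: {"mass_fraction": 0.5})
    manufacturing: dict = field(default_factory=dict)  # e.g. symmetry plane

# Hypothetical setup echoing the case study of Section 4; the 1500 N value
# is invented purely for illustration.
setup = TopOptSetup(
    design_space="envelope",
    non_design=["bolt_bosses", "pin_holes"],
    load_cases=[LoadCase("suspension_load",
                         {"pin_axis_midpoint": (0.0, -1500.0, 0.0)},
                         {"bolt_bosses": "all"})],
    manufacturing={"symmetry": "central_vertical_plane"},
)
```

Such a container makes explicit the design/non-design split, the load cases and the manufacturing constraints that each commercial tool asks for in its own way.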
4 Case study
The goal of the case study is to demonstrate the workflow defined in Section 3
using commercial TO software (in this case, solidThinking Inspire
2015) and to validate this design approach by comparing the results to those previously achieved through a conventional “trial and error” design process.
For this reason we developed two tests from a case study related to a suspension
wishbone attachment of the Formula SAE car named Gajarda, designed by the
“Sapienza Corse” team. The attachment connects the uniball joint to the
monocoque chassis of the car at the end of each suspension A-arm, as shown in
the red circles of Figure 2.
Fig. 2. Gajarda in two versions: the 2012 (on the left) and the 2013 (on the right). Red circles
highlight some of the positions where the suspension wishbone attachments are located.
The attachment is made of an aluminum alloy characterized by: E = 70 GPa,
ρ = 2700 kg/m³, σ_yield = 260 MPa, ν = 0.33. It was developed and modified from the 2012 car version to the 2014 one, to reduce mass while preserving functionality and resistance. Figure 3 shows the design evolution made during these years, which reduced
the total mass from 0.061 kg to 0.031 kg.
Fig. 3. Suspension wishbone attachment: interface surfaces (on the left) and shape evolution
made by the “Sapienza Corse” Formula SAE team from 2012 to 2014.
According to the design intent, the interface surfaces are (Figure 3 on the left):
counterbore holes that must be provided on the base, to allow the connection with
the wall of the chassis; a pocket that allows the insertion and assembly of the ball
joint with various angular orientations of the arm; two aligned holes for the locking pin. The two tests are defined as follows:
Test 1. Starting from the shape and the geometry of the 2012 version, we look
for an optimal design, reducing the original mass by 50% (comparable with the
reduction obtained in the 2014 version) and by 75% (see Section 5), while maximizing stiffness.
Test 2. With the aim of decoupling the optimization process from the designer’s
choices, we give as input for the design space just the component’s envelope volume. It has been defined as cylindrical, since it should be made with almost axisymmetric features, given that the actual component is manufactured by
machining and the capability of fastening in different positions is required.
Concerning the load scheme, in the actual configuration the loads are applied
through a locking pin inserted in the two aligned holes of the component. In
the TO models, to take into account bending effects due to pin deflection, they
are applied at the middle point of the axis between the two holes and transferred
to the corresponding cylindrical surfaces by a connector (thus defining the most
severe bending condition), as shown in Figure 4. On the left, the three load conditions can be seen at the middle of the axis between the two holes for the locking
pin. The central vertical plane has been defined as a manufacturing constraint since,
as already said, the designer’s intent is also to maintain component symmetry in
both test cases. Fixed volumes are associated with the bolt interfaces necessary to
link the component to the frame. The DOFs involved in these assembly
constraints are shown in red. Figure 4 also shows, for the two tests, the design space in brown and the fixed volumes, constrained as non-design space (in
blue).
Fig. 4. Test 1 (top) and Test 2 (bottom). Brown volumes are design spaces, blue
volumes are constrained areas; loads and DOF constraints are shown in red.
In both tests, we chose to keep the number of fixing holes at three, unlike the
design team, which increased the number of connections while reducing their
diameter, again with a view to mass reduction.
5 Results and discussion
Test 1 aims to investigate whether, through TO, it is possible to include well-established
design intents that are mainly imposed by the designer’s knowledge. For this reason,
we set up the optimization to obtain comparable solutions starting from the virtual
model of the 2012 attachment.
Fig. 5. Test 1: mass reduction @50% (top); mass reduction @75% (bottom).
The optimization problem was defined as a maximum-stiffness search with a
constraint of 50% on total mass. Figure 5 shows the final results for Test
1. Mass reduction @50% gives a total mass of 0.033 kg, in accordance with the
imposed constraint. Starting from this solution, an enhanced one was investigated, moving the mass reduction constraint up to 75%, taking care that the hypothesis
of linear elastic behavior is not violated. Figure 5, at the bottom, shows this result,
which sets the mass to 0.017 kg, basically sharpening and smoothing the volume
around the pin hole.
Test 2 aims to investigate the ability of TO starting from the most general domain definition. In this way, the TO results are decoupled from the designer's knowledge, to better assess the potential of TO as a concept design tool. Imposing the same load and constraint conditions as Test 1 (connected to the grey parts of non-design space), the optimization has been launched looking for the minimum achievable mass. Figure 6 shows the results. They correspond to an 83% mass reduction, starting from the CAD value of 0.14 kg. This leads to a final mass of 0.024 kg.
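As a quick sanity check on these figures, the relation between initial mass, reduction fraction and final mass can be sketched (the function name is ours, for illustration only):

```python
def final_mass(initial_mass_kg, reduction_fraction):
    """Remaining mass after a given topology-optimization mass reduction."""
    return initial_mass_kg * (1.0 - reduction_fraction)

# Test 2: an 83% reduction of the 0.14 kg design-space mass
print(round(final_mass(0.14, 0.83), 3))  # -> 0.024
```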
Topological Optimization in Concept Design …
In both tests, the TO finds improved solutions. Nevertheless, Test 1 mass reduction @75% represents a shape refinement rather than a true TO, since it does not change the geometric connectivity of the component.
Fig. 6. Test 2: Final solution
Figure 7.a shows the von Mises equivalent stress of the original team's design, computed by FEM in Ansys; Figure 7.b shows the results of the two improved solutions investigated, Test 1 mass reduction @50% and Test 2, respectively (computed in Inspire). As expected, the most stressed areas are those with the greatest amount of material, around the pin hole and at the bolt locations. The original design shows a stress below the yield stress (about 200 MPa). Similar stress values are found via TO, since the red areas must be considered outlier values due to stress concentration.
Fig. 7. Von Mises equivalent stress: a) Sapienza Corse’s model; b) Test 1 mass reduction @50%
and Test 2 (with the connector for load transfer not hidden).
Figure 7.b also shows the Von Mises equivalent stress for Test 2 (note that in this case the locking pin seems to be present, since it is graphically represented by the connector used in the load definition, as discussed in section 4). This last result exhibits a more uniform stress distribution, since a smoother change of the model boundaries is obtained when starting from a larger volume. Moreover, it must be pointed out that stress concentrations are always found near the non-design space at the bolts. This is a further confirmation of the necessity of a subsequent smoothing, e.g. rounding, near abrupt changes of section or shape. It can be seen as a shape and size refinement that is necessary for the next detailed design.
6 Conclusions
Concerning the ability to capture the design intent, both tests are well-posed in terms of starting domain and loads. This confirms the suggestion of Bendsøe to use TO as "a creative sparring partner" [4] or a design speed-up tool. In the case of TO as a concept design tool (Test 2), a total mass of 0.024 kg is found. Compared to the 0.031 kg of the final Sapienza Corse component, this is an interesting result for TO. So why did Sapienza Corse not achieve the same result? Basically because of cost, and because the design targeted a non-additive manufacturing process. Thus, in our opinion, it is clearly demonstrated that, via CAD modifications, the preliminary shape design produced by TO can easily be modified in the next design step (detailed design) according to other design requirements. This leads to an "intrinsic" lightweight or compliant design that can be made heavier or stiffer later. In this sense, even without adopting Additive Manufacturing, TO allows overcoming the classical workflow "rough concept first, then the optimization" towards "optimal concept design first; manufacturing constraints, if any, later".
Acknowledgments The research work reported here was made possible by the kind help of the
whole “Sapienza Corse” team of the University of Rome “La Sapienza” (Italy).
References
1. Ulrich, K.T., Eppinger, S.D., Product Design and Development, 2003, McGraw-Hill Education (India) Pvt Limited.
2. Gibson, I., Rosen, D.W., Stucker, B., Additive Manufacturing Technologies: Rapid Prototyping to Direct Digital Manufacturing, 2010, Springer.
3. Oropallo, W., Piegl, L.A., Ten challenges in 3D printing, Engineering with Computers, January 2016, Volume 32, Issue 1, pp. 135-148.
4. Bendsoe, M.P., Sigmund, O., Topology Optimization: Theory, Methods, and Applications, 2003, Springer.
5. Kelley, J.L., General Topology, New York: Springer-Verlag, 1975.
6. Eschenauer, H.A., Olhoff, N., Topology optimization of continuum structures: A review, Appl. Mech. Rev. 54(4), 331-390 (Jul 01, 2001).
7. Deaton, J.D., Grandhi, R.V., A survey of structural and multidisciplinary continuum topology optimization: post 2000, Structural and Multidisciplinary Optimization, January 2014, Volume 49, Issue 1, pp. 1-38.
8. Tanskanen, P., The evolutionary structural optimization method: theoretical aspects, Computer Methods in Applied Mechanics and Engineering, Volume 191, Issues 47-48, 22 November 2002, pp. 5485-5498.
9. Roubíček, T. (1997). Relaxation in Optimization Theory and Variational Calculus. Berlin:
Walter de Gruyter. ISBN 3-11-014542-1.
10. Dunning PD, Kim HA, Mullineux G (2011) Introducing loading uncertainty into topology
optimization. AIAA J 49(4):760–768.
Section 2.2
Advanced Manufacturing
Simulation of laser-sensor digitizing for on-machine part inspection
Nguyen Duy Minh PHAN, Yann QUINSAT and Claire LARTIGUE1*
1 LURPA, ENS Cachan, Univ. Paris-Sud, Université Paris-Saclay, 94235 Cachan, France
* Corresponding author. Tel.: +33-147-402-986 ; fax: +33-147-402-200. E-mail address:
lartigue@lurpa.ens-cachan.fr
Abstract: Integrating measurement operations for on-machine inspection in a 5-axis machine tool is a complex activity requiring a significant limitation of measurement time in order not to penalize the production time. When using a laser-plane sensor, time optimization must be done while keeping the quality of the acquired data. In this paper, a simulation tool is proposed to assess a given digitizing trajectory. This tool is based on the analysis of sensor configurations relative to the geometry of the studied part.
Keywords: In situ measurement, Laser plane sensor, Digitizing quality, Trajectory simulation
1 Introduction
Integrating inspection procedures within the production process involves rapid
decision-making regarding the conformity of parts. In the case of on-machine inspection for instance, part geometry measurements are performed in the same
phase as the machining operations without removing the part from its set-up,
which facilitates comparing the machined part to its CAD model for the conformity analysis. This also contributes to reducing the time allocated to measurement.
Within this context, a few recent studies have focused on the use of laser-plane
sensors to carry out on-machine inspections, as they have a great ability to measure deviations of machined parts within a time consistent with on-machine inspection [1,2]. In the particular case of the milling process, laser-plane sensors can be integrated in the machine tool, the sensor replacing the cutting tool. As the measurement operation is performed while the process is stopped, one of the main issues
concerns sensor path-planning. In fact, the time allocated to measurement must be
minimized to preserve global production time, but the quality of the acquired data
must be sufficient to measure potential deviations.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_31
Fig. 1. Laser-sensor in the machine-tool
The integration of a laser-sensor in a machine-tool is an issue little addressed in
the literature. The fact that the sensor accessibility is increased, considering the 5
degrees of freedom plus the spindle rotation, is an opportunity. Indeed, as for lasers mounted on industrial robots, this gives the sensor the possibility to scan an
object from any direction even along curved paths [3]. Wu et al. [4] propose a
method of sensor path planning for surface inspection on a 6 degree-of-freedom
robot that automatically adapts its trajectory to the complex shape of the object by
continuously changing the viewing direction of the scanner mounted on the robot.
Each viewpoint of the planned path must however satisfy several constraints: field
of view, scanning distance, view angle and overlap. Within the context of sensor
path planning for part inspection, it is quite classical to impose constraints on the sensor relative to the part to be measured. In [5], the authors introduce the concept of visibility (local and global) to generate a sensor trajectory well-adapted to
the control of complex parts. Prieto et al. propose to keep the sensor as normal as
possible to the surface, while obeying a criterion of quality depending on both the
scanning distance and the sensor view angle [6]. Son et al. include additional constraints such as the number of required scans and the checking of occlusions [7].
Yang and Ciarallo use a genetic algorithm to obtain a set of viewing domains and
a list of observable entities for which the errors are within an acceptable tolerance
[8]. The approach developed in [9] relies on the representation of the part surface
as a voxel map, for which the size of each voxel is defined according to the sensor
field of view (fov). To each voxel, a unique point of view is associated according to visibility and quality criteria, leading to a set of admissible viewpoints that ensure
the surface digitizing with a given quality. Mavrinac et al. [10] formalize the
search of sensor viewpoints under constraints for 3D inspection using an active
triangulation system. Each viewpoint (sensor configuration) is assessed thanks to a
performance function that results from the combination of constraint functions to
be respected. This interesting approach seems valuable to assess the validity of a
sensor trajectory prior to its optimization.
This paper deals with laser-sensor trajectories well-adapted for on-machine inspection on a 5-axis milling machine-tool. First, we propose to define an original
description format of the sensor trajectory directly interpretable by the CNC of the
machine-tool. The laser-plane sensor takes the place of the cutting tool in the
spindle. This original format takes advantage of the additional degree of freedom
given by the spindle rotation. The sensor trajectory is a series of ordered viewpoints that must satisfy a set of constraints (visibility, quality, number of viewpoints, overlaps, etc.). Prior to the stage of trajectory optimization, it might be interesting to have a tool assessing given trajectories according to the constraints to
be satisfied. In this direction, the second part of the paper presents a method for
simulating the digitizing from a given trajectory.
2 Sensor trajectory in 5-axes
A description format must be defined that can be interpreted by the CNC of the machine-tool. The study applies to laser-plane sensor types for which the field of view (fov) is planar (2D) (figure 2). The sensor trajectory consists in a set of ordered sensor configurations (each configuration defining a viewpoint), i.e. a set of positions and orientations. The sensor orientation is given by the couple of vectors $\vec{v}_c$, the director vector of the light-beam axis, and $\vec{v}_L$, the director vector of the digitizing line (figure 2). By analogy with cutting-tool trajectories, for which the cutter location point (CL point) is the tool extremity [11], the sensor position is defined through the point $C_E$, which positions the digitizing line:

$$\overrightarrow{C_0C_E}=d^*\cdot\vec{v}_c$$
Therefore, in the part frame, the sensor trajectory is a set of configurations $(C_E;\vec{v}_c;\vec{v}_L)$ expressed as a set of coordinates (X, Y, Z, I, J, K, I*, J*, K*) (figure 2). This trajectory is expressed in the machine-tool frame thanks to the Inverse Kinematics Transformation, leading to (X, Y, Z, A, C, W), where A and C are the classical rotary angles of an RRTTT machine tool and W is the spindle indexation. This additional degree of freedom is particularly interesting to orient the laser beam relatively to the surface.
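To illustrate the Inverse Kinematics Transformation of the orientation part of a configuration, the sketch below maps a unit vector (I, J, K) to rotary angles (A, C) under one common A/C-head convention, (I, J, K) = Rz(C)·Rx(A)·(0, 0, 1); the actual convention and signs depend on the machine kinematics, and the linear axes and the spindle indexation W are omitted here.

```python
import math

def orientation_to_ac(i, j, k):
    """Map a unit orientation vector (I, J, K), expressed in the part frame,
    to rotary angles (A, C) in degrees, assuming the A/C-head convention
    (I, J, K) = Rz(C) @ Rx(A) @ (0, 0, 1)."""
    k = max(-1.0, min(1.0, k))           # guard acos against rounding
    a = math.degrees(math.acos(k))
    # When the axis is vertical, C is undefined; return 0 by convention.
    c = math.degrees(math.atan2(i, -j)) if abs(k) < 1.0 else 0.0
    return a, c
```

With this convention, a vertical axis (0, 0, 1) maps to A = C = 0, and tilting the axis in the Y-Z plane only drives A.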
Fig. 2. Parameters defining the sensor trajectory for on-machine inspection: a few lines of the trajectory in the part frame (X, Y, Z, I, J, K, I*, J*, K*) and, after the Inverse Kinematics Transformation, in the machine-tool frame (X, Y, Z, A, C, W)
Indeed, the sensor trajectory is classically defined to satisfy a set of constraints,
generally visibility and quality constraints, leading to the ordered series of sensor
configurations $(C_E;\vec{v}_c)$. The width of the scanning line, which can be assimilated to the width of the cutting tool, varies as a function of the scanning distance d* (see figure 2). The additional degree of freedom given by the spindle indexation W permits orienting $\vec{v}_L$. It is thus possible to plan a sensor trajectory so that the width of the scanning line is maximized by modifying the spindle indexation.

After describing the parameter setting for the definition of a digitizing trajectory on a 5-axis CNC machine-tool, a simulation tool is proposed with the aim of assessing the quality of the acquired data.
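Assuming the width of the fov varies linearly between L_min and L_max across its height H (a reasonable first-order model of the triangulation geometry, not stated in the paper), the usable width of the scanning line at a given scanning distance can be estimated as:

```python
def scan_line_width(d, d_bottom, H, L_min, L_max):
    """Interpolated width of the digitizing line at scanning distance d.
    d_bottom is the distance of the bottom of the fov; L_min and L_max are
    the fov widths at its bottom and top. Returns None outside the fov
    (the surface is not visible there). Linear variation is an assumption."""
    if not (d_bottom <= d <= d_bottom + H):
        return None
    t = (d - d_bottom) / H                 # 0 at the bottom, 1 at the top
    return L_min + t * (L_max - L_min)
```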
3 Digitizing simulation
Considering a given sensor trajectory, a simulator has been developed to assess
this trajectory with regard to a set of constraints. As reported in the literature, the
most usual constraints are visibility, and digitizing quality. The laser scanner is
characterized by its actual fov, which corresponds to the area of the scanning plane
which is visible by the camera. The fov is thus defined by its height, H, and its
widths $L_{min}$ and $L_{max}$, corresponding to the minimal and maximal heights in the fov. Hence, a portion of the part surface is visible if it belongs to the fov so defined (figure 4). However, as the sensor moves from one configuration $(C_{E_i};\vec{v}_{c_i})$ to another $(C_{E_{i+1}};\vec{v}_{c_{i+1}})$, a portion of surface is digitized if it belongs to the swept volume created by the displacement of the laser plane from the first configuration to the second, as displayed in figure 4.
The orientation of the sensor relatively to the surface and the digitizing distance
characterize the quality of the acquired data. Therefore, the digitizing quality is
evaluated according to both the scanning distance d and the view angle $\alpha$, as we proposed in previous studies [5,13]. Parameters defining the digitizing conditions are summarized in figure 4 and table 1.
Fig. 4. Parameters of the laser sensor and digitizing volume between 2 configurations (sensor trajectory from $C_{E_i}$ to $C_{E_{i+1}}$, laser plane of height H, digitized and nominal surfaces)
To develop our approach, we take advantage of the formalism proposed in
[4,10], in which the constraints to be verified are expressed as a combination of
functions. Such a formalism is rather flexible, as it permits adding or removing constraints according to the complexity we want to introduce in trajectory generation. The CAD model of the part is tessellated through the STL format. This gives a
mesh defined by a set $S^T$ of n triangular facets $T_j$. Each facet $T_j$ is defined by 3 vertices denoted $V_j^k$, and the normal vector to the facet is denoted $\vec{n}_j$ (see table 1).
Table 1. Parameters for surface digitizing

Mesh parameters:
- $S^T$, set of n triangular facets
- $T_j$, facet j, $T_j \in S^T$, $j \in [1, n]$
- $S_v$, set of vertices
- $V_j^k$, vertex of a facet $T_j$, $k \in [1, 3]$
- $\vec{n}_j$, normal to the facet $T_j$

Digitizing parameters:
- $V_{C_{E_i}C_{E_{i+1}}}$, swept volume between 2 configurations
- $d_E$, distance from the bottom of the fov to $C_{E_i}$
- $H$, distance from the bottom to the top of the fov
- $\alpha_{max}$, limited view angle of the sensor
- $d_{V_j^k}$, distance from the bottom of the fov to the vertex $V_j^k$
In a first approach only 2 functions are considered: the visibility function and
the quality function. These functions are applied to the tessellated model.
3.1 Visibility function
The visibility function is used to determine the facets which we denote as seen
by the laser sensor. As the trajectory is a set of sensor configurations, the visibility
function is defined for each trajectory segment, i.e. between two successive configurations. For each facet Tj of the CAD model, the function is defined as a combination of two functions as expressed in equation 1:
$$F_V^*(T_j)=F_V(T_j)\cdot F_{s\alpha}(T_j)\qquad(1)$$
The swept-facet function $F_V(T_j)$ checks whether the facet belongs to the volume $V_{C_{E_i}C_{E_{i+1}}}$ swept by the laser beam between 2 configurations. A facet belongs to the swept volume if all its vertices belong to it:

$$F_V(T_j)=\begin{cases}1, & \text{if } \forall k\in[1,3],\ V_j^k\in V_{C_{E_i}C_{E_{i+1}}}\\ 0, & \text{otherwise}\end{cases}\qquad(2)$$
Generally, the view angle is limited [12]. If the angle between the normal vector to the facet and $\vec{v}_c$ exceeds the maximal view angle $\alpha_{max}$, the facet is not seen. This is expressed by:

$$F_{s\alpha}(T_j)=\begin{cases}1, & \text{if } \vec{n}_j\cdot\vec{v}_c\geq\cos(\alpha_{max})\\ 0, & \text{otherwise}\end{cases}\qquad(3)$$
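Equations (1)-(3) translate directly into code; in the sketch below, `in_swept_volume` is a caller-supplied predicate standing in for the actual swept-volume membership test, whose implementation depends on the trajectory:

```python
import numpy as np

def f_v(vertices, in_swept_volume):
    """Swept-facet function F_V, eq. (2): all 3 vertices of the facet must
    belong to the volume swept between two sensor configurations."""
    return 1 if all(in_swept_volume(v) for v in vertices) else 0

def f_s_alpha(n_j, v_c, alpha_max_deg=60.0):
    """Angle function F_salpha, eq. (3): the facet is seen if the angle
    between its normal n_j and the beam axis v_c stays within alpha_max."""
    return 1 if np.dot(n_j, v_c) >= np.cos(np.radians(alpha_max_deg)) else 0

def f_v_star(vertices, n_j, v_c, in_swept_volume, alpha_max_deg=60.0):
    """Combined visibility function F_V*, eq. (1)."""
    return f_v(vertices, in_swept_volume) * f_s_alpha(n_j, v_c, alpha_max_deg)
```

The default angle of 60° is only a placeholder; the actual limit is identified experimentally in section 4.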
At the end of this stage, when the whole sensor trajectory is considered (i.e. when all the trajectory segments defined by $(C_{E_i};\vec{v}_{c_i})$ and $(C_{E_{i+1}};\vec{v}_{c_{i+1}})$ are considered), all the facets verifying $F_V^*(T_j)=1$ are characterized as seen and define the set $S^{T_s}=\{T_j\in S^T,\ F_V^*(T_j)=1\}$.
3.2 Quality function
The visibility of a facet does not ensure the digitizing quality. Indeed, numerous studies point out the importance of the digitizing distance and of the view angle on the digitizing noise, factors that strongly influence the digitizing quality [5,9,12,13]. Quality is ensured when the digitizing noise is less than a threshold, generally set by the user according to the considered application. This yields admissible ranges for both the digitizing distance and the view angle, allowing the definition of the quality function as follows:

$$F_{ws}(T_j)=F_V^*(T_j)\cdot F_{wsd}(T_j)\cdot F_{ws\alpha}(T_j)\qquad(4)$$
In equation (4), $F_{wsd}$ and $F_{ws\alpha}$ account for the quality in terms of digitizing distance and view angle, respectively. A facet is said to be well-seen in terms of digitizing distance if all its vertices belong to the admissible range of digitizing distances $I_{ad}$:

$$F_{wsd}(T_j)=\begin{cases}1, & \text{if } \forall k\in[1,3],\ d_{V_j^k}\in I_{ad}\\ 0, & \text{otherwise}\end{cases}\qquad(5)$$
A facet is said to be well-seen in terms of view angle if the angle between the normal vector to the facet and $\vec{v}_c$ belongs to the admissible range of view angles defined by $\alpha_1$ and $\alpha_2$:

$$F_{ws\alpha}(T_j)=\begin{cases}1, & \text{if } \cos(\alpha_1)\leq\vec{n}_j\cdot\vec{v}_c\leq\cos(\alpha_2)\\ 0, & \text{otherwise}\end{cases}\qquad(6)$$
At the end, all the facets verifying $F_{ws}(T_j)=1$ are characterized as well-seen and define the set $S^{T_{ws}}=\{T_j\in S^{T_s},\ F_{ws}(T_j)=1\}$. All the other seen facets are tagged as poorly-seen and in turn define the set $S^{T_{ps}}=\{T_j\in S^{T_s},\ F_{ws}(T_j)=0\}$, with $S^{T_s}=S^{T_{ws}}\cup S^{T_{ps}}$. The facets which are not seen define the set $S^{T_{ns}}$, the complement of $S^{T_s}$ in $S^T$: $S^T=S^{T_s}\cup S^{T_{ns}}$.
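A compact sketch of the classification induced by equations (4)-(6); the default ranges are illustrative placeholders, not the identified sensor values:

```python
import numpy as np

def classify_facet(fv_star, d_vertices, n_j, v_c,
                   i_ad=(20.0, 50.0), a1_deg=60.0, a2_deg=0.0):
    """Classify one facet as not-seen / poorly-seen / well-seen.
    fv_star: visibility result of eq. (1); d_vertices: digitizing distances
    of the 3 vertices; i_ad, a1_deg, a2_deg: admissible ranges (assumed)."""
    if fv_star == 0:
        return "not-seen"
    f_wsd = all(i_ad[0] <= d <= i_ad[1] for d in d_vertices)   # eq. (5)
    dot = float(np.dot(n_j, v_c))
    f_wsa = np.cos(np.radians(a1_deg)) <= dot <= np.cos(np.radians(a2_deg))  # eq. (6)
    return "well-seen" if (f_wsd and f_wsa) else "poorly-seen"
```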
4 Results and discussion
The objective here is to validate our simulator by comparing the digitizing obtained with our simulator to the actual digitizing, considering various trajectories.
The simulator is tested using a case study, and for the laser sensor Zephyr KZ25
(www.kreon3d.com). Although most of the sensor parameters are given by the
manufacturer, a protocol of sensor qualification is required to identify the actual
sensor parameters such as the dimensions of the fov, or the limited view angle, but
also to identify quality parameters that define the admissible ranges of digitizing
distances and view angles.
4.1 Sensor parameters
First, the dimensions of the fov are identified by simply measuring a reference
plane. As the intersection of the reference plane and the laser-beam is a line, the
height H of the fov is identified by observing if the line is visible in the CCD. The
experiment gives H = 50 mm. According to the protocol defined in [13], the evolution of the digitizing noise, denoted $\delta$, is identified as a function of the digitizing distance and the view angle. The digitizing noise accounts for the dispersion of the
measured points with respect to a reference element, and it is usually evaluated by
measuring a reference plane surface for different digitizing distances and various
view angles.
Fig. 5. Noise in function of the scanning distance (a) and the view angle (b).
The evolution of the digitizing noise as a function of the digitizing distance exhibits a significant decrease of the noise from the bottom position to the top position of the fov (figure 5a). On the other hand, the evolution of the noise as a function of the view angle does not show a significant trend (figure 5b). However, it can be pointed out that the maximal view angle is equal to $\alpha_{max}=60°$, and that over the whole range [0°; 60°] the noise remains less than 0.015 mm. Considering that value as the quality threshold $\delta_{ad}$, the admissible range of digitizing distances $I_{ad}$ is defined as [20; 50] mm. These two intervals guarantee a digitizing noise less than $\delta_{ad}$ = 0.015 mm.
4.2 Simulator tests
The sensor trajectories used to test our simulator are classical pocket-type trajectories. For the first tests, the sensor orientation is constant, and the trajectory
consists in a set of points CE defined at a constant altitude z (figure 6). To assess
the simulator, the simulated digitizing is compared to the actual one. For this purpose, actual digitizing was carried out using a Coordinate Measuring Machine
(CMM) equipped with a motorized indexing head, which enables the scanner to be
oriented according to repeatable discrete orientations. We choose to assess our
simulator using a CMM, because a 3-axis Cartesian CMM is a machine with less
geometrical defaults than a machine-tool. But this does not change anything in the
principle of our simulator. On the CMM, the orientations of the sensor are given
by the two rotational angles A and B. Therefore, the trajectories expressed in the
part coordinate system (for the simulation) must first be expressed in the CMM
coordinate system (figure 6).
Fig. 6. Scanning trajectories for test (A = 0°; B = 90°; z = 0 mm): the trajectory expressed in the part coordinate system (file _A0B90) and, after transformation, in the machine (CMM) coordinate system with the rotary angles A and B
Different trajectories for various digitizing distances and sensor orientations
have been tested. Only results associated with one orientation (A = 0°; B = 90°)
and two different distances (z = 0 and z = 30mm) are commented in this paper.
The algorithm is applied to the tessellated CAD model of the part, and facets are
classified in the corresponding set according to visibility and quality functions as
proposed in section 3. To simplify the representation, a color code is adopted:
well-seen facets are green, poorly-seen facets are orange, and not-seen facets are
red (table 2). On the other hand, the actual digitizing gives a point cloud which is
registered onto the mesh model. For each facet, a cylinder is created whose basis is the triangle defining the facet and whose height is the maximal measurement error.
Simulation of laser-sensor digitizing for …
311
Table 2. Results for actual and simulated digitizing (simulated vs. actual digitizing, for A = 0°; B = 90°, at z = 0 mm and z = 30 mm).
The set of digitized points belonging to the cylinder so defined corresponds to
the actual digitized facet. To compare actual digitizing to its simulation, we have
to characterize each facet according to visibility and quality functions in the same
way. In this direction, we consider that a facet is not-seen if the density of points associated with the facet is less than 5 points/mm²; the facet color is then red. For each
facet, the geometrical deviations between the digitized points and the facet are calculated. The associated standard deviation accounts for the actual digitizing noise. If the noise is greater than the threshold $\delta_{ad}$ = 0.015 mm, the facet is tagged as poorly-seen and its color is set to orange. Conversely, if the noise is less than $\delta_{ad}$, the facet is tagged as well-seen and its color is green. The results displayed in table 2 bring out the good similarity between simulation and actual digitizing. This is particularly marked for the trajectory z = 0. However, some differences exist for which the simulator underestimates the digitizing: a whole area which appears red in the simulation is green in the actual digitizing (on the left of the part for the trajectory z = 30 mm, for instance). This is likely due to the fact that the digitizing noise is evaluated using an artefact with a specific surface treatment which makes the surface very absorbent, whereas the part is coated with a white powder that mattifies the surface; digitizing is thus facilitated. Nevertheless, the simulator turns out to be an interesting predictive tool prior to sensor trajectory planning.
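The facet-tagging rules used to post-process the actual digitizing (density threshold, then noise against δ_ad) can be sketched as:

```python
import numpy as np

DENSITY_MIN = 5.0    # points/mm^2, below which a facet is tagged not-seen
DELTA_AD = 0.015     # mm, digitizing-noise threshold

def classify_measured_facet(deviations, facet_area_mm2):
    """Tag a facet from actual digitizing data.
    deviations: signed distances of the captured points to the facet;
    facet_area_mm2: area of the triangular facet."""
    n_pts = len(deviations)
    if facet_area_mm2 <= 0 or n_pts / facet_area_mm2 < DENSITY_MIN:
        return "not-seen"       # red
    noise = float(np.std(deviations))
    return "poorly-seen" if noise > DELTA_AD else "well-seen"
```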
5 Conclusion
Within the context of on-machine inspection using laser-plane digitizing systems, sensor trajectory planning is a challenge. To ensure the efficiency of the
measurement, it is necessary to minimize measurement time while ensuring the
312
N.D.M. Phan et al.
quality of the acquired data. The presented work proposes a description format of a sensor trajectory well-adapted to on-machine inspection on 5-axis machine-tools. Given a digitizing trajectory, a simulation tool assessing the quality of the acquired data is presented.
After a real digitizing, a good similarity between simulation and actual digitizing
can be observed. The simulator is thus an interesting predictive tool that can be
used to assist in finding the best strategy to digitize the part with a quality consistent with geometrical deviations obtained in milling.
References
1. L. Dubreuil, Y. Quinsat, C. Lartigue, Multi-sensor approach for multi-scale machining defect
detection, Joint Conference On Mechanical, June 2014, Toulouse, France, Research in Interactive Design Vol. 4
2. F. Poulhaon, A. Leygue, M. Rauch, J-Y. Hascoet, and F. Chinesta, Simulation-based
adaptative toolpath generation in milling processes, Int. J. Machining and Machinability of
Materials, 2014, 15 (3/4), pp.263–284.
3. S. Larsson and Johan AP Kjellander. Path planning for laser scanning with an industrial robot. Robotics and Autonomous Systems, 2008, 56(7), pp.615–624.
4. Q. Wu, J. Lu, W. Zou, and D. Xu. Path planning for surface inspection on a robot-based
scanning system. In Mechatronics and Automation (ICMA), IEEE International Conference
on, 2015, pp. 2284–2289.
5. A. Bernard, M. Véron, Visibility theory applied to automatic control of 3d complex parts using plane laser sensors. CIRP Annals-Manufacturing Technology, 2000, 49(1), pp.113–118.
6. Prieto, F., Redarce, H., Lepage, R., and Boulanger, P., Range image accuracy improvement
by acquisition planning. In Proceedings of the 12th conference on vision interface (VI’99),
Trois Rivieres, Québec, Canada, 1999, pp.18–21.
7. Son, S., Park, H., and Lee, K. H., Automated laser scanning system for reverse engineering
and inspection. International Journal of Machine Tools and Manufacture, 2002, 42(8),
pp.889–897.
8. Yang, C. C. and Ciarallo, F. W., Optimized sensor placement for active visual inspection.
Journal of Robotic Systems, 2001, 18(1), pp.1–15.
9. Lartigue, C., Quinsat, Y., Mehdi-Souzani, C., Zuquete-Guarato, A., and Tabibian, S., Voxelbased path planning for 3d scanning of mechanical parts. Computer-Aided Design and Applications, 2014, 11(2), pp.220–227.
10. Mavrinac, A., Chen, X., and Alarcon-Herrera, J. L., Semiautomatic model-based view planning for active triangulation 3-d inspection systems. Mechatronics, IEEE/ASME Transactions
on, 2015, 20(2), pp.799–811.
11. S. Lavernhe, Y. Quinsat, C. Lartigue, Model for the prediction of 3D surface topography in
5-axis milling, International Journal of Advanced Manufacturing Technology, 2010, 51, pp.
915–924.
12. M. Mahmud, D. Joannic, M. Roy, A. Isheila, J.-F. Fontaine, 3D part inspection path planning
of a laser scanner with control on the uncertainty, Computer-Aided Design , 2011, 43, pp.
345–355.
13. C. Mehdi-Souzani, Y. Quinsat, C. Lartigue, P. Bourdet, A knowledge database of qualified digitizing systems for the selection of the best system according to the application, CIRP Journal of Manufacturing Science and Technology, 2016, DOI: 10.1016/j.cirpj.2015.12.002
Tool/Material Interferences Sensibility to Process and Tool Parameters in Vibration-Assisted Drilling
Vivien BONNOT1*, Yann LANDON1 and Stéphane SEGONDS1
1 Université de Toulouse; CNRS; USP; ICA (Institut Clément Ader), 3 rue Caroline Aigle, 31400 Toulouse, France.
* Corresponding author. Tel.: +33 561 17 10 72. E-mail address: vivien.bonnot@univ-tlse3.fr
Abstract Vibration-assisted drilling is a critical process applied on high-value
products such as aeronautic parts. This process performs discontinuous cutting and
improves the drilling behavior of some materials in terms of chip evacuation, heat generation, mean cutting force, etc. Several research papers have illustrated the differences between vibration-assisted and conventional drilling, hence demonstrating that conventional drilling models may not apply. In this process, the cutting conditions evolve drastically along the trajectory and the tool radius. The tool/material interferences (back-cutting and indentation) proved to contribute significantly to the thrust force. A method properly describing all rigid interferences is detailed. A local analysis of the influence of the tool geometry and process parameters on interferences is presented. The interference distribution on the tool surfaces is highlighted, and the presence of back-cutting far away from the cutting edge is
confirmed. A comparison is performed in conventional drilling between the predicted shape of the interferences on the tool surfaces and the real shape of a used
tool. The most interfering areas of the tool surfaces are slightly altered to simulate
a tool grind, the interference results are compared with the original tool geometry,
and significant interference reduction is observed.
Keywords: Vibration-assisted drilling, analytical simulation, interferences,
sensibility analysis.
1 Introduction
The drilling process is performed in the fabrication of countless industrial products. As an example, hundreds of thousands of holes are necessary to assemble aeronautic structures. These holes are performed at the end of the manufacturing process on high-value parts. Considering the economic risk, the drilling process has to be highly reliable and efficient. The vibration assistance improves the drilling behavior by forcing the fragmentation of the chip, hence facilitating its evacuation.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering and Manufacturing, Lecture Notes in Mechanical Engineering, DOI 10.1007/978-3-319-45781-9_32
This behavior results in an improved reliability and efficiency of the process.
The process adds axial vibrations to conventional drilling trajectory. As a result, the cutting and interferences conditions drastically evolve along the tool radius and the trajectory. Knowing these conditions is required to master the process
[1][2]. Several thrust force models have been proposed for cutting [3][4], back-cutting [5], and indentation [3]. Le Dref [3] segmented the tool cutting edge to apply a different model on each segment, based on gaps in the local cutting conditions.
Ladonne [6] proposed a dynamic model including the tool holder, source of the
vibrations. Bondarenko [5] considered the back-cutting surfaces as a succession of
cutting edges erasing the material. This study illustrates an alternative approach by taking the entire tool geometry into consideration.
In this article, the studied process is detailed, and the presence of interferences far from the cutting edge is illustrated. The inputs, the data used to describe the tool and trajectory parameters, are first described, alongside the outputs: instantaneous and integral measurements of the interferences. Then, the first interference results in conventional drilling are presented. Subsequently, the influence of the tool and trajectory parameters with vibrations is estimated around a fixed set of entry parameters. Finally, the most interfering areas on the tool surface are identified; these areas are then corrected to simulate a tool grinding, and the interferences are re-evaluated.
2 Process interferences and simulation details
2.1 Difference between clearance angle and clearance profile
Several technologies include vibrations in drilling; each may be categorized using the following indicators: self-generated [7-8] or forced [9, 3] vibrations, high [9] or low [3] frequency, high or low amplitude, and also more complex processes using different technologies simultaneously [10]. Under high frequency and low amplitude, several studies do not even consider interferences [11-12]. This study mainly focuses on high-amplitude, low-frequency forced vibrations; such conditions are obtained using a system inside the tool holder including a sinusoidal bearing. Under these conditions, the cutting is discontinuous and interferences are proven to significantly influence the process behavior. Nevertheless, the following strategy may apply to any of the previous categories.
Ztool(θ) = (f / 2π) · θ + (a / 2) · sin(W · θ)        (1)
Tool/Material Interferences Sensibility to Process …
Equation (1) above [13] describes the tool trajectory. The left term expresses the forward movement, where f is the feed rate and θ is the tool angular position. The right term expresses the oscillations, where a is the amplitude and W is the frequency of the oscillations (osc/rev), closely tied to half the number of lobes in the sinusoidal bearing. Given such an equation, and the number of tool teeth, one may represent the cutting profile (the edge trajectory opposed to the corresponding downhole surface left by the previous cut), as represented in Figure 1.
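As an illustration, the trajectory of Eq. (1) can be sampled to check when the uncut chip thickness (the gap between the current tooth pass and the surface left by the preceding tooth) vanishes, which is the condition for chip fragmentation. The following Python sketch assumes a two-tooth drill and the nominal parameters of Table 1; the function names are illustrative, not the authors' implementation:

```python
import math

def z_tool(theta, f=0.2, a=0.2, W=1.5):
    """Axial tool position, Eq. (1): forward feed plus oscillation."""
    return f / (2 * math.pi) * theta + a / 2 * math.sin(W * theta)

def chip_thickness(theta, teeth=2, f=0.2, a=0.2, W=1.5):
    """Uncut chip thickness for one tooth of a `teeth`-tooth drill:
    current pass minus the surface left by the preceding tooth, which
    passed the same angular position 2*pi/teeth earlier."""
    dtheta = 2 * math.pi / teeth
    return z_tool(theta, f, a, W) - z_tool(theta - dtheta, f, a, W)

# Sample one revolution: with f = 0.2 mm/rev, a = 0.2 mm, W = 1.5 osc/rev
# the thickness becomes negative, i.e. the cut is interrupted.
h = [chip_thickness(k * 2 * math.pi / 360) for k in range(360)]
print(min(h) < 0 <= max(h))  # True: discontinuous cutting
```

With these values the thickness oscillates around f/2 = 0.1 mm with an amplitude larger than 0.1 mm, which is why the chip fragments.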
Fig. 1. Tooth trajectory profiles.
Usually, interferences are evaluated by comparing the conventional clearance angle and the trajectory angle. The conventional clearance angle may only describe the local geometry behind the cutting edge; such an approach considers the clearance profile to be linear in cylindrical coordinates. Figure 2 illustrates the real cylindrical clearance profiles for several radius samples; these profiles have been generated for illustration using a tool CAD model.
Fig. 2. Tool CAD Model – Clearance Profiles.
The cutting edge is at the 180° mark. The conventional clearance angle can be measured as the tangent to these profiles at 180°. The difference between the tangent and the actual profile becomes significant far away from the cutting edge. Interferences will also occur far away from that edge. This illustrates the benefits of considering the entire clearance geometry.
2.2 Simulation inputs/outputs details
The geometrical evaluation of interferences is based on a Z-level analysis. Additionally, at each evaluation step a cutting volume and an interference volume are removed, and the interference geometrical characteristics for the current step are recorded. The simulation considers a list of entry parameters. The process parameters are the feed rate, the oscillation frequency and amplitude, and the tool rotation frequency N. The tool parameters are angular and distance measurements that can easily be obtained on a tool. These are used to create a CAD model [13]; the local parameters as described by the standard may be retrieved through a geometrical analysis of the CAD model, but they may not be used conveniently to specify the global geometry. An extraction of points describing the tool cutting edge and clearance surface is performed, then the Z-analysis is carried out: the relative point positions are analyzed along the tool trajectory and the interfering volumes are extracted. The following homogeneous rotation matrix is used to move the tool points along the tool trajectory.
        | cos(θ)  -sin(θ)   0   0                          |
  R  =  | sin(θ)   cos(θ)   0   0                          |        (2)
        |   0        0      1   (f/2π)·θ + (a/2)·sin(W·θ)  |
        |   0        0      0   1                          |
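A minimal sketch of how the matrix of Eq. (2) can be applied to move extracted tool points along the trajectory, as a basis for the Z-level comparison. The names and the sample point are illustrative assumptions, not the authors' implementation:

```python
import math

def trajectory_matrix(theta, f=0.2, a=0.2, W=1.5):
    """Homogeneous matrix of Eq. (2): rotation about the tool axis by
    theta, plus the axial advance of Eq. (1) as the z translation."""
    z = f / (2 * math.pi) * theta + a / 2 * math.sin(W * theta)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, z],
            [0,  0, 0, 1]]

def move_point(p, theta):
    """Apply the matrix to one extracted tool point p = (x, y, z)."""
    M = trajectory_matrix(theta)
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(3))

# A clearance-surface point at radius 2 mm, 0.1 mm above the tip,
# moved a quarter turn along the trajectory:
print(move_point((2.0, 0.0, 0.1), math.pi / 2))
```

Each moved point can then be compared Z-wise with the surface left by previous passes to detect interfering volumes.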
The tool parameter list is exhaustive; as most of the parameters do not impact interferences, two parameters that situate a specific point P (Figure 2) will be analyzed. The location of this point influences the clearance profile. H is the height between this point and the tool tip, and θt is the angular sector between this point and the tool nose, measured around the tool tip in the xy plane. This study will analyze the interference behavior locally around the following set of values (Table 1).
Table 1. Set of parameters used in the analysis.
  f           a        W            N         H        θt
  0.2 mm/rev  0.2 mm   1.5 osc/rev  2000 rpm  2.5 mm   45°
The outputs are data characterizing the interference volume and how it was
generated. This volume can be presented in two ways: the volume VPart as removed from the part, or the volume VTool as removed by the tool clearance surface. The first represents the evolution of the interferences over the chip formation cycle, while the latter illustrates the interfering areas on the tool. Three scalar measurements taken on these volumes are compared: the global volume Vt (which
is the same for both representations), the maximum height Hmt of the tool volume,
and the maximum height Hmp of the part volume, given in Table 2. The simulation also allows extracting the evolution of two instantaneous measurements: the interference flow-rate (the additional volume at each calculation step, divided by the time step) Q [mm3/s] and the projected surface S [mm2]. The subsequent paragraphs
detail the results under the conditions of Table 1.
Table 2. Results in terms of global volume and heights.
  Vt           Hmt        Hmp
  0.0287 mm3   0.190 mm   0.133 mm
The total interference profile on the tool (Figure 3) highlights that interferences are concentrated at the common edge between the first and second clearance surfaces. This gives the first insights to improve the interference behavior. Furthermore, the interference flow-rate (Figure 4) reaches its maximum earlier than the projected surface; this gives an insight into the evolution of the interfering conditions, initially concentrated on a small surface with intense penetration, and then spread over a larger surface with lower penetration.
Fig. 3. Integral interferences on the tool (left) VTool and on the part (right) VPart corresponding to one chip per tooth; the black line represents the cutting edge of one tooth.
Fig. 4. Evolution of interference flow-rate Q and projected interference surface S.
2.3 Simulation under classical drilling conditions
The results under classical conditions are consistent (Figure 5). The interference flow-rate profile and projected surfaces are invariant over time, and the distribution of the integral interference volume on the tool is similar to the one that can be observed on a used tool: namely, the maximum interfering radius next to the cutting edge is half of its counterpart on the common edge between the first and second clearance faces (a similar observation can be made under vibratory conditions on the tool distribution, Figure 3). This phenomenon may only be observed with the integral clearance consideration, as it is generated by the rotation of the angled clearance face.
Fig. 5. Aluminum interference residues from conventional drilling, measured (a), and corresponding cumulated interferences simulated on the tool at the same position (b); different scales, similar patterns.
3 Evaluation of parameter influence over interference characteristics
In order to evaluate the influential parameters around the described conditions, the local partial variabilities of the outputs were evaluated. The results are presented in Table 3. The analysis was conducted on three of the process parameters and two of the tool parameters. For clarity, units are not detailed.
Table 3. Results of local partial variabilities of the outputs.
                       df (mm/rev)   da (mm)   dW (osc/rev)   dH (mm)   dθt (deg)
  dVt/d…  (mm3)/(…)    0.403         0.0195    0.0886         -0.062    0.0004
  dHmt/d… (mm)/(…)     1.161         -0.019    0.214          -0.0043   0.0008
  dHmp/d… (mm)/(…)     0.668         -0.002    -0.082         0.0001    0.00001
To interpret these results, these local variabilities are applied considering a 5% variation in most entry parameters. The output percentage variations can
be expressed (Table 4). A 1% variation of the oscillation frequency was considered, as it corresponds to the uncertainty of the frequency according to the results of Jallageas [14].
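The link between Table 3 and Table 4 can be sketched as a first-order estimate: output variation equals the local derivative times the parameter variation, expressed as a percentage of the nominal output. A small worked check for the feed-rate column, using the nominal values of Tables 1 and 2:

```python
# First-order sensitivity estimate for the interference volume Vt.
f_nominal = 0.2       # feed rate, mm/rev (Table 1)
V_t = 0.0287          # nominal interference volume, mm^3 (Table 2)
dVt_df = 0.403        # local derivative, mm^3 per (mm/rev) (Table 3)

df = 0.05 * f_nominal                  # a 5 % feed-rate variation
dVt_percent = 100 * dVt_df * df / V_t  # relative change of Vt
print(round(dVt_percent))              # 14, matching Table 4
```

The same first-order product reproduces the other percentage entries of Table 4 from the derivatives of Table 3, within rounding.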
Table 4. Variabilities of the outputs considering parameters variation.
              df 5%       da 5%    dW 1%        dH 5%    dθt 2%
              (mm/rev)    (mm)     (osc/rev)    (mm)     (deg)
  dVt (mm3)   14%         < 1%     < 1%         20%      1.4%
  dHmt (mm)   7%          < 1%     < 1%         < 1%     < 1%
  dHmp (mm)   5%          < 1%     < 1%         < 1%     < 1%
The feed rate and the height of the CAD parametric point have a significant influence on the interfered volume around these local values. These results may change drastically under other local conditions, closer to or further away from chip fragmentation, and must be taken as an example. For instance, the amplitude will have an influence at some point, as it determines the fragmentation.
4 Influence of tool geometry: tool grinding to reduce the interference volume
Considering the previous results regarding the cartography of the integral interference on the tool, most of the interference is carried by the common edge between the first and second clearance surfaces. The CAD model allows us to easily modify that edge by changing the height of the red-marked point, namely changing the value of H. The vertical distance between the tool tip and the considered point has been increased by 0.1 mm. As expected, the interfered volume is reduced significantly (18%) for a minimal tool modification (Table 5). However, the maximum heights remain unchanged. The maximum interference height on the part remains tied to the process parameters. As for the maximum interference height on the tool, the invariance suggests that the interfered volume is reduced mostly through a smaller interfering surface.
Table 5. Results in terms of global volume and heights after tool modification.
  Vt grind     Hmt grind   Hmp grind
  0.0234 mm3   0.190 mm    0.133 mm
Conclusion
This study demonstrated the importance of considering the integral geometry of the tool when evaluating interferences. The feed rate and the edge between the first and second clearance faces significantly influence the interference volume; however, the absence of influence of the vibration amplitude in our local analysis highlights that the influence of process and tool parameters may vary greatly depending on the local values. Finally, a slight change in the tool clearance face can have a drastic impact on the interference volume, and thus on the thrust forces. Further process testing must be conducted, with different tool geometries, to corroborate these results.
References
1. L. ZHANG, L. WANG, X. WANG. « Study on vibration drilling of fiber reinforced plastics
with hybrid variation parameters method », Composites: Part A, Elsevier, 34 (2003) 237–244
2. X. WANG, L.J. WANG, J.P. TAO. « Investigation on thrust in vibration drilling of fiber-reinforced plastics », J. of Materials Processing Technology, Elsevier, 148 (2004) 239–244
3. J. LE DREF. « Contribution à la modélisation du perçage assisté par vibration et à l’étude de
son impact sur la qualité d’alésage. Application aux empilages multi-matériaux. », Ph.D Thesis, Université de Toulouse, 2014
4. O. PECAT, I. MEYER. « Low Frequency Vibration Assisted Drilling of Aluminium Alloys.
», Advanced Materials Research, Trans Tech Publication, 779 (2013) 131–138
5. D. BONDARENKO. « Etude mésoscopique de l’interaction mécanique outil/pièce et contribution sur le comportement dynamique du système usinant », Ph.D. Thesis, 2010
6. M. LADONNE, M. CHERIF, Y. LANDON, J.Y. K’NEVEZ, O. CAHUC, C. DE
CASTELBAJAC. « Modelling The Vibration-Assisted Drilling Process: Identification Of
Influencial Phenomena », Int. J. of Advanced Manufacturing Technology, Vol 40, 1-11, 2009
7. N. GUIBERT, H. PARIS, J. RECH, C. CLAUDIN. « Identification of thrust force models for
vibratory drilling », Int. J. of Machine Tools & Manufacture, Elsevier, 49 (2009) 730–738
8. G. MORARU. « Etude du comportement du système ”Pièce-Outil-Machine” en régime de
coupe vibratoire », Ph.D Thesis, 2002
9. A. BOUKARI. « Modélisation des actionneurs piézoélectriques pour le contrôle des systèmes
complexes », Ph.D Thesis, 2010
10. K. ISHIKAWA, H. SUWABE, T. NISHIDE, M. UNEDA. « A study on combined vibration
drilling by ultrasonic and low-frequency vibrations for hard and brittle materials », Precision
Engineering, Elsevier Science, 22 (1998) 196–205
11. L.-B. ZHANG, L.-J. WANG, X.-Y. LIU, H.-W. ZHAO, X. WANG, H.-Y. LUO. « Mechanical model for predicting thrust and torque in vibration drilling fiber-reinforced composite materials », Int. J. of Machine Tools & Manufacture, Pergamon, 41 (2001) 641–657
12. J. A. YANG, V. JAGANATHAN, R. DU. « A new model for drilling and reaming processes
», Int. J. of Machine Tools & Manufacture, Pergamon, 42 (2002) 299–311
13. S. LAPORTE, J.Y. K’NEVEZ, O. CAHUC, P. DAMIS. « A Parametric Model Of Drill Edge
Angles Using Grinding Parameters», Int. J. of Forming Processes, 10.4, 411-428, 2007
14. J. JALLAGEAS, J.Y. K’NEVEZ, M. CHERIF, O. CAHUC. « Modeling and Optimization of
Vibration-Assisted Drilling on Positive Feed Drilling Unit», Int. J. of Advanced Manufacturing Technology, Vol 67, 1205-1216, 2012
Implementation of a new method for robotic
repair operations on composite structures
Elodie PAQUET¹, Sébastien GARNIER¹, Mathieu RITOU¹, Benoît FURET¹, Vincent DESFONTAINES²
1. UNIVERSITY OF NANTES: Laboratoire IRCCyN (UMR CNRS 6597), IUT de Nantes, 2 avenue du Professeur Jean Rouxel, 44470 Carquefou
2. EUROPE TECHNOLOGIES, 2 rue de la fonderie, 44475 Carquefou Cedex
* Corresponding authors. E-mail addresses: elodie.paquet@univ-nantes.fr, sebastien.garnier@univ-nantes.fr, mathieu.ritou@univ-nantes.fr, benoit.furet@univ-nantes.fr, v.desfontaines@europechnologies.com
Abstract
Composite materials are nowadays used in a wide range of applications in the aerospace, marine, automotive, surface transport and sports equipment markets. For example, all composite aircraft parts have the potential to incur damage and therefore require repairs. Impacts can affect the mechanical behavior of the structure in different ways: the damage can be detrimental, irreversible and, in some cases, can propagate. It is therefore essential to intervene quickly on these parts to make the appropriate repairs without immobilizing the aircraft for too long.
The scarfing repair operation involves machining or grinding away successive ply layers from the skin to create a tapered or stepped-dish scarf profile around the damaged area. After the scarf profile is machined, the composite part is restored by applying multiple ply layers with the correct thickness and orientation to replace the damaged area. Once all the ply layers are replaced, the surface is heated under a vacuum to bond the new material. The final skin is ground smooth to retrieve the original design of the part. Currently, scarfing operations are performed manually. These operations involve high costs due to the required precision, the health precautions and a lack of repeatability. In these circumstances, the use of automated solutions for the composite repair process could bring accuracy and repeatability and reduce the repair time. The objective of this study is to provide a methodology for an automated repair process of composite parts representative of primary aircraft structures.
Keywords: Robotic machining, Composite repair, Repair of structural composite
parts, machining process.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_33
1 Introduction
Composite materials are nowadays used in a wide range of applications in the aerospace, marine, automotive, surface transport and sports equipment markets [1]. For example, all composite aircraft parts have the potential to incur damage and therefore require repairs. Impacts can affect the mechanical behavior of the structure in different ways: the damage can be detrimental, irreversible and, in some cases, can propagate. It is therefore essential to intervene quickly on these parts to make the appropriate repairs without immobilizing the aircraft for too long.
There are two main repair techniques, referred to as scarf and lap (see Figure 1). In the scarf technique, the repair material is inserted into the laminate in place of the material removed due to the damage. In the lap technique, the repair material is applied either on one or on both sides of the laminate over the damaged area [2].
Fig. 1. Stepped lap and scarf main repair techniques
To perform and automate these repair operations on CFRP components, a lightweight, portable manipulator with a collaborative robot has been designed and developed during the project “COPERBOT”.
Fig. 2 Robotic solution developed for composite material repair in the project COPERBOT
The aim of this project is the development of an integrated process chain for a fast, low-cost, automated and reproducible repair of high-performance fiber composite structures with a collaborative robot.
This platform will be mountable on aircraft structures even in the field, which allows for repairs without disassembly of the part itself. Consequently, a faster, more reliable and fully automated composite repair method becomes possible for the aeronautical and nautical industries. The objective of this article is to propose a new method to automate the repair process of composite parts, for example monolithic CFRP laminate plates representative of primary aircraft structures.
This article is based on industrial examples from the collaborative “COPERBOT” project.
1 Development of a robotic repair method for composite structure.
The repair sequence for an impact on a composite structure consists of the steps listed below:
1. Setting laws for scarfing: define the laws for scarfing according to the characteristics of the repaired part (stacking, ply thickness, ...).
2. 3D and NDT scanning of the damaged area: reconstruct the damaged area in 3D.
3. Scarf or stepped-lap profile machining of the damaged area: remove the broken plies.
4. Cleaning: ensure optimal bonding.
5. Ply cutting: cut the plies for the repair.
6. Draping: strengthen the damaged area.
7. Polymerization: guarantee the pressure/vacuum conditions.
8. Finishing: recover the initial surface condition.
9. 3D and NDT scanning of the repaired area: check the geometry and quality of the repaired area.
(Automated operations)
Fig. 3 Repair sequence for an impact on a composite structure
The work carried out in the COPERBOT project is limited for the moment to repair tests on monolithic composite prepreg Hexply 8552 - AGP280-5H with a draping plan [0,45,0,45,0,0,45,0,45,0]. This type of plate is representative of the materials, thicknesses and stacking sequences found in primary aircraft structures such as aircraft radomes.
2 Surface generation by 3D scanning
Most composite structures present in an airplane, such as radomes, have curved shapes. It is therefore necessary to perform a 3D scan of the surface in the area to be repaired in order to recover the surface normal and adjust the machining trajectory.
Fig. 4. Example of a stepped lap on convex part.
The first step in our robotic repair method is to reconstruct the surface to prepare the stepped-lap trajectory of the damaged area, using a laser sensor mounted on the 6th axis of the robot. The method adopted to reconstruct the surface is to scan the surface with the robot, following a regular mesh defined by three points given by the operator. By combining the position of the robot and the information given by a distance sensor (a line laser), we can then reconstruct the damaged area surface. Three typical examples are shown in Fig. 5:
Fig. 5. Surface reconstructed by laser sensor fixed on a robot
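The combination of robot pose and laser distance described above can be sketched as follows. The helper names and numerical values are illustrative assumptions, not the COPERBOT implementation; each sample fuses the robot flange position with the distance returned by the line laser, and the local normal is estimated from the regular mesh by a cross product:

```python
import math

def surface_point(flange_pos, laser_dir, distance):
    """Surface point = flange position + measured distance along the
    (unit) laser direction."""
    return tuple(p + distance * d for p, d in zip(flange_pos, laser_dir))

def normal(p, p_u, p_v):
    """Unit normal of a mesh cell from point p and its two grid
    neighbours p_u and p_v (cross product of the in-plane vectors)."""
    u = tuple(b - a for a, b in zip(p, p_u))
    v = tuple(b - a for a, b in zip(p, p_v))
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    norm = math.sqrt(sum(c * c for c in n))
    return tuple(c / norm for c in n)

# Flat horizontal cell scanned from above -> normal along z:
p  = surface_point((0, 0, 100), (0, 0, -1), 60)   # (0, 0, 40)
pu = surface_point((1, 0, 100), (0, 0, -1), 60)
pv = surface_point((0, 1, 100), (0, 0, -1), 60)
print(normal(p, pu, pv))   # (0.0, 0.0, 1.0)
```

The recovered normals are what allow the milling trajectory to stay locally perpendicular to the curved skin.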
3 3D Scarf calculation and milling trajectory.
From the data recovered on the reconstructed surface, machining path trajectories are calculated to create the appropriate scarf geometry on the surface. This patch needs to be draped mathematically on the surface, otherwise it would not fit later in the scarf, especially for parts with a smaller radius [2][5]. Based on this 3D scarf definition, the final milling trajectory is calculated taking into account different cutter types (shank or radius) as well as the stability of the part during the milling process. Two typical trajectories are shown in Fig. 6:
Fig. 6 Two trajectories for a stepped lap.
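The stepped-lap geometry that seeds such trajectories can be sketched as concentric machining levels, one per removed ply: the top ply is removed over the widest ring and each deeper ply over a smaller one. This is a hypothetical illustration of the principle, not the authors' path planner:

```python
def stepped_lap_rings(damage_radius, n_plies, step_width, ply_depth):
    """Return (outer_radius, cumulative_depth) for each machining
    level, widest and shallowest level first."""
    rings = []
    for k in range(n_plies):
        outer = damage_radius + (n_plies - k) * step_width  # mm
        depth = (k + 1) * ply_depth                         # mm
        rings.append((outer, depth))
    return rings

# 3 damaged plies, 20 mm step width, 0.10 mm cutting depth per ply,
# around a 10 mm damage radius:
for r, d in stepped_lap_rings(10.0, 3, 20.0, 0.10):
    print(r, d)
```

Each ring is then draped onto the reconstructed surface before being converted into a milling path.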
4 Stepped lap milling
To evaluate the optimum conditions for different repair materials, two types of tools and three types of material were used. Repairs were made using the stepped-lap technique [9]. The cutting conditions selected are listed in the table below:
  Tool                  PCD Ø 10 mm    Carbide tool Ø 10 mm
  Rotation speed        19250 rpm      12000 rpm
  Cutting speed         604.45 m/min   376.8 m/min
  Feed per revolution   0.25 mm/rev    0.25 mm/rev
  Cutting depth         0.10 mm        0.10 mm
  Width of each step    20 mm          20 mm
Fig. 7 Cutting conditions selected for tests.
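The cutting speeds in Fig. 7 follow directly from the rotation speed and the tool diameter via Vc = π·D·N; a quick check (the table's 604.45 m/min corresponds to taking π ≈ 3.14):

```python
import math

def cutting_speed(diameter_mm, rpm):
    """Peripheral cutting speed in m/min: Vc = pi * D * N."""
    return math.pi * diameter_mm / 1000 * rpm

# The two tool settings of Fig. 7 (10 mm diameter):
print(round(cutting_speed(10, 19250), 1))  # 604.8 m/min (PCD)
print(round(cutting_speed(10, 12000), 1))  # 377.0 m/min (carbide)
```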
The test results recommend using a polycrystalline diamond (PCD) tool for the machining of the stepped lap. This type of cutter is designed to withstand the abrasive properties of composite material.
Fig. 8 Stepped lap composite part produced by the robot
To limit the defects created by the machining forces, two types of parameters were tested to determine the most suitable for our application: on one side, those associated with the tool; on the other, those related to the cutting conditions (feed per revolution, cutting speed, direction of the fibers relative to the feed direction, ...) [2]. The machining paths were chosen with a view to subsequently studying the influence of the fiber orientation on the cutting forces [5].
5 Metrological control of the surfaces obtained by stepped-lap machining to optimize the process conditions
Optical 3D measurement was used to check the removed ply depths and the surface roughness obtained by robotic machining on our test plates.
Fig. 9 Metrological controls of the stepped lap in the composite part produced by the robot
Micrographic examination of the surface topography and the profile control performed with a coordinate measuring machine on the machined steps show an accuracy of a tenth of the machined depth on each step, for each of the two tests. Robotic scarfing performed by the stepped technique with a PCD tool achieves a low roughness (Ra of approximately 21) without delamination at the step contours.
Fig. 10. Analysis of the surface quality obtained with the PCD tool.
The tests have shown that the quality of the automatic repair is at least
as good as for a repair manually executed by skilled repairmen.
Even for simple repairs, the robotic scarfing process has proven to be two times more efficient than a manual process.
6 Conclusions
This article presents a scientific view of the problem of composite repair and proposes robotized scarfing solutions for achieving standardized repairs. Through testing, we found that PCD tools associated with certain operating conditions allow achieving the desired quality level for the preparation of the repair area. The approach of 3D surface scanning and path projection was validated by measuring the quality of the realized scarfs. The analysis of the robotic scarfing tests supports considering finalized cobot-type robotic solutions for the repair of composite parts on ships or aircraft, with interventions directly on site or in the exploitation zones.
However, additional tests must be conducted to validate the proposed methodology, together with the mechanical characterization of the repaired interface and the analysis of the structural strength of the repair by strength and fatigue testing of different repaired specimens.
7 Acknowledgements
We want to thank Mrs. Rozenn POZEVARA (R&D Composites Project Manager) at EUROPE TECHNOLOGIES for providing us with the necessary material for testing, and also the ET/IRCCyN partnership for the “COPERBOT” project, funded by BPI France, which uses robotic means from the Equipex ROBOTEX project, as well as the members of the M02P-Robotics team and CAPACITY SAS for the tests and metrological analyses.
8 References
1. B.FURET – B. JOLIVEL – D. LE BORGNE, "Milling and drilling of composite materials for
the aeronautics", Revue internationale JEC Composites N°18, June-July 2005
2 A. EDWIN- E.LESTER, "Automated Scarfing and Surface Finishing Apparatus for Complex
Contour Composite Structures", American Society of Mechanical Engineers, Manufacturing
Engineering Division, MED 05/2011; 6.
3.S.GOULEAU–S. GARNIER–B.FURET, « Perçage d’empilages multi-matériaux : composites
et métalliques », Mécanique et Industries, 2007, vol. 8, No. 5, p. 463-469.
4.A.MONDELIN–B.FURET– J. RECH, « Characterisation of friction properties between a laminated carbon fibres reinforced polymer and a monocrystalline diamond under dry or lubricated conditions », Tribology International Vol. 43, p. 1665-1673, 2010.
5. B.MANN, C.REICH, « Automated repair of fiber composite structures based on 3d-scanning
and robotized milling» Deutscher Luft- und Raumfahrtkongress, 2012.
6.C.DUMAS, S. CARO, M. CHERIF, S. GARNIER, M. RITOU, B. FURET, “ Joint stiffness
identification of industrial serial robots ”, Robotica, 2011. (2011-08-08), pp. 1-20, [hal00633095].
7.C.DUMAS, A. BOUDELIER, S. CARO, B. FURET, S. GARNIER, M. RITOU, “ Development of a robotic cell for trimming of composite parts”, Mechanics & Industry 12, 487–494
(2011), DOI: 10.1051/meca/2011103
8.A.BOUDELIER, M.RITOU, S.GARNIER, B.FURET, “Optimization of Process Parameters in
CFRP Machining with Diamond Abrasive Cutters”, Advanced Materials Research (Volume
223), 774-783 (2011), DOI: 10.4028/www.scientific.net/AMR.223.774
9.BAKER, A.A., A Proposed Approach for Certification of Bonded Composite Repairs to
Flight-Critical Airframe Structure, Applied Composite Materials, DOI 10.1007/s10443-010-9161-z
10.WHITTINGHAM, B., BAKER, A.A., HARMAN, A. AND BITTON, D., Micrographic studies on adhesively bonded scarf repairs to thick composite aircraft structure, Composites: Part
A 40 (2009), pp. 1419–1432
11.GUNNION, A.J. AND HERSZBERG, I., Parametric study of scarf joints in composite structures, Composite Structures, Volume 75, Issues 1-4, September 2006, pp. 364-376
12.C.BONNET- G.POULACHON-J.RECH-Y.GIRARD-J.P COSTES, “ CFRP drilling: Fundamental study of local feed force and consequences on hole exit damage “ International Journal of Machine Tools and Manufacture, 2015, 94, pp.57-64.
CAD-CAM integration for 3D Hybrid
Manufacturing
Gianni Caligiana¹, Daniela Francia¹ and Alfredo Liverani¹
1. University of Bologna, v.le Risorgimento 2, Bologna, 40136, Italy
* Corresponding author. Tel.: +390512093352; fax: +390512093412. E-mail address: d.francia@unibo.it
Abstract Hybrid Manufacturing (HM) aims to combine the advantages of additive manufacturing, such as few limits in shape reproduction, good customization of parts, distributed production, minimization of production costs and minimization of waste materials, with the advantages of subtractive manufacturing in terms of finishing properties and accuracy of dimensional tolerances. In this context, our research group presents a design technique aimed at data processing that switches between additive and subtractive procedures, in order to optimize product-manufacturing cost and time.
The component prototyping may be performed by combining different stages (addition, rough milling, fine milling, deposition, ...) with different parameters and heads/nozzles, and is able to work with different materials both in additive and in milling mode.
The present paper introduces different strategies, in other terms different combinations of machining features (additive or subtractive) and different materials, to complete a prototype model or mold. The optimization/analysis software is fully integrated in a classic CAD/CAM environment to better support the design and engineering processes.
Keywords: Hybrid manufacturing; CAD; CAM; Process design; Multimaterial
manufacturing.
1 Introduction
During the last decade, intensive research efforts in Rapid Prototyping (RP) focused on Additive Manufacturing (AM) techniques, because of their efficiency in terms of time and cost reduction in product development and manufacturing. AM enables the production of complex structures directly from 3D CAD models in a layer-by-layer process using metals, polymers, and composite materials.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_34
A large number of additive processes are now available [1]. They differ in the way layers are deposited to create parts and in the materials that can be used. Among them, a promising technique is 3D Printing, which has its roots in ink-jet printing technology: a printer head lays down small beads of material, which harden immediately to form layers. A thermoplastic filament or metal wire wound on a coil can be unreeled to supply material to the extrusion nozzle head. The nozzle head heats the material and turns the flow on and off. Each deposited layer can be seen as a thinly sliced horizontal cross-section of the eventual object, and each cross-section can be extremely detailed.
However, the performance of 3D Printers (3DPs) often has to be tested: dimensional accuracy, feature size, geometric and dimensional tolerances and surface roughness are weak points of 3DPs [2, 3].
On the other side, subtractive processes have several advantages that overcome the limits mentioned above. In subtractive processes, a piece of raw material is cut into a desired final shape, and a controlled material-removal process sizes it by the use of machine tools.
A promising approach is to combine the two processes, in order to gain advantages from both the additive and subtractive techniques, depending on the piece to be produced. Automation is the key to making subtractive prototyping competitive with the additive methods, but it must deal with the translation of the CAD model into tool paths for the milling machinery.
Since the 1990s, the concept of the hybrid 3D printer has emerged, merging additive and subtractive techniques in one machine. Combining the benefits of milling and 3D printing in one unit, these machines may break through barriers experienced by design engineers, especially those inherent to the limitations in surface finish and precision of 3DPs alone. Hybrid 3DPs (H3DPs) produce pieces ready to go right out of the machine, with no need for a separate milling operation, and guarantee dimensional accuracy and quality standards difficult to achieve otherwise.
One remaining limitation of H3DPs is their build volume: usually, they can produce components in the range of several centimeters, up to about a meter.
Besides, a bottleneck in the integration of CAD and CAM systems, from the 1990s to now, has been their implementation in open-source environments [4-5]. The open-source development model encourages collaborative work in order to enhance CAD-CAM/CNC integration tools, and several efforts tend towards the development of more efficient integration platforms in open environments.
In this context, the challenge is to overcome the constraints of common hybrid 3D printers and to optimize the additive-subtractive technique interchange by means of automatic tools and a managing software that can be implemented in an open environment.
This goal motivated our research group to design a 3D hybrid-layered manufacturing printer, among the largest of its kind, comprising both a milling part and a layering deposition part.
CAD-CAM integration for 3D Hybrid Manufacturing …
In order to integrate CAD and CAM communication, a management software has been developed, starting from CAD and CAM software available in the open source environment.
Starting from the analysis of the requirements concerning the dimensions and accuracy of a piece, this approach evaluates the possible manufacturing combinations of additive and subtractive technologies, seeking the ideal one in terms of processing times, processing waste and materials employed [6].
A typical sequence in the proposed process development can be summarised as follows:
1. when a new piece is assigned, the part features have to be recognized and classified, before the manufacturing process begins, depending on its function: it could be a mould or a model part [7];
2. when the part is a mould, an inner core is prepared by means of a rough cut of a starting block and, upon this support, the software manages the deposition of material in order to complete the part up to its final shape;
3. when the part is a model, the deposition of rough layers of foam is set up, in order to obtain a piece near to the final shape, to be later refined by milling, up to the desired shape of the part;
4. later operations, by means of a spray technique, could even occur in order to finish parts with a desired surface material or to paint them;
5. the manufactured part, whether it is a mould or a model, can eventually be refined by milling operations in order to achieve good finishing properties.
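The classification step above can be sketched as a simple dispatch. The following is a hypothetical illustration in Python; the function and step names are ours, not taken from the authors' software:

```python
def plan_process(part_kind: str) -> list:
    """Return an illustrative operation sequence for a mould or a model part."""
    if part_kind == "mould":
        # inner core milled from a raw block, then material deposited upon it
        steps = ["mill_rough_core", "deposit_outer_layers"]
    elif part_kind == "model":
        # rough foam layers deposited first, then milled to the final shape
        steps = ["deposit_foam_layers", "mill_to_final_shape"]
    else:
        raise ValueError("unknown part kind: %s" % part_kind)
    # optional finishing stages common to both part kinds
    steps += ["spray_finish (optional)", "finish_milling (optional)"]
    return steps
```

The two branches mirror steps 2 and 3 of the sequence, while the appended stages correspond to steps 4 and 5.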
The following paragraphs describe the design technique in more detail, starting from the method adopted, moving to the equipment description and finally presenting an estimate of the gains in time and cost yielded by this promising technique.
2 The Method
The novel design approach we propose is targeted to exploit the benefits of both additive and subtractive manufacturing. Our aim is to perform a data processing able to switch between additive and subtractive procedures, enabling the manufacturing of products of any shape and the combination of different materials for the optimal manufacturing of products, in terms of cost and time reduction, also viable for small quantity production. The data processing is implemented exploiting the open source environment.
In this section, we describe the sequence of operations that can be interchanged
in order to optimise the piece manufacturing by means of hybrid techniques, taking as target the realization of a mold.
G. Caligiana et al.
The mold can be realized in two parts: an inner core, which can be roughly
shaped, and an external surface that has to be carefully defined.
This allows the external material to be reduced to the minimum necessary and the internal material core to be maximized, in order to save material costs and weight. When such an optimization can be adopted, additive and subtractive technologies can be combined to perform the manufacturing process. The inner support of the mold is prepared by milling a raw block up to the desired shape and, upon it, a minimal deposition of the external material is calculated, in order to reduce the time and costs of the total operation. A further finishing of the surfaces of the mold can be provided.
Figure 1 shows an example of how the procedure, starting from a model, leads to the definition of a mold made of different parts. The mold in the figure is made of two different materials and is realized through two different manufacturing processes.
Fig. 1. The hybrid manufacturing for mold application.
The driving concept is that a part, even one complex in shape, rather than being produced as a whole, can be realized as the decomposition of an external thin surface deposited upon a pre-prepared support. As input for the manufacturing, the
H3DP requires a CAD model of the mold in order to extract from it the CAD
model of an appropriate support. Figure 2 shows the support generation phase in
which, from the geometric model G1, the geometric model G2 is calculated.
Fig. 2. The geometric model extraction for the inner support.
This support can be milled starting from a raw block made of a filler material, such as polystyrene [8].
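One way to derive the support model G2 from the mold model G1 is to erode the solid by the desired shell thickness. The paper does not state how this offset is computed, so the following is only an illustrative sketch on a voxelised geometry, using plain NumPy:

```python
import numpy as np

def inner_support(solid: np.ndarray, shell: int) -> np.ndarray:
    """Erode a voxelised solid (G1) by `shell` voxels to obtain the core (G2).

    A voxel survives one erosion step only if it and all six face
    neighbours are solid, so `shell` steps leave an inner core whose
    surface sits `shell` voxels below the original surface.
    Note: np.roll wraps around, so the solid must not touch the grid edges.
    """
    core = solid.astype(bool)
    for _ in range(shell):
        eroded = core.copy()
        for axis in range(core.ndim):
            eroded &= np.roll(core, 1, axis) & np.roll(core, -1, axis)
        core = eroded
    return core

# a 7x7x7 cube (G1) inside an empty 9x9x9 grid
g1 = np.zeros((9, 9, 9), dtype=bool)
g1[1:8, 1:8, 1:8] = True
g2 = inner_support(g1, shell=1)   # a 5x5x5 inner core remains
```

The difference `g1 & ~g2` is then the thin external shell to be completed by deposition, while `g2` is the core to be milled from the raw block.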
Then, as shown in Figure 3, it is possible to complete the mold shape by additive manufacturing. In order to add material to the support up to the final shape, the RP machine requires a further CAD model, which can be obtained by comparing the final shape of the mold to the inner core support geometry.
Fig. 3. The model for the layer manufacturing process.
However, after the layering manufacturing, the piece obtained could require some further finishing operations, as shown in Figure 4, in order to meet roughness values that layer deposition cannot guarantee.
Fig. 4. The roughness of the surface obtained by layering deposition must be removed to attain the final shape of the mold.
Thus, the mold is obtained by sequences that switch between additive and subtractive manufacturing. The sequence of these operations can be summarised as in Figure 5, where the symbol + is used to refer to additive manufacturing and the symbol – to subtractive manufacturing.
Fig. 5. The sequences of the hybrid manufacturing process
Otherwise, for the production of model parts, additive layering manufacturing can be employed to roughly form a part, to be later refined by milling, in order to reach the final desired shape with good tolerances and roughness accuracy. In any case, the integration between additive and subtractive manufacturing strictly depends on the integration between the CAD and CAM tools which support the manufacturing processes.
For this purpose, our research group also developed a control software able to simulate CNC machining on the block, in order to detect errors, potential collisions or areas of inefficiency. This makes it possible to correct errors before the program is loaded on the CNC machine, thereby eliminating manual prove-outs.
3 The Equipment
The equipment arranged in our laboratories is able to work as an additive and a subtractive manufacturing system at the same time; our research group developed the software that supports this system. It is open source software able to translate and interconnect different programming languages, in order to coordinate the different functions of the system: it includes a 3D slicer and a CNC/CAM module, fully integrated with the CAD software. Thanks to open source CAD/CAM software, it is possible to design the CAD geometry, perform multi-physics simulations to optimize the design and generate the G-code, ready for 3D printing and milling [9].
The hybrid 3D printing process begins with the modelling of a part by means of CAD software. This is open source software, developed starting from the FreeCAD architecture. FreeCAD is one of the most promising open source 3D CAD packages focused on mechanical engineering and product design. It is feature-based and parametric, with 2D sketch input and a constraint solver, and it supports B-rep, NURBS, boolean operations and fillets.
The subtractive process is managed by the integration of a milling module based on the FreeMill architecture. FreeMill is a module for programming CNC mills and routers. It creates one type of tool path, called parallel milling, where the cutter is driven along a series of parallel planes to machine the part geometry. It runs full cutting and material simulation of the tool path and outputs the G-code to the machine tool.
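The parallel-milling strategy can be illustrated with a minimal sketch of our own (not FreeMill's actual code) that generates serpentine passes over a rectangular region:

```python
def parallel_passes(width: float, length: float, stepover: float):
    """Serpentine parallel tool path: straight passes along X, stepped in Y.

    Returns a list of ((x_start, y), (x_end, y)) segments, alternating
    direction so the cutter never retracts over the full part length.
    """
    passes = []
    y, forward = 0.0, True
    while y <= width + 1e-9:
        x0, x1 = (0.0, length) if forward else (length, 0.0)
        passes.append(((x0, y), (x1, y)))
        y += stepover
        forward = not forward
    return passes
```

Each segment is one cutting pass; a real module would add plunge/retract moves and emit the corresponding G-code (G0/G1 blocks).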
The slicing module has been developed, starting from the software Slic3r, in order to give instructions to the RP machine to produce the desired part. It is able to convert a digital 3D model into printing instructions for the 3D printer. It cuts the model into horizontal slices (layers), generates tool paths to fill them and calculates the amount of material to be extruded.
The main purpose is to manage many aspects in a single environment and at low cost: to be able to handle 3D printing and CAM operations in an economical environment, with open source tools, as extensions of CAD and CAM programs. The research group extended FreeCAD's environment and integrated it with the 3D printing software (Slic3r) and the CAM module (FreeMill).
As a check tool for the communication between the different additive-subtractive phases, three different visualization modules have been inserted in the system for G-code visualization. They describe the 3D object to be produced in all its slicing steps; they are Repetier-Host, Colibrì and OpenSCAM. The Rexroth MTX module, finally, emulates the entire slicing, thus allowing the control of the process and, in case of errors, the avoidance of damage to the printer.
In order to interconnect the different software modules and to set the parameters required for production, ad hoc graphical interfaces have been designed by means of the Python programming language. The software is implemented on a 3-axis machine, shown in Figure 6, with head/nozzle replacement for fast switching between milling and additive manufacturing. The system spans a huge volume (5,000 × 3,000 × 2,000 mm) and may also be equipped with a nozzle in order to spray a film coat on the surface. Through a very user-friendly interface, the user can choose a process, simulate it and then make the system work.
Fig. 6. The hybrid 3D printer in our laboratories and its managing software interface.
4 Discussion
This paragraph briefly discusses the convenience of resorting to the innovative hybrid approach for products whose constraints in shape or dimension mean that they are traditionally obtained through laborious and time-consuming manufacturing. For example, boat hulls are items commonly made of fiberglass. Fiberglass parts are produced in molds through a manual process known as a lay-up. In the most favourable case, advanced boatyards are able to manufacture the hull mold through a technique similar to the one we propose, but not assisted by automated systems. In particular, the inner support is realized by the milling of a polystyrene block and, upon it, a paste is manually deposited. After the deposition, a finishing of the external surface is required.
The alternative approach proposed in this paper is aimed at replacing the labor-intensive and time-consuming process of hand making with the combination of two successful technologies that can guarantee shorter lead times and lower expense.
As detailed above, the mold can be arranged, through a hybrid approach, by a first rough machining of a support and, upon it, by the automatic deposition of fused material that reproduces a target shape with good precision. Furthermore, in order to meet accuracy standards, refinement operations can be performed. The hybrid 3D printer carries out all the additive-subtractive phases.
Figure 7 shows the traditional hand-made mold construction and, on the other side, the techniques proposed in the hybrid manufacturing.
Fig. 7. The comparison between traditional and innovative manufacturing for the same kind of product: a boat mold.
Table 1 collects some data about the two competing manufacturing approaches. The major costs are evaluated and compared, and the lead times have been estimated. The last row shows the time and cost reductions that the new hybrid approach delivers, compared to the hand-made mold approach, for a race boat hull roughly 5 meters long. The estimate for a hand-made mold is €6500 and 86 hours of lead time. In contrast, the hybrid manufacturing yields a lead time of 56 hours and a cost of €4720, with evident savings.
Table 1. A rough valuation of time and costs of hand-made and automated manufacturing.

Resources         Hand-made   Hand-made   Hybrid-Manuf.   Hybrid-Manuf.
                  Costs (€)   Time (h)    Costs (€)       Time (h)
Raw material      3500        –           900             –
CAM & setup       100         4           200             8
Add/sub & setup   –           –           200             10
Labor time        1280        64          –               –
Machining cost    1620        18          3420            38
Total Expense     6500        86          4720            56
SAVINGS                                   28%             35%
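The savings row can be reproduced from the totals reported for the two approaches (€6500 and 86 h hand-made versus €4720 and 56 h hybrid); a small check:

```python
def saving_pct(traditional: float, hybrid: float) -> float:
    """Percentage reduction achieved by the hybrid process."""
    return (traditional - hybrid) / traditional * 100.0

cost_saving = saving_pct(6500.0, 4720.0)   # ~27.4%, reported as 28% in Table 1
time_saving = saving_pct(86.0, 56.0)       # ~34.9%, reported as 35% in Table 1
```

Note that the exact cost reduction is 27.4%, which the table rounds up to 28%.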
5 Conclusions
The design approach presented in this paper aims to enhance the flexibility of production in terms of the sizes, accuracy and functionality of products, to reduce waste, to minimize handcrafted operations and to make manufacturing affordably fast even for pieces of large dimensions.
Depending on the assigned part, additive and subtractive techniques can be interchanged. A part could be produced by additive deposition and then milled, in order to reach a more accurate shape or dimensions, or it can be prepared starting from a block of raw material, different from the material of the part, upon which the final material is then added. In this way, the shape can be obtained by the deposition of only a few layers upon an inner core. Depending on the attainable shape of the part and on its material, a spray technique can be adopted in order to realize a 3D deposition.
The present paper is dedicated to introducing and evaluating different strategies or, in other terms, different combinations of machining features (additive or subtractive) and different materials to complete a prototype model or mold, with an evident reduction in time and costs compared to traditional manufacturing. The optimization/analysis software is fully integrated in a classic CAD/CAM environment in order to better support the design and engineering processes.
References
1. Guo N., Leu M.C. Additive manufacturing: technology, applications and research needs. Frontiers of Mechanical Engineering, 2013, 8(3), 215-243.
2. Hongbin L., Taiyong W., Jian S., Zhiqiang Y. The adaptive slicing algorithm and its impact
on the mechanical property and surface roughness of freeform extrusion parts. Virtual and
Physical Prototyping, 2016, 11 (1), 27-39.
3. Bassoli E., Gatto A., Iuliano L., Violante M.G. 3D Printing technique applied to rapid casting. Rapid Prototyping Journal, 2007, 13 (3), 148-155.
4. Chen C.-S., Wu J. CAD/CAM systems integration. Integrated Manufacturing Systems, 1994, 5 (4/5), 22-29.
5. Vinodh S., Sundararaj G., Devadasan S.R., Kuttalingam D., Jayaprakasam J., Rajanayagam
D. Agility through the interfacing of CAD and CAM. Journal of Engineering Design and
Technology, 2009, 7 (2), 143-170.
6. Bianconi, F., Conti P., Moroni S. An approach to multidisciplinary product modeling and
simulation through design-by-feature and classification trees. In Proc. of the 16th IASTED
Int. Conf. on Applied Simulation and Modelling, Palma de Mallorca, 2007, pp. 288-293.
7. Liverani A., Leali F., Pellicciari M. Real-time 3D features reconstruction through monocular
vision. International Journal on Interactive Design and Manufacturing, 2010, 4 (2), 103-112.
8. Cerardi, A., Caneri, M., Meneghello, R., Concheri, G. Mechanical characterization of polyamide porous specimens for the evaluation of emptying strategies in rapid prototyping. In
Proc. of the 37th Int. MATADOR 2012 Conference, Manchester, July 2012, pp. 299-302.
9. Liverani A., Ceruti A. Interactive GT code management for mechanical part similarity
search and cost prediction. Computer-Aided Design and Applications, 2010, 7 (1), 1-15.
Section 2.3
Experimental Methods in Product Development

Mechanical steering gear internal friction: effects on the drive feel and development of an analytic experimental model for its prediction

Giovanni GRITTI1, Franco PEVERADA1, Stefano ORLANDI1, Marco GADOLA2, Stefano UBERTI2, Daniel CHINDAMO2, Matteo ROMANO2 and Andrea OLIVI1

1 ZF-TRW Active and Passive Safety Systems, 25063 Gardone V.T. (BS), Italy
2 Dept. of Mechanical and Industrial Engineering, University of Brescia, Italy
* Corresponding author. Tel.: +39-030-371-5663; E-mail address: daniel.chindamo@unibs.it
Abstract: The automotive steering system inevitably presents internal friction
that affects its response. This is why internal friction phenomena are carefully monitored both by OEMs and by vehicle manufacturers. An algorithm to predict
the mechanical efficiency and the internal friction of a steering gear system has
been developed by the ZF-TRW Technical Centre of Gardone Val Trompia and
the University of Brescia, Italy. It is focused on mechanical steering gears of the
rack and pinion type. The main contributions to the overall friction have been
identified and modelled. The work is based on theoretical calculation as well as
on experimental measurements carried out on a purpose-built test rig. The model
takes into account the materials used and the gear mesh characteristics and enables the prediction of the steering gear friction performance before the very first
prototypes are built.
Keywords: steering, friction, rack and pinion, steering feel, vehicle dynamics
1 Introduction
Car manufacturers tune the steering system very carefully in order to meet customer requirements. The steering system has a primary impact on the tactile feel
perceived by the driver through his hands on the steering wheel. This perception, often called "steering feel", is considered to be vital "because steering is the driver's main line of communication with the car; distortion in this guidance channel makes every other perception more difficult to comprehend" [1].
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering and Manufacturing, Lecture Notes in Mechanical Engineering, DOI 10.1007/978-3-319-45781-9_35
According to [2], steering feel, or steering torque feedback, is widely regarded as an important aspect of the handling quality of a vehicle, as it is known to help the driver
in reducing path-following errors. Some authors even suggest that, apart from
eyesight, the driving action is mainly based on feedback communication through
the steering [3]. Friction in the mechanical steering gear plays a fundamental role
in the final behaviour of the system and can affect feel and feedback heavily; as such, these effects have to be finely tuned during the design and development phase. The
subject is examined in depth in [4].
This paper describes the main contributions to the steering gear friction and how to model them. Given the lack of bibliography on the subject, an energy-based approach was devised. It combines Coulomb friction with power loss contributions within the system. This makes it possible to predict the performance in terms of overall steering gear friction as a function of the gear mesh design and material characteristics. As no evidence was found of a similar concept applied to mechanical gears, the method can be considered innovative.
2 How friction influences the drive feel
The steering system plays an important role in determining the driving feel, as it is the most direct linkage between the tyre contact patch and the driver.
Apart from the effects of tyre characteristics like the self-aligning torque, in
theory the steering system influence on driver perception should depend upon
steering geometry and servo assistance curves only. However, a typical steering
system, even in its simplest form (i.e. unassisted rack & pinion), features a gear
mesh and many other components sliding on each other, so it is inevitably subject
to friction and the related actions and forces. These phenomena play a significant
role in the system response but, as explained later in this paper, sometimes their
effect is welcome. The overall friction force in a steering gear is given by the contribution of different sources, mainly the gear mesh and the sliding plastic components which support the rack in order to achieve an ideal meshing condition. In
particular, the main contributions to be considered are: 1) static friction, 2) dynamic friction, 3) friction variations along rack travel. On top of that it is important to underline that the overall friction of the steering system is not only given
by the steering gear box but it also includes friction in the suspension joints, in the
assistance system, and in the steering column bearings and joints.
2.1 Effects on unassisted steering and Electric Power Steering
The dynamic friction is the contribution which resists the movement of the
steering system. For a given tie rod load, an increase of the dynamic friction will
lead to a higher steering wheel torque to keep the system in motion.
The on-centre steering condition is a regime where steering friction (and possibly backlash) can have a large influence on the vehicle behaviour. In an on-centre manoeuvre, i.e. with very small steering wheel angles where the steering angle/steering torque curve is nearly flat, a high level of friction would mask the small steering wheel torque variations to be applied during any manoeuvre around the straight-ahead position. Another kind of issue is related to steering returnability.
When exiting from a turn, releasing the steering wheel should result in the steering wheel returning to its central position even without action from the driver, bringing the vehicle back to the straight-ahead direction thanks to the self-aligning actions acting on the tyre contact patch. A high dynamic friction, perhaps combined with a non-sporty suspension/steering geometry (such as a low castor angle and/or longitudinal trail), can result in a residual angle at the end of the manoeuvre. This tendency can easily lead to a very poor driving feel.
Finally, it should be stressed again that the steering wheel, as well as allowing
vehicle control, is the most relevant connection with the tyre contact patch. As a
matter of fact, it provides indications about the level of tyre grip and lateral load
along corners, or about road surface irregularities and imperfections. One of the
main friction effects is to work as a filter: a high level of dynamic friction could filter out the information coming from the wheels, making the steering feel poor and deteriorating active safety, as the driver is partially isolated from the road.
On the other side, a very low dynamic friction could transmit every vibration
caused by an irregular road surface texture all the way up to the steering wheel,
thus making the driving feel tiring and somehow annoying.
The term static friction refers to the kind of friction that resists a change of state in a system at rest. It can be considerably higher than dynamic friction, and the main issues it can cause are often strictly related to the difference between the two kinds of friction, and to the transition from one to the other.
The most relevant effect of a high static friction can be felt during small correction manoeuvres around the on-centre position, typically on a straight road, where the self-aligning actions are low or negligible and consequently the steering wheel torque is down to a minimum. In this condition the static friction is experienced by the driver as a so-called "sticky feel".
Another unpleasant effect due to a high difference between static and dynamic
friction-related forces is the so-called “emptying” of the steering wheel. This is
experienced when moving the steering wheel after a steady-state cornering manoeuvre. In other words, the steering system is kept in a quiescent state along the
corner, but as soon as the steering wheel is moved the change from static to dynamic friction results in a reduction of the torque required. The effect can be experienced both when increasing and when reducing the steering lock angle.
In the first case, if a driving path correction is required to negotiate a tighter
turning radius, as soon as the driver moves the steering wheel to increase the lock
angle the transition from static to dynamic friction will lead to a transient reduction of the torque required. This is in contrast with his expectation, since normally
the steering effort is somehow proportional to how much the driver moves away
from the straight running condition, in order to overcome the self-aligning actions
related to tyre self-aligning moments and steering geometry.
On the other side, when exiting from a steady-state cornering manoeuvre, as soon as the steering wheel is moved from the typical mid-corner quiescent state
back towards the on-centre position, the torque reduction will be larger than expected because of the transition from static to dynamic friction, with a consequent
tendency to widen the path more than required.
As a matter of fact, when the difference between static and dynamic friction
forces is high, any small steering angle corrections to be normally performed during driving, and requiring reversal of the steering velocity, will be inaccurate and
the car as a whole will be perceived as slightly unpredictable and inconsistent
with the driver’s inputs. Another effect related to an excessive difference between
static and dynamic friction is the generation of stick-and-slip phenomena, with the
excitation of vibrations in the steering system resulting in the generation of noise.
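The stick-slip mechanism described above can be captured by a toy Coulomb-with-stiction model. This is our simplification for illustration, not the authors' model: at rest the resisting force can reach the static level, and as soon as motion starts it drops to the dynamic level.

```python
def friction_force(velocity: float, f_static: float, f_dynamic: float,
                   v_eps: float = 1e-6) -> float:
    """Breakaway force resisting rack motion: static at rest, dynamic when moving."""
    return f_static if abs(velocity) < v_eps else f_dynamic

# the torque drop the driver feels at breakaway grows with the static-dynamic gap
drop = friction_force(0.0, 5.0, 3.0) - friction_force(0.05, 5.0, 3.0)
```

In this simplified picture, the "emptying" sensation and stick-slip vibrations both originate from the discontinuity at breakaway, i.e. from `drop` being non-zero.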
Assisted steering systems are affected by internal friction as well. The assistance can mitigate the negative effects of the friction, but not completely. In addition, the servo system itself can be adversely affected by the presence of friction.
The electric power steering works via an actuator which controls the movement of the rack or pinion, depending on the torque and speed input measured on the steering column. Each car model requires an appropriate tuning of the operating logic, which is also based on other data from the vehicle ECUs. One of the tuning targets is to artificially filter the effects of friction (by means of an active self-centering action, for instance), although this may require a compromise on other aspects of the steering feel. In any case, the main effects of friction on an electric power system should be considered in addition to those already present in a simple manual steering system. First of all, a high dynamic friction requires a higher assistance level from the electric motor, which in turn has an impact on energy consumption and therefore on fuel consumption and emissions.
On the other side, a very low dynamic friction, with a poor filtering action with respect to road inputs, could induce instability problems with the generation of vibrations and discontinuities in the steering wheel torque. Again, an excessive drop between static and dynamic friction could create problems during the servo assistance tuning phase as well. When calibrating the system, it is important not to neglect friction variation, both over time (i.e. the effect of wear) and in terms of part-to-part tolerance variation. That means the tuning should ensure a good steering feel, independently of the inevitable effects of running-in and wear in
terms of friction, and independently from the small product variability and tolerances which can’t be avoided, even with a very stable manufacturing process.
2.2 Hydraulic power steering
Compared to a column-assistance EPS, the friction in a hydraulic steering system comes mainly from the steering gear. By comparison with a standard mechanical steering gear, a hydraulic system presents additional sources of friction, mainly due to the hydraulic seals, i.e. the proportional valve sealing rings, the hydraulic cylinder seals and the hydraulic piston seal.
A very high dynamic friction could lead to the same self-centering issues as in the manual and electric systems, and to a decay of the feel in almost-straight driving. However, in this case a low dynamic friction could be critical as well. Indeed, hydraulic systems are affected by vibrations and resonances, which can be excited by the pump, by the elasticity of the hydraulic pipes, by the proportional valve torsion bar, etc. Friction works as a damper against these issues, hence if it is not sufficient, noise and steering wheel vibrations can occur.
Another peculiar phenomenon related to friction in the hydraulic steering gears
is hysteresis. The assistance level of a hydraulic system depends upon the angular
misalignment between the two components of the proportional hydraulic valve:
the input shaft which is solidly connected to the steering column and the sleeve,
which is fixed to the pinion. Hysteresis is related to the friction between the two
components above, which have to work with a very narrow clearance in order to
ensure the correct flow of the hydraulic fluid. This leads to recurring contact; in
this case friction resists the relative rotation between the two valve components
therefore leading to a different assistance level depending on whether the steering
wheel torque is increasing or decreasing. In the loading phase friction resists valve
opening, hence the assistance level might be lower than expected, while when releasing the steering wheel, friction resists the valve closing action. This can lead
to an unexpectedly high level of assistance. This problem is usually perceived as
unpleasant in S-shaped curves and changes of direction.
3 Friction sources and measurements
In order to simulate the operation of a steering gear and to predict its efficiency, the first step is to identify the various components dissipating energy through friction. In the following pages the analysis will focus on the standard rack and pinion mechanical steering gear. In this case the contributions to friction are: the rack and pinion mesh, the sliding zone between rack and bush, and the sliding zone between rack and yoke (see Figure 1).
The typical component-level test aimed at measuring steering gear friction performance is the so-called returnability test. This test evaluates the resistance that the steering system alone offers to the self-centering actions of the tyres and the steering/suspension geometry. It is carried out by securing the steering box on the test bench with the pinion shaft left free. The load required to move the rack along its axis is evaluated by means of an actuator equipped with a load cell and fixed to a tie rod. The load measured is the returnability load R, which can be seen as the sum of all the single contributions to friction:
R = Rg + Ry + Rb    (1)
where Rg is the gear mesh contribution to the total returnability load, Ry is the yoke liner contribution and Rb is the rack bush contribution.
This test (and other similar tests aimed at steering gear friction evaluation)
should be performed on a completely assembled steering gear. Needless to say, it is often useful to predict internal friction and its effects already in the design phase, before any prototype is manufactured.
Fig. 1. Friction sources in a mechanical steering gear.
4 Modeling of friction sources
The friction produced at the rack and pinion mesh interface can be evaluated
by estimating power dissipation due to friction in the gear teeth contact zone.
When in motion, the rack and pinion coupling is affected by sliding phenomena
between the teeth surfaces. There is always more than one pair of teeth in mutual
double flank contact. An energy approach has been used in order to estimate the
dissipation of the gear mesh. In a generic sliding system, the power loss caused by
frictional effects Nf is given by:
Nf = Ff · Vs    (2)
where Ff is the friction force (normal contact force multiplied by the friction
coefficient) and Vs is the sliding speed.
For a steering gear mesh it is possible to use the same relationship, with Ff replaced by Rg and Vs replaced by the rack speed Vr:

Nf = Rg · Vr    (3)
If the sliding speeds and the contact forces between the teeth in contact are
known, it is possible to evaluate the power loss. Consequently, it is possible to
evaluate the gear mesh contribution to the Returnability load R by dividing the power
loss by the linear speed of the rack:

Rg = Nf / Vr    (4)
For a spur gear the sliding speed Vs is constant at any point along the contact
line, because the contact line is parallel to the gear axis; Vs becomes null
on the pitch diameter only.
For a helical gear like the rack and pinion mesh, the path of contact is not parallel to the gear axis, so it is not possible to identify a single instantaneous sliding speed.
However, it is possible to compute the sliding speed integral along the path of
contact (e.g. for one tooth, see Figures 2, 3, and 4):

As = ∫[li, lo] Vs dl    (5)

where li and lo are the inlet and outlet points of the tooth flank contact path
and As is the instant sliding area.
Figs. 2, 3, 4. Sliding speed vector: decomposition on tooth flank and rack teeth plane.
In order to obtain the power loss, the sliding area should be multiplied by the
linear contact pressure along the contact line. For one tooth only it is:

Nf = Vs · Ff = ∫[li, lo] μg · pc · Vs dl = μg · pc · ∫[li, lo] Vs dl = μg · pc · As    (6)

where μg is the friction coefficient between the gear teeth and pc is the tooth
linear contact pressure (normal load on the tooth divided by the actual total length
of the path of contact). pc is assumed constant, as demonstrated in [5].
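The per-tooth relationship of Eq. (6) can be sketched numerically as below; all values (friction coefficient, contact pressure, sliding speed profile and path length) are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Sketch of Eq. (6): per-tooth friction power loss Nf = mu_g * pc * As,
# where As is the sliding speed integrated along the contact path (Eq. 5).
# All numerical values are illustrative, not taken from the paper.

def sliding_area(vs_samples, l_i, l_o):
    """Trapezoidal integration of the sliding speed Vs over [l_i, l_o] (Eq. 5)."""
    l = np.linspace(l_i, l_o, len(vs_samples))
    return float(np.sum((vs_samples[:-1] + vs_samples[1:]) / 2.0 * np.diff(l)))

def tooth_power_loss(mu_g, p_c, vs_samples, l_i, l_o):
    """Eq. (6): Nf = mu_g * pc * As, with pc assumed constant [5]."""
    return mu_g * p_c * sliding_area(vs_samples, l_i, l_o)

# Illustrative case: constant sliding speed of 0.2 m/s over a 10 mm contact path
vs = np.full(50, 0.2)                        # m/s, sampled along the path
nf = tooth_power_loss(mu_g=0.08, p_c=5.0e4,  # pc in N/m
                      vs_samples=vs, l_i=0.0, l_o=0.010)
print(f"Per-tooth power loss: {nf:.2f} W")   # 8.00 W for these inputs
```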
At a defined point of the contact line, the sliding speed is given by the vector
difference between the rack speed Vr and the pinion tangential speed Vp. The sliding speed necessarily lies in the rack tooth flank plane πf, see Figure 2.
Vs can be calculated as the composition of two sliding speeds, the first (Vs1) in
a plane parallel to the rack teeth (Figure 3), and the second (Vs2) in the pinion
transversal plane (Figure 4), where:

Vs1 / sin(βhsg) = Vr / sin(90° + βr - βhsg) = Vpr / sin(90° - βr)    (7)
Vr, βr (angle between rack tooth and rack axis) and βhsg (angle between pinion
axis and rack axis, in a plane parallel to both) are constant at every point of the
meshing, so Vs1 and Vpr (rack speed projection on the pinion transversal plane)
are necessarily constant too at every point of the contact paths. Taking Vpr into
account allows the following considerations to be drawn in the pinion transversal
plane (Figure 5, where ψ is the angular coordinate of the rack-pinion contact point,
and αtp is the pressure angle on the pinion transversal plane):
Vs2 / sin(ψ) = Vp / sin(90° + αtp - ψ) = Vpr / sin(90° - αtp)    (8)

where Vs2 = f(ψ). Finally, it is possible to calculate the sliding speed Vs as the vector sum:

Vs = Vs1 + Vs2    (9)
The power loss due to sliding friction is calculated by numerical integration
along the contact path. It is therefore possible to evaluate the Returnability load
contribution of the gear mesh (Rg). The derivations above are based on [6].
Fig. 5. Decomposition of the sliding speed in the pinion transversal plane.
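The chain of Eqs. (7)-(9), followed by the numerical integration of the power loss and Eq. (4), can be sketched as below. The geometry angles, μg and pc values are illustrative placeholders, and composing Vs1 and Vs2 as orthogonal components is a simplifying assumption of this sketch, not a statement from the paper.

```python
import numpy as np

# Sketch of Eqs. (7)-(9), plus the integration of Eq. (6) and Rg = Nf / Vr
# (Eq. 4). Angles, mu_g and pc are illustrative; treating Vs1 and Vs2 as
# orthogonal components of the vector sum is an assumption of this sketch.

def sliding_speed(vr, beta_r, beta_hsg, alpha_tp, psi_deg):
    """Vs1 and Vpr from the sine rule of Eq. (7), Vs2(psi) from Eq. (8),
    composed as in Eq. (9)."""
    d = np.deg2rad
    denom = np.sin(d(90.0 + beta_r - beta_hsg))
    vs1 = vr * np.sin(d(beta_hsg)) / denom        # constant along the path
    vpr = vr * np.sin(d(90.0 - beta_r)) / denom   # constant along the path
    vs2 = vpr * np.sin(d(psi_deg)) / np.sin(d(90.0 - alpha_tp))
    return np.hypot(vs1, vs2)                     # orthogonality assumed

vr, mu_g, p_c = 0.05, 0.08, 5.0e4         # m/s, -, N/m (illustrative)
psi = np.linspace(5.0, 25.0, 200)         # contact-point angle samples (deg)
path = np.linspace(0.0, 0.010, 200)       # contact-path abscissa (m)
vs = sliding_speed(vr, beta_r=30.0, beta_hsg=70.0, alpha_tp=20.0, psi_deg=psi)

# Trapezoidal integration of the power loss, then Eq. (4)
nf = mu_g * p_c * float(np.sum((vs[:-1] + vs[1:]) / 2.0 * np.diff(path)))
rg = nf / vr
print(f"Nf = {nf:.3f} W, Rg = {rg:.2f} N")
```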
During the rack motion the yoke liner works in a pure sliding condition. The
yoke spring load balances the separation force given by the rack and pinion meshing. In this case the Coulomb friction model can be deemed appropriate to represent the system. The contribution to the total Returnability load given by the yoke
can be expressed as:
Ry = μy · Fy    (10)
where Fy is the resultant force acting on the yoke liner and μy is the coefficient
of friction between the liner material and the rack, to be taken from an experimental look-up table as described below.
The direction of the separation force given by the gear mesh depends upon its
geometry. The magnitude of the separation force depends primarily upon the mesh design, and
also upon the friction given by the three individual sources. The friction generated between rack and bush depends upon the material of the bush itself and on the preload
exerted by the housing on the bush and, in turn, on the rack; this preload derives from the design preload. Hence, apart from the effect of variations in the
speed of the system, the Returnability load Rb given by the bush is constant.
5. EXPERIMENTAL MEASUREMENTS/LOOK-UP TABLES
As shown above, in order to model the different contributions to the total rack
pull, some parameters have to be taken from a look-up table, to be filled by means
of experimental tests performed on the purpose-built test bench. In order to predict Rg (gear mesh contribution, see (1)) it is necessary to know the coefficient of
friction (CoF, μg) between the rack and the pinion in the meshing zone. The CoF
can be evaluated by performing a test very similar to the Returnability load test,
where bush and yoke are replaced by a low-friction support with rolling bearings.
This test has to be performed at different rack speeds, as the CoF depends
upon the relative velocity between the contact surfaces. Once the average Returnability load has been determined, the calculation shown in Section 4 has to be reversed
in order to compute the steel-on-steel CoF. A typical trend is shown in Figure 6.
Fig. 6. Gear mesh coefficient of friction vs. rack speed.
Regarding the yoke, a specific test has been designed, where the pinion is replaced by a low-friction support with roller bearings. The same approach has been
used to replace the rack bush. For the yoke, a proper support that allows the test preload to be controlled
and monitored has been designed. The Returnability load measured
in this way is then divided by the preload, to estimate the dynamic coefficient of
friction of each material to be tested. The test is performed at different speeds in
order to create a look-up table as above.
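A minimal sketch of how such a speed-dependent look-up table might be queried in the model follows; the (speed, CoF) pairs are illustrative placeholders, not bench data from the paper.

```python
import numpy as np

# Hypothetical look-up table: dynamic CoF measured at discrete rack speeds.
speeds = np.array([10.0, 50.0, 100.0, 200.0, 400.0])  # rack speed, mm/s
cof = np.array([0.110, 0.090, 0.080, 0.075, 0.070])   # measured CoF values

def cof_at(rack_speed):
    """Linear interpolation in the look-up table; speeds outside the
    tested range are clamped to the nearest tested value."""
    return float(np.interp(rack_speed, speeds, cof))

print(cof_at(75.0))   # midway between the 50 and 100 mm/s test points
```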
The rack bush contribution to the total Returnability load has to be evaluated
directly. The test is once again very similar to the Returnability load test, and as
such it can be performed on the same test rig. Both yoke liner and pinion are replaced by roller-bearing-based supports. The bush is supported in a housing with the
same dimensions as the aluminium gearbox housing of the steering system; this allows the bush to be preloaded in the same way as in the real steering gear.
The tests are performed at different speeds as usual, once again in order to
obtain a comprehensive look-up table.
Solving the friction model requires a numerical approach with iterative computation cycles. Therefore, the creation of a dedicated tool based on MS Excel® was
deemed necessary. When properly set with all the input parameters (meshing geometry, test speed, yoke and bush material, spring preload, possible resisting
loads, etc.) it gives the total Returnability load split into each contribution, the
pinion torque and both the direct and reverse efficiency values.
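The iterative scheme such a tool might implement can be sketched as a fixed-point loop; the coupling coefficient k_sep between the rack load and the yoke separation force, and all numerical values, are hypothetical.

```python
# Sketch of the iterative computation behind the tool: the yoke load Fy
# depends on the mesh separation force, which grows with the rack load,
# so R = Rg + Ry + Rb (Eq. 1) is solved by fixed-point iteration.
# k_sep and all numerical values below are hypothetical.

def solve_returnability(rg, rb, mu_y, spring_load, k_sep,
                        tol=1e-9, max_iter=100):
    r = rg + rb                            # initial guess, no yoke term yet
    for _ in range(max_iter):
        fy = spring_load + k_sep * r       # resultant force on the yoke
        ry = mu_y * fy                     # Eq. (10)
        r_new = rg + ry + rb               # Eq. (1)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    raise RuntimeError("fixed-point iteration did not converge")

# Illustrative numbers (N); the loop converges because mu_y * k_sep < 1
r = solve_returnability(rg=30.0, rb=15.0, mu_y=0.06,
                        spring_load=600.0, k_sep=0.5)
print(f"Total Returnability load: {r:.2f} N")   # ~83.51 N
```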
Two comparisons between Returnability load measurements and the respective
predictions, based on the average measurement on a sample of 24 steering gears,
are shown below. Gears 1 and 2 are components of different car models, each
with its own geometry and materials. The 24 samples for each gear type basically encompass the whole manufacturing tolerance range.
Figure 7 shows a good fit with the simulation results. Figure 8 shows that
average real-life and computed values along the rack travel appear to be consistent,
while the correlation with peak-to-peak values is weaker, as the latter are influenced by parameters that were not yet considered (e.g. tooth shape errors, rack
rolling due to gear mesh separation forces, and yoke clearance).
Figs. 7, 8. Model vs real-life Returnability measurements: gears 1 and 2 (left) and gear 1 (right).
6. RESULTS AND CONCLUSION
An experimental/analytical model was developed to predict friction forces in a
mechanical steering gear. It is based on the power loss contributions given by the gear
mesh, the yoke liner and the rack bush. A dedicated test bench was developed in-house.
A comparison of theoretical results with real-life measurements shows a good correlation regarding mean values. Therefore, it is possible to predict friction effects
before the prototyping phase. As a matter of fact, this simulation tool is now a
standard within the design phase. This has led to development cost savings for
ZF-TRW and its customers, and to a more informed design process. Future model
developments will take parameters like rack rolling and yoke clearance into account, in order to achieve an improved correlation with peak-to-peak pull force
values as well.
References
1. D. Sherman in Car & Driver magazine, Dec. 2012.
2. N. Kim, D.J. Cole: A model of driver steering control incorporating the driver's sensing of
steering torque. Vehicle System Dynamics, 49(10), 2011, pp 1575-1596.
3. R.S. Sharp: Vehicle dynamics and the judgement of quality (pp 87-96), in J.P. Pauwelussen:
Vehicle performance – understanding human monitoring and assessment. Swets & Zeitlinger, 1999.
4. F. Peverada, M. Gadola: Lecture notes on vehicle dynamics and design – steering systems,
University of Brescia, Italy, 2013.
5. G. Henriot: Ingranaggi, trattato teorico e pratico. Tecniche Nuove, 1977.
6. ISO 21771:2007: Gears – Cylindrical involute gears and gear pairs – Concepts and geometry.
Design of an electric tool for underwater
archaeological restoration based on a user
centred approach
Loris BARBIERI*, Fabio BRUNO, Luigi DE NAPOLI, Alessandro GALLO
and Maurizio MUZZUPAPPA
Università della Calabria - Dipartimento di Meccanica, Energetica e Gestionale (DIMEG)
* Corresponding author. Tel.: +39-0984-494976; fax: +39-0984-0494673. E-mail address:
loris.barbieri@unical.it
Abstract
This paper describes a part of the contribution of the CoMAS project ("In situ
conservation planning of Underwater Archaeological Artifacts"), funded by the
Italian Ministry of Education, Universities and Research (MIUR), and run by a
partnership of private companies and public research centers. The CoMAS project
aims at the development of new materials, techniques and tools for the documentation, conservation and restoration of underwater archaeological sites in their natural environment. This paper details the results achieved during the project in the
development of an innovative electric tool, which can efficiently support the restorers’ work in their activities aimed at preserving the underwater cultural heritage
in its original location on the seafloor. In particular, the paper describes the different steps to develop an underwater electric cleaning brush, which is able to perform a first rough cleaning of the submerged archaeological structures by removing the loose deposits and the various marine organisms that reside on their
surface. The peculiarity of this work consists in a user centred design approach
that tries to overcome the lack of detailed users’ requirements and the lack of
norms and guidelines for the ergonomic assessment of this kind of underwater
tool. The proposed approach makes wide use of additive manufacturing techniques for the realization and modification of prototypes to be employed for in-situ experimentation conducted with the final users. The user tests have been designed to collect data supporting the iterative development of the prototype.
Keywords: Product Design, User centred design, Additive Manufacturing, Underwater Applications.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_36
1 Introduction
For a country like Italy, which possesses one of the richest artistic and archaeological
heritages in the world, the restoration and preservation of archaeological and cultural artefacts and sites is a challenge that requires a significant use of resources. If
these artefacts or sites are submerged, the efforts aimed at preserving the heritage
present a very high degree of difficulty. The operations conducted on submerged
sites have to follow an entirely different approach compared to emerged
(terrestrial) sites, and this approach has yet to be defined, both for the lack of ad
hoc devices and the absence of specific methodologies.
Unesco’s guidelines [1] for the restoration and preservation of cultural heritage have
been in full force and effect only since 2001, and they expressly provide for ‘in situ conservation’ as the first option, before any other intervention. Starting from this indication, in the last decade many surveys and interventions have been conducted in
submerged archaeological sites, with the aim of creating the basis for ‘underwater
archaeological tourism’.
In order to maintain the state of conservation of the submerged structures, to delay
as much as possible the proliferation of pest microorganisms (biofouling, limestone deposits, sediments, etc.) and to preserve the areas of interest, a new skilled
professional has emerged: the underwater restorer. These professionals have to combine
the skills of restorers with those of professional divers, by operating directly in archaeological sites, performing clean-up operations, maintenance and consolidation
of the areas to be restored. For all these operations, underwater archaeologists use
the same "terrestrial" tools - ice axes, scrapers, chisels, scalpels, sponges and
sweeps - adapted to the new environment. Currently, devices designed specifically
to support the work of underwater restorers are not available on the market.
Thanks to the CoMAS project [2] ("In situ conservation planning of Underwater
Archaeological Artifacts" – www.comasproject.eu), funded by the Italian Ministry
of Education, Universities and Research (MIUR) and run by a partnership of private companies and public research centers, it has been possible to overcome the
limitations of the equipment previously adopted by underwater restorers for cleaning operations. In fact, the CoMAS project aimed at the development of new materials, techniques and tools [3,4] for the documentation, conservation and restoration of underwater archaeological sites in their natural environment. Among these
tools, an electric cleaning brush has been developed and tested in order to satisfy the restorers’ needs that arise during the subsequent phases of the cleaning work
performed during the in-situ restoration process on the submerged artifacts.
The paper describes the user centred design (UCD) approach adopted for the development of the electric underwater tool and details the various steps of the design process carried out with the ongoing support of end users for the testing of
the different prototypes. In particular, the testing activities have been performed
with the involvement of underwater restorers and professional divers and have
been carried out over the entire life-time of the CoMAS project in the Marine Protected Area - Underwater Park of Baiae (near Naples, Italy).
2 User centred design approach
In a typical UCD process [5, 6], there are three essential iterative steps which
should be undertaken in order to incorporate the users’ needs before proceeding
with the implementation of the final design solution. The process starts with an
analysis step that aims to understand and specify the context of use and define user
requirements. The second step produces design solutions, which are tested and evaluated in the following stage. The third is an empirical measurement
step where user studies are carried out to collect objective and subjective usability
data that allow engineers to evaluate how much the design differs from users’
needs and desires. The process iterates until the requirements are satisfied.
Our process follows the abovementioned UCD recommendations and thus requires the validation of the engineers’ assumptions with the direct involvement of
end-users at every stage of the development process. Since users have no previous
experience with underwater electric tools but only an idea of the desired object, and, furthermore, due to the absence on the market and in the state of the art of
electric underwater devices for in-situ conservation, it has been necessary to enrich
the evaluation step with an entire set of experimental activities that focus on the
technical and mechanical requirements.
On the basis of these considerations, the UCD approach started with the definition
of users’ needs through direct communication and in-depth conversations with underwater restorers by means of focus groups and interviews. The focus groups encouraged users to share their feelings, ideas and thoughts based on their know-how, while the interviews allowed personal experiences and attitudes to be explored.
In both cases, the designers gathered a large amount of information that allowed
them to gain a better comprehension of the context of use and of users’ desires and
needs. These needs have been interpreted and translated by engineers into a preliminary set of usability and technical requirements that, due to the novelty of the
product, were not sufficient for a complete determination of the design specification.
In order to overcome this shortfall, a first prototype of the underwater tool has
been developed and tested with the use of different sensor types, which allowed engineers to acquire the large amount of experimental data necessary to perform an
accurate product development process and an optimized design of the tool.
Four different physical prototypes have been developed, taking advantage of
modern additive manufacturing and topology optimization techniques [7,8], and
tested, both in laboratory studies and in the underwater environment, throughout
the entire UCD process. The tests performed in real operating conditions have
been carried out by end users to evaluate the tool in terms of functional and usability requirements.
3 Electric tool design and testing
This section describes the development process of the electric underwater tool,
carried out in accordance with the UCD approach described in the previous
section. The different prototypes have been manufactured by means of traditional
machining processes but also thanks to the adoption of additive manufacturing
techniques. In particular, Direct Metal Laser Sintering (DMLS) and Selective Laser Sintering (SLS) technologies have been used for the prototyping of metal and
polymer parts. The choice of the most suitable technology was dictated by the
analysis of the functional characteristics and the geometric complexity of
each component.
3.1 First prototype
The following image (Fig. 1) depicts the virtual mock-up of the first prototype of
the electric tool. The tool is composed of two cylindrical aluminum
cases, assembled by means of flanges, that house a 36V brushed motor with a
maximum no-load speed of 1400 rpm.
Fig. 1. Virtual prototype.
The tool is powered by a 36V lead battery pack, mounted inside an external steel
case that is placed on the seabed during the operations.
Above the flanges that assemble the two main parts, a waterproof cylindrical
chamber with a diameter of approximately 10 cm has been placed in order to
house the data logger that collects the different data outputs of the sensors that
equip the tool. In particular, the installed sensors are three load cells to measure
the axial forces, plus sensors capable of monitoring the engine operating parameters,
such as the electric current drawn and the rotational speed, in order to make an
estimation of the torque arising during the working operations.
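A sketch of how torque could be estimated from the logged quantities (current draw and rotational speed) follows; the torque constant and no-load current are hypothetical calibration values, not figures from the paper.

```python
import math

# For a brushed DC motor, torque is roughly proportional to the current
# above the no-load draw: tau = k_t * (I - I0). The values of k_t and I0
# below are hypothetical stand-ins for the workbench calibration results.
K_T = 0.10        # N*m per ampere (hypothetical)
I_NO_LOAD = 1.5   # A, no-load current draw (hypothetical)

def estimated_torque(current_a):
    """Torque estimate from the logged motor current."""
    return K_T * max(0.0, current_a - I_NO_LOAD)

def mechanical_power(current_a, rpm):
    """P = tau * omega, with omega derived from the logged speed."""
    omega = rpm * 2.0 * math.pi / 60.0     # rad/s
    return estimated_torque(current_a) * omega

print(estimated_torque(9.0))                    # 0.75 N*m at 9 A
print(round(mechanical_power(9.0, 1200.0), 1))  # 94.2 W at 1200 rpm
```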
The calibration process has been carried out in the laboratory by means of a testing workbench specifically designed for this kind of sensor. In particular, as
shown in the following image, the testing workbench has been configured for the
measurement of the axial loads (Fig. 2a) and torque (Fig. 2b) that act on the
mandrel.
Fig. 2. Laboratory tests. Workbench for the measurement of axial force (a) and torque (b).
Once the laboratory tests had been accomplished, field trials were carried out with the participation of underwater restorers and certified deep-sea divers,
who performed different kinds of experimentation with the electric tool on different
materials (Fig. 3a), biofouling organisms (Fig. 3b) and conditions of use.
Fig. 3. Users testing on different kinds of biofouling (a) and materials (b).
In particular, users also focused their attention on the usability and manoeuvrability of the instrument under different buoyancy conditions (Fig. 4a) and on tool
switching operations (Fig. 4b).
Fig. 4. Users testing usability (a) and the switching of the brushes (b).
The information acquired by the data logger has been processed and integrated
with video by means of software developed ad hoc. In particular, Figure 5 shows
a frame of the software developed to support engineers in the interpretation of the
data acquired during the tests. The software gives information about the place and
time of the experiment, average values of the main parameters involved during
the test and a graphical timeline representation of their actual values.
The analysis of the data acquired with the sensorized prototype allowed engineers
to gain deeper knowledge of the technical and mechanical requirements to be
satisfied in order to meet users’ needs. In particular, it has been possible to define
the weights, the working and reaction forces, and the ergonomic and functional operating characteristics of the instrument.
The results have shown that the manoeuvrability of the tool represents a critical issue that demands specific attention throughout the entire lifetime of the product development process. In fact, the efficacy of the instrument is tightly related to the
direction and force applied by the user, which, in such a difficult working environment, are in turn strictly affected by the ergonomic and usability characteristics
of the device.
Fig. 5. Software for the integration and visualization of the information acquired by the data logger.
3.2 Second prototype
The underwater restorers’ feedback and the results of the data gathered and
analysed during the tests have been taken into account in order to redesign and
improve the underwater tool. The following image shows the comparison between
the first prototype (Fig. 6a) and the second one (Fig. 6b).
Fig. 6. The first prototype (a) compared to the second one (b).
The new tool is more compact and manageable. The back part of the tool has been
redesigned in nylon in order to optimize its geometry and reduce its dimensions.
The engine management system has been improved thanks to the adoption of an
electronic controller card instead of the on-off switch button. The lead battery has
been replaced with a longer-lasting 36V lithium battery. The adoption of a
lithium battery allowed a 90% reduction in the dimensions of its waterproof case. Furthermore, according to the feedback provided by the underwater restorers, the handle has been redesigned in order to allow users to easily counteract
the reaction forces and torques generated during the use of the instrument.
The new handle, manufactured by means of water-jet cutting of an aluminum
plate, is placed in the anterior part of the tool, near the mandrel, and presents a
symmetrical handlebar with two grips incorporating the controls (Fig. 7a).
Fig. 7. The second physical prototype (a) during user testing (b) in the underwater environment.
A first series of laboratory tests has been carried out on the second prototype,
with hydrostatic experimentation at a maximum pressure of 4 bar for a duration of
60 minutes.
Subsequently, a second phase, consisting of extensive user studies, has been carried out in the testbed of the underwater archaeological park of Baiae. Here, underwater restorers have tested the tool on various submerged remains affected by
different kinds of biofouling organisms. Fig. 7b shows a user during the removal
of algae by means of the underwater device equipped with a hard nylon brush.
3.3 Final prototype
The testing activities carried out on the second prototype made it possible to
detect some critical aspects requiring particular attention, on which it was necessary to
keep working in order to find improvements that better satisfy users’ needs.
In particular, with regard to the handle design, if on the one hand the large handle of
the second prototype allowed users to easily counteract the reaction forces, on the
other hand it exhausted the wrist muscles more quickly and did not allow precise
control of the tool. For these reasons a third handle design has been developed, as
depicted in Fig. 8a. The tool presents two large independent handles that allow users to
work always with straight wrists and a secure power grip. The first, U-shaped handle is form-fitting and ergonomic thanks to its curved shape, manufactured by means
of 3D printing technologies. The second handle provides comfortable control of
the switch placed on it and features a locking knob that allows its angle to be customized
according to the direction of the force the user wants to exert.
Compared to the second prototype, the third design version also features a
keyless chuck that makes changing the brushes faster and simpler.
Fig. 8. The third physical prototype (a) and the final design (b) of the electric tool.
The third prototype also underwent laboratory tests and field trials performed with
end-users, whose feedback has been incorporated by engineers in the final design version of the electric underwater tool (Fig. 8b).
The final design presents other important improvements. The device is equipped
with a 4-pole brushless motor that doubles the performance of the previous one. The engine control system has been improved too, thanks to the adoption
of an electronic programmable control unit. The back part of the tool has been
manufactured in aluminum to improve the heat exchange, while the battery case
weight has been optimized thanks to the adoption of Delrin plastic material. Furthermore, the handle switch has been replaced with a magnetic one that
improves the user’s comfort.
Fig. 9. The final prototype of the electric tool tested by final users.
The final tests have been performed in the area of Portus Iulius, where archaeological structures (Fig. 9b), several mosaic floors (Fig. 9a) and opus signinum
floors lie on the seabed at a depth of 3-5 meters. The tool has been used by restorers in different phases of the restoration work, in relation to the cleaning operation
that had to be performed on the various construction materials or for the removal of specific living organisms, such as algae, sponges and molluscs.
4 Conclusions
The paper has presented the user centered design approach adopted for the development of an innovative underwater electric tool. This device is an outcome of the
CoMAS project and has been specifically developed to support underwater restorers during their activities of conservation and restoration of underwater archaeological sites.
The development process has been carried out with the constant support and feedback of end users, which has been of fundamental importance, especially during the
testing activities, to validate the functionality of the prototype and to guide design
improvements. The four prototypes have been developed taking advantage of the
great versatility and high capability to manage complex geometries offered by the
additive manufacturing technologies.
The final users have expressed their full satisfaction with the results achieved in the
UCD process. The developed electric underwater tool is easy to use and allows restorers to operate with better results in terms of speed and freedom.
The good results and the effectiveness of the described UCD approach have
pushed the researchers and designers of the CoMAS project to apply the same
process to the development of a full set of electric underwater tools able to support restorers in all the different activities performed for the mechanical cleaning
of submerged archaeological remains.
Acknowledgments The authors want to express their gratitude to all the underwater restorers,
underwater operators and underwater instructors that have been actively involved in the design
process. A special thanks to Roberto Petriaggi, former director of the Underwater Archaeological
Operation Unit at ISCR, for his support and scientific expertise. The authors would also like to thank
the Soprintendenza Archeologia della Campania for the permission to conduct the experimentation of the electric tool in the Baiae underwater archaeological site.
All the design activities have been carried out in the “CoMAS” Project (Ref.: PON01_02140 –
CUP: B11C11000600005), financed by the MIUR under the PON ’R&C’ 2007/2013 (D.D. Prot.
n. 01/Ric. 18.1.2010).
References
1. Unesco, 2001. Convention on the protection of the underwater cultural heritage, 2 November
2001. Retrieved 01/02/2016 from http://www.unesco.org
2. Bruno F., Gallo A., Barbieri L., Muzzupappa M., Ritacco G., Lagudi A., La Russa M.F.,
Ruffolo S.A., Crisci G.M., Ricca M., Comite V., Davide B., Di Stefano G., Guida R. The
CoMAS project: new materials and tools for improving the in-situ documentation, restoration
and conservation of underwater archaeological remains. Accepted for publication in the Marine Technology Society (MTS) Journal, 2016.
3. Bruno F., Muzzupappa M., Gallo A., Barbieri L., Spadafora F., Galati D., Petriaggi B.D.,
Petriaggi R. Electromechanical devices for supporting the restoration of underwater archaeological artifacts. In: OCEANS 2015-Genova. IEEE, 2015. p. 1-5.
4. Bruno F., Muzzupappa M., Lagudi A., Gallo A., Spadafora F., Ritacco G., Angilica A.,
Barbieri L., Di Lecce N., Saviozzi G., Laschi C., Guida R., Di Stefano G. A ROV for
supporting the planned maintenance in underwater archaeological sites. In: OCEANS 2015 - Genova. IEEE, 2015, p. 1-7.
5. Vredenburg K., Isensee S., Righi C. User-Centered Design: An Integrated Approach. Upper
Saddle River, NJ: Prentice Hall PTR, 2002.
6. ISO 9241-210:2010. Ergonomics of human-system interaction. Part 210: Human-centred design for interactive systems.
7. Muzzupappa M., Barbieri L., Bruno F., Cugini U. Methodology and tools to support
knowledge management in topology optimization. Journal of Computing and Information
Science in Engineering, 10(4), 2010, 044503.
8. Muzzupappa M., Barbieri L., Bruno F. Integration of topology optimisation tools and
knowledge management into the virtual Product Development Process of automotive components. International Journal of Product Development, 2011, 14(1-4), 14-33.
Analysis and comparison of Smart City
initiatives
Aranzazu FERNÁNDEZ-VÁZQUEZ1* and Ignacio LÓPEZ-FORNIÉS1
1 Department of Design and Manufacturing Engineering, María Luna 3, Zaragoza, 50018, Spain.
* Tel.: +34-669-390-186; fax: +34 976 76 22 35; E-mail address: aranfer@unizar.es
Abstract: Complexity in cities is expected to become even higher in the short
term, which implies the need to face new challenges. The Smart City (SC) model
and its associated initiatives have become very popular for undertaking them, but it
is often not very clear what the model really means. Starting from a previous classification
of the initiatives developed under the SC model into two big categories, according
to their approach to citizens, this paper aims to make a critical analysis of this
model of city, and to propose the development of new initiatives for it based on
Citizen-Centered Design methodologies. Living Labs, both as a methodology and as an
organization, appear in this context as an interesting choice for developing initiatives with real citizen involvement along the entire design process, which is expected to take place in later stages of this research.
Keywords: Smart City, Living Lab, Citizen Centered Design, Design methods.
1 Introduction
Over the last decades cities have been facing new challenges that are expected to
become even bigger in the short term. The facts that 54% of the world's population
live in cities [1] and that this share is expected to increase up to 66% by 2050 are
incessantly repeated data that appear in almost every paper or publication regarding urban planning or cities [2][3][4]. These facts are usually used to highlight
the urgency with which new approaches must be taken to improve citizens' conditions now and in the near future.
In this context, many models have emerged claiming to be the solution for the
upcoming challenges: eco-city, high-tech city or real-time city. One of the most
successful ones is Smart City (SC), and many initiatives and much research have
been developed in recent years around it. The objective of this study is to make a
critical analysis of different initiatives developed within this model based on the
role of citizens in each one of them.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_37
Citizen involvement is a factor that can guarantee the success of the initiatives and
their economic and social viability, which is of major interest for all the parties involved in the development of cities [5][6]. According to
the results of the investigation, it is intended, in the following phase of this research, to develop new initiatives for the SC based on citizens’ interest, integrating
user-centered design methodologies.
It becomes clear that intensive research and numerous proposals have been developed under the SC label lately, but there is still no unique definition of SC,
and the indicators of the "smartness" of a city are still far from indisputable [7].
Nevertheless, the analysis of urban governance has appeared as a promising approach for measuring the impact of innovation on urban daily processes [8], and
to this end it is interesting to analyse the role of citizens in the whole process.
Thus, analysing publications of the last fifteen years, more than one thousand
research articles can be found in Scopus with "smart cities" in their title. Among
those, two broad categories of SC initiatives can be established with regard to
the role of citizens:
• The first, more abundant in publications, comprises proposals that focus on the integration of Information and Communication Technologies (ICT) into city services and infrastructure. In general, they respond
to a top-down approach, in which the initiatives are mainly developed
by administrations and/or companies, with citizens as mere end users.
• The second one, in some ways opposite, includes initiatives that pose
a redefinition of the ICT approach and offer a user-centered design
focus. It responds to a bottom-up approach, in which citizen participation is encouraged throughout the process.
2 Smart City models and initiatives based on ICT
2.1 Technological definitions of SC
The first approach defines the SC as a city that is using new ICTs innovatively and
strategically to achieve its aims. According to this definition, the Smart City is
characterized by its ICT infrastructures, which facilitate an increasingly smart, interconnected and sustainable urban system [2].
The paradigm that supports the need for this ICT deployment is the Internet of
Things (IoT), which proposes a system characterized by the pervasive presence of a variety of devices able to interact with each other without human intervention.
In this context, SC is driven and enabled by interconnected objects placed in the
urban space. Based on technologies such as modern wireless machine-to-machine
(M2M) sensing, Radio Frequency Identification (RFID) and Wireless Sensor Networks (WSN), IoT is expected to successfully contribute to a more efficient and
accurate use of resources [9], allowing access to a large amount of information
(Big Data) that can be processed for its subsequent use by data mining techniques.
The futuristic concept of a SC where citizens, objects, utilities, etc. are seamlessly connected using ubiquitous technologies is almost a reality, promising to significantly enhance the living experience in 21st-century urban environments [10].
Proposals undertaken with this approach have been developed within the fields of
transport, services and energy efficiency of cities, and all those related to big
data and data mining can be included within this approach too. Many of them have
also been supported, promoted and/or advertised by large ICT companies, such
as Endesa-Enel and IBM in Malaga (Spain), IBM in Songdo City (South Korea),
TECOM Investments in SmartCity Malta (Malta), Cisco Systems in Holyoke, Massachusetts (USA) and Telefonica in Santander (Spain).
Fig. 1. Typical IoT approach Smart City representation [6]
But this point of view has not only been encouraged by companies. The European Commission itself started promoting a SC model with a bigger focus on energy
efficiency, renewable energy and green mobility than on citizens themselves [11].
This tendency has slightly changed recently, but not significantly yet.
This issue has also been the subject of much academic research, mainly within
the fields of Computer Science and ICT. Research has therefore focused
primarily on issues such as the architecture, protocols and infrastructure needed for
the deployment of this model, such as mobile crowd sensing (MCS) [12], or on adaptations
of pre-existing architectures, such as the Extensible Messaging and Presence
Protocol (XMPP) [13], for developing new services for this city model.
2.2 ICT based SC initiatives: problems and redefinition
The previous definition of SC and its associated initiatives has, however, been
questioned [14][15][16][17]. On the one hand, it has been argued that while there
was no general consensus on the meaning of the SC term or on what its describing
attributes were, there has been an intensive "wiring" of cities and the collection
of big amounts of information, without consideration of some of the possible associated problems, such as the need to ensure the privacy of participants when
data are collected by directly instrumenting human behaviour [14]. Accordingly,
"cities often claim to be smart, but do not define what this means, or offer any
evidence to support such proclamations" [15].
On the other hand, when analysing most of the initiatives developed within the
field of SC, it can be seen that the results only slightly resemble their ambitious
initial objectives. It appears difficult to "transform the higher level concepts found in SC literature into actionable and effective policies, projects and
programs that deliver measurable value to citizens" [16]. With pressure growing
for cities to get even smarter, smart city claims have a self-congratulatory nature
that is causing a kind of anxiety around the development of this model [17].
3 Smart City initiatives based on Citizens
In response to the problems arising from the predominant technological SC model,
a current of opinion has claimed that the design of the genuine smart city could
only be made possible by the emergence of smart citizens, who would be the ones
to confer the "smart" attribute on cities [18][19].
Instead of considering people as just another one of the enabling forces of the
SC [20], these proposals have opted for the application of citizen-centric and participatory approaches to the co-design and development of Smart City. This model
is emerging as a new and specific type of SC, the Human Smart City [21].
In spite of that, most of the proposals in which the emergence of smart citizens
is supposedly intended have limited citizens' participation to the roles of data provider
[22] or tester of a pre-designed model or service [23], but have rarely
involved them in the entire process. The main exception, and the environment
that has made possible the emergence of projects in which citizens have played a
major role throughout the entire process, has been the experiences of Living
Labs developed in the field of SC.
3.1 Living Labs general definition and first SC experiences
Living Labs (LL) have been defined both as a research and development methodology and as the organization created for its practice [24], and the term often
also refers to the context or space in which it is developed.
As a methodology, LL is one in which innovations are created and validated in
collaborative, multi-contextual and multi-cultural empirical real-world environments [25]. This approach seeks the involvement of users in every phase of the
process as the means to ensure their engagement with the services or products developed, and it is performed through iterative cycles of proposal, development of
alternatives and testing at every stage of the process. Thereby, it can be considered
a User Centered Design (UCD) methodology for the way in which user involvement is encouraged.
Referring to LL as an organization, many European cities have established their
own ones for developing new initiatives. The European organization that brings
together most of these LL is the European Network of Living Labs (ENoLL) [26],
which was legally established as an international association in 2010 and has
developed since then all kinds of initiatives for spreading its aims, methods and
objectives.
Fig. 2. Map of existing LL according to ENoLL Web Site [20]
From the beginning, LL have focused on developing new business models,
mainly in technical and industrial contexts. Due to the lack of definition of the
SC and the difficulty of city leaders in identifying the quantifiable sources of value
that ICT networks can generate for them, this focus has made LL appear as an
ideal candidate to create an appropriate model for the implementation of the SC
[27][28].
These SC LL have aimed at improving the governance of cities, promoting
proposals coming from citizens themselves and applying user-centered design
methodologies, such as co-design or service design [29][30][31].
368
A. Fernández-Vázquez and I. López-Forniés
3.2 Living Labs problems regarding SC
Considering the experiences and studies developed, it is not so clear in which category of methodologies LL should be included. Although it has been claimed to
be a User Driven methodology, one of the main problems of European LL has
been the difficulty for citizens to forward their initiatives and ideas to the LL, so
users cannot be considered as those who actually run the innovation process. Accordingly, LL could better be considered a methodology between User
Centered Design and Participatory Design. But much research is still needed
to define the characteristics and potential of LL methodologies [32].
Besides, it has been difficult to create a really consistent audience for these initiatives, so that sometimes the results are not significant or do not provide
sufficient data for processing. It has been difficult, mainly in countries with little
tradition of citizen involvement such as Spain, to get citizens implicated
in those projects. As the common good, understood as the social benefit achieved
by citizenship through active participation in the realm of politics and public services, has not been interiorized as desirable by society, the social benefit is finally
not achieved. Thus, many of the projects have remained in academia.
Finally, initiatives related to LL have still relied largely on the involvement of
an administration for their development, which on the one hand has limited their scope of
action because of the context of crisis of recent years. On the other hand, little
attention has been paid to cost-effectiveness in LL projects, which can hinder
future sustainable financing by private stakeholders.
4 Summary and Benchmarking of SC initiatives
It can occasionally be confusing to distinguish between initiatives, and ICT based
ones often seem to adopt a citizen driven approach, for instance by establishing a distinction
between so-called "hard" and "soft" domains and including under the "soft"
definition all those related to governance and people [33]. But a clear distinction
can be made between the two models by analysing the indicators shown in Table
1. Some of these indicators, such as the leaders and drivers of the process in each
category or their characteristic features, have been explained in the previous sections.
The facts have been extracted from experiences exposed by international organizations, such as the previously mentioned ENoLL, or on city web pages. This
information has been completed with searches in Scopus combining the smart city
term with "ICT", "citizen", "user" and, finally, "Living Labs".
These searches covered publications since 2013, and after filtering out
irrelevant information, more than 200 articles were analysed to obtain the facts presented.
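By way of illustration, the filtering and classification just described can be sketched as follows; the record structure, sample entries and secondary terms below are hypothetical, not the actual Scopus data used in this study, and the matching is naive substring search for brevity:

```python
# Hypothetical sketch of the search-and-filter step: keep post-2013
# "smart city" records and bucket them by a secondary search term.

RECORDS = [
    {"title": "Smart city sensing platforms", "year": 2014,
     "abstract": "An ICT architecture for urban sensing."},
    {"title": "Citizen participation in smart cities", "year": 2015,
     "abstract": "A living lab approach to citizen engagement."},
    {"title": "Bridge maintenance robots", "year": 2013,
     "abstract": "Unrelated to urban governance."},
]

SECONDARY_TERMS = ["ict", "citizen", "user", "living lab"]

def classify(records, since=2013):
    """Keep 'smart city' records from `since` on, bucketed by secondary term."""
    buckets = {term: [] for term in SECONDARY_TERMS}
    for rec in records:
        text = (rec["title"] + " " + rec["abstract"]).lower()
        if rec["year"] < since or "smart cit" not in text:
            continue  # filter out irrelevant records (off-topic or too old)
        for term in SECONDARY_TERMS:
            if term in text:
                buckets[term].append(rec["title"])
    return buckets

buckets = classify(RECORDS)
```

A record may fall into several buckets, matching the overlapping searches described above.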
Table 1. Benchmarking of SC models.

Indicator                 ICT based SC                       Citizen based SC
Leaders and drivers       ICT/Energy/Utility companies;      Neighbourhood associations;
                          City policy actors                 Small collectives
Beneficiary               Companies, Authorities and         Citizens and Involved
                          Citizens (partially)               collectives
Innovation base           Technological based                Open or collaborative innovation
Objectives & priorities   Urban development;                 Social welfare;
                          Infrastructure improvements;       Common good
                          Efficient spending
Resources                 Public resources;                  Individual funds;
                          Private investment                 Crowdfunding
Characteristic features   Networks; ICT Devices;             Citizen participation; Open clouds
                          Data Collection                    and platforms; Social services
Pros                      Secured funding for projects;      Secured citizen engagement;
                          Big media power;                   Targeted initiatives;
                          Data mining resources              Focus towards Common good
Cons                      Poor citizen participation;        Lack of funds; Poor communication
                          Fuzzy goals; Private benefits      power; Need for new tools/methods;
                                                             Engagement of citizens
Although Citizen based SC initiatives rely on co-creative and collective processes with involved groups of people that can be autonomous, ICT features can
become a very strong support. It is only necessary to re-think the idea of the city we
are heading to.
5 Conclusions and further research
The notion of Smart City refers, on the one hand, to cities that are increasingly
composed of and monitored by pervasive and ubiquitous computing and, on the
other hand, to those whose economy and governance are being driven by innovation, creativity and entrepreneurship, enacted by smart people.
However, there does not seem to be a clear way of linking the two ideas into specific initiatives, and only the experiences arising in the so-called "living labs" could
be considered close to having reached a proper convergence between the two models, by involving citizens throughout the whole process while integrating ICT in a
proper way. But they are not large in number or homogeneous in characteristics
and scope, and have had limited citizen participation and involvement. Further,
the dissemination of the results has not been enough to promote similar initiatives,
and the dependence on administration involvement can hinder their future.
LL characteristics are anyway very promising from the designer's perspective,
as they allow the emergence of new processes that can develop real and better user
involvement in SC. The integration of citizen-driven processes for fostering participation in the early stages of the initiatives, or the search for new communication
channels allowing better result dissemination, are just two of the possible research fields for the near future.
It is our intention to develop in the short term a pilot project in the field
of SC using LL Design Methods and Citizen-Driven processes. The participation
of citizens along the entire design process might ensure that the product or service
will meet a real need in a proper way, which is very interesting for companies
and administrations, thereby achieving the involvement of all stakeholders and ensuring the viability of the initiatives. And as this would imply that user participation
is sought throughout the process, the promotion of citizen creativity and
entrepreneurship would also be achieved.
References
1. United Nations. World Urbanization Prospects: The 2014 Revision. New York, 2014.
2. Kumar Debnath A., Chor Chin H., Haque M. and Yuen B. A methodological Framework for
benchmarking smart transport cities. Cities, 2014, 37, pp.47-56.
3. Jair Cabrera, O. Infraestructuras que dan soporte a ciudades inteligentes. CONACYT symposium for scholars and former grantees. 2012. Available at: http://docplayer.es/7437135Ponencia-oscar-jair-cabrera-bejar.html [last date of access: 18/04/2016]
4. Karadağ, T. An evaluation of the smart city approach. Doctoral Dissertation, 2013. Middle
East Technical University.
5. Macintosh, A. Using Information and Communication Technologies to Enhance Citizen Engagement in the Policy Process, in OECD, Promise and Problems of E-Democracy: Challenges of Online Citizen Engagement, OECD Publishing, Paris. 2004. DOI:
http://dx.doi.org/10.1787/9789264019492-3-en
6. De Lange, M., De Waal, M. Owning the city: New media and citizen engagement in urban
design. First Monday, Nov. 2013. ISSN 1396-0466. Available at:
http://pear.accc.uic.edu/ojs/index.php/fm/article/view/4954/3786. Date accessed: 14/06/2016.
7. Manville, C et al. Mapping smart cities in the EU. 2014. Available at:
http://www.rand.org/pubs/external_publications/EP50486.html. Date accessed: 14/06/2016
8. Anthopoulos, L. G., Janssen, M., & Weerakkody, V. Comparing Smart Cities with different
modeling approaches. In Proceedings of the 24th International Conference on World Wide
Web Companion, May 2015, pp. 525-528, International World Wide Web Conferences Steering Committee.
9. Jin, J. Gubbi, J. Marusic, S. & Palaniswami, M. An information framework for creating a
smart city through internet of things. Internet of Things Journal, IEEE, 2014, 1(2), 112-121.
10. Dohler M. Vilajosana I., Vilajosana X. & LLosa, J. Smart cities: An action plan. In Barcelona Smart Cities Congress. Barcelona, Spain, December 2011,
11. Centre of Regional Science, Vienna UT. Smart cities – Ranking of European medium-sized
cities. Final Report. 2012. Available at: http://www.smart-cities.eu/press-ressources.html.
Date accessed: 18/04/2016.
12. Cardone C., Cirri A., Corradi A., Foschini L. The ParticipAct Mobile Crowd Sensing Living
Lab: The Testbed for Smart Cities. IEEE Communications Magazine, 2014, 52(10), 78-85.
13. Szabo R. et al. Framework for Smart City Applications based on Participatory sensing. In 4th
IEEE International Conference on Cognitive Infocommunications. Budapest, Hungary, 2013
14. Stopczynski A., Pietri R., Pentland A., Lazer D., Lehmann, S. Privacy in sensor-driven human data collection: A guide for practitioners. 2014. arXiv preprint arXiv:1403.5299.
15. Hollands R. Will the real smart city please stand up? Intelligent, progressive or entrepreneurial? City, 2008, 12(3), 303-320.
16. Cosgrave E., Arbuthnot K., Tryfonas, T. Living labs, innovation districts and information
marketplaces: A systems approach for smart cities. Procedia Computer Science, 16, 2013,
pp. 668-677.
17. Allwinkle S., Cruickshank, P. Creating smart-er cities: An overview. Journal of urban technology, 2011, 18 (2), 1-16.
18. Department for Business Innovation & Skills, Smart Cities. Background paper, available at:
https://www.gov.uk/government/publications/smart-cities-background-paper, 2013. Date accessed: 14/06/2016.
19. Haque, U. Surely there's a smarter approach to smart cities? Wired, 17 April 2012.
20. TECNO - Cercle Tecnològic de Catalunya. Hoja de Ruta para la Smart City. Available at:
http://www.socinfo.es/contenido/seminarios/1404smartcities6/03-ctecno_hoja_ruta_smartcity.pdf. Date accessed: 18/04/2016.
21. Marsh J., Molinari F., Rizzo F. Human Smart Cities: A New Vision for Redesigning Urban
Community and Citizen’s Life. In Knowledge, Information and Creativity Support Systems:
Recent Trends, Advances and Solutions. 2016. pp. 269-278. (Springer International Publishing).
22. https://smartcitizen.me/ [last date of access: 15/04/2016].
23. https://stormclouds.eu/ [last date of access: 15/04/2016].
24. Almirall, E., Lee, M., & Wareham, J. Mapping living labs in the landscape of innovation
methodologies. Technology Innovation Management Review, 2012, 2(9), 12.
25. Schumacher J., Feurstein, K. Living Labs – the user as co-creator. 2007.
26. http://www.openlivinglabs.eu/
27. Cosgrave E., Arbuthnot K., Tryfonas, T. Living labs, innovation districts and information
marketplaces: A systems approach for smart cities. Procedia Computer Science, 16. 2013, pp.
668-677.
28. Eskelinen, J., Garcia Robles, A., Lindy, I., Marsh, J., & Muente-Kunigami, A. CitizenDriven Innovation (No. 21984). The World Bank. 2015.
29. http://humansmartcities.eu/project/apollon/
30. http://my-neighbourhood.eu/
31. http://www.opencities.net/node/66
32. Dell'Era, C., Landoni, P. Living Lab: A Methodology between User-Centered Design and
Participatory Design. Creativity and Innovation Management, 2014, 23(2), 137-154.
33. Neirotti, P., De Marco, A., Cagliano, A. C., Mangano, G., & Scorrano, F. Current trends in
Smart City initiatives: Some stylised facts. Cities, 2014, 38, pp.25-36.
Involving Autism Spectrum Disorder (ASD)
affected people in design
Stefano Filippi* and Daniela Barattin
Politecnico di Ingegneria e Architettura Dept. (DPIA), University of Udine, Italy
* Corresponding author. Tel.: +39-0432-558289; fax: +39-0432-558251. E-mail address:
stefano.filippi@uniud.it
Abstract. This research aims at moving from design for disabled people to design
led by disabled people. This is achieved by defining a roadmap suggesting how to
involve people affected by Autism Spectrum Disorder (ASD) in design. These
people could represent an added value given their uncommon reasoning mechanisms. The core of the roadmap consists of tests involving groups of ASD and
neurotypical people. These tests are performed using shapes; the testers are asked
to interact with these shapes and to highlight the aroused functions, meanings and
emotions. The outcomes are analyzed in terms of variety, quality, frequency and
originality, and elaborated in order to pursue unforeseen, innovative design solutions.
Keywords: Design Activities, Autism Spectrum Disorder (ASD), Design by disabled people.
1 Introduction
Classic design activities consist of neurotypical people developing products for
neurotypical people and, recently, for disabled people as well. The literature
shows many examples of design for disabled people, referring on one hand to
physical disabilities and ergonomic issues and on the other hand to cognitive disabilities and the compatibility between the product and the human problem solving
process. Ergonomic issues are debated, for example, in Casas et al. [1], aiming at
designing an intelligent system with a monitoring infrastructure that helps elderly
people with disabilities to overcome their handicaps in performing household tasks. The
focus on cognitive disabilities is placed, for example, by Friedman and Bryen [2];
they define twenty guidelines for Web accessibility for people with different disabilities. Dawe [3] describes interviews with young people with cognitive disabilities and their families aimed at highlighting design aspects about assistive technologies to implement in the product, like portability, ease-of-learning, etc.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_38
Up to now, design activities have always consisted in exploiting the skills and
knowledge of neurotypical designers to develop products for disabled people. The
research described in this paper aims at subverting this by introducing the concept
of design by disabled people. Specifically, it proposes a roadmap that suggests
how to involve people affected by Autism Spectrum Disorder (ASD) in design activities as effectively as possible. These people show specific characteristics like
schematic and practical reasoning, high sensitivity and strong emotional responses
to external stimuli, and a very peculiar way of interpreting the external world and
interacting with it. As described in [4], these characteristics make ASD people one of
the best candidates to provide unforeseen explorations of the design space, and this
could lead to innovative design solutions. The roadmap should allow establishing
an effective collaboration with ASD people by considering them as active actors
in design activities and by assigning them a clear, well-recognized role in the
product development process. Among other consequences, all of this could increase their chances regarding possible job placements.
The involvement of ASD people exploits the "from shapes to functions" design
activities, where people interact with a set of shapes and express possible aroused
functions. This choice comes from several reasons. First, shapes are physical entities; therefore, ASD people do not need to use imagination in interpreting them (which
could be a problem for some of them). Second, these design activities allow for
considering and exploiting the meanings and emotions aroused by the interaction
with the products. As highlighted in [5, 6], meanings and emotions aroused during
interaction are fundamental to positively manage product complexity and
resource management during the product development process. Finally, pure
shapes allow ASD people to express their impressions and suggestions more
freely, without those limitations imposed by inner structures, materials, etc.
Neurotypical people will undergo the same activities and will be considered
as controls. The outcomes of the two groups will be analyzed in terms of variety,
quality, frequency and originality.
2 Background
2.1 Autism Spectrum Disorder (ASD)
The Autism Spectrum Disorder (ASD) is an umbrella term that covers heterogeneous, complex and lifelong neurodevelopmental disorders that affect the way a
person communicates and relates to other people and to the world around him/her
[7, 8]. People affected by ASD are characterized by the "triad of impairments" [9].
The social/emotional impairments focus on difficulties in building friendships appropriate for the age, in managing unstructured parts of the day, in predicting the
behavior of other people and in working cooperatively. Language/communication
impairments deal with difficulties in processing and retaining information, in
sustaining a conversation, and in understanding body language (facial expressions
and gestures), jokes and sarcasm, and the differences between literal and interpreted verbal expressions. The flexibility of thought impairments concern difficulties in coping with changes in routine, on which these people are overly dependent, in
imagining objects and concepts, in generalizing information and in managing empathy [9-11].
In recent years, ASD people have started to be involved in design activities.
Frauenberger, Makhaeva and Spiel [4] are developing the "OutsideTheBox" project. Through participatory design activities, they work with ASD children aiming at designing technological products suitable for their own needs and interests.
Malinverni et al. [12] again exploit participatory design activities to develop a kinetic-based game that helps ASD children acquire simple abilities in social interaction. Lowe et al. [7] exploit participatory observations, co-design workshops, interviews and mapping tools to involve adult ASD people in designing
living environments to enhance their everyday life experiences at home.
2.2 "From shapes to functions" design activities
The "from shapes to functions" design activities are based on the generation and
analysis of fashionable shapes [13]. Their generation aims at arousing specific
emotions in the people interacting with them. The analysis of the shapes usually
consists of tests where users interact with these shapes using touch and sight. This
analysis aims at highlighting both the emotions aroused by the shapes and possible
product behaviors, together with the related functions to implement afterwards in
products showing those shapes. Alessi is an example of a company exploiting these design
activities; it produces iconic objects like household appliances, etc. [14].
3 Activities
The research activities define the roadmap that suggests how to involve ASD people in design as effectively as possible. Figure 1 shows this roadmap; each activity
is described in the following.
Goal definition. The aim of exploiting the roadmap is to design a product
belonging to a specific application domain by leveraging the peculiarities of
ASD people. The product will be suitable for both ASD and neurotypical people.
Output definition. The expected outcomes consist of a list of design solutions
generated by elaborating the functions, meanings and emotions aroused during interaction. Variety, quality, frequency and originality are the parameters exploited
to generate the design solutions. Moreover, a comparison with the outcomes generated by a group of neurotypical people (the controls) highlights the different reasoning and understanding of the world of the two types of testers. Concerns also regard the distribution, as well as the presence of counter-posed outcomes inside the
ASD group and between the two groups.
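A minimal sketch of how such outcomes might be classified and compared follows; all names and data are illustrative only, not taken from the study, and the quality parameter, which requires expert rating, is omitted:

```python
from collections import Counter

# Illustrative responses only: (group, shape, topic, content) tuples,
# classified by tester type, shape and topic as described above.
responses = [
    ("ASD", "spiral", "function", "pouring"),
    ("ASD", "spiral", "emotion", "calm"),
    ("ASD", "spiral", "function", "stirring"),
    ("control", "spiral", "function", "pouring"),
    ("control", "spiral", "emotion", "curiosity"),
]

def analyze(responses, group):
    """Frequency of each (shape, topic, content) outcome and the group's variety."""
    outcomes = Counter(
        (shape, topic, content)
        for g, shape, topic, content in responses if g == group
    )
    return {"frequency": outcomes, "variety": len(outcomes)}

def originality(responses, group):
    """Outcomes produced by this group and by no other group."""
    own = {r[1:] for r in responses if r[0] == group}
    others = {r[1:] for r in responses if r[0] != group}
    return own - others
```

Here variety counts the distinct outcomes per group, frequency how often each recurs, and originality isolates the outcomes unique to one group relative to the other, supporting the planned comparison between ASD testers and controls.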
Figure 1. The roadmap for the effective involvement of ASD people in design. The roadmap comprises, in sequence:
- Goal definition: design of a product belonging to a defined application domain, involving ASD people.
- Output definition: list of design solutions (as elaboration of functions, meanings and emotions); four parameters for the analysis.
- Input definition: people to involve; application domain.
- Selection of design activities: participatory design exploiting the "from shapes to functions" design activities.
- Generation of material: set of shapes; simple and structured tests; forms for data collection; guide documents.
- Environment setup: relaxed and familiar environment; few people; video recording equipment.
- Test execution: one tester at a time; execution of five steps; filling of forms.
- Data collection and analysis: classification of data with respect to the tester type, the shape and the topic (function, meaning or emotion); analysis with respect to the four parameters (variety, quality, frequency and originality).
- Drawing conclusions: formulation of design solutions for the specific application domain.
Input definition. There are two important inputs to define: the people to involve
and the application domain where the shapes are considered. Considering the people, two groups are selected, one composed of ASD people and the other of
neurotypical people. The number of participants in each group should be the same
in order to have the same influence on the design solutions. This number depends
on available time and resources, as well as on the expected quality of the results in terms of contents and statistical relevance. To make the activities feasible
and their management easier, ASD people must be able to understand the activities they will be called to perform and to communicate their impressions easily, in
order to reduce the test duration and minimize the intervention of people other
than designers, like parents or psychologists, to solve possible misunderstandings
and/or problems. The age of the testers depends on the application domain. Finally, for a good characterization of both ASD and neurotypical people, preparatory tests like Raven's Standard Progressive Matrices [15] and Trail Making [16]
should be adopted. Considering now the application domain, its definition is needed to guide the shape choice and make the analysis of the outcomes focus on
only the interesting aspects. This avoids considering shapes and generating results that
are not interesting for the specific application.
Selection of design activities. The selection of the design activities depends mainly on the characteristics of the people involved, especially the ASD people. Previous research suggests participatory design as the best testing activity [4, 7, 9]. People are involved in performing tests, and these tests can be exploited in different design phases [2, 4]. The roadmap shows a customized release of participatory design suitable for performing the "from shapes to functions" activities: people undergo the tests to highlight functions starting from fashionable shapes in the concept generation phase of the product development process. This kind of activity lets people express their thoughts freely, because there are no constraints due to inner structures, materials, etc., typical when characteristics of real products are involved. The test must show a clear structure, to help ASD people understand the sequence of the activities and to help the designers lead the process as well as possible at all times. The activities must consist of simple sub-activities where people interact with shapes exploiting sight and touch, answer interview questions and fill in questionnaires. The interaction must be completely free, except for a precise timing marked by the designers. Questions must be short and focused on specific interaction aspects. Verbal and written questions must be suitable for all the people, who could have different ways of communicating their impressions. The voice tone used for the verbal questions must be calm and colloquial, to create a relaxed environment.
Generation of material. The material needed to perform the design activities, the tests in particular, consists of a set of shapes, the documents the testers will use as guides, and the forms the designers and testers will fill in during the data collection. The shape choice constitutes the most important decision to take. Several criteria are proposed to select shapes that exploit the characteristics of the ASD people at their best. First of all, the shapes must be real instead of digital: the testers must have the possibility of getting in physical touch with the shapes, because ASD people could show difficulties in working with imagination [3, 4]. Second, these shapes must be composed of simpler shapes that help ASD people recall past uses, the moments when these uses happened, and the related functions, meanings and emotions. Examples of simple shapes are the ice cream cone, the telephone handset, the door handle, etc. Although these shapes offer a clear and known basis to start reasoning from, they barely limit the ASD people's freedom of thinking; the uncommon links and relationships that ASD people could see among these shapes could suggest different functions and evoke unexpected/unusual meanings and emotions. Third, the dimensions of the shapes must be chosen carefully, because ASD people could find it difficult to map/scale in their mind shapes with different dimensions than expected, and the test results could be heavily affected by this. The same could happen with colors: ASD people might consider colors an important aspect to evaluate, and this can generate noise [17]. For this reason, all shapes must have the same color, in order to minimize the number of variables to take care of. Fourth, ASD people often focus their attention on details instead of on the shape as a whole, a behavior quite different from that of neurotypical people. Introducing details allows designers to keep ASD people focused and interested throughout the test. At the same time, the number of details of each shape must be low, otherwise ASD people receive too many stimuli and could be too stressed to conduct the test in a natural way [9]. Fifth, since the surface finish of the shapes can have a deep impact on the emotions of ASD people, given their higher sensitivity, shapes with rough and/or irregular surface finishes must be avoided [9, 17]. Sixth, ASD people are attracted by symmetry; therefore, the exploitation of symmetrical shapes can be a good way to capture their attention [17]. Finally, designers should propose a low number of shapes, e.g., five at most. More shapes could compromise the quality of the results, because an excessive cognitive workload in terms of attention and stress would be required. Table 1 reports some examples of shapes referring to the application domains where home appliances and stationery are developed. These shapes are built on simpler ones, such as a bowl (shape a), a knob (c) or a needle (d); each of them has from one (a and c) to three details (d), and the surfaces of all of them are smooth and show the same, neutral color.
Table 1. Examples of shapes.
Application domain | Shape 1 | Shape 2
Home appliances    | a       | b
Stationery         | c       | d
[The shape pictures are omitted; a-d refer to the photographed shapes.]
Together with the shapes, five documents must be prepared to perform the tests and to make the information gathering easier. The first document contains the statement the designers will read to the testers before the interaction with the shapes takes place. This statement should be presented in a narrative way [12]. A generic example that can be exploited in different application domains is: "you are going to see some objects. I will ask you to perform some actions with them. In the meantime, please tell me your sensations out loud. Specifically, I would like to know if these objects recall something to you, if you think they could be useful for doing something, and if they arouse particular emotions in you. Are you ready?". The second document will be given to the testers; it describes the activities they are called to perform. This document is especially important for ASD people: since they do not like improvisation and/or confusion, they work better by following written instructions, and they also work better with visual instructions [2, 9, 17]. The third document contains the questions that designers and psychologists will ask the testers during the interaction with the shapes. These questions should resemble the following: "is the object recalling something specific (a place, a moment, another object, etc.) to you?"; "does the object suggest performing specific actions to you (for example, if the object recalled a window handle, it could suggest the action turn to open)?"; "do you think the object could be useful for doing something?"; "are you experiencing specific emotions while interacting with the object?". The fourth document is a form used by the designers to collect the answers of the testers, as well as personal comments about the testers' thinking-out-loud activities. Finally, the fifth document contains questions similar to those in the third one, and the testers are called to fill in this document by themselves. Several empty spaces are present where the testers are free to add information in any format they like (e.g., text, sketches, etc.).
Environment setup. The environment where the design activities will take place must be suitable for ASD people. It should be relaxing [9] and somewhat familiar [2], in order to avoid possible causes of stress like interferences, noise, etc. For example, a room with some games, a desk and a sofa would be suitable for children, because this would replicate the bedroom where they feel safe. For adults, a mock-up of a living room could be the best solution. Very few people should be present during the test execution; one designer who leads the test and a psychologist should be enough. For this reason, and for data collection and archiving, tests must be recorded. This must obviously be done with the testers' consent, but the video recording equipment must be kept out of sight to avoid affecting the testers' stress level.
Test execution. Once the testers have been identified, the application domain defined, and all materials and the environment prepared, the test activities can start. These activities are performed one tester at a time, to avoid the testers influencing each other. The activities should run as follows.
1. The designer introduces the test by reading the first document. Moreover, he/she gives the tester the second document, containing the list of the activities to perform.
2. The designer places the first shape on the table and asks the tester to watch it carefully, without touching it. After a short period (not more than 10 seconds), the designer starts to ask the questions contained in the third document and uses the fourth document to annotate any comments and suggestions the tester expresses spontaneously. Of course, the timing needs to be the same for every tester.
3. After another short period (a bit longer than the previous one, but not more than 30 seconds), the designer invites the tester to touch/manipulate the shape. After the same short period as in activity 2, the designer asks again the questions contained in the third document, and carries on writing in the fourth document the specific tester's comments and suggestions that may arise.
4. After the same short period as in activity 3, the designer takes the shape away and gives the tester the fifth document to fill in. The designer allows some minutes (3 to 5) for performing this task.
5. Once finished, activities 2 to 4 are repeated for all the other shapes.
Data collection and analysis. At the end of the tests, data are collected from the questionnaires, the designers' notes and the recorded videos. Data are classified against the types of testers (ASD vs. neurotypical), the shapes and the three topics of interest (functions, meanings and emotions). Functions, meanings and emotions are then analyzed against four parameters. The first parameter, variety, focuses on the number of functions highlighted for each shape and on the differences among these functions; the same applies to meanings and emotions. The second parameter, quality, refers to the completeness of functions, meanings and emotions: the level of detail, the quantity of information given and the clearness of the verbal and written expressions of the specific tester are all covered by this parameter. The third parameter, frequency, indicates the level of importance of a function, a meaning or an emotion: if a specific shape suggests the same function to many testers, designers should consider this function as intrinsically connected to that shape. Finally, the fourth parameter, originality, highlights the presence of possible innovative functions, meanings and emotions. Functions and meanings completely different from all the others could represent new interpretations of a shape; an unexpected emotion could represent the possibility of attracting new people towards that shape. The suggestions freely expressed by the testers are classified against the shape and the function they are related to, and are exploited in the following activities. At the end, the outcomes of the two groups of testers are compared, in order to highlight overlaps and/or differences in the way the two groups interpret the shapes in terms of functions, meanings and emotions. This integrates the results and helps in generating richer and more complete design solutions.
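As a sketch of how this classification and parameter analysis could be organized, the following Python fragment is a minimal illustration: the record layout, the answers, and the helper names are all hypothetical, and the quality parameter, being a human judgment of completeness, is left to the analysts.

```python
from collections import Counter, defaultdict

# Hypothetical records, as they might be transcribed from the forms:
# (tester_type, shape, topic, answer).
records = [
    ("ASD", "a", "function", "mixing"),
    ("ASD", "a", "function", "pouring"),
    ("neurotypical", "a", "function", "mixing"),
    ("neurotypical", "a", "function", "mixing"),
    ("ASD", "a", "emotion", "curiosity"),
]

# Classification against tester type, shape and topic.
groups = defaultdict(list)
for tester, shape, topic, answer in records:
    groups[(tester, shape, topic)].append(answer)

def variety(answers):
    """Number of distinct functions/meanings/emotions for a shape."""
    return len(set(answers))

def frequency(answers):
    """How often each answer recurs: high counts mark functions
    intrinsically connected to the shape."""
    return Counter(answers)

def originality(group_a, group_b):
    """Answers given by one group only: candidate new interpretations."""
    return set(group_a) - set(group_b)

asd = groups[("ASD", "a", "function")]
ctrl = groups[("neurotypical", "a", "function")]
print(variety(asd))            # distinct functions in the ASD group
print(frequency(ctrl))         # recurring functions in the control group
print(originality(asd, ctrl))  # functions the controls never mentioned
```

The same grouping would be repeated per shape and per topic, and the comparison between the two groups falls out of the `originality` set difference in both directions.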
Drawing conclusions. Thanks to the previous analysis, the most important and useful meanings, emotions and functions are highlighted. These meanings and emotions enrich the contents of the functions they belong to, and so increase the chances of those functions being selected. After that, the enriched functions are elaborated to define the design solutions. Obviously, these solutions are formulated for the specific domain; nevertheless, once generalized, they could be exploited in other application domains as well.
4 Discussion
The proposed roadmap shows an ordered list of activities to perform. It is generic and flexible enough to be adapted to every application domain and to accommodate new parameters for a finer analysis of the outcomes. Moreover, the roadmap is designed to best support the characteristics and needs of ASD people, in order to maximize the achievement of unforeseen and innovative design solutions; this mainly regards the shape selection and the test execution. The roadmap has already received a first positive judgment both from the psychologists who helped in its generation and from other professionals working in the ASD field. This research could have interesting theoretical implications. For example, the TRIZ theory of systematic innovation [18] holds that innovation can rely on searching for solutions in domains completely different from the ones the designers are used to; however, all of this is meant to happen by exploiting the same reasoning mechanisms. Here, on the contrary, innovative design solutions are searched for by exploiting different reasoning mechanisms; different application domains are eventually considered only afterwards.
Having described the positive aspects of the research, some drawbacks need to be pointed out as well. First of all, a real application in the field, to confirm the correctness of the roadmap and support the effectiveness of its results, is under way, but its results are still missing. Moreover, comparisons with similar existing methods have not been performed up to now. Finally, the last two activities of the roadmap are completely left to the experience and knowledge of the designers and psychologists, because no tools or help are provided. If the designers are inexperienced in dealing with ASD people, and/or the psychologists with this particular kind of design activity, the design solutions could be incomplete or even wrong.
5 Conclusions
Some years ago, design activities started to focus on the generation of products for disabled people. This research aims at giving some indications on how to involve disabled people, in particular people affected by Autism Spectrum Disorder (ASD), in order to let them directly design products both for neurotypical and disabled people. The result is a roadmap, composed of several activities, that exploits tests involving ASD and neurotypical people. These tests are based on the interaction with specific shapes, and aim at collecting and analyzing pieces of information about the functions, meanings and emotions those shapes arouse in the testers. All these data should lead to the generation of innovative and unforeseen design solutions to implement in new products. These results should allow assigning a recognized role to ASD people as active members of design teams; as a consequence, this could also have implications regarding possible job placement. The current release of the roadmap has already been positively judged by psychologists and experts in the ASD field; nevertheless, it needs further validation to be effectively exploited in real application domains. Moreover, the structure of the roadmap must be checked against existing similar design activities. The last two activities of the roadmap should exploit help or, even better, automatic tools to make their execution easier. Finally, the interaction with the shapes should involve sensory elements other than touch and sight, like sounds, tastes, colors, materials, etc., as well as their combinations. In this way, the roadmap would become even more generic and applicable in a wider set of application domains.
Acknowledgements. The authors would like to thank Prof. Andrea Marini for his valuable help in introducing them to the field of Autism Spectrum Disorder from the psychological point of view.
References
1. Casas R., Marín R. B., Robinet A., Delgado A. R., Yarza A. R., McGinn J., Picking R. and Grout V. User Modelling in Ambient Intelligence for Elderly and Disabled People. Computers Helping People with Special Needs, 2008, Lecture Notes in Computer Science, vol. 5105, 114–122.
2. Friedman, M.G. and Bryen D.N. Web accessibility design recommendations for people with
cognitive disabilities. Technology and Disability, 2007, 19, 205–212.
3. Dawe M. Desperately Seeking Simplicity: How Young Adults with Cognitive Disabilities and
Their Families Adopt Assistive Technologies. In Conference on Human Factors in Computing Systems, CHI2006, Montreal, Canada, April 2006.
4. Frauenberger C., Makhaeva J. and Spiel K. Designing smart objects with autistic children. Four design exposés. In Conference on Human-Computer Interaction, CHI 2016, San Jose, CA, USA, May 2016.
5. von Saucken C., Michailidou I. and Lindemann U. How to design experiences: macro UX versus micro UX approach. Design, User Experience, and Usability. Web, Mobile, and Product Design, 2013, 8015, 130–139.
6. Desmet P.M.A. and Hekkert P. Framework of product experience. International Journal of Design, 2007, 1(1), 57-66.
7. Lowe C., Gaudion K., McGinley C. and Kew A. Designing living environments with adults with autism. Tizard Learning Disability Review, 2014, 19(2), 63–72.
8. Baron-Cohen S. Facts: Autism and Asperger syndrome. 2nd ed., 2008 (Oxford Univ. Press).
9. Daley L., Lawson S. and van der Zee E. Asperger Syndrome and Mobile Phone Behavior. In
International Conference on Human-Computer Interaction, HCI2009, San Diego, CA, USA,
July 2009, pp. 344-352.
10. Frauenberger C., Good J., Alcorn A. and Pain H. Supporting the design contributions of children with autism spectrum conditions. In International Conference on Interaction Design and
Children, IDC'12, Bremen, Germany, June 2012, pp. 134-143.
11. American Psychiatric Association. Diagnostic and statistical manual of mental disorders (5th
ed.) DSM-5, 2013, Washington, D.C., USA.
12. Malinverni L., Mora-Guiard J., Padillo V., Mairena M. A., Hervás A. and Pares N. Participatory Design Strategies to Enhance the Creative Contribution of Children with Special Needs.
In International Conference on Interaction Design and Children, IDC'14, Aarhus, Denmark,
June 2014, pp. 85-94.
13. Filippi, S. and Barattin, D. Definition of the form-based design approach and description of it
using the FBS framework. In International Conference on Engineering Design, ICED2015,
Milano, Italy, July 2015.
14. Alessi. The Italian factory of industrial design, 2016. Available online at www.alessi.com/en.
Retrieved 12/04/2016.
15. Hayashi M., Kato M., Igarashi K. and Kashima H. Superior fluid intelligence in children with
Asperger’s disorder. Brain and Cognition, 2008, 66, 306–310.
16. Reitan R. M. and Wolfson D. The Trail Making Test as an initial screening procedure for
neuropsychological impairment in older children. Archives of Clinical Neuropsychology,
2004, 19, 281–288.
17. Attwood, T. The Complete Guide to Asperger's Syndrome, 2006 (Jessica Kingsley Publishers).
18. Altshuller G. and Rodman S. The innovation algorithm: TRIZ, systematic innovation and
technical creativity, 1999 (Technical Innovation Center, Inc, Worcester, MA).
Part III
Engineering Methods in Medicine
In recent years, engineering methods have been spreading more and more in the field of medicine. The research on new engineering techniques and tools for medical applications has become a very current topic and, consequently, the new figure of the biomedical engineer has become one of the fastest growing careers. The main goal of biomedical engineers is to focus on the convergence of disease, technology and sciences by applying an engineering approach to medicine; for these reasons, they work at the intersection of engineering, life sciences and healthcare. Biomedical engineers, in fact, take principles from applied science, like mechanical and computer engineering, and from the physical sciences, and apply them to medicine. The creation and application of new engineering technologies has modified, over the last years, the classical medical approaches by making the management of various disorders faster, less expensive and safer, with fewer side effects.
The papers presented in this chapter represent an updated report on biomedical engineering research. The main advances in the use of engineering methods (like imaging, numerical simulations, reverse engineering, CAD modelling, etc.) in medicine are reported. In most cases, very interesting experimental case studies concerning real problems, with a substantial degree of technological innovation, are presented. All the contributions demonstrate that combining an engineering approach with medical knowledge can help in the diagnosis, treatment and prevention of the major diseases affecting our society. This chapter is thus a very interesting tool for obtaining an understanding of the newest techniques and research in medical engineering.
Samuel Gomes – UTBM
Tommaso Ingrassia - Univ. Palermo
Rikardo Minguez - Univ. Basque Country
Patient-specific 3D modelling of heart and
cardiac structures workflow: an overview of
methodologies
Monica CARFAGNI1* and Francesca UCCHEDDU1
1 Department of Industrial Engineering, via di Santa Marta, 3, 50139 Firenze (Italy)
* Corresponding author. Tel.: +39-055-2758731; fax: +39-055-2758755. E-mail address:
monica.carfagni@unifi.it
Abstract Cardiovascular diagnosis, surgical planning and intervention are among the fields most affected by recent developments in 3D acquisition, modelling and rapid prototyping techniques. In cases of complex heart disease, to provide an accurate planning of the intervention and to support surgical planning and intervention, an increasing number of hospitals make use of physical 3D models of the cardiac structure, including the heart, obtained using additive manufacturing starting from the 3D model retrieved from medical imagery. The present work aims at providing an overview of the most recent approaches and methodologies for creating physical prototypes of patient-specific heart and cardiac structures, with particular reference to the most critical phases, such as segmentation, and to the aspects concerning the conversion of digital models into physical replicas through rapid prototyping techniques. First, recent techniques for image enhancement, used to highlight the anatomical structures of interest, are presented together with the current state of the art of semi-automatic image segmentation. Then, the most suitable techniques for prototyping the retrieved 3D model are investigated, so as to draft some hints for creating prototypes useful for planning the medical intervention.
Keywords: rapid prototyping; 3D modelling; 3D printing; medical imagery;
heart; cardiovascular diseases; surgical planning.
1 Introduction
The care and management of adult patients with congenital or acquired structural heart disease represents one of the most relevant areas of research in cardiology, as documented by the rapid growth of studies related to this vital area. Recent advancements in imaging technology, also drawn from engineering [1-3], have continued to raise awareness of hemodynamically significant intra-cardiac shunt
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_39
lesions in adults. Given the widely ranging complexity of possible structural heart defects, non-invasive imaging has become paramount in their treatment. Although both two-dimensional (2D) imaging modalities, such as echocardiography, and three-dimensional (3D) ones, such as computed tomography and magnetic resonance imaging (MRI), are undeniably valuable in the evaluation of adult patients with structural heart disease, these methods are still constrained by their overall lack of realism and by the inability to be "physically manipulated". Thus, such techniques remain limited in their ability to effectively represent the complex three-dimensional (3D) shape of the heart and its peripheral structures.
With the aim of providing an accurate planning of the intervention, an increasing number of hospitals [4] make use of physical 3D models of the cardiac structure, obtained using additive manufacturing starting from the 3D model retrieved from medical imagery. In fact, the advent of 3D printing technology has provided a more advanced tool, an intuitive and tangible 3D fabricated model that goes beyond a simple 3D-shaded visualization on a flat screen. For its use in medical fields, the most important of the many advantages of 3D printing technology are both the "zero lead time" between design and final production of accurate models and the possibility of creating specific models resembling the actual structure of the patient's heart: in the clinical setting, the possibility of one-stop manufacturing from medical imaging to 3D printing has accelerated the recent medical trend towards "personalized" or "patient-specific" treatment.
According to recent literature, the most effective way of creating 3D models starting from 2D medical imaging is based on the virtuous process cycle (starting from 2D and 3D image acquisition and providing, as output, a model of the patient's heart) shown in Figure 1.
Fig.1. Patient-specific 3D modelling and printing workflow
Such an innovative process involves a number of steps, starting from medical imagery with particular reference to (but not exclusively) computed tomography (CT), multi-slice CT (MCT) and magnetic resonance imaging (MRI) [5-8]. The acquired images are then processed in order to segment the regions of interest, i.e. heart chambers, valves, aorta, coronary vessels, etc. These segmented areas are converted into 3D models, using tools like volume rendering or surface reconstruction procedures. Due to the increasing number of methods complying with the above-mentioned process, the main aim of the present work is to provide an overview of methodologies dealing with patient-specific 3D modelling of heart and cardiac structures. First, the main medical imaging systems for acquiring 2D and 3D data of the heart structure are introduced. Then, the most recent algorithms for image enhancement and restoration are explored, and a brief overview of segmentation and classification algorithms is given. Section 5 briefly overviews the most promising techniques for the 3D heart model reconstruction process. Finally, in Section 6, some considerations regarding the 3D printing of heart structures are drafted.
2 Medical Imaging
The common types of medical imaging used for cardiac structure and heart analysis include the following:
(i) X-ray (e.g. radiography, computed tomography (CT)). Especially thanks to recent advances, CT can provide detailed anatomical information on chambers, vessels, coronary arteries, and coronary calcium scoring. In cardiac CT, there are two imaging procedures: (1) coronary calcium scoring with non-contrast CT and (2) non-invasive imaging of coronary arteries with contrast-enhanced CT. Typically, non-contrast CT imaging exploits the natural density of tissues. As a result, materials with different attenuation values, such as air, calcium, fat, and soft tissues, can be easily distinguished. Contrast-enhanced CT is used for imaging coronary arteries with a contrast material, such as a bolus or continuous infusion of a high concentration of iodinated contrast material.
(ii) Magnetic resonance imaging (MRI). An imaging technique based on detecting different tissue characteristics by varying the number and sequence of pulsed radio frequency fields, taking advantage of the magnetic relaxation properties of different tissues [9]. MRI measures the density of a specific nucleus, normally hydrogen, which is magnetic and largely present in the human body, including the heart [10], except for bone structures.
(iii) Ultrasound. For cardiac usage, ultrasound is applied by means of an echocardiogram, able to provide information on the four chambers of the heart, the heart valves and the walls of the heart, the blood vessels entering and leaving the heart, and the pericardium.
(iv) Nuclear imaging (e.g., positron emission tomography, PET). A PET scan is a very accurate way to diagnose coronary artery disease and detect areas of low blood flow in the heart. PET can also identify dead tissue, as well as injured tissue that is still living and functioning.
3 Image Enhancement and Restoration
Digital medical imagery often suffers from different kinds of degradation, such as artefacts due to patient motion or interferences, poor contrast, noise and blur (see Figure 2). To improve the quality and visual appearance of medical images, two main procedures are usually adopted, namely image restoration and image enhancement [11].
Image restoration algorithms primarily aim at reducing the blur and noise in the processed image, naturally related to and introduced by the data acquisition process. Denoising methods require estimating and modelling the blur and noise that affect the image, which depend on a number of factors, like the capturing instruments, the transmission media, image quantization, discrete sources of radiation, etc. For example, standard digital images are assumed to have additive random noise, which is modelled as Gaussian; speckle noise is observed in ultrasound images, whereas Rician noise affects MRI images [12].
Fig.2. Example of a cardiac CT.
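As a minimal illustration of the restoration step described above (not taken from any surveyed method; the synthetic image, the noise level and the filter width are all arbitrary assumptions), a Gaussian smoothing filter applied to an image corrupted by additive Gaussian noise reduces the error with respect to the clean image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic 64x64 "slice": a bright disc on a dark background stands in
# for an anatomical structure; grey values are arbitrary.
y, x = np.mgrid[:64, :64]
clean = np.where((x - 32) ** 2 + (y - 32) ** 2 < 200, 200.0, 50.0)

# Additive Gaussian noise, the model usually assumed for standard images;
# a median filter would be the customary choice for speckle instead.
noisy = clean + rng.normal(0.0, 20.0, clean.shape)

# A small Gaussian kernel trades a little edge blur for a large noise
# reduction.
restored = gaussian_filter(noisy, sigma=1.5)

err_noisy = np.mean((noisy - clean) ** 2)
err_restored = np.mean((restored - clean) ** 2)
print(err_restored < err_noisy)  # restoration lowers the mean squared error
```

In practice the noise model (Gaussian, speckle, Rician) is estimated from the data and the filter is chosen accordingly, as the cited methods do.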
Image enhancement techniques are mainly devoted to contrast enhancement, in order to extract, or accentuate, certain image features so as to improve the understanding of the information content and obtain an image more suitable than the original for automated image processing (e.g. for highlighting structures such as tissues and organs). In the literature, methods such as range compression, contrast stretching and histogram equalization with gamma correction [13] are usually adopted to enhance the quality of medical images. In general, despite the effectiveness of each single approach, a combination of different methods usually achieves the most effective image enhancement result [14].
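Histogram equalization, one of the enhancement methods cited above, can be sketched in a few lines of NumPy. This is a simplified global version for 8-bit images; clinical pipelines would typically prefer adaptive variants such as CLAHE.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Map grey levels through the image's cumulative distribution so the
    output histogram is approximately flat (contrast enhancement)."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalise to [0, 1]
    return np.floor(cdf[img] * (levels - 1)).astype(np.uint8)

# A low-contrast image: grey values squeezed into the narrow band 100..120.
rng = np.random.default_rng(1)
low_contrast = rng.integers(100, 121, size=(64, 64)).astype(np.uint8)

enhanced = equalize_histogram(low_contrast)
print(low_contrast.max() - low_contrast.min())  # narrow original range
print(enhanced.max() - enhanced.min())          # stretched after equalization
```

Gamma correction, when combined with equalization as in [13], would simply raise the normalised cumulative distribution to a chosen power before the final mapping.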
4 Segmentation and Classification
Segmentation is the process of dividing an image into regions with similar
properties such as grey level, colour, texture, brightness, and contrast. In medical
imagery, the role of segmentation consists in identifying and subdividing different
anatomical structures or regions of interest (ROI) in the images. As result of the
segmentation task, the pixels in the image are partitioned in non-overlapping regions, belonging to the same tissue class. Disconnected regions can belong to the
same class, whose number is usually decided according to a prior knowledge of
the anatomy. Some approaches [15] adopt texture content to perform image segmentation and classification: the aim of texture based segmentation method is to
Patient-specific 3D modelling of heart …
391
subdivide the image into regions having different texture properties, while in classification the aim is to classify regions which have already been segmented. A texture may be fine, coarse, smooth, or grained, depending upon its tone and structure, where tone is based on the pixel intensity properties of the primitives, while structure is the spatial relationship between primitives [16].
Automatic segmentation of medical images is a valuable tool for performing a tedious task with the aim of making it faster and, ideally, more robust than manual procedures. However, it is a difficult task, as medical images are complex in nature and often affected by intrinsic issues such as:
• partial volume effects, i.e. artefacts occurring when different tissue types mix up together in a single pixel, resulting in non-sharp boundaries. Partial-volume effects are frequent in CT and MRI, where the resolution is not isotropic and, in many cases, is quite poor along one axis of the image (usually the Z or longitudinal axis running along the patient body);
• intensity inhomogeneity of a single tissue class that varies gradually in the image, producing a shading effect;
• presence of artefacts;
• similarity of grey values for different tissues.
Many different approaches have been developed for automatic image segmentation, which is still an active area of research. Classifications of existing segmentation methods have been attempted by several authors (e.g. [17]). As in other image analysis fields, in medical image segmentation automatic methods are classified as supervised and unsupervised, where the main difference resides in the operator interaction required by the former throughout the segmentation process. Methods that identify regions of interest by labelling all pixels/voxels in the images/volume are known as volume identification methods. By contrast, approaches that recognise the boundaries of the different regions are called boundary identification methods [18]. Low-level techniques usually rely on simple criteria based on grey intensity values, such as thresholding, region growing, edge detection, etc. More complex approaches introduce uncertainty models and optimization methods, like statistical pattern recognition based on Markov Random Fields [19], deformable models [20], graph search [21], artificial neural networks [22], etc. Finally, the most advanced methods may incorporate higher-level knowledge, such as a-priori information, expert-defined rules, and models. Methods like atlas-based segmentation [23] and deformable models belong to this last group. For patient-specific applications in surgical planning, a fully automatic and accurate segmentation approach would be desirable to make the process fast and reliable. Unfortunately, anatomical variability and intrinsic image issues limit the reliability of fully automatic approaches, so at the end of the segmentation process operator interaction is still required for error correction. Interactive segmentation methods, employing for example manual segmentation in a small set of slices and automatic classification of the remaining volume using a patch-based approach [24], provide promising results and thus seem to open future research in this field.
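As an example of the low-level, grey-intensity-based techniques mentioned above, the following sketch implements Otsu's method, a classic automatic threshold selection that maximizes the between-class variance of the grey-level histogram. It is a minimal illustration of generic thresholding, not one of the cited approaches.

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level that maximizes between-class variance (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0                    # weight and intensity sum of class 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]                # pixels at or below t (background)
        if w0 == 0 or w0 == n:       # one class empty: variance undefined
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0               # mean of background class
        m1 = (total_sum - sum0) / (n - w0)   # mean of foreground class
        var = w0 * (n - w0) * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal intensity distribution (e.g. dark background vs. bright tissue) the returned threshold falls between the two modes, so pixels can then be labelled by a simple comparison.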
392
M. Carfagni and F. Uccheddu
5 3D heart model reconstruction
After segmentation, a surface model should be generated by using, for instance, the marching cubes method [25] or other 3D contour extraction algorithms [26]. The resultant surface can be used as the starting point either for the generation of higher-order representations, such as non-uniform rational B-spline (NURBS) surfaces, or for mesh improvement using, for example, mesh growing methods [27, 28], Delaunay meshing techniques [29], the Poisson surface reconstruction method [30] or other voxel-based methods [31, 32].
However, the retrieved 3D model is often not directly suitable for 3D printing, for reasons such as an excessive number of mesh elements and/or an incomplete topological structure. Therefore, topological correction, decimation, Laplacian smoothing, and local smoothing [33, 34] are needed to create a 3D model ready for 3D printing. In general, the accuracy of the printed object depends on the combination of the accuracy of the medical images (whose slice thickness should be as small as possible), an imaging process appropriate for 3D modelling, and the 3D printing accuracy of the system.
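Of the clean-up steps just listed, Laplacian smoothing is the simplest to sketch: each vertex is moved a fraction lam toward the average of its neighbours, for a few iterations. The sketch below is an illustrative minimal version on a toy vertex/adjacency representation, not the implementation of the cited works.

```python
def laplacian_smooth(verts, neighbors, lam=0.5, iters=10):
    """Move each vertex toward the average of its neighbours by a factor
    lam, repeated iters times; a basic mesh-smoothing pass."""
    for _ in range(iters):
        new = []
        for i, (x, y, z) in enumerate(verts):
            nb = neighbors[i]
            if not nb:                       # vertex with no listed neighbours: keep fixed
                new.append((x, y, z))
                continue
            ax = sum(verts[j][0] for j in nb) / len(nb)
            ay = sum(verts[j][1] for j in nb) / len(nb)
            az = sum(verts[j][2] for j in nb) / len(nb)
            new.append((x + lam * (ax - x),
                        y + lam * (ay - y),
                        z + lam * (az - z)))
        verts = new
    return verts
```

A "spike" vertex sitting far above its three neighbours is pulled back toward their plane after a few iterations, which is exactly the kind of noise artefact this step removes before printing.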
Fig. 3. Orthogonal sectioning of a 3D CT volume image through MPR. Single orthogonal plane views: a) axial or XY plane, dividing the body into superior-inferior parts; b) sagittal or XZ plane, dividing the body into left-right parts; c) coronal or YZ plane, dividing the body into anterior-posterior parts; d) orthogonal planes visualized in the cubic volume.
One major challenge faced in creating physical models lies in the disconnection between the digital 3D surface models and the original 2D images. Currently available industry-specific image-processing software remains limited in its ability to generate digital 3D models that are directly applicable to rapid prototyping. As a result, true integration of the raw 2D image data into the generated digital 3D surface models is lost. The 3D post-processing (i.e., correction of errant points and elimination of various artefacts within the digital 3D surface model) therefore relies heavily on the expert clinical and anatomic knowledge of the graphic editor,
especially because a wide array of structural heart anomalies that significantly deviate from conventional cardiovascular anatomy may be present.
6 Additive technologies and 3D Printing
The most common additive technologies used in medicine are selective laser
sintering, fused deposition modelling, multijet modelling/3D printing, and stereolithography. Selective laser sintering (3-D Systems Inc., Rock Hill, SC) uses a
high-power laser to fuse small particles of plastic, metal, or ceramic powders into
a 3D object [35]. Selective laser sintering has the ability to utilize a variety of
thermoplastic powders and has a high geometric accuracy but is generally higher
in cost than other additive methods.
In fused deposition modeling (Stratasys Inc, Eden Prairie, Minn), a plastic
filament (typically acrylonitrile butadiene styrene polymer) is forced through a
heated extrusion nozzle that melts the filament and deposits a layer of material
that hardens immediately on extrusion [36]. A separate water-soluble material is
used for making temporary support structures while the manufacturing is in progress. The process is repeated layer by layer until the model is complete.
Multijet modeling or 3D printing (Z Corporation, Burlington, Mass) essentially
works like a normal ink-jet printer but in 3D space. In this process, layers of fine
powder (either plaster or resins) are selectively bonded by printing a water-based
adhesive from the ink-jet printhead in the shape of each cross section as determined by the computer-aided design file. Each layer quickly hardens, and the
process is repeated until the model is complete [37].
In stereolithography, models are built through layer-by-layer polymerization of a photosensitive resin. A computer-controlled laser generates an ultraviolet beam that draws on the surface of a pool of resin, stimulating the instantaneous local polymerization of the liquid resin in the outlined pattern. A movable platform lowers the newly formed layer, thereby exposing a new layer of photosensitive resin, and the process is repeated until the model is complete.
Depending on their intended application (i.e. education, catheter navigation, device sizing and testing, and so on), physical models may be printed in multiple materials using a variety of 3D printing technologies, each with its own collection of benefits and shortcomings. For example, multijet modelling technology can be used to generate full-colour models to highlight anomalous structures or specific regions of interest. Printing is fast (approximately 6-7 hours per model) and cost-effective. However, although flexible models may be prototyped with multijet modelling technology, the material properties often fail to accurately mimic true tissue properties. PolyJet Matrix printing technology offers the ability to print physical models in materials that more closely resemble the properties of native tissue, and thus represents a new direction in rapid prototyping with its ability to print different materials simultaneously. This unique technology will
allow most physical models to be printed in durable materials (e.g., plastic),
whereas specified segments (e.g., interatrial septum, septal defects, vascular structures, and so on) are printed in less durable, but more lifelike, materials (e.g. rubber polymers) for more realistic manipulation.
7 Discussion and conclusions
Fig. 4. Sample full-colour physical models printed with multijet modelling technology (left) and with PolyJet Matrix technology (right).
With the development of inexpensive 3D printers, 3D-printable multi-materials, and 3D medical imaging modalities, 3D printing medical applications, for heart diseases among others, have come into the spotlight. Thanks to the availability of transparent, full-coloured, and flexible multi-materials, 3D-printed objects can be more realistic, mimicking the properties of the real body, i.e. reproducing not only hard tissue alone but also hard and soft tissue together. Several major limitations, such as those associated with the technology and with the time and cost of manufacturing 3D phantoms, remain to be overcome. Development and optimization of the entire procedure, from image acquisition to 3D printing fabrication, are required for personalized treatment, even in emergency situations. In addition, to produce an effective 3D-printed object, multidisciplinary knowledge of the entire 3D printing process chain is needed; namely, image acquisition using a protocol suitable for 3D modelling, post-processing of the medical images to generate a 3D reconstructed model, manufacturing with an appropriate 3D printing technique, and post-processing of the 3D-printed object to adapt it for medical use.
References
1. Liverani, A., Leali, F., Pellicciari, M., Real-time 3D features reconstruction through monocular vision, International Journal on Interactive Design and Manufacturing, Volume 4, Issue
2, May 2010, Pages 103-112.
2. Furferi, R., Governi, L. Machine vision tool for real-time detection of defects on textile raw
fabrics (2008) Journal of the Textile Institute, 99 (1), pp. 57-66.
3. Renzi, C., Leali, F., Cavazzuti, M., Andrisano, A.O., A review on artificial intelligence applications to the optimal design of dedicated and reconfigurable manufacturing systems International Journal of Advanced Manufacturing Technology, Volume 72, Issue 1-4, April 2014,
Pages 403-418
4. Itagaki, Michael W. “Using 3D Printed Models for Planning and Guidance during Endovascular Intervention: A Technical Advance.” Diagnostic and Interventional Radiology 21.4
(2015): 338–341. PMC. Web. 4 Apr. 2016.
5. H. Zhang et al., “4-D cardiac MR image analysis: left and right ventricular morphology and
function,” IEEE Trans. Med. Imag. 29(2), 350–364 (2010).
6. Wu, Jia, Marc A. Simon, and John C. Brigham. "A comparative analysis of global shape analysis methods for the assessment of the human right ventricle." Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization ahead-of-print (2014): 1-17.
7. Punithakumar, Kumaradevan, et al. "Right ventricular segmentation in cardiac MRI with moving mesh correspondences." Computerized Medical Imaging and Graphics 43 (2015): 15-25.
8. Cappetti, N., Naddeo, A., Naddeo, F., Solitro, G.F., 2015, Finite elements/Taguchi method
based procedure for the identification of the geometrical parameters significantly affecting
the biomechanical behavior of a lumbar disc, Computer Methods in Biomechanics and Biomedical Engineering, article in press, DOI: 10.1080/10255842.2015.1128529
9. Rohrer, M., Bauer, H., Mintorovitch, J., Requardt, M., & Weinmann, H. J. (2005). Comparison of magnetic properties of MRI contrast media solutions at different magnetic field
strengths. Investigative radiology, 40(11), 715-724.
10. Kuppusamy, P., & Zweier, J. L. (1996). A forward-subtraction procedure for removing hyperfine artifacts in electron paramagnetic resonance imaging. Magnetic resonance in medicine, 35(3), 316-322.
11. Hill, D. L., Batchelor, P. G., Holden, M., & Hawkes, D. J. (2001). Medical image registration. Physics in medicine and biology, 46(3), R1.
12. Motwani, M. C., Gadiya, M. C., Motwani, R. C., & Harris, F. C. (2004, September). Survey
of image denoising techniques. In Proceedings of GSPX (pp. 27-30).
13. Draa, A., Benayad, Z., & Djenna, F. Z. (2015). An opposition-based firefly algorithm for
medical image contrast enhancement. International Journal of Information and Communication Technology, 7(4-5), 385-405.
14. Maini, Raman, and Himanshu Aggarwal. "A comprehensive review of image enhancement
techniques." arXiv preprint arXiv:1003.4053 (2010).
15. Glatard, Tristan, Johan Montagnat, and Isabelle E. Magnin. "Texture based medical image
indexing and retrieval: application to cardiac imaging." Proceedings of the 6th ACM SIGMM
international workshop on Multimedia information retrieval. ACM, 2004.
16. Skorton, D. J., Collins, S. M., Nichols, J. A. M. E. S., Pandian, N. G., Bean, J. A., & Kerber,
R. E. (1983). Quantitative texture analysis in two-dimensional echocardiography: application
to the diagnosis of experimental myocardial contusion. Circulation, 68(1), 217-223.
17. Pham, Dzung L., Chenyang Xu, and Jerry L. Prince. "Current methods in medical image
segmentation 1." Annual review of biomedical engineering 2.1 (2000): 315-337.
18. Withey, Daniel J., and Zoltan J. Koles. "A review of medical image segmentation: methods
and available software." International Journal of Bioelectromagnetism 10.3 (2008): 125-148.
19. Zhang, Y., Brady, M., & Smith, S. (2001). Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. Medical Imaging, IEEE Transactions on, 20(1), 45-57.
20. Nealen, A., Müller, M., Keiser, R., Boxerman, E., & Carlson, M. (2006, December). Physically based deformable models in computer graphics. In Computer graphics forum (Vol. 25,
No. 4, pp. 809-836). Blackwell Publishing Ltd.
21. Schenk, Andrea, Guido Prause, and Heinz-Otto Peitgen. "Efficient semiautomatic segmentation of 3D objects in medical images." Medical Image Computing and Computer-Assisted Intervention–MICCAI 2000. Springer Berlin Heidelberg, 2000.
22. Furferi, R., Governi, L., Volpe, Y. Modelling and simulation of an innovative fabric coating
process using artificial neural networks (2012) Textile Research Journal, 82 (12), pp. 1282-1294.
23. Išgum, Ivana, et al. "Multi-atlas-based segmentation with local decision fusion—application
to cardiac and aortic segmentation in CT scans." Medical Imaging, IEEE Transactions on
28.7 (2009): 1000-1010.
24. Coupé, P., Manjón, J. V., Fonov, V., Pruessner, J., Robles, M., & Collins, D. L. (2011).
Patch-based segmentation using expert priors: Application to hippocampus and ventricle
segmentation. NeuroImage, 54(2), 940-954.
25. Lorensen, W. E., & Cline, H. E. (1987, August). Marching cubes: A high resolution 3D surface construction algorithm. In ACM SIGGRAPH Computer Graphics (Vol. 21, No. 4, pp. 163-169). ACM.
26. Han, Chia Y., David T. Porembka, and Kwun-Nan Lin. "Method for automatic contour extraction of a cardiac image." U.S. Patent No. 5,457,754. 10 Oct. 1995.
27. Di Angelo, L., Di Stefano, P. & Giaccari, L. “A new mesh-growing algorithm for fast surface
reconstruction”. Computer – Aided Design, vol. 43 (6), 2011, p. 639-650.
28. Di Angelo, L., Di Stefano, P. & Giaccari, L. “A Fast Mesh-Growing Algorithm For Manifold
Surface Reconstruction”. Computer – Aided Des. and Applic., vol. 10 (2), 2013, p. 197-220.
29. Young, P. G., Beresford-West, T. B. H., Coward, S. R. L., Notarberardino, B., Walker, B., &
Abdul-Aziz, A. (2008). An efficient approach to converting three-dimensional image data
into highly accurate computational models. Philosophical Transactions of the Royal Society
of London A: Mathematical, Physical and Engineering Sciences, 366(1878), 3155-3173.
30. Lim, S. P., & Haron, H. (2014). Surface reconstruction techniques: a review. Artificial Intelligence Review, 42(1), 59-78.
31. Furferi, R., Governi, L., Palai, M., Volpe, Y. From unordered point cloud to weighted B-spline - A novel PCA-based method (2011) Applications of Mathematics and Computer Engineering - American Conference on Applied Mathematics, AMERICAN-MATH'11, 5th
WSEAS International Conference on Computer Engineering and Applications, CEA'11, pp.
146-151.
32. Governi, L., Furferi, R., Puggelli, L., Volpe, Y. Improving surface reconstruction in shape
from shading using easy-to-set boundary conditions (2013) International Journal of Computational Vision and Robotics, 3 (3), pp. 225-247.
33. Furferi, R., Governi, L., Palai, M., Volpe, Y. Multiple Incident Splines (MISs) algorithm for
topological reconstruction of 2D unordered point clouds (2011) International Journal of
Mathematics and Computers in Simulation, 5 (2), pp. 171-179.
34. Volpe, Y., Furferi, R., Governi, L., Tennirelli, G. Computer-based methodologies for semiautomatic 3D model generation from paintings. (2014) International Journal of Computer
Aided Engineering and Technology, 6 (1), pp. 88-112.
35. Di Angelo, L., Di Stefano, P. “A new method for the automatic identification of the dimensional features of vertebrae”. Comp. Meth. and Progr. in Biom., vol. 121 (1), 2015, pp. 36-48.
36. Vandenbroucke, B., & Kruth, J. P. (2007). Selective laser melting of biocompatible metals
for rapid manufacturing of medical parts. Rapid Prototyping Journal, 13(4), 196-203.
37. Mironov, V., Boland, T., Trusk, T., Forgacs, G., & Markwald, R. R. (2003). Organ printing:
computer-aided jet-based 3D tissue engineering. TRENDS in Biotechnology, 21(4), 157-161.
A new method to capture the jaw movement
Lander BARRENETXEA1, Eneko SOLABERRIETA1, Mikel ITURRATE1
and Jokin GOROZIKA1
1 Department of Graphic Design and Engineering Projects, Faculty of Engineering, University
of the Basque Country UPV/EHU, Urkixo zumarkalea z/g, 48013 Bilbao, Spain
* Corresponding author. Tel.: +34-94-601-4184; fax: +34-94-601-4199. E-mail address:
lander.barrenetxea@ehu.eus
Abstract In traditional dentistry, orthodontics and maxillo-facial surgery, articulators are mainly used to simulate dental occlusion. Dental implants and conditions such as functional occlusion require instrumentation for planning prior to surgery. There are various mechanical articulators on the market; however, most of them only simulate the rotation of the jaw about an axis running through the virtual condyles, whereas the real movement includes translation as well as rotation and differs from one patient to another. Surgeons and dentists require a comprehensive simulation system as a support for their work. This article describes the work carried out to develop a method to capture mandibular movement. Taking into consideration the proposals on the market and in comparison with them, this system is intended to be as cheap and simple as possible.
Keywords: Motion sensor, jaw movements, computer program, prostheses’
manufacture, LEAP.
1 Introduction
Within a fully digitalized process to make dentures [1, 2], this study aims to develop a method to record the mandibular movement performed by a patient. This method should be cheaper and easier to use than existing applications. Our goal is to obtain a registration method with an accuracy better than 0.1 mm and a maximum price of 200 €, together with an open system architecture.
In this project, the steps to follow are:
• Development of the movement capturing software
• Design of the mountings for sensors and references
• Analysis of the accuracy of the obtained measurements.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_40
2 State of the Art
The following are the contactless mandibular movement capture techniques currently available on the market:
• ARCUSdigma:
The ARCUSdigma system [3, 4] uses ultrasound transmission to measure and reproduce the jaw movements. Its operation is relatively simple: on the one hand, a bow with four microphones is fixed to the skull and, on the other hand, a support with three pingers is set on the jaw. The intensity of each frequency, as captured by each microphone, determines its relative distance. These twelve measurements allow the device to interpolate the relative position of the support.
• Freecorder BlueFox:
This contactless system [5, 6] tracks a series of encoded visual patterns. First, to measure the position of the skull, a bow with references is placed on the ears and supported on the bridge of the nose. Another light modular arch is attached to the jaw to capture its movement.
The modularity of the lower arch accelerates and eases the installation and the recording. Using special cameras, the patterns are captured 100 times per second, thus achieving very high resolutions (1/1000 mm).
• JMA Zebris:
The JMA Zebris system [7] has a customized jaw anchor that joins the lower arch by means of magnets. Another, upper arch is placed on the skull and the bridge of the nose. Both of them carry electronic sensors that measure relative distances: the system determines the jaw's relative position by calculating the flight times of ultrasonic pulses.
• Research:
At Kang Cheng University (Taiwan), Jing-Jing Fang and Tai-Hong Kuo, researchers of the Department of Mechanics, developed a method to record jaw movement [8, 9]. In this case, customized stents are generated from dental molds or directly from the teeth, to be used as mountings for trace plates. Cameras record the movement of those plates, and their position is automatically calculated by interpolating images.
All the presented systems measure jaw movement indirectly, by means of tracking plates, modular arches or brackets attached to the teeth. To reproduce the jaw movement, dental molds or dentures must be 3D scanned twice: first separately and second with the added elements fixed. The first scan captures the teeth surface; the second one gives the relative distance from the references to the teeth. Therefore, having the references, the recorded movements and the relative distance,
reproducing the mandibular arch is possible. When recording only relative positions, it is also necessary to fix and measure an initial position to establish the relative positions of the skull and jaw. This measurement can be carried out at any time during motion capture. From this position, the dental arches can be placed in space. Each system has its own closed software package that allows processing the data. Sometimes it is necessary to take an initial measurement to use as a reference.
3 Methodology
3.1 Software development
In order to develop the software, choosing the hardware is a preliminary step. According to the premise of a low system price, an inexpensive commercial motion sensor was selected: the LEAP Motion [10].
This sensor is a small and light USB device that can be attached to mobile systems, arches, brackets, etc. It scans a nearby hemispherical environment at distances between 7 cm and 1 m by means of two cameras and three infrared LEDs. Three hundred readings per second are transmitted in real time to the computer. This peripheral allows programming in many languages (C++, C#, Unity, Objective-C, Java, Python…) and operates under different operating systems (Windows, OS X, Linux), thus meeting the open system requirement. Since C++ is one of the most widespread programming languages, it was selected for this project, thus facilitating the project's future development. Besides, it can communicate with the hardware without requiring any virtual platform, yielding high-performance programs.
The LEAP Motion device, designed to capture the movements of fingers and
hands, comes with a "tool" configuration to register physical pointers. The
software development kit (SDK) offers a series of public functions and properties
to determine which of the elements captured by the device will be used. The
following functions have been selected:
• TipPosition: pointer position
• Direction: tool’s direction vector
• Float Length: tool’s estimated length (in mm)
• Float Width: estimated thickness (diameter) of the tool (in mm)
• Int count(): number of visible items
This software aims to capture the movement of three cylindrical pointers attached to the mandibular arch. To facilitate this work, and after analyzing the device's operation, the cylinders end in conical surfaces. Besides the position, it is also necessary to know the time of each of the shots, and this time has to be the same for the three pointers. If one reference is not recorded at a given time, the data from the other two are purged. In order to quicken the process, the capture rate is reduced to 30 frames per second (out of the 300 possible), and this parameter is easily adjustable. The axis vector of each cylinder, as well as the diameters, are also captured. The algorithm to capture the reference positions was developed according to these variables. It would be possible to use fewer parameters, but the selected ones permit filtering the registered positions if necessary.
Fig. 1. Algorithm flowchart.
As the flowchart shows, once the libraries are loaded, the sensor is initialized. It starts searching for "Pointables" and then determines how many of them are visible. If the number equals 3 (the number of references), the counter of identified "Pointables" is reset. The data of each "Pointable" are successively extracted and added to a text file until the three elements are processed. When the three "Pointables" present in the "t1" time lapse have been processed, their centre of gravity and the vector resulting from adding the three director vectors are calculated and added to the text file. Once this "t1" time cycle finishes, a new capture cycle begins. All captured or calculated data are stored cyclically in a text file for later filtering and processing.
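The core of the capture cycle described above can be approximated in a few lines. This is a hypothetical sketch: the dictionaries stand in for the SDK's Pointable objects, with the keys "tip" and "dir" playing the roles of TipPosition and Direction, and the real implementation is written in C++ against the LEAP SDK.

```python
def process_frame(pointables):
    """Mimic one capture cycle: accept a frame only when exactly three
    pointables are visible, then return their centre of gravity and the
    sum of the three direction vectors (both logged to a text file in
    the real system)."""
    if len(pointables) != 3:       # a reference is missing: purge the frame
        return None
    cx = sum(p["tip"][0] for p in pointables) / 3.0
    cy = sum(p["tip"][1] for p in pointables) / 3.0
    cz = sum(p["tip"][2] for p in pointables) / 3.0
    dx = sum(p["dir"][0] for p in pointables)
    dy = sum(p["dir"][1] for p in pointables)
    dz = sum(p["dir"][2] for p in pointables)
    return (cx, cy, cz), (dx, dy, dz)
```

A frame with fewer (or more) than three detected references returns None, reproducing the purge step of the flowchart.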
3.2 Design of physical elements
The auxiliary equipment consists of two parts: on the one hand, the sensor support and, on the other hand, the reference tool that will be recorded. The LEAP allows capturing two systems simultaneously, since it has been designed to capture the motion of ten fingers and group them into two hands. However, analyzing the systems available on the market, several of them simplify the calculations by fixing the sensor to the skull. In this way, errors are minimized and only the relative movement between dentures must be measured from an initial position.
• Sensor’s support:
This piece is responsible for fixing the LEAP to the skull. The optimal distance of the device, together with its orientation relative to the mandibular arch and the fixture that holds the LEAP, was analyzed. The characteristics of the sensor's cameras impose a minimum distance of 7 cm. Furthermore, a fork was added in order to allow an angular adjustment for different physiognomies. After performing some tests, it was found that, when placed face up, the sensor was more prone to "noise" due to light pollution. It was therefore decided to fix the support to the forehead with ribbons, facing down. Interferences due to the body can easily be removed with a black cloth.
• Reference tool:
The LEAP sensor reads the support fixed to the mandibular arch. To design this
support and to optimize the capturing process, it is necessary to determine what
the sensor reads and how it performs these readings.
The outer finish should not be reflective (noise, false readings) or too dark (no capture). Besides, translucent or transparent materials cannot be used because the light emitted by the LEDs scatters and produces errors. Light, matte colours are best
captured. Physical elements will be built with white plastic using rapid
prototyping machines [8]. This choice facilitates the redrawing of parts based on
previous results.
The LEAP device was designed to capture preferably cylindrical objects, such as fingers and pointers. The analyses [11] show no significant variations in the robustness of the captures when varying the diameter between 3 and 10 mm. An average diameter of 7 mm was selected: this value ensures rigidity without adding excessive weight to an element that must be attached to the mandibular arch.
Initially, it was decided to place a trihedron formed by cylinders of equal length on the reference tool. This arrangement resulted in errors because the software confused the rods with one another. The tool was then modified by giving each cylinder a different length, as well as different angles between them. This introduces another filtering element that strengthens the system.
Fig. 2. Sensor’s support and reference tool.
3.3 Precision analysis
The nominal accuracy of the LEAP is 0.01 mm. However, as with all optical and mobile methods, it varies depending on environmental conditions, and the extent of this variation must be determined.
The system used to determine the accuracy of the method is very simple. The designed tool consists of three cylinders and the distance between their ends is known. These data are compared with the distances between the ends obtained from the captured points. To obtain these data, the LEAP is coupled to a dummy and the reference tool is fixed to a dental mold. This arrangement allows carrying out calibrations and all kinds of repetitive motions. Between one test and another, the program was reset and the sensor recalibrated.
The results were imported from the text files into Excel and filtered to eliminate false or repeated readings. The distances between points in each of the temporal sections, as well as the means and standard deviations of these values, were calculated.
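The spreadsheet computation described here (distances between the three captured end-points, then their mean and standard deviation) can be sketched directly; this is an illustrative reconstruction of the analysis, not the authors' actual Excel workflow.

```python
import math

def pairwise_distances(points):
    """Euclidean distances between every pair of captured end-points."""
    d = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d.append(math.dist(points[i], points[j]))
    return d

def mean_std(values):
    """Mean and (population) standard deviation of a list of distances."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, math.sqrt(var)
```

Comparing these per-frame distances against the known distances of the designed tool gives the accuracy figures reported below.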
Finally, the references were three-dimensionally scanned in different positions
and then, the results obtained were compared with LEAP readings. A structured
light GOM ATOS scanner was used for the three-dimensional scanning [12].
Maximum errors occurred in movements parallel to the LEAP's line of sight: the focusing distance changes and the device must correct it in real time. There are two types of errors:
- Incorrect determination of the cylinders' end-points along the axis. This error is constant within each session and proportional in each of the cylinders. Between sessions, similar and parallel triangles are generated by joining the end-points. The maximum distance obtained from the theoretical triangle was 0.02369 mm.
- Position errors. Comparing the LEAP with the GOM ATOS, the maximum error was 1.81 mm at the end of a cylinder and perpendicular to it. Applying Thales' theorem, the error at the point closest to the teeth is 0.2245 mm.
4 Conclusions and future works
The proposed system has been able to capture the movement of the mandibular arch. However, the maximum accuracy achieved only reached 0.2 mm at best. A number of problems worsened the accuracy of the captures and contaminated the data:
• Measurement variations: it was observed that, without changing the reference tool, the distances between points varied between different tests while the director vectors remained constant. The error remains constant throughout each test. After each calibration, the LEAP does not always return a point exactly at each reference cylinder's end; it tends to shift slightly along the axis. An analysis of the data shows that this variation is proportional to the variations of the other two references. If we generate a triangle with the captured points, proportional triangles are created, following Thales' theorem. This homogeneity makes it possible to correct the error: if the three cylinders converge at one point and the original triangle is known, the distance between the theoretical triangle and the captured one can be calculated and compensated.
• Changing the sequence of points: the LEAP does not associate a fixed number
with each detected pointer; the identification varies depending on the order in
which they are registered. This is a known bug that the developers hope to
correct in upcoming SDKs. To anticipate this error, we are working with
parameters such as "Float length" and "Float width" to filter and sort the
results before calculating the distances.
• Interferences: the environment sometimes produces false readings, yielding
more than three points. In these cases, the extra captured data allow filtering
to remove the incorrect readings.
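The interference filtering described above could, as a sketch, be automated by keeping the three-point subset whose pairwise distances best match the known reference triangle; the coordinates and side lengths below are hypothetical illustration values:

```python
import itertools
import math

def pick_reference_triplet(candidates, ref_sides):
    """Return the 3 candidate points whose sorted pairwise distances best
    match the known side lengths of the reference triangle."""
    ref = sorted(ref_sides)
    best, best_err = None, float("inf")
    for trio in itertools.combinations(candidates, 3):
        sides = sorted(math.dist(a, b)
                       for a, b in itertools.combinations(trio, 2))
        err = sum(abs(s - r) for s, r in zip(sides, ref))
        if err < best_err:
            best, best_err = trio, err
    return best

# Hypothetical capture: the three true references plus one spurious reading.
points = [(0, 0, 0), (30, 0, 0), (0, 40, 0), (100, 100, 5)]
kept = pick_reference_triplet(points, ref_sides=[30, 40, 50])
print(kept)  # the spurious point (100, 100, 5) is discarded
```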
Although the movement made by the references was captured, the system still
needs improvement to achieve greater accuracy. We believe that the
measurement-variation error is responsible for the decrease in accuracy, given
the proportionality between the two. Moreover, the point-filtering system, which
is so far manual, should be automated relying on the captured data.
Acknowledgments The authors of this paper want to thank the Eusko Jaurlaritza - Gobierno
Vasco SAIOTEK 2013 (SAI13/355) for financing this research project.
References
1. Solaberrieta, E., Mínguez, R., Barrenetxea, L., Otegi, J.R., Szentpétery, A. Comparison of
the accuracy of a 3-dimensional virtual method and the conventional method for transferring
the maxillary cast to a virtual articulator. Journal of Prosthetic Dentistry. Volume 113, Issue
3, 1 March 2015, Pages 191-197.
2. Solaberrieta, E., Otegi, J.R., Goicoechea, N., Brizuela, A., Pradies, G. Comparison of a
conventional and virtual occlusal record. Journal of Prosthetic Dentistry. Volume 114, Issue
1, 1 July 2015, Article number 1650, Pages 92-97.
3. ArcusDigma: (April 2016) http://www.kavousa.com/US/Other-Products/LaboratoryProducts/ARCUSdigma.aspx?sstr=1
4. Cardenas Martos, A et al. Registro de la dinámica témporomandibular mediante ultrasonidos
con ARCUSdigma de KaVo. Av Odontoestomatol [online]. 2003, vol.19, n.3 [citado 2016-04-22], pp.131-139. ISSN 0213-1285.
5. Freecorder BlueFox: (April 2016) http://www.freecorder.de/
6. Freecorder BlueFox specs: (April 2016) http://www.drdougerickson.com/prosthodontictechonology-duluth/freecorder_min_en.pdf
7. JMA Zebris: (April 2016)
http://www.zebris.de/english/zahnmedizin/zahnmedizinkiefergelenkanalyse.php?navanchor=10017
8. Fang, Jing-Jing; Kuo, Tai-Hong. Modelling of mandibular movement. Computers in Biology
and Medicine, 2008, 38(11), pp. 1152-1162.
9. Fang, Jing-Jing; Kuo, Tai-Hong. Tracked motion-based dental occlusion surface estimation
for crown restoration. Computer-Aided Design. Volume 41, Issue 4, April 2009, Pages 315–
323.
10. LEAP Motion: (April 2016) https://www.leapmotion.com/
11. Daniel Bachmann, Frank Weichert , Bartholomäus Rudak, Denis Fisseler. Analysis of the
Accuracy and Robustness of the Leap Motion Controller. Sensors 2013, 13(5), 6380-6393
12. GOM ATOS: (April 2016) http://www.gom.com/metrology-systems/system-overview/atoscompact-scan.html
Computer Aided Engineering of Auxiliary
Elements for Enhanced Orthodontic Appliances
Roberto SAVIGNANO1*, Sandro BARONE1, Alessandro PAOLI1 and
Armando Viviano RAZIONALE1
1 Department of Civil and Industrial Engineering, University of Pisa, Pisa, Italy.
* Corresponding author. Tel.: +39-050-221-8000 ; fax: +39-050-221-8065. E-mail address:
roberto.savignano@for.unipi.it
Abstract Orthodontic treatments based on removable thermoplastic aligners are
becoming quite common in clinical practice. However, there is no technical literature explaining how the loads are transferred from the thermoformed aligner to the
patient dentition. Moreover, the role of auxiliary elements used in combination
with the aligner, such as attachments and divots, still needs to be thoroughly explained. This paper is focused on the development of a Finite Element (FE) model
to be used in the design process of shape attributes of orthodontic aligners. Geometrical models of a maxillary dental arch, including crown and root shapes, were
created by combining optical scanning and Cone Beam Computed Tomography
(CBCT). Finite Element Analysis (FEA) was used to compare five different aligner
configurations for the same orthodontic tipping movement of a tooth (rotation
around the tooth’s center of resistance). The different scenarios were analyzed by
comparing the moment along the mesio-distal direction of the tooth and the resulting
moment-to-force ratio (M:F) delivered to the tooth on the plane of interest.
Results evidenced the influence of the aligner’s configuration on the effectiveness
of the planned orthodontic movement.
Keywords: Orthodontic tooth movement; orthodontic aligner; anatomical modelling; numerical analysis.
1 Introduction
Orthodontics is the branch of dentistry specializing in the correction of malocclusions by using different kinds of appliances. Among them, removable thermoplastic
aligners (RTAs) are the latest innovation, even if until the last decade they represented only a small part of the overall orthodontic treatments due to the highly specialized and manual processes required [1].
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering and Manufacturing, Lecture Notes in Mechanical Engineering, DOI 10.1007/978-3-319-45781-9_41
The recent diffusion of CAD/CAE methodologies has allowed an industrial approach to both the design and manufacturing of RTAs, thus increasing their use in common clinical practice. Removable
aligners, made of transparent material and thus almost invisible, have raised
growing interest as aesthetic alternatives to conventional fixed devices, especially
for adult treatments. The force-moment system delivered to the target tooth is
generated by the difference between the template and dentition geometries, since
each aligner is shaped slightly differently from the actual tooth position within
the mouth. A set
of different aligners, sequentially worn by the patient, is required to achieve the
final desired outcome since each of them is designed to perform only a limited orthodontic movement. The shape of each aligner is designed by a technician
through CAD software tools starting from the original tooth position in the mouth,
obtained by a digitalization process, and knowing the desired target tooth placement at the end of the treatment.
Even if orthodontic treatments based on RTAs are becoming quite common in
clinical practice, there is no technical literature describing how thermoformed
aligners deliver forces and moments to tooth surfaces. Moreover, RTA treatments
are usually associated with the use of auxiliary elements, such as attachments and/or
altered aligner geometries (divots), to improve the treatment effectiveness. However,
the current literature mainly reports clinical outcomes without
providing a thorough scientific description of their efficacy. Some attempts to
evaluate the loads delivered by the aligner have been made by using multi-axis
force/torque transducers for different orthodontic in-vitro scenarios composed of
replicated polymeric dental arches [2, 3]. These approaches, however, require the
manufacture of a different resin replica for each RTA attribute to be analyzed,
thus burdening the RTA optimization in terms of both time and cost. Moreover,
the material properties of the resin models differ from those of the dental
structures, and there is no distinction between the different anatomical tissues
(bone, ligaments, tooth).
In the orthodontic research field, the finite element method (FEM) has proven to
be an effective non-invasive tool to provide quantitative and detailed data on the
physiological tissue reactions occurring during treatments [4-6]. In particular,
Finite Element Analysis (FEA) has been used in dentistry since the 1970s [7],
since it can evaluate not only the force system delivered to the tooth, but also
the stresses and strains induced in the surrounding structures (periodontal
ligament and bone).
This paper aims at analyzing the influence of auxiliary elements’ features on
the force-moment system delivered to a central incisor by using a finite element
model. 3D anatomies of a maxillary dental arch, including crown and root shapes,
were modelled by combining optical scanning and Cone Beam Computed
Tomography (CBCT). FEA was used to compare five different aligner configurations
for the same orthodontic tipping movement (rotation around the tooth’s center of
resistance).
2 Materials and methods
2.1 Geometrical modelling
Dental data, captured by independent imaging sensors, were fused to create
multi-body orthodontic models composed of teeth, oral soft tissues and alveolar
bone structure. The methodology is based on integrating CBCT scanning and
structured-light surface scanning. An optical scanner was used to reconstruct
tooth crowns and soft tissues (visible surfaces) through the digitalization of
plaster casts. Tooth roots were obtained by segmenting CBCT data sets through the
anatomy-driven segmentation methodology described in [8]. The 3D individual
dental tissues obtained by the optical scanner and the CBCT sensor were fused
within multi-body orthodontic models with minimum user interaction. A segment of
six frontal maxillary teeth was selected from the whole maxillary arch. The
periodontal ligament (PDL), the soft biological tissue located between the tooth
and the alveolar bone, has a variable thickness with a mean value of 0.2 mm [9].
For this reason, it was modelled here as a uniform 0.2 mm thick layer between
each tooth and the jawbone. The RTA was assumed to have a constant 0.7 mm
thickness, originating from the 0.75 mm thick disk used before the thermoforming
process [10], and was modelled by exploiting CAD tools in order to define a layer
completely congruent with the tooth crown surfaces [4]. The obtained 3D
anatomical geometries (Figure 1) were auto-patched to create trimmed NURBS
surfaces, finally converted into "IGES" models.
Fig. 1. Geometrical representation of the modelled orthodontic anatomies (bone, PDL, teeth, aligner).
The tooth axes were defined according to the Local Reference System proposed
in [11]. The z-axis is associated with the lowest inertia moment of the geometrical
model and is obtained through Principal Component Analysis of the polyhedral
surface, considering masses associated with the barycenters of the triangles of
the polyhedron, proportional to their areas. Two sections of the tooth were
created and analyzed to identify the positive direction of the z-axis: the tooth
was sliced by two planes perpendicular to the z-axis, 3 mm away from the tooth
extremities, and the section showing the worst approximation of a circle is taken
as the upper one (Γc). The mesiodistal (y-axis) and labiolingual (x-axis) axes
are orthogonal to the z-axis and are obtained by analyzing the principal
components of inertia of the planar section Γc.
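A minimal sketch of this PCA step, using power iteration on the area-weighted covariance of the triangle barycenters (stdlib only; a real implementation would use a mesh library and a full eigensolver, and the sign of the axis would then be resolved by the section test described above):

```python
import math

def principal_axis(triangles):
    """Dominant eigenvector of the area-weighted covariance of triangle
    barycenters: the direction of largest spread, i.e. the axis with the
    lowest inertia moment. `triangles` is a list of three 3D points each."""
    bars, weights = [], []
    for a, b, c in triangles:
        bars.append([(a[i] + b[i] + c[i]) / 3.0 for i in range(3)])
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        cx = (u[1] * v[2] - u[2] * v[1],
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0])
        weights.append(0.5 * math.sqrt(sum(x * x for x in cx)))  # triangle area
    w = sum(weights)
    mean = [sum(wt * p[i] for wt, p in zip(weights, bars)) / w for i in range(3)]
    cov = [[sum(wt * (p[i] - mean[i]) * (p[j] - mean[j])
                for wt, p in zip(weights, bars)) / w
            for j in range(3)] for i in range(3)]
    vec = [1.0, 1.0, 1.0]
    for _ in range(200):  # power iteration converges to the dominant eigenvector
        nxt = [sum(cov[i][j] * vec[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in nxt))
        vec = [x / norm for x in nxt]
    return vec

# Mock "tooth": identical cross-sections stacked along z, so the long axis is z.
tris = [((1, 0, z), (0, 1, z), (-1, -1, z)) for z in (0, 10, 20, 30)]
print(principal_axis(tris))
```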
Attachments and divots were created through Boolean operations between the
tooth, the RTA and prismatic or spherical volumes respectively, as shown in
Figure 2. Both were located at the center of the tooth crown. The attachment
geometries were created on the tooth surface, with sizes along the x, y and z
directions of 1×3×1.5 mm for the horizontal attachment and 1×1.5×3 mm for the
vertical attachment. The divot spherical geometries, with a radius of 1 mm, were
created on the external surface of the RTA. Therefore, a 0.3 mm initial
penetration was added to the model.
Fig. 2. Divot and attachment creation workflow.
2.2 Finite element model
Data were imported within the finite element modeler (Ansys® 14). All bodies
were meshed with solid tetrahedral elements, resulting in approximately 220,000
nodes and 134,000 elements. The mesh size varied slightly between the different
scenarios due to the introduction of the auxiliary elements’ meshes. The
mechanical response of cortical bone, teeth, attachments and RTA was described by
using a linear elastic constitutive model (Table 1). Dental tissue was modelled
as a uniform body without taking into account the division into dentin, enamel
and pulp [9]. In the technical literature, different biomechanical models have
been proposed to simulate the PDL properties [12]. The linear elastic model has
proven appropriate to simulate the PDL behavior during the initial phase of the
orthodontic
movement, when the PDL maximum strain is lower than 7.5% [13]. However,
this requirement was not satisfied by the orthodontic movement simulated in this
paper. For this reason, the volumetric finite-strain viscoelastic model proposed
by Wang et al. [14] was implemented. The removable appliances were simulated as
made of a polyethylene terephthalate glycol-modified (PETG) thermoplastic disc,
whose mechanical properties were evaluated through a set of tensile tests carried
out under different experimental conditions. Auxiliary attachments, which are
made of dental composite material, were assumed to have the same material
properties as the tooth.
Table 1. Material properties used for the numerical simulations.

             E (MPa)   Poisson’s ratio
Tooth        20000     0.3
Bone         13800     0.3
RTA          1400      0.3
Attachment   20000     0.3
Evaluating the effectiveness of the loads delivered by an orthodontic device to
the dentition can be a challenging task, since a complex load system acts
simultaneously in all three spatial planes. The relationship between the 3D tooth
movement and the delivered loads can be analyzed by comparing moment-to-force
ratios (M:F) on the plane of interest [15]. The force system is measured at the
tooth’s center of resistance (CRES). The concept of the center of resistance of a
tooth is analogous to that of the center of mass, except that it refers not to a
free body but to a constrained one, such as the tooth in the alveolar complex. If
a force is applied at the CRES, the tooth shows a pure translation [16]. In
three-dimensional space, each M:F is defined by combinations of the forces
contained in the plane and the moments perpendicular to it; moreover, the
absolute values of the desired moment or force need to be taken into account. The
M:F parameter describes the quality of the force system: a higher M:F value
measured at the expected Center of Rotation (CROT) means that the resulting CROT
is closer to the expected one, while the absolute M or F values are related to
the quantity of the force system [17]. Three simulations were run by applying a
moment of 1.5 Nmm parallel to each reference tooth axis in order to find the
CRES [16].
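The transport of a measured force system to the CRES and the M:F ratios compared in Table 2 can be sketched as follows; the force, moment and CRES position below are made-up illustration values, not the paper's results:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def moment_at(point, ref_point, force, moment_at_ref):
    """Transport a moment from ref_point to point: M_p = M_ref + (ref - p) x F."""
    r = tuple(ref_point[i] - point[i] for i in range(3))
    m = cross(r, force)
    return tuple(moment_at_ref[i] + m[i] for i in range(3))

def mf_ratios(force, moment):
    """My/Fx and My/Fz, the ratios reported in Table 2 (units: Nmm / N = mm)."""
    return moment[1] / force[0], moment[1] / force[2]

# Hypothetical load measured at a crown point (origin), transported to a CRES
# assumed to lie 10 mm apical of it.
force = (2.0, 0.0, -0.9)    # N
m_crown = (0.0, 5.0, 0.0)   # Nmm at the crown point
cres = (0.0, 0.0, -10.0)    # mm, assumed CRES position
m_cres = moment_at(cres, (0.0, 0.0, 0.0), force, m_crown)
print(m_cres, mf_ratios(force, m_cres))
```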
Teeth and ligaments were relatively constrained by a bonded contact, which
only allows small sliding movements between joined nodes. The same constraint
was used to join bone and ligaments. The contact surface between teeth and RTA
was set as frictionless. The mesial and distal surfaces of the bone were fixed in all
directions. The creation of an initial penetration between the target tooth and the
aligner is necessary in order to generate the loading condition. For this reason, the
target tooth was rotated around the y-axis by 0.3°. The resulting movement is
called bucco-lingual tipping. The solver determined the equilibrium between the
bodies, thus removing the initial geometrical penetration. The final allowed
penetration was set at
0.01 mm, which was appropriate considering that the initial penetration on the target tooth ranged from 0.09 mm to 0.36 mm (Figure 3).
Fig. 3. (a) CRES and rotation imposed to the target tooth in order to create the initial penetration
for an aligner with a single divot (b). Color scale: initial penetration from 0 to 0.36 mm.
3 Results
Five different aligner configurations were considered for the numerical
simulations, as shown in Figure 4: an aligner without auxiliary elements
(standard), an aligner with a single or a double divot geometry, and an aligner
with a vertical or a horizontal attachment. The main parameters analyzed by FEA
were:
• maximum tooth displacement;
• force system delivered to the tooth, measured at the CRES.
Figure 4 shows the displacement maps of the target tooth obtained for each
scenario, while Figure 5 summarizes the force system delivered by the appliance
for all the aligner configurations. Table 2 reports the resulting force systems
and the M:F values.
Table 2. Force system measured at the CRES for each scenario.

                        My (Nmm)   My/Fx (mm)   My/Fz (mm)
Standard                24         12           -26.7
Divot                   71.3       9.8          -89.1
2 divots                77.7       9.6          -70.6
Vertical Attachment     37.4       14.4         -37.4
Horizontal Attachment   42.8       15.9         -42.8
Fig. 4. The five different aligner configurations used for the numerical simulations, along with
the displacement maps of the target tooth for each scenario (color scale: 0 to 0.075 mm).
Fig. 5. Summary of the force system delivered by the aligner to the target tooth in the five
different configurations.
The quality of the force system, as attested by the M:F parameter, increases
when using an attachment, independently of its orientation (vertical or
horizontal), while it decreases when using a divot. The most interesting values
are those associated with the My/Fx parameter, since My/Fz presents high values
even for the standard aligner configuration. The distance between the expected
CROT and the actual CROT is given by the relation D = k/(M:F), where k depends on
the specific tooth morphology and on the force system; M:F values greater than 26
(Table 2) can all be considered adequate to obtain the expected movement [17].
Figure 6 reports an example of the inverse relationship between M:F and D for a
generic tooth having k = 10.
Fig. 6. Example of the inverse relationship between M:F and D for a generic tooth (k=10).
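The relation D = k/(M:F) behind Figure 6 can be sketched directly; k = 10 is the generic-tooth value used in the figure:

```python
def rotation_center_distance(mf_ratio_mm, k=10.0):
    """Distance D (mm) between the expected and actual centers of rotation,
    from D = k / (M:F); a higher M:F brings the actual CROT closer."""
    return k / mf_ratio_mm

print([rotation_center_distance(mf) for mf in (1, 2, 5, 10)])
# -> [10.0, 5.0, 2.0, 1.0]
```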
4 Discussion
This paper aims at demonstrating how CAD/CAE techniques could be usefully
applied to study orthodontic treatments performed by transparent removable
aligners. In particular, the design and optimization of auxiliary elements for a
bucco-lingual tipping of a maxillary central incisor has been analyzed. The obtained results demonstrate that auxiliary elements can improve the treatment effectiveness. Figure 4 evidences how the use of a single divot causes the highest tooth
movement, with the tooth apex undergoing a 0.075 mm displacement. In particular,
a moment along the y-axis of 71.3 Nmm is obtained, about 2 times higher than that
obtained by using an attachment and about 3 times higher than that obtained with
a standard RTA (Table 2). This effect can be ascribed to the increased initial
penetration, which results in a higher load delivered
to the target tooth. The configuration with a double divot produces a better result
with respect to the single divot geometry. The My value increases from 71.3 Nmm
to 77.7 Nmm. The configurations with an attachment provide a more accurate
movement in all the scenarios, as attested by the My/Fx values. The horizontally
disposed attachment provides a higher moment value with respect to the vertical
one, due to the greater initial contact area.
The attachment is placed by the dentist onto the patient’s dentition through a
template designed by the orthodontist; its shape and position are therefore
highly precise. The divot geometry, instead, is manually created by the dentist
with tongs, so the actual divot may not be congruent with the requirements
prescribed by the technician. The success and diffusion of RTAs within the
orthodontic field mainly rely on their aesthetic advantage over classic fixed
orthodontic appliances. Even if the attachment color is usually similar to that
of the patient’s dentition, its size and location can undermine the aligner’s
invisibility. Therefore, compared to the divot, the attachment is less desirable
for a patient looking for an almost invisible appliance.
5 Conclusions
CAD/CAE approaches can improve the knowledge about tooth-appliance interaction
in orthodontics, thus enhancing the effectiveness of customized orthodontic
appliances. In particular, the use of auxiliary elements represents the most
challenging issue of aligner-based treatments. In this regard, some conclusions
can be drawn:
• Auxiliary elements can improve both the amount and the quality of the load
delivered to the tooth.
• The use of a divot provides a higher load to the target tooth, but with lower
accuracy.
• The use of two horizontally disposed divots generates a slightly better
orthodontic movement than a single divot.
• The use of attachments increases the movement accuracy, as defined by the M:F
parameter.
• The horizontal attachment slightly outperforms the vertical one with regard to
the amount of My delivered to the tooth.
Further efforts should be concentrated on the analysis of multiple movements
for different teeth, with the aim of obtaining generic rules for the selection of
the most appropriate auxiliary element for each specific condition.
References
1.Kesling H.D. Coordinating the predetermined pattern and tooth positioner with conventional
treatment. American journal of orthodontics and oral surgery, 1946, 32, pp. 285-293.
2.Hahn W., Engelke B., Jung K., Dathe H., Fialka-Fricke J., Kubein-Meesenburg D., and SadatKhonsari R. Initial forces and moments delivered by removable thermoplastic appliances during rotation of an upper central incisor. Angle Orthodontist, 2010, 80(2), pp. 239-246.
3.Elkholy F., Panchaphongsaphak T., Kilic F., Schmidt F., and Lapatki B.G. Forces and moments delivered by PET-G aligners to an upper central incisor for labial and palatal translation. Journal of orofacial orthopedics = Fortschritte der Kieferorthopadie : Organ/official
journal Deutsche Gesellschaft fur Kieferorthopadie, 2015, 76(6), pp. 460-475.
4.Barone S., Paoli A., Razionale A.V., and Savignano R. Computer aided modelling to simulate
the biomechanical behaviour of customised orthodontic removable appliances. International
Journal on Interactive Design and Manufacturing (IJIDeM), 2014, pp. 1-14. doi:10.1007/s12008-014-0246-z.
5.Martorelli M., Gerbino S., Giudice M., and Ausiello P. A comparison between customized
clear and removable orthodontic appliances manufactured using RP and CNC techniques.
Dental Materials, 2013, 29(2), pp. E1-E10.
6.Barone S., Paoli A., Razionale A.V., and Savignano R. Design of customised orthodontic
devices by digital imaging and CAD/FEM modelling. In BIOIMAGING 2016 - 3rd International Conference on Bioimaging, Proceedings; Part of 9th International Joint Conference on
Biomedical Engineering Systems and Technologies, BIOSTEC 2016, 2016, pp. 44-54.
7.Farah J.W., Craig R.G., and Sikarskie D.L. Photoelastic and finite element stress analysis of a
restored axisymmetric first molar. Journal of Biomechanics, 1973, 6(5), pp. 511-520.
8.Barone S., Paoli A., and Razionale A.V. CT segmentation of dental shapes by anatomy-driven
reformation imaging and B-spline modelling. International Journal for Numerical Methods in
Biomedical Engineering, 2016, 32(6), e02747, doi: 10.1002/cnm.2747.
9.Dorow C., Schneider J., and Sander F.G. Finite element simulation of in-vivo tooth mobility in
comparison with experimental results. Journal of Mechanics in Medicine and Biology, 2003,
03(01), pp. 79-94.
10.Ryokawa H., Miyazaki Y., Fujishima A., Miyazaki T., and Maki K. The mechanical properties of dental thermoplastic materials in a simulated intraoral environment. Orthodontic
Waves, 2006, 65(2), pp. 64-72.
11.Di Angelo L., Di Stefano P., Bernardi S., and Continenza M.A. A new computational method
for automatic dental measurement: The case of maxillary central incisor. Comput Biol Med,
2016, 70, pp. 202-209.
12.Fill T.S., Toogood R.W., Major P.W., and Carey J.P. Analytically determined mechanical
properties of, and models for the periodontal ligament: Critical review of literature. Journal of
Biomechanics, 2012, 45(1), pp. 9-16.
13.Poppe M., Bourauel C., and Jager A. Determination of the elasticity parameters of the human
periodontal ligament and the location of the center of resistance of single-rooted teeth a study
of autopsy specimens and their conversion into finite element models. Journal of orofacial orthopedics = Fortschritte der Kieferorthopadie : Organ/official journal Deutsche Gesellschaft
fur Kieferorthopadie, 2002, 63(5), pp. 358-370.
14.Su M.Z., Chang H.H., Chiang Y.C., Cheng J.H., Fuh L.J., Wang C.Y., and Lin C.P. Modeling
viscoelastic behavior of periodontal ligament with nonlinear finite element analysis. Journal
of Dental Sciences, 2013, 8(2), pp. 121-128.
15.Smith R.J. and Burstone C.J. Mechanics of tooth movement. American Journal of Orthodontics and Dentofacial Orthopedics, 1984, 85(4), pp. 294-307.
16.Viecilli R.F., Budiman A., and Burstone C.J. Axes of resistance for tooth movement: does the
center of resistance exist in 3-dimensional space? American Journal of Orthodontics and
Dentofacial Orthopedics, 2013, 143(2), pp. 163-172.
17.Savignano R., Viecilli R.F., Paoli A., Razionale A.V., and Barone S. Nonlinear Dependency
of Tooth Movement on Force System Directions. American Journal of Orthodontics and
Dentofacial Orthopedics, 2016, 149(6), pp. 838-846.
Finite Element Analysis of TMJ Disks Stress
Level due to Orthodontic Eruption Guidance
Appliances
Paolo NERI1*, Sandro BARONE1, Alessandro PAOLI1 and Armando RAZIONALE1
1 Department of Civil and Industrial Engineering – DICI, University of Pisa, Largo L. Lazzarino 2, 56122 Pisa, Italy.
* Corresponding author. Tel.: +39-050-221-8019; fax: +39-050-221-0604. E-mail address:
paolo.neri@dici.unipi.it
Abstract In the present work, the effect of Eruption Guidance Appliances
(EGAs) on TemporoMandibular Joint (TMJ) disks stress level is studied. EGAs
are orthodontic appliances used for early orthodontic treatments in order to prevent malocclusion problems. Commercially available EGAs are usually produced
by using standard sizes. For this reason, they are not able to meet all the specific
needs of each patient. In particular, EGAs are symmetric devices, while patient
arches generally present asymmetric conditions. Thus, uneven stress levels may
occur in the TMJ disks, reducing comfort and potentially damaging the more
heavily loaded disk. On the other hand, a customized EGA could overcome these issues,
improving the treatment effectiveness. In this preliminary study, a Finite Element
(FE) model was developed to investigate the effects of a symmetric EGA when
applied to an asymmetric mouth. Different misalignment conditions were studied
to compare the TMJ disks stress levels and to analyze the limitations of a symmetric EGA. The developed FE model can be used to design patient-specific EGAs,
which could be manufactured by exploiting non-conventional techniques such as
3D printing.
Keywords: Eruption Guidance Appliance (EGA); TMJ disorders; Patient-specific orthodontic appliance; TMJ disks stress; FE model.
1 Introduction
Mandible positioning with respect to the maxilla has a great influence on the overall patient health [1]. When misalignments or other geometrical defects are present, corrective actions must be taken. Eruption Guidance Appliances (EGAs) are widely used orthodontic devices that gradually recover the mandible position to a healthy condition.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering and Manufacturing, Lecture Notes in Mechanical Engineering, DOI 10.1007/978-3-319-45781-9_42
Its effectiveness is widely documented in the literature, especially when the treatment is performed during childhood. A silicone rubber
appliance is usually produced by a molding process. However, only standard sizes
are available, corresponding to different misalignment grades or malocclusion
situations, in order to reduce manufacturing costs. This implies that
patient-specific issues (e.g. mandible/maxilla asymmetries or teeth deformities)
cannot be taken into account when choosing the EGA, leading to non-optimized
solutions and lowering the appliance efficiency. For this reason, the design of a
patient-specific appliance could improve the treatment effectiveness: a better
fit between EGA and tooth geometries could be obtained, thus reducing the stress
intensity at TemporoMandibular Joint (TMJ) level. Clearly, the conventional
molding process does not allow the manufacturing of an economically sustainable
customized appliance. 3D printing techniques could instead be used, allowing the design of
any shape fitting the specific patient needs. However, a preliminary step is required to verify the advantages of a customized EGA with respect to standard
symmetric appliances. Several papers estimating the forces that the bite force
induces on the condyles are available in the literature, e.g. [2]. However, these
papers are mainly based on highly simplified models. Moreover, these analytical
approaches rely on experimental measurements of the bite force, without taking
into account any orthodontic appliance. The Finite Element (FE) method has been
successfully applied to biomedical analyses [3], and some FE simulations of TMJ
behavior are also reported in the literature [4]. However, few papers introduce
the EGA behavior in the analysis [5].
In the present paper, an FE model was developed to study the effect of a
symmetric EGA applied to a patient with different malocclusion problems. In
particular, the stress produced on the TMJ disks was considered for the case of
Class II malocclusion, i.e. the lower jaw in occlusion is positioned further back
relative to the upper jaw than in the ideal antero-posterior occlusion
relationship.
A healthy maxilla and mandible geometry was firstly analyzed, in order to have
a reference value for the TMJ disks stress levels. Then, different malocclusion
levels were simulated by geometrically misaligning the mandible with respect to
the maxilla. This approach allowed the study of the effects determined by different
misalignment conditions and to evaluate the stress intensification occurring on the
condyle disks when the symmetric appliance is used on an asymmetric mouth.
2 Finite element model description
The FE model was aimed at estimating the stress intensity values at condyle
disks level corresponding to different mandible misalignment conditions. All the
simulations were performed by using ANSYS Workbench software. The bodies
included in the model were temporal bones (articular fossa), condyles, mandibular
teeth, maxillary teeth, TMJ disks and the EGA. The material of the TMJ disks was
assumed to be linear, homogeneous and isotropic, according to [2], with a
Young’s modulus of 5 MPa, while EGA Young’s modulus was assumed to be 3
MPa. The Poisson’s ratio was set to 0.3 for both materials. Both EGA and TMJ
disks are composed of low-stiffness materials, so the large displacement option was considered for the analysis. This hypothesis requires a non-linear analysis, with longer computational time, but it also yields more realistic results. On the other hand, human bones can be considered rigid bodies since their Young's modulus is in the range 1-30 GPa [6], much greater than the silicone rubber and TMJ disk Young's modulus values. For this
reason, they were modeled as surface bodies instead of volume bodies, in order to
reduce their number of elements and thus the computational time. These bodies
only provide boundary and loading conditions to the more compliant parts, thus
the strain and stress solution in their domain is of low interest in the present work.
A preliminary sensitivity analysis showed that the shell thickness does not influence the results if a value greater than 1 mm is chosen. For this reason, a value
of 2 mm was set for all the bone bodies.
2.1 Geometry definition
Geometrical information about condyles and temporal bones were obtained by
segmenting Cone Beam Computed Tomography (CBCT) data with 3D Slicer, an
open-source software for medical image analysis [7]. TMJ disks were modeled by
filling the empty space between condyle and articular fossa with an ellipsoid. Material penetration was removed by using Boolean operations on the studied geometries, i.e. by subtracting bone geometries from the ellipsoid. Finally, fillets were
added to the obtained disk geometry, in order to avoid fictitious stress intensification in numerical results. Figure 1(a) shows the final obtained disk geometry and
its location between the bones structure.
Fig. 1. CAD models: (a) TMJ disk geometry and (b) EGA geometry, with labial shield, lingual shield and occlusal bite.
P. Neri et al.
The virtual model design of the EGA was inspired by the standard available
physical model used for the correction of II class malocclusion. The model is
composed of three main geometric elements: occlusal bite, labial shield and lingual shield (Figure 1(b)). The overall size is parameterized on the standard size of
the child’s arches.
In order to simplify the analysis, just one side of the model was created by using the acquired data and Boolean operations, while the other side was added by
symmetry. Mandibular and maxillary teeth anatomies were also available as symmetric geometries representing a healthy condition. This circumstance allowed
performing preliminary simulations without any misalignment at all, thus providing a reference value for TMJ disks stress levels. Furthermore, this symmetric geometry allowed the complete control of the desired load. The mandible misalignment and asymmetry were then simulated by geometrically displacing the subassembly composed of mandibular teeth, maxillary teeth and the EGA.
2.2 Connections and contact pairs
The interaction between the different bodies was simulated by using rigid joints
and contact pairs. In particular, the connection between the two condyles and the mandibular teeth was ensured by using a fixed joint, connecting all their degrees
of freedom. This allowed the distribution of the load introduced by the biting force
(see below) on both teeth and condyles. Several contact pairs were then defined to
connect the simulated bodies. Frictionless contact pairs were defined between
TMJ disks and temporal bones, ensuring load transmission along the normal direction and leaving the tangential direction unconstrained. In this way, a displacement of the disks in the articular fossa is allowed as a consequence of the
misalignment between mandible and maxilla teeth. No-separation contact pairs
were defined between the condyles and the TMJ disks, thus constraining both
normal and tangential directions. This choice prevents the relative displacement
between condyles and TMJ disks, thus reducing convergence problems. These
contact pairs can still be handled by a linear solver, limiting the computational effort.
Finally, several frictional contact pairs were defined between teeth (both mandibular and maxillary) and the EGA, thus increasing the computational time since
a non-linear solution process is required [8]. However, a better reproduction of the
real condition is obtained since the friction coefficient between silicone and teeth
is generally not negligible. A sensitivity analysis was performed, with a friction
coefficient value ranging from 0.1 to 0.3. A variation of 25% of the maximum
Von Mises stress in the disk was found. However, a constant value of 0.2 was
considered for all the performed simulations, since the present study is more
aimed at comparing different configurations rather than obtaining absolute results.
2.3 Boundary and loading conditions
Model boundary conditions were used to set body constraints and to impose the
desired misalignment configurations. A fixed constraint was applied to the temporal bones for all the performed simulations. Maxillary teeth boundary conditions
were used instead to set the loading condition. Two different misaligned placements of mandibular and maxillary teeth were considered in this work: displacement along the Y direction and rotation around the Y axis (Figure 2). The reference system was defined with the Z-axis perpendicular to the occlusal plane and
the Y-axis parallel to the occlusal plane and approximately congruent with the
palato-buccal direction of the anterior teeth. Mandibular teeth were rigidly rotated
with respect to the ideal healthy condition in order to represent the rotational misalignment. This circumstance determined an asymmetric contact during the solution
process causing an uneven contact pressure on the EGA. The whole sub-assembly
composed of mandibular teeth, maxillary teeth and EGA was rigidly moved by the
desired displacement along the Y direction with respect to condyles, temporal
bones and TMJ disks in the CAD model. This choice allowed representing the
translational misalignment while maintaining the correct relative positioning between teeth and EGA. The effect of this misalignment was then introduced in the
simulation by imposing a fixed displacement along the Y direction to the maxillary teeth only. In this way, the maxilla was forced to move to the original position, thus applying a load to the mandibular teeth through the EGA. The fixed
joint applied to the mandibular teeth then allowed the load transmission to the
condyles, and consequently to the TMJ disks.
Fig. 2. Model view with applied load and boundary conditions: fixed support, rigid joint, Y misalignment, Y rotation and biting force.
Finally, the biting force was introduced in the model as a loading condition.
This was practically obtained by applying a remote force to both the condyles. The
remote force application point was chosen to reflect the location of the biting
muscles insertion on the mandible (Figure 2). The force value was gradually increased from 0 N up to a maximum value of 30 N, which was then kept constant
in all the performed simulations in order to compare the obtained results. Non-linear phenomena are introduced in the model by the large displacement hypothesis and the frictional contact pairs. Thus, it was not possible to directly scale the
TMJ disks stress level with respect to the applied biting force, so that the comparison had to be performed at equivalent biting force.
3 Simulated misalignment configurations
Four different teeth configurations were tested in order to compare the results.
Firstly, a situation of an ideal perfect healthy mouth was considered, in order to
have a reference value for disks stress level. No misalignment was introduced in
this first analysis, obtaining a substantially symmetric solution, coherent with the
model symmetry. Then, a symmetric misalignment was tested by introducing 4
mm displacement between maxillary and mandibular teeth along the Y direction.
The third simulation was performed by applying an asymmetric misalignment
consisting of a 2° rotation of the mandible with respect to the maxilla around the
Y-axis. No displacement along the Y direction was added in this simulation. Finally, the two misalignment conditions were combined in the fourth simulation. Table 1 summarizes the four misalignment conditions considered in the performed
analyses. The misalignment values were chosen referring to an actual patient malocclusion case, in order to represent a realistic situation.
Table 1. Misalignment conditions for the studied simulations.

Simulation N.   Y Displacement (mm)   Y Rotation (°)
1               0                     0
2               4                     0
3               0                     2
4               4                     2
It is worth noting that in a non-linear analysis the loading history is important
in determining the results. The biting force was kept constant (1 N) until the Y
displacement, when present, was fully recovered in the simulation, thus better reproducing the actual loading history. The biting force was then gradually
increased up to the chosen maximum value of 30 N.
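The two-phase loading history can be sketched as a simple schedule generator; the step counts and the linear ramp shape below are illustrative assumptions, since the text only specifies the 1 N hold and the final 30 N value:

```python
def loading_history(n_hold_steps=5, n_ramp_steps=10,
                    hold_force=1.0, max_force=30.0):
    """Two-phase load schedule: hold a small biting force while the Y
    displacement is recovered, then ramp the force to its maximum.

    Returns a list of (phase, biting_force_N) tuples.
    Step counts and ramp shape are assumed for illustration."""
    steps = []
    # Phase 1: constant 1 N biting force while the imposed Y displacement
    # is gradually recovered by the maxillary teeth.
    for _ in range(n_hold_steps):
        steps.append(("displacement recovery", hold_force))
    # Phase 2: linear ramp of the biting force up to 30 N.
    for i in range(1, n_ramp_steps + 1):
        force = hold_force + (max_force - hold_force) * i / n_ramp_steps
        steps.append(("force ramp", round(force, 2)))
    return steps

schedule = loading_history()
```

Each tuple corresponds to one solver load step; only the final 30 N state is used for the comparisons reported below.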
Finite Element Analysis of TMJ Disks Stress …
421
4 Results
The described model was developed for comparative purposes. The maximum
equivalent stress in the TMJ disks was computed using the Von Mises criterion.
The stress distributions corresponding to the biting force of 30 N are reported in
Figure 3 and summarized in Table 2. The section plane of each figure was chosen
to show the maximum value obtained in the corresponding misalignment configuration.
Table 2. Results summary corresponding to a biting force of 30 N.

Simulation N.   Left disk (MPa)   Right disk (MPa)
1               0.37              0.37
2               0.82              0.83
3               0.47              0.38
4               0.62              0.91
The behavior of the maximum stress, with respect to the applied biting force, is
reported in Figure 4. In order to better compare results for the different misalignment configurations, Figure 4 only shows the results from 3 N to 30 N, i.e. when
the Y displacement was recovered and the biting force was gradually increased.
Fig. 3. Von Mises stress levels (MPa): section plane through maximum stress values, for the left and right disks of simulations 1-4.
Figure 4 shows that symmetric configurations cause even stress distributions in the left and right disks (simulations 1 and 2). In particular, the plots for the left and right disks of simulation 1 overlap perfectly. On the other hand,
when an asymmetric misalignment is introduced, the stress distribution is not
symmetric in the two disks (simulations 3 and 4). Comparison with the reference values obtained through simulation 1 shows that all the misalignment configurations determine higher stress levels in the disks. In particular, simulation 4, characterized by both
Y rotation and Y displacement between mandibular and maxillary teeth, results in
a difference greater than 30% between left and right disk stress values.
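As a quick check, the left/right asymmetry can be recomputed from the Table 2 values; in the sketch below the relative difference is taken with respect to the larger disk stress, one plausible reading of the 30% figure:

```python
# Maximum Von Mises stress (MPa) in the left and right TMJ disks at 30 N,
# as reported in Table 2.
results = {1: (0.37, 0.37), 2: (0.82, 0.83), 3: (0.47, 0.38), 4: (0.62, 0.91)}

def asymmetry_percent(left, right):
    """Relative left/right stress difference, w.r.t. the larger value
    (this normalization is an assumption, not stated in the text)."""
    hi, lo = max(left, right), min(left, right)
    if hi == 0.0:
        return 0.0
    return (hi - lo) / hi * 100.0

for sim, (left, right) in results.items():
    print(f"Sim. {sim}: asymmetry = {asymmetry_percent(left, right):.1f}%")
    # Only simulation 4 exceeds 30% under this definition.
```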
Fig. 4. Results comparison: maximum Von Mises stress (MPa) against biting force (N), left and right disks for simulations 1-4.
5 Conclusions
The present work was aimed at developing a finite element model for the analysis of TMJ disk stress levels due to the use of Eruption Guidance Appliances. The maximum Von Mises stress in the disks was taken into account as a comparison parameter to study the effect of different misalignment configurations. The analysis
showed that when a symmetric EGA is applied to an asymmetric mouth, uneven
stress distributions in the disks occur, thus proving that a symmetric EGA leads to
asymmetric loading of the TMJ disks. This issue could produce some damage to
the most stressed disk. Moreover, the patient comfort could decrease, thus reducing the amount of time spent by the patient wearing the appliance and consequently lowering the treatment effectiveness.
This preliminary study showed that standard designed EGAs, which are not optimized for the specific patient anatomy, present critical issues when applied to
generic asymmetric mouths. Further developments could be aimed at designing
patient-specific EGAs to be produced with 3D printing or other non-conventional
techniques. The optimization process of customized appliances could be driven by
the developed FE model in order to evaluate the influence of the different geometrical and anatomical parameters.
References
1. Wang X., Xu P., Potgieter J. and Diegel O. Review of the Biomechanics of TMJ. In 19th International Conference on Mechatronics and Machine Vision in Practice, M2VIP, Auckland,
November 2012, pp.381-386.
2. Li G., Sakamoto M. and Chao E.Y.S. A comparison of different methods in predicting static
pressure distribution in articulating joints. Journal of Biomechanics, 1997, 30, 635-638.
3. Ingrassia T., Nalbone L., Nigrelli V., Tumino D. and Ricotta V. Finite element analysis of two
total knee joint prostheses. International Journal on Interactive Design and Manufacturing,
2013, 7, 91-101.
4. Citarella R., Armentani E., Caputo F. and Naddeo A. FEM and BEM Analysis of a Human
Mandible with Added Temporomandibular Joints. The Open Mechanical Engineering Journal, 2012, 6, 100-114.
5. Tilli J., Paoli A., Razionale A.V. and Barone S. A novel methodology for the creation of customized eruption guidance appliances. In Proceedings of the ASME 2015 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, IDETC/CIE, Boston, August 2015, pp.1-8, doi:10.1115/DETC2015-47232.
6. Odin G., Savoldelli C., Boucharda P. and Tillier Y. Determination of Young’s modulus of
mandibular bone using inverse analysis. Medical Engineering & Physics, 2010, 32, 630-637.
7. Barone S., Paoli A. and Razionale A.V. Computer-aided modelling of three-dimensional maxillofacial tissues through multi-modal imaging. In Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine, 2013, 227(2), 89-104
8. Barzi E., Gallo G. and Neri P. FEM Analysis of Nb-Sn Rutherford-Type Cables. IEEE Transactions on Applied Superconductivity, 2012, 22, 1-5.
TPMS for interactive modelling of trabecular
scaffolds for Bone Tissue Engineering
Fantini M1, Curto M1 and De Crescenzio F1*
1 University of Bologna, Department of Industrial Engineering, Bologna, Italy
* Corresponding author. Tel.: +39 0543374447. E-mail address:
francesca.decrescenzio@unibo.it
Abstract The aim of regenerative medicine is replacing missing or damaged
bone tissues with synthetic grafts based on porous interconnected scaffolds, which
allow adhesion, growth, and proliferation of the human cells. The optimal design
of such scaffolds, in the Bone Tissue Engineering field, should meet several geometrical requirements. First, they have to be customized to replicate the skeletal
anatomy of the patient, and then they have to provide the proper trabecular structure to be successfully populated by the cells. Therefore, for modelling such scaffolds, specific design methods are needed to conceive extremely complex structures by controlling both macro and micro shapes. For this purpose, in the last
years, the Computer Aided Design of Triply Periodic Minimal Surfaces has received considerable attention, owing to their presence in natural shapes and structures.
In this work, we propose a method that exploits Triply Periodic Minimal Surfaces as unit cells for the development of customized trabecular scaffolds. The aim is to
identify the mathematical parameters of these surfaces in order to obtain the target
requirements of the bone grafts. For that reason, the method is implemented
through a Generative Design tool that allows interactive control of both the porosity and the pores size of the scaffolds.
Keywords: Bone Tissue Engineering, Scaffold Design, Triply Periodic Minimal Surfaces, Generative Design.
1 Introduction
Missing or damaged bone tissues of the human body are usually replaced by bone
grafts, which are obtained in an auto-graft approach, or by synthetic grafts, which
are manufactured with biocompatible materials. The second option is obviously
the less invasive one and has been widely studied in order to provide bone substitutes that are engineered to be successfully integrated with the existing tissues of
the patient. Actually, Bone Tissue Engineering (BTE) is the discipline for the design and manufacturing of interconnected porous scaffolds, which allow the regeneration of bone tissues, since the cells can gradually and progressively populate the ducts of the lattice structure [1].
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering
DOI 10.1007/978-3-319-45781-9_43
Basic geometrical requirements for the design of customized bone scaffolds are
the porosity and the pores size, together with the shape of the individual anatomy
and the specific defect site of the patient.
Computer Aided Design (CAD) and Solid Freeform Fabrication (SFF) technologies are providing valuable tools to conceive, generate, evaluate and manufacture
such scaffolds in a Computer Aided Tissue Engineering (CATE) approach [2, 3, 4,
5]. Therefore, design methods are being explored to efficiently generate complex surfaces for interconnected porous structures to be produced via Additive Manufacturing (AM) [6, 7, 8, 9]. For what
concerns the design methods, a well-known approach is to create hierarchical
structures based on unit cells that are replicated in the 3D space in order to obtain
a lattice structure that, intersected with the boundary surface of given individual
anatomies, allows the generation of customized porous scaffolds.
Initially, the idea was to create unit cells libraries, either using image-based design approaches, or using CAD approaches based on Boundary Representation (BRep) or Constructive Solid Geometry (CSG) [10]. Recently, a significant interest
has grown around hyperbolic functions and, specifically, around Triply Periodic
Minimal Surfaces (TPMS) [11]. Thanks to embedded properties of this class of
surfaces, researchers are focusing on their applicability in the biomedical field,
as well as in other domains that can exploit the possibility of designing porous interconnected structures based on TPMS surfaces.
Therefore, new methods are needed to make the design of such structures really
interactive in order to give the designer the possibility to explore solutions that
meet specific geometrical requirements that, especially in the biomedical domain,
are completely different from case to case and from patient to patient. Instead of
having a classical CAD approach to model a specific scaffold, we formalized a
workflow that contains the rules to generate a scaffold that meets the porosity and
pores size requirements, together with the boundary surface of the specific defect
site of the patient.
2 TPMS
Minimal surfaces are defined as surfaces with zero mean curvature that minimize
the surface area for given boundary conditions (a closed curve lying on the surface). With planar curves, these surfaces are planar. With three-dimensional
curves, these surfaces do not present discontinuities, thus resulting in extremely
smooth surfaces.
Using a combination of trigonometric functions, it is possible to generate a wide
range of periodic shapes. Many periodic minimal surfaces have been studied and
interesting properties have been proven, especially concerning their three-dimensional periodicity. Lord and Mackay report a survey about periodic minimal surfaces of cubic symmetry [12] and define this class as the most complex and interesting class
of minimal surfaces. Indeed, a unit cell with cubic symmetry can be used as the
building block of an interconnected porous lattice that can be easily obtained as a
three-dimensional array of unit cells.
There are different approaches for modelling minimal surfaces. One of these is
the use of the implicit method, which defines the surface as the locus of points for which a given function f(x,y,z,t)=0 is satisfied, i.e. the boundary of a solid. For a sphere, the function x^2+y^2+z^2-1=0 means that the space is divided into two subspaces: the points inside the sphere (x^2+y^2+z^2<1) and the points outside the sphere (x^2+y^2+z^2>1), separated by the points on the surface (x^2+y^2+z^2=1). In a cube of unit side containing a sphere of unit diameter, the unit cell, the solid and the void points can be easily identified.
Among this class of surfaces, TPMS are triply periodic. Since the aim of this work is the design of porous interconnected scaffolds with a trabecular structure, two different TPMS unit cells with cubic symmetry, which allow creating TPMS-based lattices, have been selected and are shown in Fig. 1: the Diamond surface (D) and
the Gyroid surface (G).
D: sin X sin Y sin Z + sin X cos Y cos Z + cos X sin Y cos Z + cos X cos Y sin Z = t   (Eq. 1)
G: cos X sin Y + cos Y sin Z + cos Z sin X = t   (Eq. 2)
Fig. 1. Unit cells and interconnected porous lattice for the Diamond (left) and the Gyroid (right).
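The implicit functions of Eq. 1 and Eq. 2 are straightforward to evaluate numerically. The sketch below (plain Python; the grid size is chosen arbitrarily) samples one period of each surface and estimates the volume fraction of one labyrinth; at t = 0 the Gyroid splits the space into two sub-volumes of equal measure, so the fraction is close to 0.5:

```python
import math

def diamond(x, y, z):
    """Schwarz Diamond implicit function (left-hand side of Eq. 1)."""
    return (math.sin(x) * math.sin(y) * math.sin(z)
            + math.sin(x) * math.cos(y) * math.cos(z)
            + math.cos(x) * math.sin(y) * math.cos(z)
            + math.cos(x) * math.cos(y) * math.sin(z))

def gyroid(x, y, z):
    """Schoen Gyroid implicit function (left-hand side of Eq. 2)."""
    return (math.cos(x) * math.sin(y)
            + math.cos(y) * math.sin(z)
            + math.cos(z) * math.sin(x))

def volume_fraction(f, t=0.0, n=40):
    """Fraction of grid points over one period where f < t (one labyrinth).
    n is an arbitrary sampling resolution."""
    pts = [2.0 * math.pi * k / n for k in range(n)]
    inside = sum(1 for x in pts for y in pts for z in pts if f(x, y, z) < t)
    return inside / n**3

frac = volume_fraction(gyroid)  # close to 0.5 at zero offset
```

Raising the offset t enlarges one labyrinth at the expense of the other, which is exactly the mechanism used later to tune the scaffold porosity.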
The Diamond surface was mathematically defined by Schwarz, in the 19th century, as a representative of TPMS with cubic symmetry [13]. Its labyrinth graphs
are four-connected diamond networks, since every cell is connected to its four
neighbors in the geometry of a tetrahedron.
The Gyroid surface was discovered by Schoen in 1970 during a study on the
aerospace applications of minimal surfaces [14]. With triple junctions, this surface
divides the space into two distinct regions, both with their own helical character. It
contains no straight lines and the topological symmetry of these sub-volumes is
inversion.
3 CAD generation of TPMS unit cells
The TPMS unit cells were generated by means of K3dSurf. This free software tool
allows visualization and manipulation of mathematical surfaces in three, four, five
and six dimensions, also supporting parametric equations and isosurfaces. Moreover, it is possible to transform each mathematical surface into a 3D model with a
defined bounding box. Modelling parameters are the grid resolution, the x, y, z
domain and the offset value (t).
The grid resolution affects the smoothness of the 3D model, but increasing this value also increases the size of the mesh, requiring extra computing
memory and time. The maximum value allowed is 100x100x100 and, as trade-off,
the grid resolution is set to 33x33x33 (the same on each axis).
To build a lattice as a three-dimensional array of a reference symmetric structure, the TPMS unit cell needs a bounded symmetric domain. Selecting different
x, y, z domains for the same mathematical surface, the resulting 3D models are
characterized by the same size of the bounding box (a 650 mm sided cube), but
differs each other in the number of the pores, in the dimension of the pores and in
the size of the mesh. Among different boundary conditions, the [-4π ÷ 4π] domain
is chosen on each axis as a good compromise between the pores size and the
sharpness of the 3D model, which degrades with a coarser tessellation of the surface.
The isosurface function corresponding to each TPMS can be edited by setting
different offset values (t) in the implicit equation of the TPMS surface (Eq. 1 and
Eq. 2). This allows the characterization of the mathematical surface resulting in a
unit cell with different values of porosity. Therefore, a proper t value can be set,
for each desired porosity according to the kind of bone that must be replaced.
Finally, the mathematical surfaces modelled in K3DSurf can be exported in .obj
format as TPMS unit cells with cubic symmetry (Fig. 2) and then can be imported
as mesh models in a CAD environment. Such mesh models can be used as the
building block of the three-dimensional array to obtain an interconnected porous
lattice. Therefore, TPMS unit cells represent the input component for the Generative Design (GD) process described in the next section.
Fig. 2. Unit cells generated via K3DSurf, setting grid resolution to 33x33x33, x, y, z domain to [-4π ÷ 4π] and null offset value (t), for the Diamond (left) and the Gyroid (right).
4 Generative Design process
The concept of GD is based on the idea of producing digital shapes that follow
rules that can be written in a source code. First, the designer’s idea has to be formalized in the code, and then the computer interprets the code generating the
shape. The designer can modify the code and the parameters after evaluating the
output. This approach has been expanding widely since Computer Aided Industrial Design (CAID) provides scripting capabilities and intuitive tools to create scripts
through graphical interfaces. One of these is Grasshopper, the Rhinoceros 5 plug-in, conceived to create scripts in a tabs-and-canvas interface where the flow to
generate shapes, calculate parameters and evaluate properties can be implemented.
Therefore, for designers and architects, CAID evolved into GD, allowing the generation of an infinite number of shapes that follow specific rules.
In the design of scaffolds for BTE, such rules have to be identified by studying
the problem of substituting bone defects with synthetic grafts that mimic the patient's bone. Tissue information is commonly obtained by means of non-invasive imaging methods, such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI).
The first information, essential for the surgical planning, evaluates the global
bone properties at macrostructural level assessing the bone porosity (P%). The
second one concerns the microstructural level, and in particular the pores size.
In generating a lattice useful to realize a patient-specific scaffold, the GD process needs two geometrical input components: the TPMS unit cell, which features
the trabecular pattern, and the patient bone geometry that has to be replaced. In
this case, both are 3D meshes: the first is an .obj file coming from K3dSurf, the
second one, generally, is an .stl file coming from the DICOM data set of the defect
site of the patient.
As a sample case study for this work, we considered the replacement graft for a patient affected by a severe atrophy of the right mandibular ramus (Fig. 3).
Fig. 3. The boundary surface of the scaffold designed for a patient affected by a severe atrophy of the right mandibular ramus.
To design the scaffold, requirements in terms of desired percentage porosity
(P%) and pores size must be satisfied. First of all, different TPMS unit cells are
imported, as .obj files, in the Rhinoceros environment in order to evaluate the percentage porosity (P%), related to the offset value (t) set in the implicit equation of the
surface via K3dSurf. Thereafter, the bounding box of each TPMS unit cell is
evaluated. The relative percentage porosity (P%) can then be determined by the
relationship between the volume of the scaffold and the volume of the bounding
box.
P% = (V_BoundingBox - V_TPMSScaffold) / V_BoundingBox * 100   (Eq. 3)
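The porosity relation of Eq. 3 translates directly into code; the volumes in the example below are placeholders, since any consistent unit cancels in the ratio:

```python
def percentage_porosity(v_bounding_box, v_scaffold):
    """Percentage porosity P% per Eq. 3: void volume over bounding-box volume."""
    return (v_bounding_box - v_scaffold) / v_bounding_box * 100.0

# Placeholder volumes for illustration: a scaffold occupying 20% of its
# bounding box has a porosity of 80%, the target value used in this work.
p = percentage_porosity(v_bounding_box=1000.0, v_scaffold=200.0)
```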
A GD flow has been formalized in order to automatically compute the percentage porosity (P%) of any unit cell generated through TPMS equations by varying
the offset value (t) to obtain the required porosity of the bone. Moreover, the
nominal pores size of each unit cell has been computed according to offset value
(t). Results are reported and discussed in the next section.
In order to mimic the patient bone, both at macrostructural and microstructural
level, the pores size has to be constrained in the GD flow, as depicted in Fig. 4.
Fig. 4. Customized porous scaffold generation flow.
Therefore, the input data are the target pores size, the TPMS unit mesh (with
the required porosity and the nominal pores size) and the mesh representing the
patient bone geometry, thus the needed bone graft shape. The TPMS unit cell is
generated via K3DSurf (a 650 mm sided cube) with a measurable nominal pores
size. The target pores size is required to compute the scale factor to be applied to
the TPMS unit cell based on the ratio between the desired and the nominal pores
size of the TPMS unit cell. Thus, a scaled TPMS unit cell with the appropriate
pores size is generated. Then, in order to cover the patient bone geometry, the
scaled TPMS unit cell is replicated in a three-dimensional array so that the total
volume is larger than the volume of the bone graft to be produced. Thus, the numbers of array elements in the x, y, z directions are computed based on the bounding box
of the needed bone graft. Finally, a Boolean intersection between the TPMS lattice
mesh and the patient bone geometry allows obtaining the watertight mesh of the customized scaffold. As for the computational burden on a standard laptop, the scaffold of the example (bounding box: 48.05x25.91x24.66 mm) can be obtained based on a Gyroid TPMS with different pore sizes (mm) and corresponding computational times (s): 4.0 - 29.1; 2.7 - 76.6; 1.3 - 1592.7; 1.0 - 6559.9.
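Using the figures given in the text (a 650 mm unit cell, the 141.69 mm nominal Gyroid pore size reported in the next section, and the example bounding box), the scale factor and array counts can be sketched as follows; the 0.5 mm target pore size is an assumed value for illustration:

```python
import math

def scale_factor(target_pore_mm, nominal_pore_mm):
    """Ratio used to shrink the TPMS unit cell to the desired pore size."""
    return target_pore_mm / nominal_pore_mm

def array_counts(bbox_mm, cell_side_mm):
    """Cells per axis so the lattice fully covers the graft bounding box."""
    return tuple(math.ceil(dim / cell_side_mm) for dim in bbox_mm)

# Gyroid unit cell generated via K3DSurf: 650 mm cube, 141.69 mm nominal pore.
s = scale_factor(target_pore_mm=0.5, nominal_pore_mm=141.69)  # assumed target
cell_side = 650.0 * s                                          # scaled cube side
counts = array_counts((48.05, 25.91, 24.66), cell_side)
```

The resulting array is then intersected with the graft boundary mesh via the Boolean operation described above.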
5 Results and discussion
As reported in the literature, bone scaffolds play a fundamental role in the regeneration of new bone tissues. In addition, scaffolds act as carriers for morphogenetic protein distribution, encouraging osteoconductive activity [15]. Finally, osteogenesis follows scaffold cell seeding, causing new bone formation. During osteogenesis, scaffolds should mimic bone morphology, structures and functions with the aim of optimizing the integration with the surrounding tissue. Therefore, the
requirements of interests, such as percentage porosity (P%) and pores size have
been extensively studied.
From the literature, the trabecular bone has a porosity variable in the range
[50% ÷ 90%], while the compact bone has a lower porosity (<10%) [16], due to the Haversian canals. For what concerns the pores size, the minimum value required to regenerate mineralized bone is generally considered to be approximately
100 μm [17]. Other investigations have indicated that the appropriate pores size
for attachments, differentiations, ingrowth of osteoblasts and vascularization is
approximately 200-500 μm [18] or 300-400 μm [19] in porous bone substitute applications. More recently, it has been observed that for metallic scaffolds (Titanium), realized by means of Selective Laser Sintering (SLS), the size of the pores
is one of the critical factors and should be between 100 and 600 μm [20]. Other
studies [21], related to bone ingrowth, have shown that bone growth not only depends on the architecture and pore appearance, but also on a pore size variable in the range [300 ÷ 800 μm]. It is also reported that for in vivo BTE, minimum pore sizes of 300 μm are needed for capillary formation, while up to 800 μm pore size there is no statistical difference in scaffold bone ingrowth and bone formation. Generally, for bone scaffolds, the pores size should be maintained in a given range [150 ÷ 600 μm] [22].
According to the data reported above, we have set the following parameters:
• Porosity ≈ 80%
• Pores size ranging between [150 ÷ 600 μm]
The pores size is evaluated as the diameter of the sphere that can be inscribed inside each cavity (void) of the TPMS unit cell, as shown in Fig. 5.
Fig. 5. Evaluation of the pores size as the diameter of the sphere inscribed inside each cavity (void).
Keeping the same grid resolution (33x33x33) and the same x, y, z domain [-4π
÷ 4π], the percentage porosity (P%) and the pores size of the TPMS unit cells were
computed by varying the offset value (t). These values are reported in Fig. 6 showing the linear correlation between such parameters and the offset value (t).
Fig. 6. Diagrams with percentage porosity (P%) and pores size, in relation to the offset value (t),
for different TPMS unit cells.
It can be observed that, for what concerns the Diamond and the Gyroid TPMS,
in order to generate lattices with the target porosity of 80%, each trigonometric
equation can be rewritten as follows:
D: sin X sin Y sin Z + sin X cos Y cos Z + cos X sin Y cos Z + cos X cos Y sin Z = 0.64   (Eq. 5)
G: cos X sin Y + cos Y sin Z + cos Z sin X = 0.85   (Eq. 6)
In Fig. 7 the Diamond and Gyroid TPMS unit cells are shown with the sphere representing the nominal pore size. For the Diamond TPMS unit cell (t80% = 0.64), the nominal pore size, i.e. the diameter of the corresponding sphere, is 118.02 mm; for the Gyroid TPMS unit cell (t80% = 0.85), the nominal pore size is 141.69 mm. Knowing the right offset value t80% and the corresponding nominal pore size, it is possible to obtain, according to the desired pore size of the scaffold in the range [150 μm ÷ 600 μm], the TPMS scale factor used in the GD flow.
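The scale-factor step can be sketched as follows (the helper name and unit handling are ours, for illustration only):

```python
# Hypothetical helper (not from the paper): derive the TPMS scale factor of
# the GD flow as the ratio between the desired pore size of the scaffold and
# the nominal pore size measured on the unscaled unit cell.

def tpms_scale_factor(nominal_pore_mm, target_pore_um):
    # Pore size scales linearly with the cell, so the factor is a plain ratio
    # after converting the target from micrometres to millimetres.
    return (target_pore_um / 1000.0) / nominal_pore_mm

# Gyroid cell at t80% = 0.85 has a nominal pore diameter of 141.69 mm; to hit
# the upper bound of the admissible pore range (600 um) the lattice is scaled by:
print(tpms_scale_factor(141.69, 600.0))
```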
In this way, the trabecular structure is modelled as isotropic. However, a gradient in pore size and porosity could be introduced by replacing the constant offset value (t) in the TPMS equation with, for example, a term linear in z.
Fig. 7. TPMS cubic unit cell with the sphere representing the nominal pore size
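A minimal sketch of such a graded cell (our assumption of how the linear term could be introduced, not the authors' implementation; the void phase is assumed where the implicit function falls below the offset):

```python
import numpy as np

def graded_gyroid(x, y, z, t0=0.3, t1=0.85, z0=-4 * np.pi, z1=4 * np.pi):
    # Replace the constant offset t with a linear ramp along z, so that
    # porosity (and pore size) grades along the build direction.
    t = t0 + (t1 - t0) * (z - z0) / (z1 - z0)
    f = np.cos(x) * np.sin(y) + np.cos(y) * np.sin(z) + np.cos(z) * np.sin(x)
    return f - t  # void phase assumed where f - t < 0

s = np.linspace(-4 * np.pi, 4 * np.pi, 33)
x, y, z = np.meshgrid(s, s, s, indexing="ij")
vals = graded_gyroid(x, y, z)
p_low = float(np.mean(vals[:, :, :16] < 0))   # void fraction in the z0 half
p_high = float(np.mean(vals[:, :, 17:] < 0))  # void fraction in the z1 half
print(p_low < p_high)  # the void fraction increases along z
```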
6 Conclusions
This work describes a GD flow to generate customized scaffolds on a specific patient anatomy, which allows the internal morphology of the lattices to be interactively controlled in terms of porosity and pore size. The method produces a watertight mesh of the trabecular scaffold that can be manufactured by means of AM technologies.
The method can also be applied to further application fields, other than BTE, and, in general, to the design of customized porous lattices with a given 3D boundary surface.
Attempts to export TPMS scaffolds to a 3D printer were successfully performed, while structural analysis by means of CAE tools is still to be investigated. The limits that bound the applicability of this method are those of any porous structure, namely the minimal dimension of the section of the conduits and the maximum volume of the customized scaffold for a given section. Extreme conditions can affect the computational time, and the manufacturability, if the scaffold does not meet the requirements or constraints of the manufacturing process.
References
1. Salgado AJ, Coutinho OP, Reis RL. Bone Tissue Engineering: State of the Art and Future Trends. Macromol Biosci. 2004;4(8):743-765.
2. Sun W, Starly B, Nam J, Darling A. Bio-CAD modeling and its applications in computer-aided tissue engineering. Computer-Aided Design. 2005;37(11):1097-1114.
3. Bucklen B, Wettergreen M, Yuksel E, Liebschner M. Bone-derived CAD library for assembly of scaffolds in computer-aided tissue engineering. Virtual Phys Prototyp. 2008;3(1):13-23.
4. Chua CK, Leong KF, Cheah CM, Chua SW. Development of a Tissue Engineering Scaffold Structure Library for Rapid Prototyping. Part 1: Investigation and Classification. Int J Adv Manuf Technol. 2003;21(4):291-301.
5. Peltola SM, Melchels FP, Grijpma DW, Kellomäki M. A review of rapid prototyping techniques for tissue engineering purposes. Ann Med. 2008;40(4):268-280.
6. Yang S, Leong KF, Du Z, Chua CK. The Design of Scaffolds for Use in Tissue Engineering. Part I. Traditional Factors. Tissue Eng. 2001;7(6):679-689.
7. Xiao D, Yang Y, Su X, Wang D, Sun J. An integrated approach of topology optimized design and selective laser melting process for titanium implants material. Bio-Medical Materials and Engineering. 2013;23(5):433-445.
8. Williams JM, Adewunmi A, Schek RM, Flanagan CL, Krebsbach PH, Feinberg SE, Hollister SJ, Das S. Bone tissue engineering using polycaprolactone scaffolds fabricated via selective laser sintering. Biomaterials. 2005;26(23):4817-4827.
9. Fantini M, Curto M, De Crescenzio F. A method to design biomimetic scaffolds for bone tissue engineering based on Voronoi lattices. Virtual Phys Prototyp. 2016;11(2):77-90.
10. Hollister SJ. Porous scaffold design for tissue engineering. Nat Mater. 2005;4(7):518-524.
11. Yoo DJ. Computer-aided Porous Scaffold Design for Tissue Engineering Using Triply Periodic Minimal Surfaces. Int J Precis Eng Man. 2011;12(1):61-67.
12. Lord EA, Mackay AL. Periodic minimal surfaces of cubic symmetry. Curr Sci. 2003;85(3):346-362.
13. Schwarz HA. Gesammelte Mathematische Abhandlungen, vol. 1. Springer, Berlin, 1890.
14. Schoen AH. Infinite periodic minimal surfaces without self-intersections. NASA Technical Note D-5541, 1970.
15. Karageorgiou V, Kaplan D. Porosity of 3D biomaterial scaffolds and osteogenesis. Biomaterials. 2005;26:5474-5491.
16. Hollister SJ, Kikuchi N. Homogenization Theory and Digital Imaging: A Basis for Studying the Mechanics and Design Principles of Bone Tissue. Biotechnology and Bioengineering. 1994;43(7):586-596.
17. Hulbert SF, Young FA, Mathews RS, Klawitter JJ, Talbert CD, Stelling FH. Potential ceramic materials as permanently implantable skeletal prostheses. J Biomed Mater Res. 1970;4(3):433-456.
18. Clemow AJ, Weinstein AM, Klawitter JJ, Koeneman J, Anderson J. Interface mechanics of porous titanium implants. J Biomed Mater Res. 1981;15(1):73-82.
19. Tsuruga E, Takita H, Itoh H, Wakisaka Y, Kuboki Y. Pore size of porous hydroxyapatite as the cell-substratum controls BMP-induced osteogenesis. J Biochem. 1997;121(2):317-324.
20. Challis VJ, Roberts AP, Grotowski JF, Zhang LC, Sercombe TB. Prototypes for bone implant scaffolds designed via topology optimization and manufactured by solid freeform fabrication. 2010;12(11):1106-1110.
21. Melchels FP, Bertoldi K, Gabbrielli R, Velders AH, Feijen J, Grijpma DW. Mathematically defined tissue engineering scaffold architectures prepared by stereolithography. Biomaterials. 2010;31(27):6909-6916.
22. Cornell CN. Osteoconductive materials and their role as substitutes for autogenous bone grafts. Orthopedic Clinics of North America. 1999;30(4):591-598.
Mechanical and Geometrical Properties
Assessment of Thermoplastic Materials for
Biomedical Application
Sandro BARONE1, Alessandro PAOLI1, Paolo NERI1, Armando Viviano RAZIONALE1* and Michele GIANNESE1
1 DICI – Department of Civil and Industrial Engineering, University of Pisa
* Corresponding author. Tel.: +39-050-221-8012; fax: +39-050-221-8065. E-mail address: a.razionale@ing.unipi.it
Abstract Clear thermoplastic aligners are nowadays widely used in orthodontics for the correction of malocclusion or teeth misalignment defects. The treatment is virtually designed with planning software that allows for the definition of a sequence of small movement steps from the initial tooth position to the final desired one. Every single step is transformed into a physical device, the aligner, by thermoforming a thin foil of plastic material on a 3D printed model. Manufactured aligners can have inherent limitations such as dimensional instability, low strength, and poor wear resistance. These issues could be associated with material characteristics and/or with the manufacturing processes. The present work aims at the characterization of the manufactured orthodontic devices. Firstly, the mechanical properties of different materials have been assessed through a set of tensile tests under different experimental conditions. The tests analyze the effect that the forming process and the normal use of the aligner may have on the mechanical properties of the material. The manufacturing process could also introduce unexpected limitations in the resulting aligners, which would be a critical element to control in order to establish the resulting forces on the teeth. Several studies show that the resulting forces can be greatly influenced by the aligner thickness. A method to easily measure the actual thickness of the manufactured aligner is proposed. The analysis of a number of real cases shows that the thickness is far from uniform and can vary strongly along the surface of the tooth.
Keywords: 3D Human Modeling; Virtual Design; Clear Aligner; Thermoforming Process; Mechanical Properties Assessment; Optical 3D Scanner.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_44
437
1 Introduction
Orthodontic treatment with clear aligners is becoming more and more popular, thanks also to the diffusion of CAD/CAM technologies in biomedical applications. Compared with traditional fixed appliances with metallic braces, clear aligners are aesthetically appealing and more comfortable to wear. These treatments are preferred by adult patients, who do not want to treat their malocclusions in the traditional way. However, complicated cases such as severe crowding, deep bites or round teeth rotations are sometimes difficult to resolve with aligners. Often there is the need for case refinement, or even to turn back to braces, due to the low efficacy and predictability of aligners [1-2].
The treatment process is based on a digital impression, which can be obtained with a 3D scan of a physical impression of the patient's mouth or by a direct scan of the desired geometry with an intra-oral scanner. The treatment is virtually designed with planning software that allows for the definition of a sequence of small movements to bring each tooth from the initial position to the final desired one. Every single step is transformed into a physical device, the aligner, by thermoforming a thin foil of plastic material on a 3D printed model. However, manufactured aligners can have inherent limitations such as dimensional instability, low strength, and poor wear resistance. These issues could be associated with material characteristics and/or with the manufacturing processes.
The mechanical performance of the aligner's thermoplastic material plays a critical role in developing the continuous orthodontic forces able to yield the desired results. Orthodontic thermoplastic materials should combine particular characteristics, including transparency, low hardness, good elasticity and resilience, and resistance to aging.
The present work aims at the characterization of the manufactured orthodontic devices. Firstly, the mechanical properties of three different materials commonly used in orthodontic practice have been assessed through a set of tensile tests carried out under different conditions. The tests analyze the effect that the forming process, and subsequently the normal use of the aligner, may have on the properties of the material. For this reason, the samples were subjected to an aging treatment in a solution that reproduces the biochemical behavior of human saliva.
On the other hand, the manufacturing process itself could introduce unexpected limitations in the resulting aligner [3-4]. This would be a critical element to control in order to assess the resulting forces on the teeth. Moreover, force application in the tooth-aligner system is not a fully understood mechanism. Several studies show that it can be greatly influenced by the aligner thickness [5-7]. A method to easily measure the actual thickness of the manufactured aligner is proposed. The method is based on the use of an optical scanner to acquire the physical model used for thermoforming the device, with and without the aligner.
The difference between the two acquisitions depends only on the thickness of the aligner. The analysis of a number of real cases shows that the thickness is far from uniform and can vary strongly along the surface of the tooth.
Results obtained in the present work have allowed the aligner to be characterized in terms of mechanical and geometrical properties with a high level of confidence. This allows for a more accurate definition of the problem and a better design of the treatment planning. The results could also be used to build a reliable finite element method (FEM) model for treatment optimization purposes [8-9].
2 Materials and methods
2.1 Mechanical properties
Aligners in the oral cavity are subjected to an aggressive environment that can lead to a strong degradation of their mechanical properties, with negative influence on treatment efficacy. For this reason, the tensile tests were extended to different laboratory conditions to simulate the actual operating conditions. In particular, an aging stage was designed to reproduce the influence of saliva corrosion on the aligner. An artificial mixture was used to reproduce the biochemical behavior of human saliva [10-11].
2.1.1 Tensile test
The test campaign was designed according to the EN ISO 527-1:2012 [12] technical indications, which establish general conditions and procedures for tensile tests on plastic materials.
For each of the three materials, tests in original condition (as received from the supplier), after the thermoforming stage, and after the thermoforming and aging stages were carried out. All materials were supplied as circular plates (diameter 125 mm) with thicknesses of 0.75 and 1 mm. Both shape and size of the specimens were chosen on the basis of this thickness, as suggested by the standard.
The thermoforming process was performed following the manufacturer's recommendations. The thermoformed specimens were obtained by using the same technological process adopted to manufacture the aligners. A 3D printed disk of 80 mm diameter and 12 mm height was used as the model form for the thermoforming machine (Ministar S from Scheu Dental). The parameters used were 70°C for the infrared
lamp and 4 bar of pressure for 30 seconds. The flat surface on the top of the disk provides a flat area of thermoformed material, which can be used to obtain the desired specimens. The final specimens were then obtained by high-pressure waterjet cutting and manually refined. Each specimen was then measured with a micrometer (CLM1-15QM, Mitutoyo Co.) to establish the cross-section area, and preliminarily weighed to evaluate the quantity of water absorbed after the aging stage (Table 2). Some samples were subjected to the aging treatment. The aging stage consisted of a bath in the prepared compound (Table 1) for 7 days at an environmental temperature of 37°C. This time corresponds to half the normal use time of each aligner during the actual treatment, and is justified because the water absorption of plastic materials mainly occurs in the first 72-168 hours [13]. At the end of the aging stage, the material was placed in a special environment (silica gel bell) in order to reach the equilibrium condition, and its weight variation was evaluated to establish the amount of water absorbed.
The tensile tests were performed on an Instron 5500R universal testing machine at a temperature of 23°C. A total of 10 specimens for each material were tested under each condition (Figure 1). Two different test campaigns were designed to determine the elastic modulus and the tensile yield stress. The testing speed was set to 0.1 mm/min to obtain data for the elastic modulus and to 5 mm/min for the stress-strain curves, as suggested by ISO 527-1. Mean and standard deviation values were determined for each item. Comparisons among the materials under each test condition were finally performed.
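The elastic modulus is the slope of the initial linear branch of the stress-strain curve, E = Δσ/Δε. A toy least-squares fit on synthetic data (not the measured curves; the value 1500 MPa is only of the order of magnitude reported later in Table 3) illustrates the computation:

```python
import numpy as np

# Synthetic, perfectly linear stress-strain data for an ideal material with
# E = 1500 MPa; the least-squares slope recovers the elastic modulus.
strain = np.linspace(0.0, 0.01, 50)   # dimensionless strain
stress = 1500.0 * strain              # stress in MPa
E = float(np.polyfit(strain, stress, 1)[0])  # fitted slope = elastic modulus
print(round(E, 6))  # 1500.0
```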
Table 1. Chemical composition of artificial saliva (pH = 6.5).

Compound         Content [g/l]
NaCl             0.6
KCl              0.72
CaCl2·2H2O       0.22
KH2PO4           0.68
Na2HPO4·12H2O    0.856
KSCN             0.06
NaHCO3           1.5
Citric acid      0.03
Table 2. Weight variation of material specimens due to water absorption.

Material   Before aging (g)   After aging (g)   Variation %
Mat 1      0.1334             0.1339            0.375%
Mat 2      0.1255             0.1261            0.438%
Mat 3      0.0974             0.0981            0.719%
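The variation column can be recomputed directly from the reported weights. In this sketch Mat 1 and Mat 3 reproduce the table exactly, while Mat 2 comes out at about 0.478%, which suggests the printed weights for that row were rounded:

```python
# Variation % = (after - before) / before * 100, from the Table 2 weights.
weights = {
    "Mat 1": (0.1334, 0.1339),
    "Mat 2": (0.1255, 0.1261),
    "Mat 3": (0.0974, 0.0981),
}
variation = {m: round((a - b) / b * 100, 3) for m, (b, a) in weights.items()}
print(variation)
```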
Fig. 1. Sample of the stress-strain curve obtained for a set of specimens of the material “Mat 1”
in original supplier condition.
2.2 Geometrical properties
The manufacturing process could introduce unexpected variations in the geometry of the resulting aligner. The measurement of the actual aligner shape is therefore a critical element in order to assess the force system delivered by the appliance. In this work, a method to easily measure the actual thickness of the manufactured aligner is proposed. The method is based on the use of an optical scanner to acquire the physical model used to thermoform the device, with and without the aligner worn on it. The difference between the two acquisitions depends only on the thickness of the aligner.
2.2.1 3D optical measurement
In this work, the focus was on the measurement of the thickness of the aligners, to assess its value along the teeth surfaces. To this purpose, several aligners were manufactured with the same technological approach used for the usual aligner production. Reference models of an upper and a lower human arch were 3D printed by a Stratasys Eden500V PolyJet machine in its high-quality printing setup (slice thickness of 16 μm, accuracy of 200 μm). VeroDent MED670 was used as construction material and SUP705 as supporting material. The supporting material was finally removed with a high-pressure water jet in order to obtain the final models. The models were used to thermoform the aligner samples, with the same parameters used for the thermoformed samples of the tensile tests (§ 2.1.1). Finally, the aligners were manually cut and refined with specific milling tools.
An optical 3D scanner, specifically developed for dental models [14-15], was
used to measure the real shape of the aligner.
The acquisition methodology is based on an active stereo vision approach, which uses binary coded lighting (fringe projection) to recover 3D points of the target surfaces. The hardware setup is composed of a DLP projector (OPTOMA EX330e, XGA resolution, 1024×768 pixels) and an 8-bit monochrome charge-coupled device (CCD) digital camera (The Imaging Source DMK 41BF02, resolution 1280×960 pixels) equipped with a lens having a focal length of 16 mm (PENTAX C31634KP 2/3 C-mount). Camera and projector are used as the active devices of a stereo triangulation process, and the stereo rig has been configured for a working distance of 300 mm and a working volume of 100×80×80 mm, with a resulting lateral resolution of 0.1 mm and an accuracy of 10 μm. A calibration procedure is required to calculate the intrinsic and extrinsic parameters of the optical devices with respect to an absolute reference system. The optical devices have been integrated with a double motorized turntable (Figure 2a). The acquisition system allows for a completely automatic measurement of dental model arches.
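The triangulation principle can be illustrated with a textbook rectified-stereo simplification (this is not the scanner's actual fringe-projection calibration model, and the numbers below are made up for illustration):

```python
# Depth from triangulation via similar triangles: Z = f * B / d, with focal
# length f (pixels), baseline B (mm) and disparity d (pixels). In a fringe
# projection setup, the projector plays the role of the second camera.
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    return f_px * baseline_mm / disparity_px

print(depth_from_disparity(2000.0, 150.0, 1000.0))  # 300.0 (mm)
```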
Fig. 2. Model placed on the acquisition plate of the 3D scanner (a), detail of the 3D printed model with aligner (b) and the result of the acquisition (c).
The methodology developed for the measurement of the aligner geometry consists in the acquisition of the model used for the thermoforming process, with and without the aligner fitted on it. The model and the aligner surfaces were uniformly covered with an optical spray (Occlu Plus, Hager & Werken) in order to obtain a uniform, non-transparent color (Figure 2b). The software Raindrop Geomagic Qualify was used to compare the results. The two acquisitions were aligned in a common reference system by a best-fit matching algorithm. Differences were evaluated in terms of 3D distances between corresponding points of the surfaces and 2D distances between some main slices of the two digital models.
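The comparison step can be approximated in a few lines (our point-cloud simplification, not Geomagic Qualify's actual surface-deviation algorithm): after alignment, the local aligner thickness is estimated as the distance from each point of the "model + aligner" acquisition to the nearest point of the bare-model acquisition.

```python
import numpy as np

def thickness_map(model_pts, covered_pts):
    # Brute-force nearest-neighbour distances; adequate for small clouds.
    diff = covered_pts[:, None, :] - model_pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)

# Toy data: a flat 10x10 "model" grid (1 mm pitch) and the same grid offset
# by 0.75 mm, mimicking a foil of uniform 0.75 mm thickness.
model = np.array([[x, y, 0.0] for x in range(10) for y in range(10)], dtype=float)
covered = model + np.array([0.0, 0.0, 0.75])
t = thickness_map(model, covered)
print(float(t.min()), float(t.max()))  # 0.75 0.75 for this uniform offset
```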
3 Results and discussion
The mechanical properties of the thermoplastic materials used for the production of clear aligners may deviate greatly from the nominal values of elastic modulus and tensile yield stress. The variation can be influenced by several factors, such as the manufacturing technologies and the use conditions. This study focused on the assessment of the materials' mechanical properties by reproducing some of the actual use conditions, in order to quantify the rate of change. Moreover, an assessment of the geometrical shape of the manufactured aligners has been carried out.
3.1 Tensile tests
Tables 3 and 4 and the plots in Figure 3 show the results for the elastic modulus and the tensile yield stress of the materials analyzed under the different conditions.
Table 3. Results from the tensile tests for the elastic modulus E.

Material   Condition                E [MPa]   σ [MPa]
Mat 1      Supplier specification   2200      -
           Before thermoforming     1531      41
           Thermoformed             1693      51
           Aged                     1368      35
Mat 2      Supplier specification   2050      -
           Before thermoforming     1556      48
           Thermoformed             1447      42
           Aged                     1519      62
Mat 3      Supplier specification   -         -
           Before thermoforming     1478      88
           Thermoformed             1730      77
           Aged                     1466      72
The results show that the thermoforming process generally does not lead to a great decrease in the elastic properties of the thermoplastic materials, except for one of the tested materials (Mat 3), which shows a 30% decrease in tensile yield stress after thermoforming. For the other materials, a maximum property variation of 15% with respect to the supplier specifications can be observed. A general reduction of the elastic modulus has been measured with respect to the values declared in the technical sheets (except for Mat 3, for which no technical data were found), while
very similar values were found for the tensile yield stress. In some cases, a slight increase in the values has been obtained after the thermoforming process.
Table 4. Results from the tensile tests for the tensile yield stress.

Material   Condition                Tensile yield stress [MPa]   σ [MPa]
Mat 1      Supplier specification   53                           -
           Before thermoforming     49.29                        0.45
           Thermoformed             53.52                        4.84
           Aged                     49.49                        1.76
Mat 2      Supplier specification   50                           -
           Before thermoforming     52.10                        1.49
           Thermoformed             48.75                        2.57
           Aged                     50.62                        2.88
Mat 3      Supplier specification   -                            -
           Before thermoforming     62.37                        0.90
           Thermoformed             41.92                        2.94
           Aged                     44.61                        1.82
Fig. 3. Distribution of elastic modulus and tensile yield stress in the tested conditions.
The intraoral environment can also lead to changes in the material's mechanical properties. The main influences are the amount of water absorbed and the aggressive character of human saliva. These effects have been taken into account by the tensile tests on aged material. The results, in this case, show that the elastic modulus generally decreases with respect to both the supplier specifications and the simply thermoformed material. It is also worth noting that water absorption could affect the dimensions of the aligner and influence its fit on the patient's arch, with a potential loss of effectiveness due to unpredictable orthodontic forces on the teeth.
3.2 Geometrical measurement
The correct use of thermoplastic aligners in orthodontic practice requires knowledge of the real geometry of the actual aligner, particularly of its thickness along the surfaces. The shape change is certainly influenced by the hygroscopic properties of the material, which need to be assessed. However, it is the standard production technology used for the fabrication of the aligners that has the main influence on the resulting shape of the object. In this work, a method to measure the resulting size of the aligners was proposed, in order to relate the real shape of the object to the technological approach adopted to produce the aligner.
The acquisitions show that the thickness of the artefacts is far from uniform and can vary significantly along the surface. Figure 4 shows the measured shapes of the aligners.
Fig. 4. 3D compare map and 2D measurement on a slice of an upper and a lower arch.
4 Conclusion
The mechanical properties of the materials used for the production of clear aligners are of utmost importance in evaluating the effectiveness of an orthodontic treatment with these devices. The technical data supplied with the materials cannot always be used as a reference, but need to be assessed taking into account the different conditions in which the materials are used. The results obtained in the present work suggest that the magnitude of the effects of the different conditions on the mechanical properties of thermoplastic materials can differ between materials. Therefore, materials for orthodontic appliances should be selected after a detailed characterization of their mechanical properties in a simulated intraoral environment. On the other hand, the force delivery process in the tooth-aligner system is not a fully understood mechanism. Several studies show that it can be greatly influenced by the aligner thickness. In this work, a methodology to measure the geometrical properties of clear aligners is proposed. The results obtained for real cases show that the thickness is far from uniform and can vary strongly along the surface of the tooth. Future work will be dedicated to the validation of the proposed method through its application to the optimization of real orthodontic cases.
References
1. Simon M., Keilig L., Schwarze J., Jung B.A., Bourauel C. Forces and moments generated by
removable thermoplastic aligners: incisor torque, premolar derotation, and molar
distalization, Am. J. Orthod. Dentofacial Orthop., 2014,145:728-736.
2. Miller K.B., McGorray S.P., Womack R., et al. A comparison of treatment impacts between
Invisalign aligner and fixed appliance therapy during the first week of treatment, Am. J.
Orthod. Dentofacial Orthop., 2007, 131:302.e1-302.e9.
3. Zhang N., Bai Y., Ding X., Zhang Y. Preparation and characterization of thermoplastic materials for invisible orthodontics, Dent. Mater. J., 2011,30:954–959.
4. Ma Y.S., Fang D.Y., Zhang N., Ding X.J., Zhang K.Y., Bai Y.X. Mechanical Properties of Orthodontic Thermoplastics PETG, The Chinese journal of dental research, 2016, 19 (1), pp.
43-48.
5. Hahn W., Dathe H., Fialka-Fricke J., et al. Influence of thermoplastic appliance thickness on
the magnitude of force delivered to a maxillary central incisor during tipping, Am. J. Orthod.
Dentofacial Orthop, 2009,136:12.e1–12.e7.
6. Martorelli M., Gerbino S., Giudice M., Ausiello P. A comparison between customized clear
and removable orthodontic appliances manufactured using RP and CNC techniques, Dental
Materials, 2013, 29(2), pp. 1-10
7. Hahn W., Engelke B., Jung K., et al. Initial forces and moments delivered by removable thermoplastic appliances during rotation of an upper central incisor, Angle Orthod., 2010,
80:239–246.
8. Savignano R., Viecilli R.F., Paoli A., Razionale A.V., Barone S. Nonlinear dependency of
tooth movement on force system directions, American Journal of Orthodontics and
Dentofacial Orthopedics, 2016, 149 (6), pp. 838-846.
9. Barone S., Paoli A., Razionale A.V., Savignano R. Design of customised orthodontic devices
by digital imaging and CAD/FEM modelling. In BIOIMAGING 2016 - 3rd International
Conference on Bioimaging, Proceedings, Part of 9th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2016, pp. 44-549.
10. Duffó G. S., Castillo E. Q. Development of an artificial saliva solution for studying the corrosion behavior of dental alloys, Corrosion 60.6, 2004, 594-602.
11. Porcayo-Calderon J., et al. Corrosion Performance of Fe-Cr-Ni Alloys in Artificial Saliva
and Mouthwash Solution, Bioinorganic chemistry and applications, 2015
12. ISO 527-1:2012(en) Plastics – Determination of tensile properties. International Organization
for Standardization, Geneva, Switzerland, 2012
13. Ryokawa H., et al. The mechanical properties of dental thermoplastic materials in a simulated intraoral environment. Orthodontic Waves 65.2, 2006, 64-72.
14. Barone S., Paoli A., Razionale A.V. Computer-aided modelling of three-dimensional maxillofacial tissues through multi-modal imaging, Proceedings of the Institution of Mechanical
Engineers, Part H: Journal of Engineering in Medicine, 2013, 227 (2), pp. 89-104.
15. Barone S., Paoli A., Razionale A.V. Multiple alignments of range maps by active stereo imaging and global marker framing, Optics and Lasers in Engineering, 2013, 51 (2), pp. 116-127.
The design of a knee prosthesis by Finite
Element Analysis
Saúl Íñiguez-Macedo1, Fátima Somovilla-Gómez1, Rubén Lostado-Lorza1*, Marina Corral-Bobadilla1, María Ángeles Martínez-Calvo1, Félix Sanz-Adán1
1 University of La Rioja, Mechanical Engineering Department, Logroño, 26004, La Rioja, Spain.
* Corresponding author. Tel.: +0034 941299727; fax: +0034 941299727. E-mail address: ruben.lostado@unirioja.es
Abstract The purpose of this paper is to study two types of knee prosthesis by means of the Finite Element Method (FEM). The process to generate the Finite Element (FE) models was conducted in several steps. A 3D geometric model of a healthy knee joint was created using 3D scanned data from an anatomical knee model. This healthy model comprises a portion of the long bones (femur, tibia and fibula), as well as the lateral and medial menisci, cartilage and ligaments. The digital model that was obtained was repaired and converted to an engineering drawing format using CATIA© software. Based on this format, two types of artificial knee prostheses were designed and assembled. Mentat Marc© software was used to build the healthy and artificial knee FE models, which were then subjected to different loads. The anthropometry of the human body under study and the combination of loads to apply to the knee were obtained by means of the 3D Static Strength Prediction software (3DSSPP©). The Von Mises stresses, as well as the relative displacements of the components of the healthy and artificial knee FE models, were obtained from the Mentat Marc© software. The Von Mises stresses for both the cortical and the trabecular bone of the artificial and healthy knee FE models were analyzed and compared. The stresses obtained from the FE models of the two knee prostheses that were studied were very similar to those obtained from the healthy FE model.
Keywords: Finite Element Model (FEM), Knee joint, Total Knee Replacement,
Ligaments.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_45
447
1 Introduction
Every year, thousands of patients suffer from severely diseased knee joints, as in rheumatoid arthritis or osteoarthritis. Total Knee Replacement (TKR) can help to relieve pain and to restore the functioning of the knee joint for a return to a more active life [1]. Unicompartmental knee arthroplasty (UKA) is an effective treatment for localized osteoarthritis of the knee joint, and has demonstrated exceptional success rates in a number of studies [2]. The knee is one of the most complex joints in the human body. It is distinguished by its complex geometry and multibody articulations. Optimal joint stability and compliance during functional activities are provided by anatomical structures such as ligaments, menisci, and articular cartilage. However, abnormalities due to age, injury, disease, and other factors can affect the biomechanical functioning of the knee joint [3]. This paper analyzes the behavior of two types of artificial knee prosthesis using the Finite Element Method (FEM). The use of FE analysis has become increasingly popular, as it allows for detailed analysis of the behavior of the joint/tissue under complex, clinically relevant loading conditions. During the past three decades, a large number of FE knee models of varying degrees of complexity, accuracy, and functionality have been reported in the literature [4, 5]. This study presents a healthy Finite Element (FE) model that comprises a portion of the long bones (femur, tibia and fibula), as well as the lateral and medial menisci, cartilage and ligaments. The process to generate the healthy FE model was undertaken in several steps. Initially, a 3D scan of an anatomical knee model generated the first digital model. This digital model was repaired and converted into an engineering drawing format using CATIA© software. A combination of tetrahedral, hexahedral, and line finite elements was used with Mentat Marc© software to build the healthy knee FE model from the engineering drawing format previously generated. The combination of loads to apply to the knee was obtained using the 3D Static Strength Prediction software™ (3DSSPP) and the anthropometry of the patient studied. A nonlinear FE analysis, in which the mechanical contacts among the long bones, the lateral and medial menisci, and the cartilage, as well as the nonlinear behavior of the ligaments, were taken into consideration, was performed to obtain the Von Mises stresses and the displacements of the different components of the healthy knee. Using the healthy knee model that was generated and CATIA© software, two types of artificial knee prostheses were assembled to generate the artificial knee models in the new engineering drawing format. The model in which the two artificial knee prostheses were joined consisted of a group of long bones from which the menisci and cartilage had been removed. Similarly to the process developed for the healthy knee FE model, the two proposed artificial knee models were built using Mentat Marc© software. So that the behavior of the healthy and artificial knee FE models could be compared, a group of identical loads, obtained from the 3DSSPP© software, was applied to both models (healthy and artificial).
The design of a knee prosthesis by Finite Element Analysis
2 Material and Methods
2.1 Knee Anatomy
The knee is one of the largest and most complex joints in the body. The knee joins
the thigh bone (femur) to the shin bone (tibia). The smaller bone that runs alongside the tibia (fibula) and the kneecap (patella) are the other bones that form the
knee joint. Tendons connect the knee bones to the leg muscles that move the knee
joint. Ligaments join the knee bones and provide stability to the knee.
2.2 Total Knee Replacement (TKR)
Knee replacement is surgery for people who have severe knee damage. The
most common cause of chronic knee pain is arthritis. Knee replacement can relieve pain and enable one to be more active. When someone has a total knee replacement, the surgeon removes the damaged cartilage and bone from the surface
of the knee joint and replaces them with a prosthesis that is made of metal and
plastic.
Fig. 1. Total Knee Replacement (TKR): (a) Severe osteoarthritis. (b) The arthritic cartilage and
underlying bone have been removed and resurfaced by implants on the femur and tibia [6]
2.2 Healthy and Artificial Model: 3D Scanning
The process to generate the healthy Finite Element (FE) model was developed
in several steps. First, the *.stl files that were generated in the 3D scanning of an anatomical knee model (Figure 2a) by the Sense 3D Scanner software were imported into CATIA® CAD software. This digital model was repaired, and all of the imported parts were then assembled and converted to an engineering drawing format. The
cartilages that were not reconstructed in the segmentation process were then modelled in order to connect the bones and fill the cartilaginous space. The finished CAD model was imported and assembled in the non-linear FEA package Mentat Marc© (Figure 2b). A combination of tetrahedral, hexahedral, and line finite elements was used to build the healthy knee Finite Element (FE) model from the engineering drawing format that was generated previously. All elements of both the healthy and the artificial knee FE models had a linear formulation, and all contact pairs in both the healthy and the artificial knee were treated with a segment-to-segment contact model.
Fig. 2. (a) Anatomical Knee model; (b) Mentat Marc© FE model
2.3 Finite Element Model (FEM): Healthy and Artificial Models
Using the generated healthy knee model and CATIA© software, two types of artificial knee prostheses were assembled in order to generate the artificial knee models in the new engineering drawing format. The materials used for the two prostheses were titanium alloy and CrCoMo alloy, respectively. Figure 3a shows the titanium alloy prosthesis modeled with CATIA© software. Figure 3b shows the FE model in which a knee arthroplasty, or TKR, has been performed.
Fig. 3. (a) Titanium Alloy Prosthesis. (b) Total Knee Replacement
2.4 Material Properties
The properties of the materials of each of the components described in this article and related to the FE modeling of knee arthroplasty were selected from the literature [7]. The most relevant properties are summarized in Table 1.
Table 1. Material Properties.

Material               Young's Mod. [MPa]   Poisson   Nº Elements   Nº Nodes
Femur                  12000                0.2       271780        33455
Tibia                  12000                0.2       209722        22440
Fibula                 12000                0.2       85882         9612
Trabecular Bone        100                  0.3       113476        13968
Meniscus [8]           250                  0.45      65197         7911
Ligaments [9, 10, 11]  390                  0.4       121803        14375
Titanium Alloy         107000               0.34      25609         7237
CrCoMo Alloy           200000               0.3       25609         7237
HDPE                   1000                 0.46      58465         7010
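For use in an FE pre-processing script, the values of Table 1 can be collected into a simple lookup structure. The following Python sketch is purely illustrative (the class and dictionary names are not part of any FE package's API); it also derives the isotropic shear modulus G = E / (2(1 + ν)), which such scripts often need:

```python
# Material property lookup built from Table 1 (moduli in MPa).
# Illustrative only: the names below are not part of any FE package's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Material:
    young_mpa: float  # Young's modulus [MPa]
    poisson: float    # Poisson's ratio [-]

MATERIALS = {
    "femur":           Material(12000.0, 0.20),
    "tibia":           Material(12000.0, 0.20),
    "fibula":          Material(12000.0, 0.20),
    "trabecular_bone": Material(100.0, 0.30),
    "meniscus":        Material(250.0, 0.45),
    "ligament":        Material(390.0, 0.40),
    "titanium_alloy":  Material(107000.0, 0.34),
    "crcomo_alloy":    Material(200000.0, 0.30),
    "hdpe":            Material(1000.0, 0.46),
}

def shear_modulus_mpa(m: Material) -> float:
    """Isotropic shear modulus G = E / (2 (1 + nu))."""
    return m.young_mpa / (2.0 * (1.0 + m.poisson))
```

Keeping the constants in one place makes it easy to swap the prosthesis material (e.g. titanium alloy for CrCoMo alloy) between runs.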
2.5 Loads
Using the 3D Static Strength Prediction Program (3DSSPP©) [12] and the anthropometry of the patient studied, the combination of loads to apply to the knee was obtained. The magnitudes of the loads are determined by the range of movements involved in the biomechanics of the knee. The 3DSSPP© software provided the loads to apply to the knee of a 30-year-old man with a weight of 120 kg and a height of 1.90 m. In this case, the resulting load to apply in the FE model is about 100 kg. Figure 4a shows the three-dimensional anthropometric model that was obtained with the 3DSSPP© software when climbing stairs is considered. Figure 4b shows the forces acting on the model.
2.6 Boundary Conditions
Once the loads based on the anthropometry and posture studied were obtained with the 3DSSPP© software, they were applied to both the healthy and the artificial FE models. In this case, an embedment at the lower end of the tibia and the fibula was applied to a group of nodes, while the load (100 kg) was applied to a group of nodes at the upper end of the femur. Figure 4c and Figure 4d show, respectively, the boundary conditions for the FE model considering both the load and the restriction of movement.
Fig. 4. (a) Anthropometry of the body studied; (b) human loads arrangement; (c) and (d) boundary conditions applied to the FE models: (c) load on the nodes of the femur and (d) embedment on the nodes of the tibia and fibula
All FE models were simulated on an Intel Xeon server with 8 cores and 32 GB of RAM, running in parallel. The computational time for each of the FE models analyzed was approximately five hours.
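As a minimal sketch of how such a resultant is turned into nodal forces, the fragment below converts the 100 kg load into Newtons and splits it equally over the loaded node set. The node count here is hypothetical, and a real pre-processor would typically weight each node's share by its tributary area:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def nodal_loads(total_mass_kg, n_nodes):
    """Split a resultant gravity load equally over a set of loaded nodes.

    Returns an (n_nodes, 3) array of per-node force vectors [N], pointing
    in -z. The equal split is a simplification for illustration.
    """
    per_node = total_mass_kg * G / n_nodes
    return np.tile([0.0, 0.0, -per_node], (n_nodes, 1))

# 100 kg resultant spread over a (hypothetical) set of 48 femoral-head nodes:
femur_loads = nodal_loads(100.0, 48)
```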
3 Results
Figure 5a shows the relative displacements of the different parts that make up the healthy knee FE model, and Figure 5b shows those of the titanium alloy artificial knee model. The two figures show that the relative displacements of the healthy and the artificial models were very similar. Likewise, the displacements of the CrCoMo alloy artificial knee model were very similar to those of the healthy model. For all the artificial knee FE models studied, it was observed that the resultant forces acting on each of the ligaments of the knee were similar to those obtained in the healthy FE model.
Fig. 5. (a) Relative Displacement of the Healthy FE model. (b) Relative Displacement of the Artificial FE Model
Figure 6 shows the Von Mises stresses in the femoral head of the healthy knee model (Figure 6a) and of the artificial knee model (Figure 6b). It can be seen that the stresses in the femoral head of the healthy FE model reach a value of 3.12 MPa, whereas the artificial FE model shows a highly localized value of 10.42 MPa. This difference between the stresses on the artificial and healthy models is due mainly to the mechanical contact between the bone and the titanium prosthesis.
Fig. 6. (a) Von Mises stresses on the femoral head of the healthy knee model; (b) Von Mises stresses on the femoral head of the artificial knee model, where the femur is in contact with the prosthesis.
Figure 7 also shows the stresses obtained in the ligaments of both the healthy and the artificial FE models studied. The maximum stress in the medial collateral ligaments was 1.495 MPa (Figure 7a), while the ultimate tensile strength of these ligaments is around 39 MPa [9]. Similarly, the maximum stress in the anterior cruciate ligament was 0.3519 MPa (Figure 7b) and 0.279 MPa in the posterior cruciate ligament (Figure 7c), while the ultimate tensile strengths of these ligaments are, respectively, 13 and 30 MPa [9]. This suggests that the resultant forces acting on each of the ligaments of the knee did not exceed their tensile strength.
Fig. 7. (a) Maximum stresses for the medial collateral ligaments; (b) maximum stress for the anterior cruciate ligament; and (c) maximum stress for the posterior cruciate ligament
4 Discussion and Conclusions
The Von Mises stresses for both the cortical and trabecular bones of the artificial and healthy knee FE models were analyzed and compared. The stresses on the two knee prostheses studied with the artificial FE models were very similar to the stresses on the healthy FE model. In addition, the maximum Von Mises stresses were registered in the contact zone of the titanium alloy prosthesis: the stresses in the femoral head of the healthy FE model reached 3.12 MPa, whereas the artificial FE model showed a highly localized value of 10.42 MPa. This difference is due mainly to the localized mechanical contact between the bone and the titanium prosthesis. Moreover, for all the artificial knee FE models studied, the resultant forces acting on each of the ligaments of the knee did not exceed their tensile strength. This study demonstrates that the FEM can be used in combination with 3D design software as an efficient set of tools for the design of human prostheses.
References
1. Hopkins, Andrew R., et al. Finite element analysis of unicompartmental knee arthroplasty.
Medical engineering & physics, 2010, vol. 32, no 1, p. 14-21.
2. Argenson, Jean-Noël A.; Chevrol-Benkeddache, Yamina; Aubaniac, Jean-Manuel. Modern unicompartmental knee arthroplasty with cement. J Bone Joint Surg Am, 2002, vol. 84,
no 12, p. 2235-2239.
3. Kaul, Vikas, et al. Finite Element Model of the Knee for Investigation of Injury Mechanisms:
Development and Validation.
4. Adouni, M., Shirazi-Adl, A., and Shirazi, R., 2012, “Computational Biodynamics of Human
Knee Joint in Gait: From Muscle Forces to Cartilage Stresses,” J. Biomech., 45(12), pp.
2149–2156
5. Baldwin, M. A., Clary, C. W., Fitzpatrick, C. K., Deacy, J. S., Maletsky, L. P., and
Rullkoetter, P. J., 2012, “Dynamic Finite Element Knee Simulation for Evaluation of Knee
Replacement Mechanics,” J. Biomech., 45(3), pp. 474–483.
6. Total Knee Replacement: http://orthoinfo.aaos.org
7. Carr, Brandi C.; Goswami, Tarun. Knee implants–Review of models and biomechanics. Materials & Design, 2009, vol. 30, no 2, p. 398-413.
8. Cowin, Stephen C. The mechanical properties of cancellous bone. CRC Press, Boca Raton,
FL, 1989.
9. Woo, S. L. Y., et al. Functional Tissue Engineering of Ligament and Tendon Injuries. Book
Ch. no 9. Translational Approaches In Tissue Engineering And Regenerative Medicine. 2007.
10. Beillas, P., et al. A new method to investigate in vivo knee behavior using a finite element
model of the lower limb. Journal of biomechanics, 2004, vol. 37, no 7, p. 1019-1030.
11. Vairis, Achilles, et al. Evaluation of a posterior cruciate ligament deficient human knee joint
finite element model. QScience Connect, 2014.
12. 3D Static Strength Prediction Program Version. User's Manual. The University of Michigan
Center for Ergonomics. 2016.
Design and Rapid Manufacturing of a customized foot orthosis: a first methodological study
Fantini M1, De Crescenzio F1*, Brognara L2 and Baldini N2
1 University of Bologna, Department of Industrial Engineering, Bologna, Italy
2 University of Bologna, Biomedical and Neuromotor Sciences, Bologna, Italy
* Corresponding author. Tel.: +39 0543374447. E-mail address:
francesca.decrescenzio@unibo.it
Abstract A feasibility study was performed in order to demonstrate the benefits
of designing and manufacturing a customized foot orthosis by means of digital
technologies, such as Reverse Engineering (RE), Generative Design (GD) and
Additive Manufacturing (AM). The aim of this work was to define the complete
design-manufacturing process, starting from the 3D scanning of the human foot
anatomy to the direct fabrication of the customized foot orthosis. Moreover, this
first methodological study tries to combine a user-friendly semi-automatic modelling approach with the use of low-cost devices for the 3D laser scanning and the
3D printing processes. Finally, the result of this approach, based on digital technologies, was also compared with that achieved by means of conventional manual
techniques.
Keywords: Reverse Engineering, Generative Design, Computer Aided Design,
Additive Manufacturing, Foot orthosis.
1 Introduction
In general, according to the ISO 8549-1:1989 definition, an orthosis is “an externally applied device used to modify the structural and functional characteristics of
the neuromuscular and skeletal system”. In particular, within the medical field, the
foot orthotics is the specialty concerned with the design, manufacturing and application of foot orthoses, which are the functional devices, conceived to correct and
optimize the foot functions. Nowadays, customized foot orthoses are recognized
as the standard for the treatment of foot and lower limb pathologies.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_46
In clinical practice, the traditional methods for manufacturing this kind of device are completely manual and are mainly based on plaster casting and hand fabrication [1, 2]. For example, a typical workflow can be described by the following
phases. First, in order to produce an effective and comfortable orthosis that fits the patient properly and accurately, a plaster-based impression of the foot in the neutral position is taken to obtain a consistent cast. A positive replica of the foot is then developed by filling the negative impression cast, typically with plaster. The next step is the manual modification and smoothing of this replica with additional plaster material, to replicate the adaptation of the soft tissues under load bearing and to address patient-specific requirements (Fig. 1). Subsequently, a rectangular sheet of Low Temperature Thermoplastic (LTT) material, with a thickness of around 2 to 3 mm, is heated in an oven until the plastic reaches its softening point and becomes pliable. The heated plastic sheet is then draped over the corrected positive cast in a vacuum former, and the vacuum is applied in order to obtain the impression of the foot on the sheet. After cooling, the formed LTT sheet is removed from the cast and manually cut to obtain the final shape of the rigid shell for the foot orthosis. Finally, the rigid shell is completed by adding the heel and applying a soft top cover, before the customized functional foot orthosis is delivered to the patient (Fig. 2).
Fig. 1. Plaster cast of the foot in the neutral position (left) and the cast modified with plaster addition (right).
Fig. 2. Rigid shell for the foot orthosis with the heel (left) and final functional foot orthosis (right).
Therefore, this conventional approach, widely used among practitioners, is completely based on manual activities and craft-based processes that depend on the skills and expertise of individual orthotists and podiatrists, who need considerable training and practice in order to reach optimal results. Moreover, this approach is also unpleasant for patients during the cast impression, and the process frequently needs to be repeated if the orthosis fits the foot poorly, making it time-consuming and material-wasting.
On the other hand, novel approaches for designing and manufacturing customized foot and ankle-foot orthoses by means of digital technologies have recently been reported as an alternative to overcome these limitations [3,4,5,6,7,8]. These are generally based on Reverse Engineering (RE), Computer Aided Design (CAD) and Additive Manufacturing (AM), with the following three main activities:
1) positioning the patient in a way that is suitable for 3D laser scanning and
creating a full point cloud of the foot;
2) processing the data to generate the 3D model of the desired foot orthosis according to the clinical needs;
3) manufacturing the customized functional foot orthosis using a 3D printer.
However, regarding the use of digital technologies for producing customized orthoses, some disadvantages that limit the spread of this approach have also been reported [9]. First, the investment required for the RE and AM devices (3D laser scanner and 3D printer) can be considerable. Furthermore, the 3D modelling process is very hard for practitioners who do not have enough knowledge and skill in CAD applications, so the training required to enable them to complete a proper design project autonomously could be prohibitive in time and cost. Fortunately, in recent years many manufacturers of 3D laser scanners and 3D printers have entered the market, resulting in a significant cost reduction for these devices. In addition, design tools and methods are quickly evolving from Computer Aided Design (CAD) into Generative Design (GD), which allows the user to accomplish complex design tasks through a semi-automatic modelling process and to customize the resulting models by interactively modifying certain parameters.
For these reasons, this first methodological study aims to combine the use of low-cost devices for 3D laser scanning and 3D printing with a semi-automatic modelling approach, to assess the feasibility of a user-friendly and cost-effective solution that improves the traditional design-manufacturing process of customized functional foot orthoses. The study was carried out as an interdisciplinary cooperation between the staff of Design and Methods in Industrial Engineering and the staff of Podology. More specifically, a Generative Design (GD) workflow was expressly developed to enable practitioners without advanced CAD skills to easily design and interactively customize foot orthoses. Additionally, the low-cost devices for reverse engineering and additive manufacturing that were acquired by the Podology Lab were tested and compared with the high-cost ones of the Department of Industrial Engineering.
2 Methods
The whole methodology can be divided into three main processes. The first process is the digitization of the foot of the patient by means of 3D laser scanner devices to produce a digital model. The Generative Design process, which was expressly developed for this purpose, then allows a customized foot orthosis to be generated interactively, several of its features to be adjusted, and the watertight mesh to be exported in STL format. Finally, the last process involves the Additive Manufacturing of the physical prototype.
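The watertightness required of the exported STL mesh can be checked directly on the triangle list: a closed, manifold surface has every undirected edge shared by exactly two triangles. A minimal Python sketch of this check (the function name is illustrative, not part of any library):

```python
from collections import Counter

def is_watertight(faces):
    """Return True when every undirected edge is shared by exactly two
    triangles, i.e. the triangle mesh is closed and manifold.

    `faces` is an iterable of (i, j, k) vertex-index triples.
    """
    edge_count = Counter()
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            edge_count[frozenset((a, b))] += 1
    return all(n == 2 for n in edge_count.values())

# A tetrahedron is the smallest closed triangle mesh; removing one face
# opens a boundary and the check fails:
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
```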
2.1 Digitizing process
The digitizing process was carried out by means of two laser scanning devices (Fig. 3).
Initially, a method for scanning the foot of a patient-volunteer was defined that minimizes the time needed to obtain the point cloud and the acquisition errors, while aiming at the repeatability of the measurement. First, the foot of the patient was kept fixed and stable with the help of the practitioner; the scan was then focused only on the lower part of the sole, which is the part used for the design of the custom foot orthosis. To validate the result, the point cloud obtained by scanning the posterior plantar surface of the foot in the neutral position (direct approach) was compared with that obtained by scanning the plaster cast of the same foot taken in the neutral position (indirect approach).
Fig. 3. Vivid 9i laser scanner from Konica Minolta (left) and Sense 3D scanner from 3D Systems
(right).
For this validation, the digitizing process was carried out by means of the Vivid 9i laser scanner (Konica Minolta, Tokyo, Japan). This is a tripod-mounted non-contact 3D digitizer (approximately 55.000 €) that provides high-speed and high-accuracy 3D measurements, based on the principle of laser triangulation. It is provided with the Polygon Editing Tool software to control the scanner and acquire the data.
Afterwards, the models coming from the direct and indirect approaches were loaded into the open source software MeshLab (Visual Computing Lab, ISTI-CNR) [10], version 1.3.3, and the Iterative Closest Point (ICP) algorithm was applied to automatically align the two meshes. Then, to measure the difference between the two meshes, the Hausdorff distance filter was applied. To better visualize the error, the computed distance values were also displayed using a quality colour filter. It was observed that over almost all of the lower part of the sole the error is under 1.5 mm (Fig. 4). The differences resulting from the comparison of the point clouds obtained with the direct and indirect approaches are mainly due to a geometrical discrepancy between the foot plaster model and the real patient foot during the scanning. Differences that can be ascribed to the non-stationarity of the patient during the acquisition process were limited by cropping the upper part of the sole with the toes, which is more susceptible to involuntary movements of the patient.
Fig. 4. Digital models of the foot after indirect (left) and direct (centre) laser scanning with Vivid
9i laser scanner from Konica Minolta; Hausdorff Distance between the two meshes (right).
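The align-and-compare step can be reproduced in a few lines of NumPy: a minimal point-to-point ICP (nearest-neighbour correspondences followed by a Kabsch best-fit rigid transform) and a symmetric Hausdorff distance on the resulting point sets. This is an illustrative re-implementation of what MeshLab's ICP alignment and Hausdorff distance filter perform, not MeshLab's own code, and it operates on raw point clouds rather than meshes:

```python
import numpy as np

def _nearest(src, tgt):
    """For each point of `src`, index of and distance to its nearest `tgt` point."""
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

def icp(source, target, iterations=50):
    """Minimal point-to-point ICP: rigidly align `source` (n, 3) onto `target` (m, 3)."""
    src = source.copy()
    for _ in range(iterations):
        idx, _ = _nearest(src, target)            # 1. correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)     # 2. Kabsch best-fit transform
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
        t = mu_t - R @ mu_s
        src = src @ R.T + t                       # 3. apply the increment
    return src

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(_nearest(a, b)[1].max(), _nearest(b, a)[1].max())
```

On two scans of the same surface, a Hausdorff value below the clinical tolerance (here 1.5 mm) indicates the scans agree well enough for orthosis design.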
Additionally, a second capturing device was used: the Sense 3D scanner (3D Systems, Rock Hill, South Carolina, USA). This is a low-cost handheld device (approximately 400 €) that projects a pattern onto the surroundings using an infrared laser. The comparison was carried out by scanning the foot plaster cast, instead of the real foot itself, to avoid errors due to the different positions of the foot potentially held by the patient-volunteer.
After laser scanning, the digital models of the foot plaster cast obtained by the Vivid 9i laser scanner (575.408 vertices; 287.706 faces) and by the Sense 3D scanner (12.044 vertices; 24.049 faces) were compared in MeshLab. Once the two meshes had been aligned by applying the ICP algorithm, the Hausdorff distance between the two meshes was computed and visualized by applying a quality colour filter. An error under 1.5 mm was observed in the plantar surface of the foot (Fig. 5).
Fig. 5. Digital models of the foot after laser scanning with Vivid 9i laser scanner from Konica
Minolta (left) and with Sense 3D scanner from 3D Systems (centre); Hausdorff Distance between the two meshes (right).
2.2 Generative Design process
In recent years, from the perspective of designers, tools and methods have been quickly evolving from Computer Aided Design (CAD) into Generative Design (GD) in different application fields, such as architecture, jewellery and industrial design.
The most important aspect of GD is that it allows the generation of an infinite number of shapes that follow specific rules, since GD is not about designing a shape, but about designing the process that builds the shape. This approach allows the user to accomplish complex design tasks through a semi-automatic modelling process and to customize the resulting geometrical models by interactively modifying certain parameters.
In practice, while Rhinoceros 3D (McNeel, Seattle, WA, USA) is a CAD environment widely used for industrial design, which allows freeform modelling at any level of size and complexity, Grasshopper is a graphical algorithm editor tightly integrated with the Rhinoceros 3D modelling tools. It is conceived to create parametric 3D geometries by dragging components onto a canvas interface and then visualizing the modelling results within the Rhinoceros 3D CAD environment.
Therefore, as regards the design process, instead of following a classical CAD modelling approach, we formalized a GD workflow that contains the rules to generate a foot orthosis customized to the specific patient anatomy, according to the clinical needs (Fig. 6a). Moreover, this method allows the geometrical features of the foot orthosis (shell thickness, heel size, etc.) to be modified interactively by simply moving the sliders of the control panel (Fig. 6b), producing in a semi-automatic way the watertight mesh ready for the subsequent AM process.
In the GD workflow, the input data are the mesh of the foot and three reference points, corresponding to the first and the fifth metatarsophalangeal joints and to the sustentaculum tali, which must be marked by the practitioner on the plantar surface of the mesh. This is the only activity requested of the practitioner, since these three points allow the automatic orientation of the mesh in the CAD environment, which starts the automatic design process. The outline of the mesh is used to represent the contour of the foot, while the first and the fifth metatarsophalangeal joints are also used to mark the line that indicates the anterior border of the shell of the orthosis (Fig. 7b). Subsequently, two orthogonal series of lines are projected onto the plantar surface of the mesh and then used to create a blended surface according to the morphology of the foot (Fig. 7c). This blended surface is then thickened and trimmed to create the shape of the shell of the orthosis (Fig. 7d). The customization process continues by allowing the practitioner to adjust several features of the foot orthosis, such as the heel size, chamfer and cut, and the shell slant bevel (Fig. 7e). Finally, this method yields the watertight mesh of the customized foot orthosis (in STL format), which can be directly manufactured by means of AM technologies (Fig. 7f).
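The automatic orientation from the three marked landmarks can be sketched as the construction of a foot-aligned orthonormal frame in which the mesh vertices are re-expressed. The NumPy fragment below is illustrative; the axis conventions and the function name are our assumptions, not the actual Grasshopper definition:

```python
import numpy as np

def orient_from_landmarks(vertices, mtp1, mtp5, sustentaculum):
    """Re-express mesh vertices in a frame built from the three landmarks.

    `mtp1`/`mtp5` are the first/fifth metatarsophalangeal joint points and
    `sustentaculum` the sustentaculum tali; the landmark plane becomes z = 0.
    """
    mtp1, mtp5, sus = (np.asarray(p, float) for p in (mtp1, mtp5, sustentaculum))
    x = mtp5 - mtp1                     # medio-lateral direction
    x /= np.linalg.norm(x)
    n = np.cross(x, sus - mtp1)         # normal of the landmark plane
    n /= np.linalg.norm(n)
    y = np.cross(n, x)                  # completes a right-handed frame
    R = np.stack([x, y, n])             # rows are the new axes
    origin = (mtp1 + mtp5 + sus) / 3.0  # centroid of the landmarks
    return (np.asarray(vertices, float) - origin) @ R.T
```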
Fig. 6. Starting part of the GD workflow with the input data (a) and sliders of the control panel for interactively modifying the geometrical features of the foot orthosis (b).
Fig. 7. Some steps of the modelling process: (a) input data: mesh of the foot and three reference points; (b) outline of the foot and border of the shell; (c) blended surface on the mesh; (d) basic shape of the shell; (e) final model of the shell; and (f) watertight mesh of the customized foot orthosis in STL format.
2.3 Additive Manufacturing process
As regards the AM process, the customized foot orthosis was manufactured by means of two different 3D printers (Fig. 8), both based on Fused Deposition Modelling (FDM). This technology builds parts by heating and extruding a thermoplastic filament. In general, the whole process is divided into three steps:
1. Pre-processing: the 3D printing preparation software orients and slices the
mesh (in STL format), defines any necessary support material and calculates
the extrusion path of the thermoplastic filament.
2. Production: the 3D printer heats the thermoplastic filament to a semi-molten state and deposits it layer by layer along the extrusion path. Where needed, the 3D printer also deposits a removable material (soluble or not) that acts as a support for the part being built.
3. Post-processing: the user breaks away any support material or, if it is soluble, dissolves it in a hot water-soap bath, to obtain the part ready for use.
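The slicing in step 1 reduces to intersecting every mesh triangle with a horizontal plane per layer. The sketch below (illustrative; not the algorithm used by Insight or Cura) returns the raw contour segments for a single layer, which a slicer would then chain into closed loops before computing the extrusion path; triangles lying exactly in the plane are ignored for simplicity:

```python
import numpy as np

def slice_mesh(triangles, z):
    """Intersect a triangle mesh with the horizontal plane at height z.

    `triangles` is an (n, 3, 3) array-like of triangle vertices; returns a
    list of 2-D segments ((x0, y0), (x1, y1)) in that plane.
    """
    segments = []
    for tri in np.asarray(triangles, float):
        points = []
        for a, b in ((0, 1), (1, 2), (2, 0)):
            za, zb = tri[a, 2], tri[b, 2]
            if (za - z) * (zb - z) < 0:          # the edge crosses the plane
                t = (z - za) / (zb - za)
                p = tri[a] + t * (tri[b] - tri[a])
                points.append((p[0], p[1]))
        if len(points) == 2:                     # a cut triangle yields one segment
            segments.append(tuple(points))
    return segments
```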
Fig. 8. Stratasys Fortus 250mc (left) and Wasp Delta 40 70 (right).
First, the mesh of the customized foot orthosis was manufactured in ABSplus (Acrylonitrile Butadiene Styrene) by means of the Fortus 250mc (Stratasys Inc., Eden Prairie, MN, USA). Pre-processing was carried out beforehand with Insight, the Stratasys job processing and management software (Stratasys Inc., Eden Prairie, MN, USA). This system (approximately 40.000 €) is provided with two extruding nozzles and works with a soluble support material, allowing hands-free support removal in post-processing and thus an easy finishing of the final shell.
In addition, the same mesh was also manufactured using a PLA (Polylactic Acid) filament (1.75 mm diameter) by means of the Wasp Delta 40 70 (CSP s.r.l., Massa Lombarda, Italy), a low-cost 3D printer (approximately 5.500 €). The open source software Cura, version 15.04.5 (Ultimaker, Netherlands), was used for the pre-processing. Since this system has just one extruding nozzle, the support material is the same as the building material and was removed manually in the post-processing. However, due to the simple shape of the customized foot orthosis, no particular effort was needed to obtain the final shell.
No remarkable difference can be observed with respect to the previous 3D printed model (Fig. 9).
Fig. 9. Rigid shell for customized foot orthosis manufactured using ABSplus filament by means
of Stratasys Fortus 250mc (left) and PLA filament by means of Wasp Delta 40 70 (right).
3 Discussion
This methodological study describes a novel approach to the design and manufacturing of customized foot orthoses, and several points deserve discussion.
First, as regards the digitizing process, the point cloud resulting from laser scanning with the low-cost system (Sense 3D scanner) appears accurate enough for the present practical purposes. However, the acquisition process should be completed in a short time, since the patient has to stay relaxed with the foot firmly fixed in the neutral position. Therefore, some practice sessions would be very useful for the operators before facing a real patient.
Then, with respect to the Generative Design process, the proposed workflow in Grasshopper is intuitive and allows the final foot orthosis to be customized easily and interactively. Moreover, this workflow could be modified and improved in order to semi-automatically design specific devices that satisfy the demands of patients with specific pathologies, for example with an accentuated valgus or varus deformity.
Finally, regarding the Additive Manufacturing process, the low-cost 3D printer (Wasp Delta 40 70) is capable of providing adequate results for the shell of the foot orthosis. Moreover, this system appears more versatile because of its ability to print a wide range of different filaments. Therefore, since the market of 3D printing filaments is rapidly growing, further tests with different materials (both flexible and rigid) can be performed to find the most suitable one. In addition, both the patient-volunteer and the practitioners were asked for feedback, and positive responses were collected.
Besides the advantages due to the better fit of the orthosis to the sole of the foot, a number of advantages from the Design for Manufacturing and Design for Environment points of view can also be highlighted. To mention just a few, this practice is less invasive and more comfortable for the patient, it is a cleaner process for the practitioner to deal with, and it dramatically reduces the waste material. Moreover, the digital model can be used for further developments, such as integration with electronic components for smart technology testing.
4 Conclusions
This first methodological study has validated, in terms of feasibility, the use of a GD modelling approach, in combination with low-cost devices for 3D laser scanning and 3D printing, as a real alternative to conventional processes for creating customized foot orthoses.
The study was carried out as an interdisciplinary cooperation between the staff of Design and Methods in Industrial Engineering and the staff of Podology, also in order to transfer skills and knowledge to all the practitioners involved. Some feasibility tests involving the medical staff indicated that a customized foot orthosis can be designed in a very intuitive way by a non-experienced user in less than 20 minutes.
Moreover, the low-cost devices for reverse engineering (Sense 3D scanner) and additive manufacturing (Wasp Delta 40 70) that were acquired by the Podology Lab were also shown to be suitable for this kind of application.
Influence of the metaphysis positioning in a new
reverse shoulder prosthesis
T. Ingrassia a)*, L. Nalbone b), Vincenzo Nigrelli a), D. Pisciotta a), V. Ricotta a)
a) Università degli Studi di Palermo, Dipartimento di Ingegneria Chimica, Gestionale,
Informatica, Meccanica, Viale delle Scienze – 90128 Palermo, Italy,
b) Ambulatorio di Ortopedia e Traumatologia, Azienda Ospedaliera Universitaria Policlinico
Paolo Giaccone di Palermo, 90100 Palermo, Italy
* Corresponding author. Tel.: +39 091 23897263; E-mail address: tommaso.ingrassia@unipa.it
Abstract The aim of this work is to investigate the behaviour of a new reverse
shoulder prosthesis, characterized by a humeral metaphysis with a variable offset,
designed to increase the range of movement and to reduce impingement. In
particular, by means of virtual prototypes of the prosthesis, different offset values
of the humeral metaphysis have been analysed in order to find the positioning
that maximizes the range of movement of the shoulder joint. The abduction
force of the deltoid at the different offset values has also been estimated. The study
has been organized as follows. In the first step, the point clouds of the surfaces of
the different components of the prosthesis have been acquired with a 3D scanner;
this kind of scanner converts camera images into three-dimensional
models by analysing moiré fringes. In the second step, the acquired point
clouds have been post-processed and converted into CAD models. In the third
step, all the reconstructed 3D models have been imported and assembled in a
CAD system. Then, a collision analysis has been performed to detect the
maximum angular positions of the arm at the different metaphysis offset values. In the
last step, FEM models of the shoulder joint with the new prosthesis have been created, and
different analyses have been performed to estimate how the deltoid abduction
force varies with the offset of the humeral tray. The study clarifies how
the offset of the metaphysis affects the performance of the
shoulder, and the obtained results can be effectively used to give surgeons useful
guidelines for the installation of this kind of implant.
Keywords: Reverse Engineering, CAD, Reverse Shoulder Prosthesis, Range of
Movements
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_47
1 Introduction
Many pathologies of the shoulder joint are nowadays increasingly treated
using partial or total prostheses. This has drawn growing attention to the
study of shoulder prostheses from both the clinical and the engineering
points of view.
In this context, the reverse shoulder prosthesis represents, nowadays, the most
common solution for patients with disabling arthrosis and/or a severe injury of the
rotator cuff. In these cases, the reverse shoulder prosthesis compensates for the
muscle-tendon deficit by taking advantage of the large surface muscles, such as
the deltoid, to enable the abduction of the upper limb.
Paul Grammont’s original reverse prosthesis, introduced in 1985,
revolutionized traditional shoulder arthroplasty thanks to its novel design. The
system is based on four key principles [1], necessary to provide a stable construct
and to allow the deltoid to compensate for a serious injury of the rotator cuff:
• the centre of rotation must be fixed, lowered and medialized;
• the prosthesis must be inherently stable;
• the lever arm of the deltoid must be effective from the start of the movement;
• the glenosphere must be large and the humeral cup small, to create a semi-constrained articulation.
The main components of a reverse shoulder prosthesis are (Fig. 1): a humeral
stem and a metaphysis, a polyethylene insert, a metaglene and a glenosphere.
Fig. 1. Components of a reverse shoulder prosthesis.
The main difference between a conventional (or anatomical) shoulder
prosthesis and a reverse one lies in the way the joint geometry is reproduced.
The reverse shoulder prosthesis (Fig. 1), in fact, inverts the anatomical
shoulder configuration by fixing a glenosphere to the scapula and a concave
polyethylene insert (cup) to the humerus.
In this way, the anatomical gleno-humeral joint is inverted by creating a
concavity in the humerus and a convexity in the scapula. This inversion lowers
and medializes the position of the rotational centre [2,3], thus increasing,
during arm abduction, the efficiency of the deltoid [4,5] which, when a severe
rotator cuff arthropathy occurs, is the only muscle able to ensure the stability of
the shoulder joint and the movements of the upper limb [6,7].
In recent years, many researchers have studied the characteristics and
performances of different reverse prostheses, also introducing innovative
solutions. The most common studies concern the Range Of Movement
(ROM) [8], scapular notching [9], the loosening of the glenoid prosthetic component
[10] and the intrinsic instability [11-13] of the prosthetic system. Recently, innovative
reverse shoulder prostheses have been introduced: these new systems have off-axis pins that allow different offsets of the humeral tray (or metaphysis) to be obtained.
In the literature, there is little information about the influence of the humeral
metaphysis positioning on the performance of reverse shoulder prostheses.
For this reason, in this work the effect of the humeral tray position has been
analysed to understand how it affects the range of movement and the abduction
force.
To this aim, digital models of the shoulder bones and the prosthetic
components have been created and virtual simulations [14-17] have been
performed.
2 Case study: a new reverse shoulder prosthesis
The innovative reverse shoulder prosthesis studied in this work is the Aequalis
Ascend™ Flex by Tornier (Fig. 2-left).
Fig. 2. Reverse shoulder prosthesis Aequalis Ascend™ Flex (on the left); limit positions and
back side of the humeral tray (on the right).
The innovative characteristic of the Aequalis Ascend™ Flex is its
humeral tray which, thanks to an off-axis pin (Fig. 2-right), can assure a variable
offset. Before assembling the prosthesis components, in fact, the chosen position
of the metaphysis can be fixed by means of a graduated scale (Fig. 2-right).
2.1 3D acquisition and CAD modelling
The shapes of the prosthetic components have been acquired with the COMET 5,
a 3D scanning system. This system is composed of an 11-megapixel camera, a laser
source, a workstation and the COMET Plus software, which manages all the data
from the scanning phase to the export of the point clouds.
The laser source projects fringe patterns onto the object to be acquired and the
digital camera captures the related images. The projected fringe patterns are
deformed coherently with the shape of the external surfaces of the object. The
analysis of the fringes allows, according to the moiré principle [19], the
acquired images to be translated into three-dimensional point clouds. The system has a measuring
volume that can vary from 80 to 1,000 mm³, an accuracy threshold of about 5 μm
and a very short acquisition time (about 1 s).
The developed acquisition procedure is quickly summarized here. First, the
surfaces of the prosthesis components have been opacified with a matt white
spray in order to minimize spurious reflections. Then, regular fringe
patterns have been projected onto the objects’ surfaces by means of the laser source.
Multiple images have been acquired by rotating the objects around a vertical axis.
All the images have been processed in order to obtain a point-by-point description
of the scanned surfaces.
The point clouds have been post-processed and interpolated into NURBS
surfaces. In the final step of the process, the acquired surfaces have been
converted into CAD solid models (Fig. 3).
Fig. 3. a) Point cloud, b) NURBS surfaces, c) CAD model of the acquired humeral stem
2.2 Virtual assembling
To perform the virtual biomechanical study of the shoulder joint, the digitized
prosthetic components have been assembled with the CAD models of the shoulder
bones.
The assembly of the CAD models of the bones and the prosthesis follows,
step by step, the surgical guidelines provided by Tornier for
a total reverse shoulder arthroplasty. All the parts have been assembled using a 3D
parametric CAD system. The final assembly model (Fig. 4) has been fully
parametrized, so that the positioning of all the components can be modified in
a very simple and fast way during the virtual simulations.
Fig. 4. Assembly of the shoulder joint components (on the left) and rendering (on the right).
3 Kinematic study of the new reverse shoulder prosthesis
To understand how the metaphysis positioning influences the range of
movement of the shoulder joint, several collision analyses have been performed in
a virtual environment. In particular, the assembled model of the shoulder and the
prosthesis has been studied by simulating the arm movements in a 3D CAD
system. The (allowable) extreme positions of the arm have been detected as the
positions at which two components of the shoulder joint collide with each other (Fig. 5).
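The limit-position search described above can be sketched as a simple angular sweep that stops at the first interference. The interference test below is a toy sphere pair, not the CAD system's collision solver, and all geometry values are invented for illustration:

```python
import numpy as np

def max_angle_before_collision(collides, step_deg=0.1, max_deg=180.0):
    """Sweep the angle upward and return the last collision-free value (deg)."""
    angle = 0.0
    while angle + step_deg <= max_deg and not collides(angle + step_deg):
        angle += step_deg
    return angle

def collides(angle_deg):
    """Toy interference test: a sphere orbiting at radius 2 against a fixed
    obstacle (a hypothetical stand-in for two prosthesis components)."""
    a = np.radians(angle_deg)
    moving_centre = 2.0 * np.array([np.cos(a), np.sin(a)])
    obstacle_centre = np.array([2.0, 1.0])
    return np.linalg.norm(moving_centre - obstacle_centre) < 0.5

limit = max_angle_before_collision(collides)  # last collision-free angle (deg)
```

In practice, a CAD collision solver replaces the toy predicate, and the sweep step trades resolution against the number of interference queries.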
Fig. 5. Collision observed during the measurement of the maximum abduction angle
Four different configurations were studied (Fig. 6): a) lateral offset (grade 6), b)
posterior offset (grade 9), c) medial offset (grade 12), d) anterior offset (grade 3).
Fig. 6. Lateral (a), posterior (b), medial (c) and anterior (d) offset
For all the analysed configurations of the metaphysis, the ROM of the shoulder
joint has been investigated by finding the limit positions of the arm during
different movements: abduction, adduction, internal and external rotation.
The limit positions have been identified by measuring the maximum angle
values as follows: for abduction and adduction, the angle between the humerus
axis and, respectively, the sagittal and transversal planes; for internal and external
rotation, the absolute value of the angle between the sagittal plane and the
projection of the metaphysis axis on the transverse plane.
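Numerically, the angle between a segment axis and an anatomical plane follows from the plane normal, as in this sketch (the axis vector and the choice of the medio-lateral direction as the sagittal-plane normal are illustrative assumptions, not the authors' code):

```python
import numpy as np

def angle_to_plane_deg(axis, plane_normal):
    """Angle (deg) between a line with direction `axis` and a plane with
    normal `plane_normal`: 90 deg minus the angle to the normal."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    n = np.asarray(plane_normal, float) / np.linalg.norm(plane_normal)
    return np.degrees(np.arcsin(abs(axis @ n)))

# Humerus axis abducted 73.23 deg away from the sagittal plane,
# whose normal is taken here as the medio-lateral (x) direction
a = np.radians(73.23)
humerus_axis = np.array([np.sin(a), np.cos(a), 0.0])
angle = angle_to_plane_deg(humerus_axis, [1.0, 0.0, 0.0])  # recovers 73.23
```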
The obtained results are summarized in Table 1.
Table 1. Maximum angle values measured for each metaphysis offset.
Offset (grade)   Abduction   Adduction   Internal rotation   External rotation
Medial (12)      62.08°      86.38°      25.77°              68.97°
Anterior (3)     67.26°      85.00°      25.96°              68.02°
Lateral (6)      73.23°      87.05°      25.08°              68.36°
Posterior (9)    69.02°      87.30°      24.99°              67.95°
It can be observed that the internal and external rotations are substantially
unaffected by the metaphysis positioning. As regards the adduction movement, it
can be slightly improved by choosing a posterior offset. The most considerable
improvement concerns the abduction: changing the metaphysis offset from
medial to lateral, in fact, increases the maximum angle from
62.08° to 73.23°, an increment of about 18%.
4 Numerical study of the new reverse shoulder prosthesis
To understand how the humeral tray position affects the force required to
abduct the arm, non-linear FEM simulations [20-22] have been
executed. For each analysed humeral tray offset, comparative analyses have been
performed by imposing a 10 N vertical load at the elbow and measuring the
force needed to keep the arm abducted. The FEM model of the shoulder joint
has been created by importing the CAD assembly into Ansys Workbench and
meshing it with about 60,000 eight-node solid elements (Fig. 7).
Fig. 7. FEM model of the shoulder joint
Then, the contacts between the assembled components and the boundary
conditions have been imposed. In particular, the scapula has been fully
constrained and a frictional contact, based on the augmented Lagrange algorithm
[23], has been imposed between the glenosphere and the polyethylene insert. All
the remaining contact pairs have been considered bonded. As suggested in the
literature [6, 7], it has been assumed that the deltoid is the only muscle acting
during the abduction of the arm. It has been modelled as an inextensible spring
whose ends correspond to the anchor points of the deltoid. The positions of the
virtual anchor points on the scapula and humerus sides have been located
using kinematic models common in the literature [24]. The considered abduction
angle is 80°.
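The order of magnitude of such an abduction force can be checked with a planar static moment balance about the centre of rotation. The moment arms below are invented for illustration only and do not come from the FEM model:

```python
def deltoid_force(load_n, load_arm_mm, deltoid_arm_mm):
    """Static equilibrium about the joint centre:
    F_deltoid * deltoid_arm = external_load * load_arm."""
    return load_n * load_arm_mm / deltoid_arm_mm

# 10 N vertical load at the elbow, with hypothetical moment arms
force = deltoid_force(10.0, 300.0, 140.0)  # about 21.4 N
```

This simple balance illustrates why a larger deltoid moment arm (e.g. a more lateral rotation centre) lowers the force the muscle must supply.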
The obtained results are summarized in Table 2.
Table 2. Deltoid abduction force for each metaphysis offset.
Offset (grade)   Abduction Force (N)   Deltoid length (mm)
Medial (12)      21.34                 138.07
Anterior (3)     21.78                 140.73
Lateral (6)      22.46                 143.90
Posterior (9)    22.08                 140.43
It can be noticed that the abduction force varies only slightly with the
metaphysis offset. The extreme values have been found for the medial (21.34 N)
and the lateral (22.46 N) offsets. Moreover, it can be observed that the humeral tray
position affects the equivalent length of the deltoid: from the medial to the lateral offset,
in fact, the deltoid length increases by about 6 mm. This information could be very
useful especially when elderly people, whose tendons and muscles are not very
elastic, have to be treated with a reverse shoulder arthroplasty.
5 Conclusions
In this work, a new reverse shoulder prosthesis characterized by a metaphysis
with a variable offset has been studied, in order to detect how its positioning
modifies the ROM of the shoulder and affects the force needed to abduct the arm.
The geometries of the prosthesis components have been acquired by 3D scanning
techniques. Digital models of the shoulder bones and the prosthetic parts, obtained
by interpolating the point-by-point raw acquisition data, have been imported into a
CAD system and parametrically assembled. Virtual simulations have been set up
to measure the limit positions of the arm during abduction, adduction, internal
and external rotation, as a function of the offset values of the metaphysis.
Interesting information has also been obtained, by FEM numerical simulations, on
the force that the deltoid must apply to abduct the arm.
The study highlighted how the offset of the metaphysis affects the
performance of a shoulder with a reverse prosthesis. It has been found that the
internal and external rotations are not influenced by the metaphysis offset which,
instead, strongly affects the maximum abduction angle. It also emerged that a
higher force is needed to abduct the arm when the humeral tray is positioned with
a lateral offset; moreover, this kind of offset requires a larger elongation of the
muscle and tendons. This should be taken into account when patients with a severe
injury of the rotator cuff must be treated.
The obtained information can be useful and may constitute important
guidelines for surgeons during the installation of this kind of prosthesis. The
proposed procedure, moreover, could be automated to perform customized
analyses in order to find the optimal humeral tray position depending on the
particular shape and dimensions of the shoulder.
References
1. Berliner, J. L., et al., Biomechanics of reverse total shoulder arthroplasty. J Shoulder Elbow Surg (2015) 24, 150-160.
2. Grammont, P. M., Baulot, E., Delta shoulder prosthesis for rotator cuff rupture. Orthopedics (1993) 16, 65-68.
3. Hoenecke, H., et al., Reverse total shoulder arthroplasty component center of rotation affects muscle function. J Shoulder Elbow Surg (2014) 23, 1128-1135.
4. Walker, D. R., Struk, A. M., Matsuki, K., How do deltoid muscle moment arms change after reverse total shoulder arthroplasty? J Shoulder Elbow Surg (2015) 1-8.
5. Henninger, H. B., Barg, A., Anderson, A. E., Bachus, K. N., Effect of deltoid tension and humeral version in reverse total shoulder arthroplasty: a biomechanical study. J Shoulder Elbow Surg (2012) 21, 483-490.
6. Giles, J. W., et al., Implant design variations in reverse total shoulder arthroplasty influence the required deltoid force and resultant joint load. Clin Orthop Relat Res (2015) 473, 3615-3626. DOI 10.1007/s11999-015-4526-0.
7. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., Numerical study of the components positioning influence on the stability of a reverse shoulder prosthesis. International Journal on Interactive Design and Manufacturing (2014) 8 (3), 187-197.
8. Frankle, M. A., Cuff, D., Levy, J. C., Gutiérrez, S., Evaluation of abduction range of motion and avoidance of inferior scapular impingement in a reverse shoulder model. J Shoulder Elbow Surg (2008) 17 (4), 608-615.
9. Simovitch, R. W., Zumstein, M. A., Lohri, E., Helmy, N., Gerber, C., Predictors of scapular notching in patients managed with the Delta III reverse total shoulder replacement. J Bone Joint Surg (2007) 89, 588-600.
10. Franklin, J. L., Barrett, W. P., Jackins, S. E., Matsen, F. A., Glenoid loosening in total shoulder arthroplasty. Association with rotator cuff deficiency. J Arthroplasty (1988) 3, 39-46.
11. Nalbone, L., et al., Optimal positioning of the humeral component in the reverse shoulder prosthesis. Musculoskeletal Surgery (2014) 98 (2), 135-142.
12. Gutiérrez, S., Levy, J. C., Lee, W. E., Center of rotation affects abduction range of motion of reverse shoulder arthroplasty. Clin Orthop Relat Res (2007) 458, 78-82.
13. Frankle, M. A., Luo, Z. P., Gutiérrez, S., Levy, J. C., Arc of motion and socket depth in reverse shoulder implants. Clin Biomech (2009) 24 (6), 473-479.
14. Ingrassia, T., Nigrelli, V., Design optimization and analysis of a new rear underrun protective device for truck. Proceedings of the 8th International Symposium on Tools and Methods of Competitive Engineering, TMCE 2010 (2010) 2, 713-725.
15. Cerniglia, D., Montinaro, N., Nigrelli, V., Detection of disbonds in multi-layer structures by laser-based ultrasonic technique. Journal of Adhesion (2008) 84 (10), 811-829.
16. Ingrassia, T., Mancuso, A., Nigrelli, V., Tumino, D., A multi-technique simultaneous approach for the design of a sailing yacht. International Journal on Interactive Design and Manufacturing (2015). DOI 10.1007/s12008-015-0267-2.
17. Cappello, F., Ingrassia, T., Mancuso, A., Nigrelli, V., Methodical redesign of a semitrailer. WIT Transactions on the Built Environment (2005) 80, 359-369.
18. Chen, F., Brown, G. M., Song, M., Overview of three-dimensional shape measurement using optical methods. Opt Eng (2000) 39 (1), 10-22.
19. http://www.geomagic.com/en/products/studio/overview.
20. Fragapane, S., Giallanza, A., Cannizzaro, L., Pasta, A., Marannano, G., Experimental and numerical analysis of aluminum-aluminum bolted joints subject to an indentation process. International Journal of Fatigue (2015) 80, 332-340.
21. Ingrassia, T., Nigrelli, V., Buttitta, R., A comparison of simplex and simulated annealing for optimization of a new rear underrun protective device. Engineering with Computers (2013) 29, 345-358.
22. Marannano, G., Mariotti, G. V., Structural optimization and experimental analysis of composite material panels for naval use. Meccanica (2008) 43 (2), 251-262.
23. Cerniglia, D., Ingrassia, T., D'Acquisto, L., Saporito, M., Tumino, D., Contact between the components of a knee prosthesis: numerical and experimental study. Frattura ed Integrità Strutturale (2012) 22, 56-68.
24. Pennestrì, E., et al., Virtual musculo-skeletal model for the biomechanical analysis of the upper limb. Journal of Biomechanics (2007) 40, 1350-1361.
Digital human models for gait analysis:
experimental validation of static force analysis
tools under dynamic conditions
T. Caporaso 1*, G. Di Gironimo 1, A. Tarallo 1, G. De Martino 2, M. Di Ludovico 2 and A. Lanzotti 1*
1 University of Naples Federico II – Fraunhofer JL IDEAS, DII, P.le Tecchio 80, 80125 Napoli, Italy
2 University of Naples Federico II, DIST, Via Claudio 21, 80125 Napoli, Italy
* Corresponding author - E-mail address: teodorico.caporaso@unina.it
Abstract This work explores the use of an industry-oriented digital human modelling tool for the estimation of the musculoskeletal loads corresponding to a
simulated human activity. The error in using a static analysis tool to measure
articulation loads under non-static conditions is assessed with reference to an accurate dynamic model and data from real experiments. The results show that, for slow
movements, static analysis tools provide a good approximation of the actual loads
affecting the human musculoskeletal system during walking.
Keywords: Gait analysis; Virtual simulation; Biomechanics; Kinematics; Dynamics
1 Introduction
Gait analysis is the systematic study of human locomotion that provides a quantification of body movements and biomechanics [1]. It uses measurements from motion capture (MoCap) systems and other devices (e.g. force platforms, inertial sensors, surface electromyography systems) to evaluate the human walk cycle. As a
result, the kinematic and kinetic parameters of human gait and their relationships with
neuromuscular functions can be analysed. The study of human locomotion has
been widely used for various purposes, such as the diagnosis and treatment of neuromuscular diseases (e.g. cerebral palsy, apoplexy, multiple sclerosis, Parkinson’s
disease), sports rehabilitation and performance, the design and evaluation of orthoses,
and even security [2, 3].
Virtual reality is another powerful tool for investigating human locomotion. It indeed
allows human motion patterns to be reproduced and simulated through the so-called digital
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_48
human modelling (DHM), which in brief is the technique of simulating the body movements of people of different height, weight, age and sex. For instance, the most
common human motion patterns (e.g. fit, reach, grasp, balance, move, lift and
walk) can be simulated, along with the articulation movements and the loads affecting the musculoskeletal system.
DHM is already used for very different applications, such as vehicle ergonomics,
cycle-time estimation of manual production and the follow-up of medical treatments
in individuals with pathological gait [4]. The use of digital humans in such different fields has led to the birth of several software applications that pay more and
more attention to biomechanics. Santos [5] and AnyBody [6] are two software
packages that integrate editable musculoskeletal models and allow dynamic simulations of a wide variety of movements [7]. However, these software
tools are not commonly used in industry. Other software applications (such as
Human solutions Ramsis, Siemens Jack and Dassault Systèmes Human builder)
are indeed preferred because they are mainly aimed at the study of human-product
and human-machine interactions and also provide data exchange with the most
widespread industrial CAD tools. For the same reason, these applications generally do not implement physics-based dynamics, but just a kinematic engine,
which can be used to estimate the joint torques for static postures.
The present study aims at exploring how effectively static force models can be
used to estimate the musculoskeletal loads corresponding to a simulated human
activity (dynamic conditions). In particular, Siemens Jack is one of the most used
DHM tools in industry [8, 9], and was therefore selected for our purposes.
Other software tools could of course have been considered as well; they will be
the subject of further studies.
Jack can perform static analyses with the Force Solver tool. Ground reaction
forces are derived from the foot support configuration and the applied loads (i.e. self-weight and possible further external loads); then the torques for each joint and the
affected muscles are computed. The error in using such a static analysis tool to
measure musculoskeletal loads in non-static conditions is here assessed against
an accurate dynamic model based on the velocity data from a real motion sequence and the actual ground reaction forces measured through a force platform.
The authors developed a motion analysis protocol based on the Newton-Euler
dynamics equations and on a musculoskeletal model reported in the published literature. The results of this protocol have been taken as a reference. Then, the same
MoCap data set (without ground reaction forces) has been processed with Jack
and the two sets of results have been compared.
2 Materials and methods
The development of the motion analysis protocol and of the Jack-based approach is described in this section.
Digital human models for gait analysis ...
481
2.1 Experimental set-up
The experiments were conducted at the Laboratory of Advanced Ergonomics
Measures of the University of Naples Federico II (Fraunhofer Joint Lab IDEAS –
MISEF). The laboratory is equipped with a reflective optical motion capture system (MoCap) and a force platform system, both provided by BTS SpA. The first
one (BTS SMART-DX 4000) consists of ten infrared digital cameras. Their sampling frequency was fixed at 340 Hz (to achieve the maximum image resolution of
2048×1088 pixel). The second one is a BTS P-6000 system endowed with eight
force platforms. Its sampling frequency was set to 680 Hz (maximum value). The
software used for system calibration, tracking, processing and analysis of the
data was also provided by BTS. The measurement volume resulting from the calibration is
3.29×2.47×9.08 m, with a mean error of 0.26 mm (Figure 1.a). The y axis is the
vertical direction, while the x and z axes are the medio-lateral and anteroposterior
directions respectively. Siemens Jack v8.2 was used for the virtual simulations.
Figure 1 - a) Lab layout: calibration volume (A); force platforms (B); infrared digital
cameras (C). b) Lateral view of a gait test
The experiments involved a male volunteer, 176 cm tall, with a body mass of 69 kg,
representative of the fiftieth percentile of the random variables ‘height of the
Italian population’ and ‘weight of the Italian population’ (for a fixed age) [10]. He is
also of normal weight according to his body mass index (BMI = 22.27 kg/m²) [11].
The well-known motion analysis protocol by Davis [12] was selected for this
study. Compared to protocols providing the same accuracy, the Davis protocol
indeed limits the discomfort caused by the optical markers attached to the skin and makes
the identification of the anatomical reference points easier.
Moreover, Davis’s protocol has been proven to provide valid and reliable results and is
widely used for gait analysis. The protocol is based on eleven anthropometrical
measurements: twenty-two markers must be used for the static acquisition, whilst
just twenty-one are needed for the gait analysis. However, to accurately reproduce
the patient’s movements in virtual reality, the authors increased the number of sites
being observed; in particular, eleven functional markers were added. As a result,
twenty-six anthropometrical measures were collected (Figure 2). At the beginning
of the experiment, the participant was requested to maintain an orthostatic position
for about 10 seconds. This acquisition (the so-called “standing”) was used to set a reference posture for his joint angles. Then, he was asked to walk on the force platforms inside the MoCap measurement volume (Figure 1.b) in order to acquire ten full
strides. At the same time, ground force data were collected.
Figure 2 - Markers-set used for the experiments.
2.2 Data processing
Discrete digital data from the tracking system were interpolated with a third-order
polynomial and then treated with a second-order Butterworth low-pass filter [13]
to reduce digital artifacts and noise (i.e. skin motion). A cut-off frequency of 6 Hz
(i.e. six times the stride frequency) was set. In addition, a threshold
of 5 N was applied to the ground force data to remove known noise sources (e.g. environmental, electrical, electronic, computer, physiological). Other spikes in
the digital data were smoothed with a triangular-window moving average filter.
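The spike-smoothing step can be sketched in a few lines; the window width below is an assumption, since the paper does not report the one actually used:

```python
import numpy as np

def triangular_smooth(x, width=9):
    """Moving average with a triangular window of odd width."""
    if width % 2 == 0:
        raise ValueError("width must be odd")
    w = np.bartlett(width + 2)[1:-1]  # drop the zero endpoints of the window
    w = w / w.sum()                   # normalize so a constant signal is preserved
    return np.convolve(x, w, mode="same")

# A lone spike is spread out and attenuated, while the total signal
# energy within the window is preserved
signal = np.zeros(101)
signal[50] = 1.0
smooth = triangular_smooth(signal)
```

The triangular window weights nearby samples more heavily than distant ones, which attenuates isolated spikes with less distortion of genuine slow variations than a flat moving average.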
A data processing protocol based on a musculoskeletal model was developed. This protocol combines kinematic data, ground force data, anthropometrical
measurements and a specific anatomical table by Zatsiorsky et al. [14], with the
correction by De Leva [15], to derive the joint angles and the internal loads affecting the
musculoskeletal system. The estimation strategy for internal forces and
torques is discussed more extensively in section 2.3. Given the heel-strike (HS)
and toe-off (TO) events, identified from the force platform signals, further parameters such as stride cadence, stance time, swing time, double support
Digital human models for gait analysis ...
483
time and stride length were computed. Then the MoCap data of the standing and walking acquisitions were imported into Jack through an open data exchange format (i.e. C3D). It is worth noting that the data exchange also implied re-sampling the signal from 340 Hz to 30 Hz (as required by Jack).
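The 340 Hz to 30 Hz rate conversion can be sketched with SciPy's polyphase resampler. This is an illustrative helper, not the actual Jack import pipeline:

```python
from math import gcd
from scipy.signal import resample_poly

def resample_for_jack(signal, fs_in=340, fs_out=30):
    """Re-sample a MoCap channel from the tracking rate to Jack's 30 Hz.
    340/30 reduces to 34/3, so the signal is upsampled by 3 and then
    decimated by 34 through an anti-aliasing FIR filter."""
    g = gcd(fs_in, fs_out)
    return resample_poly(signal, fs_out // g, fs_in // g)
```

The polyphase approach avoids naive decimation, which would alias any content above 15 Hz into the 30 Hz signal.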
Figure 3 - a) Front view of the real marker set; b) the optimal marker set imported in virtual reality; c) the manikin's front view with markers
A digital human with the same anthropometric characteristics as the volunteer was created in Jack. The reference posture for the virtual manikin was set on the basis of the standing posture data. As mentioned, new tracking sites were added to those already provided by Davis's protocol to obtain a faithful reproduction of the acquired motion sequence. The complete marker set selected for the experiments is shown
in Figure 3.
Figure 4 - Lateral view of walking test in Jack
Finally, the MoCap data were imported into Jack and the Force Solver tool was used to estimate the joint angles and related torques. Jack uses a 25-site marker set; therefore, the 8 markers used only for the motion-protocol calculations were not imported into it (i.e. the sites named in Figure 2 as r_elbow2, l_elbow2, r_wrist2, l_wrist2, r_bar1, r_bar2, l_bar1, l_bar2). To obtain consistent values for the ground reaction
forces, the walking mode was set as the force-distribution strategy for the swing phase. Moreover, the temporal gait events (TO and HS) were also identified through a frame-by-frame video analysis (Figure 4).
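Once the HS and TO events of one foot are known, the temporal parameters follow directly. A hypothetical helper (the actual protocol works on the force-platform signals, not on pre-extracted event times) might look like:

```python
def temporal_parameters(hs_times, to_time):
    """Stride, stance and swing measures from two consecutive heel strikes
    (hs_times) of the same foot and the intervening toe-off (to_time)."""
    hs0, hs1 = hs_times
    stride = hs1 - hs0            # stride time (s)
    stance = to_time - hs0        # foot on the ground
    swing = hs1 - to_time         # foot in the air
    return {
        "stride_time": stride,
        "stance_pct": 100.0 * stance / stride,
        "swing_pct": 100.0 * swing / stride,
        "cadence": 1.0 / stride,  # strides per second
    }
```

For example, a 1.12 s stride with toe-off at 0.684 s after heel strike gives a stance phase of about 61%, in line with the values reported in Table 1.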
2.3 Human Model: Estimation of internal forces and torques
The human body is modelled here as a system of rigid segments connected by ideal joints (Link Segment Model, LSM) that represent the articulations. For instance, the lower limb is schematized as a set of three rigid segments (representing thigh, shank and foot respectively) connected through frictionless hinges (Figure 5).
Figure 5 - Lower limb: a) Anatomical model; b) Link segment model; c) free body
As is known, the dynamics of a rigid body is driven by the Newton-Euler equations:

$$
\begin{cases}
\mathbf{f} = m\,\dot{\mathbf{v}} \\
\boldsymbol{\tau} = \mathbf{I}\,\dot{\boldsymbol{\omega}} + \boldsymbol{\omega} \times \mathbf{I}\,\boldsymbol{\omega}
\end{cases}
\qquad (1)
$$
where f and τ are the resultant force and the associated torque acting on the center of mass (CoM) of each rigid segment, and v, ω and I are the linear velocity, the angular velocity and the moment of inertia of the segment, respectively.
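For a single segment, Equation (1) can be evaluated directly. An illustrative Python sketch, taking I as a 3×3 inertia tensor about the CoM (our formulation, not the authors' code):

```python
import numpy as np

def newton_euler(m, I, v_dot, omega, omega_dot):
    """Resultant force and torque at a segment's centre of mass (Eq. 1):
    f = m * v_dot ;  tau = I * omega_dot + omega x (I * omega)."""
    I = np.asarray(I, dtype=float)
    f = m * np.asarray(v_dot, dtype=float)
    tau = I @ np.asarray(omega_dot, dtype=float) \
        + np.cross(omega, I @ np.asarray(omega, dtype=float))
    return f, tau
```

The cross-product term is the gyroscopic contribution: it vanishes when ω is aligned with a principal axis of inertia.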
Therefore, given v(t) and ω(t) from a motion sequence and the ground reaction forces from a force platform, these equations can be used to derive the overall force system acting on each segment (inverse dynamics) and thus the corresponding stress at any point of the kinematic chain. In particular, inverse dynamics can be computed efficiently by exploiting the recursive structure of an articulated rigid-body system: a well-known recursive algorithm computes the inverse dynamics in time linear in the number of links of the articulated system [16]. The authors have implemented this algorithm in
their motion analysis protocol to accurately estimate the joint torques in the kinematic chain, based on the data from the MoCap system and the external loads measured with the force platform. In a state of static mechanical equilibrium, however, the problem becomes even easier, because the joint torques can be computed all at once through the inversion of the Jacobian matrix related to the particular posture considered [16]. This is particularly important because slow movements (e.g. walking) can be viewed as a sequence of quasi-static postures. To estimate the error due to such an approximation (i.e. neglecting inertial effects), a static solution algorithm has also been implemented in the protocol.
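A minimal sketch of the quasi-static computation for a planar serial chain follows. One common formulation balances an external end-point force through the Jacobian transpose, τ = J(q)ᵀ f; this is our illustration under that assumption, not the protocol's actual implementation:

```python
import numpy as np

def static_joint_torques(q, lengths, f_ext):
    """Joint torques balancing an external force f_ext applied at the
    end point of a planar serial chain in static equilibrium:
    tau = J(q)^T @ f_ext, with J the 2 x n end-point Jacobian."""
    q = np.asarray(q, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    n = len(q)
    cum = np.cumsum(q)                 # absolute link angles
    J = np.zeros((2, n))
    for i in range(n):                 # column i: effect of joint i
        for j in range(i, n):          # only links distal to joint i move
            J[0, i] += -lengths[j] * np.sin(cum[j])
            J[1, i] += lengths[j] * np.cos(cum[j])
    return J.T @ np.asarray(f_ext, dtype=float)
```

For a horizontal two-link chain with unit-length links and a unit upward end-point force, the proximal joint must balance a 2 m lever arm and the distal joint a 1 m arm, so the torques are 2 and 1 N·m respectively.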
As mentioned, most industry-oriented DHM software (like Jack) does not implement true inverse dynamics, since the simulation environments often lack a full physics engine; nevertheless, it allows the joint torques related to a static posture to be estimated, as in the case of Jack's Force Solver tool. The latter is based on the 3D Static Strength Prediction Program (3DSSPP) developed by the University of Michigan [17]. The 3DSSPP algorithm neglects any inertial effect (static approach) and uses a "top-down" approach for the solution: the analysis starts from the hand segment and proceeds down the body until the GRF is assessed from the force balance. Thus, the GRF is an output of the analysis rather than an input.
3 Results and discussion
As a first step, the developed motion analysis protocol was validated. The torques computed with both the dynamic and the static models are shown in Figure 6.
Figure 6 - Ankle (a) and knee (b) torques: static vs. dynamic model results. Vertical dotted lines indicate the end of the stance phase.
As expected, no significant differences can be observed for the ankle, whereas the knee curves diverge significantly: at the ankle, inertial effects are actually negligible, while they become important for the correct determination of the knee torques. Then, the spatial-temporal parameters and the joint angles were estimated. The mean values for the two analyses (with Jack and with our dynamic protocol respectively), with the absolute and relative deviations, are listed in Table 1. As mentioned, the results from the developed motion analysis protocol
are considered as the “true” reference. The results show that the static model of Jack provides a quite good estimation of temporal gait parameters such as the stance and swing phases (deviation below 1%). The deviations for the double support phase, the mean velocity and the stride length are instead slightly higher (values ranging between 6% and 7%).
Table 1 Mean values of gait spatial-temporal parameters and Jack results

Parameter                   Jack result   Real value   Abs. Error   Rel. Error (%)
Stride Time (s)             1.10          1.12         -0.02        1.79
Stance Phase (%)            60.9          61.1         0.20         0.33
Swing Phase (%)             39.1          38.9         0.20         0.51
Double Support Phase (%)    12.0          11.3         0.70         6.19
Stride Length (m)           1.25          1.35         -0.10        7.41
Cadence Stride (cycle/s)    0.91          0.89         0.02         2.25
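The deviations reported in the tables follow the usual definitions, taking our protocol's values as the reference (a trivial illustrative helper):

```python
def deviations(jack_value, real_value):
    """Absolute deviation and relative deviation (%) of Jack's estimate
    with respect to the reference ("real") value."""
    abs_err = jack_value - real_value
    rel_err = abs(abs_err) / abs(real_value) * 100.0
    return abs_err, rel_err
```

For example, the stride-time row gives deviations(1.10, 1.12) = (-0.02, about 1.79%).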
Figure 7 - Mean joint angle of ankle (a) and knee (b) in the sagittal plane during the stride.
Vertical black lines indicate the end of the stance phase.
Table 2 - Mean joint angles for the virtual and real analyses

Parameter                    Jack result   Real value   Abs. Error   Rel. Error (%)
Max Flex/Ext Knee [deg]      -12.1         -3.3         -8.85        269
Min Flex/Ext Knee [deg]      54            64.9         -10.9        16.7
Range Max/Min Knee [deg]     66.1          68.2         -2.02        2.96
Max Flex/Ext Ankle [deg]     -15.2         -9.6         -5.6         58
Min Flex/Ext Ankle [deg]     13.1          16.1         -3.0         18.7
Range Max/Min Ankle [deg]    28.3          25.7         2.56         9.96
As shown in Figure 7, the joint-angle estimates provided by Jack are also in good agreement with our model. However, the ranges of motion are generally slightly underestimated, except in the case of the knee, where the measured deviation is significant (Table 2).
The lower accuracy in the evaluation of the stride length and the double support phase can be ascribed to the errors at the ankles.
Figure 8 - Mean joint torques (static model) at ankle (a) and knee (b) level measured in the
sagittal plane, normalised to stride time (from HS to HS of the same foot). Red vertical
lines indicate the end of the stance phase.
As expected, our static model provides results very close to Jack's (Figure 8). The estimation becomes inaccurate only around the beginning of the swing phase, due to an incorrect temporal identification (Jack seems to offset the time window of the swing phase).
4 Conclusions
In this work the authors assessed the static force prediction tool of Siemens Jack under dynamic conditions (walking). The results show that, when the speed of the movement is low enough, the Force Solver tool generally gives a reasonable estimation of the actual musculoskeletal loads, while the values predicted for the ankle loads are slightly less accurate. However, this work is only a first step in the evaluation of tools for virtual gait analysis. Future work will involve a larger sample of individuals with new marker sets to better assess the movements of the foot articulation. Walking tests at different speeds will also be conducted to evaluate the influence of velocity on the results. Moreover, the torque estimation will be improved with a proper definition of the swing and stance phases inside Jack. Other commercial DHM tools are currently under study.
Acknowledgments The authors would like to thank Annachiara Schettino and Roberta Antonia
Ruggiero for their precious help during the development of the work.
References
1. Ghoussayni, S.; Stevens, C.; Durham, S.; Ewins, D. Assessment and validation of a simple
automated method for the detection of gait events and intervals., 2004, Gait Posture (20), pp.
266–272.
2. Castelli A. , Paolini G. , Cereatti A., Bertoli M., Della Croce U. Application of a markerless
gait analysis methodology in children with cerebral palsy: Preliminary results, September
2015, Gait & Posture, 42 (2), pp. S4–S5
3. Sigurnjak S., Twigg P., Bowring N. Development of a Virtual Gait Analysis Laboratory,
2008, Measurement and Control
4. Chaffin, Don B. Digital Human Modeling for Vehicle and Workplace Design, 2001, Society
of Automotive Engineers, Inc.
5. T. Marler, S. Beck, U. Verma, R. Johnson, V. Roemig, B. Dariush. A Digital Human Model for Performance-Based Design, 2015, Lecture Notes in Computer Science, (8529), pp. 136-147
6. Damsgaard, M.; Rasmussen, J.; Christensen, S.T.; Surma, E.; Zee, M.D. Analysis of musculoskeletal systems in the AnyBody Modeling System, 2006, Simul. Model. Pract. Theory, (14), pp. 1100–1111.
7. A.I. Purdue, A.I.J. Forrester, M. Taylor, M.J. Stokes, E.A. Hansen, J. Rasmussen. Efficient human force transmission tailored for the individual cyclist, June 2010, 8th Conference of the International Sports Engineering Association (ISEA), Procedia Engineering, 2(2), pp. 2543-2548
8. Di Gironimo G., Di Martino C., Lanzotti A., Marzano A., Russo G. Improving MTM-UAS
to predetermine automotive maintenance times, 2012, International Journal on Interactive
Design and Manufacturing. 6(4), pp. 265-273.
9. Di Gironimo G., Mozzillo R., Tarallo A. From virtual reality to web-based multimedia
maintenance manuals. 2013, International Journal Interactive Design Manufacturing, 7:183–
190.
10. Cacciari E, Milani S, Balsamo A and SIEDP Directive Council 2002-03. Italian cross-sectional growth charts for height, weight and BMI (6-20 yr), 2002, Eur J Clin Nutr, 56: 171-180
11. BMI classification, World Health Organization. Web access on 6 May 2016,
http://apps.who.int/bmi/index.jsp?introPage=intro_3.html
12. Davis, R. B., Ounpuu, S., Tyburski, D., & Gage, J. R. A gait analysis data collection and reduction technique, 1991, Human Movement Science, 10(5), pp. 575-587
13. Kirtley, C. Clinical gait analysis: theory and practice, 2006, Elsevier Health Sciences.
14. Zatsiorsky, V. and Seluyanov, V. Estimation of the mass and inertia characteristics of the
human body by means of the best predictive regression equations, 1985, Biomechanics IXB, pp. 233-239.
15. De Leva, P. Adjustments to Zatsiorsky-Seluyanov's segment inertia parameters, 1996, Journal of biomechanics, 29(9), pp. 1223-1230.
16. Di Gironimo G., Pelliccia L., Siciliano B., Tarallo A. Biomechanically-based motion control for a digital human, 2012, International Journal on Interactive Design and Manufacturing, 6(1), pp. 1-13.
17. Chiang, J., Stephens, A. and Potvin, J. Retooling Jack's static strength prediction tool, 2006, SAE Technical Paper No. 2006-01-2350.
Using the Finite Element Method to Determine
the Influence of Age, Height and Weight on the
Vertebrae and Ligaments of the Human Spine
Fátima Somovilla-Gómez1*, Rubén Lostado-Lorza1, Saúl Íñiguez-Macedo1,
Marina Corral-Bobadilla1, María Ángeles Martínez-Calvo1, Daniel TobalinaBaldeon 1.
1
University of La Rioja, Mechanical Engineering Department, Logrono, 26004, La Rioja, Spain
* Corresponding author. Tel.: +0034 941299727; fax:++0034 941299727. E-mail address:
fatima.somovilla@unirioja.es
Abstract: This study uses the Finite Element Method (FEM) to analyze the
influence of age, height and weight on the vertebrae and ligaments of the human
functional spinal unit (FSU). Two different artificial segments and the influence of
the patient’s age, sex and height were considered. The FSU analyzed herein was
based on standard human dimensions. It was fully parameterized first in
engineering modelling format using CATIA© software. A combination of different finite elements (FE) was developed with Abaqus© software to model a healthy human FSU and the two different sizes of artificial segments. Healthy and
artificial FSU Finite Element models (FE models) were subjected to compressive
loads of differing values. Spinal compression forces, posture data and male/female
anthropometry were obtained using 3DSSPP© software. Heights ranging from 1.70 to 1.90 meters, ages between 30 and 80 years, and body weights between 75 and 90 kg were considered for both men and women. The artificial models were based
on the Charité prosthesis. The artificial prosthesis consisted of two titanium alloy
endplates and an ultra-high-molecular-weight polyethylene (UHMWPE) core. An
analysis in which the contacts between the vertebrae and the intervertebral disc, as
well as the behavior of the seven ligaments, were taken into consideration. The
Von Mises stresses for both the cortical and trabecular bone of the upper and
lower vertebrae, and the longitudinal stresses corresponding to the seven
ligaments that connect the FSU were analyzed. The stresses obtained for the two
geometries that were studied by means of the artificial FE models were very
similar to the stresses that were obtained from healthy FE models.
Keywords: Finite Element Model (FEM); prosthesis; age; height; weight.
© Springer International Publishing AG 2017
B. Eynard et al. (eds.), Advances on Mechanics, Design Engineering
and Manufacturing, Lecture Notes in Mechanical Engineering,
DOI 10.1007/978-3-319-45781-9_49
1 Introduction
Lower back pain is a widespread problem that is associated with deterioration
of the intervertebral disc. It requires medical treatment as it considerably
constrains normal daily activities. This results in economic losses in the millions
for both industry and healthcare. This health problem is the second leading cause
of disability worldwide [1, 2]. This study seeks to compare the biomechanical
alterations of a Functional Spinal Unit (FSU) in the lumbar back area, specifically
the L4-L5 vertebrae, of a healthy model and a model that has an artificial disc
prosthesis under conditions of extreme load. In addition, under compression the prosthesis should exhibit a range of movement within physiological and functional limits. The behavior of finite element models that respond similarly to a human
intervertebral disc was analyzed. Stress in the contact areas of the cortical and
trabecular bones was examined, as well as ligament stress. Finite element models
were applied to the healthy model and to two different sizes of the artificial
prosthesis model. The artificial FEM utilized a Charité prosthesis, which consisted
of two titanium alloy endplates and an ultra-high-molecular-weight polyethylene
(UHMWPE) core [3].
2 Functional Spinal Unit (FSU) and the Intervertebral Disc
The spinal column consists of overlapping functional spinal units (FSU), as
shown in Figure 1. The basic structural component of the spinal column is called
the Functional Spinal Unit. It is formed by two adjacent vertebrae that are
separated by an intervertebral disc, along with the surrounding ligaments [4]. An intervertebral disc is located between each pair of adjacent vertebrae. The disc consists of two
main parts. One part is a gelatinous substance that is called nucleus pulposus. It
cushions the spinal cord. The other part is the annulus fibrosus, which is a
cartilage ring that surrounds the nucleus pulposus and remains intact when force is
applied to the spinal column [5].
Fig. 1. (a) Functional Spinal Unit (FSU) and (b) Behavior of the ligaments in this
study [6].
The ring’s collagen fibers are arranged in concentric layers. These fibers
anchor the disc to the endplate and constitute a layer of cartilage of approximately
0.5 mm in thickness that covers the surface of the vertebral body to the bony rim
that surrounds it. The intervertebral discs endow the spinal column with flexibility
and act as a cushion during such daily physical activities as walking, running and
jumping. Ligaments play an extremely important role in the spinal column’s
biomechanics and stability. By assuming a limited range of physiological
movements, many numerical models have concluded that ligaments exhibit linear
behavior. This approach, however, may lead to significant errors in the results. For
example, this could result in an error in the radius of curvature of the stress-strain curve, which is very important within the physiological range. Thus, the great
majority of studies have adopted a model of linear stress-strain behaviour [6], such
as that depicted in Figure 1. The following seven ligaments were incorporated in
the finite ele