7. WGP-Jahreskongress
Aachen, 5-6 October 2017
Editors:
Prof. Dr.-Ing. R. H. Schmitt
Prof. Dr.-Ing. Dipl.-Wirt. Ing. G. Schuh
Bibliographic information of the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche
Nationalbibliografie; detailed bibliographic data are available on the Internet
at http://dnb.ddb.de.
Robert Schmitt; Günther Schuh (Eds.):
7. WGP-Jahreskongress
Aachen, 5-6 October 2017
1st edition, 2017
Apprimus Verlag, Aachen, 2017
Scientific publishing house of the Institut für Industriekommunikation und Fachmedien
at RWTH Aachen
Steinbachstr. 25, 52074 Aachen
Internet: www.apprimus-verlag.de, e-mail: info@apprimus-verlag.de
ISBN 978-3-86359-620-0
Table of Contents
Papers
Chapter 1: Advanced Materials ............................................................................................................................ 1
1. Best Paper: Online-Coupled FE Simulation and Microstructure Prediction for the Process Chain of an Inconel 718
Turbine Disk
Alexander Krämer, Rajeevan Rabindran, Anna Rott, Ranjeet Kumar, Gerhard Hirt .............................................. 3
2. Simulation of a Stamp Forming Process of an Organic Sheet and its Experimental Validation
Florian Bohne, Moritz Micke-Camuz, Michael Weinmann, Christian Bonk, Anas Bouguecha, Bernd-Arno
Behrens .................................................................................................................................................................. 11
3. Production Chain of Hot-Forged Hybrid Bevel Gears from Deposition-Welded Blanks
Anna Chugreeva, Anas Bouguecha, Bernd-Arno Behrens ..................................................................................... 21
4. Adaption of the Tool Design in Micro Deep Hole Drilling of Difficult-to-Cut Materials by High-Speed Chip
Formation Analyses
Marko Kirschner, Sebastian Michel, Sebastian Berger, Dirk Biermann ............................................................... 29
5. Investigation of Process Induced Changes of Material Behaviour Using a Drawbead in Forming Operations
Harald Schmid, Sebastian Suttner, Marion Merklein ............................................................................................ 37
Chapter 2: Manufacturing Technology.............................................................................................................. 43
1. Best Paper: Comparison of 316L Test Specimens Manufactured by Selective Laser Melting, Laser Deposition
Welding and Continuous Casting
Christopher Gläßner, Bastian Blinn, Mathias Burkhart, Marcus Klein, Tilmann Beck, Jan C. Aurich ................ 45
2. Influence of Manufacturing and Assembly Related Deviation on the Excitation Behaviour of High-Speed
Transmissions for Electric Vehicles
Mubarik Ahmad, Christoph Löpenhaus, Christian Brecher .................................................................................. 53
3. Surface Structuring of Forming Tool Surfaces by High-Feed Milling
Dennis Freiburg, Maria Löffler, Marion Merklein, Dirk Biermann ...................................................................... 63
4. Evaluation of Design Parameters for Hybrid Structures Joined Prestressed by Forming
Henning Husmann, Peter Groche .......................................................................................................................... 71
5. System Concept of a Robust and Reproducible Plasma-Based Coating Process for the Manufacturing of Power
Electronic Applications
Alexander Hensel, Martin Mueller, Joerg Franke ................................................................................................. 79
6. Methods for the Analysis of Grinding Wheel Properties
Fabian Kempf, Abdelhamid Bouabid, Patrick Dzierzawa, Thilo Grove, Berend Denkena.................................... 87
7. Influence of Cutting Edge Micro Geometry of Diamond Coated Micro-Milling Tools while Machining
Graphite
Yves Kuche, Julian Polte, Eckart Uhlmann ........................................................................................................... 97
8. New Production Technologies of Piston Pin with Regard to Lightweight Design
Nadja Missal, Mathias Liewald, Alexander Felde ............................................................................................... 105
9. Contact Conditions in Bevel Gear Grinding
Mareike Solf, Christoph Löpenhaus, Fritz Klocke ............................................................................................... 113
10. Fundamental Investigations of Honing Process Related to the Material Removal Mechanisms
Meik Tilger, Tobias Siebrecht, Dirk Biermann .................................................................................................... 121
11. Fine Positioning System for Large Components
Maik Bergmeier, Berend Denkena ....................................................................................................... 129
12. Selective Laser Melting of TiAl6V4 Using Powder Particle Diameters Less than 10 Microns
Michael Kniepkamp, Mara Beermann, Eberhard Abele ...................................................................................... 137
Chapter 3: Industry 4.0 ..................................................................................................................................... 147
1. Best Paper: Prototyping in Highly-Iterative Product Development for Technical Systems
Sebastian Schlösser, Michael Riesener, Günther Schuh ...................................................................................... 149
2. An Analytical-Heuristic Approach for Automated Analysis of Dependency Losses and Root Cause of
Malfunctions in Interlinked Manufacturing Systems
Thomas Hilzbrich, Felix Georg Mueller, Timo Denner, Michael Lickefett ......................................................... 159
3. Design of a Modular Framework for the Integration of Machines and Devices into Service-Oriented
Production Networks
Sven Jung, Michael Kulik, Niels König, Robert Schmitt ...................................................................................... 167
4. Success Factors for the Development of Augmented Reality-Based Assistance Systems for Maintenance
Services
Moritz Quandt, Abderrahim Ait Alla, Lars Meyer, Michael Freitag ................................................................... 175
5. Energy Efficiency through a Load-Adaptive Building Automation in Production
Beiyan Zhou, Thomas Vollmer, Robert Schmitt ................................................................................................... 183
6. Vertical Integration of Production Systems for Resource Efficiency Determination
Thomas Vollmer, Niklas Rodemann, Robert Heinrich Schmitt ............................................................................ 192
7. Decentral Energy Control in a Flexible Production
Sebastian Weckmann, Darian Schaab, Alexander Sauer ..................................................................................... 203
Chapter 4: Assembly .......................................................................................................................................... 211
1. Best Paper: Analyzing the Impact of Object Distances, Surface Textures and Interferences on the Image
Quality of Low-Cost RGB-D Consumer Cameras for the Use in Industrial Applications
Eike Schaeffer, Alexander Beck, Jonathan Eberle, Maximilian Metzner, Andreas Blank, Julian Seßner, Jörg
Franke ................................................................................................................................................................. 215
2. Multi-Criteria Classification of Logistics Value Streams by Using Cluster Analysis
Siri Adolph, Tobias Keller, Joachim Metternich, Eberhard Abele ...................................................................... 223
3. Optimising Matching Strategies for High Precision Products by Functional Models and Machine Learning
Algorithms
Raphael Wagner, Andreas Kuhnle, Gisela Lanza ................................................................................................ 231
4. PLM-Supported Automated Process Planning and Partitioning for Collaborative Assembly Processes Based
on a Capability Analysis
Simon Storms, Simon Roggendorf, Florian Stamer, Markus Obdenbusch, Christian Brecher ............................ 241
5. A Three-Step Transformation Process for the Implementation of Manufacturing Systems 4.0 in Medium-Sized
Enterprises
Christoph Liebrecht, Jonas Schwind, Moritz Grahm, Gisela Lanza .................................................................... 251
6. Dynamically Interconnected Assembly Systems – Concept Definition, Requirements and Applicability
Analysis
Guido Hüttemann, Amon Göppert, Pascal Lettmann, Robert H. Schmitt ............................................................ 261
7. Flexibility through Mobility: the E-Mobile Assembly of Tomorrow
Achim Kampker, Peter Burggräf, Kai Kreisköther, Matthias Dannapfel, Sebastian Bertram, Johannes Wagner
............................................................................................................................................................................. 269
8. Evaluation of Technology Chains for the Production of All-Solid-State Batteries
Joscha Schnell, Andreas Hofer, Célestine Singer, Till Günther, Gunther Reinhart ........................................... 295
9. Scalable Assembly for Fuel Cell Production
Tom Stähr, Florian Ungermann, Gisela Lanza .................................................................................................... 303
10. Conception of Generative Assembly Planning in the Highly Iterative Product Development
Marco Molitor, Jan-Philipp Prote, Stefan Dany, Louis Huebser, Günther Schuh .............................................. 313
11. Automated Calibration of a Lightweight Robot Using Machine Vision
David Barton, Jonas Schwab, Jürgen Fleischer .................................................................................................. 321
Chapter 5: Organization of Manufacturing .................................................................................................... 329
1. Best Paper: Monetary and Quality-Feature-Based Quantifications of Failure Risks in Existing Process Chains
Kevin Nikolai Kostyszyn, Robert Schmitt ............................................................................................................. 331
2. Development of a Cost-Based Evaluation Concept for Production Network Decisions Including Sticky Cost
Aspects
Julian Ays, Jan-Philipp Prote, Bastian Fränken, Torben Schmitz, Günther Schuh ............................................. 339
3. The Effect of Different Levels of Information Exchange on the Performance of Resource Sharing Production
Networks
Marit Hoff-Hoffmeyer-Zlotnik, Daniel Sommerfeld, Michael Freitag ................................................................. 347
4. Evaluation of Planning and Control Methods for the Design of Adaptive PPC Systems
Susanne Schukraft, Marius Veigt, Michael Freitag ............................................................................................. 355
5. Concept for Clustering of Similar Quality Features for Optimization of Processes in Low-Volume
Manufacturing
Jonathan Greipel, Yifei Lee, Robert H. Schmitt ................................................................................................... 363
6. Approach to Design Collaboration in Interdisciplinary Product Development Using Dependency Features
Christian Mattern, Michael Riesener, Günther Schuh ......................................................................................... 373
Chapter 6: Machine Tools ................................................................................................................................. 381
1. Kinematically Coupled Force-Compensation – Experimental and Simulative Investigation with a Highly
Dynamic Test Bed
Marcel Merx, Christoph Peukert, Jens Müller, Steffen Ihlenfeldt........................................................................ 383
2. H∞ Position Control for Machine Tool Feed Drives
Thomas Berners, Sebastian Kehne, Alexander Epple, Christian Brecher ........................................................... 391
3. Electromagnetic Systems to Control the Accuracy of Ram Travel at the Presence of Horizontal Process Forces
Michael Gröne, Richard Krimm, Bernd-Arno Behrens........................................................................................ 401
4. A Tool Sided Approach for High Speed Shear Cutting on Conventional Path-linked Presses
Stefan Hilscher, Richard Krimm, Bernd-Arno Behrens ....................................................................................... 409
5. Parallel-Driven Feed Axes with Compliant Mechanisms to Increase Dynamics and Accuracy of Motion
Christoph Peukert, Marcel Merx, Jens Müller, Matthias Kraft, Steffen Ihlenfeldt .............................................. 417
6. A Scalable, Hybrid Learning Approach to Process-Parallel Estimation of Cutting Forces in Milling
Applications
Michael Königs, Frederik Wellmann, Marian Wiesch, Alexander Epple, Christian Brecher.............................. 425
Posters
Chapter 1: Manufacturing Technology............................................................................................................ 433
1. Binderless-cBN for ultra-precision machining of hardened steel moulds
Julian Polte, Christoph Hein, Dirk Oberschmidt, Eckart Uhlmann .................................................................... 435
2. A new approach for calculating the Analytical Resultant Force of Sawing Processes Considering the Tooth
Impact Momentum
Daniel Albrecht, Thomas Stehle ........................................................................................................................... 443
3. Methodology to determine the heat partition in the milling process
Thorsten Augspurger............................................................................................................................................ 451
4. Influences of indentation forming on surface characteristics and metallographic properties of bright finishing
alloys
Johannes Beck, Martin Friedrichsen, Marion Merklein ...................................................................................... 459
5. Application and development of analytic accelerated test models for the lifetime prediction of a novel
contacting method
Matthias Friedlein, Michael Spahr, Robert Süß-Wolf, Jörg Franke .................................................................... 465
6. Surface integrity of cryogenic turned austenitic stainless steels AISI 347 and AISI 904L
Hendrik Hotz, Benjamin Kirsch, Patrick Mayer, Steven Becker, Erik von Harbou, Annika Boemke, Robert
Skorupski, Marek Smaga, Ralf Müller, Tilmann Beck, Jan C. Aurich ................................................................. 473
7. Numerical Prediction of the Process Limits of Aluminum Chip Extrusion
Felix Kolpak , Christoph Dahnke, A. Erman Tekkaya ........................................................................................ 481
8. Influence of Fiber Orientation on Material Removal Mechanisms during Grinding Fiber-Reinforced Ceramics
with Porous Matrix
Sebastian Müller, Christian Wirtz, Daniel Trauth, Fritz Klocke ......................................................................... 487
9. Manufacturing and application of PCD micro-milling tools
Mitchel Polte, Dirk Oberschmidt, Eckart Uhlmann ............................................................................................. 495
10. Finishing of internal geometries in selectively laser melted components by Abrasive Flow Machining
Christian Schmiedel, Danny Schröter, Tiago Borsoi Klein, Eckart Uhlmann ......................................................... 503
11. High-throughput material development using selective laser melting and high power laser
Konstantin Vetter, Hannes Freiße, Frank Vollertsen........................................................................................... 511
12. Warm and hot forming of 7000 aluminum alloys
Hendrik Vogt , Christian Bonk, Sven Hübner, Bernd-Arno Behrens .................................................................. 519
13. Production of a supporting plateau out of hard particles in a tool surface and its influence in dry sheet metal
forming
Hannes Freiße, Konstantin Vetter, Thomas Seefeld, Mitchel Polte, Julian Polte ................................................ 527
14. Characterisation of strain rate dependency of sheet metals under biaxial loading with a strain rate controlled
hydraulic bulge test
Matthias Lenzen, Sebastian Suttner, Marion Merklein ........................................................................................ 535
15. Prediction of Process Forces in Gear Honing
Marco Kampka, Caroline Kiesewetter, Christoph Löpenhaus, Fritz Klocke ....................................................... 541
Chapter 2: Industry 4.0 ..................................................................................................................................... 549
1. Learning and information at the workplace as shown in the Future Work Lab
Thilo Zimmermann, Julia Denecke, Urban Daub, Moritz Hämmerle, Bastian Pokorni, Maik Berthold ............. 551
2. Cyber-physical compressed air systems enable flexible generation of compressed air adapted to demand of
production
Ralf Böhm, Sven Weyrich, Jörg Franke ............................................................................................................... 557
3. A methodology for a structured process analysis of complex production processes with a small database
Ricarda Schmitt, Johannes Riedel, Franz Dietrich, Klaus Dröder ....................................................................... 565
Chapter 3: Assembly .......................................................................................................................................... 573
1. Concept for Automated Fuse and Relay Assembly in the Field of Cable Harness Manufacturing
Paul Heisler, Christian Sand, Florian Hefner, Robert Süß-Wolf, Jörg Franke ................................................... 575
2. Evaluation of optimization methods on the shop floor of a serial assembly
Patrick Pötters, Robert Schmitt, Bert Leyendecker .............................................................................................. 581
Chapter 4: Organization of Manufacturing .................................................................................................... 589
1. Identifying Requirements for the Organizational Implementation of Modular Product Platforms
Johanna Koch, Martin Sommer, Michael Riesener, Günther Schuh.................................................................... 591
2. Towards Agile Quality Management in Mechatronic Product Development
Marie Lindemann, Jan Kukulies, Björn Falk, Robert H. Schmitt ........................................................................ 599
3. Energy Demand Planning and Energy Technology Planning in the Condition Based Factory Planning
Approach
Peter Burggräf, Matthias Dannapfel, Julian Utsch, Jérôme Uelpenich, Cornelia Rojacher............................... 609
4. Derivation of steering measures within a product development project by aid of sensitivity analysis
Christian Dölle, Michael Riesener, Sören Brockmann, Günther Schuh .............................................................. 617
Chapter 5: Machine Tools ................................................................................................................................. 625
1. Potentials and Challenges of Augmented Reality for Assistance in Production Technology
Christian Fimmers, Katrin Schilling, Simon Sittig, Markus Obdenbusch, Christian Brecher ............................. 627
2. Analysis and simulative prediction of the thermo-elastic behavior of ball screws
Kolja Bakarinow, Stephan Neus, Florian Kneer, Alexander Steinert, Christian Brecher, Marcel Fey ............... 637
3. Incremental Manufacturing: Design Aspects of flexible hybrid Manufacturing Cells for multi-scale Production
Klaus Dröder, Roman Gerbers, Ann-Kathrin Reichler, Paul Bobka, Anke Müller, Franz Dietrich .................... 645
Chapter 1: Advanced Materials
Online-coupled FE simulation and microstructure prediction for the
process chain of an Inconel 718 turbine disk
Alexander Maximilian Krämer1,a, Rajeevan Rabindran1,b, Anna Rott2,c, Ranjeet
Kumar3,d, Gerhard Hirt1,e
1 Institute of Metal Forming (IBF), RWTH Aachen University, 52072 Aachen, Germany
2 SMS group GmbH, 41069 Mönchengladbach, Germany
3 Simufact engineering GmbH, 21079 Hamburg, Germany
a kraemer@ibf.rwth-aachen.de, b rabindran@ibf.rwth-aachen.de, c Anna.Rott@sms-group.com,
d ranjeet.kumar@simufact.de, e hirt@ibf.rwth-aachen.de
Keywords: Finite element method, modelling, metal forming
Abstract. Components used in aerospace and energy applications commonly face a multitude of
challenges during fabrication: e.g. high costs, reliability of the final product properties and complex
material behaviour. In conjunction this leads to very complex process chains during fabrication,
including several different forming operations like punching, ring rolling and forging as well as
intermediate heating and cooling operations. For single forming operations, force, temperature,
strain, stress and other properties can be predicted accurately using finite element (FE) simulation,
providing an alternative to experiments. For the prediction of the microstructure evolution a
technique called offline-coupling can then be applied. In this technique, first an FE simulation is used
to predict the geometrical, thermal and mechanical properties of a workpiece. Afterwards the output
of the FE simulation is used to predict the microstructure in a separate program. However, for long
of the FE simulation is used to predict the microstructure in a separate program. However, for long
process chains multiple FE simulations have to be coupled, each representing a separate forming
process. Since the strain is simply accumulated over all simulations, the reduction of internal
stresses due to effects like static recrystallization (SRX) cannot be accounted for in offline-coupling.
Hence the material state in FE simulations can start to deviate from the true material state
and in turn cause differences in the stress state, resulting in inaccurate force predictions. In this paper
an online-coupling of the FE software Simufact.forming and the program StrucSim, which can
account for effects of recrystallization and grain size evolution on the flow stress, is presented. The
online-coupling is achieved via a new subroutine that connects StrucSim to the solver of
Simufact.forming. During the coupling the local deformation conditions calculated by
Simufact.forming are transferred to StrucSim in each iteration step of every increment for every
integration point. StrucSim then predicts the microstructural evolution and its influence on flow
stress as well as equivalent strain and returns it to Simufact.forming for the next iteration. A direct
influence of the recrystallization on the mechanical properties is thereby achieved. StrucSim is
optimized to represent the microstructural behaviour of the nickel-based alloy Inconel 718 by
extensive experimental studies and consequent fitting of the material model. Furthermore the
complex process chain for a turbine disk is represented in a series of FE models using the online-coupling in every FE model, incorporating punching, forging, multiple ring rolling and various
heating and cooling simulations. In addition, mechanical and microstructural properties have been
measured before and after each forming operation in the industrial process chain by the SMS group.
The results are compared and show a successful online-coupling with correct predictions of all
relevant properties. With the new successfully developed online-coupling it is possible to accurately
predict the recrystallization and its effects on the mechanical properties during FE simulations.
Introduction
FE simulations have been used for many years in most fields of metal forming [1]. The goal of
FE simulations is to provide an alternative to experimental investigations and to support the design
of new products or the optimization of fabrication processes. They can provide a level of insight into
mechanical and thermal properties that would otherwise require extensive experimental effort and
come at virtually no cost in comparison, making FE simulations indispensable nowadays [2].
Many common processes can be simulated using commercially available solutions (e.g. Abaqus,
Forge, Simufact). These tools are usually able to describe a variety of processes. By executing
different simulations consecutively it is also possible to describe processes consisting of multiple
forming operations. Here the final geometry, mechanical and thermal properties of one simulation
are transferred to the subsequent one and used as initial conditions providing a continuous evolution
of the relevant macroscopic properties [3]. This procedure faces an inherent challenge concerning
the evolution of strain and stress, though. During forming operations the strain is accumulated
according to the deformation. To calculate the flow stress, flow curves are provided via a table or
equation within the FE software. If effects like static recrystallization (SRX) occur, this approach
is no longer valid since SRX decreases the accumulated strain. By not considering such effects the
deviation from the true material state can accumulate, leading to a misrepresentation of the material
properties. Especially for long process chains the probability for a misrepresentation increases due
to the likelihood of SRX occurring along the process chain.
In addition, some materials exhibit complex material behaviour like phase transformations or
precipitations. In these cases the precise simulation of the material properties for the whole process
chain is particularly important. Here small deviations in process parameters like temperature or
strain can lead to significant differences in the material properties [4]. Common examples for such
materials include nickel-based [4], aluminum [5] and titanium [6] alloys.
In this paper the industrial scale fabrication of an Inconel 718 turbine disk is discussed.
Inconel 718 belongs to the class of nickel-based alloys that typically exhibit a drastic
microstructural change at the so-called delta-solvus temperature. The delta-solvus temperature
(1020-1050 °C) marks the dissolution of the delta-phase that, if present, restricts grain growth [7].
The turbine disk is used in aircraft engines and fabricated in a long process chain with multiple
complex forming operations.
The goal of this paper therefore is to introduce an online-coupling between the FE software
Simufact.forming and a program to predict the microstructural evolution, StrucSim. It further aims
at exploring the feasibility and potential area of application of an online-coupling in the framework
of simulating long process chains employing complex materials. First the difference between offline-
and online-coupling is outlined. Next the material model and parameterization of StrucSim to
represent the properties of the nickel-base alloy Inconel 718 are explained. Then the online-coupling and the material model are validated using laboratory scale experiments. Afterwards the
FE simulations of the process chain are described. Finally the experimental results from the
industrial scale process chain and results of the simulation are compared.
Coupling microstructure evolution to FE simulations
There are two possibilities to couple the microstructure evolution to FE simulations
x Online-coupling where the results of the microstructure evolution are connected to the
solver of the FE simulation and directly influence the results.
x Offline-coupling in which the microstructure evolution is calculated after the FE
simulation without influencing the FE simulation.
Offline-coupling has been applied in various contexts like stretch rolling [2], forging [8], ring
rolling [9] and hot forming [10]. Online-coupling is rarely pursued since it does not provide
improvements for single forming operations and increases the calculation time. For process chains
with multiple forming operations the increased accuracy can be worth the additional calculation
time, though.
Coupling StrucSim and Simufact.forming. The authors have developed an online-coupling
between the commercial FE software Simufact.forming and StrucSim, a program to predict the
microstructure evolution. The principle of the online-coupling and the differences to offline-coupling
are sketched in Figure 1. The main difference stems from the inclusion of the flow stress
calculation from the microstructure evolution during every iteration of the FE software. This
enables accounting for mechanisms like static recrystallization between forming operations. The
developed online-coupling can be utilized with all features Simufact.forming provides (e.g.
remeshing, domain decomposition, stage control) by including the corresponding subroutine.
Figure 1: Sketch of the flow diagram for online- (left) and offline- (right) coupling. The steps in
the FE software and microstructure evolution are highlighted in grey and bronze
respectively. t, ε, ε̇, T, RX, σ, d and ε_acc are time, strain, strain rate, temperature,
recrystallized fraction, flow stress, grain size and accumulated strain.
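To make the data flow concrete, the following minimal sketch contrasts the two coupling variants. It is an illustration only: all function and class names are hypothetical and do not reflect the actual Simufact.forming/StrucSim subroutine interface.

```python
# Minimal sketch of online- vs. offline-coupling (hypothetical interfaces).
# Online: the microstructure model supplies the flow stress inside every FE
# iteration. Offline: it only post-processes the finished FE history.

def online_coupled_increment(fe_solver, micro_model, dt):
    """One time increment with the microstructure model inside the loop."""
    converged = False
    while not converged:
        # FE solver proposes local deformation conditions per integration point
        eps, eps_rate, T = fe_solver.trial_conditions(dt)
        # Microstructure model returns flow stress and SRX-corrected strain
        sigma_f, eps_acc = micro_model.evaluate(eps, eps_rate, T, dt)
        converged = fe_solver.iterate(sigma_f, eps_acc)
    micro_model.accept_increment()  # freeze RX fraction, grain size, ...
    fe_solver.accept_increment()

def offline_coupled_run(fe_history, micro_model):
    """Post-processing: the microstructure cannot feed back into the FE run."""
    for eps, eps_rate, T, dt in fe_history:
        micro_model.evaluate(eps, eps_rate, T, dt)
    return micro_model.state()  # RX fraction, grain size, ... per point
```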
StrucSim parameterization
In order to use the online-coupling of StrucSim [11] with Simufact.forming [12] for the
simulation of the process chain of an Inconel 718 turbine disk, StrucSim has to be parameterized to
represent the material properties of Inconel 718. The chemical composition of Inconel 718 and the
material model incorporated into StrucSim are detailed in Table 1. The parameterization is achieved
by experimental determination of the flow curves, dynamic (DRX) & static (SRX) recrystallization
kinetics, grain growth as well as grain size after DRX & SRX and subsequent parameter fitting. The
samples for these experiments are in the same state as the workpiece after upsetting during the
process chain (see Figure 4).
Table 1: Chemical composition of the investigated Inconel 718 alloy (wt.%) and material model
employed in StrucSim for the description of the investigated Inconel 718.

Ni    Fe     Cr     Mo    Nb    C     Al    Ti    Co    Si    Mn    Cu
bal.  18.17  17.90  2.92  5.42  0.02  0.46  0.97  0.14  0.08  0.06  0.04

Flow stress:
σ_DRV = σ_p · (ε/ε_p · exp(1 − ε/ε_p))^C    (1)
X_DRX = 1 − exp(−D_1 · ((ε − ε_c)/(D_3 · ε_c))^(D_2))    (2)
ε_p = A_1 · d_0^(A_2) · ε̇^(A_3) · exp(A_5/(R·T))    (3)
σ_p = 1/O_3 · arsinh(O_1 · Z^(O_2))    (4)
ε_c = A_4 · ε_p    (5)
Z = ε̇ · exp(Q_w/(R·T))    (6)

Grain size:
d_drx = B_1 · Z^(B_2)    (7)
d_srx = CS_1 · d_0^(CS_2) · ε^(CS_3) · Z^(CS_5) · exp(CS_4 · Q_S/(R·T))    (8)
d_gg = (d_0^(HD_1) + t · HD_2 · exp(−Q_KW/(R·T)))^(1/HD_1)    (9)

SRX kinetics:
X_SRX = 1 − exp(ln(0.5) · (t/t_50)^G)    (10)
t_50 = F_1 · d_0^(F_2) · ε^(F_3) · Z^(F_4)    (11)
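To illustrate how these equations interact, the sketch below evaluates the Zener-Hollomon parameter (Eq. 6) together with the DRX and SRX kinetics (Eqs. 2 and 10). All coefficient values are placeholder assumptions for demonstration; the fitted Inconel 718 parameters are not reported here.

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def zener_hollomon(eps_rate, T, Q_w=4.0e5):
    """Eq. (6): temperature-compensated strain rate (T in K, Q_w assumed)."""
    return eps_rate * math.exp(Q_w / (R * T))

def drx_fraction(eps, eps_c, D1=0.693, D2=2.0, D3=1.0):
    """Eq. (2): Avrami-type dynamic recrystallization kinetics."""
    if eps <= eps_c:
        return 0.0  # DRX only starts beyond the critical strain
    return 1.0 - math.exp(-D1 * ((eps - eps_c) / (D3 * eps_c)) ** D2)

def srx_fraction(t, t50, G=1.0):
    """Eq. (10): static recrystallization; t50 is the time to 50 % RX."""
    return 1.0 - math.exp(math.log(0.5) * (t / t50) ** G)

# Example: hot deformation at 1100 degC and a strain rate of 0.1 /s
Z = zener_hollomon(0.1, 1100.0 + 273.15)
print(f"Z = {Z:.3e} 1/s")
print(f"X_DRX at eps = 0.45 (eps_c = 0.2): {drx_fraction(0.45, 0.2):.2f}")
print(f"X_SRX after 200 s (t50 = 60 s):   {srx_fraction(200.0, 60.0):.2f}")
```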
Figure 2: Schematic representation of the StrucSim algorithm [13]: hardening and DRX create
substructures [S0], [S1], [S2] whose flow curves combine into the resulting flow curve
over strain (ε_crit, ε_peak, ε_ss).
Since StrucSim predicts the flow stress based on the microstructural evolution the initially
independent semi-empirical equations of the material model have to be coupled. StrucSim achieves
the coupling via a composite of substructures where each substructure represents a different
material state as depicted in Figure 2. A new substructure is created when the critical strain is
exceeded and recrystallization starts; its size is determined by the fraction of the material
undergoing recrystallization. The overall material properties are determined by averaging over all
substructures weighted by the fraction of the substructure [7].
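A minimal sketch of this substructure bookkeeping (an illustration of the averaging idea described above, not StrucSim's actual implementation; the class layout and names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Substructure:
    fraction: float    # volume fraction of this material state
    strain: float      # strain accumulated since the state was created
    grain_size: float  # grain size in micrometers

def recrystallize(substructures, rx_fraction, d_rx):
    """Create a new, strain-free substructure once the critical strain is
    exceeded; its size equals the recrystallized fraction of the material."""
    for s in substructures:
        s.fraction *= (1.0 - rx_fraction)
    substructures.append(Substructure(rx_fraction, 0.0, d_rx))

def average(substructures, prop):
    """Overall material property: fraction-weighted average over all states."""
    return sum(s.fraction * prop(s) for s in substructures)

# Example: 40 % of an initially uniform state recrystallizes to 12 um grains
states = [Substructure(1.0, 0.5, 25.0)]
recrystallize(states, 0.4, 12.0)
mean_d = average(states, lambda s: s.grain_size)  # 0.6*25 + 0.4*12 = 19.8
```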
Validation of the coupling on the laboratory scale
Using the online-coupling and StrucSim parameterized to predict the microstructure evolution
for Inconel 718 a validation is carried out. Several double compression tests are conducted on a
dilatometer employing various combinations of temperature, strain rate, intermediate times and
strain distributions. The range of all process parameters is given in Table 2. An overall accuracy for
force and grain size prediction of 5.6 % and 13.9 % respectively is achieved using the online-coupling. An excerpt of the measured and predicted force during the second compression of a
double compression test (1100 °C, 0.1 /s, 0.15, 0.45, 200 s) is depicted in Figure 3 b).
Table 2: Range of the process parameters during the double compression tests.

Temperature [°C]   Strain rate [1/s]   Strain 1st compression [-]   Strain 2nd compression [-]   Intermediate time [s]
950-1100           0.03-2              0.15-0.45                    0.15-0.45                    10-600

Figure 3: Strain over time used for the force calculation in the FE simulation for a double
compression test (a). Force over time during the second compression of the same
double compression test (b). Measurement is depicted by the solid black line, online-coupling by the dotted blue line and offline-coupling by the dashed green line.
Figure 3 a) shows a comparison of the strain used to calculate the force during simulations
utilizing online and offline-coupling for the double compression test. The significant difference
during the intermediate time and second compression stems from static recrystallization. The
importance becomes evident in the calculated forces during the second compression shown in
Figure 3 b). For one, the overall difference to the measurement (about 15 %) in the offline-coupling
is larger compared to the online-coupling. Secondly, the calculated force is relatively steady with
the slight increase stemming from the increasing contact area between die and workpiece. This is
due to the fact that the flow curve of Inconel 718 reaches a steady state at a strain of around 0.35 for
the conditions during the depicted double compression test, therefore leading to a steady force
prediction when neglecting the influence of SRX. The online-coupling depicts an increasing flow
stress and therefore increasing force because the initial microstructure at the start of the second
compression is fully recrystallized thus starting at the incipient flow stress.
It can be concluded that an online-coupling can increase the accuracy of flow stress and force
prediction when coupling multiple simulations if static recrystallization occurs during any of them.
This effect is more pronounced with faster recrystallization which typically requires higher
temperatures and lower strain rates.
Simulation of the process chain
After the successful implementation of the online-coupling for the double compression test, the
applicability needs to be broadened and validated for more complex processes. Therefore a complex
process chain of the industrial fabrication of an Inconel 718 turbine disk is set up. The process chain
consists of various forming, heating and cooling operations as well as transporting the workpiece
between the required setups. Each of these operations is represented by a separate FE simulation in
Simufact.forming and connected to ensure a continuous simulation, as explained earlier. Figure 4 a)
displays the forming operations which include upsetting, piercing, ring rolling and forging. Before
each forming operation the workpiece is heated to the working temperature and cooled to room
temperature afterwards. The final machining (Figure 4 b)) is not simulated since it is assumed that
no significant change in the material properties takes place.
All process parameters like geometries of the tools, positioning, velocities and durations are
taken from the industrial process and used as input for the FE simulations. The material parameters
of the tools are provided by the industry or taken from literature.
Figure 4: Sketch of the geometry after each processing step (initial geometry, after upsetting,
after piercing, after the 1st, 2nd and 3rd ring rolling, final geometry after forging) and
pictures taken during the industrial fabrication a). Geometry after machining b) and
position of the turbine disk in the turbine c), courtesy of Leistritz Turbinentechnik GmbH.
Three steps, each increasing the complexity of the simulation, are taken to successfully achieve
and validate the simulation of the process chain using an online-coupling:
1. Set up all simulations and run them without a coupling to StrucSim.
2. Run each simulation using an online-coupling starting with a fresh microstructure.
3. Run all simulations transferring the microstructure evolution between simulations.
First, all FE simulations were set up and run without a coupling to StrucSim focusing only on the
correct prediction of the temperature and geometry measured during fabrication. In this phase
simulation parameters like mesh size and boundary conditions are optimized independently of any
coupling. Next the online-coupling is used for all simulations. In order to validate the coupling for
complex forming operations all simulations are run without coupling the microstructure evolution
between simulations. Finally the simulations using online-coupling are connected achieving a
continuous microstructure evolution throughout the process chain.
Results and discussion
The following discussion focuses on the feasibility and area of application of online-coupling
rather than a detailed comparison to results from offline-coupling and measurements. All
measurements are obtained during industrial scale fabrication of the turbine disk by SMS group,
Kind&Co GmbH and Leistritz Turbinentechnik GmbH. In order to investigate the grain size after
each forming operation industrial fabrication is repeated five times. In these cases the fabrication is
stopped after the cooling subsequent to the forming operation that is investigated. Then samples are
machined from the workpiece and the grain size is determined using the intercept method.
Following the previously established three step validation the simulations are run without a
coupling to validate the FE simulations themselves. The comparison between simulation and
measurement of temperature, using thermal cameras, and geometry yielded small differences. In the
simulations the diameter is roughly 3 % smaller and the temperature is about 4 % lower. The
deviation of the geometry probably stems from slight differences in the simulated ring rolling
kinematics compared to the fabrication process.
In order to validate the online-coupling for more complex single forming operations, compared
to a double compression test, each simulation is run employing the online-coupling but without
transferring the microstructure between simulations. As expected and detailed earlier employing the
online-coupling results in negligible differences concerning the temperature and geometry in
comparison to the offline-coupling, since only single forming operations are simulated.
Finally the online-coupling including the transfer of previous results is used. Figure 5 shows a
comparison of measured and simulated forces using online and offline-coupling in the radial rolling
gap for the first and second ring rolling. The results serve as representatives for all simulations as
they yield comparable features. In both cases the simulated forces are 10-15 % lower than the
measurement. The dip at around 5 seconds in the second ring rolling is due to slip during the
simulation while the two sharp minima during online-coupling stem from restarts of the simulation.
There are two explanations for the overall underestimation of the force. It can in parts be
attributed to the deviations in the geometry and temperature stemming from deviating kinematics.
While the lower temperature will generally lead to an overestimation of the force, the smaller ring
diameters entail lower strain rates and different material flow in radial direction which will result in
lower radial forces. On the other hand, the material state is different from the samples used for
parameterization which might contribute to the differences, too. Additionally the assumed heat
transfer coefficient (HTC) or friction behaviour could in part be responsible if underestimated.
The measured and simulated grain size, using online-coupling, are depicted in Figure 6. The
results on the left are again a more detailed representative of the whole process chain simulation
which is shown on the right. For the whole process chain the simulated grain size is well within one
standard deviation of the measurement with a maximal deviation of 3.5 µm in the central region of
the workpiece. At the edges of the workpiece the deviation increases which might again be
attributed to the differing temperature and geometry as well as inaccuracies concerning the HTC
and friction behaviour resulting in more homogeneous conditions during the simulation.
Figure 5: Force vs. time for the first (left) and second (right) ring rolling in the radial rolling gap.
Measurement is depicted by the solid black line, online-coupling by the dotted blue line
and offline-coupling by the dashed green line.
Figure 6: Left: Measured and simulated grain size [µm] after the first ring rolling; the numbers
correspond to the grain size while the colored rectangles represent the ASTM class
(ASTM 8-10). Right: Evolution of the simulated (green) and measured (black) grain size
over the process chain (initial, upsetting, piercing, 1st-3rd ring rolling, final forging)
including the standard deviation; the corresponding points on the left are highlighted in black.
A comparison of the offline-coupling to measurements is much more difficult. During the long
process chain several FE operations change the local mesh, e.g. remeshing, restart and data transfer
from 2D to 3D simulations. Since the local strain, strain rate and temperature history is required for
a calculation using the offline-coupling the point of interest needs to be tracked throughout all
simulations. The mentioned FE operations disrupt the ability to accurately track that point. Thus this
method is error prone and subject to growing uncertainties the longer the process chain is.
In conclusion the continuous simulation of a long industrial process chain including multiple
forming operations is possible. Employing an online-coupling yields a continuous microstructure
evolution as well as accounting for static recrystallization and its influence on the flow stress and
force. However, the advantages of online-coupling could not be shown as clearly as in the
laboratory test due to the lower temperature resulting in the presence of delta-phase precipitates.
This leads to less pronounced recrystallization reducing the usefulness of online-coupling.
Nevertheless the online-coupling of FE simulations and microstructure evolution is feasible without
reducing the functionality of the FE software.
The area of application of online-coupling can therefore be established. Due to the increased
simulation times (around 5-25 times longer) it can only serve as a validation tool, not as a permanent
replacement of offline-coupling. Online-coupling should be employed if large deviations in force
prediction occur in offline-coupling, which is more likely for processes that exhibit fast
recrystallization.
Summary and outlook
This paper presents and discusses the differences between an online and offline-coupling of the
FE software Simufact.forming and StrucSim, a program to describe the microstructure evolution.
Results of simulations using offline and online-coupling from laboratory scale double compression
tests are detailed and compared to measurements. It is shown that neglecting the effects of static
recrystallization for the force prediction in offline-coupling leads to significantly worse predictions.
Furthermore the industrial scale fabrication of an Inconel 718 turbine disk is simulated by coupling
multiple FE simulations. Thereupon the results of the simulation employing an online- and offline-coupling are presented. Geometry and temperature are predicted within a small error margin of 4 %.
The simulated force is 10-15 % lower than the measurements in both cases probably stemming from
a slightly different material state and the inaccuracies in geometry and temperature. The grain size
is in good agreement with the experimental results, differing by only 3 µm on average. It is therefore
concluded that the accurate simulation of an industrial scale fabrication process including multiple
different forming operations is possible. In addition, the feasibility of online-coupling that can
handle all common features of FE simulations is validated. From this it is derived that online-coupling
is best applied as a validation tool if simulations yield large deviations from measurements.
In order to consolidate the feasibility and area of application more examples of complex
processes have to be investigated. A special focus should be placed on processes that exhibit fast
recrystallization and those that have not been covered yet, like flat rolling and open die forging.
References
[1] J.-L. Chenot et al., Recent and future developments in the finite element metal forming
simulation, 11th International Conference on Technology of Plasticity, ICTP, 2014
[2] H. Grass, C. Krempaszky, T. Reip, E. Werner, 3-D simulation of hot forming and
microstructure evolution, Computational Materials Science, 2003, pp. 469-477
[3] A. Govik, L. Nilsson, R. Moshfegh, Finite element simulation of the manufacturing process
chain of a sheet metal assembly, Journal of Materials Processing Technology 2012, pp. 1453-1462
[4] C. Dayong et al., Characterization of hot deformation behaviour of Ni-base superalloy using
processing map, Materials and Design, 2009, pp. 921-925
[5] O. Engler, Simulation of Recrystallization and Recrystallization Textures in Aluminium
Alloys, Materials Science Forum 2012, pp. 399-406
[6] I. Weiss, S. L. Semiatin, Thermomechanical processing of beta titanium alloys – an overview,
Materials Science & Engineering A 1998, pp. 46-65
[7] S. Azadian, L.-Y. Wei, R. Warren, Delta phase precipitation in Inconel 718, Materials
Characterization 2004, pp. 7-16
[8] N. Bontcheva, G. Petzov, Microstructure evolution during metal forming processes,
Computational Materials Science 2003, pp. 563-573
[9] G. Schwich, T. Henke, J. Seitz, G. Hirt, Prediction of Microstructure and Resulting Rolling
Forces by Application of a Material Model in a Hot Rolling Process, Key Engineering Materials
2014 Vols. 622-623, pp. 970-977
[10] M. Pietrzyk, Through-process modelling of microstructure evolution in hot forming of steels,
Journal of Materials Processing Technology 2002, pp. 53-62
[11] K. Karhausen, R. Kopp, Model for integrated process and microstructure simulation in hot
forming, Steel Research 1992, pp. 247-256
[12] Simufact.engineering GmbH, Simufact.forming – User Guide 2016
[13] A. M. Krämer, J. Lohmar, G. Hirt, Precise prediction of microstructural properties with
minimal experimental effort for the nickel-base alloy Inconel 718, Adv. Mat. Res. 2016, pp. 43-50
Simulation of a Stamp Forming Process of an Organic Sheet and its
Experimental Validation
Florian Bohne1,a, Moritz Micke-Camuz1,b, Michael Weinmann2,c, Christian
Bonk1,d, Anas Bouguecha1,e, Bernd-Arno Behrens1,f
1 Institute of Forming Technology and Machines (IFUM), Leibniz Universität Hannover, An der
Universität 2, 30823 Garbsen, Germany
2 Institute of Polymer Materials and Plastic Engineering, TU Clausthal, Agricolastr. 6, 38678
Clausthal, Germany
a bohne@ifum.uni-hannover.de, b micke@ifum.uni-hannover.de, c michael.weinmann@tu-clausthal.de,
d bonk@ifum.uni-hannover.de, e bouguecha@ifum.uni-hannover.de, f behrens@ifum.uni-hannover.de
Keywords: Composite, Simulation, Forming
Abstract
Due to the increasing use of multi-material constructions in light-weight applications, numerous
technological questions arise for design, production and simulation technology, which evoke
considerable research requirements. This work describes the organic sheet forming simulation of a
complex shell geometry. The organic sheet used in this work consists of a glass fibre reinforced
thermoplastic matrix, whose mechanical properties are strongly temperature dependent. Therefore a
focus is put on the temperature distribution during the forming phase. It is shown how the strong
change in material properties is accounted for in the simulation model and the preceding material
characterisation. Furthermore it is shown how the gained material data is implemented in the model,
in order to obtain a stable simulation and to predict the temperature distribution as well as the
overall forming behaviour. In order to consider the heat loss during the transport from the oven to
the forming tools, the transport phase is included in the examination. Finally, the simulation results
were validated using experimental data.
Introduction
The use of fibre-reinforced plastics in the automotive sector is increasing steadily. The reason for this
trend is the ambition to reduce the weight of vehicles with the help of light-weight
materials while largely conserving the structural stiffness and strength. The use of thermoplastic based
semi-finished products such as organic sheets makes it possible to use conventional stamping
processes and to implement short cycle times, due to their re-melting properties.
In order to introduce this material class into large scale production, several development steps on
different levels have to be taken. On the production level the main focus is put on the development
of suitable manufacturing processes. In order to analyse these processes, suitable simulation
methods have to be created, which demand appropriate material models including methods to
identify their parameters.
Material characterisation and simulation. The organic sheet modelled in this work consists of
a glass-fibre fabric with a thermoplastic matrix. Both constituents influence the overall forming
behaviour of the organic sheet.
Fabrics show two main deformation characteristics: shearing and bending. The shearing behaviour
can be defined by four critical material parameters: pre- and post-locking shear modulus, the
locking angle and the intraply shear modulus [1]. Up to the locking angle the shear resistance can be
assumed to be negligible [2]. These parameters are characterised either by the picture frame or bias
extension test. Both methods are discussed in [2, 3] and lead to similar results. For the picture frame
test a sheet sample is clamped into a frame and shearing of the fabric is applied. In [1] a modified
picture frame test (Fig. 1a) was developed in order to account for improper alignment of the fibres
with the frame. The bending properties of the composite mainly depend on the stiffness of the
thermoplastic matrix and thereby on the temperature. In case of a change from liquid to solid during
the forming phase the influence of the matrix on the overall forming behaviour increases strongly.
In [4] the sensitivity of wrinkling to the bending stiffness was analysed and the influence of the
temperature on the wrinkling was shown in a forming simulation. The bending properties of
reinforced thermoplastic sheets were measured in the cantilever test, which was performed in an
environmental chamber. In [5] the draping of a double dome geometry was investigated and
simulated. Truss elements in combination with membrane elements were used to map the influence
of the fibres. A constant temperature for the forming phase was assumed. The author concludes that
the temperature evolution over the whole process including the heating phase, the transfer and
forming phase must be taken into account for a precise prediction of the draping process. In [6] the
author couples the thermal and mechanical analysis in order to account for the strong influence of
the temperature on the mechanical properties during the forming phase. The material model
MAT_249 of the commercial simulation code LS-Dyna in combination with a shell element
formulation was used in [7, 8] in order to describe the forming behaviour of a unidirectional
composite shell structure. In [9] the same material model was compared to different models of the
same code with regard to the simulation of the forming behaviour of a 3D-woven preform. The
model was found to properly capture the in-plane response and the draping behaviour. It combines
the influence of the fabric σ_F and the thermoplastic matrix σ_M, whose influences superimpose [10]:

σ = σ_M + σ_F    (1)

The mechanical response of the matrix material is described by an elastic-plastic behaviour. The
influence of the fibres on the stress tensor is described as a sum over the single fibre families [10]:

σ_F = (1/J) · Σ_i f(λ_i) (m_i ⊗ m_i) + (1/J) · Σ_i g_{i,i+1}(φ) (m_i ⊗ m_{i+1})    (2)

The vectors m_k represent the current configuration of the fibre families. The functions f(λ_i) and
g(φ) define the mechanical response of the fibre to the elongation and shearing of the fibres. The
superposition of the fibre and matrix contributions to the stress tensor in combination with a shell
element formulation facilitates the domain discretisation and the evaluation of the simulation results
in comparison to approaches in which the constituents are modelled individually. Therefore the
software LS-Dyna and the material model MAT_249 are used in combination with a thermal
analysis for the comparatively complex shell structure and forming process encountered in this
work.
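As an illustration of the superposition in Eqs. (1) and (2), the following sketch evaluates the fibre contribution to the stress tensor for two fibre families. The response functions f and g are simple placeholders rather than the characterised material data, and the code is not the MAT_249 implementation.

```python
import numpy as np

def fibre_stress(m, f, g, J=1.0):
    """Eq. (2): m is a list of fibre-family vectors (current configuration,
    stretch encoded in the vector length); f and g give the fibre response
    to elongation and to shearing between neighbouring families."""
    sigma = np.zeros((3, 3))
    for i in range(len(m)):
        lam = np.linalg.norm(m[i])                      # fibre stretch
        sigma += f(lam) * np.outer(m[i], m[i]) / J      # elongation term
    for i in range(len(m) - 1):
        cos_phi = (np.dot(m[i], m[i + 1])
                   / (np.linalg.norm(m[i]) * np.linalg.norm(m[i + 1])))
        phi = np.arccos(cos_phi)                        # angle between families
        sigma += g(phi) * np.outer(m[i], m[i + 1]) / J  # shearing term
    return sigma

# Two fibre families of a woven fabric, sheared about 10 degrees from 90
m = [np.array([1.0, 0.0, 0.0]), np.array([0.17, 0.99, 0.0])]
f = lambda lam: 1.0e3 * (lam - 1.0)   # placeholder fibre stiffness
g = lambda phi: 5.0 * np.cos(phi)     # placeholder shear response
sigma = np.zeros((3, 3)) + fibre_stress(m, f, g)  # sigma_M assumed zero here
```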
Forming. Thermoplastics with fibre reinforcement, such as organic sheets, are formed at
elevated temperatures [11]. The temperature affects the mechanical properties strongly during the
forming phase. In order to melt the matrix material and create a formable state the organic sheet
must be heated. Different heating strategies are applicable: radiative, convective and contact
heating. The temperature decreases on the way from the oven to the press due to convection as well
as radiation and during the forming due to tool contact. In [12] the influences of the part thickness,
interlaminar shear, the mold temperature and the transfer time on the forming process were
investigated. It was observed that the heat loss caused by convection during the transfer phase is
important for the temperature evolution during the whole process. The transport and positioning is
realized by means of an appropriate gripper system. The forming is followed by a cooling phase, in
which pressure is applied and the thermoplastic matrix is reconsolidated. A premature cooling can
lead to fibre deflection at internal radii as well as fibre fractures at external radii [13] and can be
prevented by heating the tools to a temperature of 50°C up to 150°C below the processing
temperature of the respective matrix material. A further forming flaw is the occurrence of wrinkles,
which is on one hand caused by the shearing properties of the material at the respective forming
temperatures and on the other caused by the geometry. A double curved geometry increases the risk
of wrinkling, but can be significantly reduced by optimizing the semi-finished blank. A further
remedy is the application of a defined tensile stress during forming [13] introduced by retention
forces, which can be applied by using heated blank holder systems [12] or locally installed grippers
[6]. These have been used in [15] to counteract a severe wrinkling situation caused by a complex
geometry. By additionally reducing the width of the clamps it was possible to increase the local
shearing in the wrinkling area and thereby reduce the size of the wrinkle.
Material characterisation
In order to obtain the thermodynamic and mechanical material properties of the organic sheet, a material characterisation in combination with a simulation of the tests was conducted. The forming behaviour of the organic sheet at temperatures above the melting point is characterised by shearing of the fabric, which was measured in the picture frame test. In order to heat the specimen and hold the temperature at experimental conditions, the picture frame was located in an environmental chamber. One end of the frame is translated by the distance Δl, which introduces a shearing of the fabric. The measured material properties were implemented in the material model; it is assumed that the matrix stiffness is negligible. The simulation model consists of the organic sheet and rigid bodies, which represent the frame. The experimental and simulation results are displayed in Fig. 1b; the experimental results are properly approximated by the simulation model.
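The shear angle introduced by the frame displacement follows from the standard picture frame kinematics; a minimal sketch, with the frame side length as an assumed value:

import numpy as np

def shear_angle(delta_l, l0):
    # Standard picture frame kinematics: a frame of side length l0 is
    # pulled along its diagonal by delta_l; gamma is the fabric shear angle.
    alpha = np.arccos((np.sqrt(2.0) * l0 + delta_l) / (2.0 * l0))
    return np.pi / 2.0 - 2.0 * alpha

# e.g. an assumed frame side length of 200 mm displaced by 30 mm
gamma_deg = np.degrees(shear_angle(30.0, 200.0))   # approx. 13 deg of shear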
In order to capture the influence of the matrix stiffness at temperatures below the melting point, a tensile test was conducted. A rectangular sheet is clamped into a tensile testing machine, heated in a defined area with the help of contact heating and stretched. To minimise the influence of the fabric, the fibres are aligned at 45° to the stretching direction. The recorded Young's modulus and plastic behaviour were implemented in the material model for the temperature range below the melting point.
Figure 1: a) Picture frame test in an environmental chamber and the respective simulation model b)
Simulation and experimental results.
Transfer phase
The transfer of the organic sheet from the oven to the press is a key phase concerning the heat loss [12] and thereby affects the initial temperature distribution of the forming process. In order to compute the initial temperature distribution, convective and radiative heat transfer are taken into account. A sheet thickness of 1.5 mm and a transfer time of ten seconds are assumed. Different transfer velocities and sheet thicknesses are investigated as well.
Figure 2: Schematic transport phase (left); boundary conditions and discretisation of the organic sheet (right)
Modelling approach. During the transfer phase the real relative velocity distribution of the
surrounding air on the sheet surface depends strongly on the specific trajectory from the oven to the
press. During transport the heated sheet deforms under the influence of gravity. In order to obtain a general insight into the process parameters, rather than computing the temperature distribution for a specific trajectory and gripper position, a simplified model is applied in which the local distribution over the sheet plane is neglected. However, in order to account for the temperature gradient across the sheet thickness, a lumped element model is used. The sheet is subdivided into three volumes, one on each outer surface and one for the core of the sheet. An energy balance is created for each volume, leading to the following set of differential equations:
$$m_i \cdot c_v \cdot \dot{T}_i = \sum_j \dot{Q}_{ij} - \dot{Q}_{\mathrm{out},i} \quad (3)$$

$$\dot{Q}_{ij} = \frac{\lambda_{ij}\, A \left(T_j - T_i\right)}{l_{ij}} \quad (4)$$

$$\dot{Q}_{\mathrm{out},i} = \alpha A \left(T_\infty - T_i\right) \quad (5)$$
The internal heat fluxes are described by $\dot{Q}_{ij}$, which depend on the cross section between the layers $A$, the heat conductivity $\lambda_{ij}$ and the distance $l_{ij}$ between the midpoints of the volumes. Heat fluxes due to convection and radiation at the outer layers are given by $\dot{Q}_{\mathrm{out},i}$. For the convective heat transfer, flow over a horizontal plate is assumed. It is defined by the heat transfer coefficient $\alpha$, the outer surface and the temperature difference between the surface and the surrounding air. The respective equations regarding convective heat flux and radiation are taken from [16]. The arising set of differential equations is integrated with the help of an explicit Runge-Kutta scheme. For the evaluation of the results, an average temperature for the organic sheet is computed.
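A minimal numerical sketch of this lumped model is given below; the geometry, material data and the effective heat transfer coefficient (radiation folded into α) are illustrative assumptions, not the values identified in this work.

import numpy as np
from scipy.integrate import solve_ivp

A     = 0.3 * 0.3    # exchange surface per layer [m^2] (assumed sheet size)
d     = 1.5e-3       # sheet thickness [m]
rho   = 1.8e3        # density [kg/m^3] (assumed)
cv    = 1.4e3        # specific heat [J/(kg K)] (assumed)
lam   = 0.35         # through-thickness conductivity [W/(m K)] (assumed)
alpha = 25.0         # effective heat transfer coefficient [W/(m^2 K)],
                     # convection and linearised radiation combined
T_inf = 25.0         # ambient temperature [degC]

m_surf, m_core = rho * A * d / 4, rho * A * d / 2   # lumped layer masses
l_ij = d / 2          # midpoint distance surface <-> core, Eq. (4)

def rhs(t, T):
    Ts1, Tc, Ts2 = T
    q1 = lam * A * (Tc - Ts1) / l_ij   # internal flux core -> upper surface
    q2 = lam * A * (Tc - Ts2) / l_ij   # internal flux core -> lower surface
    qo1 = alpha * A * (T_inf - Ts1)    # outer losses, Eq. (5); negative
    qo2 = alpha * A * (T_inf - Ts2)    # while the sheet is hotter than air
    return [(q1 + qo1) / (m_surf * cv),          # energy balances, Eq. (3)
            (-q1 - q2) / (m_core * cv),
            (q2 + qo2) / (m_surf * cv)]

sol = solve_ivp(rhs, (0.0, 10.0), [280.0, 280.0, 280.0], method="RK45")
T_avg_end = sol.y.mean(axis=0)[-1]   # simple average sheet temperature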
Results of the transfer model. The simulation model is compared to experimental test data in Fig. 3. The simple simulation model captures the temperature at the end of the transport phase quite accurately, despite minor deviations. A variation of the transfer velocity by 1 m/s is less influential than a thickness reduction of 0.5 mm. For a sheet thickness d of 1.0 mm the temperature drops from 280 °C by 55 °C within 15 seconds, reaching the solidification temperature. For the assumed sheet thickness and transfer time, an initial temperature of 255 °C is computed for the forming process.
Figure 3: Comparison of experimental and simulation results (left) and temperature distribution
over time for different velocities and organic-sheet thicknesses (right).
Forming Phase
The transport phase is followed by the forming phase, in which a punch drapes the organic sheet
into a forming die. In order to investigate the technological forming behaviour experiments were
conducted at the Institute of Forming Technology and Machines at the Leibniz Universität
Hannover. The forming simulations are derived from the experimental setup and the numerical
results are compared to samples produced within the experiments. The conducted experiments are
described in detail in [15]. The reference part is a down-scaled battery tray, comprising a tunnel as well as a step geometry. The organic sheet is clamped and connected by springs to a frame. It is positioned between an upper and a lower tool, which close within two seconds. The consolidation time of the organic sheet is not examined. Due to the necessity to account for temperatures below and above the melting point, the influence of the strongly changing matrix stiffness had to be taken into account.
Modelling approach. For modelling the forming phase the material model MAT_249, which was
parameterised in the material characterisation, is applied. The model of the process is shown in Fig. 4. The organic sheet is meshed with quadrilateral shell elements with a size of 1.5 mm. The tools
are meshed with a varying mesh size, accounting for the small radii. The metallic clamps impede
the heating and the melting of the clamped areas.
Figure 4: Simulation model of the investigated draping process
The clamped areas are modelled as rigid bodies, which are connected to fixed bearings by springs. For the fabric the measured shearing properties are defined. It was found that the simulation model shows an enhanced stability when neglecting the plastic behaviour of the matrix. Due to the objective of obtaining a stable model, which allows predicting the temperature distribution as well as the overall forming behaviour also in critical forming stages, a purely elastic material response was chosen. The influence of the temperature change on the mechanical properties was considered by applying a temperature-dependent Young's modulus of the thermoplastic matrix. The definition of the stiffness turned out to be a crucial point in terms of prediction quality and forming model stability. Applying a low stiffness of the thermoplastic matrix returns a better shearing behaviour but leads to high mesh distortions in the forming simulation, which cause premature simulation terminations. Therefore, the following three assumptions about the Young's modulus of the matrix were applied in the simulation model (a minimal sketch of this piecewise definition follows the list):
- Young's modulus as measured in the tensile test for temperatures below 220 °C
- Young's modulus decreases linearly to 0.001 GPa in the temperature range between 220 °C and 245 °C
- Young's modulus is equal to 0.001 GPa above 245 °C
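As announced above, a minimal sketch of this piecewise definition; the cold modulus value is a placeholder for the measured tensile-test data:

def matrix_modulus(T, E_cold=1.2):
    # E in GPa, T in degC; E_cold stands for the tensile-test value and
    # is an assumed placeholder here
    E_melt = 0.001                    # residual stiffness above the melt
    if T < 220.0:
        return E_cold                 # measured tensile-test value
    if T < 245.0:                     # linear decrease between 220 and 245 degC
        return E_cold + (E_melt - E_cold) * (T - 220.0) / 25.0
    return E_melt                     # molten matrix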
For the thermal calculations an isotropic material was assumed. An initial tool temperature of 110 °C was chosen. The surface temperature of the tools is assumed to be constant due to the high heat capacity and thermal conductivity of tool steel compared to the organic sheet. A total process time of two seconds is assumed. Due to long simulation times, the process time is scaled to 10 ms, assuming that the influence of inertia does not perturb the results. The energy equation is scaled in order to artificially generate the temperature distribution of the real process. The simulation model does not comprise a model for fibre fracture. Thus, the local weakening of the organic sheet by possible breakage of the fibres is not taken into account. Possible bonding effects of the thermoplastic matrix have also not been incorporated in the model. Therefore, a bonding of the edges, which goes along with an increased structural stiffness, cannot be considered either.
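The quoted process-time scaling can be cross-checked with a short calculation; a minimal sketch, assuming the scaling preserves the Fourier number of the real process (the actual scaling of the energy equation in the solver may differ):

# Compressing the real process time of 2 s into 10 ms of simulated time
# (factor s = 200) preserves the real temperature field if the Fourier
# number Fo = lambda * t / (rho * c * L^2) is kept constant, i.e. if the
# thermal conductivity is multiplied by s (equivalently, rho*c divided
# by s). Generic reasoning sketch, not the exact solver implementation.
t_real, t_sim = 2.0, 0.01
s = t_real / t_sim                  # time scale factor, here 200
lam_real = 0.35                     # assumed conductivity [W/(m K)]
lam_scaled = lam_real * s           # conductivity used in the scaled run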
Results of the forming simulation. The simulation model shows a severe wrinkling over the tunnel
geometry, which was found in the experiments as well. Figure 5 shows a comparison of the
formation of the wrinkles in the simulation and the experiments.
Figure 5: Series of the wrinkling formation (organic sheet 0/90°) [13]
The arising wrinkles are compressed by the tool, leading to highly distorted elements, which
strongly weakens the stability of the simulation model. A more detailed comparison of the
experimental results and simulation results regarding the wrinkle is found in [15]. The fabric in the
clamped areas is strongly sheared in the experiment in order to adapt to the applied retention forces.
Due to the artificially high matrix stiffness in the simulation model, the shear angle is underpredicted in the clamped area. Nevertheless, the overall shear angle prediction is acceptable. For
instance, the shear angle distribution in the step areas is properly predicted, which is shown in Fig.
6.
Figure 6: Simulated shear angle distribution and comparison between experimental and simulation results
In Fig. 7 the simulated temperature distribution at the end of the forming phase is shown. The temperature in the flank region decreases quickly; after one second it falls below the melting temperature. The bottom area and flank area start to cool down simultaneously until the sheet geometry adapts to the gap between punch and matrix. This bends the sheet, which interrupts the contact and thereby the cooling of the bottom area. Further cooling starts the moment the organic sheet comes into contact with the tunnel area. Due to these different contact situations for different areas of the sheet, the temperature distribution becomes inhomogeneous. This leads to a heterogeneous stiffness distribution, in which the material flow in the cold areas is strongly inhibited, forcing the warm areas to deform. The deformation leads to the shear angle distribution shown in Fig. 6.
Figure 7: Temperature distribution over the scaled battery tray (left) and temperature decrease during the forming process (right)
Conclusion
This research has presented the development of a simulation model for the forming process described in [15]. The transfer phase was included in the examination in order to obtain a more precise temperature distribution for the subsequent forming process. A method to deal with the strong stiffness change of the thermoplastic matrix on the one hand and to obtain a stable model on the other has been presented. Despite minor deviations in clamped areas, an acceptable shear angle distribution was computed. The results showed that a heterogeneous temperature distribution is obtained for the assumed forming process. This inhomogeneous temperature distribution, which falls partially below the melting temperature of the PA6, could lead to an inhomogeneous consolidation in the workpiece.
Acknowledgements
This research and development project is funded by the German Federal Ministry of Education and
Research (BMBF) within the Forschungscampus "Open Hybrid LabFactory" and managed by the
Project Management Agency Karlsruhe (PTKA). The author is responsible for the contents of this
publication.
References
[1] G. Lebrun, M. N. Bureau, J. Denault, Evaluation of bias-extension and picture-frame test methods for the
measurement of intraply shear properties of PP/glass commingled fabrics, Composite Structures 61 (2003) 341-352
[2] I. Taha, Y. Abdin, S. Ebeid, Comparison of Picture Frame and Bias-extension Tests for the Characterisation of Shear Behaviour in Natural Fibre Woven Fabrics, Fibers and Polymers (2013) 338-344
[3] W. Lee, J. Padvoiskis, E. de Luycker, P. Boisse, F. Morestin, J. Chen, J. Sherwood, Bias-extension of woven composite fabrics, Int. J. Mater. Form. 1 (2008) 895-898
[4] B. Liang, N. Hamila, M. Peillon, P. Boisse, Analysis of thermoplastic prepreg bending stiffness during manufacturing and of its influence on wrinkling simulations, Compos. A: Appl. Sci. Manuf. 67 (2014) 111-122
[5] P. Harrison, R. Gomes, N. Curado-Correia, Press forming a 0/90 cross-ply advanced thermoplastic composite using
the double-dome benchmark geometry, Compos. A: Appl. Sci. Manuf. 54 (2013) 56-69.
[6] E. Guzman-Maldonado, N. Hamila, N. Naouar, Simulation of thermoplastic prepreg thermoforming based on a
visco-hyperelastic model and a thermal homogenization. Materials and Design 93 (2016) 431-442
[7] Behrens, B.-A.; Vucetic, M.; Neumann, A.; Osiecki, T.; Grbic, N. (2015): Experimental test and FEA of a sheet metal forming process of composite material and steel foil in sandwich design using LS-DYNA, 18th International ESAFORM Conference on Material Forming, 15.04.2015 – 17.04.2015, Graz, Key Engineering Materials Vols. 651-653 (2015) 439-445
[8] Behrens, B.-A.; Grbic, N.; Bouguecha, A.; Vucetic, M.; Neumann, A.; Osiecki, T. (2015): Validation of the FEA of a Sheet Metal Forming Process of Composite Material and Steel Foil in Sandwich Design, WGP Kongress, 07.09.2015 – 08.09.2015, Applied Mechanics and Materials Vol. 794 (2015) 75-80
[9] Scarlat, G., Ramgulam, R., Martinsson, P., Kasper, K., Bayraktar, H., Material characterisation of a 3D-woven carbon fibre preform at macro-scale level for manufacturing process modelling, 11th European LS-Dyna Conference 2017, Salzburg, Austria (2017)
[10] LS-DYNA, KEYWORD USER’S MANUAL VOLUME II Material Models LS-DYNA R10.0 (2017) 1226-1233
[11] Behrens B.-A., Hübner S., Neumann A., Forming sheets of metal and fibre-reinforced plastics to hybrid parts in
one deep drawing process. Procedia Engineering 81 (2014) 1608-1613
[12] H. Lessard, G. Lebrun, A. Benkaddour, X. T. Pham, Influence of process parameters on the thermostamping of a [0/90]12 carbon/polyether ether ketone laminate, Compos. A: Appl. Sci. Manuf. 70 (2015) 59-68
[13] Bhattacharyya, D.: Composite Sheet Forming, Elsevier Science B.V. (1997)
[14] Hineno, S., Yoneyama, T., Tatsuno, D., Kimura, M., Shiozaki, K., Moriyasu, T., Okamoto, M., Nagashima, S., Fibre deformation behavior during press forming of rectangle cup by using plain weave carbon fibre reinforced thermoplastic sheet, Procedia Engineering 81 (2014) 1614-1619
[15] B.-A. Behrens, A. Raatz, S. Hübner, C. Bonk, F. Bohne, C. Bruns, M. Micke-Camuz: Automated Stamp Forming
of Continuous Fibre Reinforced Thermoplastics for Complex Shell Geometries, 1st Cirp Conference on Composite
Materials Parts Manufacturing, Procedia CIRP (2017)
[16] Gesellschaft, VDI: VDI-Wärmeatlas, 10. Aufl., Wiesbaden: Springer Berlin Heidelberg, 2005.
Production Chain of Hot-Forged Hybrid Bevel Gears from
Deposition-Welded Blanks
Anna Chugreeva1,a, Anas Bouguecha1,b and Bernd-Arno Behrens1,c
1 Institute of Forming Technology and Machines, An der Universität 2, 30823 Garbsen, Germany
a chugreeva@ifum.uni-hannover.de, b bouguecha@ifum.uni-hannover.de, c behrens@ifum.uni-hannover.de
Keywords: Forging, Hot Deformation, Hybrid Bevel Gear
Abstract. The manufacturing of hybrid components by means of bulk metal forming represents a
promising method to produce near-net-shape components with complex geometries and outstanding
mechanical properties within just a few processing steps. However, the design of corresponding
processes is often challenging due to various technical aspects. This paper deals with a process
chain for the production of a hybrid bevel gear by means of tailored forming technology with a
focus on die forging. It describes main challenges and corresponding solution approaches within
forming stage design. In addition to process design issues, first forged hybrid gears (C22/41Cr4) as
well as experimental results regarding the quality of the joining zone are presented and discussed.
Introduction
Due to high durability and outstanding mechanical properties, hot-forged components are
primarily employed as key components in intensively loaded application areas (e.g. power train,
combustion engines). Conventional forging processes involve workpieces usually made of a single
material. Here, the choice of material represents a compromise in order to withstand local
component-specific loads and meet the overall technical requirements. Nevertheless, monolithic
materials are limited to their material-specific properties. In addition, the automotive and aircraft
industries face steadily increasing technical demands mainly driven by new trends (e.g. engine
downsizing, lightweight design and reduction of CO2 emission) [1]. In contrast to mono-material
parts, hybrid component design offers ample potential to produce components with an extended
functionality, higher durability and specific property profiles which are optimised according to a
particular application area [2]. Thus, the development of efficient process chains for the production
of multi-material components has been gaining in importance in recent years.
For this purpose, a Collaborative Research Centre 1153 (CRC 1153) “Tailored Forming” was
started at Leibniz Universitaet Hannover. This study was carried out within the framework of CRC
1153 and is focused on the development of forming technology for hot-forged hybrid bevel gears
made of a combination of two different steel alloys. At first, a general description of the process
chain for Tailored Forming technology is given. Subsequently, observed challenges as well as
possible design solutions for the investigated hybrid forging process are discussed. Besides the
developed forming tool system and an appropriate heating strategy, hot-forged hybrid bevel gears
are presented. Finally, first results of metallographic investigations for both deposition-welded blanks and forged hybrid gears are shown. In the last section, a brief summary of the study is
provided together with some concluding remarks.
Survey of the current literature
Multi-material or hybrid design can be applied for all parts requiring different properties in
separate part regions. Combining the individual benefits of several materials in a single component,
this conception can offer the production of application-optimized parts with improved performance.
*
Submitted by: Anna Chugreeva
21
Several basic application examples of hybrid components and aspects of their forming behaviour
have already been investigated within both scientific and industrial research projects. In previous
studies the production of multi-material compounds is commonly presented by the combination of
forming and joining processes in one stage. For example, Leiber Group GmbH & Co. KG
specialises in automotive lightweight engineering solutions and deals with the hybrid forging of
steel-aluminium compound components. Their product portfolio includes hybrid wheel hubs,
connecting rods and brake discs, where the material bond involves both force-fit and adhesive
joints [3]. Wohletz and Groche performed the joining of steel and aluminium raw parts
(C15/AW-6082 T6) by means of combined forward and cup extrusion at ambient and elevated
temperatures [4]. It has been shown that the oxide scale emerging at elevated forming temperatures has a negative impact on the final joint interface quality. In a study of Kong et al., a compound of
stainless steel (AISI 316L) and aluminium (6063) was produced by forge welding [5]. They have
stated that among the other process parameters the forming temperature has the most decisive
influence on the resulting quality and the tensile strength of the joint. Kosch investigated the
compound forging of hybrid workpieces in the context of non-uniform temperature distribution
between steel and aluminium raw parts and the emerging intermetallic phases [6]. The acquired
results were transferred to the forging of hybrid steel/aluminium gears (S235JR/EN AW-6082).
Politis et al. studied the cold forming of bi-metal gears consisting of lead core material and a copper
periphery [7]. Essa et al. studied the cold upsetting process of bimetallic billets made of a steel/steel
combination (C15E/C45E) [8]. Further research works presented the joining of sheet and bulk metal
specimens by plastic deformation using impact extrusion and forging processes [9, 10].
The production of hybrid components from pre-joined semi-finished workpieces is well-established in sheet metal forming and successfully implemented in industry as tailored blanks technology [11]. The use of tailored blanks in bulk metal forming is quite novel. In contrast to
joining by plastic deformation, the objective of this method is improving mechanical properties and
microstructure in already existing joints during the forming process. Klotz et al. investigated the
production of bimetallic gas turbine discs combining two different Ni-based superalloys from hot
isostatically pressed billets by means of isothermal forging [12]. In [13] Foydl et al. employed the
co-extrusion of non-centric steel-reinforced aluminium profiles with subsequent forging.
Domblesky et al. studied the forgeability of friction-welded bimetallic pre-forms by hot
compression tests combining copper, aluminium and steel [14]. Frischkorn et al. examined the hot
forging behaviour of further material combinations comprising steel, aluminium, titanium and
Inconel [15]. Wang et al. studied the hot forging of bi-metallic billets (C15/316L) both numerically
and experimentally [16]. The billets were produced by weld cladding and subsequently deformed by
upsetting.
General Aspects of the Process Chain for Tailored Forming Technology
The general structure of the entire process chain for the Tailored Forming technology is depicted
in Fig. 1 and contains three basic process steps. At first, raw monolithic metal parts are combined into bimetallic workpieces (steel/steel or steel/aluminium) with a simple pre-form shape by means of diverse joining processes (e.g. compound extrusion, friction or deposition welding). The produced hybrid workpieces are subsequently formed into near-net-shape hybrid components via e.g. impact extrusion, cross wedge rolling or die forging. In order to achieve a
similar forming behaviour within the multi-material compound consisting of dissimilar materials
(e.g. steel/aluminium) and thus to ensure a correct material flow, a specific heating strategy must
also be developed considering the varying thermo-physical properties of the individual materials.
For the steel/steel combination studied in this work, only slight differences in flow stress are to be expected at an equal hot forming temperature (approx. 1200 °C). Hence, the investigated steel/steel combination exhibits almost equal forming behaviour during the forging process, provided that the compound has a uniform temperature profile [17]. Due to this fact, hybrid blanks made of two steels could be heated homogeneously in the radial direction to the conventional forging temperature. Subsequent processes such as heat treatment and machining finalise the process chain and must often also be adjusted to multi-material applications.
Figure 1: Process chain of “Tailored Forming” technology
Hybrid Blanks and Bevel Gear
The current study is focused on the process route for the production of a hybrid bevel gear made
of a steel/steel combination. The bevel gear represents a hybrid concept to increase the efficiency of material utilisation by reducing the requirement for more expensive or rare materials. Due to the loads prevalent in gear coupling, some gear areas are exposed to higher strain than others. In contrast to the inner region, the contact area between meshing gears and the tooth root is exposed to higher loads. This requires a higher performance and wear resistance of the material used in the critical surface regions to counteract the high tribological stresses (e.g. high-tensile 41Cr4 or high-alloyed steels such as X45CrSi9-3). For the inner section, which primarily experiences structural stresses, lower-performance materials with a high toughness, ductility and breaking resistance can be used (e.g. low-alloyed steel C22). The investigated bimetallic bevel gear is schematically illustrated in Fig. 2 (right).
Deposition-Welded Workpiece (41Cr4 layer on C22 core): Outer Diameter 30 mm, Core Diameter 27 mm, Height 79 mm
Hybrid Bevel Gear: Number of Teeth 15, Outer Diameter 62 mm, Height 30 mm
Figure 2: Coaxial deposition-welded blank (left) and forged hybrid bevel gear (right)
The corresponding semi-finished workpieces were designed in accordance with the load
collective prevalent in the final parts (Fig. 2 (left)). For the production of hybrid workpieces,
cylindrical blanks of the base material are coated with a metallic layer by means of deposition
welding. In this study, the investigated multi-material blanks were produced by means of plasma powder deposition welding, whereby the high-tensile steel 41Cr4 was welded onto the core material. The welding process was carried out with a rotating motion over the cylindrical core made of a low-alloyed steel C22 with a diameter of 27 mm. Within a subsequent hot forging process, the deposition-welded workpieces with coaxial material arrangement were formed into hybrid bevel gears.
Design of the Tailored Forming Process
As mentioned above, hybrid parts produced by bulk forming processes offer many benefits, yet the manufacturing of forged multi-material components is quite challenging. Therefore, the corresponding process development can pose several specific issues. In this case, the forging stage
is not only responsible for faultless material flow and accurate mould filling, but also for improving
local mechanical properties and the quality of the joining zone. Optimal forming behaviour is
realised by the accurate design of the forming tool system as well as by the development of an
appropriate heating concept [18].
In order to facilitate material flow and reduce flow stress during forging, the hybrid workpieces
have to be heated up to the material-specific forming temperature. For this purpose, inductive
heating can offer the following advantages compared to conventional furnace heating: short heating
time leading to a shorter cycle time, lower scaling and surface decarburization [19]. With regard to
workpiece geometry, a heating concept with an outer induction coil as depicted in Fig. 3 was
employed.
Figure 3: Concept for induction heating of hybrid blanks
For induction heating, a medium-frequency generator Huettinger TruHeat MF 3040 with working frequencies between 5 and 30 kHz and a power range of up to 40 kW was used. To realise an approximately homogeneous heating with the available system, occurring skin effects were minimised with the help of a two-step heating strategy. Here, the first intensive heating step is followed by low-power heating to allow for a uniform temperature distribution. In addition, during the development of the heating strategy, an inhomogeneous induction field was observed due to the coil end effect, and thus an inhomogeneous heating of the blank depending on its initial position in the induction coil. The arising temperature differences were used in order to heat up the upper part of the blank more intensively. According to optical temperature measurements on the workpiece surface after the heated workpiece was ejected from the induction coil, the final temperature gradient between reference points on the upper and lower part was approx. 100 °C. In this way, better mould filling, especially in the geared part, was achieved.
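The skin effect that motivates the two-step strategy can be estimated from the electromagnetic skin depth; the material data below are typical literature values for hot steel above the Curie temperature, not measured properties of the blanks.

import math

def skin_depth(rho_el, mu_r, f):
    # delta = sqrt(rho / (pi * f * mu_0 * mu_r)), all quantities in SI units
    mu0 = 4.0e-7 * math.pi
    return math.sqrt(rho_el / (math.pi * f * mu0 * mu_r))

# hot steel above the Curie point (approx. 1200 degC): assumed values
# rho_el ~ 1.2e-6 Ohm m, mu_r ~ 1
for f in (5e3, 30e3):   # working frequency range of the generator [Hz]
    print(f"{f/1e3:.0f} kHz: delta = {skin_depth(1.2e-6, 1.0, f)*1e3:.1f} mm")

For the blank diameter of 30 mm this yields penetration depths of a few millimetres, which makes the intensive first heating step concentrate energy near the surface and explains why a low-power equalisation step is needed.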
After the heating stage, workpieces are automatically transported to the forging press by means
of a programmable robot arm to ensure high reproducibility and avoid heat losses during transfer.
For the manufacturing of a hybrid bevel gear, a single-stage forming process with the tool system
depicted in Fig. 4 has been developed. The corresponding tool system is constructed modularly and
in general consists of a lower die and a pre-stressed geared die which is installed in the upper tool in
order to ensure smooth removal of the forged gears.
Figure 4: Tooling system concept for die forging of hybrid bevel gears
Experimental Investigations
For the experimental forging tests, the forming tool system depicted in Fig. 4 was integrated in a
fully automated forging cell at the Institute of Forming Technology and Machines (IFUM) (Fig. 5
(left)). The forging unit at the IFUM consists of an inductive heating system, a robot arm (Kuka KR16) with a specific gripper device and a screw press Lasco SPR500 with a maximum nominal forging force of 5000 kN. Hybrid blanks are heated up individually in the induction coil; the heating step takes about 50 seconds. After the required temperature is reached, the blanks are automatically ejected from the induction coil and transported to the forging press. Fig. 5 (right) shows a bevel gear forged with the developed tool system directly after the forging process. The forged parts displayed complete mould filling without any outward forging defects (e.g. folds), even in the crucial geared area.
Figure 5: Fully automated forging cell based on Lasco SPR500 screw press for closed-die
forging of hybrid components (left) and a hybrid bevel gear directly after the forging process
(right)
In order to investigate the bonding quality, the forming-induced evolution of the micro- and macrostructure of deposition-welded blanks and forged parts has been investigated metallographically. The corresponding results are presented in Fig. 6 and Fig. 7, respectively. The resulting transition zone contour between the two steel materials can easily be recognised in the metallographic sections. The micrographs show a good material compound after deposition welding without any material separation, damage or microscopic cracks. The welded layer shows a primarily pearlitic microstructure with a certain amount of ferrite concentrated at grain boundaries. The core material (C22) exhibits so-called Widmannstaetten ferritic patterns directly at the interface layer (Fig. 6 (bottom left)), induced by high cooling rates after the welding process. The ferritic-pearlitic structure in the cylinder core is a common microstructure for the steel C22.
The interface zone has a wavy contour in the longitudinal section, caused by closely spaced weld seams (Fig. 6 (upper left)). The subsequent forging process shapes the wavy contour into a sharp-angled form with a reduced distance between the single "waves" (Fig. 7 (upper left)).
Figure 6: Macro- and micrographs of the joining zone in a deposition-welded workpiece
According to the micro- and macrographs of the tooth area (Fig. 7 (bottom left)), the welded layer at the tooth root is significantly thinner than in the tooth crest zone. This can be explained by the higher plastic strains in the tooth root area. The resulting contour line flow between the two steels can be characterised as inhomogeneous and asymmetrical and is caused by the initial "wavy" form of the interface zone. Therefore, the resulting interface zone contour will differ depending on the position of the cutting surface, for both the longitudinal and the transversal section. At the same time, the achieved "wavy" form of the bonding zone may lead to a larger total joining surface area and thus to a better bonding quality.
Figure 7: Macro- and micrographs of the joining zone in a forged bevel gear
In addition, the hardness was measured in Vickers HV 0.1 before and after forming for both the welded and the core material. The values of 5 indentations were averaged arithmetically for each material. While the core material has approximately the same hardness after welding (172 HV) as after forging (168 HV), the welded layer shows a reduced hardness after forging (215 HV) compared to the initial state (275 HV). This difference can be explained by the high cooling rates during the welding process compared to the sand cooling used after the hot forging process.
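The averaging procedure can be illustrated with a short snippet; the individual indentation readings are made-up numbers, chosen only so that their mean matches the 215 HV quoted above.

import statistics

# hypothetical HV 0.1 readings for the welded layer after forging
weld_after_forging = [212, 218, 214, 217, 214]
hv_mean = statistics.mean(weld_after_forging)    # -> 215 HV, cf. text
hv_sdev = statistics.stdev(weld_after_forging)   # scatter of the indentations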
Outlook
In addition to the process parameter studies in both forging and welding steps, an integration of a
tailored heat treatment with automatic transfer of the forged parts to a gas nozzle cooling system is
planned in future works. Hence, the parts will undergo an integrated heat treatment directly from the forging heat, also saving reheating stages. Furthermore, the total impact of such a specific interface
zone geometry on the final bonding strength as well as on the global durability of the hybrid
component will be investigated. For this purpose, several experimental tests at real operating loads
will be carried out by a partner institute. In addition, a short parameter study focused on the
variation of the welded layer thickness and different diameters of the core cylinder is planned in
order to clarify their impact on global component quality.
Summary
To summarise, it can be stated that the used plasma powder deposition-welding process leads to
an appropriate bonding of two steel materials which even withstands the extensive deformations of
hot forging processes. In accordance with the first metallographic investigations, neither surface
defects nor micro-cracks have been detected in the joining zone or the deposited material.
Furthermore, significant grain refinement was observed after the forging process in both materials
of the compound (Fig. 7 (bottom right)).
In general, the design of closed-die forging processes is quite challenging (high mechanical and
thermal stresses acting simultaneously; complex material flow and mould filling; possible forging
faults etc.). Particularly in the context of hybrid forging, the local evolution of the interface zone as
well as the behaviour of the entire multi-material compound have to be taken into account in
addition to the general forging issues. During the initial experimental study on the closed-die hot
forging of hybrid bevel gears, several positive results have been achieved (e. g. an appropriate
component quality regarding the joining zone without any macroscopic forging defects and an
almost complete mould filling). Thus, the presented results demonstrate the wide potential of hybrid forging technology and offer many starting points for further investigations, especially regarding forming process optimisation.
Acknowledgement
The results presented in this paper were obtained within the Collaborative Research Centre 1153
“Process chain to produce hybrid high-performance components by Tailored Forming” in subproject
B2. The authors would like to thank the subproject A4 for supplying deposition-welded hybrid
workpieces and the German Research Foundation (DFG) for the financial and organisational
support of this project.
References
[1] M. Goede, M. Stehlin, L. Rafflenbeul, G. Kopp, E. Beeh, Super Light Car—lightweight
construction thanks to a multi-material design and function integration, European Transport
Research Review 1/1 (2009) 5-10.
[2] D. J. Politis, L. Jianguo, T. Dean, Investigation of material flow in forging bi-metal
components, Steel Research International 14 (2012) 231-234.
[3] R. Leiber, Hybridschmieden bringt den Leichtbau voran, Aluminium Praxis 78 (2012) 8.
[4] S. Wohletz, P. Groche, Temperature Influence on Bond Formation in Multi-material Joining by
Forging, Procedia Engineering 81 (2014) 2000-2005.
[5] T. F. Kong, L. C. Chan, T. C. Lee, Experimental Study of Effects of Process Parameters in Forge-Welding Bimetallic Materials: AISI 316L Stainless Steel and 6063 Aluminium Alloy, Strain, 45/4 (2009) 373-379.
[6] K.-G. Kosch, Grundlagenuntersuchungen zum Verbundschmieden hybrider Bauteile aus Stahl
und Aluminium, PhD thesis, Leibniz Universität Hannover, 2016.
[7] D. J. Politis, J., Lin, T. A. Dean, D. S. Balint, An investigation into the forging of Bi-metal
gears, Journal of Materials Processing Technology, 214/11 (2014) 2248-2260.
[8] K. Essa, I. Kacmarcik, P. Hartley, M. Plancak, D. Vilotic, Upsetting of bi-metallic ring billets,
Journal of Materials Processing Technology, 212/4 (2012) 817-824.
[9] S. Hänisch, S. Ossenkemper, A. Jäger, A. E. Tekkaya, Combined deep drawing and cold forging: an innovative hybrid process to manufacture composite bulk parts, Conference, New Developments in Forging Technology, 2013.
[10] H. Kache, M. Stonis, B.-A. Behrens, Hybridschmieden – Monoprozessuales Umformen und Fügen metallischer Blech- und Massivelemente, wt Werkstattstechnik online, 103 (2013) 257-262.
[11] M. Merklein, M. Johannes, M. Lechner, A. Kuppert, A review on tailored blanks – Production,
applications and evaluation, Journal of Materials Processing Technology, 214/2 (2014) 151–164.
[12] U. E. Klotz, M. B. Henderson, I. M. Wilcock, S. Davies, P. Janschek, M. Roth, P. Gasser, G.
McColvin, Manufacture and microstructural characterisation of bimetallic gas turbine discs,
Materials science and technology, 21/2 (2005) 218-224.
[13] I. Pfeiffer, A. Foydl, M. Kammler, T. Matthias, K.-G. Kosch, A. Jaeger, N. B. Khalifa, A. E.
Tekkaya, B.-A. Behrens, Compound Forging of Hot-extruded Steel-reinforced Aluminum Parts,
Steel Research International (2012) 159-162.
[14] J. Domblesky, F. F. Kraft, Metallographic evaluation of welded forging preforms, Journal of
Materials Processing Technology, 191/1 (2007) 82-86.
[15] C. Frischkorn, A. Huskic, J. Hermsdorf, A. Barroi, S. Kaierle, B.-A. Behrens, L. Overmeyer,
Investigation on a new process chain of deposition or friction welding and subsequent hot forging,
Materialwissenschaft und Werkstofftechnik, 44/9 (2013) 783-789.
[16] J. Wang, L. Langlois, M. Rafiq, R. Bigot, H. Lu, Study of the hot forging of weld cladded work
pieces using upsetting tests, Journal of Materials Processing Technology, 214/2 (2014) 365-379.
[17] B.-A. Behrens, K.-G. Kosch, Development of the heating and forming strategy in compound
forging of hybrid steel-aluminum parts, Materials Science and Engineering Technology, 42 (2011)
973-978.
[18] B.-A. Behrens, A. Bouguecha, C. Frischkorn, A. Huskic, A. Stakhieva, D. Duran, Tailored forming technology for three dimensional components: Approaches to heating and forming, Conference, ThermoMechanical Processing TMP 2016.
[19] B.-A. Behrens, F. Holz, Verbundschmieden hybrider Stahl-Aluminium-Bauteile, Materialwissenschaft und Werkstofftechnik 39 (2008) 599-603.
Adaption of the Tool Design in Micro Deep Hole Drilling of Difficult-to-Cut Materials by High-Speed Chip Formation Analyses
Marko Kirschner1, a, Sebastian Michel1, b, Sebastian Berger1, c
and Dirk Biermann1, d
1 Institute of Machining Technology, Baroper Straße 303, 44227 Dortmund
a kirschner@isf.de, b michel@isf.de, c berger@isf.de, d biermann@isf.de
Keywords: Deep hole drilling, Chip, Analysis
The chip removal in deep hole drilling with smallest diameters represents a major challenge caused
by the limited cross sections of the chip flutes. The production of unfavourable chip forms leads to
an accumulation of the chips in the flutes and results in spontaneous tool failures. The application of
micro deep hole drilling is even more demanding if the machining of difficult-to-cut materials like nickel-based alloys, characterised by high strength values and fracture strains, is required. In this
paper, an enhanced method of analysis to adapt and optimise the tool design with respect to the chip
formation in single-lip deep hole drilling with smallest diameters is presented. The fundamental idea
of the new analytical technique is the substitution of the surrounding, non-transparent bore hole wall
by transparent acrylic glass. This approach facilitates the visualisation of the chip formation at the
cutting edges as well as the chip removal along the chip flutes by means of high-speed microscopy.
To allow a constant observation of the chip formation and removal process the experiments are
conducted with stationary cutting tools and rotating material samples embedded into acrylic glass.
The integration of the experimental setup into a conventional deep hole drilling machine as well as
the realisation of the visibility despite the constant supply of deep hole drilling oil are shown.
Furthermore, the high-speed chip analysis demonstrates the crucial limitations regarding the
achievable productivity and process stability using the standardised cutting tool design for single-lip
deep hole drilling of nickel-based alloys. Based on these findings, important modifications of the
tool cutting edge angles and the centre distance are derived and thus significant process
improvements have been reached. The results on the essential chip formation are complemented by
analysis of the mechanical tool loads, the tool wear, the surface quality as well as the dimensional
and shape tolerances.
State of the art in micro deep hole drilling
Single-lip deep hole drilling as well as twist deep hole drilling is used for micro deep hole drilling or smallest-diameter deep hole drilling, defined as deep hole drilling with tool diameters d ≤ 2 mm [1,2]. Due to different development trends in various
industrial sectors, these processes are steadily gaining importance. Typical applications are the
manufacturing of fuel pipes in injectors for the automotive industry, the production of cannulated
implants and surgical instruments in the medical industry as well as the machining of valve, control
and vent holes in hydraulic and pneumatic components in the aircraft industry and in the field of
general mechanical engineering [3]. The term “micro deep hole drilling” does not refer to the tool
diameter, but rather to the strictly limited feed per revolution in a range of only a few microns.
Because of the low tool rigidity and limited achievable feed rates, the material separation usually
takes place in front of the cutting edge rounding under negative effective rake angles. The ratio
between the undeformed chip thickness h and the cutting edge rounding rβ considerably correlates with the resulting mechanical tool loads in micro drilling [4]. In this matter, a decreasing ratio h/rβ < 1 leads to a non-linear increase of the cutting force and the feed force [5,6]. In this engagement situation between the cutting tool and the workpiece material, undesired force components due to material squeezing as well as ploughing are dominant.
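As an illustration of this criterion, a minimal sketch follows; the feed and edge-rounding values are illustrative assumptions, with the undeformed chip thickness taken as the feed per revolution of the single cutting edge.

# h/r_beta criterion: ratios below 1 indicate the ploughing-dominated
# regime with a non-linear force increase [5,6]; numbers are assumed.
def chip_thickness_ratio(feed_um_per_rev, r_beta_um):
    return feed_um_per_rev / r_beta_um

ratio = chip_thickness_ratio(4.0, 8.0)   # 0.5 < 1: ploughing-dominated regime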
Furthermore, the ratio h/rβ has an influence on the chip formation [2,7]. The production of favourable chip forms is of decisive importance in micro deep hole drilling due to the limited cross sections of the chip flutes. Besides the cutting edge rounding, the point geometry has an important impact on the resulting chip form. In single-lip deep hole drilling the common tool geometries are defined in accordance with the standard VDI 3208 [8]. The widespread point geometries consist of two cutting edges: the inner and the outer cutting edge. The angles of incidence depend on the diameter. Single-lip deep hole drills with smallest diameters are characterised by angles of incidence of κ1 = 50° at the inner cutting edge and κ2 = 120° at the outer cutting edge. For functional reasons as well as to improve process productivity, a range of special point geometries has been developed, especially for applications in the diameter range of D = 1 … 8 mm. Fig. 1 gives an overview of the standard point geometries and selected special point geometries of single-lip deep hole drills.
Figure 1: Standard and special point geometries of single-lip deep hole drills [2]
The special point geometry (d) is characterised by a bowed outer cutting edge and a short inner cutting edge. The bowed outer cutting edge allows the machining of a rounded hole bottom. Furthermore, the bowed outer cutting edge can feature a chip breaker in the form of a notch to generate smaller chips, which are easier to remove and thus enable higher feed velocities. The special point geometries (e) and (f) have been developed to increase process productivity in comparison to the standard point geometries. Special point geometry (e) has an additional middle cutting edge with a length of 5 - 20 % of the tool diameter and an angle of incidence of typically κ3 = 90°. Thereby, stresses are induced into the chip, favouring chip breakage. The special feature of the point geometry (f) is a modified rake face geometry: a positive rake angle is generated by means of a chip former. Chip breakage occurs when the chip leaves the lead face and hits the breakage section, which is designed depending on the workpiece material as well as the cutting parameters.
High-speed chip formation analysis
Due to the enclosed operating zone in deep hole drilling, the chip formation is of paramount importance with respect to the process stability. Unfavourable chip forms can initiate chip clogging in the flute and lead to sudden tool breakage. Consequently, a new analysis methodology has been developed to visualise the chip formation at the corresponding cutting edges and allow, for the first time, a closer look at the chip removal along the flutes in smallest-diameter deep hole drilling [2,9]. The concept of the analysis technique considers the integration into a conventional deep hole drilling machine (Fig. 2). To realise the working method of a stationary tool and a rotating workpiece, and thus enable a continuous observation of the chip formation and chip removal along the flutes perpendicular to the rake face, the high-frequency spindle is used to rotate the workpiece samples mounted in transparent acrylic glass carriers. The transparent acrylic glass carriers are clamped by a collet chuck as well as a clamping cone on the opposite side. The feed motion is reversed and conducted by the NC cross table, which is equipped with a 4-component dynamometer holding a coolant adapter and the cutting tool. To guarantee sufficient cooling and lubrication in deep hole drilling of difficult-to-cut materials, an inner high-pressure cooling is indispensable. Therefore, the high-pressure pump of the machine is connected to the coolant adapter via high-pressure hoses and supplies the operating zone with deep hole drilling oil during the analysis.
Figure 2: Design and experimental setup of the high-speed chip formation analysis
In the experimental analysis a high-speed camera is aligned perpendicular to the rotating acrylic
glass carriers. The high-speed camera is attached to the NC-cross table by an adjustable fixture. By
this means, the feed motion of the cutting tool is synchronised with the path of the high-speed
camera. The illumination of the operating zone as well as the functional tool faces arranged at
different angles is realised by several gooseneck lamps. Prior to the high-speed analysis, the process parameters are adjusted and the measurement chain is configured. The chip formation is recorded with a frame rate of 10,000 fps at a magnification of 30x; the exposure time is 1/20,000 s. Fig. 3 shows the contact situation in the high-speed analysis of single-lip deep hole drilling from the high-speed camera's perspective. The methodology and the tailored components are laid out for a high-speed analysis with a tool diameter of d = 2 mm. The tool is
carefully inserted in the pre-drilled deep hole in the acrylic glass carrier, before the test material
sample is drilled to a depth of a few millimetres. The high-speed analysis covers all stages in deep
hole drilling from the initial engagement of the drill tip to the stationary deep hole drilling process
with complete cutting edge engagement. Because of the very low feeds per revolution in micro deep hole drilling, a large number of chip formation processes is covered by drilling to a depth of a few millimetres. Thus, the new analysis methodology automatically involves a statistical coverage within one experiment.
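The statistical coverage argument can be quantified with a one-line estimate; the depth and feed values below are illustrative assumptions, not measured process parameters.

# With micron-range feeds, a few millimetres of drilling depth correspond
# to hundreds of chip formation cycles (one per revolution, single edge).
depth_mm = 3.0
feed_um_per_rev = 4.0
revolutions = depth_mm * 1000.0 / feed_um_per_rev   # -> 750 cycles observed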
Figure 3: Analysis of the chip formation very close to the operating zone by means of high-speed microscopy
The high-speed chip formation analysis presented has been applied to develop a process design for single-lip deep hole drilling of the nickel-based alloy Inconel 718 with smallest diameters. At the beginning of the experimental tests, single-lip deep hole drills with a standardised point design (a) were used to determine the feasibility as well as the reference situation [10]. The use of these single-lip deep hole drills with the standardised point design (a) leads to strong limitations with respect to the process result. The tipped point design is subjected to severe abrasive wear due to the hard phases and precipitations in the microstructure of Inconel 718. A further explanation for the short tool life when adjusting higher feed velocities is the long, disadvantageous chip formation. Using the standardised point design (a), the chips are formed along the complete cutting width. Due to the velocity gradient, the sheared material flows towards the tool centre, hitting the transition border between the chip flute face and the oil chamber face. Subsequently, the chip gets folded due to the high material ductility. In the area of the inner cutting edge the low cutting speeds cause a segmented chip formation, and the chips tear off. A chip separation at the cutting tip between the inner and outer cutting edge does not occur, with the result that the cutting edge contour is replicated on the top and bottom of the produced chip. These repetitive chip folding processes lead to the formation of about 6 to 14 connected roof-like structures before the chip is separated from the workpiece. To increase the stability of the cutting edge as well as to influence the chip formation, modifications of the point design of the single-lip deep hole drilling tools were derived based on the findings of the high-speed chip formation analysis. The implementation of these modifications includes a flattening of the point design by adjusting the angles of incidence, adding an additional middle cutting edge perpendicular to the feed motion and modifying the centre distance. The middle cutting edge between the inner and outer cutting edge enhances the cutting edge stability and reduces the wear area at the same time. In Fig. 4 the influence of the point design on the chip formation is illustrated.
Figure 4: Purposive modification of the point design in micro single-lip deep hole drilling of nickel-based alloy Inconel 718
The use of an optimised point design significantly improves the chip formation. The flattened point design and the small inner cutting edge lead to a stronger chip curling. Due to the curling, the chips provide a significantly increased cross section and contact surface for the coolant and thus for the chip removal. The reduced total length of the cutting edge also benefits a reduction in chip width. Consequently, short and process-advantageous chips are formed, and reliable smallest-diameter deep hole drilling of the nickel-based alloy Inconel 718 can be achieved.
Comparative performance using solid material
Subsequent to the high-speed analysis, single-lip deep hole drills with standard and optimised point design were used in tests for drilling solid material samples. Hereby, the findings of the high-speed analysis, developed for tool diameters of d = 2 mm and d = 5 mm, can be transferred to the tool diameter of d = 1.3 mm. Realising a length-to-diameter ratio of lt/D = 30, tools with the standard point design were limited to a maximum drilling path of only lf = 273 mm, whereas tools with the optimised point design obtained a reliable drilling path of lf = 780 mm. In Fig. 5 the mechanical tool loads determined in drilling a reduced depth of lt = 15 mm for both point designs are shown. The presented mean values and standard deviations of the mechanical loads consider the manufacturing of 5 bore holes.
Figure 5: Influence of the point design on the mechanical tool loads
On the one hand, the optimised, flattened point design with modified angles of incidence causes
a displacement of the cutting edges perpendicular to the feed motion and thus a marginal increase in
the average feed force Ff compared to the standard point design. On the other hand, the drilling
torque Mb which includes the cutting torque Mc and friction torque Mr is reduced as a consequence
of smaller cutting forces as well as radial forces using the flattened point design.
Furthermore, the influence of the point design on the dimensional and shape tolerances was determined (Fig. 6). The mean values and the standard deviations consider the measurement of 7 bore holes for the standard point design and 20 bore holes for the optimised point design. Contrary results for the bore hole diameter on the exit side and the straightness deviation are shown as a function of the point geometry as well as the increased angles of incidence. The trend towards bore hole diameters comparable to the nominal diameter and reduced straightness deviations can be explained by the difference in the guide pad recess. The distance between the cutting tip and the axial guide pad entrance of the optimised point design is GPROPT = 0.61 mm, in contrast to GPRSTD = 0.86 mm for the standard point design. The guide pad recess significantly influences the bore head tilting [11]. Besides, the optimised point design with a bigger angle of incidence at the outer cutting edge and a smaller angle of incidence at the inner cutting edge benefits from smaller passive forces. A reduced tool deflection leads to lower deviations of the bore hole diameter and an improved straightness deviation. The use of the standard point design allows the production of average bore hole diameters within the ISO tolerance grade IT7, whereas the optimised point design provides a manufacturing quality within the tolerance grade IT4. Because of the oil chamber and thus the missing support diagonally opposite the circular grinding chamfer, the standard point design produces a bore hole diameter below the nominal diameter. Regarding the roundness, tolerances within the ISO tolerance grade IT6 were achieved independent of the point design. The diminished circularity using the standard point design results from the immediate abrasive wear of the cutting tip. Additionally, the surface quality was compared for both point designs. The realisable surface qualities in single-lip deep hole drilling are especially limited by the excessive abrasive and adhesive wear of the guide pad and the circular grinding chamfer. As a result of the self-guidance and support of the bore head at the bore hole wall in the stationary drilling stage, the state of wear is directly reflected in the resulting surface quality. The mean values of the mean roughness depth and the arithmetic mean roughness are between Rz = 2 … 3 μm and Ra = 0.4 … 0.6 μm. The optimised point design offers smaller standard deviations because of the favourable chip formation.
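The quoted tolerance grades can be roughly cross-checked with the ISO 286 standard tolerance factor; the grade multipliers (IT6 = 10i, IT7 = 16i) are taken from the standard, treating the 1.3 mm bore within the 1-3 mm diameter range. This is a sketch, not a replacement for the standard's rounded table values.

import math

# ISO 286 standard tolerance factor i for the 1..3 mm diameter range;
# note that the standard rounds the results (to 6 um and 10 um here).
D = math.sqrt(1.0 * 3.0)                    # geometric mean diameter [mm]
i = 0.45 * D ** (1.0 / 3.0) + 0.001 * D     # tolerance factor [um]
IT6, IT7 = 10.0 * i, 16.0 * i               # approx. 5.4 um and 8.7 um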
Figure 6: Bore hole quality as a function of the point design
Conclusion
This paper presents a process solution for single-lip deep hole drilling of difficult-to-cut materials like nickel-based alloys with smallest tool diameters. For this purpose, the state of the art in smallest-diameter single-lip deep hole drilling with respect to the point design is described in the first step. In the next step, the feasibility of single-lip deep hole drilling of Inconel 718 using a standard point design is determined. To overcome the upcoming challenges as well as the strong performance limitations, an optimised point design has been derived in cooperation with the tool manufacturer botek Präzisionsbohrtechnik GmbH. A newly developed analysis methodology has significantly contributed to this development and allows a closer look at the chip formation at the corresponding cutting edges and the chip removal along the chip flutes in smallest-diameter deep hole drilling. Here, samples made of the particular test materials are inserted in transparent acrylic glass carriers and the chip formation in the operating zone is documented by high-speed microscopy. The experimental setup of the high-speed chip formation analysis as well as the analyses using the standardised and optimised point designs are shown. Finally, comparative performance tests by single-lip deep hole drilling of solid material samples with a high length-to-diameter ratio confirm the benefit of the point design optimisation.
Acknowledgements
The authors would like to thank the German Research Foundation (DFG) for funding the project BI
498 67 “High speed analysis of the chip formation in small diameter deep hole drilling of
high-strength and difficult-to-machine materials”.
References
[1] R. Eichler: Prozeßsicherheit beim Einlippentiefbohren mit kleinen Durchmessern. Dissertation, Universität Stuttgart, (1996).
[2] M. Kirschner: Tiefbohren von hochfesten und schwer zerspanbaren Werkstoffen mit kleinsten Durchmessern. Dissertation, Technische Universität Dortmund, (2016).
[3] M. Heilmann: Tiefbohren mit kleinen Durchmessern durch mechanische und thermische Verfahren. Dissertation, Technische Universität Dortmund, (2012).
[4] F. Vollertsen, D. Biermann, H. N. Hansen, I. S. Jawahir, K. Kuzman: Size effects in manufacturing of metallic components. CIRP Annals – Manufacturing Technology, Volume 58 (2009), pp. 566-587.
[5] F. Klocke, K. Gerschwiler, M. Abouridouane: Size effects of micro drilling in steel. Production Engineering – Research and Development, 3 (2009), pp. 69-72.
[6] K. Risse: Einflüsse von Werkzeugdurchmesser und Schneidkantenverrundung beim Bohren mit Wendelbohrern in Stahl. Dissertation, Rheinisch-Westfälische Technische Hochschule Aachen, (2006).
[7] F. Klocke, K. Gerschwiler, M. Abouridouane: Size effects of the tool edge radius on specific cutting energy and chip formation in drilling. 2nd International Conference on New Forming Technology, Bremen, (2007).
[8] VDI-Richtlinie 3208: Tiefbohren mit Einlippenbohrern. Beuth-Verlag, Berlin, (2014).
[9] D. Biermann, M. Kirschner, D. Eberhardt: A novel method for chip formation analyses in deep hole drilling with small diameters. Production Engineering – Research and Development, Volume 8, Issue 4 (2014), pp. 491-497.
[10] D. Biermann, M. Kirschner: Experimental Investigations on Single-lip Deep Hole Drilling of Superalloy Inconel 718 with Small Diameters. Journal of Manufacturing Processes, Volume 20 (2015), pp. 332-339.
[11] H. O. Stürenburg: Zum Mittenverlauf beim Tiefbohren. Dissertation, Universität Stuttgart, (1983).
Investigation of process induced changes of material behaviour using a
drawbead in forming operations
Harald Schmid1,a, Sebastian Suttner1,b and Marion Merklein1,c
1 Institute of Manufacturing Technology LFT, Friedrich-Alexander-Universität Erlangen-Nürnberg, Egerlandstr. 13, 91058 Erlangen, Germany
a harald.schmid@fau.de, b sebastian.suttner@fau.de, c marion.merklein@fau.de
Keywords: sheet metal forming, hardening, drawbead
Abstract. In the recent past, deep drawing parts have become even more complex than before. This
leads to different kinds of material failure, which need to be prevented. Using drawbeads is one
option to control and guide the material flow while the forming process takes place. Research
institutes and industrial companies have shown that the material is work-hardened when passing the
drawbead: it undergoes tensile and alternating bending strains, which has an impact on its later
behaviour. This effect could be used deliberately if analysed and examined correctly. Separate
investigations of what happens in the material during forming are necessary to understand the
whole process in detail. Such sub-processes are, for instance, closing the blank holder, the material
entering or leaving the drawbead, or the bending itself. To examine these processes, a strip drawing
test is used as a model process, in which a metal strip is pulled straight through a drawbead
geometry. Within this contribution, the material behaviour during strip drawing through a drawbead
is analysed for the conventional mild steel DC04. Moreover, the influence of the drawbead on
modern lightweight steels is observed exemplarily for the advanced high strength steel DP800.
By pulling it through the drawbead geometry, the strip is pre-strained by combined cyclic bending
and tension. Afterwards, the introduced stresses along the sheet thickness are examined by a micro
hardness analysis: the Vickers hardness is determined in different forming areas on the inner, outer
and middle layer of the sheet to obtain incremental knowledge of the stress history.
Introduction
In today’s industrial metal forming, there are many ways to control material flow during the
forming process. One of these possibilities is for example the change of friction characteristics, the
adaption of blank holder force or the use of drawbeads. In most sheet forming operations,
drawbeads are used as a resistance to material flow. Drawbeads do not only affect material flow but
also have influence on the sheet metal’s material properties and therefore on the forming process.
Besides, Halkaci et al. [1] show the effect of improving formability to AA5574-O by adding a
drawbead to the system. For their experiments, the limiting drawing ratio increases about 10 %, this
makes the use of drawbeads important for sheet forming operations.
While passing a drawbead, a load reversal is introduced in the inner and outer surface of the
blank what will lead to pre-straining, what was already examined in Courvoisier et al. [2]. This
cyclic pre-loading needs to be paid attention to when designing forming operations.
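To give a feeling for the orders of magnitude involved, the following short sketch estimates the outer-fibre engineering strain of a sheet bent over a bead radius via eps = t/(2R + t); the radius used is an assumed example, not the actual drawbead geometry of this contribution.

```python
# Rough estimate (not from the paper) of the outer-fibre engineering strain
# when a sheet of thickness t is bent over a radius R: eps = t / (2 R + t).
# The bead radius below is an assumed example, not the actual geometry.

def outer_fibre_strain(t_mm: float, r_mm: float) -> float:
    """Engineering bending strain at the outer fibre of the sheet."""
    return t_mm / (2.0 * r_mm + t_mm)

if __name__ == "__main__":
    t = 1.0  # sheet thickness in mm, as used in this contribution
    r = 4.0  # assumed bead radius in mm (illustrative)
    print(f"outer-fibre strain per bend: {outer_fibre_strain(t, r):.3f}")
```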
Courvoisier et al. describe a comparison between an analytical drawbead model and FE
simulations [2]. Due to the load reversal, they use both isotropic and kinematic hardening models to
examine different modelling approaches.
Moreover, Larsson [3] presented another numerical study and found that for mild steel a mixed
isotropic-kinematic hardening model should be used to improve the results and describe the
hardening. In addition, Samuel [4] created a numerical model to describe the pull force, shear force
and bending moment while sheet metal passes a drawbead. He found his numerical outcome in good
agreement
with experimental results but did not use a complex model to quantify the work hardening
processes. For high strength steels like DP800, Li et al. [5] describe that DP steel has a higher work
hardening rate in the beginning (compared to TRIP steel). These findings, especially the cyclic
pre-loading, lead to the necessity to fundamentally investigate the influence of passing a drawbead
on the subsequent mechanical properties of sheet metal. In this contribution, the material behaviour
under strip drawing with a drawbead is investigated.
Materials
Because of its numerous applications in deep drawing processes and its high formability, the
deep-drawing steel DC04 with an initial sheet thickness of 1.0 mm is used. This typical
representative was tested to have a tensile strength TS of 314.4 MPa and a uniform elongation UE
of 25.9 %. In addition, the advanced high strength steel DP800 with an initial sheet thickness of
1.0 mm is investigated. This kind of material is nowadays increasingly used in technical
applications like automotive engineering. Compared to DC04, it has a higher tensile strength TS
(817.9 MPa) but a moderate uniform elongation UE of only 12.3 %.
Methodology of drawbead analysis
The aim is to analyse the material behaviour when passing a drawbead. In preliminary
investigations, the macro Brinell hardness is measured on the outer surface before and after passing
a drawbead to assess the effect on the mechanical behaviour, as seen in Figure 1.
Figure 1: Brinell hardness HB 2.5/62.5 on sheet surface before and after drawbead passage
In Figure 1, Brinell hardness measurements of a strip drawing test before and after passing a
drawbead are shown. The measurements were taken on commercial sheet metal with 1 mm
thickness and a galvanized surface, which is typical for automotive production. For a bearing
pressure of 5 MPa, the Brinell hardness increases by 12 % for DP800 and by 68 % for DC04, which
leads to the conclusion that the material work hardens when passing a drawbead. These findings
clarify the need for a more detailed analysis of the influence of a drawbead. Therefore, not only
measurements on the surface before and after the drawbead passage are investigated, but also local
observations over the sheet thickness are carried out.
For analysing the material behaviour locally, micro hardness tests are carried out to detect the
evolution of work hardening when passing a drawbead. Within this contribution, a commonly used
drawbead geometry according to the S-rail benchmark of Numisheet 2008 is used. The geometry
was already used and described in Schmid et al. [6]. In this case, only a U-profile is deep drawn,
without the typical S-rail shape. The examinations are carried out for two different blank holder
forces and for two different materials to chronologically observe the effect of pre-loading. For that
purpose, a cut-out of the drawbead region after drawing is examined, as schematically shown in
Figure 2.
Figure 2: Principle of the different forming areas in a drawbead’s flow path
The chronological drawbead pass is divided into five investigation areas, which can also be seen
in Figure 2. These areas are defined because of their different forming history along the drawing
direction. The first area (1) is located in front of the drawbead, where no pre-loading occurs. It is
followed by the first bending and the run-in into the drawbead (2), the top of the drawbead (3) with
the reverse bending, the run-out of the drawbead (4) with the unbending area and, finally, the area
behind the drawbead (5). Moreover, the sheet is divided into three levels to generate results for the
upper, middle and lower layer (see also Figure 2).
Experimental setup
Within the experimental observations, a deep drawing process with a U-profile and the drawbead
geometry from the S-rail tool of the Numisheet benchmark is used, as exemplarily seen for DC04 in
Figure 3. Here, metal strips with a size of 54 x 224 mm² are deep drawn to 40 mm depth with a
drawing speed of 10 mm/s and bearing pressures of 5 and 10 MPa.
Figure 3: a) DC04 U-profile from S-Rail Numisheet 2008, b) cut out and c) polished specimen
Afterwards, the drawbead passages were cut out by laser and cold-embedded for further
examination, as seen in Figure 3 b). The specimens were ground and polished to analyse the effect
of the drawbead on the outer layers and the middle layer (see Figure 3 c)). This procedure is
followed by micro Vickers hardness tests with HV 0.02 in a Fischerscope HM2000. Every area is
tested in three rows with 10 to 12 points each, depending on the actual sheet thickness.
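A minimal sketch of how such readings could be aggregated is given below; the hardness values are made-up placeholders, and only the grouping by drawbead area and sheet layer reflects the procedure described above.

```python
# Illustrative aggregation of micro hardness readings per drawbead area
# (1-5) and sheet layer; the HV 0.02 values below are made-up placeholders.
import numpy as np

measurements = {
    (1, "upper"):  [109, 112, 111, 108],
    (1, "middle"): [110, 109, 111, 110],
    (3, "upper"):  [168, 171, 166, 170],
}

for (area, layer), values in sorted(measurements.items()):
    v = np.asarray(values, dtype=float)
    print(f"area {area}, {layer} layer: "
          f"{v.mean():.1f} +/- {v.std(ddof=1):.1f} HV 0.02")
```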
Results
In this research work, the influence of the drawbead on the mechanical properties is analysed by
Vickers hardness measurements for the five areas described in the methodology. At first, the micro
hardness development is observed for the mild steel DC04 at the two different levels of bearing
pressure, see Figure 4.
[Figure 4 consists of two diagrams of the micro hardness HV 0.02 over the drawbead areas for DC04, t0 = 1.0 mm, at pH = 5 N/mm² (left) and pH = 10 N/mm² (right), each for the upper, middle and lower layer.]
Figure 4: Micro hardness of DC04 at different areas of drawbead
In the first area in front of the drawbead, the hardness measurements show the same result of
about 110 HV in every layer for the unloaded material. In the upper and lower layer, the hardness
increases until the third area, the top of the drawbead, where the upper layer shows higher values
than the lower layer. The hardness in the middle layer increases less. For 10 MPa bearing pressure,
the outer layers increase even more in hardness than with 5 MPa. Also, the middle layer shows a
higher micro hardness from area to area compared to the variant with the smaller bearing pressure.
In total, the hardness increases from around 110 HV up to 170 HV.
The results given in Figure 4 indicate work hardening during pre-loading by a drawbead,
especially in the upper and lower surface. The middle layer remains below the outer layers in every
variation and indicates a superimposed tensile force during the bending operation within the
drawbead. The sheet metal obviously passes through tension with alternating bending depending on
the area. From the beginning to the end, work hardening can be seen for every configuration, as the
hardness increases noticeably. Work hardening can also be observed from the fourth to the last
area, which corresponds to the run-out of the drawbead. This seems to be an outcome of the plastic
elongation after the drawbead, which also acts as a flow barrier. For 10 MPa bearing pressure, the
hardness of DC04 decreases more in the upper surface when entering the fourth area. This could be
explained by the higher blank holder force and the correspondingly higher back bending or pressure
level.
Since a higher pressure due to higher clamping forces leads to an increase of the superimposed
tensile stress during bending, additional straining is introduced in the sheet metal. Because the
maximum tensile load is introduced in the radius (position 3) in the upper layer, a decrease of
hardness is observed after leaving the drawbead radius. This hardening behaviour correlates with the
investigated
straining level for the presented drawbead geometry, as described by Schmid et al. [6]. The
decrease of hardness can be explained by the occurrence of the Bauschinger effect, as already
observed by Suttner and Merklein in tension-compression tests for various materials [7].
Subsequently, the restraining force of the drawbead leads to tensile forces and thus to a renewed
increase in hardness.
For DP800, trends of the Vickers hardness that are broadly similar to the observations for DC04
can be seen in Figure 5.
[Figure 5 consists of two diagrams of the micro hardness HV 0.02 over the drawbead areas for DP800, t0 = 1.0 mm, at pH = 5 N/mm² (left) and pH = 10 N/mm² (right), each for the upper, middle and lower layer.]
Figure 5: Micro hardness of DP800 at different areas of the drawbead
In the first area in front of the drawbead, the hardness is around 290 HV for each pressure level
and each layer. In contrast to DC04, the advanced high strength steel exhibits a decrease in hardness
after the second area down to the lowest level in the fourth area. In the fifth area, the hardness
increases strongly to a level of around 380 HV, so that from the first to the last area the hardness of
DP800 rises from around 290 HV up to 380 HV.
Furthermore, material specific details can be noticed when comparing Figure 4 and Figure 5.
After the top of the drawbead, the hardness decreases in the outer surfaces due to the load reversal.
In the end, the hardness increases again because of the superimposed tension introduced during
drawing. For both observed materials, the hardness of the middle layer increases due to the
superimposed tensile load, but the hardness values remain below those of the outer surfaces.
In addition, the hardness development of DP800 shows differences in comparison to the deep
drawing steel DC04. Especially after reaching the middle of the drawbead (position 3), the hardness
decreases more significantly, while a smaller decrease is visible for the mild steel. When running
out of the drawbead in the fourth area, the hardness values of DP800 are nearly at the level of the
unloaded area. The effect of increase and decrease appears to be even more significant for the
higher blank holder pressure.
The difference in the hardness development of DP800 could be explained by the higher elastic
portion due to the higher stress level of DP800. In addition, Suttner and Merklein [7] showed that a
high strength dual phase steel DP600 exhibits a smooth elastic-plastic transition zone when a load
reversal from uniaxial tension to uniaxial compression takes place, whereas this transient zone is
less distinctive for a mild steel DC06. Therefore, the reduction of hardness within a load reversal
from position 2 to positions 3 and 4 can also be explained by the transient zone after reloading the
dual phase steel [7]. From this point of view, additional research work on the microstructural
changes and the straining behaviour while passing a drawbead is necessary to analyse the
mechanisms of the property changes.
In summary, both materials experience an increase in hardness after passing a drawbead. Thus, a
change of the mechanical behaviour occurs, which needs to be considered in the numerical design
of forming operations with drawbeads.
Summary
To summarize these findings, work hardening is shown for DC04 and DP800. It can also be
pointed out that a drawbead leads to different hardening rates in the sheet layers, which depend on
their local position in thickness and drawing direction. The hardness measurements also indicate
changes in the material properties depending on the forming area of the drawbead passage. DP800
and the variants with higher blank holder forces show significantly alternating hardness values
during the pass through the drawbead. This could be explained by the higher tensile strength of
DP800 as well as the higher bending and unbending forces. In further work, the influence of the
clamping pressure on the bent cross-section should be investigated.
The strip drawing test seems to fulfil the expectations for further investigations as a model test
setup for drawbead loading of sheet metal. In comparison, it could be shown that the hardness
increases in the same range for both test setups. This comparison was needed to qualify the model
test. Other parameters for subsequent examinations of the drawbead passage could be speed,
pressure, friction (lubrication), other materials or the geometry of the drawbead. On this basis,
simulation models could be built up to minimize the experimental effort.
Acknowledgement
For the support of the research project EFB 08/114 (AiF 18328N), the authors would like to
thank the European Research Association for Sheet Metal Working e.V. (EFB) as well as the
German Federation of Industrial Research Associations „Otto von Guericke“ e.V. (AiF).
References
[1] H. Selcuk Halkaci, M. Turkoz, M. Dilmec, Enhancing formability in hydromechanical deep
drawing process adding a shallow drawbead to the blank holder, Journal of Materials Processing
Technology 214 (2014), 1638–1646. DOI: 10.1016/j.jmatprotec.2014.03.008.
[2] L. Courvoisier, M. Martiny, G. Ferron, Analytical modelling of drawbeads in sheet metal
forming, Journal of Materials Processing Technology 133 (2003), 359–370. DOI: 10.1016/S0924-0136(02)01124-X.
[3] M. Larsson, Computational characterization of drawbeads, Journal of Materials Processing
Technology 209 (2009), 376–386. DOI: 10.1016/j.jmatprotec.2008.02.009.
[4] M. Samuel, Influence of drawbead geometry on sheet metal forming, Journal of Materials
Processing Technology 122 (2002), 94–103. DOI: 10.1016/S0924-0136(01)01233-X.
[5] H. Li, G. Sun, G. Li, Z. Gong, D. Liu, Q. Li, On twist springback in advanced high-strength
steels, Materials & Design 32 (2011), 3272–3279. DOI: 10.1016/j.matdes.2011.02.035.
[6] H. Schmid, S. Suttner, M. Merklein, An incremental analysis of a deep drawing steel’s material
behaviour undergoing the predeformation using drawbeads, IDDRG 2017 (2017).
[7] S. Suttner, M. Merklein, Characterization of the Bauschinger effect and identification of the
kinematic Chaboche Model by tension-compression tests and cyclic shear tests, IDDRG 2014
(2014), 125–130.
Chapter 2: Manufacturing Technology
Comparison of 316L test specimens manufactured by Selective Laser
Melting, Laser Deposition Welding and Continuous Casting
Christopher Gläßner1,a, Bastian Blinn2,b, Mathias Burkhart1,c,
Marcus Klein2,d, Tilmann Beck2,e, Jan C. Aurich1,f
1 Institute for Manufacturing Technology and Production Systems, University of Kaiserslautern, Germany
2 Institute of Materials Science and Engineering, University of Kaiserslautern, Germany
a christopher.glaessner@mv.uni-kl.de, b blinn@mv.uni-kl.de, c mathias.burkhart@mv.uni-kl.de, d klein@mv.uni-kl.de, e beck@mv.uni-kl.de, f publications.fbk@mv.uni-kl.de
Abstract:
Additive Manufacturing (AM) is a term for different manufacturing technologies with the operating
principle of adding layer after layer of material to manufacture three-dimensional objects. AM
technologies for manufacturing metal components are on the verge of maturing from rapid prototyping
to industrial manufacturing. Material performance, especially mechanical behaviour, is a key quality
factor to enable the usage of AM manufactured components in highly utilized products such as
commercial vehicles. In the present paper, first results of a comprehensive test program on
mechanical behaviour of AM specimens are presented. The objective is the characterization and
comparison of material performance of test specimens made of AISI 316L (1.4404), manufactured
with Selective Laser Melting, Laser Deposition Welding and Continuous Casting. The applied AM
technologies and manufacturing conditions of the test specimens will be explained. The analysis of
the chemical composition, microhardness, cyclic indentation tests, grinding surface patterns and
tensile strength will be presented with regard to the influence of the different building directions as
well as the influence of the three different manufacturing processes.
Introduction
Additive Manufacturing (AM) is a generic term for manufacturing technologies which create three-dimensional objects by adding material layer by layer [1]. The main benefit of these technologies lies
in the ease of toolless manufacturing of components with complex geometry that are difficult or even
impossible to manufacture by conventional machining operations. Therefore, AM enables new
component designs, shortened manufacturing processes and components with functional integration.
For almost 30 years, AM technologies have been used for rapid prototyping to shorten product
development cycles [2]. These prototypes mainly serve as visualization models and only have to fulfil
minimal requirements [3]. However, advances in recent years, e.g. in variety of available materials,
component quality, reproducibility and build rates have developed AM technologies to an extent that
they can be used in industrial applications. Especially in commercial vehicle manufacturing, which
is characterised by low quantity and high variety of products, AM is attributed a great potential. The
high variety of products, caused by the heterogeneous demands of customers, leads to complex
process chains within the development and manufacturing networks of commercial vehicle
manufacturers [4]. By reducing the number of necessary manufacturing, assembly and logistic
processes, AM helps to cope with product and process complexity in commercial vehicle
manufacturing and hence constitutes a strategic success factor in competition.
Two common AM technologies for metal components are Selective Laser Melting (SLM) and
Laser Deposition Welding (LDW). SLM is a powder bed based technology that fuses metal powder
layer after layer by a laser beam, while in LDW a laser beam generates a melt pool into which metal
powder is fed via a nozzle [2].
For the usage of AM produced metal components in highly utilized products, e.g. commercial
vehicles, characterization of the mechanical properties is essential. To investigate the material
performance of AM produced austenitic steel AISI 316L, a comprehensive test program with the
objective of characterization and comparison of specimens manufactured by SLM, LDW and, as
reference, Continuous Casting (CC) was developed. First results are shown in the present paper.
Materials and Experiments
Specimens and Materials. The specimens used in the present study are made of AISI 316L (1.4404).
The material is available as conventional CC material, but also as powder for SLM and LDW
processes. Tensile specimens were manufactured with a geometry according to DIN EN ISO 6892-1
[5].
Table 1: Manufacturing parameters of the AM manufactured specimens

  AM Technology              LDW                        SLM
  Manufacturing machine      DMG MORI LASERTEC 65 3D    EOS M 290
  Laser power [W]            2000                       400
  Powder size [μm]           50-150                     25-45
  Av. layer thickness [μm]   400                        40
  Dimension as built [mm³]   15x15x103                  Ø14x102
  Building direction         vertical (LDW-V),          vertical (SLM-V),
                             horizontal (LDW-H)         horizontal (SLM-H)
To keep the process as simple as possible, LDW specimens were pre-manufactured as cuboids and
afterwards turned to the final geometry. Compared to SLM, the layer thickness is ten times higher in
the LDW process. The average particle size of the feedstock powder is bigger for the LDW than for
the SLM process (see Table 1), and the particle size distribution shows a larger scatter for the LDW
feedstock. The SLM specimens were manufactured with cylindrical geometry and afterwards turned
to the final shape. Due to the layer-by-layer deposition of AM material, the building direction is
expected to result in anisotropic properties. Therefore, specimens were manufactured vertically
(LDW-V and SLM-V) as well as horizontally (LDW-H and SLM-H) to investigate the influence of
the different building directions on the properties. The CC specimens were manufactured by turning
from bars with 15 mm diameter.
Table 2: Chemical compositions and Md30,Angel temperatures of the investigated AISI 316L (1.4404) batches

  Amount of alloying element [Ma-%]   C     N     Si    Mn    Cr     Ni     Mo    Fe      Md30,Angel [°C]
  SLM                                 0.02  0.08  0.61  1.44  17.68  13.07  2.26  64.68    -58.3
  LDW                                 0.03  0.10  0.53  1.30  16.41  10.54  2.04  68.75    -28.5
  CC                                  0.02  0.03  0.38  1.65  16.59  10.48  2.03  68.18      9.1
  ASTM A 182 min                      -     -     -     -     16.00  10.00  2.00  60.80     61.8
  ASTM A 182 max                      0.03  0.10  1.00  2.00  18.00  15.00  3.00  72.00   -117.1
The chemical compositions of the investigated AISI 316L stainless steel (1.4404) batches,
determined by spectrophotometric analysis, are summarized in Table 2. All batches are in the range
of ASTM A 182/ A 182M-14a [6]. However, chromium, nitrogen and nickel content differs
significantly. Thus, the austenite stability, evaluated by the Md30,Angel temperatures (Eq. 1 [7]),
also differs significantly.
Md30,Angel = 413 – 462 (C + N) – 9.2 Si – 8.1 Mn – 13.7 Cr – 9.5 Ni – 18.5 Mo   [7]   (1)
The Md30,Angel temperature represents the temperature at which 50 % of the initial austenite
transforms to martensite when subjected to a total strain of 30 %. Hence, lower Md30,Angel
temperatures indicate higher austenite stability. Note that the batches of the additively processed
material show, due to their chemical composition, a higher austenite stability compared to the CC
material.
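Eq. 1 can be transcribed directly; the short check below uses the SLM batch from Table 2 and reproduces the tabulated temperature up to rounding of the composition values.

```python
# Direct transcription of Eq. 1; compositions in mass per cent, result in
# degrees Celsius. The check uses the SLM batch from Table 2; the small
# difference to the tabulated -58.3 degC stems from the rounded compositions.

def md30_angel(c, n, si, mn, cr, ni, mo):
    """Md30,Angel temperature according to Eq. 1 [7]."""
    return (413 - 462 * (c + n) - 9.2 * si - 8.1 * mn
            - 13.7 * cr - 9.5 * ni - 18.5 * mo)

print(md30_angel(c=0.02, n=0.08, si=0.61, mn=1.44,
                 cr=17.68, ni=13.07, mo=2.26))  # about -58.7
```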
Experimental methods. Light optical micrographs (LOM) were taken with a Leica DM 6000 M
device. To analyze the microstructure, SLM and CC samples were etched using V2A etchant while
for LDW samples Adler etchant was used, which leads to a better visualization of the microstructure
of LDW material due to its lower Cr and higher N content. Scanning electron microscope
observations were performed using a FEI Quanta 600. Microhardness measurements and cyclic
microindentation tests were conducted with a Fischerscope H100 C from Helmut Fischer GmbH.
Microhardness line scans were determined with 120 indentation points with a point to point
distance of 100 μm and a distance to the sample edge of 50 μm. Cyclic microindentation tests were
carried out similar to the procedure described in [8]. The plastic indentation depth amplitude ha,p is
evaluated, in analogy to the plastic strain amplitude, as the width of the force-indentation-depth
hysteresis at mean stress. The resulting hardening exponent eII of the cyclic hardness test (CHT)
quantifies the hardening potential.
Macro hardness measurements were conducted in the center of the samples.
Tensile tests were performed on a Zwick/Roell Z250 electromechanical testing device with a
testing procedure according to DIN EN ISO 6892-1 [5]. Temperature in the center of the gauge length
was measured during the tensile tests with one type J thermocouple. The content of magnetic fraction
was measured using a FERITSCOPE MP 30E to determine and quantify the transformation from
paramagnetic austenite to ferromagnetic α’-martensite.
Results and Discussion
Microstructure. Fig. 1 shows LOM of the differently processed materials. The boundaries of the
melt pools can be seen clearly in the microstructures of SLM-H and LDW-H cross sections (Fig. 1a)
and c)). The grains of the additively manufactured specimens are elongated in building direction. This
is specific for additively processed material and was also shown by investigations of selective laser
melted 316L by Yasa et al. [9]. This elongation is caused by the direction of heat conduction during
the manufacturing process. In the cross sections of the vertically built AM specimens, the grain structure
does not show elongation (Fig. 1b) and d)).
Figure 1: Light optical micrographs of 316L (1.4404) cross sections of a) SLM-H b) SLM-V
c) LDW-H, d) LDW-V and e) CC
Grain sizes were rated in conformity with DIN EN ISO 643 [10] (see Table 3). With regard to the
grain size of the additively produced material, it is obvious that the grains in vertically built cross
sections are significantly smaller than the grains in horizontally built cross sections, which can be
explained by the elongation of grains due to the direction of heat conduction. The SLM specimens have
significantly smaller grains than the LDW specimens. The differences in grain size and grain
directions are also obvious in the electron backscatter diffraction (EBSD) grain orientation mappings
(Fig. 2), which also clearly indicate that elongated grains grow over one or more melt pool boundaries
(compare Fig. 1a) and c) with Fig. 2a) and c)).
Figure 2: Electron backscatter diffraction orientation maps of cross sections for a) SLM-H,
b) SLM-V, c) LDW-H and d) LDW-V
Different cooling rates of the investigated AM processes lead to significant differences in grain
size. Cooling rates in the SLM process are considerably higher due to smaller melt pool sizes. This
results in a faster solidification and, consequently, smaller grains. The grains of the SLM specimens
(G = 6 to 7) are slightly smaller than those of the CC specimens (G = 5), which in turn are distinctly
smaller than the grains of the LDW specimens (G = 1 to 3) (see Table 3).
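As a side note, the grain size numbers can be translated into approximate mean grain diameters via the ISO 643 relation m = 8·2^G grains per mm²; the helper below is an illustration, not part of the evaluation performed in this work.

```python
# Illustrative helper (not part of this evaluation): ISO 643 relates the
# grain size number G to m = 8 * 2**G grains per mm^2, from which a mean
# grain area and an equivalent grain diameter can be estimated.
import math

def mean_grain_diameter_um(g: float) -> float:
    """Equivalent mean grain diameter in micrometres for grain size G."""
    grains_per_mm2 = 8.0 * 2.0 ** g
    mean_area_mm2 = 1.0 / grains_per_mm2
    return math.sqrt(mean_area_mm2) * 1000.0  # side of equivalent square

for g in (1, 3, 5, 6, 7):  # values occurring in Table 3
    print(f"G = {g}: about {mean_grain_diameter_um(g):.0f} um")
```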
A central aspect of the microstructural quality of additively produced materials is the occurrence of
inhomogeneities, e.g. pores or oxide inclusions. The porosity of the differently manufactured
specimens is quantified based on LOM of cross sections. The porosity of LDW specimens is lower
compared to SLM specimens (see Table 3), caused by the higher laser power (see [11]) and the
higher layer thickness in the LDW process (see [12]). With regard to these two manufacturing
parameters, the penetration depth of the melt pool into the previously deposited material is higher
and therefore the material exhibits a smaller amount of lack of fusion, which is the main cause of
porosity in additively produced material. Such a lack-of-fusion pore is shown in Fig. 3a). This pore
is located between two melt pools and is caused by incomplete bonding.
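A minimal sketch of such an area-fraction estimate is given below; it illustrates the principle only and is not the authors' actual procedure, and both the grey-value threshold and the synthetic image are assumptions.

```python
# Sketch of an area-fraction porosity estimate (illustrative, not the
# authors' procedure): pores appear dark on a polished cross section, so
# the porosity can be approximated as the fraction of pixels below an
# assumed grey-value threshold.
import numpy as np

def porosity_percent(grey: np.ndarray, threshold: int = 60) -> float:
    """Area fraction (%) of pixels darker than the threshold (0-255)."""
    return 100.0 * (grey < threshold).mean()

if __name__ == "__main__":
    # Synthetic 200 x 200 'micrograph': bright matrix with one dark pore.
    img = np.full((200, 200), 200, dtype=np.uint8)
    img[90:110, 90:110] = 20  # 400 dark pixels -> 1.00 % porosity
    print(f"porosity: {porosity_percent(img):.2f} %")
```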
Figure 3: a) Pore between the layers of the SLM-H specimen and b) oxide inclusion and pore in LDW-H specimen
Table 3: Porosity and grain sizes of the differently manufactured specimens (cross sections)

  Specimen              SLM-H   SLM-V   LDW-V   LDW-H   CC
  Porosity [%]          1.915   1.626   0.020   0.032   0
  Grain size number G   6       7       3       1       5
Furthermore, SiMn oxide inclusions are identified via energy dispersive X-ray (EDX) analysis in
LDW specimens (Fig. 4). These inclusions result from imperfections of the protective inert gas flow
in the manufacturing process (see [13]). Pores in LDW specimens are smaller than the oxide
inclusions (see Fig. 3b)). Therefore, it can be expected that the fatigue strength of the LDW
specimens is reduced to a larger extent by inclusions than by pores.
Figure 4: SiMn oxide inclusions of a) LDW-H specimen and b) LDW-V specimen
Mechanical properties. To investigate the mechanical properties of the differently manufactured
specimens, microhardness, cyclic microindentation and tensile tests were conducted. Note that for
AM specimens the direction of microindentation is parallel to the layer orientation in specimens
with horizontal building direction, whereas in vertically built specimens it is perpendicular to the
layer orientation. From the microhardness distribution across the cross section, it is obvious that the
CC specimens show a higher hardness in the near-surface area (see Fig. 5a)). In contrast, additively
manufactured specimens show a rather flat microhardness distribution. The cooling rate gradient of
the extremely small single melt pools is higher than the gradient of the cooling rate between the
centre and the near-surface area of CC specimens. However, the distribution of cooling rates along
the cross section of AM specimens is nearly homogeneous, due to the small volume fraction of a
single melt pool, which leads to the observed flat microhardness distributions.
Figure 5: Microhardness (a)) and plastic indentation depth amplitude ha,p-N-curves
(b)) determined on cross sections of the differently manufactured specimens
The average hardness of SLM specimens is higher compared to the LDW specimens, which is
consistent with the smaller layer thickness, resulting in higher cooling rates, and correlates with the
smaller grain sizes (see Fig. 1, Fig. 2 and Table 3). The building direction of additively manufactured
specimens has no significant influence on the hardness of the material (Fig. 5a)).
The results of cyclic indentation tests (Fig. 5b), Table 4) show significant differences. SLM
specimens show lower ha,p-values than LDW and CC specimens (Fig. 5b)). The eII values in Table 4
exhibit differences of the hardening potential. SLM and CC specimens have similar eII values whereas
LDW specimens have higher eII values. These differences are due to the different manufacturing
processes and chemical compositions of the materials. Similar to the results of the microhardness
measurements, the building direction has no influence on the values of ha,p and eII.
The results of tensile tests are given in Table 4 and Fig. 6. Results of Table 4 are based on
measurements of two specimens for each type of manufacturing process. Tensile strength, yield
strength and elongation at fracture vary significantly. It is obvious that the building direction of the
additively manufactured specimens has a major impact on the tensile and yield strength, i.e. horizontal
building direction leads to significantly higher Rm and Rp0.2. This anisotropy in strength of additively
produced material was also shown by Lewandowski et al., who compared investigations of additively
produced materials according to their mechanical properties [14]. Compared to the CC specimens,
the SLM-V specimens show comparable and SLM-H specimens show even higher tensile strength
(see Fig. 6a)) and Table 4). Tensile strength of LDW-H specimens is similar to CC specimens whereas
the LDW-V specimens have lower values of Rm and Rp0.2.
Table 4: Mechanical properties of the differently manufactured specimens

  Specimen                               SLM-H         SLM-V         LDW-H         LDW-V         CC
  Rm [MPa]                               681 ± 7       612 ± 2       629 ± 7       564 ± 9       639 ± 2
  Rp0.2 [MPa]                            609 ± 43      490 ± 2       438 ± 40      322 ± 2       454 ± 4
  Young's modulus [GPa]                  167 ± 12      152 ± 7       172 ± 12      170 ± 10      161 ± 6
  El.-pl. transition based on ΔT [MPa]   591 ± 46      463 ± 10      428 ± 45      314 ± 2       418 ± 4
  A [%]                                  28.9 ± 3.9    21.4 ± 1.6    37.8 ± 0.3    26.8 ± 4.9    44.5 ± 1.5
  ξ [Fe-%] before test                   0.13 ± 0.01   0.11 ± 0.02   0.68 ± 0.01   0.68 ± 0.09   0.17 ± 0.01
  ξ [Fe-%] after test                    0.13 ± 0.02   0.15 ± 0.01   1.80 ± 0.70   2.30 ± 0.90   8.55 ± 0.15
  Hardness [HV30/10]                     218           213           171           177           199
  eII                                    0.391         0.399         0.456         0.455         0.396
The phase transformation behaviour of the used material is investigated with measurements of the
magnetic fraction (ξ in Fe-%) (see Fig. 6c) and Table 4). After the tensile test, the magnetic fraction
(ξ in Fe-%) was measured on the fractured surface to determine the maximum of austenite-martensite
transformation. In the initial state, the CC and SLM specimens are nearly fully austenitic whereas the
LDW specimens show small martensite contents. In the SLM specimens no phase transformation was
detected. LDW specimens show a low amount of austenite-martensite transformation and CC
specimens show a significant rise in martensite content after the tensile test. These results correlate
well with the Md30 temperatures given in Table 2, indicating that austenite stability of the investigated
AISI 316L variants is dominated by chemical composition and the manufacturing process plays, if
any, only a minor role in this context.
The highest elongation at fracture occurs in the CC specimens. For corresponding building
directions, SLM specimens show lower elongations at fracture than LDW specimens. Note that this
trend is opposite to the effect of increasing elongation at fracture as a consequence of smaller grain
sizes (see Table 3). The elongation at fracture correlates with the phase transformation amounts
shown by the ξ-measurements. Therefore, it can be concluded that the higher elongation at fracture
is caused by the deformation induced austenite to martensite transformation. The anisotropic
behaviour of additively manufactured specimens also occurs in the results of the elongation at
fracture: lower elongation at fracture is shown by the vertically built specimens compared to the
horizontally built specimens. Based on the currently available results, the anisotropy of additively
manufactured specimens is detected solely in the tensile tests and the microstructural
investigations. Therefore, it can be concluded that the elongation of grains and the layer orientation
are mainly responsible for the anisotropic behaviour of AM specimens.
The temperature of the specimens was measured during the tensile tests with one thermocouple in
the middle of the gauge length. Therefore, the measured temperature evolution depends on the
individual fracture position in each specimen. At the beginning of the test, the temperature
decreases due to the thermoelastic effect. At the onset of plastic deformation, the temperature
increases. Plastic deformation can be identified more precisely in the temperature measurements
than in the determination based on the stress-strain curve (see Fig. 6d) and Table 4). Note that, at
least at small plastic strains, the vertically built specimens show significantly smaller temperature
changes than the horizontally built specimens (see Fig. 6b)). The temperature progression shows a
change in slope with the onset of the reduction in area.
Figure 6: Results of tensile tests with regard to the measurement of stress (a)), temperature (b)
and d)) and magnetic fraction (ξ in Fe-%) (c))
Summary and conclusions
Mechanical and microstructural properties of AISI 316L stainless steel (1.4404) specimens
manufactured by Continuous Casting, Selective Laser Melting and Laser Deposition Welding were
investigated. The additively manufactured specimens show an elongation of grains along the
building direction of the manufacturing process. Furthermore, additively manufactured specimens
show a higher density of defects. In SLM specimens, pores are the dominant defect, whereas in
LDW specimens the oxide inclusions are the most relevant defect type.
The mechanical properties of the differently manufactured specimens differ significantly.
Compared to CC and LDW specimens, SLM specimens show an increased hardness in the centre of
the specimens, which is caused by the higher cooling rates in the SLM process. CC specimens show
increased hardness in the near surface area. This cannot be observed in additively manufactured
specimens, which show a flat distribution of microhardness along the cross section. This is caused by
a homogeneous distribution of cooling rates along the cross section of the additively manufactured
specimens, due to the small volume fraction of a single melt pool.
The mechanical properties of SLM-V specimens are similar to those of the CC specimens. SLM-H
specimens show even higher values of tensile and yield strength. The elongation at fracture is
smaller for the SLM than for the CC specimens. Tensile and yield strength as well as the elongation
at fracture of LDW-H specimens are similar to the CC specimens, but LDW-V specimens show
significantly lower Rm and Rp0.2 yet a relatively high elongation at fracture. Differences in
elongation at fracture are
influenced by the deformation induced austenite to martensite transformation, which solely occurs in
CC and LDW specimens and leads to higher ductility.
While tensile strength, yield strength and elongation at fracture of additively manufactured
specimens are anisotropic and significantly depend on building direction, microhardness and cyclic
indentation behaviour are not affected by the building direction. From this, it is concluded that the
anisotropic behaviour of additively manufactured specimens is caused by the elongation of grains and
orientation of the deposited layers.
Acknowledgement
The research described in this paper was funded by European Union’s European Regional
Development Fund (ERDF) and the Commercial Vehicle Cluster (CVC) Südwest.
References
[1] I. Gibson, D.W. Rosen, B. Stucker, Additive Manufacturing Technologies, Springer, New York,
2010.
[2] A. Gebhardt, Generative Fertigungsverfahren Additive Manufacturing und 3D Drucken für
Prototyping - Tooling – Produktion, Carl Hanser Verlag, München, 2013.
[3] M. Schmid, Additive Fertigung mit Selektivem Lasersintern, Springer Vieweg, Wiesbaden,
2015.
[4] F.H. Lehmann, A. Grzegorski, Anlaufmanagement in der Nutzfahrzeugindustrie am Beispiel
Daimler Trucks, in: G. Schuh, W. Stölzle, F. Straube (Eds.), Anlaufmanagement in der
Automobilindustrie erfolgreich umsetzen, Springer, Berlin Heidelberg, 2008, pp. 81-90.
[5] DIN EN ISO 6892-1. Metallic materials - Tensile testing - Part 1: Method of test at room
temperature (ISO 6892-1:2016); German version EN ISO 6892-1:2016. 2017.
[6] ASTM A 182/ A 182M-14a: Standard Specification for Forged or Rolled Alloy and Stainless
Steel Pipe Flanges, Forged Fittings, and Valves and Parts for High-Temperature Service. 2014.
[7] T. Angel, Formation of martensite in austenitic stainless steel - Effects of deformation,
temperature and composition, Journal of the Iron and Steel Institute 177 (1954) 165-174.
[8] H.S. Kramer, P. Starke, M. Klein, D. Eifler, Cyclic hardness test PHYBALCHT – Short-time
procedure to evaluate fatigue properties of metallic materials, International Journal of Fatigue
63 (2014) 78-84.
[9] E. Yasa, J-P. Kruth, Microstructural investigation of Selective Laser Melting 316L stainless steel
parts exposed to laser re-melting, Procedia Engineering 19 (2011) 389-395.
[10] DIN EN ISO 643, Steels - Micrographic determination of the apparent grain size (ISO 643:2012);
German version EN ISO 643:2012. 2012.
[11] C. Kamath, B. El-Dasher, G.F. Gallegos, W.E. King, A. Sisto, Density of additively-manufactured, 316L SS parts using laser powder-bed fusion at powers up to 400 W, International
Journal of Advanced Manufacturing Technology 74 (2014) 65-78.
[12] A.B. Spierings, G. Levy, Comparison of density of stainless steel 316L parts produced with
selective laser melting using different powder grades, SFF Symposium (2009).
[13] P. Ganesh , R. Kaul, G. Sasikala, H. Kumar, S. Venugopal, P. Tiwari, S. Rai, R.C. Prasad, L.M.
Kukreja, Fatigue Crack Propagation and Fracture Toughness of Laser Rapid Manufactured
Structures of AISI 316L Stainless Steel, Metallogr. Microstruct. Anal. 3 (2014) 36-45.
[14] J.J. Lewandowski, M. Seifi, Metal Additive Manufacturing: A Review of Mechanical Properties,
Annual Review of Materials Research 46 (2016) 151-186.
Influence of Manufacturing and Assembly Related Deviation on the
Excitation Behaviour of High-Speed Transmissions for Electric Vehicles
Mubarik Ahmad1,a, Christoph Löpenhaus1,b and Christian Brecher1,c
1 Laboratory for Machine Tools and Production Engineering (WZL), RWTH Aachen, Steinbachstrasse 19, 52074 Aachen
a m.ahmad@wzl.rwth-aachen.de, b c.loepenhaus@wzl.rwth-aachen.de, c c.brecher@wzl.rwth-aachen.de
Keywords: Manufacturing, Electric vehicle, Dynamics
Abstract. This report investigates the effects of manufacturing and assembly related deviations of
gear tooth contacts in gearboxes on the vibration excitations in high speed applications. In particular,
it shows the impact of positional (due to assembly) and shape (due to manufacturing) deviations on
the long-wave excitation behaviour. In cars, these long wave excitations lead to audible frequencies
that can disturb the driver. In the past these frequencies were masked by other sounds created by the
internal combustion engine. However, growing concerns over pollution, climate change and scarcity
of fossil fuels have caused a rise in electric car ownership. While electric car motors are quieter than
their combustion engine counterparts, this lack of sound accentuates auxiliary noise sources such as
the gearbox.
At higher speeds, the deviation of the tooth contacts over a full revolution become more important.
The long-wave transmission error - which describes the rotational irregularity between the input and
output shaft and represents the noise excitation - is excited by a change in the contact geometry of a
tooth-hunt. At higher speeds this leads to the aforementioned audible frequencies. The simulation
results obtained in this report demonstrate the increased excitation of the rotational frequencies of
the transmission error caused by periodic or stochastic positional and shape deviations. Periodic
deviations due to tumbling, concentricity and pitch errors lead to sidebands and rotational frequencies
depending on the periodicity of the deviation. The paper ends with a discussion of the importance of
these deviations in the design process.
Introduction and Motivation
High-speed transmissions are becoming increasingly important in view of the increasing
electrification of drive components, particularly in the automobile industry. Despite the alternative
drive technology, the quality requirements of the customer still have to be met. Noise is a comfort
factor and therefore an essential purchase criterion that considerably affects customer satisfaction.
Added to this are the legal framework conditions, which limit the maximum permissible sound
levels [1].
One of the main noise sources of a vehicle is the drive train. The electrification of the drive
components leads to an increased use of high-speed gearboxes. The input speed is above that of a
combustion engine. With a constant output speed, a higher gear ratio is necessary. While disturbing
noises of the drivetrain components are largely masked by the internal combustion engine, the use of
low-noise electric motors leads to an increased perception of the transmission noise. Thus, the
requirement for the acoustics of the drive components of an electric vehicle increases.
The noise level of the transmission has been steadily reduced over the past few years by a deeper
understanding of the excitation mechanisms [2]. If the tooth contact deviates from the ideal-involute
contact, this results in a rotational irregularity between the input and output shaft. These excitation
frequencies resulting from different mechanisms in the tooth contact are proportional to the rotational
speed [3].
The tooth flank modifications are defined under consideration of the application in the design
process. In general, tooth flank modifications are toleranced as a function of geometrical parameters
(e.g. module, pitch diameter) or defined on the basis of many years of experience (DIN 3961 [4],
DIN 3962-1 [5], DIN 3962-2 [6] and ISO 1328-1 [7]). Deviations resulting from the manufacturing
or assembly process, which relate to the totality of all teeth, are not toleranced by the standards in
terms of the resulting noise behaviour.
The variable contact conditions lead to a stiffness variation from one tooth to another, which
results in a long-wave vibration excitation. Compliance with the predefined tolerances of an
individual tooth is therefore not equal to compliance with the functionality, and thus with the
excitation behaviour.
LANZERATH [8], MÜLLER [9], VELEX and MAATAR [10], CARL [11] and INALPOLAT ET AL. [12]
investigated the influence of pitch errors. These investigations showed that the resulting sidebands of
the gear mesh amplitudes and the rotation orders affect the frequency spectrum and lead to higher
stimulation of system natural frequencies.
Figure 1 shows qualitatively the relationship between the deviations and the resulting excitation
behaviour and illustrates the first four gear mesh frequencies (orders) of the transmission error for a
gear stage. Furthermore, the first four rotation orders which describe the comparatively long-wave
excitation are marked. Electric motors with high-speed gearboxes can reach speeds of up to
nIn = 20000 rpm. For this speed range, the frequency range which can be perceived by human hearing
is logarithmically represented (f = 20 Hz - 20000 Hz). The particularly sensitive frequency range
(f = 1 kHz – 6 kHz) is highlighted.
Figure 1: Influence of Deviations on the Excitation Behaviour
The illustration shows that the deviation of the flank shape is responsible for the excitation of the
gear mesh. Different contact conditions over one revolution of the gear cause the excitation of the
rotational orders. In the case of high-speed transmissions, only the first gear mesh order of the
short-wave excitations is important, since its harmonics fall outside of the perceptible range with
increasing speed. In contrast, the rotation order and its harmonics are audible over a wide frequency
range and partly lie in the sensitive frequency range of human hearing.
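This relation between speed, order and audibility can be checked with simple arithmetic; the sketch below uses the pinion tooth number z1 = 25 of the test gear set introduced in the following section and the sensitive band of 1-6 kHz quoted above.

```python
# Simple arithmetic check of the statements above, using the pinion tooth
# number z1 = 25 of the test gear set introduced below; the sensitive band
# of 1-6 kHz is taken from the text.

Z1 = 25                          # pinion teeth
SENSITIVE = (1_000.0, 6_000.0)   # particularly sensitive hearing range [Hz]

def order_frequency(n_rpm: float, order: float) -> float:
    """Frequency in Hz of a given order referred to the input speed."""
    return order * n_rpm / 60.0

for n in (6_000.0, 20_000.0):    # example input speeds in rpm
    f_mesh1 = order_frequency(n, Z1)  # 1st gear mesh order
    f_rot4 = order_frequency(n, 4)    # 4th rotation order
    in_band = SENSITIVE[0] <= f_mesh1 <= SENSITIVE[1]
    print(f"n = {n:7.0f} rpm: 1st mesh order {f_mesh1:5.0f} Hz "
          f"(sensitive band: {in_band}), 4th rotation order {f_rot4:4.0f} Hz")
```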
Objective and Approach
The objective of this report is to investigate, under dynamic conditions, the influence of long-wave
deviations on the noise behaviour at high speeds, taking the subjective perception into account.
Various types of deviation (wobble, eccentricity and pitch errors) are analysed. The boundary
conditions of the simulation model are kept constant so that a clear correlation between the cause and
the effect can be made. The procedure for the excitation investigation is described in Figure 2.
The types of deviations for the investigation are defined in the experimental design, for which
excitation maps are subsequently generated using the FE-based quasi-static tooth-contact analysis
ZaKo3D developed at the WZL [13]. The excitation maps are the basis of the dynamic simulation
according to GACKA for the representation of the gear set variants in the multi-body dynamic
simulation model [14]. The calculated differential velocity is evaluated taking the subjective
perception into account. The focus lies on the gear mesh and rotation orders depending on the value
of long-wave deviations.
Figure 2: Approach for the Investigation of the Dynamic Excitation Behaviour
Test Gear Set and Experimental Design
The investigations are carried out on a test gear set, which was examined by CARL with a
transmission ratio of z2/z1 = 36/25 in a simulation study [11]. In this study the macro and micro
geometry of the gear pair are kept constant. The height and width crowning for this gear are
cα = cβ = 10 μm. The same simulation model of the drive train is used for all investigated
deviations. By this, all natural frequencies of the test setup remain unchanged. The long-wave
deviations are applied to the fast-turning pinion while the wheel is held ideally.
The excitations as a result of a wobbling, an eccentricity, and different pitch errors are considered.
In this case, the influence of the test rig frequency is also taken into account, as a result of which the
noise radiation can increase significantly, particularly in the case of resonance-critical rotational
speeds.
The test gear set is tested at a constant torque level of TIn = 320 Nm. In order to examine the
influence of the deviations in detail, the respective deviations are considered in the range of
0 μm ≤ f ≤ 15 μm with a step size of Δf = 1 μm.
System Modelling
The tooth contact analysis offers the possibility to determine and evaluate occurring forces and
torques in the tooth contact quasi-statically. The operating conditions, such as speed and torque,
which change in the application lead to fluctuating meshing conditions and consequently to the
excitation of the entire structure. Therefore, a quasi-static consideration of the excitation is not always
sufficient, so that the dynamic aspects must also be taken into account in the design of the flank
modification. A force coupling element has been developed at the WZL specifically for the
calculation of the effective dynamic additional forces in the meshing tooth contact [14].
To use the dynamic model, a quasi-static consideration is necessary. By means of the FE-based
3D tooth contact analysis ZaKo3D different types of gears can be analysed [13]. One advantage of
this tooth contact analysis is the consideration of variable deviations of the tooth flank micro geometry
over one revolution and the illustration of their influence on the excitation behaviour. The results of
the simulation are summarised in excitation maps which describe the relationship between the rolling
position, the torque and the resulting transmission error.
The multi-body simulation model developed and validated in experiments by CARL is used to
calculate the dynamic excitation behaviour [11]. In order to depict the effects in a realistic manner,
test rig frequencies of a complete test set-up from the input to the output drive are simulated. The
spatially discretised overall system is represented in a simplified manner by a system with one
rotational degree of freedom, in which the input and output drives are modelled separately. The
discrete masses of the drivetrain are connected by massless spring-damper systems. The two
separately modelled drives are coupled via the force coupling element. In the force coupling element,
the kinematic state variables from the dynamic drive train model and the resulting excitation forces
are determined for each discrete time step.
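The following minimal sketch illustrates the modelling idea of two separately modelled drives coupled by a force coupling element; it is not the validated WZL model, and all parameter values except the input torque are assumptions chosen for illustration.

```python
# Minimal two-inertia sketch of the modelling idea (not the validated WZL
# model): input and output drive coupled by a gear mesh spring-damper as
# force coupling element. All values except the torque are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

J1, J2 = 2e-3, 4e-3         # inertias [kg m^2] (assumed)
R1, R2 = 0.040, 0.058       # base radii [m] (assumed)
C_MESH, D_MESH = 4e8, 2e3   # mesh stiffness [N/m], damping [Ns/m] (assumed)
T_IN = 320.0                # input torque [Nm], as in the test programme

def rhs(t, y):
    phi1, w1, phi2, w2 = y
    delta = R1 * phi1 - R2 * phi2       # deflection along the line of action
    ddelta = R1 * w1 - R2 * w2
    f_mesh = C_MESH * delta + D_MESH * ddelta   # force coupling element
    return [w1, (T_IN - R1 * f_mesh) / J1, w2, (R2 * f_mesh) / J2]

sol = solve_ivp(rhs, (0.0, 0.05), [0.0, 0.0, 0.0, 0.0], max_step=1e-5)
diff_velocity = R1 * sol.y[1] - R2 * sol.y[3]   # evaluated quantity
print(f"final differential velocity: {diff_velocity[-1]:.3e} m/s")
```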
Methodology to Analyse the Excitation Behaviour
The simulative test for the determination of the excitation includes an extensive simulation test
program. Gear sets with different deviations are tested. To ensure the comparability of the simulation
results, only the excitation maps are exchanged while the operating conditions are kept constant.
Speed ramps of nIn = 0 – 20000 rpm are analysed. The differential velocity in the tooth mesh is
evaluated. A uniform evaluation methodology is to be used for the analysis of the simulation results
in order to be able to compare the simulation results independently of the periodicity and the amount
of the deviation. Figure 3 shows the procedure of the evaluation methodology. Based on the
methodology the deviations for a gear set and operating condition are classified.
Figure 3: Evaluation Methodology for the Investigation of the Influence of Manufacturing and
Assembly Related Deviations
For this purpose, the fluctuation of the differential velocity between the input and output is first
detected for the investigated speeds. The recorded signal is converted into a frequency spectrum by
means of an FFT analysis. The subjective perception plays a decisive role in the evaluation, so that
the signals are A-weighted. Human hearing is not very sensitive at low and high frequencies, but
between f = 1 kHz and f = 6 kHz the perception is much more sensitive. The A-weighting is a
standard weighting to describe this phenomenon. A transfer of the frequency spectrum into an order
spectrum facilitates the evaluation. In this case, the frequency spectrum is normalised to a reference
frequency, which corresponds to the input rotational speed. Accordingly, the oscillations proportional
to the rotational speed are shown horizontally.
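The following Python sketch illustrates this evaluation chain: FFT of the differential-velocity signal, A-weighting, and normalisation of the frequency axis to the input rotational frequency to obtain an order spectrum. The A-weighting curve follows the standard IEC 61672 analogue definition; the reference level is an assumption, since the paper does not state its exact implementation.

```python
import numpy as np

def a_weighting_db(f):
    """A-weighting gain in dB (standard IEC 61672 analogue definition)."""
    f = np.asarray(f, dtype=float)
    ra = (12194.0 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * np.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194.0 ** 2))
    return 20.0 * np.log10(np.maximum(ra, 1e-30)) + 2.0

def order_spectrum(signal, fs, n_in_rpm, ref=1e-6):
    """A-weighted spectrum of the differential velocity with the frequency
    axis normalised to the input rotational frequency (i.e. in orders)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = 2.0 / n * np.abs(np.fft.rfft(signal))
    level_dba = (20.0 * np.log10(np.maximum(amp, 1e-30) / ref)
                 + a_weighting_db(freqs))
    orders = freqs / (n_in_rpm / 60.0)  # reference frequency = input speed
    return orders, level_dba
```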
If the deviation has a higher periodicity per revolution, the corresponding rotation order is excited according to the periodicity. To capture the effects of these deviations on the excitation, summed orders are used for the evaluation. In this case, the ten orders surrounding the gear mesh orders are summed up in order to consider the influence of the deviations on the sidebands. This procedure is also adopted for the rotation orders: for the consideration of the higher-harmonic excitations of the rotation order and of the higher periodicities, a sum level is determined for the first eleven rotation orders. These levels are subsequently converted into diagrams which allow an evaluation of the deviations depending on the speed and the manufacturing quality.
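A minimal sketch of this summation, assuming energetic (power) addition of the individual order levels and a selection window of ±0.25 orders around each integer rotation order; the paper's exact band selection may differ.

```python
import numpy as np

def sum_level_db(levels_db):
    """Energetic summation of individual order levels in dB(A)."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (levels_db / 10.0)))

def summed_orders(orders, level_dba, z_pinion):
    """Sum level of the first eleven rotation orders and of the band of ten
    orders around the 1st gear mesh order (order = number of pinion teeth)."""
    rot = np.any(np.abs(orders[None, :] - np.arange(1, 12)[:, None]) < 0.25,
                 axis=0)
    mesh = np.abs(orders - z_pinion) <= 5.0  # mesh order plus its sidebands
    return sum_level_db(level_dba[rot]), sum_level_db(level_dba[mesh])
```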
Analysis of the Excitation Behaviour
An incorrect assembly or misalignment during the manufacturing process can lead to wobble. The
wobble causes a single periodic variation of the lead angle deviation over one revolution of the pinion.
Figure 4 shows the gear data and the A-weighted frequency spectra of the differential velocity. The
FFT spectrum at the top left shows the differential velocity of an ideally manufactured and mounted gear set, which does not show any long-wave excitation. In comparison, the remaining spectra are shown for different amounts of wobble. For a lead angle deviation fluctuation of fHβ,Fluctuation = 5 μm, an excitation of the rotation order and its harmonics can be observed. The widening of the excitation of the gear mesh order results from the excitation of sidebands. An increasing deviation leads to an increase in the resulting oscillation excitation of the rotation order, its harmonics as well as the sidebands of the gear mesh orders. From the A-weighted spectra it can be observed that the rotation orders are weighted more strongly with increasing speed.
The level of the first gear mesh order decreases at higher speeds but remains high over a wide speed range. The level of the second gear mesh order decreases from a rotational speed of nIn = 8000 rpm and is no longer at a perceptible level at nIn = 20000 rpm.
Figure 4: Frequency Spectra of the Ideal and Wobbling Gear Set
A more detailed consideration of the excitation behaviour due to a wobble has been made (Figure 5). The wobble has been varied from 1 μm ≤ fHβ,Fluctuation ≤ 15 μm with a step width of ΔfHβ,Fluctuation = 1 μm. The levels are described in this illustration by the colour scale. If the summed rotation orders are considered, an increased excitation can be observed with an increasing wobble. In comparison, the differential velocity level of the summed rotation orders is about ΔLdx = 30 dB(A) above that of the summed 1st gear mesh order at a speed of nIn = 20000 rpm. The level of the differential velocity reaches a value of Ldx = 79 dB(A) at nIn = 20000 rpm. In contrast, the summed differential velocities of the gear mesh orders show that the energy content is distributed to the sidebands surrounding the gear mesh orders, so that the summed orders have similar levels.
Figure 5: Influence of the Wobble on the Excitation of the Summed Rotation and Gear Mesh
Orders
The eccentricity causes a single periodic change in the centre distance and thus a change in the operating pressure angle. The frequency spectrum for an eccentricity of Fr = 15 μm shows a high excitation of the 1st rotation order (Figure 6, upper left). In this case, the level of the summed rotation orders reaches a maximum of Ldx = 85 dB(A) and is therefore ΔLdx = 6 dB(A) above that of the wobble. The summed levels of the gear mesh orders again exhibit a behaviour independent of the deviation amount.
Figure 6: Influence of the Eccentricity on the Excitation of the Summed Rotation and Gear
Mesh Orders
As the last long-wave deviation, the pitch error is varied. Particularly in profiling manufacturing processes, the single pitch deviation may have a higher periodicity or a stochastic distribution over one revolution.
In the following, two variants of the pitch error are considered: a onefold sine distributed and a fourfold sine distributed pitch error. The summed rotation orders show a maximum level of Ldx = 85 dB(A) at a rotational speed of nIn = 20000 rpm for a onefold sine
distributed pitch error with fp = 15 μm (Figure 7, left). This is at a similar level compared to the
excitation of an equally high eccentricity.
In the case of the pitch error distributed with a fourfold sine, the sidebands enter the resonance regions earlier due to the higher periodicity. This also applies to the sensitive frequency range of human hearing. The resulting excitation therefore shows higher levels, which are observed over a wider speed range (Figure 7, upper right). The summed rotation order also shows a gain in level as the rotational speed increases, due to the excitation occurring in the audible range. As a result, higher-harmonic excitations of the long-wave deviation lead to a louder perception of the noise emission. The maximum level of the rotation order reached is Ldx = 110 dB(A).
Figure 7: Influence of the Pitch Error on the Excitation of the Summed Rotation and Gear Mesh
Orders
Conclusion
The investigations prove the influence of periodic shape and position deviations on the dynamic excitation behaviour. The different deviations affect the excitation to different degrees as a function of their amount. According to DIN 3961 [4], DIN 3962-1 [5], DIN 3962-2 [6] and ISO 1328-1 [7], tolerances are specified depending on geometry features, but not as a function of speed. Long-wave deviations are thereby limited by the IT classification with respect to the maximum permissible deviation values.
Figure 8 shows the summed levels as a function of the deviations for a quality class IT3. The
comparison shows that the summed levels of the 1st gear mesh order have an identical course over
the speed, irrespective of the deviation. The same applies to the 2nd gear mesh order. This is due to
the energy distribution to the sidebands, which in sum results in a similarly high level. In contrast, the
summed rotation orders show a different behaviour. For the same quality, the errors affect the
differential velocity to different degrees.
A maximum lead angle deviation of fHβ = 4 μm fulfils the requirement of the quality class to be tested for this gear set. For this case, the highest level of the differential velocity is Ldx = 64 dB(A). In comparison, the level for an eccentricity and accordingly a concentricity error of Fr = 7 μm results in Ldx = 79 dB(A).
The pitch error is tolerated on the basis of different characteristic values. Accordingly, a onefold sine distributed single pitch error of fp = 2 μm has a level of Ldx = 67 dB(A) at the maximum rotational speed. If the periodicity is increased while the IT class remains the same, a much higher excitation is obtained. The levels increase more strongly already at low speeds and reach a level of Ldx = 92 dB(A) at a speed of nIn = 20000 rpm.
Figure 8: Influence of Long-Wave Deviations on the Excitation Regarding the same Tooth
Quality (IT3)
For the investigated IT class, the wobble has the lowest level. The level of the pitch error with a onefold periodicity is ΔLdx = 3 dB(A) above that of the wobble. Eccentricity causes an additional excitation of ΔLdx = 12 dB(A). The gear set with a fourfold sine pitch error shows the highest level, which exceeds that of the wobble by ΔLdx = 28 dB(A).
Finally, the excitation is considered in more detail depending on the quality classes. For this purpose, the levels of the summed rotation orders are plotted as a function of the quality in Figure 9. In comparison, the concentricity error and the onefold periodic pitch error have a similar level profile, but their classification into the quality classes differs in relation to the excitation. When the same quality classes are considered, the wobble shows a significantly lower excitation, whereas the pitch error with a fourfold sine shows a contrary effect. In summary, the results show that the tolerance fields of the long-wave deviations should be defined functionally and separately for the individual deviations.
Figure 9: Influence of Long-Wave Deviations on the Excitation of the Summed Rotation Order
Summary
The results presented in this report show the influence of periodic positional and shape deviations on the high-speed dynamic excitation behaviour. The differential velocity level was analysed. These variable deviations affect the contact conditions of the gear pair. Periodic positional and shape deviations due to wobble, eccentricity and pitch errors influence the sidebands and rotational frequencies. At higher speeds, it is evident that these effects significantly affect the audible frequency spectrum. Due to the energy distribution to the sidebands of gear pairs with long-wave deviations, the summed order bands around the gear mesh orders have the same level as an ideal gear pair. Accordingly, an increasing deviation leads to a higher excitation of the differential velocity level, which in particular dominates the perceived frequency spectrum at high rotational speeds.
A functionally appropriate evaluation which considers the influences of the long-wave excitation is not fully covered by the existing standardisation, in particular regarding the periodicity of deviations over one hunting period of the gear pair. A quality-dependent analysis of the excitation reveals this deficit in the classification of the long-wave deviations. In further studies, the dynamic influences on other perceived parameters should be checked systematically. From such function-orientated limits for high-speed transmissions, tolerance classes for the characteristic values can be derived. Based on that, gearbox manufacturers can evaluate measurements of gears in industrial practice.
References
[1] J.W. Meschke, V. Thörmann, Langstreckenkomfort, Einflussgrößen und Bewertung, VDI-Berichte 1919, 2005
[2] M. K. Heider, Schwingungsverhalten von Zahnradgetrieben, Beurteilung und Optimierung des
Schwingungsverhaltens von Stirnrad und Planetengetrieben, TU München Diss. München, 2012
[3] F. Klocke, C. Brecher, Zahnrad- und Getriebetechnik, Auslegung - Herstellung - Untersuchung - Simulation, München, Carl Hanser, 2017
[4] DIN 3961, August 1978, Tolerances for Cylindrical Gear Teeth, Bases
[5] DIN 3962-1, August 1978, Tolerances for Cylindrical Gear Teeth, Tolerances for Deviations of
Individual Parameters
[6] DIN 3962-2, August 1978, Tolerances for Cylindrical Gear Teeth, Tolerances for Tooth Trace
Deviations
[7] ISO 1328-1, September 2013. Cylindrical gears. ISO system of flank tolerance classification.
Definitions and allowable values of deviations relevant to flanks of gear teeth
[8] G. Lanzerath, Untersuchungen des Geräusch- und Schwingungsverhalten schnellaufender
Stirnradgetriebe, RWTH Aachen University Diss. Aachen, 1970
[9] R. Müller, Schwingungs- und Geräuschanregung bei Stirnradgetrieben, TU München Diss.
München, 1991
[10] P. Velex, M. Maatar, A Mathematical Model for Analyzing the Influence of Shape and Mounting Errors on Gear Dynamic Behaviour, Journal of Sound and Vibration 191 (5) (1996) pp. 629-660
[11] C. F. Carl, Gehörbezogene Analyse und Synthese der vibroakustischen Geräuschanregung von
Verzahnungen, Diss. RWTH Aachen, 2014
[12] M. Inalpolat, M. Handschuh, A. Kahraman, Impact of indexing errors on spur gear dynamics,
International Gear Conference, Lyon, Elsevier (2014) pp. 751–762
[13] J. E. Hemmelmann, Simulation des lastfreien und belasteten Zahneingriffs zur Analyse der
Drehübertragung von Zahnradgetrieben, Diss. RWTH Aachen, 2007
[14] A. Gacka, Entwicklung einer Methode zur Abbildung der dynamischen
Zahneingriffsverhältnisse von Stirn- und Kegelradsätzen, Diss. RWTH Aachen, 2013
Surface Structuring of Forming Tool Surfaces by High-Feed Milling
Dennis Freiburg1,a, Maria Löffler2,b, Marion Merklein2,c, Dirk Biermann1,d
1 Institute of Machining Technology, Baroper Straße 303, 44227 Dortmund, Germany
2 Institute of Manufacturing Technology, Egerlandstraße 13, 91058 Erlangen, Germany
a Freiburg@isf.de, b Maria.loeffler@fau.de, c Marion.Merklein@fau.de, d Biermann@isf.de
Keywords: Forming, Milling, Surface modification
Abstract. Structuring of tool surfaces is used in various industrial applications to improve
processes. One example is the sheet-bulk metal forming (SBMF) which combines the advantages of
bulk and sheet metal forming. SBMF enables the production of sheet metal components with
functional elements. Due to the complex load conditions an uncontrolled material flow occurs,
which has a negative effect on form filling accuracy. By using adapted tribological conditions on
tool surfaces it is possible to control the flow of the material. For this reason, forming tools have to
be prepared with suitable surface structures. Most forming tools are characterised by large areas and
are made out of hardened high-speed steel, which makes it difficult to machine surface structures.
Within this study, a new approach for machining surface structures on large surface areas is
presented. Therefore, the factors of influence regarding the achievable surface characteristics are
analysed. In a second step wear and process forces for the structuring process are investigated.
Finally, the operational performance of selected surfaces is evaluated in forming tests.
Introduction
Surface structures are the focus of many scientific investigations related to different manufacturing processes and are becoming more important for numerous industrial applications. Most of these investigations deal with reducing friction for sliding movements [1]. Another application is sheet-bulk metal forming (SBMF), which combines sheet and bulk forming operations [2]. Due to the
combination of both forming processes, this technology is characterised by a complex material
flow [3]. Depending on the shape of the manufactured part, the die filling of small cavities can be
insufficient. Finite Element Analysis (FEA) studies have shown that a material flow control can be
realised by different friction conditions on local areas of the forming tool [4]. Tailored friction conditions can be set up by using surface topographies with different surface characteristics for low and high friction [5]. Nowadays, several processes exist which can be used
for applying surface structures. However, just a few of these processes are able to handle hardened
surfaces, which are necessary because of the resulting high contact normal stresses of the bulk
forming [2]. Furthermore, for some applications it is necessary to generate surface structures on
large areas.
One process which is capable of producing different microstructures in hardened surfaces is micro-milling [6]. However, its main disadvantage is that it can only be applied to small areas due to long process times [7]. Another option is structuring by laser texturing, which can also machine hard materials and produce a wide range of microstructures [8]. Laser texturing, however, induces heat and in most cases results in melted areas, which have to be reworked prior to the forming operation [9]. A process which is capable of producing suitable structures efficiently is high-feed milling [10]. This research paper presents investigations related to a high-feed milling process to create quasi-deterministic surface structures on hardened forming tool areas.
* Submitted by: Dennis Freiburg, M.Sc.
Surface Structures by High-Feed Milling
High-feed milling is a process which is mainly used for roughing or milling hardened tool surfaces. Compared with conventional milling, the process is characterised by a low axial depth of cut and a high feed per tooth. For the process, tools with a small tool cutting edge angle κ are used, which reduces the radial process forces [11].
To show the capability of applying a wide range of surface structures generated by high-feed milling, the influencing factors such as tool geometry and process parameters have to be considered.
The tool geometry is one of the main factors of influence for machining surface structures. A market analysis regarding industrially available high-feed tools revealed that the tools are mainly described by two different kinds of bounding geometries. The first geometry can nearly be described as a torus cutter with a large corner radius rε. The second geometry is described as an end mill cutter with a small corner radius rε, which passes into a straight cutting edge with a small tool cutting edge angle κ. The two conventional tool geometries which have been used for the experimental investigations in this research paper are shown in Figure 1.
Figure 1: High-feed milling tool geometries used for the experiments
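As a rough orientation for the torus-type geometry 1, the theoretical kinematic roughness in feed direction can be estimated from the feed per tooth and the corner radius with the familiar scallop-height relation. This approximation is a general machining rule of thumb under the assumption fz << rε, not a result from the paper; it neglects the lead angle, tool runout and wear.

```python
def kinematic_roughness_mm(fz_mm: float, corner_radius_mm: float) -> float:
    """Theoretical scallop height Rz,th = fz^2 / (8 * r_eps) in feed direction
    for a rounded cutting edge (valid approximation for fz << r_eps)."""
    return fz_mm ** 2 / (8.0 * corner_radius_mm)

# Assumed values: fz = 1.5 mm and r_eps = 10 mm give Rz,th of about 28 um;
# the measured Rz additionally reflects the lead angle and tool condition.
print(kinematic_roughness_mm(1.5, 10.0) * 1000.0, "um")
```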
To determine the influence of different surface parameter values, experimental investigations have been conducted for each tool geometry. The experiments were planned with a Latin Hypercube Design [12] using 40 different parameter combinations. The milling parameters feed per tooth fz, lead angle βf and width of cut ae were varied for both milling tool geometries, while the depth of cut ap and the cutting speed vc were kept constant. The milling was done on a high-alloy high-speed steel 1.3344 (ASP 2023®), manufactured powder-metallurgically and hardened to 60 HRC. The high-speed steel 1.3344 is often used for tools in bulk forming operations and is suitable for processes under demanding conditions [13]. The milling was performed on a DMG HSC 75 5-axis machining centre. The setup and the variations of the process parameter values are shown in Figure 2.
Figure 2: Experimental setup and process parameters
Subsequent to the experiments, the surface topography was measured using an Alicona InfiniteFocus G5 microscope, analysing 2D roughness parameters as well as 3D surface parameters. The surface characteristics were used to create statistical DACE models, which were built using MatLab. The prediction accuracy is shown by the coefficient of determination, which was calculated using cross-validation [14]. Results are shown in Figure 3, where the mean roughness Rz measured in feed direction (2D parameter) and the valley void volume Vvv (3D parameter) are shown for tool geometry 2. These surface parameters were chosen to capture changes of the surface concerning roughness as well as the retained volume, which could be important for SBMF.
Figure 3: Statistical models for milling tool geometry 2; a) ae = 0.5 mm and b) ae = 3 mm
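The design and surrogate modelling were done in MatLab; as an illustration, the following Python sketch reproduces the same workflow with scipy's Latin hypercube sampler and a Gaussian-process (Kriging-type) surrogate. The parameter bounds and the placeholder response are assumptions for illustration only, since the measured values are not tabulated here.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import cross_val_score

# 40-run Latin hypercube over fz, lead angle beta_f and width of cut ae;
# the bounds below are assumed, the paper's actual ranges are in Figure 2.
sampler = qmc.LatinHypercube(d=3, seed=1)
lower, upper = [0.5, 0.0, 0.5], [2.0, 6.0, 3.0]  # fz [mm], beta_f [deg], ae [mm]
X = qmc.scale(sampler.random(n=40), lower, upper)

# The responses (e.g. Rz or Vvv) would come from the Alicona measurements;
# a smooth placeholder function stands in for them here.
y = 0.01 * X[:, 0] ** 2 + 0.002 * X[:, 1] + 0.005 * X[:, 2]

# DACE-style Kriging surrogate; prediction quality as cross-validated R^2.
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0, 1.0]),
    normalize_y=True)
print("cross-validated R^2:", cross_val_score(gp, X, y, cv=5).mean())
```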
For milling tool geometry 2, the statistical model shows an almost linear dependency of the roughness Rz on the feed per tooth fz. A larger feed per tooth leads to higher roughness values, while the influence of the lead angle βf shows a maximum mean roughness around βf = 3°. This maximum occurs because of the small tip of milling tool geometry 2, which has an optimum angle at βf = 3°. Related results regarding the feed per tooth and mean roughness were shown by Peuker [15]. When the width of cut ae is increased (b), only small changes can be observed for the mean roughness Rz. For the valley void volume Vvv, only minor changes were detected when increasing the lead angle βf and the feed per tooth fz. For larger widths of cut ae, increasing the lead angle and feed per tooth leads to a larger Vvv. This shows that Vvv depends more strongly on the width of cut. Similar relations were obtained for tool geometry 1, but owing to the round shape of the tool tip, no maximum could be detected by varying the lead angle. This can be attributed to the fact that, in most cases, the contact surface between the tool and the workpiece can be described by a rounded shape; the milling grooves have nearly the same skewness when the lead angle is varied. It can be seen that the lead angle can represent an influencing factor for 2D roughness parameters depending on the milling tool. The study demonstrates that, depending on the process parameter values and the tool geometry, the process can be used to create a wide range of surface structures, which can be adapted to their specific application.
Investigation of Tool Wear and Process Forces
In most forming applications, forming tools are used for manufacturing a large number of parts. Thus, the tools have to be resistant to wear, which is why they mainly consist of hardened powder-metallurgical steel. When applying surface structures on hardened steel, the wear of the milling tools needs to be considered. To analyse the tool wear for a specific surface structure on a laboratory scale, the cutting speed vc was varied. Varying, for example, the feed per tooth fz leads to different surface structures, as shown in the section above. For evaluating the wear of the milling tools, both tool geometries were tested within a specific set of parameters. A fixed area of the surface structures was analysed by confocal light microscopy for 10 different states of wear. In contrast to conventional wear tests, the deviation of different surface characteristics, as well as the surface structure geometry, was examined. In addition, the major and minor cutting edges were observed, and the passive milling forces Fp were measured with a Kistler force dynamometer for each of the chosen cutting speeds. The results are shown in Figure 4.
Figure 4: Results of wear tests, a) milling tool geometry 1 b) milling tool geometry 2
Regarding the examined cutting speeds, both milling tool geometries show a nearly similar performance. The tools exhibit abrasive flank and rake wear depending on the cutting speed used. The lowest cutting speed of vc = 100 m/min shows the best result, which can be explained by the rising passive force for both geometries at higher cutting speeds. A similar trend can be observed for the surface characteristics Rz, Vvv and the topography, which show larger differences and deviations from the initial condition at higher cutting speeds. Although both geometries behave similarly with regard to cutting speed and passive forces, geometry 2 shows a better wear resistance. Geometry 1 shows much higher flank wear already after 20,000 mm² of structured surface area and thus changing surface parameters. In contrast, milling tool geometry 2 has less flank wear due to its better wear resistance and therefore a better capability of structuring larger areas almost without changes. Besides the differences between both cutting geometries, geometry 2 cuts with the inner minor cutting edge and therefore with a lower effective cutting speed (40 m/min), which could explain the lower wear. It should be mentioned that both tools can be used for structuring hardened surface areas, while geometry 2 can even be used for structuring large tool areas.
Operational Performance in Experiments with Pin Extrusion Test and Sink Process
For evaluating the operational behaviour, selected structures have been tested in different forming operations. To evaluate the flow behaviour of the sheet material on the surface structures, a pin extrusion test was applied. As mentioned before, high stresses and strain rates as well as high contact normal stresses can arise within sheet-bulk metal forming; therefore, a sink process was performed to evaluate the wear resistance of high-feed milled surface structures in forming operations.
The pin extrusion test is a laboratory test which can be used to determine the frictional behaviour of modified surfaces. The forming conditions of this test are close to those which appear when features are formed out of the sheet plane. Thus, this test is suitable to investigate the friction conditions in sheet-bulk metal forming. During the test, a pin is formed by pressing the upper die onto the sheet. The higher the friction, the more material flows into the cavity due to an impeded lateral material flow. The test principle is shown in Figure 5b. To derive the friction factors based on the Tresca friction law, the principle of numerical identification is used. Using the FE software simufact.forming, the pin heights for different friction factors were determined. Based on these results, a calibration curve was derived. The friction factor can then be calculated by comparing the calibration curve with the experimentally determined pin heights. For the current investigation, three high-feed milled surfaces were analysed. The surfaces were milled with geometry 1 using constant milling parameters except for the lead angle βf. Lead angles βf = 0°, βf = 1.5° and βf = 3.0° were picked from the statistical model (Figure 3a) to increase the mean roughness depth Rz. Based on the characterisation of the topographies (Figure 5a), the modified pin extrusion tools were investigated regarding their friction factors. To ensure statistical coverage, three workpieces were tested. Figure 5b shows the results of the pin extrusion test for the deep-drawing steel DC04 and the dual-phase steel DP600.
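A minimal sketch of this numerical identification, assuming a monotonic calibration curve: the FE-computed pin heights over the Tresca friction factor are interpolated and inverted at the measured pin height. All values below are placeholders, not results from simufact.forming.

```python
import numpy as np

# Calibration curve from FE runs: pin height grows with the friction factor m.
m_grid = np.array([0.02, 0.05, 0.08, 0.11, 0.14])         # Tresca factors
pin_height_fe = np.array([0.42, 0.51, 0.60, 0.68, 0.75])  # pin heights [mm]

def friction_factor(measured_pin_height_mm: float) -> float:
    """Invert the monotonic calibration curve by linear interpolation."""
    return float(np.interp(measured_pin_height_mm, pin_height_fe, m_grid))

# Three workpieces per variant, as in the experiments; mean friction factor.
heights = [0.59, 0.61, 0.60]
m = np.mean([friction_factor(h) for h in heights])
print(f"identified friction factor m = {m:.3f}")
```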
Figure 5: a) Surface structures and milling parameters, b) principle and results of pin extrusion test
The friction factor increases with growing lead angle. This result holds for both materials. For a lead angle of βf = 0° and workpieces made of DC04, the friction factor amounts to m = 0.08 ± 0.006. A friction factor of m = 0.11 ± 0.002 is determined for a lead angle of βf = 3°. Thus, with increasing roughness values of the high-feed milled tool surfaces, the friction values increase. This can be explained by an impeded lateral material flow due to an enhanced interlocking of the roughness peaks with increasing roughness values.
Due to more complex forming operations, tool surfaces have to withstand high stresses and strains, so wear can occur on the tool surfaces during operation. To analyse the wear resistance and wear progress of the modified surfaces, experimental forming tests were conducted using a sink process. Two different surface structures milled with high-feed milling geometries 1 and 2 were compared with conventional surfaces produced by grinding and polishing over more than 10,000 strokes. A SCHULER high-speed press of type PCr-63-250 with a maximum press force of 63 Mp (612 kN) was used for the experiments. In the sink process, a round punch face with an area of 79 mm² is pressed 1 mm into a DC04 sheet. After fixed stroke intervals, the punch was analysed by confocal light microscopy to determine its wear using the surface characteristics mean roughness Rz and valley void volume Vvv. The experimental setup and results are shown in Figure 6.
Figure 6: Experimental setup and results of a sink process
The results of the sink process show that all evaluated surfaces are characterised by a run-in phase, which can be identified by falling and fluctuating values over the first 250-300 strokes. Observing the surface characteristics Rz and Vvv, it can be noted that after the run-in phase an almost stable condition was reached for all tested surfaces. For the surfaces milled with geometries 1 and 2 and the conventionally ground surface, the Rz values as well as the Vvv show slightly decreasing values. This effect can be explained by a smoothing of the micro roughness, while the shape of the surface structures appears to remain intact. A different behaviour is shown by the polished surface, which can be seen in the percentage of change Δ%Rz and Δ%Vvv after 10,000 strokes. The polished surface shows a strong increase in Rz: from 0.06 μm at start-up it rises to 0.3 μm, which corresponds to a ΔRz of 400 %, while the Δ%Vvv value decreases. Similar results can be found in [17], where multifunctional surfaces were compared with polished surfaces in a sheet-tribo-tester. In summary, the high-feed milled surface structures show the lowest percentage of change Δ%Rz compared with the ground and polished surfaces. In addition, it should be mentioned that none of the surfaces has yet reached a critical failure.
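The relative change used in this wear evaluation can be written compactly; the short sketch below reproduces the 400 % figure quoted above for the polished surface.

```python
def percent_change(initial: float, final: float) -> float:
    """Relative change of a surface characteristic, e.g. Delta%Rz."""
    return (final - initial) / initial * 100.0

# Polished surface from the text: Rz rises from 0.06 um to 0.3 um.
print(f"Delta%Rz = {percent_change(0.06, 0.30):.0f} %")  # -> 400 %
```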
Summary
The results shown in this paper give an overview of the possibility of creating surface structures for forming applications by high-feed milling. By selecting two conventional high-feed milling tools, it was possible to produce a wide range of quasi-deterministic surface structures in hardened powder-metallurgy high-speed steel. Within this range, the surface structures can be tailored to their specific application depending on the intended surface characteristics. Despite the high strength of the hardened high-speed tool steel 1.3344 (>60 HRC), wear tests show that even large surface areas of up to 20,000 mm² can be structured with one milling tool without any significant changes in the surface topography. A pin extrusion test was performed to analyse the friction factors for three modified surfaces. The experiments show that an increasing mean roughness Rz leads to higher friction factors. This result can be explained by the impeded lateral material flow due to an enhanced interlocking of the roughness peaks. To prove the wear resistance of structured surfaces, a sink process running 10,000 strokes was performed, comparing two selected surface structures with a ground and a polished surface. All surfaces show a run-in phase and no critical point of failure, while the selected high-feed surface structures had the lowest percentage of change in mean roughness Δ%Rz for the tested surface characteristics.
Acknowledgement
The authors gratefully acknowledge the financial support of the German Research Foundation
(DFG) within the transregional collaborative research center TR73 “Manufacturing of complex
functional components with variants by using a new sheet metal forming process – Sheet Bulk
Metal Forming” within the subprojects B3 and C1. We also thank the Institute of Metal Forming
and Lightweight Construction of the University of Dortmund for the use of the SCHULER high-speed press.
References
[1] Z. Tang, X. Liu, K. Liu, Effect of surface texture on the frictional properties of grease
lubricated spherical plain bearings under reciprocating swing conditions, Proceedings of the
IMechE. 231 (2017) 125–135.
[2] M. Merklein, J.M. Allwood, B.A. Behrens, A. Brosius, H. Hagenah, K. Kuzman, K. Mori, A.E. Tekkaya, A. Weckenmann, Bulk forming of sheet metal, Annals of the CIRP. 61 (2012) 725-745.
[3] D. Gröbel, J. Koch, H.U. Vierzigmann, U. Engel, M. Merklein, Investigations and Approaches
on Material Flow of Non-uniform Arranged Cavities in Sheet Bulk Metal Forming Processes,
Procedia Engineering. 81 (2014) 401–406.
[4] D. Gröbel, R. Schulte, P. Hildenbrand, M. Lechner, U. Engel, P. Sieczkarek, S. Wernicke, S.
Gies, A.E. Tekkaya, B.A. Behrens, S. Hübner, M. Vucetic, S. Koch, M. Merklein,
Manufacturing of functional elements by sheet-bulk metal forming processes, Production
Engineering: Research and Development. 10 (2016) 63-80.
[5] M. Sedlaček, L.M.S. Vilhena, B. Podgornik, J. Vižintin, Surface Topography Modelling for
Reduced Friction, SV-JME. 57 (2011) 674–680.
[6] E. Brinksmeier, O. Riemer, S. Twardy, Tribological behavior of micro structured surfaces for
micro forming tools, International Journal of Machine Tools & Manufacture. 50 (2010) 425–
430.
[7] E. Krebs, P. Kersting, Improving the Cutting Conditions in the Five-axis Micromilling of
Hardened High-speed Steel by Applying a Suitable Tool Inclination, Procedia CIRP. 14 (2014)
366–370.
[8] K. Sugioka, M. Meunier, A. Piqué, Fundamentals of Laser-Material Interaction and Application
to Multiscale Surface Modification, in: Laser precision microfabrication. Springer-Verlag
(Springer series in materials science, 135), Heidelberg, 2010, pp. 91-120.
[9] F. Klocke, K. Arntz, H. Mescheder, K. Winands, Reproduzierbare Designoberflächen im
Werkzeugbau, wt Werkstattstechnik online. 11/12 (1999) 844-850.
[10] A. Zabel, T. Surmann, A. Peuker, Surface structuring and tool path planning for efficient
milling of dies, 7th international conference on high speed machining proceedings. (2008) 155–
160.
[11] E. Abele, M. Dewald, F. Heimrich, Leistungsgrenzen von Hochvorschubstrategien im
Werkzeug- und Formenbau, Werkzeug und Formenbau. 105 (2010), 737–743.
[12] M. D. Mckay, R. J. Beckmann, W. T Conover, A Comparison of Three Methods for Selecting
Values of Input Variables in the Analysis of Output from a Computer Code, Technometrics. 42,
(2000), 55–61.
[13] L. Petrkovska, D. Hribnak, J. Petru, L. Cepova, Effect of increasing feed speed on the machined
surface integrity, Annals of DAAAM for 2011 & Proceedings of the 22nd International Daaam
Symposium. Bd. 22 (2011) 1039-1040.
[14] T. Wagner, Planning and multi-objective optimization of manufacturing processes by means of
empirical surrogate models, Vulkan Verlag, Essen, 2013.
[15] A. Peuker, Werkzeugentwicklung für die Transplantation thermisch gespritzter
mikrostrukturierter Funktionsschichten auf Druckgusswerkstücke, Vulkan Verlag, Essen, 2015.
[16] R. Hense, P. Kersting, U. Vierzigmann, M. Löffler, D. Biermann, M. Merklein, C. Wels, High-Feed Milling of Tailored Surfaces for Sheet-Bulk Metal Forming Tools, Production Engineering. 9 (2015) 215-223.
[17] A. Godi, J. Grønbæk, L. Chiffre, Off-line testing of multifunctional surfaces for metal forming
applications, CIRP Journal of Manufacturing Science and Technology. 11 (2015) 28–35.
Evaluation of Design Parameters for Hybrid Structures Joined and Prestressed by Forming
Henning Husmann1,a,c and Peter Groche1,b,d
1 Institute for Production Engineering and Forming Machines, Otto-Berndt-Straße 2, 64287 Darmstadt, Germany
a husmann@ptu.tu-darmstadt.de, b groche@ptu.tu-darmstadt.de
c +49 6151/1623356, d +49 6151/1623143
Keywords: Forming, Joining, Pre-stress
Abstract. Reinforcing stringers, lightweight construction materials and targeted pre-stresses can be
used to increase the weight-specific stiffness and load-bearing capacity of structures. Especially in
the case of hybrid structures made of fibre-reinforced plastics and steel, these approaches can be
integrated into a forming process. For this purpose, the varying spring back after forming of the
different materials can be used to join and pre-stress the components. In the present paper, the
influencing design parameters on the pre-stressing and strengthening potential of such structures are
determined by numerical and experimental investigations. Therefore, a sheet metal blank with a
single stringer which is loosely wrapped by a fibre-reinforced thermoplastic strap is formed in a 3-point bending process. In order to prevent premature failure of the only slightly stretchable fibre-reinforced plastic strap, additional ring-shaped elastic elements are introduced at the coupling point.
The strengthening and stiffening potential is evaluated by comparison of the force-displacement
curves resulting from the bending process. It is shown that the weight-specific load bearing
potential of the hybrid structures can be increased by a suitable selection of design parameters and
materials. It is pointed out that the design of the coupling point between fibre-reinforced plastic and
steel is of particular importance.
Introduction
Weight reduction is an important goal in modern industry. In order to increase the energy
efficiency of cars, trucks and airplanes, sheet metal structures are needed which offer a high
strength and stiffness at low weight. Other applications can be found in façade structures for the
construction industry. In order to meet these requirements, hybrid structures made from different
types of materials are increasingly used. Especially the combination of steel and fibre reinforced
plastics (FRP) has significant advantages. Schmidt and Lauter consider the low steel price and the
versatile processing possibilities of the metallic material in combination with the possibility of local
reinforcement by FRPs as a benefit of hybrid constructions since the stiffening effect of the FRP
results in significant reductions of the metallic wall thickness and the total weight of the hybrid
structure [1]. Grujicic describes the advantages of polymer-metal hybrids additionally with
increased bending strengths and improved acoustic damping properties [2]. In order to utilise these
advantages, however, some challenges need to be overcome. According to Wang et al., new tool
concepts have to be developed to meet the combined requirements of the different materials. In addition, partly complex additional process steps are necessary to join the dissimilar materials. Thus, the potential of the materials is often not sufficiently utilised [3].
In addition to the use of hybrid designs, the mechanical properties of structures can also be
improved by targeted pre-stresses against their main load direction, thus postponing the failure.
Examples include the pre-stressing of piezo crystals [4], extrusion matrices [5], safety glass [6],
pressure vessels [7], pre-stressed concrete [8] and bridge beams [9].
Finally, the structures’ geometry itself can be optimised by the use of stiffening elements such
as beads and stringers. In this context, Groche et al. demonstrate the possibility to form curved sheet
metal structures with stiffening stringers by hydroforming [10], die bending and deep drawing with
rigid tools [11].
Benefit and application of FRP tapes. Continuous fibre reinforced tapes have advantageous
properties regarding tensile strength, stiffness, weight, thermal conductivity as well as durability
[12]. However, the elongation at break of the fibres is limited to small values ranging from 0.2 –
5.3 %, depending on the fibre material [13]. Common applications are local reinforcements of
structures or the transmission of high, concentrated (point) loads. In the latter case, pin-loaded
straps, as shown in Fig. 1a, are suitable connections. They provide the highest load bearing capacity
at minimum weight [12]. However, severe stress concentrations occur at the contact zone of pin and
FRP. In this context, Meier and Winstörfer present the possibility to significantly reduce these
concentrations by using non-laminated FRP straps. As depicted in Fig. 1b, a single strip of a fibre
reinforced thermoplastic is wound around the two pins and only joined at the outermost layer by
fusion bonding. The innermost layer is held in place by friction. Because of the remaining
movability, the forces in the individual layers are equalised during the loading of the strap, as a
result of which higher loads can be endured until failure [14].
Fig. 1: Laminated (a) and non-laminated (b) pin-loaded strap (cf. [15])
Pre-stressed structures. In the construction industry, FRP are used to retrofit, stiffen and
strengthen the girders of bridges. Deuring describes the beneficial effects of carbon fibre reinforced
plastics mounted on the bottom side of concrete beams. For this purpose, the bottom sides of concrete beams were reinforced with pre-stressed FRP lamellae and bent until failure in 4-point
bending tests. It was found that the load carried by the FRP as well as the stiffness of the retrofitted
carrier can be significantly increased with rising pre-stress levels of the FRP lamellae [16].
Additional investigations of Ghafoori and Motavalli show that the stiffness of a steel beam,
equipped with FRP straps, increases as the Young’s modulus of the FRP increases. Also, the load
portion carried by the FRP rises [9].
Stringer sheet forming. A common approach to increase the stiffness of structures submitted to
bending forces is the use of stiffening stringers. A significantly increased stiffness, however, comes
along with limited production possibilities. Due to the stringer, flat punches or dies of conventional
deep drawing tools cannot be used to form the stringer sheets. By substituting the solid tool with a working medium, complex shapes and sheets with high stringers can be processed by hydroforming, as shown in Fig. 2a [10]. However, hydroforming is a comparatively slow process. In order to increase the process
performance in stringer sheet forming, Köhler and Groche investigated the possibility of forming
stringer sheets by means of rigid tools. It is shown that stringer sheet forming by die-bending with
slotted tools, as depicted in Fig. 2b, is a feasible alternative to media based hydroforming [17].
Typical material-dependent failures are the buckling of the stringer, if arranged on the concave side of the metal sheet, as well as tearing of the stringer or the weld, if arranged on the convex side [11].
Fig. 2: (a) Hydroforming of stringer sheets [18]; (b) die-bending of stringer sheets [11]
Objectives. The aim of this paper is to combine the stiffening potential of stringer sheet
metal structures with the positive properties of continuous fibre reinforced thermoplastics (CFRTP)
to increase the load bearing potential and the damping properties. The support of high-stressed areas
is the main focus in order to shift the failure limit of the hybrid structure. For this purpose, a strap of
CFRTP is wound around the stringer of the sheet metal, using ring-shaped coupling elements to
transmit the forces from the stringer into the CFRTP strap. An elastic-plastic deformation behaviour
of this coupling element is expected which shall compensate for the inadequate elongation
capability of the FRP. The specimens are formed in a 3-point bending test. It is expected that the
stringer sheet blank is plastically formed whereas the CFRTP is solely elastically elongated. It is
assumed that due to the difference in spring back of CFRTP and metal after the forming process, a
beneficial pre-stress remains in the stringer which can postpone the structure's failure under
subsequent loading. In the following experimental and numerical investigations, the hypotheses
listed below shall be examined:
- Stringer sheet structures can be joined and pre-stressed with CFRTP straps by forming.
- The failure limit of stringer sheet structures can be postponed by joined CFRTP straps.
- With an increasing stiffness of the CFRTP strap, the pre-stress increases.
- With an increasing stiffness of the coupling element, the pre-stress increases.
- With an increasing yield strength of the coupling element, the pre-stress increases.
Setup for the 3-point bending process
Specimen preparation. The specimen geometry used for the bending tests is shown in Fig. 3. A
laser cut rectangular base plate with a laser welded stringer represents the metal structure to be
reinforced. The stringer and the base plate are made of 1 mm thick steel DC04 (1.0338). The
dimensions of the vertical stringer are 128 mm x 10 mm and the dimensions of the base plate are
130 mm x 50 mm. The joining process is carried out by laser welding at 900 W laser power and 22
mm/s welding speed with an IPG fibre laser system. Both ends of the stringer are attached with
1 mm thick ring-shaped coupling elements also made of DC04. The coupling elements with a height
of 10 mm and an outer diameter of 16 mm are formed into their geometry by an adapted U-O die-bending of a laser-cut sheet. By slightly over-bending the limbs, a sufficient clamping force can be
generated so that they are held in a centric position at the stringer ends. Lastly, a CFRTP strap with
unidirectional fibre orientation and a width of 8 mm is manually wound around the coupling
elements in a way that the upper edge of the strap is aligned with the upper edge of the coupling
element. Thus, a distance of 2 mm is kept between the strap and the base plate. As described in the
state of the art, the FRP component is designed as a non-laminated pin-loaded strap. Thus, only the
outermost end of the used 0.25 mm thick glass fibre reinforced polypropylene tape (Sabic UDMAX
GPP 45-70 Tape) is melted by hot air and bonded to the underlying layer under slight tension at its
free end. The innermost layer is fixated by friction. In the experiments, CFRTP straps with 1 to 3
layers are tested. As a reference, a non-reinforced stringer sheet is tested to determine the influence
of the FRP straps.
Fig. 3: Geometry of the test specimen
Experimental setup. The 3-point-bending tool, as depicted in Fig. 4, consists of a punch with a
10 mm diameter and four support rolls with a diameter of 30 mm. The support rolls are grooved with
a width of 3.5 mm to avoid a collision with the FRP straps. For the stringer, a gap between the
opposite rolls is set to 3 mm by using spacer discs. The distance between the roll axes is set to
60 mm, thus, leaving around 4 mm of space between the support rolls and the coupling element. For
the bending test, the FRP wound stringer is arranged on the opposite side of the punch. The
position-controlled tests are carried out with a punch-speed of 0.01 mm/s and the maximum punch
movement is set to 3 mm. A tensile/compression testing machine (Zwick Allround-Line 100 kN) is
used.
Fig. 4: 3-point-bending tool with grooved support rolls
Numerical setup. The numerical studies are used to carry out the majority of the investigations. In addition to the validation of the simulation by comparison with the experimental data, parameters beyond the experiments are investigated. For the FEM analyses, the software Dassault Systèmes Abaqus 6.14 with an implicit solver is used. Because of the present symmetries, only a quarter of the test assembly is modelled in order to shorten the simulation times. The tools, i.e. the support rolls and the punch, are modelled as rigid bodies. For the metal specimen components, i.e. the base plate, the stringer and the coupling elements, an elastic-plastic material model is used. The necessary flow curves were determined by tensile tests and extrapolated using the Ludwik equation [19]. Due to the low plasticity of FRPs, the CFRTP strap is modelled with a fully elastic material behaviour. The strong anisotropy of the CFRTP is taken into account by an orthotropic material model, calculated from literature data and manufacturer's specifications, and by applying a material orientation tangential to the edge of the strap. Contrary to the experiments, the CFRTP is built up as a fully laminated strap in order to reduce the complexity of the model.
All contacts are modelled using the option hard contact in the normal direction and a penalty algorithm with estimated friction coefficients of 0.1 (steel-steel) and 0.25 (steel-FRP) in the tangential direction. Table 1 shows an overview of the most important material properties used in the simulations.
Table 1: Material properties
Steel (DC04 / ZStE340 / ZStE500): Young's modulus 210,000 N/mm²; Poisson's ratio 0.3; density 7.85 g/cm³; yield strength 204 / 340 / 450 N/mm²
CFRTP (Sabic UDMAX GPP 47-70): Young's modulus 0° 37,000 N/mm²; density 1.65 g/cm³; strain at break 3.4 %; tensile strength 0° 948 N/mm²
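The Ludwik equation referenced above has the form σf = σy + K·φⁿ. The following sketch fits this form to tensile-test data and extrapolates it for the FE input; the stress-strain pairs are placeholders in a range typical for DC04, not the measured curves of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def ludwik(phi, sigma_y, K, n):
    """Ludwik flow curve: sigma_f = sigma_y + K * phi**n."""
    return sigma_y + K * phi ** n

# True stress [N/mm^2] over plastic strain from a tensile test (placeholders).
phi = np.array([0.002, 0.02, 0.05, 0.10, 0.15, 0.20])
sigma = np.array([210.0, 280.0, 330.0, 380.0, 410.0, 435.0])

(sigma_y, K, n), _ = curve_fit(ludwik, phi, sigma, p0=(200.0, 500.0, 0.5))

# Extrapolation beyond uniform elongation for the tabular FE material input.
phi_ext = np.linspace(0.002, 1.0, 50)
flow_curve = ludwik(phi_ext, sigma_y, K, n)
print(f"sigma_y = {sigma_y:.0f} N/mm^2, K = {K:.0f} N/mm^2, n = {n:.2f}")
```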
Two types of models are created, which are shown in Fig. 5. For the validation of the simplified
numerical setup, a first model with the exact same assembly as in the experiments is used (model 1).
In this case, a collision of the coupling elements with the support rolls is prevented by a larger sheet
metal length. Since, however, free, unloaded ends are present in the reference stringer sheet, while
in the hybrid specimens, the ends are used for the force transmission, a second model is constructed
to ensure the comparability of the results. In model 2, the contacts between the support rolls and the
coupling element are deactivated and the base plate dimensions are set to 50 mm x 100 mm. Thus, a
comparable evaluation of the investigated effects is possible.
Fig. 5: Comparison of the numerical setups
Based on these two models, several parameters are investigated. In order to determine the dependence of the pre-stress on the stiffness of the strap and of the coupling element, straps with 1-4 layers and coupling elements with thicknesses from 1-2 mm are modelled. The influence of the yield strength of the coupling element material is determined by using different steel materials. For this purpose, the steels ZStE340 and ZStE500 are used in addition to DC04.
Table 2: Parameters of the performed numerical simulations
Model 1 (punch movement 3 mm): Ref. - reference stringer sheet without strap; B1 / B2 / B3 - 1 / 2 / 3 layers, coupling element DC04, thickness 1 mm.
Model 2 (punch movement 5 mm): Ref. - reference stringer sheet without strap; N1 / N2 / N3 / N4 - 1 / 2 / 3 / 4 layers, coupling element DC04, thickness 1 mm; N5 / N6 - 1 layer, coupling element DC04, thickness 1.5 / 2 mm; N7 / N8 - 1 layer, coupling element ZStE340 / ZStE500, thickness 1 mm.
Results
The force-displacement curves from the experimental investigations are shown in Fig. 6a. It can
be seen that the forces required for the deformation of the specimen are significantly higher in the
case of the hybrid specimens than in the case of the stringer sheet used as a reference. Additionally,
the necessary forces are increasing with the number of CFRTP layers. Fig. 6b depicts the
comparison of the experimental and the numerical results for a stringer sheet reinforced by one layer
of CFRTP. After an initially comparable course, the forces in the simulation rise faster than in the
experiment. From about 1.7 mm, a further increasing slope of the experimental curve can be
observed, resulting in a curve almost parallel to the numerical results. For a better visualisation of
this effect, the experimental curve in Fig. 6b is extrapolated with a dashed line. At about 2.6 mm a
force drop appears in the experiments.
Fig. 6: Experimental mean value force-displacement curves (a); comparison of the experimental and the numerical force-displacement curves (b). Both diagrams plot the punch force [N] over the punch displacement [mm]; a) shows the curves for 1 layer (B1), 2 layers (B2), 3 layers (B3) and the reference, b) the numerical and the experimental result for B1.
Fig. 7 shows the specimens after the tests. The qualitative experimental and numerical results fit well. Nevertheless, two differences are visible. While the strap stays vertical in the numerical simulation after the contact with the base plate, the bottom edge of the strap buckles in the experiment. Also, a small gap between the coupling element and the base plate is visible after the bending process in the case of the experimental specimen. These effects explain the deviations in the previously presented diagram in Fig. 6b. Due to the tilting of the coupling element, the bending stiffness of the CFRTP strap is not fully utilised, since the strap is only slightly bent. After the contact with the base plate, the CFRTP strap is submitted to the full bending loads, thus stiffening the specimen. If these bending forces reach a critical value, the strap buckles, resulting in a sudden force drop, as shown at the end of the experimental curve in Fig. 6a.
Fig. 7: Deformed sample. Experiment (top picture) and numerical model with the resulting von
Mises stresses in N/mm² (bottom picture) – sample type: B1
Despite the differences between the experimental and the numerical results, the fundamental effects can be reproduced with good approximation. Thus, the further parameter studies regarding the general effects in the joining and pre-stressing process can be performed using the numerical setup described earlier (see Fig. 5).
The numerical results are shown in Fig. 8. For the evaluation of the reinforcing effect of the
CFRTP strap, the punch force at 2 mm displacement is determined in Fig. 8a. At this time, the strap
is in contact with the base plate. In order to determine the potential for lightweight construction
purposes, the weight-specific stamping force is evaluated. For this purpose, the normalised punch
force Fn is determined from the quotient of the stamping force Fs and the respective sample mass m:
Fn = Fs / m. (1)
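A minimal numeric illustration of Eq. (1); the force and mass values below are placeholders, since the individual sample masses are not listed in the paper.

```python
def normalised_punch_force(f_s_newton: float, mass_gram: float) -> float:
    """Eq. (1): F_n = F_s / m, here expressed in N per gram of sample mass."""
    return f_s_newton / mass_gram

# Placeholder values: a 1500 N stamping force on a 60 g sample gives 25 N/g.
print(normalised_punch_force(1500.0, 60.0))
```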
It can be seen that the normalised force required for the plastic deformation of the sample
increases with the number of layers (N1-N4). Also, the increase of the coupling element's
thickness reveals a marked elevation of the normalised forces (N5-N6). At a punch displacement of
2 mm, the weight-specific strengthening effect of a higher coupling element’s yield strength can
also be detected (N7-N8).
Fig. 8: Numerical results: weight-specific yield force at 2 mm punch displacement (a); weight-specific yield force at 5 mm punch displacement (b); force transmitted by the FRP strap after unloading (c). Panels a) and b) plot the normalised punch force [N/g], panel c) the force [kN]. Legend: Ref.: reference (stringer sheet); N1-N4: number of layers (1, 2, 3, 4); N5, N6: thickness of the coupling element (1.5 mm, 2.0 mm); N7, N8: material of the coupling element (ZStE340, ZStE500).
As can be seen in Fig. 8b, the reinforcing effect caused by higher yield strengths (N7-N8) is
significantly increased at a punch movement of 5 mm. This is due to the fact that the coupling
element’s plastic deformation is reduced by the higher yield strength, thus transmitting higher
forces into the strap. Finally, the absolute forces, remaining in the CFRTP strap after a punch
movement of 5 mm and a subsequent unloading, are shown in Fig. 8c. Not shown are the forces
remaining in the stringer sheet which are equal in magnitude but oppositely directed due to the
mechanical equilibrium in the unloaded sample. It is shown that higher sample strengths, as seen in
Fig. 8a-b, can be achieved with an increasing pre-stress, represented by the remaining forces in the
CFRTP strap.
Conclusions
In this paper, a novel joining method for stringer sheet structures with reinforcing CFRTP straps
is presented. Despite some deviations due to manufacturing inaccuracies and numerical
idealisations, a good qualitative agreement between the conducted experimental and numerical analyses can be stated. Based on the results, the following conclusions can be drawn:
- Stringer sheet structures can be joined and pre-stressed with pin-loaded CFRTP straps wound
around ring-shaped, elastic coupling elements.
- Due to the pre-stress originating from the FRP strap, a significant increase of the structure’s
weight-specific yield strength can be generated.
- The beneficial pre-stresses can be increased by using stiffer coupling elements and stiffer
CFRTP straps.
- With increasing deformation of the specimen, the generated pre-stress can be raised by using
coupling elements made from materials with higher yield strengths.
In order to examine the influence of pre-stress more closely, further studies are needed. For this
purpose, it is planned to generate the pre-stresses by means of tensile tests and subsequently
evaluate their reinforcing effects by means of bending tests. Thus, the mechanisms of the pre-stress
generation are separated from the resulting effects, enabling a more fundamental evaluation.
Acknowledgements
The results generated in this paper were achieved within the project ‘Prestressed, hybrid stringer
sheet structures’ (GR1818/57-1) funded by the German Research Foundation (DFG). The financial
support of the German Research Foundation (DFG) is gratefully acknowledged.
References
[1] H. C. Schmidt, C. Lauter, Serienfertigung von Stahl-CFK-Strukturen auf metallischem Weg, online article: Maschinen Markt - Umformtechnik, 25.01.2013.
[2] M. Grujicic, Injection over molding of polymer-metal hybrid structures, American Journal of
Science and Technology 1 4 (2014), 168-181.
[3] Z. Wang, M. Riemer, S. F. Koch, D. Barfuss, R. Grützner, F. Augenthaler, et al., Intrinsic
Hybrid Composites for Lightweight Structures: Tooling Technologies, Adv. Mat. Res. 1140 (2016),
247-254.
[4] Information on http://www.piceramic.de.
[5] C. T. Kwan, C. C. Wang, An Optimal Pre-stress Die Design of Cold Backward Extrusion by
RSM Method, Structural Longevity 15 (2011), 25-32.
[6] F. Dehn, G. König, G. Marzahn, Konstruktionswerkstoffe im Bauwesen, Ernst & Sohn, Berlin,
2003.
[7] J. M. Alegre, P. Bravo, M. Preciado, Fatigue behaviour of an autofrettaged high-pressure vessel
for the food industry, Eng. Fail. Anal. 14 (2007), 396-407.
[8] A. E. Naaman, Prestressed Concrete - Analysis and Design, Techno Press 3000, Ann Arbor,
2012.
[9] E. Ghafoori, M. Motavalli, Normal, high and ultra-high modulus carbon fiber-reinforced
polymer laminates for bonded and un-bonded strengthening of steel beams, Mater. Des 67 (2015),
232-243.
[10] P. Groche, F. Bäcker, M. Ertugrul, Möglichkeiten und Grenzen der Stegblechumformung, wt
Werkstatttechnik online 100 (2010), 760-765.
[11] P. Groche, S. Köhler, Formgebung und Leichtbaupotential verzweigter Blechbauteile, VDI-Z,
Integrierte Produktion 10 (2016), 50-52.
[12] H. Schürmann, Konstruieren mit Faser-Kunststoff-Verbunden, Springer, Berlin, Heidelberg
2007.
[13] J. Rösler, H. Harders, M. Bäker, Mechanisches Verhalten der Werkstoffe, Teubner, 2006, 295331.
[14] U. Meier, International Patent WO 96/29483 (1996).
[15] U. Meier, A. Winstörfer, Advanced Thermoplastic CFRP Tendons, International Workshop on
Thermoplastic Matrix Composites (2007), Italy.
[16] M. Deuring, Verstärkung von Stahlbeton mit gespannten Faserverbundwerkstoffen, docoral
thesis, ETH Zürich, 1993.
[17] S. Köhler, P. Groche, Forming of Stringer Sheets with Solid Tools, Adv Mat Res 1140 (2016),
3-10.
[18] P. Groche, F. Bäcker, Springback in stringer sheet stretch forming, CIRP Annals Manufacturing Technology 62 (2013), 275-278.
[19] P. Ludwik, Elemente der Technologischen Mechanik. Berlin: Springer-Verlag, 1909.
78
System Concept of a Robust and Reproducible Plasma-Based Coating
Process for the Manufacturing of Power Electronic Applications
Alexander Hensel1,a, Martin Mueller1,b and Joerg Franke1,c
1 Friedrich-Alexander-University Erlangen-Nuremberg, Institute for Factory Automation and
Production Systems (FAPS), Fuerther Strasse 246b, 90429 Nuremberg, Germany
a alexander.hensel@faps.fau.de, b martin.mueller@faps.fau.de, c joerg.franke@faps.fau.de
Keywords: Wire Bonding, Additive Manufacturing Process, Plasma Coating
Abstract. The increasing integration of power electronics in various applications like energy
distribution, hybrid and electric mobility or consumer products requires a steady improvement of
interconnection technologies in order to increase reliability, efficiency and life expectancy of power
modules.
Currently the standard technology for top level interconnection is the aluminum wire bonding
process. Thereby wires are friction-welded onto dies and substrates by the use of ultrasonics in
order to create a local cohesive connection. Due to the limitations of aluminum regarding
mechanical, thermal, and electrical characteristics, the use of alternative materials like copper is
preferred. However, due to the higher hardness of copper, copper wire bonding on the existing
aluminum metallization of the components is not possible, so that an additional copper metallization
has to be applied. Common processes for the metallization of semiconductors like PVD or
galvanic metallization are either expensive or require extensive follow-up processes like back etching.
In order to provide another suitable method, a plasma-based coating unit for copper particles has been
installed.
Introduction
The elimination of limiting factors, like insufficient connection technologies, in electronics
production by the use of new technologies as well as the simplification of the process chains of
existing additive processes enables the development of more cost-effective and more powerful
modules. The preliminary investigation of various additive plasma-based methods shows a great
potential of this technology in the field of electronics. The challenge is to choose a process-relevant
method from the established procedures and to develop it adequately and in accordance with given
requirements. The categorization and explanation of the thermal spraying processes, to which
plasma-based coating methods belong, is carried out according to DIN EN 657 [1].
The process of thermal metal spraying is now widely used in many fields of application. As shown
in Fig. 1, in this process, by definition, a coating material (also called spray additive material) is
completely or superficially melted or plasticized by means of a source of energy [2]. Through the
injection into the flame of the spraying device, the material is linearly accelerated by the gas
expansion and impinges on the work piece to be coated. Upon impact, the coating material solidifies,
and adhesion to the surface occurs due to physical adhesion, mechanical interlocking and diffusion.
The component surface to be coated is not melted [3].
* Submitted by: M. Sc. Alexander Hensel
Fig. 1: Process principle of a thermal coating application. Schematic elements: power source
(flame, electric arc or plasma), coating material, accelerated and heated particles, plasma gun
with process mediums, substrate with relative movement.
Definition of the process principles
For a deeper process understanding, it is necessary to clarify what the term "plasma" means. Derived
from the Greek plásma, standing for "structure", it describes a fourth physical state in addition to
the conventional states solid, liquid and gaseous. The term is understood to mean an ionized
gas which differs greatly in its characteristics from the other physical states [4]. In addition to the
neutral particles, the medium also contains mobile charge carriers in the form of charged ions and
electrons and is thus electrically conductive. Due to the equal number of charge carriers of both
polarities, the plasma is externally electrically neutral; this is termed quasi-neutrality [5]. Within
the plasma nozzle, the ignition is carried out by means of an electrical gas discharge. In this process,
a plasma is formed by applying an electric voltage to two electrodes. The carrier gas to be ionized
flows between the electrodes and is exposed to an electromagnetic field. [3]
The energy consumption is distributed over various mechanisms as shown in Fig. 2. The energy is
thus first distributed for the excitation of elastic processes such as translation, vibration and rotation.
Fig. 2: Stages of energetic excitation of gases (translation, vibration, rotation, electric excitation,
dissociation, ionisation)
If electrical excitation and dissociation are also present, the free primary charge carriers in the gas
are accelerated during the subsequent ionization [6]. A mass-dependent discrepancy can be observed.
The conversion of electrical energy into kinetic energy Ekin is subject to the rules of energy
conservation. According to Eq. 1, the significantly heavier ions achieve a much lower speed than the
electrons at the same energy input. The factor m denotes the particle mass; v describes the speed. [2]

Ekin = ½ · m · v²  (1)
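The mass dependence in Eq. 1 can be made concrete with a short worked example (added here for illustration; the numerical values are standard atomic constants and are not taken from [2]). At equal kinetic energy, ½ · me · ve² = ½ · mi · vi², the speed ratio is

ve / vi = sqrt(mi / me) ≈ sqrt(39.95 · 1836) ≈ 271  (argon ions)

so the electrons are roughly 270 times faster than the argon ions, which leads to the spatial separation described below.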
The ions and electrons thereby become spatially separated from each other. In these movements
the electrons collide with the comparatively immobile ions and gas atoms. In the case of the latter,
further electrons are released. However, the energy transfer in elastic impacts is very small because
of momentum conservation and the large mass difference. Further electrons are emitted from the
cathode into the gas. Due to the drift of the electrons in the direction of the anode and of the ions
towards the cathode, this ultimately results in a kind of chain reaction by which the gas becomes
conductive via so-called streamers (Fig. 3) [3].
Fig. 3: Avalanche Ionisation of gas [7]
The sum of the free primary charge carriers and the secondary charge carriers released by impact
ionization, referred to as nion, in proportion to the total number of particles nion + n0, indicates the
degree of ionization xion (Eq. 2). This usually reaches a value of approximately 3 % [2].

xion = nion / (nion + n0)  (2)
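As a plain numerical illustration of Eq. 2 (a worked example added here, using the approximately 3 % quoted above): xion = 0.03 implies nion / n0 = 0.03 / 0.97 ≈ 0.031, i.e. roughly one charge carrier per 32 neutral particles.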
An important classification is the differentiation of thermal and non-thermal plasmas. If a thermal
plasma is present, a thermodynamic equilibrium must exist for all energy levels. The system shown
in Fig. 2 can be described by equations which are individual for each factor. The degree of ionization
is described, for example, by the Saha-Boltzmann equation. The translatory processes can be defined
via a Maxwell distribution. The common factor is the temperature. Only if it is identical in each
subsystem is a thermal plasma present. The formation of a thermal plasma can be prevented by both
spatial and temporal gradients. An example of temporal gradients is high gas flows. These lead to
energetically enriched particles being quickly removed. Due to the rapid exchange and rapid
dissipation of the energy, a homogeneous temperature cannot be established.
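For orientation, the Saha equation mentioned above can be sketched in one common textbook form (added here for reference; this form is not quoted from the cited sources):

ne · ni / n0 = 2 · (gi / g0) · (2π · me · k · T)^(3/2) / h³ · exp(−Eion / (k · T))

with the electron mass me, the Boltzmann constant k, the Planck constant h, the statistical weights gi and g0, and the ionization energy Eion. It links the degree of ionization directly to the common temperature T, which is why a single temperature in all subsystems is the decisive criterion for a thermal plasma.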
A further possibility is the excitation via pulsed sources. In this variant, the excitation is carried out
just for a very short time. An ionization can also take place here, but the formation of a
thermodynamic equilibrium is prevented by cooling effects.
The energy required for the ionization of gases differs for every element. Depending on whether the
gas occurs as a molecular or an atomic gas, the energy demand for the dissociation has to be
considered as well. In Table 1 the main plasma gases and their energy requirements for ionization are displayed [2].
Tab. 1: Ionization and dissociation energy of the main plasma spray gases [2]

Species   Ionization energy [eV]   Dissociation energy [eV]
H         13.659                   -
He        24.481                   -
N         14.534                   -
Ar        15.755                   -
H2        15.426                   4.588
N2        15.58                    9.756
This relationship is illustrated in Fig. 4, although it is generalized. Nitrogen and argon differ only
slightly in their necessary ionization energy; argon with 15.76 eV is even slightly above nitrogen
(N2: 15.58 eV). This is due to the gas structure. Argon is not a diatomic molecular gas, but atomic.
However, the mechanisms of vibration, rotation and dissociation shown in Fig. 2 apply only to
molecular gases. The dissociation describes e.g. the decomposition of a nitrogen molecule into two
nitrogen atoms; for a nitrogen molecule, approximately 12 eV are required. Thus, the necessary
total energy expenditure for the ionization of argon is significantly lower. However, even under
ideal conditions, a thermodynamic equilibrium can only be achieved theoretically. Therefore an
idealistic model is used.

Fig. 4: Dependence of the electrical properties (voltage in V over current in A) on the plasma gas
(curves for H2, He, N2 and Ar) [7]
Concept of the plant design
To ignite the plasma, the process gas is fed into the burner. The initial ionization is carried out
by means of an electric gas discharge. In a stable condition, the plasma temperature for an argon-based
process is approximately 15,000 K in the arc column region. The plasma jet is directed through
a nozzle where the coating material is injected. The temperature drop over this short distance is
enormous. Fig. 5 shows the structure of the plasma torch coating cell. In order to increase process
stability, the plasma torch is firmly installed in the cell and the work piece is moved flexibly under
the nozzle. This unconventional construction has the advantage that the feed lines, in particular the
powder lines, do not move. This enables a more constant powder delivery. The mechanisms
responsible for the deposition of a copper layer are primarily the mechanical interlocking of the
particles with the surface as well as diffusion processes. Due to the active temperature control of
the samples by means of a heating table, the particle adhesion on smooth surfaces is improved, as
the temperature difference between the surface and the copper particles is minimized. This
reduces the cooling during impact, which results in less mechanical stress and a longer
solidification phase and thus prevents delamination [8]. The coating quality can be evaluated by the
consideration of single splats on the surface. Regarding the surface roughness, the splat diameter
should be at least two to three times the maximum height deviation. However, due to the
temperature-dependent oxidation rate of copper, the elevated sample temperature has a
disadvantageous effect on the oxidation. To counteract this, an envelope gas nozzle is used. This
generates a stream of forming gas (95 % N2 / 5 % H2) around the particle stream, which displaces
the surrounding oxygen.
Fig. 5: Torch and plant design
The injection of the copper powder depends on two factors: the conveying rate per minute and the
carrier gas stream. If the carrier gas flow is selected too low, powder deposits occur in the feed
lines; overflows of the feeders can occur if the rate is too high. If the gas pressure is set too high,
the particles are injected into the plasma jet at such a high velocity that they are not captured by
the gas stream and thus are not processed. Fig. 6 shows the interaction of the particles during the
powder injection and the influence of the carrier gas flow as well as the conveying rates on the layer
structure. The right injector has deposited the left copper hill and vice versa.
Fig. 6: Powder injection (schematic: feeder 1 and feeder 2 injecting into the plasma jet at the
nozzle, substrate below)
Since the in-situ measurement of the powder quantity is difficult, the required rate is determined in
prior measurements. In order to achieve an optimum and constant conveying rate, usually spherical
powder is used. Because of its surface-to-volume ratio, spherical powder requires a higher thermal
energy input to melt than irregularly shaped powders. Therefore the modification of the plasma gas
by the addition of secondary gases like helium or hydrogen is a common principle. The result is a
strong influence on the torch efficiency even at small additions. By using a defined mixture as
process gas, the coating particles can be influenced directly to fulfill the required coating attributes
better.
Establishing process control
In the previous section, the process itself was discussed. In order to maintain a reliable and
reproducible coating process, several control units can be used. The measurement focus is set on the
following aspects: particle temperature and speed, plasma composition and a constant parameter
set. The primary factor is the adjusted plasma power. The plasma power, or the adjusted current,
regulates the ionization of the plasma within the burner and thus controls the degree of ionization of
the process gas and the temperature of the flame. The particle temperature must be adjusted in such
a way that the particles are melted or superficially melted, but the thermal energy must not be so
high as to cause damage to the substrate upon impact. The flow of the process gas is directly
related. Argon is used for this process because of its relatively low ionization energy and because
no dissociation is necessary. Too low a volumetric flow results in an insufficient ionization of the
plasma and thus weakens the process, whereas at a flow rate selected too high the applied power is
not sufficient for a complete ionization and the plasma is cooled. This deteriorates the degree of
melting of the particles. Fig. 7 shows the intensity of the copper lines of a spectral analysis as a
function of the argon volume flow [9]. The volume flow was reduced in increments of one liter per
minute. The start parameter was 30 SLPM (standard liters per minute) and each setting was held
for 20 seconds. The intensity change with a maximum at 22 SLPM is clearly visible. For this
measurement a Plasus Emicon 1MC was used.
Fig. 7: Copper ionization in relation to the argon volume flow (top: argon flow rate in l/min over
time in s; bottom: spectrometer signal in counts for the Cu and Ar lines, with the Cu intensity
maximum at 22 SLPM)
Furthermore, it can be ascertained whether the injected process media are held at constant levels.
By analyzing the individual spectra, even the composition can be determined and non-process-related
elements can be identified.
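A flow sweep of this kind lends itself to a simple automated evaluation. The following minimal Python sketch only illustrates the idea (the data layout and function name are hypothetical and not part of the Plasus Emicon software):

import numpy as np

def best_argon_flow(flows_slpm, cu_counts):
    """Return the argon flow setpoint with the highest mean Cu-line intensity.

    flows_slpm: one flow setpoint per spectrometer sample (SLPM)
    cu_counts:  Cu-line intensity per sample (counts)
    """
    flows_slpm = np.asarray(flows_slpm)
    cu_counts = np.asarray(cu_counts)
    setpoints = np.unique(flows_slpm)
    # average over each hold window to suppress spectrometer noise
    means = np.array([cu_counts[flows_slpm == f].mean() for f in setpoints])
    return setpoints[np.argmax(means)]

# synthetic log: flow reduced from 30 SLPM in 1-SLPM steps, 20 samples each,
# with an (assumed) intensity maximum at 22 SLPM as in Fig. 7
flows = np.repeat(np.arange(30, 17, -1), 20)
counts = 8000 + 4000 * np.exp(-0.5 * ((flows - 22) / 2.0) ** 2)
print(best_argon_flow(flows, counts))  # -> 22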
The measurement of the particle speed and temperature can be achieved by specific monitoring
systems [11]. By recording the change of specific particle properties in dependence on the adjusted
process parameters, a deeper understanding of the processes within the plasma flame can be achieved.
The Oseir Spraywatch 2S enables the measurement by the use of two-color pyrometry for
temperature and a CCD camera for velocity tracking. The used time-of-flight method calculates the
lengths of the particle traces on the CCD sensor array. By comparison with the known shutter time,
the velocity is determined. A two-color pyrometer uses a set of two different wavelength filters in
order to compensate measurement errors caused by e.g. particle movement. With a measurement
volume of 34 x 27 x 25 mm³, nearly the whole spray plume can be analyzed. Based on the
measurement principle, only light-emitting particles can be detected. Thus, a minimum particle
temperature of at least 1000 °C is required [11].
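The time-of-flight evaluation reduces to dividing the streak length, converted to the object plane, by the known shutter time. The following few lines are a minimal sketch with hypothetical optics parameters, not the Spraywatch vendor code:

def particle_velocity(streak_px, pixel_mm, magnification, shutter_s):
    """Time-of-flight estimate: velocity = streak length / exposure time.

    streak_px     : length of the particle trace on the CCD in pixels
    pixel_mm      : pixel pitch of the sensor in mm
    magnification : optical magnification from object plane to sensor
    shutter_s     : known shutter (exposure) time in seconds
    """
    streak_mm = streak_px * pixel_mm / magnification  # streak in the object plane
    return streak_mm / 1000.0 / shutter_s             # m/s

# hypothetical example: 120 px streak, 10 um pixels, 0.5x optics, 5 us shutter
print(particle_velocity(120, 0.010, 0.5, 5e-6))  # -> 480.0 m/s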
The GTV NIR sensor instead uses an active measurement concept. By the use of an invisible
near-infrared laser source, single particles are irradiated. Based on the reflected radiation, the
temperature can be calculated. The velocity is calculated from two successive pictures and the
known shutter time. Because of the active measurement principle it is possible to measure colder
particles (> 500 °C) than with a Spraywatch system. However, the measurement volume of
approximately 2.5 x 0.1 mm² is significantly smaller.
Summary
Additive coating processes can be used for a large variety of applications. Regarding the field of
electronics production, a reproducible, stable and flexible copper coating process for electric circuits
enables the manufacturing of both high-volume products and cost-attractive prototype or low-volume
production. Regarding the integrated process control, even the field of power electronics with
its higher requirements concerning lifetime and reproducibility can benefit. By tracking the changes
of the particle temperature and speed relative to the nozzle distance, a specific impact temperature
can be set by placing the substrate at a suitable distance. Thus, even temperature-sensitive
materials and structures with fragile surfaces can be coated without the risk of damage.
References
[1] DIN EN 657:2005. Thermisches Spritzen - Begriffe, Einteilung.
[2] M. Boulos, P. Fauchais, J. Heberlein, Thermal Spray Fundamentals: From Powder to Part. New York, Springer Verlag, 2014, pp. 17-110, 383-467.
[3] H. Herman, R. McCune, S. Sampath, Thermal Spray: Current Status and Future Trends. MRS Bulletin 25(7) (2000), 17-25. doi:10.1557/mrs2000.119
[4] M. I. Boulos, P. Fauchais, E. Pfender, Thermal Plasmas: Fundamentals and Applications Volume 1. New York, Springer Verlag, 1994.
[5] U. Stroth, Plasmaphysik: Phänomene, Grundlagen, Anwendungen. Wiesbaden, Vieweg+Teubner Verlag / Springer Fachmedien Wiesbaden GmbH, 2011.
[6] W. Demtröder, Experimentalphysik 2: Elektrizität und Optik. 6., überarb. u. akt. Aufl. Berlin, Heidelberg, Springer, 2013 (Springer-Lehrbuch).
[7] J. Richter, Entwicklung einer Prozessregelung für das atmosphärische Plasmaspritzen zur Kompensation elektrodenverschleißbedingter Effekte. Universitätsverlag Ilmenau, 2014.
[8] P. Fauchais, S. Goutier, M. Vardelle, Understanding of Spray Coating Adhesion Through the Formation of a Single Lamella. J. Therm. Spray Tech. 21 (2012), 522. doi:10.1007/s11666-012-9763-0
[9] G. Bertrand, P. Fauchais, G. Montavon, From Powders to Thermally Sprayed Coatings. J. Therm. Spray Tech. 19 (2010), 56. doi:10.1007/s11666-009-9435-x
[10] National Institute of Standards and Technology: NIST Chemistry WebBook, http://webbook.nist.gov/chemistry/form-ser.html, 2015.
[11] P. Fauchais, A. Vardelle, M. Vardelle, Reliability of plasma-sprayed coatings: monitoring the plasma spray process and improving the quality of coatings. Journal of Physics D: Applied Physics 46 (2013). doi:10.1088/0022-3727/46/22/224016
Methods for the analysis of grinding wheel properties
Fabian Kempf1,a, Abdelhamid Bouabid1,b, Patrick Dzierzawa1,c, Thilo Grove1,d and Berend Denkena1,e
1 Leibniz Universität Hannover, Institut für Fertigungstechnik und Werkzeugmaschinen,
An der Universität 2, 30823 Garbsen, Germany
a kempf@ifw.uni-hannover.de, b bouabid@ifw.uni-hannover.de, c dzierzawa@ifw.uni-hannover.de,
d grove@ifw.uni-hannover.de, e denkena@ifw.uni-hannover.de
Keywords: Grinding Wheel, Sintering, Structural analysis
Abstract
In the simplest case the grinding layer of a grinding wheel consists of bond material, pores
and abrasive grains. Besides these there can be various other components that tune the
properties of the grinding wheel (e.g. pore formation agents, secondary grain). These more
complex systems complicate the comparison between different kinds of grinding wheels. In this
publication we propose new methods for the quantification of mechanical and structural
properties of grinding wheels, as well as methods for the characterisation of wear types. These
new methods aim to facilitate the evaluation of newly developed grinding layers by providing
new information about the individual grain wear as well as characteristic values for the grinding
layer's homogeneity and its ability to accommodate diamond grains.
Introduction
The properties of the grinding layer of a grinding wheel have an important impact on the
grinding operation itself, as they ultimately influence the productivity and the workpiece
quality. There are different methods and properties that are used for the characterisation of
grinding wheels [1]. Some of these are already well defined, like grain size or grain
concentration, while others are used more loosely, like for example the degree of hardness of
the grinding layer [2].
A basic characterisation of grinding wheels is done by dividing the tools by the used abrasive
grain type. This already defines the recommended application of the respective tool: corundum
e.g. for grinding steel, PCBN e.g. for grinding hardened steels, diamond e.g. for hard materials
except steels. The differences in application mainly result from the grains' hardness and shape
[2].
For the characterisation of the whole grinding layer some methods are already available.
These methods can be used for example in quality management to identify faulty
products, like fractured grinding wheels. In this field there is an emphasis on non-destructive
methods, like eigenfrequency analysis or optical measurement methods [3]. In a wider approach
for the design of bronze-bonded grinding wheels it was also shown that critical bond stress tests
were able to evaluate the sintering result [4]. Nevertheless there are still many unknown aspects.
Moreover, some methods are restricted to specific systems. For example the characterisation via
eigenfrequency analysis works well with vitrified bonded grinding wheels, but is not suited for
grinding wheels with a base body.
Another aspect that is considered for the characterisation of grinding wheels is the thermal
properties of the grinding tool itself. Whereas vitrified bonded grinding wheels have a low
thermal conductivity, metallic bonded grinding tools exhibit a higher thermal conductivity.
Together with a high bonding strength, these tools are used for example in grinding glass,
stone and concrete [5], as well as for profile grinding [6]. However, the thermal conductivity is
usually only categorised as high or low.
As with all tools, the characterisation of the grinding wheel's general geometry, like shape
and size, is very important for its application in the manufacturing process. While there are
already well-defined standards for the overall geometry (ISO 603), especially the microscopic
structure, the topography, has been investigated more closely in recent years [7, 8].
In order to gain a deeper understanding of the interaction between mechanical and structural
properties and the grinding behaviour, new methods for the characterisation of grinding wheels
are necessary. In this paper we propose new methods to evaluate the microscopic wear types,
the structural cohesion, and the grain distribution.
Experimental approach
The grinding layer in metallic bonded grinding wheels is usually formed by a sintering
process. In order to investigate the different properties and behaviours of the grinding layer it
is necessary to examine samples that are similar to the grinding layer of a real grinding wheel.
This investigation uses two principles to obtain samples consisting of a real grinding layer.
One way is to manufacture a grinding wheel as a whole and successively remove material until
the desired sample shape and size is obtained. This approach is used to manufacture grinding
pyramids, which are used to carry out scratch tests in order to investigate the behaviour of a
single grain within its bond matrix during the grinding process. This analogy tool is joined to a
body and mounted in a wheel body. The grain protrusion is achieved by sharpening.
The other principle to obtain a sample is the manufacture of a dedicated grinding layer
sample. This approach is necessary when the desired sample size or geometry is not directly
accessible via the grinding layer of a grinding wheel. To do so, the process chain is altered in
such a way that cylindrical samples are obtained. The samples are 22 mm in diameter and 10 mm in
height. The height resembles the width of the grinding layer of the manufactured grinding
wheels. Furthermore, the top and bottom areas of the cylindrical samples correspond to the side
areas of the grinding layer, meaning that in both cases these are the areas where the sintering
pressure is applied.
Figure 1: Connection between process chain of the manufacturing of grinding wheels and
experimental samples.
Because there is a significant difference in shape and size between the grinding wheel
and the grinding layer sample, the pressure applied by the sintering press phydraulic has to be
adjusted. It is calculated from the sintering area Aform of the sintering form, the specific pressure
of the sintered powder pspec, and the effective area of the hydraulic cylinder Acyl.

phydraulic = (pspec · Aform) / Acyl  (1)
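As a worked numerical example of Eq. 1 (the cylinder area is assumed purely for illustration; only the specific pressure of 3500 N/cm² and the 22 mm sample diameter are taken from the experiments described below): one cylindrical sample cavity has Aform = π · (1.1 cm)² ≈ 3.8 cm², so with an assumed effective cylinder area Acyl = 80 cm² the press would have to be set to phydraulic = 3500 N/cm² · 3.8 cm² / 80 cm² ≈ 166 N/cm².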
The process parameters are closely monitored to ensure the comparability of grinding wheel
manufacture and grinding layer samples. In order to recognise an uneven temperature
distribution, the temperature is measured in three different areas.
Experimental setup
Grain distribution and demixing effects
The grinding wheels and the grinding layer samples were sintered with a sintering press DSP
510 from Dr. Fritsch at 560 °C, 3500 N/cm² (specific pressure) and with a holding time of
360 seconds at the maximum temperature. For the bond a bronze with 80 % Cu and 20 % Sn was
chosen. The utilised diamond grains FMD-60 from Van Moppes have a size of 76 μm and a
truncated octahedron shape. This allows for an easier characterisation compared to diamond
grains with an irregular shape, because there are only two distinct crystallographic faces present,
leading to a high similarity between individual grains.
To analyse the influence of the grain concentration on the grain distribution, samples with
10 %, 25 % and 50 % grain concentration were sintered. To investigate the impact of different
handling of the mixed powders, another batch with 25 % grain concentration was mixed and filled
into the form, which was then vibrated with a sieve shaker IRIS FTL-0200 from Filtra Vibratión
for one minute at a power setting. Afterwards the batch was sintered with the same sintering
parameters.
Determination of critical bond stresses
To analyse the influence of the grain concentration on the cohesion of the grinding layer,
fracture tests were used. The samples for these tests were sintered using the same parameters
as before, but the height of the grinding layer samples was reduced to 5 mm to obtain a higher
height-to-width ratio. This was necessary to obtain a more consistent critical bond stress of the
cylindrical sample, and therefore a narrower distribution of the recorded force. Additionally a
batch of samples with 10 % zinc was produced.
Grinding pyramid
The grinding behaviour of grinding tools can be influenced by varying the sintering
parameters [4]. In order to understand these effects on the microscale, single grain grinding
with grinding pyramids is conducted. Compared to a common scratch test, which usually is
performed with a scratch tool carrying a brazed diamond grain, the grinding pyramid utilises the
same bond material as the actual tool, which allows the interaction of the metallic bond and the
grain to be investigated. Therefore the estimation of the real stresses on single grains during
grinding and a more realistic analysis of their effects on the cutting performance and the wear
become possible. During the experimental investigations, grain-specific as well as bond-specific
types of wear are observed. Figure 2b shows the progress of the bond wear, which finally leads
to bond failure. Due to the high mechanical stresses the bond is deformed plastically. This leads
to a shift of the grain within the bond and consequently to a change of the engagement
conditions. In an advanced stage (path 6) the grain is moved deeper into the bond. As a result
there is no sufficient grain protrusion left to remove material.
Figure 2: New analogy tool: grinding pyramid
While the grinding pyramid is a tool which enables the investigation of the wear and the
bond strength in a dynamic process very well, there are other aspects of the grinding layer
which cannot be examined with this tool, for instance influences of different grain
concentrations, or the characterisation of the mechanical properties of the macroscopic grinding
layer, which require specific sample shapes. For this more macroscopic investigation, grinding
layer samples offer a versatile approach.
Critical bond stress
Grinding processes that aim for a high surface quality usually use grinding tools with a high
number of small abrasive grains. In general, a higher grain concentration at the same grain size
leads to a higher surface quality of the workpiece, because the effective chip thickness for each
grain is reduced. However, the increase of the grain concentration reduces the effective volume
of the bond, so there is a limit for this correlation. Very high grain concentrations tend
to cause a structural failure of the bond, and therefore of the grinding layer itself. This becomes
obvious when considering that the bond is diluted by abrasive grain, which is not actively
bound to the bond's surface, but rather enveloped by it. This dilution of the bond causes a
decrease in the stress required to rupture the grinding layer: the critical bond stress.
In order to characterise this effect, the mechanical properties of the grinding layer can be
calculated. For this a three point flexural test is performed and the force necessary for fracturing
is recorded. Because the fracture stress is derived from the force at the rupture point and the
area on which the sample is positioned, it is crucial that the cylindrical grinding layer samples
are located in similar positions regarding the contact points of the testing device. To ensure this,
a cylindrical cut-out was milled into the upper part of the device. This way the critical bond
stress σc can be calculated [4].

σc = (3 · Fz · l) / (2 · d · h²)  (2)
Because the critical bond stress is also affected by the sample's porosity Φ, as the individual
pores can also be regarded as a dilution of the bond, the porosity has to be considered in order to
isolate the influence of the grain concentration.
It is important to note that the grain concentration also correlates with the obtained sample
porosity [4]. Because of this correlation it is necessary to consider the sample porosity in
computing the critical bond stress:

σ* = σc · 1 / (1 − Φ)  (3)
The grinding layer's porosity can be calculated via comparison of the theoretical density ρth
and the actual density ρ measured via Archimedes' principle.

Φ = 1 − ρ / ρth  (4)
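Eqs. 2 to 4 chain together directly. The following short Python sketch (all input values hypothetical) computes the critical bond stress, the porosity from the density comparison, and the porosity-adjusted stress σ*:

def critical_bond_stress(f_z, l, d, h):
    """Eq. 2: flexural stress at fracture, sigma_c = 3*Fz*l / (2*d*h^2)."""
    return 3.0 * f_z * l / (2.0 * d * h ** 2)

def porosity(rho, rho_th):
    """Eq. 4: porosity Phi from measured (Archimedes) and theoretical density."""
    return 1.0 - rho / rho_th

def adjusted_stress(sigma_c, phi):
    """Eq. 3: dilution-corrected critical bond stress, sigma* = sigma_c / (1 - Phi)."""
    return sigma_c / (1.0 - phi)

# hypothetical sample: 400 N fracture load, 15 mm support distance,
# 22 mm diameter, 5 mm height; densities in g/cm^3
sigma_c = critical_bond_stress(f_z=400.0, l=15.0, d=22.0, h=5.0)  # N and mm -> MPa
phi = porosity(rho=8.2, rho_th=8.8)
print(sigma_c, phi, adjusted_stress(sigma_c, phi))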
Figure 3 shows that the adjusted critical bond stress σ* is directly correlated to the grain
concentration. The linear trend starts at the adjusted critical bond stress of a pure bronze sample
and extends to a sample with a grain concentration of approximately 66 %. This concentration
is called the percolation threshold. This effect still holds if the composition of the bond is
slightly altered. The addition of 10 % zinc for example (72 % Cu; 18 % Sn; 10 % Zn) lowers the
critical bond stress slightly, but the correlation trend remains similar. For this composition the
maximum grain concentration lies at approximately 64 %.
In order to validate this finding, a batch of grinding layer samples with 66 % SiC and bronze
bond was sintered. While the obtained samples would hold their principal shape after they were
pulled from the sintering form, they quickly began to erode. The samples were so brittle that
they could not be considered for an actual grinding layer. This confirmed 66 % SiC as the
concentration at which there is no sufficient cohesion inside the grinding layer.
While the concentration at the percolation threshold is not the maximum possible grain
concentration for a grinding wheel, because grinding wheels have to withstand higher stresses
during the grinding process and even during mould removal, the knowledge of this effect allows
the maximum grain concentration to be approximated: bond systems that produce grinding layers
with a higher overall critical bond stress can accommodate more grains before a bond
failure is to be expected. Thus this method can for example be used to judge the effect of
additives for the grinding layer.
Figure 3: Correlation between the adjusted critical bond stress and the grain concentration
Percolation theory
Mathematically, the effect of the increasing grain concentration can be explained by
percolation theory. This theory explains certain aspects of the behaviour of networks.
Historically, percolation theory was first proposed as a model for the spread of disease within
a community [9]; since then it has found many other applications [10]. The basic idea is a
network consisting of nodes, which are connected via bonds. Within this network the individual
nodes or bonds can have different properties, like being occupied or unoccupied.
Starting with an entirely unoccupied network, increasing the occupation probability leads to
more and more occupied nodes. Since this is a statistical model, in the beginning mostly isolated
occupied nodes are observed. At a certain point clusters of occupied nodes are formed.
Eventually one characteristic point is reached where a cluster is formed that spans the entirety
of the network: the network is percolated.
The grinding layer can also be regarded as a network of the bond. In this case the nodes can
either be occupied by bond, grain or pore. Percolation in this system can be described from
different points of view: it can be the increase of grain concentration that forms clusters and
ultimately leads to bond failure. Or it is the increase of the bond concentration that leads to the
formation of clusters of bond, which at a certain degree reach a point where a grinding layer is
formed that keeps its shape.
The fact that clusters are formed within this idealised model, regardless of whether the
grain or the bond concentration is regarded, means that there are areas where a higher cohesion
of the layer neighbours a lower cohesion. As long as the grinding layer's composition
adheres to a statistical distribution, the approach of percolation theory can be applied.
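The spanning-cluster behaviour described above is easy to reproduce numerically. The sketch below is a generic site-percolation model on a square grid (an illustrative toy model, not the authors' code); it estimates how often a randomly occupied grid contains a cluster spanning from top to bottom:

import numpy as np
from scipy import ndimage

def spans(grid):
    """True if one occupied cluster touches both the top and the bottom row."""
    labels, _ = ndimage.label(grid)  # 4-neighbour connectivity by default
    return bool(set(labels[0][labels[0] > 0]) & set(labels[-1][labels[-1] > 0]))

def spanning_fraction(p, size=200, trials=20, seed=0):
    """Fraction of random grids with occupation probability p that percolate."""
    rng = np.random.default_rng(seed)
    return sum(spans(rng.random((size, size)) < p) for _ in range(trials)) / trials

for p in (0.50, 0.55, 0.59, 0.65):
    print(p, spanning_fraction(p))
# the spanning probability jumps near p = 0.593 (2D site percolation);
# in the grinding layer picture this corresponds to the grain concentration
# at which grain clusters connect and the cohesion of the bond is lost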
Grain distribution
As the grain concentration has a direct impact on the resilience of the grinding layer, the
quality of the grain distribution itself has an impact on the local adhesion inside the layer. A
variation of the local grain concentration, especially at high grain concentrations, can cause
points of weakness in the grinding layer and ultimately lead to failure of the grinding tool. In
order to characterise the grain distribution inside the grinding layer, a statistical structure
analysis can be performed. For this a cross section of the grinding layer sample is manufactured
and SEM micrographs are taken. Because the number of backscattered electrons (BSE) increases
with the atomic number, areas with diamond (ZCarbon = 6) yield fewer BSE compared to areas with
bronze (ZCopper = 29; ZTin = 50) and therefore appear significantly darker in the image. The
resulting high contrast can be enhanced by converting the grey scale image into a binary image
(fig. 4). Different images with the same magnification are provided with the same kind of grid
in order to describe the spatial grain distribution. The binary image can then be processed with
the open source software ImageJ to perform an automated particle count per grid. As a result
the number of diamond grains per grid is obtained.
Three batches of grinding layer samples with grain concentrations of 10 %, 25 % and 50 %
were waterjet cut and their cross sections were analysed accordingly. For each batch 60 grids
(42 mm²) were processed. A histogram of the three sets of 60 grids is obtained. Each data set
can be fitted with a Gaussian distribution. The position of the bell curve (μ) correlates with the
grain concentration, while its width (σ) can be regarded as a measure for the homogeneity of
the grain distribution (fig. 4). A homogeneous sample shows a narrow bell curve, whereas an
inhomogeneous sample shows a wide bell. This computer-assisted method allows for an
efficient and precise characterisation of the grain distribution over a comparably large area.
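The particle count per grid is not tied to ImageJ; an equivalent evaluation can be sketched in a few lines of Python (file handling, threshold and grid size here are hypothetical placeholders for a real BSE image):

import numpy as np
from scipy import ndimage

def grains_per_grid(binary_img, grid=(6, 10)):
    """Count connected particles (grains) in each cell of a regular grid."""
    counts = []
    for band in np.array_split(binary_img, grid[0], axis=0):
        for cell in np.array_split(band, grid[1], axis=1):
            _, n = ndimage.label(cell)  # one label per connected grain
            counts.append(n)
    return np.array(counts)

# placeholder for a real, already loaded SEM image: grains appear dark in the
# BSE signal, so thresholding the grey image marks diamond pixels as True
grey = np.random.default_rng(1).random((600, 1000))
binary = grey < 0.02
counts = grains_per_grid(binary)
mu, sigma = counts.mean(), counts.std(ddof=1)  # moments of the fitted bell curve
print(f"position mu = {mu:.1f} grains/grid, width sigma = {sigma:.1f}")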
Figure 4: Correlation between grain distribution and grain concentration
It is important to note that the different methods to obtain a surface which can be analysed
also result in different surface qualities. Breaking the sample usually results in a rather rough
surface, which retains most of the diamonds in either part of the broken sample. Cutting the
sample results in a smoother surface than breaking, but part of the harder grain is pushed into
the bond and even dragged across the surface. Waterjet cutting produces a smooth surface, but
the pressure of the jet causes a relatively high loss of grain. As long as the preparation of the
individual cross sections is done by the same method, the results of the grain distribution
analysis are comparable among each other. This method for example allows the identification
of effects during manufacturing, like demixing and clustering.
Mixing effects
Besides the statistical accumulation of clusters, as proposed by percolation theory,
there are other aspects that influence the homogeneity of the grain distribution. When
considering the practical steps necessary for sintering a sample or a grinding wheel, there are
different points at which the homogeneity of the grain distribution can be altered. Firstly
there is the initial mixing of the bond particles and the abrasive grain, which has to ensure an
even distribution of both components. But even when the powder shows a good distribution,
demixing can occur during the transfer of the powder from the mixer to the sintering form, or
afterwards when the mould is transferred to the sintering press. This is especially true for
powders consisting of components with different grain sizes or density values. In these cases
vibrations can cause a "floating" of the larger or the lighter particles.
The effect of demixing can also be found in the grain distribution analysis. In order to force
the demixing of one batch of grinding layer samples, the sintering form filled with the sintering
powder was vibrated for one minute. During this time there was a visible convection in some
of the areas of the sintering form. In some of the sample chambers this led to an accumulation
of abrasive grain on the surface of the powder filling. This effect of demixing is also shown in
figure 5, where the vibrated and sintered grinding layer sample has the same position in the
histogram, caused by the same grain concentration, but a wider bell, which indicates an
inhomogeneous grain distribution. A high grain concentration also entails a high probability
of local demixing and cluster formation, which leads to weakening or failure of the structure.
This behaviour is clearly shown by the sedimentation lines in the sandblasted grinding
layer sample body with 66 % grain concentration shown in figure 5. Another effect of the
vibration of the sample is a decrease in porosity. This can be attributed to an improved pouring
density because of the vibration. These effects clearly show the impact of the overall handling
of the mixed raw materials before the sintering itself. Therefore a reliable method to
characterise the results of these differences in handling is crucial for isolating their influences
in the investigation of other effects.
Figure 5: Demixing effects. Left: demixing as a result of vibration. Right: influence of
vibration on the pore volume content.
Conclusion
The grinding process, with its complex interaction between workpiece, grain and grinding
layer, constitutes a complicated research field. This set of investigations shows that in order to
systematically investigate interactions between the different components of grinding wheels,
existing methods have to be adapted and improved, and new methods have to be developed.
Only a strong set of methods allows effects to be described accurately and disruptive influences
to be isolated.
The grinding pyramid allows the investigation of the wear mechanisms of different grains
under consideration of the respective bond system. The single grain setup gives the unique
opportunity to isolate and link observations, like forces or marks, to specific grains. While the
pyramids already give very good results regarding the wear of individual grains, multiple grain
pyramids are being developed to further refine this model tool and obtain data that is even closer
to the actual grinding process. For the investigation of metallic bonded grinding wheels, the
production of grinding layer samples is a way to give easy access to the grinding layer itself
without having to produce whole grinding wheels. This makes it possible to perform various
investigations that otherwise would demand a high degree of preparation. For the
characterisation of the bond-grain interaction, the critical bond stress of grinding layer samples
can be measured. Together with the grain concentration and the experimental porosity, a
characteristic maximum grain concentration can be calculated. In order to further describe the
grain-bond interaction, the critical bond stresses of grinding layers with a known stronger
grain-bond attraction are being measured. This should allow not only the identification of an
attractive interaction between grain and bond, but also help to pinpoint parameters that result
in an optimal interaction.
An important aspect of the investigation of the grinding layer is the grain distribution. The
quantification method via the combination of SEM imaging and the computer-assisted grain
count is easy to apply and allows the differentiation of the grain distribution in individual
samples and batches. The quantification gives a value that corresponds with the concentration
(position of the bell function) and the homogeneity of the distribution (width of the bell
function). As expected, higher grain concentrations generally lead to a broader distribution of
the local grain concentration. As high grain concentrations result in a lower critical bond stress,
this means that for higher grain concentrations the mechanical properties in general vary more
than for lower concentrations. Additionally, demixing during the production process can also
influence the grain concentration. However, this effect also shows in the quantification of the
grain distribution. Thus this method is a valuable tool that makes it possible to point out
differences in observed properties that are in fact due to an inhomogeneous grain distribution.
In order to simplify the workflow for the computer-assisted quantification of the grain
distribution, other optical and scanning methods are being investigated to omit the rather
time-consuming and laborious process of SEM investigations.
Figure 6: Conclusion
Outlook
These methods in combination have brought a deeper understanding of the different
interactions within the grinding layer of grinding wheels and ultimately pave the way for the
targeted design of grinding wheels in general. In the future these methods will be further refined
and expanded to accommodate results from other research areas in the manufacturing and
application of grinding wheels. For example, the experiments with the grinding pyramid show
a good representation of the expected behaviour of a single grain within the bond. However, to
accommodate the results of the investigation of the grinding layer samples, a refined grinding
pyramid is being developed that takes the interaction between a small number of grains into
consideration. This way the microscopic interactions between individual grains can be analysed
more closely in an experimental setup that is very close to the actual grinding process. The
investigation of grinding layer samples will be expanded upon to characterise different bond
systems. Here, for example, the analysis and quantification of macroscopic effects connected to
grain adhesion will be examined more closely.
Acknowledgement
The authors thank the “Niedersächsische Ministerium für Wissenschaft und Kultur MWK”
for the organisational and financial support of the project “Grundlagen zur modellbasierten
Auslegung und Herstellung von Schleifscheiben”.
References
[1] Webster J, Tricard M (2004) Innovations in Abrasive Products for Precision Grinding. CIRP Annals – Manufacturing Technology 32(2):597–617.
[2] Klocke F (2005) Fertigungsverfahren, vol. 2. Springer.
[3] Rammerstorfer FG, Hastik F (1974) Der dynamische E-Modul von Schleifkörpern. Werkstatt und Betrieb 107(9):527–533.
[4] Denkena B, Grove T, Bremer I, Behrens L (2016) Design of bronze-bonded grinding wheel properties. CIRP Annals – Manufacturing Technology 65:333–336.
[5] Denkena B, Köhler J, Seiffert F (2011) Machining of Reinforced Concrete Using Grinding Wheels With Defined Grain Pattern. International Journal of Abrasive Technology 4(2):101–116.
[6] Denkena B, Preising D, Woiwode S (2015) Gear Profile Grinding With Metal Bonded CBN Tools. Production Engineering – Research and Development 9:73–77.
[7] Butler DL, Blunt LA, See BK, Webster JA, Stout KJ (2000) The Characterization of Grinding Wheels Using 3D Surface Measurement Techniques. Journal of Materials Processing Technology 127:234–237.
[8] Nguyen A-T, Butler DL (2008) Correlation of Grinding Wheel Topography and Grinding Performance: A Study from a Viewpoint of Three-Dimensional Surface Characterization. Journal of Materials Processing Technology 208:14–23.
[9] Broadbent SR, Hammersley JM (1957) Percolation Processes. Mathematical Proceedings of the Cambridge Philosophical Society 53(3):629–641.
[10] Newman MEJ (2003) The Structure and Function of Complex Networks. SIAM Review 45:167–256.
Influence of cutting edge micro geometry of diamond coated
micro-milling tools while machining graphite
Yves Kuche1,a,d, Julian Polte1,2,b,e and Eckart Uhlmann1,2,c,f
1 Institute for Machine Tools and Factory Management IWF, Technische Universität Berlin,
Pascalstr. 8 – 9, 10587 Berlin
2 Fraunhofer Institute for Production Systems and Design Technology IPK,
Pascalstr. 8 – 9, 10587 Berlin
a kuche@iwf.tu-berlin.de, b julian.polte@iwf.tu-berlin.de, c uhlmann@iwf.tu-berlin.de
d +49 (0)30 / 314 75307, e +49 (0)30 / 39006 433, f +49 (0)30 / 314 23349
Keywords: Micro Milling, Diamond Coating, Wear
Abstract. Micro-milling is an appropriate technology for the manufacturing of micro structured
graphite electrodes for the electrical discharge machining (EDM) process. The abrasive effect of the
graphite grains during the machining process causes high tool wear and a displacement of the cutting
edge. An approach to reduce the tool wear is the diamond coating of micro-milling tools using the
hot filament chemical vapour deposition (HFCVD) process. The high hardness of the diamond
coating improves the tool wear behaviour but influences the cutting edge micro geometry and
consequently the milling process. In this investigation WC-Co micro-milling tools with diameter
D = 0.5 mm were prepared and tools with different cutting edge micro geometries were diamond
coated. In milling experiments with ultra-fine grained graphite the tool wear, the active forces Fa
during the process as well as the surface roughness were analysed. The results show increased active
forces Fa with increasing cutting edge radius rβ as well as improved tool wear behaviour. As a
consequence of the cutting edge radius rβ an increased surface roughness was detected.
Introduction
The die-sinking EDM process is widely used for the production of micro structured tools in die
and mould fabrication. Thereby, electrodes made of graphite provide good thermal and electrical
properties and in consequence a high removal rate and low wear of the electrodes [1, 2]. An
appropriate technology for the manufacturing of micro structured graphite electrodes is the
micro-milling process. Graphite is generally considered as "well machinable". In consequence of low
cutting forces Fc and increased cutting speeds vc, a higher productivity in comparison to copper is
possible. During graphite machining no chip is formed. The graphite grains break out of the material
and cause high abrasive wear on the cutting edges [3]. The cutting edge geometry changes with
increasing wear, and a displacement of the cutting edge SV leads to reduced accuracy of the
machined structures.
One approach is the diamond coating of WC-Co micro-milling tools using HFCVD. After
cleaning processes the substrate is prepared with a chemical etching process for removing cobalt out
of the substrate surface of the cemented carbide. Afterwards the diamond is deposited on the
substrate in a vacuum chamber with the process gases hydrogen H2 and methane CH4. With a hot
wire tube made of tungsten W, tantalum Ta or rhenium Re at temperatures ϑ > 2,000 °C, atomic
hydrogen is formed, and the process and layer behaviour can be influenced by the tube and its
distance to the substrate [4, 5, 6]. The diamond coating as well as the pre-treatment of the
coating with an etching process influence the cutting edge micro geometry. Thereby the cutting edge
radius rβ increases and the chipping of the cutting edge Rs changes.
* Submitted by: M. Sc. Yves Kuche
In the further investigations the influence of the cutting edge micro geometry on the graphite
machining is analysed. Therefore, micro-milling tools were prepared using the immersed tumbling
process, and different cutting edge micro geometries were manufactured. Unprepared and prepared
micro-milling tools were coated with a multilayer diamond coating using HFCVD and applied in
milling experiments with graphite. Results regarding tool wear, active forces, and surface roughness
are shown and discussed.
Micro-milling tools
For the investigations micro-milling tools with tool diameter D = 0.5 mm and two edges were
selected. The tools were made of fine grained cemented tungsten carbide. The cutting edge micro
geometry was measured with an optical measurement device InfiniteFocus from the company
ALICONA IMAGING GMBH, Grambach, Graz, Austria. The cutting edge radii rβ and the maximum
chipping of the cutting edges Rs,max on the major and minor cutting edges of each tool were
measured. Further, the cutting edge micro geometry as well as the tool wear are shown by scanning
electron microscope (SEM) images.
Immersed tumbling is an appropriate process for the cutting edge preparation of micro-milling
tools and the manufacturing of cutting edge radii r ≤ 10 μm [7, 8]. In the experiments a machine
tool DF-3 Tools from the company OTEC PRÄZISIONSFINISH GMBH, Straubenhardt, Germany, was
used. The machine tool has two independent drives. The first one moves the rotor and the second
one rotates three satellites which can be equipped with up to six tool holders. The rotational speed
of the rotor can be selected with 20 rpm ≤ nR ≤ 50 rpm. The rotational speed of the holders can be
selected with 0 rpm ≤ nH ≤ 200 rpm in the same direction and -50 rpm ≤ nH ≤ 0 rpm against the
direction of the rotor. Within the preparation process the workpieces were clamped into the tool
holders and lowered into a container, which is filled with lapping media. The depth of immersion TE
is controlled by a laser sensor. The two drives move the tools in a planetary motion through the
container. Depending on the workpieces and target values, different lapping media made of walnut
shell granulate, silicon carbide (SiC) or aluminium oxide can be used. Fig. 1 shows the machine tool
and the selected process parameters.
Figure 1: DF-3 Tools from the company OTEC PRÄZISIONSFINISH GMBH (machine kinematics: rotor
speed nR, holder speed nH, workpiece rotation nw, depth of immersion TE) and selected process
parameters

Tool group                                     1    2            3
Processing time tB                             -    2 min        1 min
Depth of immersion TE                          -    100 mm       100 mm
Rotational speed of the workpiece holders nH   -    80 rpm       40 rpm
Rotational speed of the rotor nR               -    40 rpm       20 rpm
Rotational direction                           -    synchronous  synchronous
Lapping media                                  -    H 4/400      HSC 1/300
The micro-milling tools were analysed and a few of them were diamond coated. Here, a multilayer
diamond coating CCDia Carbon Speed from the company CEMECON AG, Würselen, Germany, with
a layer thickness sD ≈ 3-4 μm was chosen. After a cleaning and chemical etching process the tools
were coated with a machine tool CC800®/9 DIA. The tools were measured again and SEM images
were taken. Table 1 shows the results of the optical measurement of the minor cutting edges S'.
The results show an increased cutting edge radius rβ as well as a decreased maximum chipping of
the cutting edge Rs,max as a consequence of the cutting edge preparation. Further, the etching and
coating process leads to a cutting edge radius increased by Δrβ ≈ 4.3 μm and an increased chipping
of the cutting edge Rs,max for tool groups 1-2, 2-2 and 3-2. It can be assumed that the etching process
was not so intensive that the cutting edge micro geometry from the preparation step was destroyed.
Table 1: Cutting edge radius rβ and maximum chipping of the cutting edge Rs,max of the minor
cutting edge S'

Tool group   Preparation group   Coating   Cutting edge radius rβ   Max. chipping Rs,max
Gr. 1-1      1                   -         2.1 μm                   0.60 μm
Gr. 1-2      1                   Diamond   6.5 μm                   0.86 μm
Gr. 2-1      2                   -         3.7 μm                   0.20 μm
Gr. 2-2      2                   Diamond   7.4 μm                   0.39 μm
Gr. 3-1      3                   -         5.5 μm                   0.50 μm
Gr. 3-2      3                   Diamond   10.1 μm                  0.99 μm
Experimental procedure
Milling experiments were carried out on a five-axis high precision machine tool PFM 4024-5D from
the company PRIMACON GMBH, Peissenberg, Germany. The machine tool is equipped with a
high frequency spindle Precise MFW 1260 from the company FISCHER AG, Herzogenbuchsee,
Switzerland, with a rotational speed of up to n = 60,000 rpm. The machine tool has an axis
acceleration a = 2 g and a positioning accuracy Pa = 1 μm. The tools were clamped in polygonal tool
holders TRIBOS SPF-RM HSK-E 32 from the company SCHUNK GMBH & CO. KG.,
Lauffen/Neckar, Germany. Ultra-fine grained graphite EDM3 from the company POCO GRAPHITE,
INC., Texas, USA, with grain diameter dG < 5 μm was used. The graphite has a hardness
H = 76 Shore and a compressive strength of 148 MPa [9]. In general the graphite was machined
with a rotational speed n = 31,831 rpm, a cutting speed vc = 50 m/min and a feed per tooth
fz = 25 μm. Further, a width of cut ae = 250 μm and a depth of cut ap = 150 μm were chosen. The
influence on the tool wear and the active forces Fa over the path length lc was analysed and
discussed. Furthermore, the feed per tooth fz and the cutting speed vc were varied with
15 μm ≤ fz ≤ 35 μm and 10 m/min ≤ vc ≤ 90 m/min. The surface roughness as well as the active
force were analysed with respect to the chosen parameters.
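The stated rotational speed is consistent with the cutting speed and the tool diameter via the usual milling kinematics (a consistency check added here, not an additional source value): n = vc / (π · D) = 50 m/min / (π · 0.0005 m) ≈ 31,831 rpm.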
Results and discussion
During the graphite machining the active forces Fa were measured and the tool wear as well as
the surface roughness were analysed. With diamond coated tools a maximum path length lc = 216 m
was machined. After this path length the tools showed a strong displacement of the cutting
edges SV, which was nearly equal to that of uncoated tools after a path length lc = 24 m.
Wear
After different path lengths lc, images with a scanning electron microscope (SEM) were taken.
Different positions and resolutions were used to analyse the wear behaviour. The measurement
results are shown in Fig. 2. The uncoated tools showed high wear after a path length lc = 24 m and a
displacement of the cutting edge SV, which is shown in Fig. 3. Prepared tools of tool group 2-1
showed a slightly smaller flank wear land VB in comparison to unprepared tools. After a path length
of lc = 24 m only the flank wear land VB of diamond coated tools was measured, in consequence of
the high tool wear of the uncoated tools. The results show a slightly higher flank wear land VB with
increasing cutting edge radii rβ. This results from the displacement of the maximum width of flank
wear land VB as a consequence of the cutting edge radius rβ and the changed graphite flow. After a
path length lc = 40 m first failures of the cutting edges were detected.
[Figure: maximum width of flank wear land VBmax in μm over the path length lc in m for tool groups 1-1 (unprepared, uncoated), 2-1 (prepared, uncoated), 1-2 (unprepared, diamond coated), 2-2 (prepared, diamond coated) and 3-2 (prepared, diamond coated); left plot 0-24 m (uncoated tools), right plot 0-80 m (coated tools); workpiece: graphite EDM3; process: micro-milling, PFM 4024-5D]
Figure 2: Maximum width of flank wear land VBmax over the path length lc
Crater wear on the rake face Aγ of the tool corners was observed, which increased with the path length lc. The crater wear led to notch wear on the major cutting edge S in the area near the defined depth of cut ap and on the minor cutting edge S' in the area of the defined feed per tooth fz.
[Figure: SEM images (overview, detail and top view of the major cutting edge S) of tool groups 1-1 and 2-1 after lc = 24 m and of tool groups 1-2, 2-2 and 3-2 after lc = 60 m, showing notch wear on the minor cutting edge S', flank wear, crater wear and notch wear on the major cutting edge S]
Figure 3: SEM images of cutting edges after graphite machining
This is caused by an increased flow of the graphite grains to the flank face in the areas of notch wear. The detected wear is comparable with the wear observed in turning experiments with diamond coated WC-Co inserts by CABRAL [10, 11]. After a path length lc = 60 m, the first failure of the coating on the minor cutting edge S' was detected. The tools of group 3-2 showed less coating wear than the other groups in consequence of a better distribution of the abrasive graphite grains. The cutting edge geometry changes only slightly up to a path length lc = 216 m despite the progressing coating wear, which is shown by the SEM top view images in Fig. 3. After that path length lc, the displacement of the cutting edge SV was comparable to the displacement of the cutting edge SV of uncoated tools.
Active forces
During the milling process, the active forces Fa were measured with a piezoelectric dynamometer MiniDyn 9256C2 from the company KISTLER INSTRUMENTE GMBH, Ostfildern, Germany. The dynamometer has a threshold F(x,y,z) < 2 mN and a measuring range of -250 N ≤ F(x,y,z) ≤ 250 N.
The results are shown in Fig. 4.
[Figure: active force Fa in N over the path length lc (0-24 m for uncoated tools, 0-200 m for diamond coated tools) for the five tool groups; workpiece: graphite EDM3; process: micro-milling, PFM 4024-5D]
Figure 4: Active forces Fa during graphite machining depending on the path length lc
During graphite machining, only small active forces 0.3 N < Fa < 1.2 N were measured. Rounded cutting edges showed slightly higher active forces Fa in comparison to unprepared tools. Furthermore, the active forces Fa increased with higher chipping of the cutting edges Rs. Along the path length lc, the cutting edges were smoothed by the abrasive graphite grains. With increasing crater wear and the failure of the coating, the active forces Fa rise again. In consequence of the improved wear behaviour, an optimised progression of the measured active forces Fa can also be shown for tool group 3-2.
Influence of process parameters
The influence of the cutting speed vc as well as the feed per tooth fz was analysed by slot milling.
The surface roughness was measured with a chromatic white light sensor MicroProf 100 from the
company FRIES RESEARCH & TECHNOLOGY GMBH (FRT), Bergisch Gladbach, Germany. The active
forces Fa were measured with the piezoelectric dynamometer of the type MiniDyn 9256C2.
[Figure: arithmetical mean deviation Ra in μm and active force Fa in N over the feed per tooth fz (15-35 μm) and over the cutting speed vc (10-90 m/min) for the five tool groups; tool: two-flute cemented carbide end mills, uncoated and diamond coated, diameter D = 0.5 mm; process parameters: depth of cut ap = 150 μm, width of cut ae = 500 μm; measurement devices: MicroProf 100 (FRT), MiniDyn 9256C2; process: micro-milling, PFM 4024-5D]
Figure 5: Arithmetical mean deviation Ra and active force Fa while changing feed per tooth fz and cutting speed vc
The results show an increased surface roughness with increasing cutting speed vc as well as with increased feed per tooth fz. The arithmetical mean deviation Ra ranges between 0.6 μm < Ra < 1.5 μm for all tool groups. The sharp uncoated cutting tools of group 1-1 show the lowest roughness of all tool groups. With rising cutting edge radius rβ, the surface roughness of the workpiece increases. The measured surface roughness of the slots which were machined with tools of group 3-2 (rβ = 10.1 μm) shows a mean deviation that is 25 % < ΔRa < 43 % higher in comparison to the surfaces machined with tools of group 1-1. This is a result of the changed cutting conditions: with rounded cutting edges, the crushed zone changes and the fractions of the graphite grains affect the surface [12].
Summary
During graphite machining, the graphite grains lead to high abrasive wear on the cutting tools and reduce the tool life. With diamond coatings, the tool wear can be reduced. In consequence of the coating and the pre-treatment by an etching process, the cutting edge micro geometry is influenced. Especially for micro-milling tools with reduced process parameters like feed per tooth fz, depth of cut ap and width of cut ae, the influence of the cutting edge micro geometry rises.
In this contribution, the influence of the cutting edge micro geometry of diamond coated micro-milling tools while machining graphite was analysed. The cutting edge micro geometry was influenced by cutting edge preparation through immersed tumbling, and the tools were diamond coated by an HFCVD process. The cutting edge micro geometry was measured with an optical measurement device, and increased cutting edge radii rβ after the preparation process as well as after the diamond coating were shown. Furthermore, the chipping of the cutting edge Rs was decreased by the cutting edge preparation and increased again after the diamond coating. A reason could be the influence of the etching process, which removes the cobalt in the substrate surface for an improved adhesive strength of the diamond coating. An ultra-fine grained graphite of the type EDM3 was machined, and the tool wear and active forces Fa as well as the surface roughness were analysed and discussed. The results showed increased active forces Fa, which were in general low in comparison to the machining of steel or brass. After a path length lc = 216 m, the diamond coated tools showed tool wear which was comparable with the tool wear of the uncoated tools after a path length lc = 24 m. The wear of the diamond coated tools is characterised by crater wear, which increased along the path length lc. Notch wear on the minor and major cutting edges was detected in consequence of the crater wear. With notch wear, the coating failed after a path length lc ≈ 40 m. Tools with higher cutting edge radii rβ showed a better wear behaviour than tools with lower cutting edge radii rβ. However, the higher radii rβ lead to a higher surface roughness in consequence of fractions of the graphite grains on the surface.
Further investigations will examine the wear behaviour with changing depth of cut ap as well as the graphite grain concentrations during graphite machining which lead to the notch wear.
Acknowledgements
This article is based on investigations of the research project "Defined cutting edge preparation for process optimization during micro-milling" (UH 148/100-2), which was kindly supported by the German Research Foundation (DFG).
References
[1] Aas, K. L.: Performance of two graphite electrode qualities in EDM of seal slots in a jet engine
turbine vane. Journal of Materials Processing Technology, 149, 2004, p. 152 – 156.
[2] Klocke, F.; Schwade, M.; Klink, A.; Veselovac, D.: Analysis of materials removal rate and
electrode wear in sinking EDM roughing strategies using different graphite grades. 7th CIRP
Conference on Electro Physical and Chemical Machining (ISEM), Procedia CIRP 6, 2013,
p. 163 – 167.
[3] Almeida, F. A.; Sacramento, J.; Oliveira, F. J.; Silva, R. F.: Micro- and nano-crystalline CVD
diamond coated tools in the turning of EDM graphite. Surface & Coating Technology, 203,
2008, p. 271 – 276.
[4] Pierson, H. O.: Handbook of Carbon, Graphite, Diamond and Fullerenes: Properties, Processing
and Applications. New Jersey: NOYES Publications, 1993.
[5] Bobzin, K.: Oberflächentechnik für den Maschinenbau. Weinheim: WILEY-VCH Verlag
GmbH & Co. KGaA. 2013.
[6] Sammler, F.: Steigerung der Nutzungspotentiale von CVD-diamantbeschichteten Werkzeugen.
Berichte aus dem Produktionstechnischen Zentrum Berlin. Hrsg.: Uhlmann, E., Stuttgart:
Fraunhofer IRB, Dissertation, Technische Universität Berlin, 2015.
[7] Uhlmann, E.; Oberschmidt, D.; Kuche, Y.; Löwenstein, A.: Cutting Edge Preparation of Micro
Milling Tools. 6th CIRP International Conference on High Performance Cutting, HPC 2014,
Procedia CIRP 14, 2014, p. 349 – 354.
[8] Löwenstein, A.: Steigerung der Wirtschaftlichkeit beim Mikrofräsen durch Schneidkantenpräparation mittels Tauchgleitläppen. Berichte aus dem Produktionstechnischen Zentrum
Berlin. Hrsg.: Uhlmann, E., Stuttgart: Fraunhofer IRB, Dissertation, Technische Universität
Berlin, 2014.
[9] Poco Graphite Inc.: Poco EDM Graphite Selection Guide. 2010.
[10] Cabral, G.; Gäbler, J.; Lindner, J.; Grácio, J.; Polini, R.: A study of diamond film deposition on
WC-Co inserts for graphite machining: Effectiveness of SiC interlayers prepared by HFCVD.
Diamond & Related Materials, 17, 2008, p. 1008 – 1014.
[11] Cabral, G.; Reis, P.; Polini, R.; Titut, E.; Ali, N.; Davim, J. P.; Grácio, J.: Cutting performance
of time-modulated chemical vapour deposited diamond coated tool inserts during machining
graphite. Diamond & Related Materials, 15, 2006, p. 1753 – 1758.
[12] Zhou, L.; Wang, C.; Qin, Z.: Investigation of Chip Formation Characteristics in Orthogonal
Cutting of Graphite. Materials and Manufacturing Process, 24, 2009, p. 1365 – 1372.
New production technologies of piston pin
with regard to lightweight design
Nadja Missal1,a, Mathias Liewald1 and Alexander Felde1
1Institute for Metal Forming Technology, University of Stuttgart, Holzgartenstraße 17, 70174 Stuttgart, Germany
anadja.missal@ifu.uni-stuttgart.de
Keywords: Piston pin, cold forming, lightweight design, reduction of CO2 emission
Abstract. The optimisation of piston pins with regard to lightweight design has drawn significant
interest in recent years in the automotive industry. Furthermore, this topic is one of the scientific
research topics of the “Lightweight Forging” Research Network, which was founded in Germany in
2015. The project aims at the optimisation of forged components concerning lightweight design and material use, at the development of more efficient steels, and at extending the technological limits of forging processes in different temperature ranges. The objective of this contribution is to present results of scientific research regarding the weight reduction of piston pins and the main requirements that must be fulfilled during operation in combustion engines. During this research, an alternative piston pin geometry with helical stiffeners instead of a cylindrical inner diameter was developed, and the forging strategies were subsequently investigated through numerical analysis using DEFORM. The application of this new helical geometry provides new opportunities to combine increased stiffness and lightweight design within the same component.
Introduction
Automotive engineering has been of economic importance for decades and is currently facing a big challenge regarding the reduction of CO2 emissions. Referring to the car body, innovative methods and materials have been contributing tremendously to automotive lightweight design. Usually, however, progress in the design of lightweight components produced by bulk metal forming only provides an isolated solution with little transferability to other production areas [1].
Following the fundamental idea of the ULSAB (Ultralight Steel Auto Body) project, which was carried out by project groups during the years 1994 to 2002, the project "Lightweight Forging" was initiated in Germany in 2015 targeting similar goals. The initiative, funded by the AiF (German Federation of Industrial Research Associations) and the Federal Ministry for Economic Affairs and Energy, was established to highlight the contributions of the forging industry to the automotive megatrend of lightweight design. The main goal of the research project "Lightweight Forging" is to optimise forged components with regard to lightweight design, material use and new production processes by extending the technological limits during forging and multiple-component processes and by developing more efficient steel grades and heat treatment processes. Ten research institutes from five German federal states and 60 companies, including partners from automotive engineering, steel production, the supplier industry and bulk metal forming technology, are participating in this project, in which relevant issues in forging lightweight components will be addressed.
The present contribution reports on particular research work performed in the work package "Expanding technological horizons when forging in different temperature ranges" launched by the Institute for Metal Forming Technology (IFU) of the University of Stuttgart. The goal is to exploit the lightweight design potential of powertrain and chassis components. The optimisation of piston pins concerning lightweight design is one of the scientific topics within this work package and is the subject of this paper. The research work is carried out in cooperation with one of the 30 largest companies in the automotive supplier industry worldwide, the MAHLE Group, which brings in many years of experience in the field of piston pin production.

* Submitted by: Dipl.-Ing. Nadja Missal et al.
Development of piston pins concerning lightweight design
The piston pin is the connecting part between the piston and the connecting rod (Fig. 1) in a combustion engine. It is exposed to extremely high loads that occur in alternating directions during its lifetime. As a consequence of the high combustion pressure distribution during use, the piston pin is subjected to bending, ovalization and shearing. In order to achieve a satisfying service life, the piston pin must meet the following requirements: a sufficient amount of strength, stiffness and toughness to withstand the loads without damage; high surface hardness to achieve a favourable wear behaviour; high surface quality and shape accuracy for an optimal fit with piston and connecting rod; low weight to keep inertia forces to a minimum [2].
Figure 1: Left: load scheme of piston pin and right: requirements on piston pin
The development as well as the production of piston pins bears unused technological potential for combining stiffness and lightweight design in one component. This results from the fact that an increase in stiffness concerning ovalization can conventionally be achieved only with a greater wall thickness, which always increases the mass. The production of piston pins with internal helical features instead of a constant inner diameter is one possible solution to this problem (Fig. 2). Applying such a helical geometry allows the weight of piston pins to be reduced by up to 8 % while keeping the component strength unchanged according to the given specification.
Figure 2: Investigated dimensions of helical geometry
In [3], it is shown that the reduction of weight depends on different parameters of the helical geometry such as the number of ribs, the helix angle, the ratio of outer to inner diameter, etc. Using ANSYS Workbench, numerical simulations were carried out to investigate the influence of the aforementioned parameters on stiffness and strength. Based on the results of these investigations, which are described in [3], a piston pin with 10 ribs, a helix pitch angle β of 20°, a ratio D2/D1 of 1.1 and a ratio d1/d2 of 1 was selected for the further production of the helical geometry by cold bulk metal forming.
Numerical model setup
The numerical investigations of the forming process of the piston pin with the new helical geometry were conducted using DEFORM™. The DEFORM™ system is an engineering software that enables designers to analyse metal forming processes on the computer prior to the final release of tool design and manufacturing.

Several methods exist to produce such helical geometries by cold bulk metal forming. The hollow forward extrusion of tubular semi-finished products, as described in previous research studies [4-6], is the most commonly adopted technique for producing helical geometries with helix pitch angles up to 25°. The hollow forward extrusion of piston pins with helical geometry was modelled and implemented in the FEM system DEFORM 3D™ (Fig. 3 left). By means of these numerical simulations it was determined that the material not only flows significantly more slowly in the radial direction than in the axial direction, but also flows almost uniformly in the axial direction, causing the destruction of the helical geometry [7].
The research study performed in [8] presents a production strategy for such helical geometries by hollow backward extrusion for workpieces with closed bottoms. The destruction of the helical geometry can be prevented completely because the die displacement has the same direction as the material flow throughout the entire forming process. The fundamental tool concept, which is shown in Fig. 3 right, was adapted and implemented in DEFORM 3D™ for further numerical investigation.
Figure 3: Left: Simulation model of hollow forward extrusion process; right: hollow backward
extrusion for workpieces with bottoms and consequent ejection
Material and simulation data. The numerical investigations were conducted using the steel alloy 16MnCr5. Flow curves were obtained up to a deformation degree of 0.8 by conducting compression tests. The flow stress was linearly extrapolated for higher deformation degrees. The material properties and the standard simulation parameter values are presented in Table 1.
Table 1: Material properties and standard parameter values for the simulation

Material properties                          Standard parameter values
Property            Unit         Value       Parameter               Value
Young's modulus     [N/mm²]      210,000     Coulomb friction        0.07
Tensile strength    [N/mm²]      560         Strain                  0.75
Poisson's ratio     [-]          0.3         Cavity depth            0.7
Mass density        [g/cm³]      7.85        Helix pitch angle β     20°
The hollow backward extrusion process for workpieces with closed ends was simulated using a 360° model. The die was designed considering the tool load and the material flow. In order to investigate the influence of the elastic tool expansion on the part ejection process, the piston pin was modelled elastic-plastically with 100,000 mesh elements, and the simulation was performed using an elastic mandrel with 100,000 mesh elements, a rigid die and a Coulomb friction coefficient of 0.07. Furthermore, the die shoulder angle was varied between 7° and 15° in order to prevent cracks in the bottom area and to achieve a complete filling of the helical geometry.
Results and discussion
Altering the die shoulder angle disclosed significant effects on the filling of the helical geometry and on the radial and axial material flow in the bottom area, as shown in Fig. 4. A larger die shoulder angle combined with high strain values results in a significant axial material flow in the direction opposite to the die displacement (Fig. 4 right) and causes tensile stresses and cracks in the bottom of the cup (Fig. 4 left). This effect can be avoided completely when the die shoulder angle amounts to approx. 3°. However, a tool set with a die shoulder angle of 3° becomes excessively long for such small geometries, and moreover, the bottom has to be mechanically separated from the cylindrical part of the piston pin after the forming process.
Figure 4: Left: Cracks in the bottom of cup; Right: influence of the die angle on material flow
Thus, a die shoulder angle of 7° was selected for further numerical and experimental investigations, because the failure is located in the bottom area and the helical geometry can be filled entirely, as shown in Fig. 5 left. In Fig. 5 right, the die load versus die stroke diagram is shown for the basic parameter setup. Up to a die stroke of 7 mm, a rather linear increase of the die force can be detected. At this point the material starts to flow radially into the helical geometry, which results in a higher die force. Subsequently, the die force increases slightly up to a die stroke of 27 mm. When the end diameter of D2 = 22 mm is reached, the die force stabilises and the forming process continues steadily. The maximum die load throughout the deformation process amounts to 195 kN.
Figure 5: Left: outline filling of the helical geometry; Right: die load versus die stroke curve
Normally, the ejection force can be roughly estimated as 10-20 % of the forming force in the case of cylindrical geometries. The ejection of such helical geometries is more complicated, and in order to investigate the ejection force after the forming process, the ejection process was simulated. After the forming process, the die returns to its starting position and the piston pin is located on the mandrel, as shown in Fig. 6 left. Further numerical investigations were carried out as a continuation of the forming process using the aforementioned standard simulation parameter values. The results of the numerical investigations showed that the piston pin is not only shifted in the axial direction (V1) but also rotates about its axis (V2). Thereby, the piston pin can be ejected without damaging the helical geometry (Fig. 6 left). In Fig. 6 right, the ejector load versus ejector stroke diagram is depicted. At the beginning of the ejection process, the ejector force is at its maximum of around 22.5 kN because of the largest friction surface between the piston pin and the mandrel. The decrease of the friction surface results in a gentle decline of the ejector force.
Figure 6: Left: ejection process and total velocity of piston pin within ejection and right: ejector
load versus ejector stroke curve
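As a rough plausibility check, the rule of thumb quoted above (10-20 % of the forming force) can be applied to the maximum die load of 195 kN; a minimal sketch with the values from the paper:

```python
# Rule-of-thumb estimate of the ejection force for cylindrical geometries:
# F_eject ~ 10-20 % of the forming force (here: max. die load of 195 kN).
F_form_kN = 195.0
F_eject_low, F_eject_high = 0.10 * F_form_kN, 0.20 * F_form_kN
print(f"estimated ejection force: {F_eject_low:.1f} .. {F_eject_high:.1f} kN")
# -> 19.5 .. 39.0 kN; the simulated maximum of ~22.5 kN lies within this
#    range despite the more complicated helical geometry.
```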
Summary
In this paper, fundamental investigations on the material flow and on the forming and ejection forces for the production of a piston pin with a new helical geometry by hollow backward extrusion for workpieces with bottoms were presented. Concerning the material flow, altering the die shoulder angle showed significant effects on the filling of the helical geometry. The filling of the helical geometry can be achieved entirely without failure in the bottom area when the die shoulder angle is set to 7°. Furthermore, numerical investigations of the ejection process showed the axial rotation of the piston pin, yielding a successful ejection without damaging the formed geometry.

Moreover, the application of this new helical geometry presents an opportunity to reduce the weight of piston pins by up to 8 % while maintaining technical properties such as strength and stiffness. The investigated lightweight design for piston pins with helical inner geometries can be transferred to further hollow components of the powertrain and chassis which are exposed to similar loads. By the use of this lightweight construction, a reduction of CO2 emissions can also be achieved.
Outlook
Based on the simulation results in DEFORM™, the process chain of piston pin manufacturing by cold forging and a subsequent structural analysis will be considered to estimate the influence of the initial parameters and forming results on the final properties of the piston pin.

Furthermore, the tool set for hollow backward extrusion for workpieces with bottoms will be designed next based on the results of the numerical investigations. A conventional experimental tool set will be adapted and assembled into a tool rack with an integrated double-action hydraulic cylinder, and a stroke measurement system will be integrated. To investigate the die force, a piezoelectric load cell will be placed between the die and the pressure pads. The additional hydraulic tool axis provides a maximum stroke of 100 mm, and its speed is limited to 100 mm/s [9]. The maximum force of the controllable hydraulic axis amounts to 500 kN (Fig. 7).
Figure 7: Tool set for hollow backward extrusion for workpieces with bottoms
Acknowledgement
The research project “Expanding technological horizons when forging in different temperature
ranges” (IGF-Nr. 18229 N) of the Research Association for Steel Application (FOSTA), Heat
Treatment and Material Engineering Association (AWT), Research Association for Drive
Technology (FVA) and Research Association of Steel Forming (FSV) is supported by the Federal
Ministry of Economic Affairs and Energy through the German Federation of Industrial Research
Associations (AiF) as part of the program for promoting industrial cooperative research (IGF) on the
basis of a decision by the German Bundestag.
References
[1] M. Liewald, A. Felde, Research activities and new developments in bulk metal forming at the
Institute for Metal Forming Technology, New Developments in Forging Technology, Stuttgart,
2015, pp. 1-42.
[2] MAHLE GmbH, Cylinder components: Properties, applications, materials, Springer Vieweg,
2nd ed., (2009) pp. 25-46.
[3] N. Missal, M. Liewald, A. Felde, R. Lochmann and M. Fiderer, Piston pin optimisation with
respect to lightweight design, International Cold Forging Group, 49th Plenary Meeting, ICFG 2016,
Stuttgart, 2016, pp. 157-161.
[4] Regie Nationale Des Usines Renault, Automobiles Peugeot, Improvement in methods of
manufacturing helical gear blanks by cold extrusion process, Patent office London 1 373 547, 1971.
[5] K. Lange, Verfahren und Werkzeuge zum Kalt-, Halbwarm- und Warmquerfließpressen von
Werkstücken mit genauen Verzahnungen aus Stahl, vorzugweise Stahl, Deutsches Patentamt DE 37
18 884 A1, 1988.
[6] H. Gueydan, Outillage pour la fabrication de pieces frittees a surfaces helicoidales, European
Patent Office 0 050 576 B1, 1981.
[7] O. Napierala, N.- B. Khalifa, E. Tekkaya, N. Missal, A. Felde, M. Liewald, Manufacturing of
load-oriented components by cold forging, International Conference on Steels in Cars and Trucks,
Amsterdam-Schiphol, 2017.
[8] A. Schwager, M. Kammerer, K. Siegert, A. Felde, E. Körner, V. Szentmihályi, Cold Forming of
Helically Internal Toothed Wheels, MAT-INFO Werkstoff-Informationsgesellschaft, Frankfurt/M.,
2003, pp. 517-531.
[9] T. Schiemann, M. Liewald, C. Mletzko, J. Wälder, Automatically controlled (cold-) forging
process – equipment and application examples, New Developments in Forging Technology,
Stuttgart, 2015, pp. 257-282.
Contact Conditions in Bevel Gear Grinding
Mareike Solf1,a, Christoph Löpenhaus1,b and Fritz Klocke1,c
1Laboratory for Machine Tools and Production Engineering of RWTH Aachen University, Steinbachstraße 19, 52074 Aachen, Germany
aM.Solf@wzl.rwth-aachen.de, bC.Loepenhaus@wzl.rwth-aachen.de, cF.Klocke@wzl.rwth-aachen.de
Keywords: Gear, Grinding, Force
Abstract. Due to increasing requirements concerning efficiency and noise excitation of gear drives,
the hard fine machining of gears has become a necessary process step for many applications. The
hard fine machining by grinding is an established manufacturing process for different types of gears,
as good geometric and surface quality can be achieved. Grinding of bevel gears is used especially for
the machining of gears with high requirements concerning the gear quality. Recent developments in
the machine tool technology have enabled the grinding of bevel gears to be established not only in
the aerospace industry, but also as a productive manufacturing process for the series production of
automotive rear axle drives.
The knowledge of the cutting force in bevel gear grinding is of essential relevance for the
prediction of the properties of the near surface zone of the workpiece and the load on the grinding
tool. Therefore, the prediction of the grinding force plays an important role in the knowledge-based
process design. For profile and generating grinding of cylindrical gears, models for the contact
between grinding tool and gear, the cutting force and the thermomechanical influence on the
workpiece exist. For the process of bevel gear grinding, there is still a lack of knowledge. To be able
to predict the cutting force, the contact conditions between the grinding wheel and the gear have to
be analysed for bevel gear grinding. By means of an examination of the contact conditions, the cutting
force model according to WERNER will be transferred onto the plunge grinding of bevel gears.
Introduction
Due to high achievable part quality and low surface roughness, grinding is an established
manufacturing process for the machining of different types of gears. Bevel gears are ground in case
of high requirements concerning the geometric quality, for example in vehicle drives [1]. An effective
design of productive grinding processes can be based on the cutting force. Knowing the cutting force
is necessary for the prediction of the thermal influence on the workpiece as well as the load and,
hence, the wear of the grinding tool. Furthermore, the cutting force influences the deformation of the
tool during the process and is therefore relevant for the modelling of the process-machine-interaction.
The focus of previous research on bevel gear grinding has mainly been on cutting processes with
defined cutting edge. The penetration of cutting tool and workpiece has been determined and a force
model was developed for bevel gear cutting [2]. For bevel gear grinding, the effect of dressing and
grinding parameters on the workpiece properties has been analysed [3]. Based on the test results, an
empirical model for the prediction of grinding burn depending on the process parameters of plunge
grinding of bevel gears was developed [3]. A model for the prediction of the cutting force does not
yet exist.
Analyses of profile grinding [4] and generating grinding [5] of cylindrical gears have shown the
relevance of the cutting force calculation for the knowledge based process design. Therefore, the
objective of the investigations in this paper is to check the transferability of existing approaches for
the cutting force calculation onto plunge grinding of bevel gears. For the continuous generating
grinding of cylindrical gears, a penetration calculation between the grinding wheel and the workpiece
was conducted [5]. Based on the approach of WERNER, the course of the cutting force could be calculated from the penetrated geometry [6]. To be able to transfer the calculation method onto
plunge grinding of bevel gears, the contact conditions between the grinding wheel and the bevel gear
have to be analysed.
Analysis of the Contact Conditions in Bevel Gear Grinding
To calculate the cutting force in grinding processes, the contact conditions between the grinding
wheel and the workpiece must be considered. In investigations of generating grinding of helical gears,
it was shown that by means of an exact calculation of the resulting geometric contact conditions, the
cutting force and the thermomechanical influence on the material can be predicted [5]. In order to
enable the prediction of the cutting force also for bevel gears, the contact conditions of plunge
grinding of bevel gears are examined, as shown in Fig. 1.
[Figure: three kinematic variants of plunge grinding — unmodified plunging: full contact on the whole face width, earlier contact on the side with the smaller profile angle φS; vector feed: full contact on the whole face width, equally distributed contact on both flanks; vector feed with Waguri motion: equally distributed contact on both flanks, locally limited contact through the eccentric motion]
Figure 1: Contact Conditions of Plunge Grinding of Bevel Gears
In the unmodified plunging process, the central axis of the grinding wheel equals the feed axis. A
full contact over the entire face width on both tooth flanks occurs. As the plunging depth is increased,
the contact height rises until the grinding wheel engages the entire height of the tooth flank. The
grinding wheel is inclined relative to the workpiece so that the tip plane of the grinding wheel runs
parallel to the tooth root. This results in parallel contact lines on the flanks.
In the production of many bevel gears, grinding wheels with significantly different profile angles φS
on the inner and outer sides are used. In case of a plunging motion along the grinding wheel centre
axis, a premature engagement results on the side with the smaller profile angle, as can be seen on the left in Fig. 1. Depending on the angle difference, this premature engagement leads to different
material removal rates on the two tooth flanks. This results in different loads on the inner and outer
tool flank as well as on the convex and concave workpiece flank [2]. Hence, an uneven tool wear and
different roughness on the convex and concave tooth flank can occur. In order to compensate for the
different contact conditions on the convex and concave flank, the process kinematics of modern bevel
grinding machines can be adapted. Taking account of the profile angle φS, the feed direction of the
grinding wheel relative to the workpiece is modified. In this way, the engagement on both tooth flanks
can take place almost simultaneously. This adapted form of the process kinematics is also referred to
as vector feed (Fig. 1 middle).
Both with and without the application of a vector feed, a permanent contact between the grinding
wheel and both complete tooth flanks occurs. This results in a high risk of grinding burn. For this
reason, an eccentric motion of the grinding wheel is superimposed which is also referred to as Waguri
motion. This eccentric motion leads to a displacement of the grinding wheel perpendicular to its
central axis. The combination of adapted grinding wheel geometry and eccentric motion results in a
locally limited contact. This causes a theoretically linear contact between the flank and the grinding
wheel. Even when grinding with an eccentric motion, from a certain plunging depth on, the grinding
wheel is engaged over the entire tooth height. Theoretically, a point contact results in the tooth-width
direction, which is moved in the direction of the face width through the eccentric motion during the
process.
Calculation of the Cutting Force According to WERNER
A model which is frequently used for the cutting force calculation in grinding processes is the
model according to WERNER. Originally, this model was developed for surface grinding processes.
For continuous generating grinding of cylindrical gears, the transfer of the calculation approach onto
a gear grinding process has already been carried out [5]. In the calculation according to WERNER, the
specific grinding normal force $F_n'$ is calculated based on the contact conditions according to Eq. 1 [6]:

$F_n' = \int_0^{l_g} k \cdot A_{cu}(l)^n \cdot N_{kin}(l)\,\mathrm{d}l$    (1)

with $F_n'$ [N/mm] specific normal force, $l_g$ [mm] contact length, $k$ [N/mm²] specific cutting force, $A_{cu}$ [mm²] chip cross-section, $N_{kin}$ [1/mm²] kinematic number of cutting edges and $n$ [-] exponential coefficient.
The main factors in WERNER's grinding force calculation are the specific cutting force k, the
penetrated chip cross-section Acu and the kinematic number of cutting edges Nkin. The specific cutting
force depends, among other things, on the material and is determined in grinding tests. The chip cross-section results from the penetration of tool and workpiece and is determined perpendicular to
the direction of the cutting speed. The exponential coefficient n is also determined in grinding tests.
This coefficient can have values between 0 < n < 1 and is used to take account of the changing chip
cross-section during the penetration. The kinematic number of cutting edges Nkin is dependent both
on the geometric contact characteristics between tool and workpiece as well as on the grinding wheel
properties. According to WERNER, the kinematic number of cutting edges is calculated according to
Eq. 2 and is proportional to the chip thickness hcu. [6]
$N_{kin} = s \cdot \left(\frac{v_{wp}}{v_c}\right)^{\beta} \cdot \left(\frac{a_e}{d_{eq}}\right)^{\gamma} \propto h_{cu}$    (2)

with $N_{kin}$ [1/mm²] kinematic number of cutting edges, $s$ [1/mm²] grinding tool influence factor, $v_{wp}$ [m/s] workpiece speed, $v_c$ [m/s] cutting speed, $a_e$ [mm] stock, $d_{eq}$ [mm] equivalent tool diameter, $\beta$, $\gamma$ [-] exponential coefficients and $h_{cu}$ [mm] chip thickness.
The force calculation according to WERNER has already been applied for continuous generating grinding of cylindrical gears [5]. The chip cross-sections Acu were determined using a penetration calculation: the volume between the grinding wheel and the workpiece which was penetrated in a discrete time step of the process was determined. Perpendicular to the direction of the cutting speed vc, the penetrated volume was divided into discrete chip cross-sections Acu with the distance Δl. The specific grinding normal force F'n was calculated using the discrete chip cross-sections Acu(i), the kinematic number of cutting edges Nkin(i) and the distance Δl, Eq. 3 [5].
$F_n' = \sum_{i=1}^{m} k \cdot A_{cu}(i)^n \cdot N_{kin}(i) \cdot \Delta l$    (3)

with $F_n'$ [N/mm] specific normal force, $k$ [N/mm²] specific cutting force, $i$ [-] control variable, $A_{cu}$ [mm²] chip cross-section, $N_{kin}$ [1/mm²] kinematic number of cutting edges, $n$ [-] exponential coefficient and $\Delta l$ [mm] distance between the chip cross-sections.
Transfer of the Force Calculation onto Bevel Gear Grinding
In the following, the calculation of the cutting force is transferred onto a bevel gear plunge grinding
process with vector feed. The grinding process of a truck gear with z = 37 teeth and a mean normal
module mn = 9 mm is considered. The tooth root is not ground in this process. In a 3D CAD geometric
penetration calculation, the material removal between the grinding wheel and the workpiece was examined, and an increase in the size of the penetrated cross-section in the direction of the tooth height at the beginning of the process was determined, as shown in Fig. 2.
[Figure: left — machined cross-section Acu in mm²/s over the time t for the convex and the concave flank; partial cross-sections Acu(n)…Acu(n+3) with chip thickness hcu,1 and width b1 on the flank and hcu,2 and width b2 near the root; after the time th the grinding wheel is in contact with nearly the whole height of the flank; right — expected normal force Fn according to WERNER for roughing and finishing; gear data: z = 37, mn = 9 mm; process parameters: vc = 20 m/s, vt(roughing) = 20 mm/min, vt(finishing) = 15 mm/min]
Figure 2: Transfer of the Cutting Force Calculation onto Plunge Grinding of Bevel Gears
Analogously to the procedure for generating grinding of cylindrical gears, the plunge grinding process of bevel gears is divided into discrete time steps. Because of the constant plunge feed rate vt, cutting speed vc and eccentric speed, the engagement conditions of the individual grains are nearly constant over the entire process. Therefore, the consideration of locally machined cross-sections within a discrete time step, instead of single-grain engagements, is regarded as sufficiently accurate. Since the eccentric speed vE is high compared to the plunge feed rate vt, the eccentric motion is neglected in the determination of the overall penetrated chip cross-section Acu in discrete time steps. As a result of the crowning of the grinding wheel profile, a locally limited contact in the height direction between the tool and the workpiece occurs when the grinding wheel plunges into the gap. The engagement width b1 of the grinding wheel with the tooth flank increases continuously during approximately the first 25 % of the process time t. As shown in Fig. 2 on the left, the chip cross-section Acu increases approximately linearly at the beginning of the process. From the point in time th, the grinding wheel is engaged with the flank on the entire profile height apart from the remaining plunging depth.
As shown by HERZHOFF for plunge cutting of bevel gears, the chip thickness hcu,1 on the tooth
flank remains constant for constant plunge feed rates [2]. Through the increasing depth in the direction
of the plunge feed rate vt, the rounded tip of the grinding wheel engages with the tooth flank. As a
result, an area with the width b2 is machined close to the tooth root (Fig. 2, top left), which increases
with rising plunging depth. The increase of the chip width close to the tooth root b2 and the slight
increase in the chip width on the flank b1 cause a continuing rise of the chip cross-section Acu after
the time th. This increase in the chip cross-section is likewise approximately linear, but with a
significantly lower slope than before the time th. The total chip cross-section Acu can be divided into
the partial chip cross-sections Acu,1 and Acu,2. These partial chip cross-sections Acu,i can each be
described as the product of the respective chip thickness hcu,i and the engagement width bi.
Assuming a constant specific cutting force k and a constant distance Δl between the cross-sections, the grinding normal force Fn is proportional to the sum of the chip thicknesses hcu (Eq. 4), based on Eq. 2. By multiplication with the engagement width b, the total grinding normal force Fn can be calculated from the specific grinding normal force F'n.
$F_n(t) \propto \sum h_{cu}^{\,n+1} \cdot b(t)$    (4)

with $F_n$ [N] normal force, $t$ [s] time, $h_{cu}$ [mm] chip thickness, $n$ [-] exponential coefficient and $b$ [mm] engagement width.
Due to the constant plunge feed rate vt, the chip thickness hcu is constant for plunge grinding of bevel gears. Up to the time th, the kinematic number of cutting edges Nkin (Eq. 2) and the chip cross-section Acu remain unchanged because of the constant plunge feed rate vt. Under this assumption, the specific normal force F'n remains constant. As a result of the linear increase in the engagement width, a linear increase in the total normal force Fn, proportional to the course of the chip cross-section Acu as shown in Fig. 2, is expected. The expected course of the cutting force for the process is shown in Fig. 2.

From the time th onwards, the engagement conditions remain nearly constant during the roughing process. Close to the tooth root, however, the cross-section Acu,2 increases as shown in Fig. 2. Since the engagement width on the flank b1 is large compared to the engagement width b2 in the area close to the tooth root (b1 ~ 100·b2), only a slight increase in the total chip cross-section Acu is expected after the time th. Therefore, it must be assumed in the WERNER calculation that the normal force increases only slightly after the time th.
After roughing, a finishing process follows. Since the flank with remaining stock has already been
adapted to the grinding wheel contour, almost instantaneous contact occurs over the entire tooth
height. The material removal rate Qw and, thus, the chip cross-section Acu remain approximately
constant. Therefore, it can be assumed with the force calculation according to WERNER that the cutting
force is approximately constant throughout the entire finishing process.
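The expected qualitative force course for roughing can be sketched from Eq. 4 with a constant chip thickness and the engagement widths described above. In the following minimal sketch, the shape parameters (th at 25 % of the process time, b1 ~ 100·b2) follow the text; everything else is an assumption:

```python
import numpy as np

# Qualitative course of the normal force for roughing: F_n ∝ b(t) for
# constant chip thickness (Eq. 4), with b(t) = b1(t) + b2(t).
t = np.linspace(0.0, 1.0, 200)         # normalised process time
t_h = 0.25                             # flank fully engaged after ~25 % of t
b1 = np.where(t < t_h, t / t_h, 1.0)   # flank engagement width (normalised)
b2 = np.where(t < t_h, 0.0,            # root-area width, roughly 1 % of b1
              0.01 * (t - t_h) / (1.0 - t_h))
F_n = b1 + b2
print(F_n[0], F_n[50], F_n[-1])        # steep linear rise, then nearly constant
```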
Validation of the Transferability of the Cutting Force Calculation
In order to validate the transferability of the WERNER model onto plunge grinding of bevel gears,
measurements of the spindle power from a bevel gear grinding process are analysed. The increase of
the power from the time of engagement onwards can be interpreted as an increase in the cutting power. The cutting power is proportional to the cutting force [7]. For constant process speeds, the course of the cutting force F can therefore be estimated as proportional to the measured course of the total power P. In Fig. 3, the measured course of the spindle power during the rough grinding of
two tooth gaps in the previously considered plunge grinding process is shown. Furthermore, the
course of the power during the finish grinding of two tooth gaps of the gear can be seen.
A roughly linear increase in the power P over the time t can be determined for roughing as well as
for finishing. The same qualitative course was also measured for grinding tests on the same gear
geometry with a modified cutting speed vc and plunge feed rate vt. In addition to the measured spindle
power, the diagrams also show how the cutting force was predicted using the WERNER calculation
model (Eq. 1). In this case, it is assumed that the mechanical work for grinding a gap and, thus, the
area below the power functions is the same.
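One simple way to realise this equal-work comparison is to scale the predicted force course so that its time integral matches that of the measured power signal. A sketch under the assumption of uniform sampling; the function name and discretisation are illustrative, not the authors' implementation:

```python
import numpy as np

def scale_to_equal_area(predicted: np.ndarray, measured: np.ndarray,
                        dt: float) -> np.ndarray:
    """Scale the predicted course so that its time integral (proportional
    to the mechanical work per gap at constant process speeds) equals the
    integral of the measured power signal."""
    return predicted * np.trapz(measured, dx=dt) / np.trapz(predicted, dx=dt)
```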
[Figure: measured spindle power P and expected cutting force F/Fmax in % over the time t for the rough grinding (vc = 20 m/s, vt = 20 mm/min) and the finish grinding (vc = 20 m/s, vt = 15 mm/min) of two tooth gaps; gear: z = 37, mn = 9 mm; grinding wheel: TGX120F12VCF5; the expected force according to WERNER is shown for comparison]
Figure 3: Measured Spindle Power for Plunge Grinding of Bevel Gears
Between the measured course and the calculated function, a significant difference can be
determined. According to the WERNER model, only kinematic and geometric changes in the contact
conditions cause a change in the cutting force, provided that the material and grinding wheel
characteristics stay the same. Changes of kinematics and geometry hardly take place in the present
plunging process. Especially for the finish grinding, the contact conditions remain nearly constant.
Nevertheless, the spindle power continuously increases to a multiple of the initial power. The
comparison of the measurements of the two gaps directly after each other shows that the increase is
not caused by a change in the state of the grinding wheel.
Analysis of the Transferability of the Cutting Force Calculation
The WERNER force calculation model has already been successfully applied in the past for the
continuous generating grinding of cylindrical gears. In Fig. 4 the process kinematics of surface
grinding and continuous generating grinding are shown. The calculation of the force according to
WERNER, which has been developed and validated specifically for this process, is presented for the
surface grinding process. With the exception of the entry and exit area of the grinding wheel, the
cutting force during surface grinding is constant. In addition to the process kinematics on the right
hand side of Fig. 4, a measured cutting force profile is shown for generating grinding. This course
can be modelled with the calculation according to WERNER, based on a penetration calculation, as
shown in Fig. 4 [5].
In contrast to the good consistency for generating grinding, the results of the measurements and
the simulation do not match for plunge grinding of bevel gears. In order to explain this deviation, the
process of gear honing is used, in which linear power increases can also be determined [8]. A common
feature of the plunge grinding of bevel gears and the honing process is the continuous main feed
component in the direction of the tooth height, which results in an infeed normal to the tooth flank.
In the case of generating grinding, the main feed direction is parallel to the central axis of the gear
and thus in the direction of the tooth width.
[Figure: process kinematics and force courses over the time t for surface grinding (force calculated according to WERNER, $F_n' = \int_0^{l_k} k \cdot [A_{cu}(l)]^n \cdot N_{dyn}(l)\,\mathrm{d}l$ [6]), generating grinding (measurement and calculation; z = 46, mn = 4 mm, vc = 35 m/s, vf = 93 mm/min [5]) and gear honing (measurement; z = 33, mn = 4.6 mm, vc = 1.50 m/s, f = 0.06 μm [8])]
Figure 4: Contact Conditions in Surface Grinding, Generating Grinding and Gear Honing
Chip removal in grinding processes takes place in three phases [7]. In the first phase, only elastic
deformation occurs, which is supplemented by plastic deformation in the second phase. Only in the
third phase is the material cut. The characteristics of the three phases of the grinding process are
decisively influenced by the grinding wheel properties, the grinding parameters, the cooling lubricant
and the properties of the machined material [7]. Under unfavourable conditions, a large proportion of
elastic and plastic deformation can occur. In addition to the deformation of the workpiece material,
the deformation of the grinding wheel and the machine tool affect the cutting conditions [9]. The
combination of these effects leads to an increase in the material to be cut in the contact zone with the
feed depth. Thus, the force to be applied increases continuously for the further machining, until the
limiting deformation of the system workpiece-tool-machine tool is reached [9].
Since a continuous feed into the material does not take place during surface grinding and
generating grinding, no steadily increasing pressure is expected in the contact zone. In these
processes, stationary conditions occur after a short time. The model for force calculation according
to WERNER could therefore be transferred. Contrary to this, in the case of plunge grinding of bevel
gears and gear honing, effects occur which cannot yet be described by this model. In order to enable
a prediction of the force, the model has to be adapted to the specific process conditions.
Summary and Outlook
In previous scientific research, the relevance of the cutting force calculation for the prediction of
the thermomechanical influence on the workpiece and the load on the grinding wheel has been shown [5]. To transfer the approach of WERNER onto bevel gear grinding, the contact conditions of
plunge grinding of bevel gears have been analysed. Based on an analysis of the contact conditions,
the cutting force model was adapted onto bevel gear grinding. By means of the transferred model and
the known process parameters as well as contact conditions, the expected course of the force for
roughing and finishing bevel gear grinding in the plunging process was derived. In a measurement of
the spindle power during the plunging, a strong increase of the power throughout the process was
detected even though the geometric and kinematic contact conditions remain nearly constant. This
increase most likely results from a rise of the cutting force. The rise of the cutting force cannot be directly explained by the model of WERNER.
The increase of the cutting force despite nearly constant geometric and kinematic contact conditions
can also be observed in gear honing. The direction of the main feed can be identified as a significant
difference between the processes for which the WERNER model was successfully applied and plunge
grinding of bevel gears as well as gear honing. The infeed in plunge grinding of bevel gears is
continuously directed towards the root of the gap and, therefore, partially perpendicular to the flank.
This feed direction leads to a repeated machining of the same areas of the flanks with increasing
infeed. In case of an insufficient cutting due to a deformation of the workpiece and the grinding tool,
an increase of the amount of material in the contact zone could occur. These effects cannot be described by the WERNER cutting force model.
In the future, the course of the spindle power in plunge grinding of bevel gears has to be analysed for
different gear geometries and process parameters. In this context, the occurrence of a stationary
condition needs to be examined. Furthermore, the correlation between the spindle power and the
components of the cutting force must be validated.
References
[1] Stadtfeld, H.: A Split Happened on the Way to Reliable, Higher-Volume Gear Grinding. In: Gear
Technology, 2005, Nr. September/Oktober
[2] Herzhoff, S.: Werkzeugverschleiß bei mehrflankiger Spanbildung. Diss. RWTH Aachen, 2013
[3] Weßels, N.: Flexibles Kegelradschleifen mit Korund in variantenreicher Serienfertigung. Diss.
RWTH Aachen, 2009
[4] Grinko, S.: Thermo-mechanisches Schädigungsmodell für das (Zahnflanken-) Profilschleifen.
Diss. TU Magdeburg, 2006
[5] Hübner, F.; Klocke, F.; Brecher, C.; Löpenhaus, C.: Development of a Cutting Force Model for
Generating Gear Grinding ASME 2015 International Design Engineering Technical Conferences &
Computers and Information in Engineering Conference. Boston, 2015
[6] Werner, G.: Konzept und technologische Grundlagen zur adaptiven Prozessoptimierung des
Aussenrundschleifens. Habil. RWTH Aachen, 1973
[7] Klocke, F.; König, W.: Fertigungsverfahren 2. Schleifen, Honen, Läppen. Berlin, Heidelberg:
Springer, 2005
[8] Klocke, F.; Brumm, M.; Kampka, M.: Process model for honing larger gears, Cambridge:
Woodhead Publishing an imprint of Elsevier, 2014, S. 118–128
[9] Bock, R.: Schleifkraftmodell für das Außenrund- und Innenrundschleifen. In: Jahrbuch
Schleifen, Honen, Läppen und Polieren, 1987, S. 36–45
Fundamental Investigations of Honing Processes Related to the Material
Removal Mechanisms
Meik Tilger1, a , Tobias Siebrecht1, b , Dirk Biermann1, c
1Institute of Machining Technology, Baroper Straße 303, 44227 Dortmund, Germany
aTilger@isf.de, bSiebrecht@isf.de, cBiermann@isf.de
Keywords: Honing, Material removal, Surface analysis
Abstract. Honing processes are commonly used for the machining of functional surfaces for
tribological applications such as bearings of connecting rods and cylinder liners. Plateau-structured
surfaces featuring a cross-grinding pattern and a high bearing area ratio can be generated by honing
processes. As a simplification, honing is commonly considered as similar to grinding processes
regarding tool-workpiece interactions, although the kinematics, contact relations and especially the
resulting material removal are quite different. Therefore, the material removal mechanisms cannot
be compared with those of a grinding process. The initial surface topography generated by grinding,
turning or milling, has a strong influence on the resulting workpiece topography as well as the
machining time using honing because of the small material removal rate. In order to investigate the
material removal mechanisms arising during honing, a special experimental set-up has been developed. In this context, small honing tools with a total contact area of 5 mm² are used to reduce the entire honing process to single grain engagements in a feed-controlled flat-honing process. The influence of the varying depth of
cut and different rotational speeds on the single grain chip thickness is analysed, focusing on the
material removal mechanisms and process forces and, finally, the generated surface structure.
Ploughing with its plastic deformation and the associated lateral bulging along the width of the
grain as well as micro cutting are observed as dominating material removal mechanisms in the
initial process phase. Additionally, the surface formation progress, especially the increasing number of honing grooves and the surface smoothing, is investigated considering the process time and the
corresponding increase of the accumulated material removal.
Introduction
The consideration of the tribological functions of machined surfaces is steadily gaining importance [1]. Technical surfaces that are designed and produced with a focus on their tribological behaviour are often machined by honing [2]. Honing is therefore the last step in the process chain and is used to generate a surface which improves the tribological contact situation [3]. Honed surfaces have a homogeneous plateau structure including a cross-grinding pattern and a high bearing area ratio [4]. Commonly, honed surfaces are used in bearings of connecting rods and cylinder liners [5]. In the industrial environment, honing can be considered a controllable process whose kinematics and contact relations have a strong influence on the process results. Although the process is well established, the material removal mechanisms have not been investigated in detail so far [6].
According to the current state of the art, the material removal mechanisms in honing processes are described as micro ridging, micro ploughing and micro cutting in analogy to grinding processes, although the ridging and ploughing mechanisms dominate. While micro ridging, caused by high elastic deformation of the material and very low material removal rates, is ineffective, micro ploughing and micro cutting are more effective [7]. Characteristic of micro ploughing is the lateral bulge along the cutting groove caused by elastic and plastic deformation, which is unfavourable for building a plateau with low roughness peaks. The most desirable material removal mechanism is micro cutting, describing the removal of a chip without high elastic deformation and without lateral bulging [8].

* Submitted by: Dipl.-Ing. Meik Tilger
The chip thickness and the cutting edge profile exert the main influence on the material removal. Yegenoglu et al. describe a change in the material removal mechanisms from ridging to ploughing and micro cutting with increasing chip thickness [7]. This interaction leads to the hypothesis that the material removal mechanisms change with varying depth of cut.
The material removal mechanisms and their side effects, such as elastic and plastic deformation and bulge formation, mainly determine the resulting surface topography. In today's research and process development, numerical simulation models help to simplify tool and process design, provided the fundamental interactions are interpreted correctly [9]. A better understanding of the material removal mechanisms is therefore a fundamental prerequisite for an appropriate process simulation, which can be used for efficient tool and process design by reducing experimental tests for honing processes.
Experimental Set-up
The investigations were carried out on a CNC turning machine using honing stones with a width of 1 mm and a length of 5 mm. The experimental set-up represents a feed-controlled flat-honing process on the front side of a cylinder. During the experiments, the process forces were recorded with a Kistler 9121 three-component tool holder dynamometer. Figure 1 gives an overview of the experimental set-up, the honing tool, the resulting honing angles and the occurring process forces.
Figure 1: Experimental set-up a) honing tool; b) process kinematics; c) resulting surfaces; d) typical force measurement
Tool and Workpiece. During the experiments, synthetic-bond cBN honing stones manufactured by the Elgan company with the specification B91-P400-074 were used. The grain size was dK = 90 μm. The flat form of the honing stones was achieved by a surface grinding process. The workpieces, cylinders of 100Cr6, were inductively hardened to 63 ± 2.5 HRC. The workpiece diameters are da = 85 mm (outer) and di = 50 mm (inner). Fig. 1 shows an image of the tool a) and the process kinematics of the experimental set-up b).
Description of the Process Kinematics. The honing kinematics were realized by a defined rotational movement of the workpiece and an oscillating movement of the honing stone. The oscillating strokes were carried out by the y-axis of the CNC turning machine, as shown in Fig. 1 b) by the oscillating velocity vosc. The oscillation amplitude was A = 8 mm, while the rotational speed and the depth of cut were varied stepwise. A typical force measurement is shown in Fig. 1 d). Fig. 1 c) shows the honing pattern on two workpieces honed with different rotational speeds. The force components were measured in the x-, y- and z-directions, whereby the force in z-direction corresponds to the process normal force. Considering the tool-workpiece contact zone, the movements result in overlapping areas, which consist of a number of single honing grooves crossing each other. With increasing rotational speed, the number of overlapping areas increases and the honing angle, defined by two crossing grooves, decreases (compare Fig. 1 c)). To enhance the statistical accuracy and to investigate tool wear, every experiment was carried out five times using the same honing stone without any dressing of the tool.
Analysis of Experiments
The length of the tool track changes with the rotational speed because the process time is constant, determined by the number of strokes nstroke = 6. Combined with the number of operations nhoning,i (i = 1…5) depending on the experimental repetition, a comparison of wear and surface evolution for every single honing stone is difficult. Therefore, a unified parameter is defined, which depends on the rotational speed n, the process time te = 3 s, the arithmetic mean of the tool position regarding the workpiece diameter of the honing track dmid = 75 mm, and the number of operations nhoning,i for the used tool. The contact length equivalent is used to unify the total contact length for each honing stone and thus make tool wear comparable. This contact length equivalent lc,h is calculated by:

lc,h = n · te · π · dmid · nhoning,i    (1)
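As a quick plausibility check of Eq. (1), the following short Python sketch (our addition; the conversion of the rotational speed from 1/min to 1/s is an assumption) evaluates the contact length equivalent for the parameters used in this study:

```python
from math import pi

def contact_length_equivalent(n_rpm: float, t_e: float, d_mid: float, n_honing: int) -> float:
    """Contact length equivalent l_c,h in mm according to Eq. (1).

    n_rpm    : rotational speed in 1/min (converted to 1/s below)
    t_e      : process time per operation in s
    d_mid    : mean diameter of the honing track in mm
    n_honing : number of operations carried out with the same stone
    """
    return (n_rpm / 60.0) * t_e * pi * d_mid * n_honing

# n = 150 1/min, t_e = 3 s, d_mid = 75 mm, four repetitions:
print(contact_length_equivalent(150, 3, 75, 4))
# ~7069 mm, of the same order as the value of about 7096 mm
# reported in the tool wear analysis below
```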
Analysis of Process Forces. Fig. 2 shows the influence of the depth of cut ae and the rotational speed n on the process normal force Fn, including all experiments. Although the normal forces scatter over the five repeated experiments, the coefficient of determination of the regression model is R² ≈ 0.89, which indicates a high model accuracy.
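The paper does not state the functional form of the regression model, so the following Python sketch is only one plausible reconstruction: a least-squares fit of a model that is quadratic in ae and linear in n, evaluated on invented sample data.

```python
import numpy as np

# Invented sample data for illustration: depth of cut a_e [um], rotational
# speed n [1/min] and measured normal force F_n [N].
a_e = np.array([25, 50, 75, 100, 125, 150], dtype=float)
n   = np.array([150, 300, 150, 300, 150, 300], dtype=float)
F_n = np.array([55, 110, 180, 290, 280, 260], dtype=float)

# Design matrix for the assumed model F_n ~ c0 + c1*a_e + c2*n + c3*a_e^2
X = np.column_stack([np.ones_like(a_e), a_e, n, a_e**2])
coeff, *_ = np.linalg.lstsq(X, F_n, rcond=None)

# Coefficient of determination R^2 of the fit
residuals = F_n - X @ coeff
r2 = 1.0 - residuals.var() / F_n.var()
print(coeff, r2)
```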
Figure 2: Regression model for normal forces depending on depth of cut and rotational speed
While the rotational speed has almost no influence on the normal force, an increasing depth of cut leads to an increase in the normal force from about 50 N up to 300 N. The increase in normal force is almost linear up to a depth of cut of approx. ae = 100 μm. At higher depths of cut, the normal force tends to decrease. This effect may be caused by reaching the maximum elasticity of the bonding at depths of cut above ae = 100 μm: once the maximum elastic deformation of the bonding is reached, the abrasive grain can no longer be forced back into the bonding. Additionally, the high depth of cut leads to increased clogging, which results in metal-metal friction and less cutting, whereby the normal force decreases. Therefore, an overloading of the honing stones can be assumed.
Analysis of the Surface Topography. The surface analysis is based on three-dimensional surface measurements performed with a confocal white light microscope. Additional tactile measurements show that the roughness increases through the honing process independent of the process parameters rotational speed and depth of cut. In particular, the mean roughness depth Rz and the reduced peak height Rpk were evaluated because they are affected most. Starting from a topography with a mean roughness depth Rz between 0.6 and 1.6 μm and a reduced peak height Rpk between 0.1 and 0.5 μm, both the mean roughness depth and the reduced peak height increase. Based on this behaviour, it is supposed that lateral bulging occurs along the honing grooves. To verify this hypothesis, the topography of the honed workpieces was analysed. Fig. 3 shows the comparison of two three-dimensional surfaces a), b) and matching profiles of crossing grooves c), d) generated by honing with varying rotational speed.
Figure 3: Comparison of honing grooves – a) and b) 3D-topography; c) and d) profiles depending on the honing grooves
The profiles in Fig. 3 c) and d) illustrate the occurrence of lateral bulging. Independent of the rotational speed n, lateral bulging occurs along the honing grooves. In addition, an equal chip thickness can be determined for both rotational speeds. As mentioned before, a higher rotational speed n generates a smaller cross hatch angle. In Fig. 3 b), the directions of the honing grooves are very close to the grooves induced by the previous turning process, and there are just two dominant grooves. On the horizontal axis, the groove between 150 and 175 μm crosses a smaller groove. In comparison, Fig. 3 a) shows a larger cross hatch angle built by two dominant grooves crossing each other. Considering the material removal mechanisms based on these resulting topographies, with the lateral bulging along the honing grooves, it can be deduced that ploughing with its plastic deformation is one main effect during the honing process.
In Fig. 3 c) and d), profile series consisting of several profiles between profile I and profile II are plotted. These profile series give an overview of the profile development of single honing grooves towards their crossing section. Profile I depicts the front profile with regard to the axis direction, whereas profile II shows the last profile of the series. Both profile series show lateral bulging along single grooves. Considering the width of the grooves, the lateral bulging seems to decrease within the crossing section of the two grooves. The width of the grooves in the crossing sections increases, while the chip thickness remains the same. Even when varying the depth of cut, the chip thickness does not differ. This effect is illustrated in Fig. 4, showing the 3D-topography a) and a profile series b) for a depth of cut of ae = 125 μm.
Figure 4: Resulting surface for the depth of cut of ae = 125 μm a) 3D-topography; b) profiles depending on the honing grooves
In comparison with the surfaces produced by a honing process with a depth of cut of ae = 75 μm, Fig. 4 shows a larger number of deep grooves in the honed area for the higher depth of cut, but without greater groove depth or width. The higher depth of cut may activate an increasing number of grains, because grains with lower protrusion also reach the critical grain engagement depth Tμ. The groove depth is limited by the elasticity of the tool bonding, the grain protrusion and the clogging, which restrict the resulting chip thickness of the grains. Regarding the clogging of the tool, the wear mechanisms are analysed as well.
Analysis of Tool Wear. Tool wear was investigated by qualitative light microscopic analysis of each honing stone before and after the honing process. Due to the repeated experiments, each honing stone underwent five process steps with a similar contact length depending on the rotational speed n. To compare the tool wear for different rotational speeds, the contact length equivalent lc,h is used to standardise the total contact length for each honing stone. The areas that appear bright show the clogging, whereas the bonding material of the honing stone is dark. Fig. 5 a) shows the comparison of five honing stones for different process parameters after one experiment. In part b), the tool wear evolution over five experiments is described for a depth of cut of ae = 25 μm and a rotational speed of n = 150 1/min. Section c) shows the microscopic image of some microchips produced during these honing experiments.
Due to the increasing normal force caused by a rising depth of cut ae, the surface pressure increases as well. The higher surface pressure results in a higher load on the honing stones and on each single grain. The overload leads to high adhesive wear in the form of clogging on the honing stones. This clogging increases with a higher depth of cut. The rotational speed influences the tool wear in the same way, whereby a higher rotational speed causes a longer contact length lc,h and, thus, a higher volume of removed material.
Figure 5: Tool wear - a) depending on rotational speed and depth of cut; b) depending on the process time; c) microchips
Figure 5 b) shows that the wear varies somewhat over the five repetitions carried out with one tool. The clogging increases degressively during the first four process repetitions, up to a contact length equivalent of lc,h = 7096 mm, where it reaches its maximum height. During the following honing process, the clogging is partly removed. This high adhesive wear leads to higher friction within the process and reduces the total depth of cut, since the reduced grain protrusion decreases the penetration depth of the grains. The tool wear therefore influences the process result by restricting the cutting efficiency.
Based on these analyses, it can be assumed that the clogging of the tool causes high friction between tool and workpiece during the honing process. In addition, the resulting surface topography shows pronounced lateral bulging along the honing grooves. This surface structure indicates that micro ridging and micro ploughing are the dominant material removal mechanisms during honing. In contrast to this observation, some microchips are also produced, as depicted in Figure 5 c). These microchips have different forms, which result from different grain shapes and varying cutting angles caused by the undefined orientation of the grains. The microchips indicate that, in addition to ploughing, micro cutting is another material removal mechanism occurring during honing.
Summary and Outlook
Within these fundamental investigations, the influences of the depth of cut and the rotational speed on the normal force were analysed by establishing a regression model with a coefficient of determination of almost 0.9. An increasing depth of cut causes an increasing normal force up to a maximum of nearly Fn = 300 N. A further increase in the depth of cut leads to a decrease of the normal force, which suggests an influence of the clogging of the tool. This clogging, and in particular its dependence on the process parameters, was confirmed by means of microscopic images of the honing stones.
The investigation of the machined surfaces showed increased roughness parameters after honing as well as honing grooves with a maximum depth of 2 μm regardless of the depth of cut and the rotational speed. The apparently constant chip thickness can be explained by the elastic bonding, which allows the grains to be forced back. In addition, the identified clogging reduces the grain protrusion, so the resulting chip thickness decreases. The increased roughness can be explained by the occurring lateral bulging. This indicates high plastic deformation during material removal and thus ploughing as a material removal mechanism. Furthermore, microchips could be identified, which demonstrates micro cutting as an additional material removal mechanism. The simultaneous occurrence of two different material removal mechanisms can be explained by the varying positions and shapes of the grains engaging the workpiece material.
In order to reduce the roughness, especially the roughness peaks, and to produce a typical honed surface, the process time has to be increased. To reduce the influence of the tool wear, a possibility for conditioning has to be implemented into the experimental set-up. With this new set-up, the parameters depth of cut and rotational speed should be kept constant, focusing on the incrementally developing surface and the influence of single grains and tool wear. Furthermore, a reduction of the grain size and a larger tool geometry are considered to be effective. The experimental results will be incorporated into a simulation model to simulate honing processes considering the material removal mechanisms for force- and feed-controlled honing processes.
Acknowledgement
The investigations are funded by the Deutsche Forschungsgemeinschaft (DFG) within the project "Basic Experimental and Simulation-Supported Analysis of Surface Structuring for Short- and Long-Stroke Honing" under the funding code DFG BI 498/40-3. Furthermore, the authors want to express their thanks to the Elgan company for its support regarding the supply of honing stones.
References
[1] A. A. G. Bruzzone, H. L. Costa, P. M. Lonardo and D. A. Lucca, Advances in engineered surfaces for functional performance, Annals of the CIRP 57 (2008) 750–769.
[2] T. Abeln, G. Flores, U. Klink, Innovative Fertigungsverfahren zur wirtschaftlichen Feinstbearbeitung von Zylinderbohrungen, Stuttgarter Impulse - Fertigungstechnik für die Zukunft, 2008, pp. 333-350.
[3] G. Flores, Grundlagen und Anwendungen des Honens, Vulkan Verlag, Essen, 1992.
[4] G. Haasis, Honing technology 1992 - improvements and new procedures, International Honing
Clinic Conference Separate Papers, Society of Manufacturing Engineers, 1992, pp.1-22.
[5] K. U. Paffrath, Untersuchungen zum kraftgeregelten Langhubhonen auf multifunktionalen
Bearbeitungszentren, Vulkan Verlag, Essen, 2010.
[6] D. Biermann, R. Joliet, M. Kansteiner, Experimentelle und simulative Untersuchung des
Langhubhonens Teil 2, dihw – Diamant Hochleistungswerkzeuge, 6 (2014) 36-39.
[7] K. Martin, K. Yegenoglu, HSG-Technologie: Handbuch zur praktischen Anwendung, first ed.,
Guehring-Automation, Stetten a.k.M.-Frohnstetten, 1992.
[8] I. D. Marinescu, W. B. Rowe, B. Dimitrov, H. Ohmori, Tribology of Abrasive Machining
Processes, second ed., William Andrew Publishing, Oxford, 2013.
[9] R. Joliet, M. Kansteiner, A High Resolution Surface Model for the Simulation of Honing Processes, Advanced Materials Research 769 (2013) 69–76.
Fine positioning system for large components
Maik Bergmeier1,a and Berend Denkena1,b
1Leibniz University Hannover, An der Universität 2, 30823 Garbsen, Germany
abergmeier@ifw.uni-hannover.de, bdenkena@ifw.uni-hannover.de
Keywords: Workpiece, Productivity, Precision
Abstract. Prior to the profile grinding of large gear wheels with a weight of several tons, a very precise alignment of the workpiece is required in order to meet the narrow tolerances. However, three-axis grinding machines are not able to correct alignment errors in four axes, and the manual alignment process is a time-intensive procedure. Automating the manual process offers the potential to increase the efficiency of the process enormously. For the positioning of large and heavy components of up to 4.8 t, a fine positioning system was developed. The system is suitable for micrometer-precision positioning in four degrees of freedom. Two linear axes are realized as a conventional cross guide. Two rotational degrees of freedom are provided by a circular membrane, which is used as a flexure joint for the wobble unit. The presented prototype is able to correct eccentric errors of ±2.5 mm and wobble errors of ±0.1° prior to the process. Finally, the system was validated in a profile grinding process of a 4 t wind turbine gear. The results show high stiffness values and qualify the device for the use in profile grinding machines. The total tooth pitch deviation was measured as 10 μm (gear diameter: 1146 mm), which demonstrates the uniformity of the rotational stiffness.
Introduction
Machining of large parts with a weight of several tons, such as gears for marine, mining or wind gear boxes, requires time-consuming manual fine positioning. The manual fine positioning is necessary in order to meet the narrow tolerances [1]. Automating this manual process offers the potential to reduce non-productive times and to increase the productivity of the machine tool. However, some alignment errors, e.g. wobble errors, cannot be compensated by the kinematics of the machine tool, which is why conventional three-axis machine tools require additional positioning axes with high accuracy [2]. These additional positioning axes can be installed on the tool side [3] as well as on the workpiece side [1] to compensate positioning errors. Olaiz et al. presented an adaptive fixture for the accurate positioning of planet carriers in the wind-power sector [1]. The fixture was driven by electric motors and able to center workpieces within a 10 μm tolerance. PI offers a high-load hexapod with six degrees of freedom [4]. The design is able to carry loads of 1 t in a horizontal table position with a repeatability of ±0.5 μm and has an overall height of 663.5 mm. Yang et al. developed an ultra-precision system with two axes and a large 300 x 300 mm workspace [5]. The drive system consists of two combined drives: a linear drive for macro positioning and PZT-driven flexure hinges for micro positioning. Yang achieved a position accuracy of less than 3 μm at a velocity of 500 mm/s. In order to move workpieces weighing tons with micrometer precision over a length of several millimetres, positioning systems are required which provide sufficient stiffness and precision as well as high forces. Furthermore, a compact design is necessary to restrict the workspace as little as possible. A system that aligns heavy workpieces weighing several tons in four degrees of freedom with micrometer accuracy does not yet exist.
This paper focuses on a micro positioning system that provides significant progress in the automated set-up process in the profile grinding of large and heavy components. The system consists of a mechanical stage, which allows movements in two translational degrees of freedom, and a wobble stage, which allows movements in two rotational degrees of freedom to compensate eccentric and wobble errors of the workpiece.

* Submitted by: Maik Bergmeier
Piezo hydraulic pump design
The table was especially designed to support the set-up process for the profile grinding of gear wheels. Hydraulic cylinders provide enough power to actuate weights of up to 4 t. Two piezo hydraulic pumps were used to reconcile the conflicting requirements of high forces and high accuracy. Piezo pumps consist of piezo stacks linked to a flexible membrane and a pump chamber with check valves or micro valves. Piezo stacks actuated at high frequency with a stroke length of 100 μm supply discrete quantities of fluid and therefore enable a very precise actuation of hydraulic cylinders. Furthermore, these pumps provide a power-to-volume ratio about 100-1000 times greater than electrostatic counterparts [6]. In the following, the structure of the piezo pump and the mechanical positioning stage is described, as well as the results of the experimental investigations in practice.
The piezo hydraulic pump is based on a pump design by Denkena [7]. In order to suit industrial needs, modifications of the hydraulic pump design were necessary. Due to the high voltages of up to 1000 V of the piezo actuator, use under industrial conditions was not possible because of safety requirements. Furthermore, the old design used just one large membrane both for the pumping chamber and for sealing the chamber with two fast-switching piezo valves. In the area of the piezoelectric valves, the membrane repeatedly tore. To solve the problem of membrane endurance, the membrane is no longer used for sealing the entry and exit of the chamber. The piezo-driven fast-switching valves were replaced by passive check valves. Additionally, this saves two costly piezo actuators and reduces the dimensions of the pump unit. The modified pump is shown in Figure 1. The check valves are placed as close as possible to the pump chamber to keep the pumping chamber volume small.
Figure 1: Piezo hydraulic pump with check valves
Another modification of the pump was the substitution of the high-voltage actuator with a low-voltage actuator. The performance of low-voltage actuators is limited compared to high-voltage actuators. The chosen Piezomechanik actuator operates in the voltage range of U = 0-150 V with a stroke of s = 100 μm and a stiffness of c = 80 N/μm. Due to the low voltages, powerful amplifiers are necessary which can supply high currents at a frequency of f = 30 Hz. For this purpose, a Piezomechanik LE 150/100 EBW was chosen, which provides a maximum voltage of 150 V with an average current of I = 350 mA and a maximum current of I = 1200 mA. The actuator delivers sufficient power up to a pump frequency of f = 40 Hz. Figure 2 shows the result with the low-voltage actuator and the experimental setup.
The pump builds up a differential pressure between entry and exit of Δp = 7 MPa. The pressure was measured at a voltage of U = 150 V, which corresponds to the maximum actuator stroke of s = 100 μm. A rapid pressure build-up occurs in the first pumping cycles until an asymptotic behaviour starts at a differential pressure of 7 MPa. The resulting piston movement with a double
rod cylinder is shown in Figure 2b. It can be seen that the piston moves in the pump cycle as well as in the suction cycle, resulting in an almost constant movement. The pump conveys a discrete amount of fluid per stroke at a defined voltage and pump frequency (0.3 mL/s at 30 Hz). Given a piston surface of Apiston = 765.8 mm², the step size is around 40 μm. Due to the passive check valves, the pump is only able to pump in one direction. Therefore, two additional entry and exit connections with check valves were added to the chamber. Together with the 3/2 on-off valves, these four valves enable pumping in two directions by switching between the valves.
Figure 2: Experimental setup, a) maximum difference pressure, b) piston movement
In order to achieve a high accuracy within the μm range, a fixed step size of 40 μm is not sufficient. Therefore, a position control based on a characteristic curve was chosen and tested for accuracy (Figure 3).
Figure 3: Characteristic curve control, a) characteristic curve, b) positioning process
The test setup for the characteristic curve controller stayed the same as shown in Figure 2. A characteristic curve controller, which is parameterized according to the low-voltage actuator, provides the voltage amplitude as a function of the control difference (Figure 3a). The test results in Figure 3b show that the compensation of a predetermined control difference is completed without overshooting. The target position was reliably reached within a predetermined tolerance field of ±0.25 μm and therefore meets the accuracy requirement of 3 μm under the same setup as shown in Figure 2.
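To illustrate how such a characteristic-curve control can work, the following Python sketch maps the remaining control difference to a voltage amplitude via an assumed characteristic curve and pumps until the ±0.25 μm tolerance is met. The curve values, function names and the linear plant model are our assumptions, not the authors' implementation.

```python
import numpy as np

# Assumed characteristic curve: piston step size [um] produced by one pump
# stroke as a function of voltage amplitude [V]; values are illustrative only.
voltages   = np.array([  0.0,  30.0,  60.0,  90.0, 120.0, 150.0])
step_sizes = np.array([  0.0,   2.0,   8.0,  18.0,  30.0,  40.0])

def voltage_for(error_um: float) -> float:
    """Invert the characteristic curve: pick the amplitude whose step size
    best matches the remaining control difference (capped at 150 V)."""
    return float(np.interp(abs(error_um), step_sizes, voltages))

def position_control(target_um: float, tol_um: float = 0.25,
                     max_strokes: int = 1000) -> float:
    position = 0.0
    for _ in range(max_strokes):
        error = target_um - position
        if abs(error) <= tol_um:
            break
        u = voltage_for(error)
        step = float(np.interp(u, voltages, step_sizes))  # stroke produced by u
        position += np.copysign(step, error)  # pumping direction via the valves
    return position

print(position_control(100.0))  # settles within +/-0.25 um of the target
```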
Four axes precision table
To align workpieces in four degrees of freedom for the profile grinding process, a fine positioning system based on two mechanical movement stages was built. The system consists of an eccentric stage, realized with two linear axes as a conventional ball rail system in cross construction. On top, a wobble stage is attached, consisting of a circular membrane flexure joint with a thickness of 2 mm. Figure 4 shows the schematic design and the CAD rendering of the fine positioning system. A hole in the middle enables the operator to adjust workpieces in the z-axis and simultaneously reduces the combined height of workpiece and system. Due to the chosen structure of the wobble stage, with the cylinders mounted below the adapter panel, the construction height was kept low to meet the requirements. Each of the two stages is driven by two hydraulic cylinders. Therefore, a hydraulic preload of the system of at least 3 MPa is necessary to ensure correct functioning of the system.
Since the cylinders in the eccentric unit, acting in x- and y-direction, only have to overcome the friction in the guides, a smaller piston surface area of 1,650 mm² is sufficient. With a preload pressure in the system of 8 MPa, this results in an actuator force of 13,200 N, which is enough for the eccentric stage. In the wobble unit, on the other hand, the workpiece weight is carried entirely by the hydraulic cylinders. Therefore, the piston area was increased to 5,980 mm², resulting in a force of 47,120 N and limiting the maximum workpiece weight to < 4.8 t.
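The actuator forces above follow directly from F = p·A, with 1 MPa acting on 1 mm² yielding 1 N. A minimal sketch reproducing the figures; note that the stated 47,120 N corresponds to a slightly smaller effective pressure or area than the nominal 8 MPa on 5,980 mm²:

```python
def piston_force_n(pressure_mpa: float, area_mm2: float) -> float:
    """Hydraulic actuator force in N: F = p * A (1 MPa * 1 mm^2 = 1 N)."""
    return pressure_mpa * area_mm2

print(piston_force_n(8.0, 1650.0))  # 13200.0 N, matches the eccentric stage
print(piston_force_n(8.0, 5980.0))  # 47840.0 N; the paper states 47,120 N,
# i.e. a slightly smaller effective pressure or area is assumed there
```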
Figure 4: Fine positioning system for large workpieces
Based on the mechanical design and the piston stroke length of the cylinders, errors of up to ±3 mm in x-/y-direction and ±0.072° about the ψ-/φ-axes can be compensated. Since the hydraulic cylinders are not able to capture the piston position themselves, gauging sensors mounted parallel to the pistons on the mechanical structure were used for position measurement. In order to ensure that there is no plastic deformation of the membrane due to the movement of the wobble cylinders, an analysis was carried out with three times the estimated piston travel of 3 mm. The analysis showed an equivalent stress of ~240 MPa and therefore sufficient safety against plastic deformation of the membrane (elastic limit: 310 MPa). The simulated axial stiffness of 2.78 N/μm represents a very low resistance for the hydraulic cylinders. Since the mounting panel only rests on the support points and is not further fixed, it is held in position by the membrane only. The wobble unit was tested using a scaled prototype.
For cost reasons, only two pumps were used for the four hydraulic cylinders. Two additional 3/2 on-off valves were installed in each circuit to switch between the cylinders. Both hydraulic circuits are hydraulically preloaded by screw pumps. The connection of two cylinders to one pump influences the behavior of the hydraulic system: switching between the cylinders leads to a position jump of the piston due to the pressure compensation between the piezo pump and the now active cylinder. The now passive cylinder remains unaffected in its position. Therefore, it is still possible to operate two cylinders on one pump despite the pressure equalization. After the alignment process, the pistons are clamped by integrated clamping sleeves at 50 MPa to make sure that the pistons stay in place and to provide sufficient stiffness during the grinding process. With active clamping sleeves, it is no longer possible to move the pistons. Due to the small pump dimensions of 74x74x190 mm³, both hydraulic circuits, including the 3/2 on-off valves, were integrated into the mechanical setup. A Raspberry Pi with a self-developed expansion card controls the piezo pumps and the position and pressure sensors.
Measuring concept
The measurement of the actual workpiece position takes place in the grinding machine. Two gauging sensors are positioned above and below the gearwheel, directly on the shaft. The machine operator supplies the measurement system with the sensor heights hi, which can be adjusted to fit different workpieces. During the measurement process, the whole system with the workpiece rotates by 360°. Afterwards, the actual workpiece position is calculated from the measured sine-wave signals in comparison to the rotary axis of the machine tool (Figure 5).
Figure 5: Measurement Concept
The setpoint values for the microsystem are calculated based on the workpiece position and the
geometry of the mechanical system.
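The paper does not detail the signal evaluation. A common way to extract the eccentricity (and, with two sensors, the wobble) from such a runout signal is a first-order least-squares (Fourier) fit, sketched below in Python; the function and variable names are our assumptions.

```python
import numpy as np

def eccentricity_from_runout(theta_rad, signal_um):
    """Least-squares fit of s(theta) = m + x*cos(theta) + y*sin(theta).

    theta_rad: spindle angles over one full revolution (rad)
    signal_um: gauging sensor reading at each angle (um)
    Returns the eccentricity amplitude e = hypot(x, y) and its phase.
    """
    X = np.column_stack([np.ones_like(theta_rad),
                         np.cos(theta_rad), np.sin(theta_rad)])
    m, x, y = np.linalg.lstsq(X, signal_um, rcond=None)[0]
    return np.hypot(x, y), np.arctan2(y, x)

# With two sensors at heights h1 < h2 on the shaft, the wobble angle can be
# estimated from the difference of the two fitted eccentricity vectors divided
# by (h2 - h1), using a small-angle approximation.
```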
Experimental results
To evaluate the positioning system and the measurement concept, a reference workpiece, which is used to calibrate the grinding machines, was placed on the table (Figure 6). For the displacement measurement, the whole system was mounted on a rotating table. In Figure 6, the calculated setpoint values of all cylinders are shown for different iteration steps. The iteration starts with the initial value. It can be seen that the required accuracy of less than 10 μm is achieved in the third iteration. As mentioned before, the mounting panel merely rests on the passive and active support points. The calculation of the setpoint values assumes that there is no movement between the contact points of the mounting panel and the pistons of the cylinders. A shifting of the mounting panel in x- and y-direction during the piston movement due to the membrane leads to deviations between the actual and the calculated position.
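This measure-correct cycle can be summarized as a simple iteration loop. The sketch below is our schematic rendering: the interfaces and names are placeholders, and the rigid-geometry assumption mirrors the simplification described above.

```python
import numpy as np

def align(measure, correct, tol_um=10.0, max_iter=5):
    """Iterative alignment as described above (schematic sketch).

    measure: callable returning the current error vector (x, y, psi, phi)
             from the in-machine gauging measurement
    correct: callable commanding the cylinders with setpoints derived from
             the error and the (assumed rigid) system geometry
    Returns the number of iterations needed (three in the experiment).
    """
    for i in range(max_iter):
        error = np.asarray(measure(), dtype=float)
        if np.all(np.abs(error) < tol_um):
            return i
        correct(error)
    return max_iter
```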
Figure 6: Positioning process
Because the workpiece weight rests on the cylinders of the wobble unit, a different behaviour in the two directions of travel could be observed: the piston moves significantly faster in the downwards (-z) direction, which leads to overshooting of the target position in the downwards movement. Therefore, the target positions of the wobble cylinders are approached from below, against the workpiece weight. Because the alignment process takes place before the grinding process, this behavior has no effect on the accuracy.
Profile grinding process. Rotational symmetry is of crucial importance for the manufacturing quality in the production of gear wheels. Therefore, the rotational symmetry of the microsystem is more important than its absolute stiffness. The mechanical qualities of the system were validated in a profile grinding process of a 4 t gear with a diameter of 1146 mm. For this purpose, the system was placed in the grinding machine with a gear (92 teeth) and a workpiece holder. Due to the high number of teeth, angle-dependent stiffness can easily be detected in the tooth pitch deviation measurements after the grinding process. All cylinders were clamped with p = 50 MPa during the manufacturing process. Regarding the usability of the system for the profile grinding of gears, the workpiece was measured on a Klingelnberg precision measuring center. All measured values of pitch deviation correspond to quality 1 (highest quality, DIN 3962). The results show the suitability of the system for use in the profile grinding process.
Summary
In this paper, a fine positioning system for profile grinding processes of large and heavy workpieces is presented. The positioning system is based on two mechanical movement stages with hydraulic cylinders as drives and a piezo hydraulic pump. It is able to correct eccentric errors of up to ±3 mm and wobble errors of up to ±0.072° for workpieces of up to 4 t. A piezo hydraulic pump, redesigned with check valves and a low-voltage piezo actuator, is able to meet industrial demands in terms of durability and safety requirements. The developed pump delivers accuracies below 1 μm, depending on the piston diameter, the pump frequency and the amplitude. The calculation of the setpoint values for the compensation of the workpiece position errors is capable of aligning the workpiece within three positioning steps, achieving an accuracy of better than 10 μm in all axes. Finally, the system was verified under industrial conditions in a profile grinding process with a 4 t gear wheel. It could be shown that, with the developed fine positioning system, a quality 1 grade according to DIN 3962 and sufficient stiffness for profile grinding processes were achieved. Further investigations are necessary in order to incorporate the actual movement of the mounting panel into the calculation of the setpoint values and thus to eliminate the iterative steps.
Acknowledgements
The fine positioning system was developed within the transfer project "Piezo-hydraulic Micro-Positioning system as setup assistance for large components". The authors want to thank the partners Roemheld and Siemens for their support and the German Research Foundation (DFG) for funding this project.
References
[1] E. Olaiz, J. Zulaika, F. Veiga, M. Puerto, A. Gorrotxategi, Adaptive Fixturing System for the
Smart and Flexible Positioning of Large Volume Workpieces in the Wind-power Sector, Procedia
CIRP 21 (2014) 183-188.
[2] D. Spath, S. Mussa, Compensation of Machine Tool Errors with a Piezo Device, Production
Engineering VIII/2 (2001) 103-106.
[3] C. Brecher, D. Manoharan, U. Ladra, H.-G. Köpken, Chatter suppression with an active
workpiece holder, Production Engineering, Research and Development Vol. 4 Numbers 2-3 (2010)
239-245.
[4] Physik Instrumente (PI) GmbH, High-Load Hexapod H-845, H-845_Datasheet, downloaded on
2017-08-01, (2017).
[5] C. Yang, G. L. Wang, B. S. Yang, H. R. Wang, Research on the structure of high-speed large-scale ultra-precision positioning system, Proceedings of the 3rd IEEE International Conference, Sanya, China, (2008) 9-12.
[6] B. Li, Q. Chen, D.-G. Lee, J. Woolman, G. P. Carman, Development of large flow rate, robust, passive micro check valves for compact piezoelectrically actuated pumps, Sensors and Actuators A 117 (2005) 325–330.
[7] B. Denkena, S. Plümer, Analysis of a Piezo-Hydraulically Actuated Fixing Plate for Highly
Precise Positioning, The 13th Mechatronics Forum International Conference, Proceedings, Vol. 2
(2012) 575-581.
Selective Laser Melting of Ti6Al4V using powder particle diameters less
than 10 microns.
Michael Kniepkamp1,a, Mara Beermann1,b, and Eberhard Abele1,c
1TU Darmstadt PTW, Otto-Berndt-Straße 2, 64285 Darmstadt, Germany
akniepkamp@ptw.tu-darmstadt.de, bbeermann_m@ptw.tu-darmstadt.de, cabele@ptw.tu-darmstadt.de
Keywords: Selective laser melting (SLM); Titanium; Additive manufacturing
Introduction
Additive manufacturing (AM), an emerging field in manufacturing technologies, features the
common principle of building solid parts directly from three-dimensional (3D) computer-aided
design (CAD) data by the addition of material layer by layer. Powder bed fusion-based AM
processes use thermal energy to selectively fuse regions of a powder bed [1]. Laser beam melting is
a process in which powder is applied in layers, which are then selectively melted using a laser beam
to generate 3D parts directly from CAD data. This study focuses on the laser beam melting of metal
powders, which is often termed selective laser melting (SLM). SLM typically involves layer
thicknesses of 20–100 μm and powders with particle sizes ranging from 20–45 μm [2]. The
minimum layer thickness depends on the particle size distribution of the powder being used. To
increase the resolution and accuracy of SLM, the process can now use powders with smaller particle
sizes, which enables layer thicknesses of less than 20 μm [3]. Powders with mean particle diameters
of less than 10 μm tend to agglomerate, which necessitates the use of powder rake systems to
process these materials. Since this newer process differs greatly from the established SLM process,
in this paper, the term micro selective laser melting (μSLM) is used.
Every material has unique physical properties, which require qualification for use in the SLM
process. This qualification can be met by either process simulation or experiment. Variations in the
main influencing process parameters (scan speed, hatch distance, and laser power) comprise the
essential aspects of experimental qualification, starting with single vectors of the first layer [4] and
continuing with the variation of the hatch distance to assess the surface influence [5]. In addition,
investigations are conducted regarding thermal, chemical, and mechanical properties such as
density, based on multiple layer parts such as cubes [6].
Simulations, in turn, are based on these experimental qualification results and can only be used when the results are sufficient to confirm the influence of the different parameters in the process; often, however, there is insufficient data to simulate the process [7]. With a 3D finite element method, the influence of the temperature distribution, the width of the melt pool, and the heat-affected zone can be simulated simultaneously. Single-track experimental results can then be used to predict the surface properties and the thermal spread [8]. Compared to the experimental qualification process, the main advantages of simulation are decreased cost and time savings, but it is only an approximation procedure [9]. Promoppatum et al. combined numerical and experimental approaches to simplify the material qualification process by combining several parameters as energy densities to draw a process map [10]. In this study, a similar approach is carried out for the μSLM process to reduce the number of experiments necessary to identify suitable process parameters for new materials.
Surface roughness is a critical factor in many applications. The dominant influence on the fatigue performance of additively manufactured parts is their surface roughness [11,12]. For medical instruments, a low average roughness (Ra) is required for sterilization. Currently, state-of-the-art post-processing operations such as grinding, shot peening, or machining are required to meet strict requirements. These operations increase the total time needed to produce the parts and require part-dependent tools, which is contrary to the AM philosophy. The surface morphology of SLM-produced parts is characterized by several effects that are highly dependent on the relative
orientation of the part with respect to the build direction [13]. On the top-facing surfaces, the
morphology is dominated by the stability of a single melt track and the hatch distance between two
adjacent tracks [14,15]. The morphology of side-facing surfaces that are parallel to the build
direction is dominated by partially melted powder particles. Side-facing surfaces are surrounded by
loose powder particles during the build process, which are drawn into the melt pool, but due to
insufficient energy at the melt pool edge, are only partially melted. This effect can be influenced by
the process parameters of laser energy and scan speed [16]. In the second part of this study the
surface morphology is analyzed using the best parameter setup from the material qualification
strategy described in the first part.
Methods
To conduct the experiments, a commercially available μSLM system DMP50 GP from 3DMicroprint GmbH (Germany) was used. This system uses a 50-W single-mode fiber laser with a wavelength of 1060 nm focused to a spot size of 30 μm. The laser can be operated in a pulsed or continuous-wave mode. In this study, the laser was operated in the continuous mode only, as the pulsed mode leads to discontinuous melt tracks resulting in high surface roughness. The build platform of the system has a diameter of 60 mm and is moved by piezo actuators with an accuracy of less than 1 μm. The build platform material is Ti6Al4V. To apply layer thicknesses of less than 10 μm, powders with sufficiently small particle diameters must be used. These powders tend to agglomerate, so it is impossible to coat them using gravitational forces only. Thus, the powder is applied by pressing and wiping it onto the build platform with external force. The entire system is housed in a glove box containing a closed-loop inert-gas purification system, which provides a high-quality inert-gas atmosphere with less than 1 ppm O2 and H2O for reactive materials like titanium or aluminum. The powder used in this study was provided by TLS Technik GmbH & Co. KG (Germany). It was analyzed using scanning electron microscopy (SEM) images and energy-dispersive X-ray spectroscopy (EDX), the results of which are shown in Figure 1. The particles are spherical in shape and their D50 size is 3.8 μm.
Particle size distribution:
Quantile    D10   D30   D50   D70   D90    D100
Size [μm]   1.7   2.5   3.8   5.4   6.98   11.26

Element analysis:
Element     Ti     Al    V     Zr    Nb    Mo    Sn
Weight [%]  85.5   6.3   3.6   0.6   0.5   0.6   0.6

Figure 1: Particle size distribution and element analysis of the powder material used
To determine the optimal layer thickness for the given powder, coating experiments were conducted. The piezo actuators of the build platform are designed to yield when the force applied to the build platform is too strong, which results in thicker layers than desired. With thinner layers, the force on the build platform increases as the larger particles are squeezed between the build platform and the rake. The dislocation is measured by the build platform's positioning system and is recorded after each coating step. Once the dislocation error is higher than the desired layer thickness, the system skips one layer. To determine the layer thickness, 100 coating operations were conducted with layer sizes ranging from 5 to 17 μm. Each experiment was repeated five times, and the average number of skipped layers is used to find a good compromise between layer thickness (resolution) and the positioning error caused by the powder.
The most important SLM process parameters are the laser power (PL) and the scan speed (vs),
which constitute the energy of a single laser scan track (El) (Eq. (1))
El = PL / vs    (1)
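As a sanity check, Eq. (1) can be evaluated for the parameter combinations used later (cf. Table 1). The small Python sketch below (our addition) performs the unit conversion from J/mm to mJ/mm:

```python
def line_energy_mj_per_mm(power_w: float, speed_mm_s: float) -> float:
    """Line energy E_l = P_l / v_s (Eq. 1); W/(mm/s) = J/mm, scaled to mJ/mm."""
    return power_w / speed_mm_s * 1000.0

# Spot checks against the parameter sets in Table 1:
print(line_energy_mj_per_mm(20, 1429))  # ~14 mJ/mm
print(line_energy_mj_per_mm(40, 6667))  # ~6 mJ/mm
```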
Producing solid volume bodies using SLM requires a process window in which solid scan tracks can be generated. To determine this process window, experiments building single scan tracks with different laser powers and scan speeds were carried out. A single layer of powder is applied onto the build platform and single lines are scanned with the laser. A full factorial design is used with laser powers ranging from 5 to 40 W in 5-W steps and scan speeds ranging from 500 to 7000 mm/s in 500-mm/s steps. Each line is built five times and evaluated using optical microscopy. The lines are categorized according to their quality, ranging from no line present to a continuous melt track. Using these categories, a process window for generating solid parts can be determined. One drawback of this method is that the thermal condition on the build platform differs from that in the later build process, so a second single-line experiment is conducted to generate lines on actual parts. To thermally insulate the parts from the build platform, a cross-shaped lattice structure made of single scan tracks is used as a support structure. In the μSLM process, the support structure has a typical height of 0.5 mm. The distance between the single tracks is 300 μm; a laser power of 30 W and a scan speed of 500 mm/s are used. On top of the support structure, a rectangular solid is built as a base for the second single-line experiment. Five single scan tracks are built on top of the base structure using the same full factorial design and parameter combinations described above (Figure 2).
Process parameters:
#  Object    PL [W]   vs [mm/s]    hs [μm]
1  Lines     5 - 40   500 - 7000   –
2  Volume    30       500          34
3  Support   30       1000         500

Figure 2: Part single-line experiment
The single lines are analyzed and categorized using optical microscopy. Additionally, the track width is measured and correlated with the line energy to find a suitable hatch distance (hs) for solid volume bodies. The result is an extended process window for the generation of single scan tracks on parts. Based on this process window, cubes with edge lengths of 5 mm were built using different line energies to further extend the process window towards solid parts. Based on the results of previous studies [17], a scan strategy with alternating vectors and a rotation of 83° in each layer was used. Line energies from 4 to 14 mJ/mm in 2-mJ/mm steps with laser powers of 20, 30, and 40 W, and a fixed scan track overlap of 20% are investigated. To evaluate the density, additional cubes with different volume energies were built in three repetitions, using the parameters shown in Table 1.
Table 1: Parameter combinations for evaluating density

#  PL [W]  vs [mm/s]  hs [μm]  EL [mJ/mm]
1  20      1429       23       14
2  20      3333       15       6
3  40      2857       23       14
4  40      6667       15       6
The cubes are built on the same support structure described above, which is removed by a grinding process after separation from the build platform. After separation, each test specimen was cleaned with a procedure similar to that used by Kamath et al. [18]. The density was measured following the approach suggested by Spierings and Levy [19], using the Archimedes method, and calculated according to Eq. (2):

ρP = ma / (ma − mfl) · ρfl    (2)

The total mass in air (ma) of all nine cleaned, dried, and outgassed specimens for each parameter setup was measured together. For the weight measurement, a calibrated Kern ABT 220-4M scale was used. After measuring all the specimens dry, the wet mass (mfl) was balanced in a 5% tenside solution, using the density of the solution (ρfl) at the given temperature.
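A minimal Python sketch of Eq. (2), with purely illustrative masses (the fluid density must be the value of the 5% tenside solution at the measurement temperature):

```python
def archimedes_density(m_air_g: float, m_fluid_g: float, rho_fluid_g_cm3: float) -> float:
    """Part density via Eq. (2): rho_P = m_a / (m_a - m_fl) * rho_fl."""
    return m_air_g / (m_air_g - m_fluid_g) * rho_fluid_g_cm3

# Illustrative values only, not measured data from this study:
rho_p = archimedes_density(10.00, 7.73, 1.00)
print(rho_p)                 # density in g/cm^3
print(rho_p / 4.45 * 100.0)  # relative density in % (reference: 4.45 g/cm^3)
```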
The surface morphology is analyzed using a hexagonally shaped specimen with a height of 5 mm and a face length of 6 mm. In the first step, the specimen was built using only a single exposure step with the best parameter set from the previous investigations. The surface roughness was measured using the tactile measurement system MarSurf GD25 with an MFW 250 surface probe with a tip angle of 90° (Mahr GmbH, Germany).
Figure 3: Specimen and method used to analyze surface morphology

Face         1+5   2+4   3    6+8    7
Orientation  90°   45°   0°   135°   180°
The measuring distance was 4.8 mm and a cut-off filter of 0.8 mm was used. Five horizontal and vertical measurements were carried out on each side and on the top surface of the specimen (Figure 3). In the second step, an additional contour exposure step was added to improve the surface roughness on the vertical surfaces. Based on the single-line experiments, a scan speed of 500 mm/s and a laser power of 30 W were used in the contour exposure step.
Results
Figure 4 shows the results of the coating experiments, in which the number of skipped layers decreases with larger layer thicknesses. The minimum number of skipped layers is seven, at a layer thickness of 17 μm. When the layer thickness is reduced from 11 to 9 μm, the number of skipped layers jumps from 10 to 12 and thus exceeds 10% of the total number of layers. More than 10% skipped layers is not desirable, as it reduces the accuracy of the build process in the z-direction. Therefore, a layer thickness of 11 μm is used for the other experiments in this study.
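The selection rule just described can be stated compactly. In the following Python sketch, the skip counts other than those reported for 9, 11 and 17 μm are invented for illustration; it picks the smallest tested layer thickness whose average skip count stays at or below 10% of the 100 coating operations:

```python
# Average skipped layers per tested layer thickness [um]; only the values for
# 9, 11 and 17 um are taken from the text, the rest are illustrative.
skips_per_thickness = {5: 16, 7: 14, 9: 12, 11: 10, 13: 9, 15: 8, 17: 7}

def best_layer_thickness(skips: dict, total_layers: int = 100) -> int:
    """Smallest thickness with at most 10 % skipped layers (best resolution)."""
    acceptable = [t for t, s in skips.items() if s <= 0.10 * total_layers]
    return min(acceptable)

print(best_layer_thickness(skips_per_thickness))  # -> 11
```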
[Figure 4 plot: number of skipped layers (error bars: std. dev.) over layer thickness from 3 to 17 μm]
Figure 4: Results of the coating experiments
The process window for the Ti6Al4V powder was developed in three steps, using two single-line experiments and one volume-body experiment. The scan tracks of the single-line experiments were examined using the categories shown in Figure 5.
Figure 5: Quality categories for single-line experiments: balling, disconnected track, homogenous track
For lines directly on the build platform, a minimum laser power of 15 W is required for a stable melt pool at the slowest tested scan speed of 500 mm/s. With higher laser powers, the speed can be increased up to 2000 mm/s at 40 W. Figure 6 [A] shows the process window for a stable melt pool for single lines. Based on these results, support parameters and a first set of solid-part parameters can be chosen for the second experiment. The single-line experiment was repeated on actual parts with support structure to simulate the thermal conditions occurring on real parts during the build process. In addition to categorizing the lines, the line width is measured using optical microscopy. The experimental result is an extended process window (Figure 6 [B]) with a maximum scan speed of 3000 mm/s at a laser power of 40 W.
[Figure 6 plot: scan speed (0-7000 mm/s) over laser power (0-40 W); regions A-D mark the process windows: stable melt pool on substrate, stable melt pool on part (extended window), process window for solid parts, and no melting]
Figure 6: Qualitative process window for Ti6Al4V using μSLM
Figure 7 shows the line width measurement results, which correlate with the line energy and can be approximated using a second-degree polynomial, as shown in Eq. (3) below:

wL = -0.0034 eL² + 0.85546 eL + 24.736    (3)
Based on previous μSLM studies, an overlap of 20% between two adjacent scan lines is recommended to achieve homogeneous surfaces without voids in high-density volume bodies. Eq. (3) can be used to calculate the required hatch distances with an overlap of 20% for the given line energies, which was done in the last step to expand the process window using volume bodies. Figure 6 shows the volume-body experimental results. It is possible to build volume bodies down to a line energy of 6 mJ/mm with the highest scan speed of 6666 mm/s at 40 W. Below a line energy of 6 mJ/mm, the parts tend to delaminate or it becomes impossible to build any solid structure (Figure 8, left).
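The hatch-distance calculation from Eq. (3) with a 20% overlap can be sketched as follows (our addition; the resulting values are estimates from the fitted curve, not the tabulated process parameters):

```python
def line_width_um(e_l_mj_mm: float) -> float:
    """Fitted line width in um as a function of line energy (Eq. 3)."""
    return -0.0034 * e_l_mj_mm**2 + 0.85546 * e_l_mj_mm + 24.736

def hatch_distance_um(e_l_mj_mm: float, overlap: float = 0.20) -> float:
    """Hatch distance for a given overlap between adjacent scan lines."""
    return (1.0 - overlap) * line_width_um(e_l_mj_mm)

print(hatch_distance_um(14.0))  # ~28.8 um estimated from the fit
print(hatch_distance_um(6.0))   # ~23.8 um estimated from the fit
```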
[Figure 7 plot: measured and calculated line width (approx. 20-80 μm) over line energy (0-90 mJ/mm); polynomial fit with R² = 0.9905; regions with cracks and delamination are marked]
Figure 7: Line widths of parts in the experiment
Cracks can be identified in all parts built with line energies of more than 6 mJ/mm (Figure 8, left). The cracks are oriented both horizontally and perpendicular to the build direction. They extend over several layers (Figure 8, right), and the number of cracks is greatly reduced at lower line energies. Based on these results, only line energies of 14 mJ/mm and 6 mJ/mm were used for the analysis of the density of the parts.
Figure 8: Part with layer delamination (left) and SEM image of cracks (right)
Table 2 shows the density measurement results. The 20-W specimens have lower densities than the 40-W specimens with the same line energies. The specimens with a line energy of 14 mJ/mm have a higher density than those with 6 mJ/mm, but also exhibit significantly more cracks. Using a reference density of 4.45 g/cm³ for cast Ti6Al4V, relative densities of up to 98.95% can be achieved.
Table 2: Density measurement results

#  Laser power [W]  Line energy [mJ/mm]  Density [g/cm³]  Relative density*
1  20               14                   4.37             98.28 %
2  20               6                    4.17             92.99 %
3  40               14                   4.40             98.95 %
4  40               6                    4.33             97.31 %

* calculated using a reference density of 4.45 g/cm³
Figure 9 summarizes the surface roughness measurement results. Without an additional contour exposure step, the roughness average (Ra) for all orientations ranged between 5.1 μm and 5.7 μm. No relevant difference can be seen between the horizontal and vertical measurements. The top surface had a similar mean roughness of 5.2 μm. As expected, the roughness average can be greatly reduced by adding an additional contour exposure step: the mean roughness over all faces is then between 1.3 μm and 2.1 μm. In this case, an influence of the measurement direction can be seen, with a mean roughness of 1.5 μm in the horizontal direction and 1.8 μm in the vertical one. By adding a contour exposure step, the roughness average of surfaces parallel to the build direction can thus be reduced by a factor of three.
[Figure 9 bar chart: roughness average (0-7 μm, with min/max deviation) over the orientation to the coating direction (0°, 45°, 90°, 135°, 180°, top surface) for horizontal and vertical measurements, each with and without contour exposure]
Figure 9: Results of the surface roughness measurement
Discussion
The coating experiment results indicate an ideal layer thickness of 11 μm for the given powder, which correlates with the particle size distribution measurement: particles with diameters greater than 11 μm cannot fit in the gap between the build platform and the coating blade, which causes a dislocation of the build platform due to the increased force. To further reduce the layer thickness and to increase the resolution in the build direction, finer powders are required. A single-line experiment on the build platform can be used as a first indication of a process window for a given material. Since the melted metal powder is very close to the build platform in this experiment, the thermal conditions are expected to differ greatly from those occurring when building real parts, due to the associated heat loss. This is confirmed by the second, on-part single-line experiment: the support structure acts as a thermal insulator between the build platform and the part, thus leaving more energy available to melt the powder and allowing the use of higher scan speeds. The amount of induced energy correlates with the melt pool size and thus with the width of the lines, which can be used to estimate the required hatch spacing between adjacent scan tracks when building volume bodies. Volume bodies can be built using even lower line energies than indicated by the second experiment, which can be explained by the overall thermal situation in which a layer of powder is melted using several adjacent scan lines: after melting the first track, the surrounding area is heated up, so less energy is required for the next track.
The volume-body experiments showed that cracks are generated in parts built with higher line energies. These cracks extend over several layers, which indicates that they are caused by residual stress and not by insufficient bonding between the layers (delamination). Residual stress is a well-known problem in SLM and is influenced by several factors. Small laser spot and melt pool sizes lead to a localized heat input with fast solidification and large thermal gradients. Differences in thermal shrinkage lead to a large build-up of thermal stresses, which can cause macro or micro cracks and delamination. Vrancken et al. examined the residual stresses for different materials in the SLM process and measured high values for Ti6Al4V and nickel-based alloys [20]. Several approaches may be taken to reduce residual stress: one is to use optimized scan strategies [21–23], another is to use preheating to lower the thermal gradients [24–26].
The surface roughness measurements showed that roughness averages of around 5.2 μm can be achieved using a single exposure step only. By adding a contour exposure step, the roughness can be greatly reduced. The results show the main advantage of the fine powder used in the μSLM process compared to traditional SLM powders: better surface qualities can be achieved [27]. The roughness of the top-facing surfaces, with an Ra of 5.2 μm, is higher than expected. This indicates a lack of fusion with the core process parameters, which can also be seen in the density measurements. An additional re-melting exposure step could be applied to top-facing surfaces to reduce the roughness [28].
Conclusions and outlook
In this study, Ti6Al4V powder with an average particle size of 3.8 μm was used to generate
three-dimensional parts using the μSLM process. A simplified experimental approach requiring
only three build jobs can be used to draw up a general process window. While it is possible to
produce crack-free parts using low line energies, only a relatively low density of 97.31% could be
achieved. A roughness average of less than 2 μm on side-facing and of 5.2 μm on top-facing
surfaces can be achieved using a core/contour exposure strategy. Future studies will concentrate on
different scan and preheating strategies to reduce the number of cracks at higher line energies and
thereby increase density.
Acknowledgments
This study was partially funded by the Federal Ministry of Economic Affairs and Energy through
the AiF GmbH (Grant No. KF2012461WO3). The authors would like to express their thanks for
this support.
References
[1] ASTM F2792-12a, Terminology for Additive Manufacturing Technologies, West
Conshohocken, PA, 2012.
[2] D.D. Gu, W. Meiners, K. Wissenbach, R. Poprawe, Laser additive manufacturing of metallic
components: materials, processes and mechanisms, International Materials Reviews 57 (3)
(2012) 133–164.
[3] A. Streek, P. Regenfuss, H. Exner, High Resolution Laser Melting with Brilliant Radiation, in:
25. Solid Freeform Fabrication Symposium, Austin, Texas, 2014, pp. 377–389.
[4] I. Yadroitsev, P. Bertrand, I. Smurov, Parametric analysis of the selective laser melting
process, Applied Surface Science 253 (19) (2007) 8064–8069.
[5] C. Over, Generative Fertigung von Bauteilen aus Werkzeugstahl X38CrMoV5-1 und Titan
TiAl6V4 mit "Selective Laser Melting", Shaker, Aachen, 2003.
[6] M. Ott, Multimaterialverarbeitung bei der additiven strahl- und pulverbettbasierten Fertigung,
Utz, München, 2012.
[7] I. Kellner, Materialsysteme für das pulverbettbasierte 3D-Drucken, Utz, München, 2013.
[8] J. Hötter, M. Fateri, A. Gebhardt, Prozessoptimierung des SLM-Prozesses mit hoch-reflektiven
und thermisch sehr gut leitenden Materialien durch systematische Parameterfindung und
begleitende Simulationen am Beispiel von Silber, RTejournal - Forum für Rapid Technologie
2012 (1) (2012).
[9] M. Cloots, A. Spierings, K. Wegener, Thermomechanisches Multilayer-Modell zur Simulation
von Eigenspannungen in SLM-Proben, in: Sysweld user forum, 2013.
[10] P. Promoppatum, R. Onler, S.-C. Yao, Numerical and experimental investigations of micro and
macro characteristics of direct metal laser sintered Ti-6Al-4V products, Journal of Materials
Processing Technology 240 (2017) 262–273.
[11] D. Greitemeier, C. Dalle Donne, F. Syassen, J. Eufinger, T. Melz, Effect of surface roughness
on fatigue performance of additive manufactured Ti–6Al–4V, Mater. Sci. Technol. (2015)
1743284715Y.000.
[12] H.A. Stoffregen, K. Butterweck, E. Abele, Fatigue Analysis in Selective Laser Melting:
Review and Investigation of Thin-walled Actuator Housings, in: 25. Solid Freeform
Fabrication Symposium, Austin, Texas, 2014, pp. 635–650.
[13] G. Strano, L. Hao, R.M. Everson, K.E. Evans, Surface roughness analysis, modelling and
prediction in selective laser melting, Journal of Materials Processing Technology 213 (4)
(2013) 589–597.
[14] I. Yadroitsev, I. Smurov, Surface Morphology in Selective Laser Melting of Metal Powders,
Physics Procedia 12 (2011) 264–270.
[15] Y. Pupo, L. Sereno, J. de Ciurana, Surface Quality Analysis in Selective Laser Melting with
CoCrMo Powders, MSF 797 (2014) 157–162.
[16] K. Mumtaz, N. Hopkinson, Top surface and side roughness of Inconel 625 parts processed
using selective laser melting, Rapid Prototyping Journal 15 (2) (2009) 96–103.
[17] J. Fischer, M. Kniepkamp, E. Abele, Micro Laser Melting: Analysis of Current potentials and
Restrictions for the Additive Manufacturing of Micro Structures, in: 25. Solid Freeform
Fabrication Symposium, Austin, Texas, 2014, pp. 22–35.
[18] C. Kamath, B. El-dasher, G.F. Gallegos, W.E. King, A. Sisto, Density of additively-manufactured, 316L SS parts using laser powder-bed fusion at powers up to 400 W, Int J Adv Manuf Technol (2014).
[19] A. Spierings, M. Schneider, R. Eggenberger, Comparison of density measurement techniques
for additive manufactured metallic parts, Rapid Prototyping Journal 17 (5) (2011) 380–386.
[20] B. Vrancken, R. Wauthle, J.-P. Kruth, J. van Humbeeck, Study of the influence of material
properties on residual stress in selective laser melting, in: 24. Solid Freeform Fabrication
Symposium 2013, Austin, Texas, 2013.
[21] P. Mercelis, J. Kruth, Residual stresses in selective laser sintering and selective laser melting,
Rapid Prototyping Journal 12 (5) (2006) 254–265.
[22] L. Parry, I.A. Ashcroft, R.D. Wildman, Understanding the effect of laser scan strategy on
residual stress in selective laser melting through thermo-mechanical simulation, Additive
Manufacturing (2016).
[23] M.F. Zaeh, G. Branner, Investigations on residual stresses and deformations in selective laser
melting, Prod. Eng. Res. Devel. 4 (1) (2010) 35–45.
[24] K. Kempen, L. Thijs, B. Vrancken, S. Buls, J. van Humbeeck, J.P. Kruth, Producing crack-free, high density M2 HSS parts by Selective Laser Melting: Pre-Heating the Baseplate, in: 24. Solid Freeform Fabrication Symposium 2013, Austin, Texas, 2013.
[25] R. Mertens, B. Vrancken, N. Holmstock, Y. Kinds, J.-P. Kruth, J. van Humbeeck, Influence of
Powder Bed Preheating on Microstructure and Mechanical Properties of H13 Tool Steel SLM
Parts, Physics Procedia 83 (2016) 882–890.
[26] F. Brückner, D. Lepski, E. Beyer, Modeling the Influence of Process Parameters and
Additional Heat Sources on Residual Stresses in Laser Cladding, J Therm Spray Tech 16 (3)
(2007) 355–373.
[27] A. Spierings, N. Herres, G. Levy, Influence of the particle size distribution on surface quality
and mechanical properties in AM steel parts, Rapid Prototyping Journal 17 (3) (2011) 195–
202.
[28] E. Yasa, J.-P. Kruth, Application of laser re-melting on selective laser melting parts, in: Miran
Brezocnik (Ed.), Advances in Production Engineering & Management, Maribor, Slovenia,
2011, pp. 238–310.
Chapter: Industry
Prototyping in highly-iterative product development for technical systems
Sebastian Schloesser1,a,b, Michael Riesener1, Günther Schuh1
1 RWTH Aachen University, Laboratory for Machine Tools and Production Engineering (WZL), Steinbachstraße 19, 52074 Aachen, Germany
a s.schloesser@wzl.rwth-aachen.de, b +49 241 80-28019
Keywords: highly-iterative product development, prototypes, minimum viable product, Developers Dilemma
Abstract
Nowadays, the realization of radical innovations is a crucial success factor for manufacturing
companies acting in an environment of increasing market dynamics. Heterogeneous customer
requirements, shortening product life cycles and a high variety of required product functions are
among the main challenges many manufacturers of technical systems are facing. Similar
circumstances concerned the software industry in the early 1990s. As an answer, agile development
methods like Scrum were first applied in development projects. The object-oriented, iterative
development of functional product increments that are shippable to potential customers at the end
of a development phase has helped the industry to dynamically align the product to the customers’
needs. In doing so, development time has been reduced significantly. The development of technical
systems, in contrast, typically follows maturity-oriented approaches focusing on the entirety of a
product at each development stage. First approaches to apply iterative development methods to
technical systems outline the challenge of dividing the systems into functional increments to be
assigned to short development cycles. This paper presents the framework of a bottom-up approach
to systematically divide a technical system into coherent increments which are potentially shippable
as prototypes to internal and external customers in order to reduce market or technological
uncertainties. While several top-down approaches already recommend constituting prototypes in
highly-iterative product development based on user stories, which require distinct elements of a
technical system, this approach focuses on the technical feasibility of building up prototypes
embodying distinct elements of a technical system. The conflict between the efficiency-oriented
realization of marginal prototypes aiming at quick customer responses and the effectiveness-oriented
realization of extensive prototypes aiming at a maximum degree of customer perception constitutes
an inherent area of conflict. Depending on the purpose of a prototype, selected criteria are applied
to a technical system’s architecture to derive coherent functions and components to be developed in
a given development cycle. The findings about decoupling technical systems into distinct
increments are then aggregated to derive implications for the product architecture design of
technical systems, facilitating the application of highly-iterative product development processes.
Introduction
Nowadays, the realization of radical innovations is a crucial success factor for manufacturing
companies acting in an environment of increasing market dynamics. Heterogeneous customer
requirements in combination with shorter product life cycles, leading to a high variety of product
functions, are among the main challenges many manufacturers of technical systems are facing [1].
Realizing radical innovations in such an environment implies a high degree of both market and
technological uncertainty [2]. After similar circumstances had affected the software industry in the
early 1990s, the industry initiated the broad application of
* Submitted by: Sebastian Schloesser
agile development methods like Scrum in complex development projects [3]. Apart from an
extensive change regarding organizational setup and mindset in development teams, the application
of agile development methods led to a massive change in development process design by
consistently prioritizing the early and intense reduction of uncertainties using incremental
functional prototypes [4].
Particularly with regard to confirmed savings in cost, time and quality within development
projects, there has recently been considerable industrial and scientific attention on adopting agile
development methods within the maturity-oriented approaches of technical systems development
[5][6][7][8][9]. However, the transfer of agile product development from software to technical
systems entails challenges, especially in terms of realizing incremental prototypes. Dividing
complex systems into separable elements is one of the main challenges [10], and it becomes
particularly evident when incremental prototypes have to be realized rapidly and, at the same time,
validated reliably in close collaboration with internal or external stakeholders.
Therefore, this paper aims at introducing a research framework to systematically support an
effective and efficient prototyping in the context of agile product development for technical
systems. After the basic characteristics of agile product development and the importance of
prototyping in particular are described, the core challenges of prototyping with respect to quick
validation of developed technical systems are outlined. Relevant research approaches discussing
related issues are briefly introduced afterwards. Thereafter, the research framework is introduced
before key findings are summarized and future work is drafted.
Characteristics of highly-iterative product development
The principles of agile product development were first introduced by the so-called Agile
Manifesto in 2001. The authors introduce several paradigms and principles for a new approach in
software development by focusing customer satisfaction through early and continuous delivery of
valuable software rather than following a strict development plan. By continuously delivering
functioning product increments and receiving feedback from the customer, change requests can
effectively be incorporated based on early customer feedback [11].
In order to apply the principles of agile product development to technical systems’ development,
the established development processes have to be adjusted. As depicted in figure 1, the agile
development approach basically differs from the traditional sequential approach in terms of
measuring development progress. While sequential processes are continuously tracked by an overall
maturity status, agile processes rather consider distinct incremental prototypes as key indicators for
development progress.
[Figure: In the sequential approach, development progress is measured by overall maturity statuses across the phases specification, concept, design, realization and validation; in the agile approach, development progress is measured by incremental prototypes across the same phases.]
Figure 1: Sequential process approach vs. agile process approach [12]
When it comes to the integration of agile principles into sequential development processes of
technical systems, the term highly-iterative product development has gained attention and
shall equally be adopted for this work [1][6][7][8]. In this context, it is widely agreed that a pure
adoption of agile process frameworks is not applicable due to specific requirements in technical
systems development [6][7][8]. Therefore, this paper explicitly focuses on the specific
requirements arising from an amplified usage of physical prototypes. In particular, the intense
realization of physical prototypes in highly-iterative product development needs to be systematically
integrated into the overall process framework. Among various methodologies and frameworks
used to implement agile principles, Scrum has been the most commonly used one [13]. The
framework divides the product development process into multiple so-called sprints (iterations),
targeting the delivery of distinct product increments to be validated at the end of a sprint. During
sprint planning the overall sprint target is defined by conducting a risk evaluation and subsequent
selection of relevant development questions. By doing this, the length of a sprint is determined in
accordance with the relevant scope to be elaborated. The act phase contains the actual iterative
development of a functioning product increment. Afterwards, the incremental prototype is validated
in collaboration with internal and external stakeholders within the check phase. Here, the
prototype constitutes the most relevant part of an iteration since, based on the application of the
respective prototype, all relevant stakeholders are meant to provide information concerning market
or technological uncertainties to be considered in further development [14].
Prototyping in highly-iterative product development
Utilizing prototypes is considered an effective technique to validate different design or
technological alternatives and communicate ideas to end users or further stakeholders as part of the
product development process [15]. The comparison of the current development status with
stakeholders’ expectations throughout the whole development process consequently ensures
technical feasibility and market acceptance [16]. With the broader application of highly-iterative
product development for technical systems, in combination with a widespread usage of rapid
prototyping methods such as 3D printing, the utilization of physical prototypes will increase,
especially in the early development phases [17].
While software products are divisible into different development items to be realized in distinct
sprints in the form of functional microservices [18], the definition of suitable functional incremental
prototypes remains one of the main challenges within highly-iterative product development.
Technical systems contain indivisible components, which are often highly correlated and on their
own do not serve as potentially releasable increments towards customers [5]. Furthermore, physical
prototypes usually require substantial effort regarding time and cost in order to realize functional
increments with adequate tangibility, so that valid results are generated when confronting customers
or other stakeholders with artefacts. It is therefore a particular challenge to determine an appropriate
degree of fidelity for a prototype, given the significant costs and efforts of high-fidelity
prototypes [19].
In response to this challenge, Cooper et al. adjust the understanding and definition of a “done
sprint” in the course of highly-iterative development by introducing a differentiation between
prototypes and so-called protocepts. Protocepts are defined as product versions between a product
concept and a ready-to-trial prototype. Protocepts can be of physical or virtual nature as long as
customers or further stakeholders can provide targeted feedback [5].
The concept of Minimum Viable Products (MVP) is a popular approach to assess the required
degree of detail for prototypes and protocepts. In order to concentrate all development efforts on the
product’s unique value proposition and to limit prototyping efforts, the concept suggests cutting out
all non-essential features of the product while still achieving a learning effect for subsequent
development. Still, the main challenge in developing an MVP remains defining the right
combination of desired learning effects and required quality [20]. It is crucial to effectively
realize as much functionality as needed on the one hand and to efficiently simplify as far as possible
on the other [16]. Consequently, it remains a substantial challenge to plan for an effective and
efficient realization and utilization of prototypes, protocepts, MVPs or other types of product
increments.
Before related work in the field of prototyping for highly-iterative product development is
analyzed, an inherent challenge when planning and realizing prototypes for fast development
validation shall be presented, introducing the Developers Dilemma (see fig. 2).
[Figure: Design (y-axis: required quality) over target (x-axis: desired learning scale). The desired learning scale and further relevant target dimensions are defined during sprint planning for distinct sprints, so the x-axis positioning is predefined; the developer is responsible for an adequate positioning on the y-axis, where quality includes common quality measures as well as the costs and technical depth of the solution. Overquality creates waste, while with underquality the MVP does not produce reliable results. The Developers Dilemma consists in optimizing the design as close to the minimum viability line as possible while neither exceeding nor falling below the required quality.]
Figure 2: The Developers Dilemma [20]
The previously mentioned challenge of developing an adequate Minimum Viable Product (MVP)
within each sprint is initially relevant during sprint planning. During this phase, the team decides
which items from the product backlog (“What?” aspect) should be completed in the upcoming
sprint and which approach should be used during development (“How?” aspect), depending on the
stakeholders in charge [14]. Thereby, the scope as well as the targets of an individual product
increment are determined. Target dimensions such as the desired learning scale are usually defined
during sprint planning either by the management team or by the developers themselves. Hereafter,
an appropriate design, for example in terms of quality, has to be chosen to optimally fit the targeted
learning scale. If the design exceeds the required quality, waste is created in terms
of working hours, costs, material etc. If the design falls below the required quality,
the MVP does not provide reliable results during validation. Considering natural quality
tendencies of software or hardware engineers, commonly striving for high-quality development, the
effect is named Developers Dilemma [20].
The necessary alignment of target and design dimensions when planning the prototypical
realization of product increments in highly-iterative product development is one of the
formative properties of the research framework introduced in this paper. In this context, the
illustrated interrelation is only one example of the manifold interrelations between target dimensions
and design dimensions to be elaborated as part of the holistic research framework.
Related work
The systematic planning for individual sprints in highly-iterative product development in order to
increase efficiency and effectiveness of the early and intensive utilization of physical prototypes has
recently been investigated in scientific literature. Cooper and Sommer present approaches enabling
early, quick and cheap validation of development progress in physical product development by
using so-called protocepts. However, the question remains how to determine the required
incremental representation of the product, ranging from a generic concept to a full prototype,
depending on the stakeholder in charge [5]. To develop a description model for prototypes, Exner et
al. propose different scopes of prototypes, depending on the degree of representation of the final
product [19]. Similarly, Hilton et al. present a concept for planning the designing and prototyping
process. Emphasizing the need to reduce the costs of prototypes, the authors suggest increasing
single-component testing rather than implementing and testing entire systems [21]. Kampker et al.
describe the characteristics of prototypes in highly-iterative product development projects and point
out that prototypes should not be developed following stiff, pre-planned degrees of maturity, but
rather to answer individually arising questions [22]. Apart from assessing the role of prototyping in product
development projects, multiple approaches focus on systematically bundling and assigning
development tasks to individual sprints. In this context, Rauhut developed a generic methodology to
structure and synchronize development tasks [23]. From a technical point of view, modularization
can be considered as an approach to effectively and efficiently bundle development items as well.
For this purpose, module drivers are utilized to divide the product into manageable modules for
development and testing efforts [24]. Modular microservices in agile software development have
recently been regarded as an approach to facilitate easier, quicker and particularly scalable software
development. Considering this approach, the optimal number and size of individual microservices
to be developed in parallel have to be determined in advance of a sprint, based on decision criteria
such as team size, available infrastructure etc. [18]. A first approach to bundle development
activities in the context of highly-iterative product development is introduced by Schuh et al. [6]
[7]. The authors propose a framework to assess which parts of a product can be developed using
agile methods and which parts require the application of conventional stage-gate development
approaches by evaluating individual parts regarding distinct dimensions such as customer relevance,
market & technology uncertainty as well as prototype manufacturability. Böhmer et al. explicitly
illustrate the conflict between systematic development approaches and “trial and error” approaches,
particularly with regard to prototyping within highly-iterative development processes. The authors
stress the need for flexibility in product development to reach a “happy medium” between complex
prototyping efforts and “trial and error” efforts, which consume fewer resources but do not
incorporate the entire functionality of the product [17]. Addressing this issue in software
development, the Filter-Fidelity-Model by Hochreuter et al. proposes an approach to quantify a
prototype’s fidelity in order to allocate adequate prototypes to respective tasks and development
stages [25].
In conclusion, related literature touches on the outlined challenges in prototyping for highly-iterative
product development and provides suitable links to a holistic framework enabling efficient and
effective prototyping. In addition, several approaches from both software and hardware perspectives
corroborate the challenge introduced in the previous section.
Research Framework
The exemplified challenge in sprint planning for highly-iterative development of technical
systems shall be addressed in a holistic research framework. Therefore, the five-step approach
illustrated in figure 3 is introduced in this paper.
[Figure content:
– Partial model I (descriptive model): Principles of task allocation and definition in sprint planning. Research question: Which principles are utilized to systematically structure extensive development efforts into distinct sprint phases?
– Partial model IIa (descriptive model): Content-based design dimensions of incremental prototypes. Research question: How can incremental prototypes, which are realized within an iteration, be generically described?
– Partial model IIb (descriptive model): Target dimensions of an iteration. Research question: Which dimensions are targeted when planning for an iterative realization of incremental prototypes?
– Partial model III (explanatory model): Interrelations in the area of conflict when planning for incremental prototypes. Research question: How can the interrelations of design dimensions and target dimensions, constituting immanent conflicts when realizing incremental prototypes, be holistically modelled?
– Partial model IV (decision model): Systematic sprint planning for incremental prototypes in highly-iterative product development. Research question: How can the planning for incremental prototypes in highly-iterative product development for technical systems be systematically optimized by aligning design dimensions and target dimensions with respect to generic conflicts?]
Figure 3: Research Framework
In the following, the distinct research questions are presented, together with the rough contents to
be elaborated in the respective partial models. An in-depth elaboration of the partial models will be
the subject of further publications within the research area of prototyping for highly-iterative
product development.
Principles of task allocation and definition in sprint planning. User stories, prioritized
requirement backlogs, and distinct technological or market uncertainties formulated as basic
questions are only an excerpt of the manifold approaches for defining the scopes of distinct sprints.
For example, a user story covers one explicit area of application of a particular customer group or
market segment, so that the requirements as well as the characteristics of that group can be
addressed comprehensively in a prototype [26]. The mentioned approaches recommend constituting
prototypes based on top-down questions to be answered as quickly as possible. Bottom-up
approaches, in contrast, which primarily consider technical feasibility when bundling components
or functions for prototypical realization, have so far been neglected. Here, the analogy to the
Module Indication Matrix appears suitable: components are bundled into modules based on typical
patterns, which are identified by evaluating distinct components regarding factors such as separate
testability or supplier availability [24]. While the Module Indication Matrix serves as a
comprehensive bottom-up approach to systematically divide a product into suitable scopes, an
in-depth elaboration of a corresponding methodology for the prototypical bundling of components
in the course of highly-iterative development is still required. For instance, coherent use cases can
be deployed to identify relevant functions and components which integrally represent a distinct use
case. Furthermore, aspects like a coherent design perception are likely to be utilized when
identifying relevant components for validating the current design status with customers or other
stakeholders. From a technical perspective, components utilizing identical materials or
manufacturing technologies are likely to be bundled for early validation cycles.
Existing principles of top-down task allocation and definition as well as the mentioned bottom-up
approach are condensed into a model describing current state-of-the-art principles for
systematically structuring extensive development efforts into distinct sprint phases.
Content-based design dimensions of incremental prototypes. When planning a sprint to
rapidly elaborate and validate distinct development scopes, there are usually many degrees of
freedom left for the team or an individual developer implementing a prototype. Especially when it
comes to the physical implementation of prototypes serving for validation at the end of a sprint, it is
essential to precisely determine the characteristics of certain design dimensions, such as scope,
quality or fidelity, in accordance with the defined set of targets. As already exemplified with the aid
of the Developers Dilemma, the team or the individual developer is meant to optimally accomplish
the set target, in the form of a learning rate, by neither exceeding nor falling below the required
quality. Following this example, it is assumed that a set of design dimensions exists which
comprehensively represents the degrees of freedom to be determined in the phase of sprint
planning.
In the form of a morphological box, relevant design dimensions as well as their respective
characteristics are elaborated to enable a content-based description of the incremental prototypes
realized in an iteration.
Target dimensions of an iteration. As already mentioned in the previous section, it is essential to
align the realization of a prototype with the set targets for an individual iteration. According to
Rauhut, the specific method of validation for investigating an iteration’s outcome depends on the
three characteristic target dimensions: expected result, available time and reasonable effort/costs
[23]. Moreover, it has to be taken into account which stakeholder is the targeted addressee of an
individual iteration, as this significantly affects the level of abstraction of the iteration’s outcome
format. For example, an internal iteration of a specific functionality requires a considerably less
comprehensive prototype than would be necessary to generate valid results by integrating an
external customer’s perspective into the validation cycle [27].
As a result of this partial model, a comprehensive description of relevant target dimensions is
elaborated, which serves as a general framework to be applied when defining targets for a sprint in
highly-iterative product development.
Interrelations in the area of conflict when planning for incremental prototypes. The
importance of matching design and target dimensions in sprint planning for highly-iterative product
development has already been illustrated by the Developers Dilemma and shall explicitly be
addressed in this partial model.
Figure 4 shows interrelations between exemplary target dimensions (y-axis) and design
dimensions (x-axis) which have to be taken into account when aiming at an efficient and effective
realization and utilization of prototypes in highly-iterative product development. Both examples
result in specific areas of conflict, which have to be resolved proactively during sprint planning.
The specific area of conflict in the first case (see fig. 4, left) originates from the conflict between a
prototype generating maximally valid results, e.g. from a market expedition, and a prototype being
realized with minimum effort. On the one hand, the aim is to reduce a prototype to a fractional
scope of the product’s components to be implemented with minimum effort. On the other hand,
delivering a comprehensive prototype for a market expedition promises far more valid results
than a fractional prototype, as customers’ perception is much more meaningful when they are
confronted with an overall impression of a product. Knowing this, the prototype scope has to be
adjusted so as to resolve the resulting area of conflict as well as possible.
A second case, depicting another specific area of conflict, is shown in the right part of figure 4.
Here, the conflict originates from the desire to quickly generate results in highly-iterative product
development that are as valid as possible. Again, a design dimension, in this case the
fidelity of a prototype, needs to be aligned with the required celerity and the desired validity of an
iteration.
Based on an extensive elaboration of existing literature and an in-depth analysis of several use
cases in industrial practice, the relevant interrelations are explained and finally aggregated in this
partial model. Consequently, this model provides the basis for an integral determination of
optimally aligned design and target dimensions in sprint planning for highly-iterative product
development.
[Figure: Left, the target dimensions validity and effort over the design dimension product scope: the validity of the generated result exhibits diminishing marginal utility, while the effort increases exponentially due to necessary system integration, assembly etc. Right, the target dimensions validity and celerity (time⁻¹) over the design dimension fidelity: validity increases continuously, while celerity decreases continuously due to longer implementation cycles.]
Figure 4: Exemplary interrelations of target and design dimension (schematic, qualitative)
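The qualitative curves of Figure 4 can be made concrete with assumed functional forms. The following sketch is purely illustrative: the saturating validity curve, the exponential effort curve and all parameter values are assumptions made here for demonstration, not results of this paper. It merely shows how an optimum prototype scope could be located once such interrelations have been quantified:

import numpy as np

# Illustrative (assumed) functional forms for the left-hand case of Figure 4:
# validity saturates with prototype scope (diminishing marginal utility),
# while effort grows exponentially due to system integration, assembly etc.
def validity(scope: np.ndarray, a: float = 3.0) -> np.ndarray:
    return 1.0 - np.exp(-a * scope)            # normalized to [0, 1]

def effort(scope: np.ndarray, b: float = 2.5) -> np.ndarray:
    return np.expm1(b * scope) / np.expm1(b)   # normalized to [0, 1]

scopes = np.linspace(0.0, 1.0, 1001)
net_benefit = validity(scopes) - effort(scopes)
best_scope = scopes[np.argmax(net_benefit)]
print(f"Illustrative optimum prototype scope: {best_scope:.2f}")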
Systematic sprint planning for incremental prototypes in highly-iterative product
development. To eventually support decision making in sprint planning within highly-iterative
product development projects, the findings are consolidated into a decision model. The decision
model aims at defining an optimum configuration and allocation of development tasks to distinct
sprints, considering both the relevant design dimensions and target dimensions. The aim is to
accurately define and design incremental prototypes at an optimum level of abstraction and
technical depth to be realized within individual sprints.
Based on the identified principles of task allocation, in alignment with the relevant design and
target dimensions as well as their interrelations, an optimum configuration of development
tasks to be elaborated and physically implemented is defined. The incremental prototypes are meant
to strike the optimum between reducing a certain degree of uncertainty with respect to the overall
initiative and an efficient elaboration and implementation during the sprint. In this context, one of
the major development patterns, experienced especially in environments of advanced engineering
knowledge and expertise, is explicitly scrutinized: for example, traditional German machine
engineering companies exhibit an inherent drive to elaborate engineering tasks perfectly and
comprehensively, also known as completeness paranoia [26].
Conclusion and further research
In order to increase efficiency and effectiveness in planning for incremental prototypes in
highly-iterative product development, a comprehensive research framework is introduced in this
paper. The focus of the paper lies on motivating the practical and theoretical need for a more
systematic approach to plan, elaborate and implement costly prototypes with respect to individually
targeted uncertainties. The partial models are briefly introduced to logically illustrate the approach.
On the level of the partial models, this paper addresses future research in the scientific community
focusing on agile or highly-iterative development approaches for technical systems. Future research
will focus on the relevant design dimensions as well as on their respective characteristics in order to
operationalize the concept of modelling incremental prototypes on a technical level. By doing this,
a generic content-based description model is created which facilitates further discussion regarding
the optimum scope, fidelity, technical depth etc. of incremental prototypes.
References
[1] T. Gartzen, F. Brambring and F. Basse, Target-oriented prototyping in highly iterative product development,
Procedia CIRP 51, 2016, pp. 19-23.
[2] G. C. O'Connor and M. P. Rice, A Comprehensive Model of Uncertainty Associated with Radical Innovation,
Journal of Product Innovation Management, vol. 30, Issue S1, 2013 pp. 2-18.
[3] D. F. Rico, H. H. Sayani, and S. Sone, The business value of Agile software methods: Maximizing ROI with just-in-time processes and documentation, J. Ross Pub., Fort Lauderdale, FL., 2009.
[4] D. Karlström and P. Runeson, Integrating agile software development into stage-gate managed product development,
Empirical Software Engineering, Vol. 11 No. 2, 2006, pp. 203-225.
[5] R. G. Cooper and A. F. Sommer, From Experience: The Agile–Stage-Gate Hybrid Model: A Promising New
Approach and a New Research Opportunity, Journal of Product Innovation Management, vol. 33, Issue 5, 2016, pp.
513-526.
[6] G. Schuh, M. Riesener and F. Diels, Methodology for the Suitability Validation of a Highly Iterative Product
Development Approach for Individual Segments of an Overall Development Task, Proceedings of the WGP Congress
2016, pp. 513-521
[7] G. Schuh, M. Riesener and F. Diels, Structuring Highly Iterative Product Development Projects by Using HIP-Indicators, Proceedings of the IEEM Conference 2016, pp. 1171-1175
[8] G. Schuh, S. Rudolf, M. Riesener and J. Kantelberg, Application of Highly-Iterative Product Development in
Automotive and Manufacturing Industry, Proceedings of ISPIM Innovation Forum, 2016, pp. 1-13.
[9] T. Klein, Agiles Engineering im Maschinen- und Anlagenbau, Dissertation TUM München, 2016.
[10] T. S. Schmidt and K. Paetzold, Agilität als Alternative zu traditionellen Standards in der Entwicklung physischer
Produkte: Chancen und Herausforderungen, Design for X - Beiträge zum 27. DfX-Symposium 2016, pp. 255-267.
[11] K. Beck, M. Beedle, A. van Bennekum, A. Cockburn, W. Cunningham, M. Fowler, J. Grenning, J. Highsmith, A. Hunt, R. Jeffries, J. Kern, B. Marick, R. Martin, S. Mallor, K. Schwaber and J. Sutherland, “The Agile Manifesto”, 2001, available from: http://www.agilemanifesto.org/. [31.03.2017]
available from: http://www.agilemanifesto.org/. [31.03.2017]
[12] N. Johnson, Applying Agile To Hardware Development. We’re Not That Different After All, Xtreme EDA USA
Corporation, 2011.
[13] K. Schwaber, SCRUM Development Process, in J. Sutherland, C. Casanave, J. Miller,P. Patel and G. Hollowell
(Eds.), Business Object Design and Implementation, Springer London, London, 1997, pp. 117–134.
[14] J. Sutherland, The Scrum Handbook, Cambridge: Scrum Inc., 2013.
[15] G. Gabrysiak, J. Edelman, H. Giese, A. Seibel, How Tangible can Virtual Prototypes be?, Proceedings of the 8th
Design Thinking Research Symposium, 2010, pp. 163-174.
[16] A. Albers, M. Behrendt, S. Klingler, K. Matros, Verifikation und Validierung im Produktentstehungsprozess, in U.
Lindemann, Handbuch Produktentwicklung, 2016, pp. 541-569.
[17] A. I. Böhmer, A. Beckmann, U. Lindemann, Open Innovation Ecosystem - Makerspaces within an Agile Innovation
Process, in Proceedings of the ISPIM Innovation Summit: Changing the Innovation Landscape, 2015, pp. 1-11.
[18] E. Wolf, Microservices - Agilität durch Modularisierung, 2016, available from:
https://www.innoq.com/en/articles/2016/04/microservices-agilitaet/. [31.03.2017]
[19] K. Exner, K. Lindow, R. Stark, J. Angesleva, B. Bahr, E. Nagy, A transdisciplinary perspective on prototyping, in
IEEE International Conference on Engineering, Technology and Innovation/ International Technology Management
Conference (ICE/ITMC), 2015, pp. 1-8.
[20] H. Terho, S. Suonsyrjä, K. Systä, The Developers Dilemma: Perfect Product Development or Fast Business
Validation?, in Proceedings of the 17th International Conference, PROFES, 2016, pp. 571-579.
[21] E. Hilton, J. Linsey, J. Goodman, Understanding the prototyping strategies of experienced designers, in
Proceedings of IEEE Frontiers in Education Conference (FIE), 2015, pp. 1-8.
[22] A. Kampker, R. Förstmann, M. Ordung, A. Haunreiter, Prototypen im agilen Entwicklungsmanagement, in ATZ Automobiltechnische Zeitschrift, Ausgabe 7-8/2016, 2016, pp. 7-8.
[23] M. Rauhut, Synchronisation von Entwicklungsprozessen durch Taktung, Dissertation RWTH Aachen, 2011.
[24] A. Ericsson, G. Erixon, Controlling design variants - Modular product platforms, 1999.
[25] T. Hochreuter, S. Diefenbach, M. Hassenzahl, Durch schnelles Scheitern zum Erfolg: Eine Frage des passenden
Prototypen?, in Tagungsband UP13, 2013, pp. 78-84.
[26] G. Schuh, M. Riesener, C. Ortlieb, F. Diels and S. Schröder, Agile Produktentwicklung, in Tagungsband Aachener
Werkzeugmaschinen-Kolloquium 2017, pp. 29-51.
[27] C. Donati, M. Vignoli, How tangible is your prototype? Designing the user and expert interaction,
International Journal on Interactive Design and Manufacturing, vol. 9, iss. 2, 2015, pp. 107-114.
An analytical-heuristic approach for automated analysis of dependency losses and root cause of malfunctions in interlinked manufacturing systems
Thomas Hilzbrich1,a, Felix Georg Mueller2,b, Timo Denner2,c, Michael Lickefett2,d
1 Institute of Industrial Manufacturing and Management (IFF), University of Stuttgart, Nobelstrasse 12, 70569 Stuttgart, Germany
2 Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Nobelstrasse 12, 70569 Stuttgart, Germany
a thomas.hilzbrich@ipa.fraunhofer.de, b felix.mueller@ipa.fraunhofer.de, c timo.denner@ipa.fraunhofer.de, d michael.lickefett@ipa.fraunhofer.de
Keywords: Manufacturing system, Analysis, Root cause
Abstract. The analysis and optimization of interlinked manufacturing systems is challenging,
mainly due to dynamic interdependencies between system components. This paper presents a
method to automatically identify downtimes and determine their causes in interlinked
manufacturing systems. The method helps to retrace the root cause of dependency losses, even if
the causing station is no longer stopped. Furthermore, data mining techniques are used to identify
workpiece-specific root causes of malfunctions and scrap as well as temporal correlations between
downtimes.
Introduction
Increasing competition in global markets as well as the trend towards the individualization of
products forces manufacturers to make their manufacturing processes more flexible and to
continuously improve their production facilities [1,2]. In order to meet the challenges of increasing
product complexity and the flexibility this requires in manufacturing, a higher complexity of the
manufacturing system is often inevitable [3]. This can result in a higher number of components in
a production line and greater dependencies between consecutive process steps [4]. However, the
operating performance of automated and interlinked production lines is often affected by a
multitude of malfunctions and machine downtimes due to these dynamic interdependencies [5].
To discover improvement potentials of a manufacturing system, an analysis and evaluation of the
system is required. A widely-used way to evaluate the performance of a system is to use
performance indicators like technical availability or OEE (overall equipment effectiveness) and to
use these figures to identify improvement potentials [6]. These figures are of limited use for
interlinked production lines, so that several extensions of the OEE figure exist [7]. Furthermore,
quality management methods like FMEA (Failure Mode and Effects Analysis) are used to identify
weak spots in manufacturing lines, though they are generally executed manually [8].
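For context, the OEE figure is conventionally computed as the product of three loss factors; the following is one common form (factor definitions vary between sources and are not taken from this paper):

\mathrm{OEE} = \underbrace{\frac{t_{\mathrm{operating}}}{t_{\mathrm{planned}}}}_{\text{availability}} \cdot \underbrace{\frac{t_{\mathrm{cycle,theor.}} \cdot n_{\mathrm{produced}}}{t_{\mathrm{operating}}}}_{\text{performance}} \cdot \underbrace{\frac{n_{\mathrm{good}}}{n_{\mathrm{produced}}}}_{\text{quality}}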
In the research community, various methods have been proposed for the automated analysis of
malfunctions in manufacturing systems. Besides the calculation of productivity indicators like
OEE, methods based on data mining techniques have been developed to identify the root causes of
malfunctions in manufacturing systems. Most of these methods have different weaknesses, for
instance an inaccurate identification of the root cause of dependency losses [9–11].
To meet these challenges, a method was developed that automatically retraces dependency losses
and evaluates the root causes of malfunctions in interlinked manufacturing systems. As input data,
features extracted from video feeds displaying several subsequent manufacturing process steps are
used. Thus, consistent data of a manufacturing system is available without much implementation
effort. In order to properly analyse a manufacturing system, a configurable abstract model of a
manufacturing system is used to meet the characteristics of various production lines.
The article is structured as follows: in the following section, methods for an automated analysis
of manufacturing systems are discussed. Next, the developed analytical and heuristic method to
analyse malfunctions in a manufacturing system is presented. Based on an industrial
manufacturing system, the validation of the method is outlined. Finally, conclusions are presented
and future research directions are discussed.
State of the art
Various methods for the automated analysis of malfunctions in manufacturing systems have been
proposed in the literature. In general, these approaches can be classified into analytical
methods, mainly calculating productivity ratios, and heuristic methods, which search for
patterns to identify the root cause of malfunctions. Contrary to analytical methods, which calculate
distinct results, heuristic methods discover patterns and relationships using data mining techniques.
In [9] an approach to calculate availability ratios and to identify causes of availability losses in
automated assembly lines is presented. To evaluate downtimes, conditions of manufacturing
stations are defined based on machine data. To this end, it is specified for each station which
combinations of parameter values represent a certain condition. In addition, a radio location system
is used to determine the location of machine operators. By this means, it is checked whether a
machine operator is available for operation.
[10] developed a method to automatically calculate product-type specific productivity ratios of
stations in interlinked manufacturing systems. The machine data used for this purpose is combined
with product data such as variant type and product characteristics. A model of a manufacturing system is
defined which incorporates multiple types of linkage between stations, recirculation of products as
well as diverging and converging material flows. The method includes an algorithm to determine
waiting and blocked conditions of manufacturing stations based on the buffer filling rate. Thus,
interferences between interlinked stations are detected. As a result of the method, OEE ratios are
calculated for each manufactured product type.
In order to identify associations between downtimes of manufacturing stations, [11] developed a
method based on association rule learning. The presented algorithm searches for frequently
occurring patterns of downtimes, which are detected in a particular interval. For instance, if a
specific downtime of a station occurs frequently after another downtime, a temporal association
between those downtimes could be detected. Besides associations between errors of the same
manufacturing station, associations between different stations can also be detected. The method is
based on the a-priori algorithm, which was extended to detect the associations described.
An algorithm for learning association rules that describe associations between the duration of a
malfunction and the processed product type or the type of malfunction is introduced in [12]. The
method is intended to identify whether specific product or malfunction types are responsible for the
majority of downtimes. [13] likewise used association rule learning for analysing the cause of scrap
in manufacturing scenarios where a variety of machinery combinations exist. As a result, the
method identifies which combinations of machines used for production most likely lead to scrap.
In summary, the discussed analytical and heuristic methods focus on generating different results.
Whereas the discussed analytical methods describe the current performance of a manufacturing
system by continuously identifying downtimes and calculating productivity ratios, the described
heuristic methods retrace the cause of an error in a manufacturing system by identifying
associations in manufacturing data, based on historic data representing a longer period of
manufacturing time.
Combination of analytical and heuristic analysis
In this paper, a method is presented which combines an analytical and a heuristic approach to
evaluate the performance of a manufacturing system and to retrace the root causes of errors. The
basic function of the analytical method is the detection of downtimes and of the conditions of faulty
stations. If a manufacturing station is blocked or waiting, the station that initially caused this can be
identified. In order to be able to analyse a manufacturing system, an abstract model of it is defined.
The heuristic method is based on association rule learning, used to identify workpiece-specific root
causes of malfunctions and scrap as well as temporal associations between downtimes.
Model of a manufacturing system. For the analysis of a manufacturing system, a model describing
its properties is needed. The central elements of a production line are the manufacturing stations
where the machining of workpieces takes place. Each station contains at least one piece of
manufacturing equipment, e.g. a robot. In a manufacturing station, one or more workpieces are
processed at the same time. In general, workpieces can be transported through a production line on
workpiece carriers or in bulk. For a manufacturing station, three positions are defined through
which a workpiece or a workpiece carrier passes (Fig. 1). At the incoming position, workpieces
arrive at a station; at the outgoing position, they leave it. The machining of workpieces takes place
at the processing position; generally, a station has only one processing position. Material conveyed
to a station is modelled as a component. It is defined that a station has only one main source and
one main sink. In addition, a manufacturing station can have side tracks to remove scrap.
[Figure: Two consecutive stations i and i+1, each with incoming, processing and outgoing positions and minimum throughput times (throughput time_i,min and throughput time_i+1,min), connected by buffer i with throughput time_i→i+1,min; components enter and scrap leaves at each station.]
Figure 1: Model of a manufacturing station with buffers
An interlinked manufacturing system consists of multiple stations linked by conveyors. Between
two stations, a material buffer can be placed, while a conveyor is also capable of buffering
workpieces. If manufacturing stations are rigidly linked by a conveyor without buffering capability,
the capacity of the buffer between these stations is zero. A buffer is assigned to its upstream station.
For the evaluation of a waiting condition of a manufacturing station, a minimum throughput time
is defined for each station and buffer. It specifies the minimum time a workpiece needs to pass a
station from the incoming to the outgoing position, and a buffer from the outgoing position of the
previous station to the incoming position of the following station. The throughput times are
calculated once, when a workpiece or workpiece carrier passes an empty station or buffer.
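As a minimal sketch of this station and buffer model, assuming Python as the implementation language (all class and attribute names are illustrative choices, not taken from the paper):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Buffer:
    """Buffer assigned to its upstream station; capacity 0 models a rigid link."""
    capacity: int
    throughput_time_min: Optional[float] = None  # learned on the first pass
    fill: int = 0

@dataclass
class Station:
    """Station with incoming, processing and outgoing positions."""
    name: str
    cycle_time_theoretical: float
    throughput_time_min: Optional[float] = None  # incoming -> outgoing
    downstream_buffer: Optional[Buffer] = None   # link towards station i+1

    def learn_throughput_time(self, t_in: float, t_out: float) -> None:
        # Calculated once, when a workpiece passes the empty station.
        if self.throughput_time_min is None:
            self.throughput_time_min = t_out - t_in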
Detection and analysis of downtimes. In order to detect a downtime of a manufacturing station,
the theoretical cycle time of the station is used. The cycle time is defined as the time span between
two workpieces or workpiece carriers at the outgoing position [14]. If the current cycle time exceeds
the theoretical cycle time, the downtime of the station is the difference between these two values. A
downtime of a station can be caused either by the station itself or by another station. The latter is
the case if the station is waiting for workpieces to be processed or if the station is blocked due to a
subsequent buffer or station. If a station is blocked, workpieces cannot leave the outgoing position
because the subsequent positions are occupied.
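A sketch of this downtime rule in the same illustrative Python style (the function name and signature are assumptions made here):

def downtime(theoretical_cycle_time: float,
             t_out_previous: float, t_out_current: float) -> float:
    """Downtime share of the current cycle: the amount by which the observed
    cycle time (span between two workpieces at the outgoing position)
    exceeds the theoretical cycle time; 0 if the cycle was on time."""
    current_cycle_time = t_out_current - t_out_previous
    return max(0.0, current_cycle_time - theoretical_cycle_time)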
To detect the cause of a downtime, the continuously calculated filling rate as well as the minimum
throughput time of a station are used. If a station is waiting, a workpiece processed by the previous
station was not delivered in time. Hence, it is checked whether there are workpieces in the station
which arrived within a specified time frame (Figure 2): if the filling rate of the station minus the
number of workpieces that arrived within the last t time units equals 0, where t is the throughput
time of the station, the station has a waiting condition.
timeSpan = (currentTimestamp – throughputTime_i,min, currentTimestamp]
incomingCount = number of workpieces that arrived at the incoming position within timeSpan
If Station_i_fillingRate – incomingCount = 0 Then
    Condition_Station_i = Waiting
Figure 2: Identification of a waiting condition of a manufacturing station
In the case of a blocked station, the outgoing position of the considered station and the incoming
position of the subsequent station are currently occupied, or have been occupied within the last
t time units, where t is the position changing time of the subsequent station. The position changing
time is defined as the duration until a new workpiece arrives at the incoming position after the prior
buffer was full.
Using information about the downtimes and the detected quantities of finished goods and scrap,
productivity ratios like the TOEE (Total Overall Equipment Effectiveness) of individual stations,
which differentiates dependency losses, and the OEEML (Overall Equipment Effectiveness in a
Manufacturing Line) of a whole manufacturing line can be calculated [15].
Analysis of dependency losses. If a manufacturing station is stopped because it is waiting for
material or is being blocked by another station, it is not itself responsible for the downtime. It has to
be analysed which manufacturing station is causing the downtime. In a system with several
manufacturing stations, it is possible that the initial cause is not the direct neighbour of the affected
station, since an error can spread through the system. A waiting condition occurs if the previous
station has not delivered material to the considered station in time. If a station is blocked, it cannot
deliver processed workpieces to the next station. In general, these two conditions occur if the
causing station has a downtime or a higher cycle time than the considered station.
Analysis of a waiting condition. To properly analyse a waiting condition of a station i, several
scenarios have to be considered. First of all, the previous station (station i-1) can still be stopped at
the occurrence of the downtime, so that there is a clear relation between the conditions. However, it
is also possible that station i-1 is running again or was never stopped, meaning that there is no
direct relation between the conditions. In addition, station i-1 can be running but producing only
scrap, which is sorted out.
Based on a detected waiting condition, it is evaluated which prior station was the initial cause.
For this, it is checked whether the waiting condition is affected by a downtime of the direct
predecessor, by calculating at which time this condition must have been present. Due to the waiting
condition, the prior conveyor or buffer has at least one empty position. If station i-1 has produced a
workpiece after a downtime, this workpiece passes through the system within a specific duration,
namely the minimum throughput times of the upstream buffer and of the considered station, until it
arrives at the station's outgoing position. If a waiting condition is detected which is caused by a
downtime of the previous station, this station must still have been stopped at the time given by the
detection of the downtime of station i minus the defined throughput time. If station i-1 was stopped
at this time, the cause of its downtime is retrieved and set as the cause of the waiting condition;
additionally, station i-1 is added to the cause. Thereby, a relation between these conditions is
established. If a subsequent station then has a waiting condition caused by the analysed condition,
its cause is related to the cause of that downtime. For the scenario that the prior station was not
stopped but caused a downtime due to a higher cycle time, this station is the initial cause of the
downtime; otherwise it would have stopped itself. The described procedure is shown in Figure 3 as
pseudo code.
If Condition_Station_i = Waiting Then
    Station_i-1_stoppedTime = Station_i_startDowntime –
        (throughputTime_i→i+1,min + throughputTime_i,min)
    If Condition_Station_i-1(startTime <= Station_i-1_stoppedTime <= endTime) Then
        RootCause_Condition_Station_i = RootCause_Condition_Station_i-1
        RootCause_Condition_Station_i.add(Station_i-1)
Figure 3: Identification of initial cause of a waiting condition
Analysis of a blocked condition. Similar to a waiting condition, scenarios for the analysis of a blocked condition can be defined. First, the subsequent station (station_i+1) can be stopped. If station_i+1 has a higher cycle time than the considered one (station_i), it can block the prior station without having a downtime. Furthermore, it is possible that the subsequent station is already operating again at the moment of the analysis after having had a downtime. This can occur if the next workpiece, which was waiting in the incoming position of station_i+1, has not yet arrived in the processing position, so that the chain of subsequent workpieces up to the workpiece in the outgoing position of station_i cannot move to their next positions and station_i cannot deliver a new workpiece.
Similar to the identification of the cause of a waiting condition, it is checked whether the blocked condition is caused by a downtime of the next station_i+1. If station_i+1 is stopped and the start time of its downtime was before or at the same time as the beginning of the downtime of station_i, then the cause of this downtime is related to the cause of the considered downtime. If it is not stopped, it is checked whether a downtime of station_i+1 ended within the last t time units, where t is the position changing time_i+1. In both cases the next station is added to the cause of the downtime, so that the scenario of a non-stopped station_i+1 is covered. The described procedure is shown in Figure 4 as pseudo code.
If Condition_Station_i = Blocked Then
    Station_i+1_stoppedTime = Station_i_startDowntime – positionChangingTime_i+1
    If Condition_Station_i+1(startTime <= Station_i_startDowntime) Then
        RootCause_Condition_Station_i = RootCause_Condition_Station_i+1
    Else If Condition_Station_i+1(endTime >= Station_i+1_stoppedTime) Then
        RootCause_Condition_Station_i = RootCause_Condition_Station_i+1
    RootCause_Condition_Station_i.add(Station_i+1)
Figure 4: Identification of initial cause of a blocked condition
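To make the tracing in Figures 3 and 4 concrete, the following Python sketch links a condition to its predecessor's or successor's root cause. The condition record and parameter names are illustrative assumptions, not the paper's data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Condition:
    # Illustrative downtime record of one station.
    station: str
    kind: str                 # "Waiting", "Blocked", or an error cause
    start: float              # start of the downtime [s]
    end: float                # end of the downtime [s]
    root_cause: List[str] = field(default_factory=list)

def trace_waiting(cond: Condition, prev: Condition,
                  throughput_time_min: float) -> None:
    # Figure 3: the predecessor must have been stopped at the start of
    # the downtime minus the minimal throughput time.
    stopped_time = cond.start - throughput_time_min
    if prev.start <= stopped_time <= prev.end:
        cond.root_cause = prev.root_cause + [prev.station]

def trace_blocked(cond: Condition, nxt: Condition,
                  position_changing_time: float) -> None:
    # Figure 4: the successor is either still stopped or its downtime
    # ended within the last position changing time.
    stopped_time = cond.start - position_changing_time
    if nxt.start <= cond.start or nxt.end >= stopped_time:
        cond.root_cause = nxt.root_cause + [nxt.station]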
Heuristic method. In an interlinked manufacturing system, it can be challenging to identify weak spots with common methods like FMEA due to the complex interrelations within the system. Identifying patterns in (large) sets of manufacturing data using data mining methods can help to find the root cause of a malfunction.
For example, if a certain type of workpiece often causes an error in a specific manufacturing station, or if a workpiece is often detected as scrap when it stays too long in a buffer, this can be useful information for optimization. In addition, it can be interesting to detect temporal associations between downtimes. For instance, a specific malfunction might be detected frequently after a downtime caused by planned maintenance. The heuristic method in this paper covers these two types of correlation, workpiece-specific and temporal, when identifying root causes.
A root cause is only classified as relevant if it occurs with a certain frequency. If a malfunction occurs only once in a certain time, it is generally not relevant for the long-term optimization of a manufacturing system. Thus, the identification of root causes has to be done based on historical data in order to find correlations.
Identification of workpiece-specific root cause. In an interlinked manufacturing system, a workpiece passes through the stations in a specific time and is processed with certain parameters. These workpiece-specific values can be combined into a manufacturing history of the workpiece. For example,
a workpiece can be processed at a certain temperature or remain a certain time in a buffer. Based on these manufacturing histories, associations between parameter values and downtimes of manufacturing stations or scrap can be identified.
Since the focus is not on identifying associations between attributes and a label (e.g. scrap), as a classification technique does, but on relations between arbitrary attributes, an association rule learning method, the FP-growth algorithm [16], is used. Association rule learning identifies relations between attributes in a data set of the form A → B (if A then B) based on the frequency of these attribute values [17]. A frequency threshold (support) helps to identify only frequently occurring patterns. Rules are defined as relevant if the confidence of the rule exceeds a specified threshold.
Besides the FP-growth algorithm, the Apriori algorithm is a common method to identify association
rules.
In order to generate a manufacturing history of a workpiece, it has to be known at which time the workpiece was at a certain place in the manufacturing system; only then can manufacturing parameters be related to a workpiece. Therefore, tracking of workpieces is required. Methods for tracking workpieces in a manufacturing system are not discussed in this paper. The constructed manufacturing history of a workpiece (Table 1) consists of the variant of the workpiece, the processing time for each manufacturing station, and the throughput time for each buffer. In addition, it is noted whether a station had a downtime while the workpiece was in the processing position and whether the workpiece was identified as scrap.
Table 1: Parameters of manufacturing history of a workpiece
ID | Variant | S_i processing time | S_i downtime    | B_i throughput time | Scrap
1  | A       | 10 s                | Equipment error | 15 s                | False
To apply the FP-growth algorithm, the manufacturing histories need to be transformed into a binary data format. This is done by constructing intervals for each attribute and generating an attribute for each interval. As downtimes and scrap will generally not be the dominant factors in the data set of manufacturing histories, a relatively low support threshold can be chosen, while a high confidence threshold can be set, as only significant associations are relevant, i.e. a specific cause should often have a specific effect. The discovered association rules have to be filtered, as many rules will describe the normal behaviour of a system. Therefore it is defined that the effect of a rule must be a downtime condition or detected scrap.
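As an illustration of this pipeline, the following Python sketch bins one numeric attribute into intervals, one-hot encodes the histories, and mines and filters rules with the freely available mlxtend implementation of FP-growth; the attribute names, data, and thresholds are illustrative, not the paper's tool.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Illustrative manufacturing histories in the shape of Table 1.
df = pd.DataFrame({
    "variant": ["A", "A", "B", "A"],
    "B1_throughput_s": [900, 750, 40, 35],
    "scrap": [True, True, False, False],
})

# Construct intervals for the numeric attribute (binarisation step).
df["B1_throughput"] = pd.cut(df["B1_throughput_s"], bins=[0, 600, 3600],
                             labels=["B1<=10min", "B1>10min"])

# One transaction (item set) per workpiece history.
transactions = [[f"variant={r.variant}", str(r.B1_throughput)]
                + (["scrap"] if r.scrap else [])
                for r in df.itertuples()]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

# Low support, high confidence, as argued above.
itemsets = fpgrowth(onehot, min_support=0.1, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)

# Filter: keep only rules whose effect is a downtime condition or scrap.
rules = rules[rules["consequents"].apply(lambda c: "scrap" in c)]
print(rules[["antecedents", "consequents", "support", "confidence"]])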
Identification of temporal associations of downtimes. Especially in an interlinked manufacturing system, downtimes of manufacturing stations can have an impact on each other, so it is interesting to identify these temporal associations. To do so, the Apriori-based algorithm for frequent episode discovery introduced in [18] is used. Similar to the Apriori algorithm for identifying association rules, it has a phase to find frequent patterns (called episodes) and a phase to generate association rules based on them; essentially only the calculation of the frequency differs from the standard Apriori algorithm. To calculate the frequency of related events, downtimes which occurred within a certain, individually configurable time frame are searched. An occurrence of an episode of downtimes is only counted if it does not overlap with another occurrence of the same episode. As downtimes can have different durations, they can additionally be differentiated by intervals of durations. As input data, the downtimes with their type, start time, and duration are used.
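For illustration, a greatly simplified non-overlapped occurrence count for a serial episode of at least two event types can be sketched as follows in Python; the full algorithm in [18] additionally handles event durations and is considerably more involved.

def episode_frequency(events, episode, window):
    # events: time-ordered list of (timestamp, event_type) tuples.
    # episode: tuple of two or more event types that must occur in order
    # within `window` time units; a greedy left-to-right scan counts
    # non-overlapped occurrences.
    count, i = 0, 0
    while i < len(events):
        t0, kind = events[i]
        if kind == episode[0]:
            pos, j = 1, i + 1
            while j < len(events) and events[j][0] - t0 <= window:
                if events[j][1] == episode[pos]:
                    pos += 1
                    if pos == len(episode):
                        count += 1
                        i = j  # skip past this occurrence (non-overlap)
                        break
                j += 1
        i += 1
    return count

# Example: downtime of station 2 followed by waiting of station 3.
log = [(0, "S2_down"), (3, "S3_wait"), (10, "S2_down"), (12, "S3_wait")]
print(episode_frequency(log, ("S2_down", "S3_wait"), window=5))  # 2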
Proof of concept
The developed method was implemented as an IT tool and validated on the basis of simulated data and data from an industrial manufacturing system. For validation, a data set of a manufacturing system with three manufacturing stations linked by a workpiece carrier system was used. The data set covers a time span of 24 hours. To verify the results, videos showing the processes of the manufacturing system were used.
It can be shown that the first station is identified as the cause of a waiting condition of the last station, even though the causing station is no longer stopped at the time of the analysis. The corresponding video sequence shows the stop of the first station followed by stops of the two subsequent stations after the buffers in between have been depleted. Before the stop of the last station, the first station is running again, but the produced workpiece did not reach the last station in time. An equivalent scenario can be reproduced by analysing a blocked condition. However, it appears that the cause of a dependency loss possibly cannot be identified at the beginning of an analysis if previous or subsequent stations have a higher cycle time than the stopped one, as a downtime of these stations is detected later.
For validating the identification of workpiece-specific root causes using association rule learning, generated manufacturing histories of workpieces are used. In several iterations the parameters of the model were varied, finally using a minimal support of 10 % and a minimal confidence of 80 %. Besides trivial association rules, it is detected that if a workpiece stayed more than 10 minutes in a buffer, it is later detected as scrap in the last station. This pattern can be explained by time-temperature transformation effects occurring while waiting significantly longer in the buffer.
The temporal association rule learning technique is intended to identify temporal correlations of downtimes. With a minimum support of 10 % and a minimal confidence of 60 %, it can be shown, for example, with a confidence of 70 % that a waiting condition of the last station is followed by a downtime of the second station.
In general, further validation and testing of the heuristic model is needed, as the small number of attributes and stations limited the usage of the methods.
Conclusion
Especially in an interlinked manufacturing system with numerous stations and short cycle times, the interdependencies can be complex. With the combination of the analytical analysis of downtimes and dependency losses and the heuristic identification of root causes, weak spots in such a system can be identified. It is shown that the proposed method, compared with other approaches for the analysis of dependency losses, is able to identify the initially causing station of a dependency loss, as a downtime can affect several stations in an interlinked manufacturing system. This even works if the causing station is not stopped at the time of the analysis.
In the future, the manufacturing system model will be extended to converging and diverging workpiece flows so that a broader range of manufacturing systems can be analysed. In addition, it has to be considered that a conveyor and a buffer can also have downtimes. Regarding the identification of workpiece-specific root causes, feasibility tests of the developed model have been performed. In the next step, it will be validated using a larger quantity of machine data in order to get more detailed insights.
References
[1] E. Abele, R. Gunther, Herausforderungen an die Produktion der Zukunft, in: E. Abele, G.
Reinhart (Eds.), Zukunft der Produktion: Herausforderungen, Forschungsfelder, Chancen, Carl
Hanser Fachbuchverlag, 2011, pp. 5–32.
[2] D. Mourtzis, M. Doukas, Decentralized manufacturing systems review: Challenges and
outlook, Logist. Res. 5 (2012) 113–121.
[3] W.R. Ashby, An Introduction to Cybernetics, Wiley, New York, 1961.
[4] T. Bauernhansl, Die Vierte Industrielle Revolution - Der Weg in ein wertschaffendes
Produktionsparadigma, in: T. Bauernhansl, M. ten Hompel, B. Vogel-Heuser (Eds.), Industrie 4.0 in
Produktion, Automatisierung und Logistik: Anwendung, Technologien, Migration, Springer
Vieweg, Wiesbaden, 2014, pp. 3–35.
[5] H.-P. Wiendahl, M. Hegenscheidt, Produktivität komplexer Produktionsanlagen, ZWF Zeitschrift für wirtschaftlichen Fabrikbetrieb (2001) 160–163.
[6] S. Nakajima, Introduction to TPM: Total Productive Maintenance, Productivity Press,
Portland, Or., 1988.
[7] G. Lanza, J. Stoll, N. Stricker, S. Peters, C. Lorenz, Measuring Global Production
Effectiveness, Procedia CIRP 7 (2013) 31–36.
[8] H.-P. Wiendahl, M. Hegenscheidt, Verfügbarkeit von Montagesystemen, in: B. Lotter, H.-P.
Wiendahl (Eds.), Montage in der industriellen Produktion: Ein Handbuch für die Praxis, second ed.,
Springer, Berlin, Heidelberg, 2012, pp. 331–364.
[9] C. Köhrmann, Modellbasierte Verfügbarkeitsanalyse automatischer Montagelinien.
Dissertation, VDI-Verlag, Hannover, 2000.
[10] T. Langer, Ermittlung der Produktivität verketteter Produktionssysteme unter Nutzung
erweiterter Produktdaten. Dissertation, Verl. Wiss. Scripten, Chemnitz, 2015.
[11] S. Laxman, B. Shadid, P.S. Sastry, K.P. Unnikrishnan, Temporal Data Mining for Root-Cause Analysis of Machine Faults in Automotive Assembly Lines (2009).
[12] B. Kamsu-Foguem, F. Rigal, F. Mauget, Mining Association Rules for the Quality
Improvement of the Production Process, Expert Systems with Applications 40 (2013) 1034–1045.
[13] W.-C. Chen, S.-S. Tseng, C.-Y. Wang, A novel manufacturing defect detection method using
association rule mining techniques, Expert Systems with Applications 29 (2005) 807–815.
[14] K. Suzaki, Modernes Management im Produktionsbetrieb: Strategien, Techniken,
Fallbeispiele, Hanser, München, 1989.
[15] M. Braglia, M. Frosolini, F. Zammori, Overall equipment effectiveness of a manufacturing line (OEEML), Journal of Manufacturing Technology Management 20 (2008) 8–29.
[16] J. Han, J. Pei, Y. Yin, Mining frequent patterns without candidate generation, in: Proceedings
of the 2000 ACM SIGMOD international conference on Management of data, 2000, pp. 1–12.
[17] J. Han, M. Kamber, J. Pei, Data Mining: Concepts and Techniques. third ed., Morgan
Kaufmann Publishers, 2011.
[18] S. Laxman, P. Sastry, K. Unnikrishnan, Discovering Frequent Generalized Episodes When
Events Persist for Different Durations, IEEE Transactions on Knowledge and Data Engineering 19
(2007) 1188–1201.
Design of a Modular Framework for the Integration of Machines and
Devices into Service-oriented Production Networks
Sven Jung (1,a), Michael Kulik (1,b), Niels König (1,c) and Robert Schmitt (1,2,d)
(1) Fraunhofer Institute for Production Technology IPT, Steinbachstr. 17, 52074 Aachen, Germany
(2) Laboratory for Machine Tools and Production Engineering (WZL) of RWTH Aachen University, Chair for Metrology and Quality Management, Steinbachstr. 19, 52074 Aachen, Germany
(a) sven.jung@ipt.fraunhofer.de, (b) michael.kulik@ipt.fraunhofer.de, (c) niels.koenig@ipt.fraunhofer.de, (d) r.schmitt@wzl.rwth-aachen.de
(a) +49 241 8904-472, (b) +49 241 8904-411, (c) +49 241 8904-113, (d) +49 241 80-20283
Keywords: Digital Manufacturing System, Distributed design, Integration
Abstract. In today's production systems, flexibilisation and process data tracking become more and more important in order to face the challenges of individualised production and highly linked processes. The required digital interconnection of machines and systems is time-consuming and costly due to the variety of different interfaces and protocols. In this work, we present a framework for a flexible and less complex integration of machines and devices into production networks and systems. It helps to make existing machines Industry 4.0 ready and to unify data interchange for more dynamic and linked production systems.
Introduction
Today, in view of the demand for highly individualised products and rapidly changing
technologies and production environments, industry is facing new challenges [1]: traditional rigid
process chains are too inflexible and the single process steps usually are isolated. If one station fails
or one part of the process chain is reconfigured for another product, the whole production is
disrupted. Furthermore, there is a growing need to track process data along the process chain,
because this global knowledge can be used for monitoring, analysis, and optimisation of the whole
system. Therefore, the goal of a wide range of manufacturing companies is to make their systems
more flexible and highly interconnected.
Consequently, Manufacturing Execution Systems (MES) have been developed to link business functionalities with the manufacturing floor and to coordinate decision-making and the collaboration of machines. The starting point is the digital interconnection and unified data interchange. However, there are many existing concepts and protocols, often designed for specific domains, which leads to heterogeneous interface environments with only partially interoperable solutions [2]. An additional challenge is to make existing machines Industry 4.0 ready and to interconnect them, through a common communication, with newer systems that are already equipped with flexible interfaces. So far, this upgrading has been very time-consuming and costly due to the variety of incompatible interfaces and the associated complex development of machine-dependent adapters and protocol translators. Therefore, the demand arises for a solution for an easy and universal interconnection of machines and systems.
Challenges of Integration Solutions
For the digital interconnection of systems, conversion layers have become quite popular to translate between different protocols and interface concepts [2]. By this, machines with arbitrary interfaces can be integrated into existing production networks and systems. The task of these middleware systems is not only to mediate between different systems and enable collaboration, but also to hide the complexity and infrastructure of the underlying application. Hence, they are
perfectly suited to enable remote monitoring and control of manufacturing processes [3]. A promising approach is Multi-agent Systems (MAS), which decentralise functionalities over the network through autonomous agents, like PABADIS [4] for plant automation. The problem with these solutions is that each agent is implemented individually from scratch and that the collaboration strategy is often limited to the particular agent network. Although there are already generic middleware systems with modular interfaces that can be configured according to the use case [5], most of them require special hardware to run on and are thus quite costly. Furthermore, existing approaches are usually inflexible regarding the limited number of supported interface designs and the ability to support custom, machine-dependent functionalities. A software framework, in contrast, would be able to
provide the required machine-dependent flexibility and hardware independence whilst uniformly
providing functionality among implementations, like the re-programmable IoT gateway proposed by
Al-Fuqaha et al. [6]. By the use of self-contained and reusable software modules for the individual
aggregation of control components, as illustrated by Mendes et al. [7], even more flexibility can be
introduced and reimplementation avoided. To set up loosely coupled and reconfigurable production
networks, they additionally propose the use of the Service-oriented Architecture (SOA), where
participating machines and systems offer their functionalities via services over the network.
However, these approaches provide only limited concrete support for integrating machines into existing networks and systems, as demanded by today's production networks.
Integration Framework Approach
In order to address the emerging demand for flexible integration solutions, we implemented middleware software in the form of a framework. It runs on top of a hardware component and supports the integrator by avoiding the error-prone and time-consuming reimplementation of basic functionalities of a digital interconnection (see Fig. 1).
Figure 1: An agent-based software framework guides the integration of machines into
production networks and systems
Universal, Agent-based Model. A prerequisite for the independent communication of machines and systems is a universal, machine-understandable semantics. Our basic idea is that each machine is digitally represented by an autonomous agent, offering its data and services via various service-oriented interfaces. By combining several predefined data structures, the agent describes the characteristic properties and functionalities of the real hardware: as illustrated in Fig. 1, the »Status« indicates whether the machine is connected, was already initialised, is currently active, or has an error. A »Position« describes the physical location where a certain product type can be deposited. In order to make live data available, »Variables« are created and filled with values on demand. By the use of parameterisable »Services«, individual process steps can be modularly defined.
With this abstract and general descriptive model, smart networking with loosely coupled collaboration and extensive compatibility with other devices, machines, and systems is achieved. Additionally, the service-oriented architecture introduces a high degree of flexibility and reusability, since process chains can be arbitrarily composed of single services and reconfigured when required. The result is a dynamic system, able to adapt to changing requirements.
Modular Software Kit. In order to achieve a high level of reusability and extensibility, we decided on a modularised architecture, as visualised in Fig. 2. The architecture consists of three layers, each of which fulfils a certain purpose. Completely independent horizontal modules and clearly defined, loosely coupled vertical interfaces ensure flexible usage.
Figure 2: Modularised and loosely coupled, multi-environmental architecture, supporting all
Windows versions
Core layer. The basis is formed by a core module with predefined data structures to describe the digital representation of a machine, as presented above. In addition, it specifies common behaviour of the agent, like changing the status according to running services and persistently storing states. Designated placeholders allow these data structures to be parameterised and custom, machine-dependent functionalities to be injected.
Functionality layer. On top of this, various self-contained modules that build upon the core module offer independent interfaces. One module adds an intuitive graphical user interface (GUI) to the agent, enabling manual interaction. Another one provides an exchangeable model for domain-dependent descriptions, used by the integrator to add detailed meanings to the data structures of the core definition. By this, the framework can be adapted to any subject area. The protocol module offers a RESTful web service for remote monitoring and control, a resource-oriented and stateless communication interface based on HTTP [8]. The utilities module groups common tools for communicating with hardware components.
Because the modules of this layer are completely independent of each other, any composition of modules for a concrete agent implementation is possible. It would even be possible to extend the range of existing modules in order to introduce new functionalities, for example an OPC-UA communication interface or a model for a new domain.
Application layer. The application module holds the specific implementation for controlling the targeted hardware and the actual composition of the modules of the functionality layer, and thus has to be implemented individually for each machine. Fig. 3 reflects the modular structure: first, the data structures of the core module are used to define all positions, data, and services the specific agent should provide, following the object-oriented programming model. Afterwards, machine-dependent functionality can be injected into the designated placeholders, for example the
individual control of the hardware. Using this agent definition, the user is able to activate the
modules of the functionality layer and arbitrarily compose the interface, depending on the use case.
If later on another or an additional interface or functionality is required, the corresponding module
can simply be exchanged or added.
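The framework itself is implemented on the .NET platforms described below; purely as a language-neutral Python sketch of this composition idea, with all names hypothetical, an agent definition could look as follows.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Agent:
    # Core-layer data structures (»Status«, »Position«, »Variables«, »Services«).
    name: str
    status: str = "initialised"
    positions: Dict[str, str] = field(default_factory=dict)
    variables: Dict[str, Callable[[], object]] = field(default_factory=dict)
    services: Dict[str, Callable[..., None]] = field(default_factory=dict)

    def call(self, service: str, **params) -> None:
        # Common behaviour: the status changes according to running services.
        self.status = "active"
        try:
            self.services[service](**params)
        finally:
            self.status = "idle"

# Application layer: inject machine-dependent logic into the placeholders.
def read_temperature() -> float:
    return 5.0  # placeholder for the actual hardware access

warehouse = Agent("cooled-warehouse")
warehouse.variables["Temp"] = read_temperature             # live data on demand
warehouse.services["Set Temperature"] = lambda value: None  # stub service
warehouse.call("Set Temperature", value=6.0)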
Figure 3: Framework architecture supports the integrator by providing stand-alone modules with
common functionalities
Environment. A main criterion for an integration solution is the support of multiple platforms in order to ensure broad compatibility. Windows is still the dominant operating system (OS) on desktop computers, holding the biggest share of the market [9]. Furthermore, Microsoft recently released a compatible cut-down version of Windows, named Windows IoT, able to run on single-board computers such as the Raspberry Pi, Cubieboard, or Arduino. These computers are inexpensive and come with numerous integrated hardware interfaces, which makes them well suited to host middleware software. That is why we decided to target all Windows versions and follow a multi-environmental approach.
With Windows 10 and Windows IoT, Microsoft introduced the Universal Windows Platform (UWP) concept, a runtime environment that allows implementing applications which run on multiple types of devices with the same code base, like desktop computers, phones, tablets, and single-board computers. However, UWP apps depend on the .NET Core software platform, which is optimised for cross-platform development and only supported since Windows 10. Older Windows versions still depend on the full .NET software platform. The solution is to provide two versions of the integration framework: one development stack depending on .NET and one depending on .NET Core (see Fig. 2). In order to reduce redundant parts and keep a high configurability, as much code as possible is moved into portable libraries shared by the two stacks. This way, the framework does not depend on specific hardware and is compatible with many different device types at the same time.
Evaluation and Discussion
In order to demonstrate the quality of the implementation and the resulting potential of our
approach, we set up a working demonstrator environment to run measurements.
Setup. The demonstrator environment consists of three components: a desktop computer, a Raspberry Pi 3, and a router interconnecting both computers via wireless LAN. The Raspberry Pi hosts an agent implementation for controlling a cooled warehouse for tubes, which are conventionally used for cell cultures (illustrated in Fig. 4). An analog temperature sensor measures the current temperature inside the warehouse and is attached to the GPIO pins of the Raspberry
Pi using an analog-to-digital converter. A Peltier element cools down the warehouse and can be
turned on or off by controlling a relay, attached to the GPIO pins using a transistor.
Figure 4: Exemplary network integration of a cooled tube warehouse using the proposed
framework and a Raspberry Pi
The integration framework is used to set up a digital model representation of the warehouse,
inject control logic to control the hardware and keep a constant temperature, and expose the
capabilities named below via the graphical user interface module and the RESTful web service
module:
- Temp (decimal Variable): Current value of the temperature sensor, updated every second by reading out the value of the analog-to-digital converter and calculating the temperature. If this value is higher than the desired temperature, the Peltier element is switched on by triggering the relay; otherwise it is switched off.
- Desired Temp (decimal Variable): Value indicating the desired temperature of the warehouse.
- Enabled (Boolean Variable): State indicating whether the warehouse is enabled or disabled.
- Cooling (Boolean Variable): State indicating whether the warehouse is currently cooling down and the ventilation is turned on.
- Turn On (Service): Enables the warehouse to start keeping the desired temperature.
- Turn Off (Service): Disables the warehouse to stop keeping the desired temperature.
- Set Temperature (Service): Sets a value for the desired temperature according to a passed parameter.
- Start Measurement (Service): Measures the CPU usage and memory usage of the host system for one minute and writes the data to a comma-separated values (CSV) file.
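The behaviour of the »Temp« variable and the relay amounts to a simple two-point control loop. The following Python sketch is an illustration under stated assumptions only: the actual agent runs on Windows IoT, and read_temperature/set_relay are hypothetical stand-ins for the ADC and GPIO access.

import time

def read_temperature() -> float:
    # Placeholder: read the ADC value and convert it to a temperature in °C.
    return 7.5

def set_relay(on: bool) -> None:
    # Placeholder: switch the Peltier element via the relay on a GPIO pin.
    pass

desired_temp = 6.0  # set via the »Set Temperature« service

for _ in range(60):                  # one minute of the periodic update
    temp = read_temperature()        # »Temp«, updated every second
    set_relay(temp > desired_temp)   # cool only while above the set point
    time.sleep(1.0)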
This agent is remote-controlled by the desktop computer, which acts as a client to trigger the provided services and measure the following characteristics.
Metrics. Since middleware software consumes the resources of the host computer and sometimes even runs on rather restricted computers, its performance behaviour is a quality feature. Additionally, the performance indicates the efficiency of the implementation and the amount of introduced computing overhead. As measures, we consider the Central Processing Unit (CPU) usage and memory usage of the framework on the host system whilst processing remote requests. Thus, for this metric the desktop computer frequently requests the status of the remote agent via the
RESTful web service, in order to simulate some network and processing load, and the agent tracks
its CPU usage and memory usage.
Another important measure for remote monitoring and control, and an indicator for the robustness of middleware software, is the response time of the network communication interface, since especially the status and live data are frequently requested by clients. We define the response time as the elapsed time span to retrieve any resource from the agent via the implemented remote communication interface. Again, for this metric the task of the desktop computer is to frequently request the status of the remote agent via the RESTful web service, but additionally to measure the elapsed time span between request and response. As a reference value, the response time of a ping request approximately indicates the overhead of the network.
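Such a measurement loop is easy to reproduce; in the following Python sketch the endpoint URL is hypothetical, as the paper does not give the resource paths of the RESTful interface.

import time
import requests

URL = "http://raspberrypi:8080/agent/status"  # hypothetical endpoint

samples = []
for _ in range(100):
    t0 = time.perf_counter()
    requests.get(URL, timeout=5)                      # status request
    samples.append((time.perf_counter() - t0) * 1e3)  # response time [ms]

print(f"mean response time: {sum(samples) / len(samples):.1f} ms")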
Results. The CPU usage and the memory usage followed an exponential trend and an increased spread when increasing the request frequency (see Fig. 5). Overall, both showed an expected and reasonable trend, since with an increased request frequency the processing workload increases as well. The exponential patterns indicate that all requests consumed the same amount of resources and thus are handled completely independently of each other. With regard to a possible maximum of 100 % CPU usage and 1 GB of RAM, the framework appears to be a lightweight implementation and leaves enough room for machine-dependent calculations.
Figure 5: Graphical evaluation of the performance behaviour of the proposed integration
framework for different loads: CPU usage (left), memory usage (right)
The response time of a ping and the response time of a status request to the remote communication interface stayed stable when increasing the request frequency and showed a low level of variance (see Fig. 6). In a direct comparison, the majority of the status requests took about 35.22 ms to 38.39 ms longer than a ping request, which can be considered the computational overhead of the framework. Overall, the response time of a status request of roughly 40 ms is sufficient for responsive applications, even if no real-time data exchange is possible. A further characteristic is that the response time stayed stable and showed a low spread for varying loads, indicating reliable and fully scalable communication.
Figure 6: Graphical evaluation of the response time of the implemented remote communication
interface for different loads: status request (left), ping (right)
Summary and Conclusion
In the course of Industry 4.0, production systems have to face new challenges regarding the flexibility of process chains, the exchange and traceability of data, and the interconnection of machines. In order to address the increasing demand for flexible machine interfaces and integration solutions, we presented an approach for a modular, multi-platform integration framework. Predefined data structures and functionalities support the integrator in setting up machine-dependent middleware software with service-oriented interfaces. As an example, the implementation of the framework on a Raspberry Pi for integrating and controlling a cooled warehouse for tubes over the network was illustrated and evaluated. First results indicate an efficient performance behaviour of the framework implementation and a reliable and scalable remote collaboration.
This concept drastically reduces the complexity of integrating machines into production networks and systems. Moreover, the universal, agent-based model ensures loosely coupled collaboration and increases the flexibility of process chains due to the service-oriented approach. The described cooled tube warehouse is already integrated into the service-oriented control software of the automated platform for the cultivation of stem cells presented by Kulik et al. [10]. In the future, the scalability of the framework has to be validated by interconnecting and controlling numerous machines and also by equipping production networks of various domains. Furthermore, enhancements like an OPC-UA remote communication interface, plausibility checks, and a result delivery scheme for services would increase the versatility and interoperability of the framework.
References
[1] L. K. Duclos, R. J. Vokurka, R. R. Lummus, (2003) "A conceptual model of supply chain
flexibility", Industrial Management & Data Systems, Vol. 103 Issue: 6, pp.446-456
[2] T. Sauter, The continuing evolution of integration in manufacturing automation, IEEE
Industrial Electronics Magazine, vol. 1, no. 1, pp. 10-19, Spring 2007.
[3] N. V. Truong and D. L. Vu, Remote monitoring and control of industrial process via wireless
network and Android platform, 2012 International Conference on Control, Automation and
Information Sciences (ICCAIS), Ho Chi Minh City, 2012, pp. 340-343.
[4] A. Bratukhin, Y. K. Penya, T. Sauter, Intelligent Software Agents in Plant Automation,
Proceedings of the 3rd International NAISO Symposium on Engineering of Intelligent Systems, pp. 77-83, 2002.
[5] Sotec Soft- & Hardware-Engineering, CloudPlug, https://www.sotec.eu/en/products/cloudplug,
Accessed 2017-03-24.
[6] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari and M. Ayyash, Internet of Things: A
Survey on Enabling Technologies, Protocols, and Applications, in IEEE Communications Surveys
& Tutorials, vol. 17, no. 4, pp. 2347-2376, Fourthquarter 2015.
[7] J. M. Mendes, P. Leitao, A. W. Colombo and F. Restivo, Service-oriented control architecture
for reconfigurable production systems, 2008 6th IEEE International Conference on Industrial
Informatics, Daejeon, 2008, pp. 744-749.
[8] C. Pautasso, O. Zimmermann, and F. Leymann, Restful web services vs. "big"' web services:
making the right architectural decision, In Proceedings of the 17th international conference on
World Wide Web (WWW '08), ACM, New York, USA, 2008, 805-814.
[9] StatCounter, Top 7 desktop OSs on July 2016, http://gs.statcounter.com/#desktop-os-ww-monthly-201607-201607-bar, Accessed 2017-03-28.
[10] M. Kulik, J. Ochs, N. König, and R. Schmitt, Automation in the context of stem cell production – where are we heading with Industry 4.0?, Cell Gene Therapy Insights 2016, 2(4), 499-506.
Success Factors for the Development of Augmented Reality-based
Assistance Systems for Maintenance Services
Moritz Quandt (1,a), Abderrahim Ait Alla (1,b), Lars Meyer (1,c) and Michael Freitag (1,2,d)*
(1) BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen, Hochschulring 20, 28359 Bremen, Germany
(2) University of Bremen, Faculty of Production Engineering, Badgasteiner Straße 1, 28359 Bremen, Germany
(a) qua@biba.uni-bremen.de, (b) ait@biba.uni-bremen.de, (c) mer@biba.uni-bremen.de, (d) fre@biba.uni-bremen.de
* Submitted by: Prof. Bernd Scholz-Reiter
Keywords: Augmented reality, Maintenance, Assistance system
Abstract. The growing technical complexity and the high variant diversity of installed components pose great challenges for service technicians in the maintenance of industrial facilities. Mobile assistance systems can support technicians directly in the work process by supplying contextual information on tasks and technical components. Augmented Reality is a suitable approach due to the possibility of enhancing the technicians' field of vision with digital contextual information.
This paper provides an approach for the development of an Augmented Reality-based assistance system for the maintenance of complex infrastructures. The authors derive criteria for the successful development of a work-process-related Augmented Reality solution that are transferable to different fields of application. The identified criteria can be assigned to different areas (hardware, software, field of application, and data) and serve as a guideline for the development of Augmented Reality-based assistance systems for complex applications in the context of maintenance.
Introduction
In the course of Industry 4.0, technical systems are evolving into cyber-physical systems (CPS) that combine physical processes with the computation of embedded sensors and actuators [1]. Thus, predictive maintenance strategies become applicable that detect signs of failures based on statistical analysis or on continuous or periodic monitoring of the conditions [2]. Due to the high performance of mobile devices, these approaches can be implemented as mobile applications. At the same time, the increasing complexity of technical systems leads to higher qualification requirements for technicians who conduct maintenance tasks. Besides technical maintenance tasks, service technicians have to fulfil a rising share of tasks outside their particular area of expertise, e.g. the configuration of complex control systems [3]. Additionally, service technicians have to deal with a rising variability of system components due to the highly dynamic development of new technical components, for example in the area of building automation [4]. The provision and analysis of real-time system data, the complexity of the technical system, and the variety of technical components pose great challenges for service technicians in fulfilling their maintenance tasks. Therefore, mobile assistance systems can provide service technicians with the context-related information needed to fulfil a defined maintenance task and serve as a guide through a complex work process [5]. Mobile devices, e.g. smartphones and tablets, are widespread and offer new opportunities for networked mobile solutions in the context of the Internet of Things (IoT) by applying mobile communication standards and identification technology. In this context, Augmented Reality (AR)-based solutions have the potential to support complex maintenance processes [6]. The technology allows displaying additional virtual information in the field of view of
a service technician, without losing sight of reality. Moreover, the usage of a smart data glass as an
assistance system enables the technicians to work hands-free [7]. A central challenge for providing
such an assistance system is the work environment of mobile maintenance teams. Depending on the
maintenance task, the service technicians are faced with rough work conditions and high physical
demands. In many cases, the technicians have to wear personal protective equipment.
In this paper, the authors identify success factors for the development of AR-based assistance
systems for mobile service technicians in the context of operation and maintenance. This is achieved
by systematically analysing the work process and the impact thereof on technical specifications of
the proposed solution.
Augmented Reality
Augmented Reality allows a combination of the real and the virtual world in real time. AR is the extension of reality through a computer-assisted overlay of virtual objects in the user's field of vision, rather than the complete replacement of the real world, as is the case with Virtual Reality [8]. The objective of AR is the enhancement of human perception by providing context-sensitive information in the form of virtual objects. All devices equipped with a camera, a GPS receiver, and enough computing power to process the real-time data (images or geo-information) are the platform and prerequisite for implementing AR applications [9]. The following table shows the features of current hardware applicable for AR solutions.
Table 1: Features of current AR hardware (diagram based on [10] and [11]).
                       | Hands-free | 3D impression of virtual objects | Everyday object | Large display | Display of information in field of view | High processing power
Monocular data glasses | X          |                                  |                 |               | X                                        |
Binocular data glasses | X          | X                                |                 |               | X                                        |
Tablets                |            |                                  | X               | X             |                                          | X
Smartphones            |            |                                  | X               |               |                                          | X
Currently, the available AR techniques for mobile devices to augment reality with virtual objects can be classified into two main forms: position-based and marker-based [12]. In the position-based technique, geo-information is used to display the content at defined positions. Limitations of this technique are inaccurate or missing geo-information, since the synchronization between the real and the virtual world must be achieved in the shortest possible time interval [13]. Therefore, the position-based technique is not suitable for applications that require a high positioning accuracy or for indoor applications where no geo-information is available. In this case, additional hardware components are needed to implement an indoor navigation, e.g. WiFi, beacons, or RFID.
When using the marker-based technique, the camera of the device processes all captured images in real time and displays the virtual contents when the camera detects a marker, e.g. a picture, characteristic object features, or a QR code. On the one hand, additional effort can arise for equipping the application area with the selected markers. On the other hand, the marker-based technique enables navigation solutions independent of geo-information [14].
Maintenance for Industry 4.0
Maintenance as a discipline has developed enormously over the last decades. According to [15], the development of maintenance has gone through three phases or generations. During the first generation (from the beginning of industrialization in the 19th century until 1960), little attention was paid to maintenance. Only corrective maintenance activities were carried out in case of failures. In the second generation (1960 - 1980), increased demand for goods led to increased mechanization and complexity of the plants. This led to a clear focus on downtime, which had a serious impact on production. Consequently, the concept of preventive maintenance in the form of repairs at fixed intervals was proposed. Of course, this approach increased maintenance costs, leading to the development and use of maintenance planning and control systems. With the increasing complexity of production facilities, the expectations for maintenance have also increased. There was therefore a great need for the development of new techniques and a new adaptation of maintenance to the new requirements, which pushed maintenance into the third generation (1980 - today). The investigation of risk has become very important, and environmental and safety issues are a top priority. New concepts have emerged: state-of-the-art monitoring, just-in-time production, quality standards, expert systems, and condition-oriented maintenance, to mention just a few. The rise of Industry 4.0 has pushed the maintenance community to think about the fourth maintenance generation. Here, the focus is on networked processes providing fast and reliable data for the information and control of maintenance indicators. The introduction of new technologies like AR will be another step towards more flexible maintenance. In the vision of Industry 4.0, Augmented Reality will provide a significant contribution to the development of the digital and networked factory in the maintenance context [16]. In this case, CPS represent the basis for the establishment of networked machines, storage systems, sensors, and IT systems. Indeed, the AR system can be considered a CPS that interacts digitally with various IT systems, sensors, and machines with the objective of optimising decision support in maintenance activities. As a result, knowledge and information will be available in a decentralized way at the place of maintenance [17].
Augmented Reality for maintenance
Referring to [18], a successful implementation of an on-site maintenance assistance system depends on the following processes: 1) finding the components to maintain (target), and 2) performing the maintenance activity. Thereby, the following requirements for mobile AR applications in maintenance are derived: i) indoor navigation and orientation, and ii) support during the performance of maintenance tasks, including the documentation of work.
In the literature, several scopes of application of AR for maintenance have been proposed. Particularly for training purposes, AR is considered a promising technology that can offer new possibilities for developing teaching and learning platforms [19]. In this context, a study on AR design and application for educational purposes is given in [20]. The benefit of AR depends on the technician's skill level. Maintenance tasks are diverse and demand extensive support documentation. Not all technicians are able to comprehend and perform the advised maintenance tasks based on the provided documents. As a result, the deployment of an AR solution requires continuous training of technicians [21]. The authors of [19] present current research on training service technicians by using AR applications.
According to [22], about 50 % of the on-site maintenance time is spent on localizing and navigating to inspection targets. Therefore, indoor navigation approaches are integrated into existing AR-based solutions for the support of maintenance tasks. In this context, [18] proposed a natural-marker-based AR framework that can support facility maintenance operators digitally in daily on-site maintenance activities. In this case, existing signs in the application area represent natural markers.
Success Factors for the Development of Augmented Reality Assistance Systems for On-site Maintenance Measures
For successfully implementing an AR-based assistance system to support service technicians in on-site maintenance operations, attention should be paid to the following factors. We identified these success factors based on expert knowledge and requirements from different industry sectors, e.g. wind energy generation and heating and air conditioning systems. These sectors face high efforts for their maintenance operations due to the execution of maintenance measures on different sites. In particular, we conducted semi-structured expert interviews with specialists from maintenance management, IT work preparation, and service technicians from a maintenance company for onshore wind turbines to develop criteria from the user's side. Furthermore, we performed an Analytic Hierarchy Process with IT and maintenance specialists to develop and assess hardware requirements. Moreover, the experiences acquired from the development and pilot implementation of an AR assistance system for service technicians for onshore wind turbines have been considered [23]. In this case, several tests were conducted directly in the field of application, where the service technicians were provided with system prototypes. From these field tests, we derived practical requirements for the development of an AR-based assistance system. Additionally, we analysed the existing literature on Augmented Reality solutions for application-specific experience values and assessed their relevance in the maintenance context. In summary, the consideration of the identified requirements constitutes the foundation of a successful implementation of AR in on-site maintenance. Figure 1 summarizes these success factors based on a typical on-site maintenance workflow.
Figure 1: Criteria for hardware and software development. The underlying on-site maintenance workflow is: data transfer from the enterprise system to the mobile service device; navigation to the next task based on the maintenance protocol; perform the maintenance task and document the maintenance data; repeat while open tasks remain; finally, transfer the documented data from the mobile device back to the enterprise system. Along this workflow, the success factors are grouped into hardware (technical possibilities and limitations of AR hardware; essential hardware requirements; further influence factors on hardware selection), software (interaction patterns; form of presentation; interfaces; navigation; user-centred development; network connection), field of application (consideration of work process; impact of work environment; provision of training content), and data (data security; data processing).
Hardware. The following factors are related to current technical possibilities and limitations of AR
hardware, essential hardware requirements as well as influence factors on the hardware selection.
Technical possibilities and limitations. Even though Augmented Reality is not a completely new technology, hardware for Augmented Reality applications has recently become more efficient, lighter, and more affordable. Constant technical enhancements as well as new developments can be expected due to the dynamic market for Virtual and Augmented Reality hardware and software, which is predicted to reach a volume of 23 billion to 182 billion US$ in 2025 [24]. Thus, the market of Augmented Reality hardware should be closely monitored to overcome current deficiencies, e.g. the limited field of vision or missing industrial suitability [25].
Essential hardware requirements. For industrial applications, there are essential requirements that have to be met by the AR hardware. Besides the constant striving for more accurate, lighter, faster, simpler, and cheaper systems [7], a sufficient battery lifetime, the resolution of camera and display, robustness, sufficient storage capacity, compliance with safety standards, and the possibility to implement interfaces to enterprise systems and the operating systems used have to be considered.
Further influence factors on hardware selection. Based on expert interviews with end users and practical experience, there are several further factors that need to be considered for the hardware selection. The selected hardware has to be highly robust and fail-safe for an application in the context of maintenance. Furthermore, the technicians must not be restricted in their freedom of movement by using the hardware in the work process, for instance during industrial climbing activities in the maintenance of wind energy turbines. Therefore, the existing equipment of the service technicians has to be considered in the development of an AR assistance system.
Software. The successful software development for an AR-based assistance system for on-site maintenance measures depends on the following factors.
Interaction patterns. Suitable interaction patterns for AR-based assistance systems depend on the possibilities of the selected hardware, the work processes, and the work environment. For example, the sole use of smart glasses makes manual data input uncomfortable. Another example is a technician wearing gloves in the work process, which needs to be considered for the interaction with the hardware. Furthermore, completely new challenges arise from the interaction of the user with virtual objects in the field of view.
Form of presentation. With regard to displaying additional virtual information extending the actual view of the user, factors such as readability, position in the field of view, and contrast with the background play a role. Moreover, the displayed information should be limited to avoid overextending the user.
Interfaces. Providing interfaces to enterprise systems is a basic requirement for exchanging maintenance data. Furthermore, direct access to historical system information can be helpful to compare system statuses.
Navigation. Before carrying out maintenance tasks, the technicians have to find and access the components to inspect. Navigation has been gaining importance in on-site maintenance. Since position data is not available in most on-site facilities, other technologies have been proposed in this regard. Nowadays, all available AR hardware is suitable for implementing an indoor navigation [9]. However, for a successful indoor navigation, depending on the adopted technology, a database is required that contains the positions of the hotspots (in the case of a WiFi indoor navigation system), beacons (Bluetooth navigation system), RFID tags, or lamps in the case of visible light communication technology. Furthermore, for an accurate position, additional hardware (hotspots, beacons, etc.) is needed. Markers can also be used for indoor navigation. In this case, the position data of the marker has to be stored in a database as a point of reference. For a successful implementation, digital building data has to be available. This includes the virtual 3D models as well as the digital building plans and the positions of natural features.
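As a minimal illustration of such a reference database, marker IDs can simply be mapped to stored positions. The following Python sketch uses invented identifiers and coordinates; the marker detection itself would be done by the AR toolkit.

# Marker position database as point of reference for indoor navigation.
marker_positions = {
    "QR-017": {"floor": 2, "x": 12.4, "y": 3.8},
    "sign-exit-A": {"floor": 2, "x": 0.0, "y": 7.5},
}

def locate(detected_marker: str):
    # Resolve a detected marker to the technician's reference position.
    return marker_positions.get(detected_marker)

print(locate("QR-017"))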
User-centred development. Besides the consideration of the work process of the service technicians, the future users should be intensively involved in the development process. By applying rapid prototyping approaches, the technicians can understand the technological possibilities and actively participate in finding new functionalities for the assistance system. However, the individual experience of the user with the applied technology as well as the discipline-specific expertise has to be taken into account [26].
Network connection. Especially with regard to on-site maintenance, a stable network connection cannot be guaranteed. Thus, the developed AR solution has to operate faultlessly in case of a network failure. In some application scenarios, e.g. wind energy turbine maintenance, a reliable network connection is not available at all. In this case, the maintenance data needs to be transferred to the mobile devices before the maintenance operation, and the work process data is entered offline. This can influence the hardware selection with respect to storage capacity.
Field of application. There are several factors that are directly dependent on the field of application. The work process and the work environment have to be considered, as well as the provision of suitable training content for the field of work.
Consideration of work process. The work process of the service technicians plays a very important role in the development of the assistance system. In the field of maintenance, the tests and inspections of a maintenance measure are defined, as is the required technical documentation. Therefore, an adequate development can only be conducted after analysing the existing work process with regard to maintenance tasks, navigation demands, documentation, interaction between technicians, etc.
Impact of work environment. Depending on the application, the AR hardware is exposed to rough environmental conditions. The hardware has to be protected against dust, dirt, and moisture. Furthermore, the hardware must be protected from physical damage, e.g. by dropping or bumping. Especially for the usage of optical and video see-through displays, the lighting conditions are very important due to their unsuitability for outdoor use [25].
Provision of training content. AR applications offer the possibility to provide additional training content for service technicians in the work process. These tutorials have to be retrievable in the respective work context and can be provided, e.g., as videos, images, or text documentation. By providing step-by-step manuals for specific maintenance tasks, even less experienced workers can perform complex operations [27].
Data. Besides the consideration of data security requirements for mobile systems, AR poses new
challenges for the provision of utilisable data to display.
Data security. On-site maintenance teams have to access sensitive corporate data on mobile
devices. Thus, this data needs to be transferred to these mobile devices, processed and enhanced
during the maintenance process and transferred back to the enterprise systems after completion of
the maintenance measure. In this process, the requirements of data security for data transfer and data
repository have to be fulfilled. Furthermore, the hardware selection for the mobile devices has an
influence on data security depending on the applied operating system.
Data processing. The existing maintenance documents for specific maintenance measures
usually cannot be used unmodified for Augmented Reality applications. For example, when
developing an assistance system using smart glasses, the information displayed in the field of vision
must be prepared with regard to the limited field of view and the processable data formats.
Summary and Outlook
In this paper, we identified success factors for the deployment of AR-based assistance systems
for complex on-site maintenance activities. The results show that the identified success factors
span various interdisciplinary fields: hardware, software, field of application and data. The
identified success factors are applicable to other complex industrial fields of application in the
context of maintenance that show a certain degree of similar requirements, e.g. the marine or
process industry. By transferring these success factors to other fields of application, the importance
of the individual factors can be assessed for the particular application.
Currently, the development towards predictive maintenance approaches enables direct access to
machine data, sensor data etc. This results in the possibility of providing large quantities of data to
the service technicians directly in the work process using CPS to improve the decision-making
process and to accelerate the failure analysis. It has been shown that AR is a well-suited technology
to fulfil these objectives of Industry 4.0: AR offers the possibility of visual repair instructions and
the interactive presentation of industrial products and systems. Displaying additional virtual
information instantly at the maintenance location, with regard to the component to be repaired, is a
decisive argument for the usage of AR. This includes the immediate display of real-time machine
data.
Future work will involve the development of a general framework for industrial assistance
systems based on AR as well as the integration and display of real-time system data in AR
applications. This includes a comprehensive consideration of disturbance values that have an
influence on system development.
Acknowledgement
This work is part of the project “AR Maintenance System”, funded by the German Federal Ministry
of Economic Affairs and Energy (BMWi) under the reference number 16KN021724.
References
[1] E.A. Lee, Cyber Physical Systems: Design Challenges, In: 11th IEEE International
Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing (ISORC),
Orlando (2008).
[2] M. Freitag, S. Oelker, M. Lewandowski, R. Murali, A Concept for the Dynamic Adjustment of
Maintenance Intervals by Analysing Heterogeneous Data, Applied Mechanics and Materials.
Progress in Production Engineering, Trans Tech Publications Inc, Pfaffikon, Switzerland, (2015)
507-515.
[3] D. Gorecky, M. Schmitt, M. Loskyll, D. Zühlke, Human-Machine-Interaction in the Industry
4.0 Era, Industrial Informatics (INDIN), 2014 12th IEEE International Conference on (2014) 289-294.
[4] K. Matyas, Instandhaltungslogistik, 5th edition, Hanser, München, 2013.
[5] A. Martinetti, M. Rajabalinejad, L. van Dongen, Shaping the Future Maintenance Operations,
The 5th International Conference on Through-life Engineering Services (2017) 14-17.
[6] M. Neges, M. Wolf, M. Abramovici, Secure Access Augmented Reality Solution for Mobile
Maintenance Support Utilizing Condition-Oriented Work Instructions, The 4th International
Conference on Through-life Engineering Services (2015) 58-62.
[7] S.K. Ong, M.L. Yuan, A.Y.C. Nee, Augmented reality applications in manufacturing: a
survey, International Journal of Production Research, 46:10 (2008) 2707-2742.
[8] A.Y.C. Nee, S.K. Ong, Virtual and augmented reality applications in manufacturing, IFAC
Proceedings Volumes, 46 (2013) 15-26.
[9] W. Barfield, Fundamentals of wearable computers and augmented reality, second ed., CRC
Press, Boca Raton, 2016.
[10] G. Kipper, J. Rampolla, Augmented Reality: An Emerging Technologies Guide to AR,
Syngress, Rockland, 2012.
[11] R. Wang, Z. Geng, Z. Zhang, R. Pei, X. Meng, Autostereoscopic Augmented Reality
Visualization for Depth Perception in Endoscopic Surgery, Displays (2017) In Press.
[12] M. Dunleavy, C. Dede, Augmented reality teaching and learning. In M. Spector, M.D. Merrill,
J. Elen, M.J. Bishop (Eds.), Handbook of research on educational communications and technology,
Springer, New York, 2014, pp. 735-745.
[13] W. Höhl, D. Broschart, Augmented Reality in Architektur und Stadtplanung, In J. Strobl, T.
Blaschke, G. Griesebner, B. Zagel (Eds.), Angewandte Geoinformatik 2014, VDE Verlag, Berlin,
2014, pp. 638-647.
[14] U. Rehman, S. Cao, Augmented Reality-based Indoor Navigation: A Comparative Analysis of
Handheld Devices vs. Google Glass, UWSpace (2017), 1-12.
[15] F. Iske, 30 Jahre Entwicklung der Instandhaltung – von der ausfallorientierten Instandhaltung
zum gemeinsamen TPM und RCM. In J. Reichel, G. Müller, J. Mandelartz (Eds.), Betriebliche
Instandhaltung, Springer, Berlin Heidelberg, 2009, pp. 51-74.
[16] M. Quandt, T. Beinke, A. Ait Alla, M. Lütjen, M. Freitag, F. Bischoff, B. Nguyen, A. Issmer,
Augmented Reality für Prozessdurchführung und -Dokumentation - Vernetzung von Mensch und
Maschine in der Instandhaltung von Windenergieanlagen, Industrie 4.0 Management 33 (2017) 52 –
56.
[17] C.M.L. Khalid, M.S. Fathi, Z. Mohamed, Integration of cyber-physical systems technology
with augmented reality in the pre-construction stage, Technology, Informatics, Management,
Engineering, and Environment (TIME-E), (2014) 151-156.
[18] C. Koch, M. Neges, M. König, M. Abramovici, Natural markers for augmented reality-based
indoor navigation and facility maintenance. Automation in Construction, 48 (2014) 18-30.
[19] N. Gavish, T. Gutiérrez, S. Webel, J. Rodríguez, M. Peveri, U. Bockholt, F. Tecchia,
Evaluating virtual reality and augmented reality training for industrial maintenance and assembly
tasks, Interactive Learning Environments, 23 (2015) 778-798.
[20] H.K. Wu, S.W.Y. Lee, H.Y. Chang, J.C. Liang, Current status, opportunities and challenges of
augmented reality in education, Computers & Education, 62 (2013) 41-49.
[21] A. Sanna, F. Manuri, F. Lamberti, G. Paravati, P. Pezzolla, Using handheld devices to support
augmented reality-based maintenance and assembly tasks. IEEE International Conference on
Consumer Electronics (ICCE), (2015) 178-179.
[22] S. Lee, O. Akin, Augmented reality-based computational fieldwork support for
equipment operations and maintenance, Autom. Constr. 20 (2011) 338-352.
[23] Information on https://ar-maintenance.de/
[24] H. Bellini, W. Chen, M. Sugiyama, M. Shin, S. Alam, D. Takayama, Virtual and Augmented
Reality – Understanding the race for the next computing platform, Goldman Sachs Report, Profiles
in Innovation (2016).
[25] G. Dini, M. Dalle Mura, Application of Augmented Reality Techniques in Through-life
Engineering Services, Procedia CIRP, 38 (2015) 14-23.
[26] A. Cooper, R. Reimann, D. Cronin, About Face 3 – The Essentials of Interaction Design, first
ed., Wiley, Indianapolis, 2007.
[27] D. Tatić, B. Tešić, The application of augmented reality technologies for the improvement of
occupational safety in an industrial environment, Computers in Industry 85 (2017), 1-10.
Energy efficiency through a load-adaptive building automation in
production
Beiyan Zhou1,a, Thomas Vollmer1,b and Robert Schmitt2,c
1 Steinbachstr. 17, 52074 Aachen, Germany
2 Manfred-Weck-Haus 219, 52074 Aachen, Germany
a beiyan.zhou@ipt.fraunhofer.de, b thomas.vollmer@ipt.fraunhofer.de, c R.Schmitt@wzl.rwth-aachen.de
Keywords: Energy efficiency, Production planning, Weather forecasts, Integration, Indoor Climate,
Demand oriented control
Abstract. The research project “BIG”, which is funded by the German Federal Ministry of Education
and Research, aims to develop a load-adaptive building automation system connected with the
production planning and control system as well as weather forecasts. It intends to reduce the energy
consumption of supporting processes during production, such as heating, air conditioning,
ventilation and lighting. Compared with current building automation systems, “BIG” pursues a
software solution that derives the actual requirements for the indoor climate under consideration of
internal and external thermal and light conditions and controls the corresponding building
infrastructure adaptively.
Through the connection with production planning and control systems (PPC), information
such as shift plans, the number of employees and the allocation of employees is derived and
interpreted into the actual indoor climate demand for the different areas of the whole production
hall. Based on the machine utilization plan and the power outputs from the PPC, the future thermal
influence of production activities can be determined. Simultaneously, external thermal and light
effects from the environment can be predicted and utilized through the integration of weather
forecasts.
Introduction
In the age of “Industry 4.0”, the digitalization and interconnection of various production systems
enable a full range of solutions to improve productivity, on-time delivery and product quality or to
organize customized production in a dynamic environment. However, with increasing energy
prices and growing green awareness, energy management in production cannot be ignored and
thus remains in the focus of research. Most research and available solutions pursue energy-efficient
machine tools, stable control loops or new smart grid systems. Although it accounts for 29% of
total industrial energy consumption, the energy usage of lighting, heating, ventilation and air
conditioning (HVAC) has not been well investigated [1].
Current building automation systems solely deploy a coupled network of sensors (to measure
environmental parameters as well as room occupancy) and implement control algorithms to
maintain a set indoor climate [2]. However, the connection of various systems and the aggregation
of data from several sources under the trend of “Industry 4.0” reveal an even higher saving potential
in peripheral energy management [3].
Recently, the research project “BIG” has been conducted by Fraunhofer IPT, aiming to reduce
peripheral energy consumption in production [4]. In combination with production planning and
control systems (PPC), the actual requirements of the indoor climate for different production areas
can be determined. Future weather parameters are obtained through the integration of weather
forecasts [5]. By analysing the collected data and deploying model predictive control methods, all
influence factors of production buildings will be described, diagnosed and predicted. Hence,
building automation systems (actuators and motors) will be controlled accordingly. This facilitates
a demand-oriented and predictive building automation which provides a controlled, stable
environment for production activities and at the same time minimizes the required peripheral
energy. Figure 1 illustrates the project’s overall concept.
Figure 1: Overall structure of a demand-driven and predictive building automation
The implementation of this load-adaptive building automation consists of three steps: (1) a
dependency model indicates the qualitative and quantitative interactions of the collected parameters
(for example temperature, heating/cooling, ventilation, machine run time and sunshine); (2) a
software solution is developed by applying predictive control algorithms and connecting to PPC
systems and weather forecasts; (3) this software controls robust hardware components that execute
automation actions to adjust the indoor climate [6]. Based on current research studies, peripheral
energy savings of 30-40% are expected [7].
In this paper, the focus lies on the first step described above: a dependency model of this
adaptive control system that defines the scope of the data, the correlations and dependencies of the
data as well as the derivation of the actual demands of building automation. As the first essential
development of this project, it allows a demand-oriented control and lays the first stone for a
subsequent predictive control.
Methodology
Focusing on the first stage of this adaptive and predictive building automation, this section
provides an overview of the structure of the dependency model and gives insight into each
subsection’s approach. Figure 2 shows the methodology divided into four subsections.
Figure 2: Overview about research methodology (four subsections: requirements definition,
definition of data structure, definition of qualitative influences and derivation of a demand-oriented
control, grouped into the steps definition of a dependency model, development of a software
solution and realization of an adaptive control on hardware).
Requirements Definition of Building Automation. A workshop with experts from production,
building automation and meteorology, including an on-site audit in real production facilities, was
conducted in order to identify the requirements of building automation in the production
environment. Additionally, a dedicated survey of market-available functionalities for building
automation completes the definition of target climate indicators inside production shop floors.
Subsequently, a dedicated analysis of established regulations, standards and guidelines in the
relevant areas enables the quantification of the target parameters.
Definition of Data Structure. Thorough studies of physical principles, especially in the field of
thermodynamics, enabled the identification of the factors that determine the target parameters. All
listed factors are classified by their data source, e.g. production data, weather data, building data,
sensor data and actuator data (features that can be controlled by the building automation, such as
light switches or the motors of blinds).
Generation of dependency matrix. A further classification distinguishes the input factors between
variables and constants. Constants have been taken into account as boundary conditions for future
modelling. The remaining factors have been transferred into a dependency structure matrix.
Studying the physical relationships between variables and targets allows the qualitative influences
to be defined. Table 1 shows an exemplary dependency matrix. “+” stands for a positive correlation,
“-” stands for a negative correlation. The number of “+” or “-” signs expresses the degree of
correlation.
Table 1: Exemplary dependency matrix with qualitative correlation.

Inputs  | Target indicator 1 | Target indicator 2 | Target indicator 3
Input 1 | ++                 | --                 | ---
Input 2 | +                  | -                  | +++
Derivation of a demand-oriented Control Strategy. The actual demands of building automation are
derived from the occupation plan of the corresponding production areas as well as from the
production activities defined by the PPC systems. Depending on the negative or positive relations
between input factors and targets, strategic decisions are derived accordingly.
Results
This chapter shows the results obtained in the four steps presented in the Methodology section:
Requirements Definition of Building Automation. Several standards and industrial guidelines
such as ISO 16484 [8] and VDI 3813 [9] specify the main requirements of room control:
lighting, sun screening, room climate and air quality. Therefore, five indicators are defined as target
parameters to represent the environment quality: room temperature, illumination, humidity, CO2
concentration and toxic gas concentration. A broad study has been conducted to determine the
control ranges of the defined targets, involving EU directives [10], German laws [11], ISO
standards, DIN standards and other industrial guidelines. Additionally, taking into account expert
statements from the fields of facility management and production, the target parameters are
specified in Table 2. Within production facilities, temperature requirements are distinguished
between manufacturing areas and other logistical areas, while illumination requirements vary with
the tolerance requirements of the jobs. The maximum humidity depends on the room temperature.
All listed requirements will be integrated into the adaptive control loop as target values.
Table 2: Table of target indicators for building automation.

Room temperature | Canteens, first aid rooms, rest rooms and sanitary rooms: minimum temperature 21 °C. General metal working environment: minimum temperature 17 °C, maximum temperature 26 °C, optimal temperature 21 °C.
Indoor illumination | Rough and medium machining work with tolerance > 0.1 mm: minimum illuminance 300 Lux. Fine machining work with tolerance < 0.1 mm: minimum illuminance 500 Lux. Quality assurance: minimum illuminance 750 Lux. Precision and micro-mechanics: minimum illuminance 1000 Lux.
Room humidity | Room temperature +20 °C: maximum humidity 80%. +22 °C: maximum humidity 70%. +24 °C: maximum humidity 62%. +26 °C: maximum humidity 55%.
CO2 concentration | Maximum concentration: 1500 ppm.
Air quality | Toluene concentration < 3 mg/m³. Dichloromethane concentration < 2 mg/m³. Carbon monoxide concentration < 60 mg/m³ (1/2 h); < 15 mg/m³ (8 h).
The other major goal of a load-adaptive building automation is minimizing energy costs. Hence,
energy consumption is the sixth target to be taken into account.
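Because the admissible humidity in Table 2 is only tabulated for a few temperatures, a control loop needs a continuous limit between these points. The following minimal sketch uses linear interpolation between the tabulated values; the interpolation itself is our assumption, and the project may use a different model.

```python
# Tabulated pairs from Table 2: (room temperature in °C, maximum relative humidity in %)
HUMIDITY_LIMITS = [(20.0, 80.0), (22.0, 70.0), (24.0, 62.0), (26.0, 55.0)]


def max_humidity(temp_c: float) -> float:
    """Linearly interpolate the admissible humidity limit between tabulated points."""
    points = sorted(HUMIDITY_LIMITS)
    if temp_c <= points[0][0]:
        return points[0][1]          # below the table: most permissive limit
    if temp_c >= points[-1][0]:
        return points[-1][1]         # above the table: strictest limit
    for (t0, h0), (t1, h1) in zip(points, points[1:]):
        if t0 <= temp_c <= t1:
            return h0 + (h1 - h0) * (temp_c - t0) / (t1 - t0)


print(max_humidity(23.0))  # 66.0, halfway between the 22 °C and 24 °C limits
```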
Definition of Data Structure. The key novelty of this project is the integration of additional data
from PPC systems and weather forecasts, which enables a prediction of the indoor climate under
both internal and external aspects (production activities and weather effects). Besides the sensor
data and actuator data of traditional building automation [12], building data, weather data and
production data from PPC systems are aggregated into a control model. Research on building
physics [13] and thermal building behaviour [14] enables the identification of the specific factors
from weather and building, e.g. radiation intensity, wind speed, ambient temperature, dimensions,
orientation, transmissivity of windows or thermal transmittance of walls. Data on order processing
and machine performance specify the occupation of production areas and the influence of
manufacturing activities. Table 3 lists the identified data that facilitate this control model.
Table 3: Table of input parameters.

Production data: working plan of each staff member; operating plan of the machines; planned output of each machine.
Weather data: ambient temperature; global radiation; humidity; wind speed; wind direction.
Building data: geometric position; floor plan; dimensions; building material; window transmittance; emissivity of walls; thermal transmittance of walls.
Sensor data: room temperature; illumination; indoor air humidity; CO2 concentration; indoor air quality.
Actuator data: heating power; air conditioning power; ventilation volume; percentage of sun shading; slat opening; lighting strength.
Generation of Interdependency Matrix. The challenge in achieving this adaptive building control is
to minimize the impact of uncontrollable variables on the targets. This requires a model that
describes the influences of both uncontrollable and controllable variables. On this basis, a
compensation strategy can be conducted to eliminate the negative influence of uncontrollable
variables by regulating controllable variables. Data originating directly from the actuators of
automation components, such as heating power, ventilation volume and lighting, are considered
controllable variables. Production data and weather data only describe environmental influences on
the building systems and thus can only be handled as uncontrollable data. Table 4 displays an
interdependency matrix between all variables and targets. The qualitative correlation between them
is analysed according to thermodynamics as well as heat and mass transfer equations. The impact
of ventilation on the room temperature depends on the ambient temperature; therefore, it is marked
with both “+” and “-”. Strategic decisions to control automation elements can be deduced based on
this matrix.
Table 4: Interdependency matrix with qualitative correlation. (Rows: production data, i.e. working
plan of each employee, occupation plan per machine and planned power output per machine;
weather data, i.e. ambient temperature, solar radiation, ambient humidity and wind speed; actuator
data, i.e. heating power, ventilation volume, air conditioning power, lighting, percentage of sun
shading and slat opening. Columns: energy usage, temperature, lighting, humidity, CO2
concentration and air quality. The individual “+”/“-” entries cannot be recovered from this text
version.)
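To illustrate how such a qualitative interdependency matrix can drive control decisions, the following sketch encodes the correlation strengths as signed integers (e.g. +3 for “+++”) and selects, for a deviation of one target, the controllable variable with the strongest counteracting correlation. The concrete entries and variable names are illustrative assumptions, not the project's validated matrix.

```python
# Signed correlation strengths: +3 means "+++", -3 means "---" (illustrative values)
MATRIX = {
    "ambient temperature": {"temperature": +3, "energy usage": -1},
    "solar radiation":     {"temperature": +2, "lighting": +3},
    "heating power":       {"temperature": +3, "energy usage": +3},
    "air conditioning":    {"temperature": -3, "energy usage": +3},
    "sun shading":         {"temperature": -1, "lighting": -3},
}
CONTROLLABLE = {"heating power", "air conditioning", "sun shading"}


def compensate(target: str, deviation: float) -> str:
    """Pick the controllable variable that counters a target deviation.

    deviation > 0 means the target is above its set point, so an actuator with
    a negative correlation on that target is needed (and vice versa).
    """
    wanted_sign = -1 if deviation > 0 else +1
    candidates = [
        (abs(MATRIX[v][target]), v)
        for v in CONTROLLABLE
        if MATRIX[v].get(target, 0) * wanted_sign > 0
    ]
    if not candidates:
        return "no suitable actuator"
    return max(candidates)[1]  # strongest counteracting actuator


print(compensate("temperature", +2.0))  # room too warm -> "air conditioning"
```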
Derivation of a demand-oriented Control Strategy. Every ERP or MES system provides
information regarding each order with its timeline, employee assignments and machine utilization
[15]. A typical daily production organization does not use all production areas at the same time.
Therefore, only areas where actual production tasks are carried out will be controlled accordingly
by the building automation, which leads to the demand-oriented control. By combining the
information from ERP or MES with the factory layout, the values of the production planning-dependent
target variables can be derived.
The concrete derivation of the control strategy is illustrated by the following case study of one
validation partner in this project, a tooling supplier for the textile industry. Its metal processing
shop floor is divided into six sectors, as displayed in Figure 3. Each area can be controlled by an
individual building controller.
Figure 3: Factory layout of an exemplary factory with six control sectors (turning area, fine
machining area, hardening area, balancing area, assembly area and warehouse; the sectors contain
turning machines A-D, milling machines A/B, grinding machines A/B, hardening machines A/B, a
balancing machine and a mounting apparatus).
The typical manufacturing tasks include turning, milling, balancing, hardening and assembly
processes, for both series products and made-to-order parts. A three-day production plan
originating from the company-specific planning system is shown in Figure 4. With a data collection
program, the data regarding employees and machines are extracted and interpreted into a demand
sheet for the defined sectors.
Table 2 only defines target parameters for a general production environment. Additional
application-specific requirements for the indoor climate must also be taken into account. In a
workshop, this validation partner stated a minimum temperature of 12 °C without occupation and a
minimum humidity of 62% in order to maintain the part quality. Considering both the legal and the
customer-specific requirements, Figure 5 and Figure 6 illustrate the demands for room temperature
and illumination for the six listed areas separately. As seen in both figures, production areas will be
strictly controlled to reach the desired temperature of 21 °C where and when manufacturing
activities actually take place. The remaining areas only have to meet the minimum requirement of
12 °C. Based on the developed interdependency matrix, strategic decisions can be derived: heating
or air conditioning will be activated only where and when manufacturing activities take place. The
illumination control additionally considers the type of production activity. During fine machining,
balancing and assembly, employees require 500 Lux to accomplish their tasks; the remaining tasks
require only 300 Lux. Altogether, this facilitates a demand-oriented control, showing at least 40%
energy saving potential compared with the conventional constant control for the same three days.
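The mapping from MES occupancy data to sector set points described above can be expressed compactly. The following sketch uses simplified data structures and the case-study values (21 °C/12 °C, 500/300 Lux) purely for illustration; it is not the validation partner's actual data collection program.

```python
from dataclasses import dataclass

# Illustrative target values from the case study
T_OCCUPIED, T_IDLE = 21.0, 12.0      # °C
LUX_FINE, LUX_ROUGH = 500, 300       # Lux, depending on the task type

FINE_TASKS = {"fine machining", "balancing", "assembly"}


@dataclass
class Slot:
    """One occupancy interval of a sector, as extracted from the MES."""
    sector: str
    start_h: int
    end_h: int
    task: str  # e.g. "fine machining", "turning"


def setpoints(plan: list, sector: str, hour: int):
    """Return (temperature, illuminance) set points for a sector and hour."""
    for slot in plan:
        if slot.sector == sector and slot.start_h <= hour < slot.end_h:
            lux = LUX_FINE if slot.task in FINE_TASKS else LUX_ROUGH
            return T_OCCUPIED, lux
    return T_IDLE, 0  # unoccupied: minimum temperature, lights off


plan = [Slot("turning area", 8, 16, "turning"),
        Slot("assembly area", 10, 14, "assembly")]
print(setpoints(plan, "turning area", 9))    # (21.0, 300)
print(setpoints(plan, "assembly area", 15))  # (12.0, 0)
```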
Figure 4: Data derived from the MES at a validation partner (three-day schedule of jobs 1-8, i.e.
series cylinders 1 and 2 as well as made-to-order parts, on the turning, milling, grinding, balancing
and hardening machines between 13.3. and 20.3.).
Figure 5: Temperature requirements with demand-oriented control (per-sector temperature profiles
for 15.03.-17.03.2017 with set points of 21 °C during occupied periods and 12 °C otherwise; the
difference between the conventional constant control and the demand-oriented profile marks the
energy saving potential).
Figure 6: Illumination requirements with demand-oriented control (per-sector illumination profiles
between 0 and 500 Lux for 15.03.-17.03.2017; the difference between the conventional constant
control and the demand-oriented profile marks the energy saving potential).
Summary
Compared to traditional building automation, additional data are collected and coupled with PPC
systems and weather forecasts. This broader data aggregation enables a more specific description of
the building's features, the prediction of future indoor climate conditions and adaptive control
decisions. At the same time, the challenge arises of selecting the right data and building the relevant
correlations. This research focuses on a data structure and its dependency model for the
load-adaptive building automation, combining not only common sensors but also PPC systems and
weather forecasts. Through research on heat and mass transfer, the related influence factors as well
as their impacts have been identified. Based on the interpretation of production data, the actual
demands of building automation can be assessed. This facilitates strategic decisions to control the
automation components for the desired working environment.
In order to complete this adaptive and energy-efficient building control, a mathematical model of
all input factors and targets in the dependency matrix will be realized to predict the climate
conditions. A further development based on model predictive control will enable the derivation of
concrete automation measures and the communication with hardware components.
Acknowledgment
This project is funded by the German Federal Ministry of Education and Research under the
funding initiative “KMU-innovativ” to promote the further development of resource and energy
efficiency, grant number 01LY150B. The DLR Project Management Agency supervises the
project.
References
[1] Information on http://www.umweltbundesamt.de/daten/industrie/branchenabhaengiger-energieverbrauch-des#textpart-1.
[2] B. Aschendorf, Energiemanagement durch Gebäudeautomation, Springer Fachmedien
Wiesbaden, Wiesbaden, 2014.
[3] M. Paula, Energieeffizienzsteigerung in der automatisierten Gebäudeklimatisierung durch
wetterprognoseunterstützte Regelung (ProKlim), Wien, 2012.
[4] Information on https://www.big-gebaeudeautomation.de/.
[5] R. Schmitt, B. Zhou, T. Vollmer, Energieeffiziente Produktionsstätte durch bedarfsabhängige
und innovative Gebäudeautomation, ZWF, vol. 112, no. 3 (2017) 155–158.
[6] A. Afram, F. Janabi-Sharifi, Theory and applications of HVAC control systems – A review of
model predictive control (MPC), Building and Environment, vol. 72, 2014, pp. 343–355.
[7] E. Bollin and T. Feldmann, Verbesserung von Energieeffizienz und Komfort im
Gebäudebetrieb durch den Einsatz prädiktiver Betriebsverfahren (PräBV): [Abschlussbericht],
Fraunhofer-IRB-Verl., Stuttgart, 2014.
[8] DIN EN ISO 16484-3:2005: Building automation and control system (BACS)- Part 3:
Functions.
[9] VDI 3813 Part 2: Building automation and control systems (BACS) – Room control functions
(RA functions), May 2011.
[10] Council Directive 89/391/EEC: on the introduction of measures to encourage improvements
in the safety and health of workers at work, 1989.
[11] Technische Regeln für Arbeitsstätten (ASR).
[12] Y. Agarwal, B. Balaji, R. Gupta, J. Lyles, M. Wei, T. Weng, Occupancy-Driven Energy
Management for Smart Building Automation, in: ACM Workshop on Embedded Sensing Systems
for Energy-Efficiency in Buildings, 2010.
[13] C.O. Lohmeyer, H. Bergmann, M. Post, Praktische Bauphysik: Eine Einführung mit
Berechnungsbeispielen, 5., vollst. überarb. Auflage, Springer, Wiesbaden, 2005.
[14] M. Merz, Object-oriented modelling of thermal building behavior, Kaiserslautern, Selbstverlag,
2002.
[15] B. Saenz de Ugarte, A. Artiba, R. Pellerin, Manufacturing execution system – a literature review,
Production Planning & Control, vol. 20, no. 6, 2009, pp. 525–539.
Vertical integration of production systems for resource efficiency
determination
Thomas Vollmer1,a, Niklas Rodemann2,b and Robert Heinrich Schmitt3,c
1 Fraunhofer Institute for Production Technology IPT, Steinbachstrasse 17, 52074 Aachen, Germany
2 RWTH Aachen University, Faculty of Mechanical Engineering, Kackertstrasse 9, 52072 Aachen, Germany
3 RWTH Aachen University, Laboratory for Machine Tools and Production Engineering (WZL), Chair of Metrology and Quality Management, Germany
a thomas.vollmer@ipt.fraunhofer.de, b niklas.rodemann@rwth-aachen.de, c r.schmitt@wzl.rwth-aachen.de
Assignment Abstract to Congress Topic: Internetbasierte Produktionstechnik
Abstract:
The current trend of digitalising processes as well as the manufacturing environment in
general within the context of “Industry 4.0” offers a wide range of opportunities, such as increases
in productivity, performance, individualisation or quality, but entails challenges as well.
These include, especially for small and medium-sized enterprises (SME), the proper selection and
introduction of production systems, the selection of entities to be digitalized as well as setting up
a suitable “Industry 4.0” environment.
Besides the mentioned aspects, digitalisation by means of a vertical integration of production
systems offers the opportunity of increasing the transparency of processes, e.g. regarding
resource consumption, which forms the basis for improvements in resource efficiency. This
aspect is worth considering due to the constantly increasing prices of production factors
caused by resource scarcity and cost-intensive labor.
To use “Industry 4.0” as a driver for productivity in manufacturing and to increase
transparency, literature suggests the vertical integration of production systems. The integration
of data within production systems such as enterprise resource planning (ERP), manufacturing
execution systems (MES) and sensors enables the development of meaningful key performance
indicators (KPI) about the manufacturing processes’ performance. This concept includes the use of
the gathered process data and their aggregation into knowledge about the most relevant resources.
Together with the intelligent selection of the necessary datasets within e.g. ERP or MES, an
intelligent interconnection leads to the meaningful calculation of resource consumptions.
To usefully prepare companies for the presented “Industry 4.0” transformation process, this
paper strives to (1) select the most relevant activities in the main corresponding research areas,
(2) identify the major industrial requirements, (3) evaluate existing approaches against those and
(4) finally derive the further research demand regarding this vertical integration approach’s
development as well as its implementation into an existing manufacturing environment.
1. Introduction
The term “Industry 4.0” carries a broad meaning. Its most common understanding can be
expressed by the digitalization and interconnection of production systems. The increase in
communication establishes manifold applications to improve productivity, process performance
or product quality. More and more processes are being digitalized, which leads to machines
generating large amounts of data. The more data is available, the harder it becomes to handle them
and to extract meaningful information for the workers on the shop floor. By 2020, the generated
data volume is expected to grow again by a factor of 10 [FUNK16]. Thus, concepts for dealing
with this data growth in manufacturing have been developed.
One of these concepts is the vertical integration of production systems, i.e. linking various
production systems with each other and interconnecting the contained information and data.
This interconnection of e.g. order information, production times and machine power input shall
automatically generate knowledge about the processes’ or product’s performance with the help
of KPI regarding resource consumptions. The logic behind the calculation shall be universally
applicable and transferable to the most relevant resources in manufacturing. Since the prices of
relevant resources such as energy are increasing, this paper considers transparency
in resource consumptions as a use case for the application of a vertical integration concept.
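A minimal sketch of the kind of KPI logic meant here: an order from the ERP, its production times from the MES and the metered power input of a machine are joined into an energy-per-part figure. All record layouts, names and numbers are assumptions chosen for illustration, not a specific system's data model.

```python
# Illustrative records as they might be drawn from ERP, MES and a power meter
erp_order = {"order_id": "A-4711", "parts": 200}
mes_times = {"order_id": "A-4711", "machine": "M1",
             "start_h": 6.0, "end_h": 14.0}           # processing window, hours
power_log = [("M1", 6.0, 14.0, 12.5)]                  # machine, from, to, kW


def energy_per_part(order: dict, times: dict, log: list) -> float:
    """Aggregate the metered energy over the order's runtime into kWh per part."""
    kwh = 0.0
    for machine, t0, t1, kw in log:
        if machine != times["machine"]:
            continue
        # only the overlap between metering interval and processing window counts
        overlap = max(0.0, min(t1, times["end_h"]) - max(t0, times["start_h"]))
        kwh += kw * overlap
    return kwh / order["parts"]


print(energy_per_part(erp_order, mes_times, power_log))  # 0.5 kWh per part
```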
The goal of the study is to provide an overview of the research landscape focusing on the
vertical integration of production systems for transparency regarding resource consumption, and
to develop a catalog of the main requirements for a company striving to apply the presented
concept. These requirements form the basis of the subsequent assessment of the identified
existing approaches. The approaches are matched against the requirement specifications to derive
the gap for further research with regard to the development of such an application for vertical
integration to increase transparency in resource consumptions.
2. Methodology
This section provides an overview of the paper’s structure and gives insight into each
subsection’s approach and its goals. Figure 1 shows the methodology divided into four
subsections.
Figure 1: Overview about research methodology.
(1) Research about existing approaches and prioritization
For the identification of existing approaches which are following similar goals and scope, a
dedicated research in established scientific databases respectively search engines has been
conducted. The considered databases are ScienceDirect, EBSCOhost, Emerald Insight, JStor,
Google Scholar as well as the public libraries of RWTH Aachen University and Laboratory for
Machine Tools and Production Engineering (WZL). Within these databases and libraries,
different search operators in combination with several suitable search terms have been used.
(2) Definition of requirements catalog
The selection of relevant approaches among the existing ones forms the basis for the later
analysis. The analysis extracts the essential requirements that have to be included in the focused
concept. The main topics involved in the first selection of a reasonable number of approaches have
been the terms “vertical integration”, “resources” and “automated data acquisition”. Furthermore,
the main requirements are divided into sub-criteria to increase the level of detail.
(3) Requirements matching with selected approaches
The deduction of requirements from the related research approaches shall be enhanced by
adequate aspects for the development of an automated solution for KPI calculation. For each of
the requirements, criteria will be introduced and each approach will be evaluated consecutively
regarding these requirements. According to the criteria’s degree of fulfillment, Harvey Balls
are assigned to each subsection as displayed in Table 1.
Table 1: Degrees of fulfillment for criteria assessment (Harvey Ball scale): demand met – demand
mostly met – demand partially met – demand slightly met – demand not met at all.
(4) Derivation of further research demand
Depending on the degree of fulfillment, only partially filled Harvey Balls in the assessed
subsections indicate a gap in the existing approaches. However, beyond displaying missing
research, this method of assessing the approaches’ completeness with regard to the goal of
developing a solution for automated KPI calculation for resource consumption requires that
dedicated measures for fulfilling each subsection be discussed individually.
3. Results
This chapter shows the results obtained by applying the four steps presented in the previous
chapter 2.
(1) Research about existing approaches and prioritisation
Several search terms such as network, production, data, acquisition, energy, KPI and vertical
integration, together with search operators (AND, OR, AND + OR, (search terms + search
operator), "search terms") and suitable combinations of those, have been used in both English and
German. The selection of search terms is based on content matches with the scope. The selected
search operations yielded between 6 (("data collection" OR "data acquisition") AND production
AND kpi AND "consumption monitoring") and 496,926 (networked AND production) results. The
extension by additional terms relevant for the approach to be developed continuously narrowed the
search results. The manual screening of the most relevant topics and of the suitability regarding the
desired concept of resource consumption KPI from production systems led to the prioritization of
the approaches considered further. Table 2 shows the chosen search terms and search operators as
well as their results for the exemplary ScienceDirect database. The same searches have been
conducted for all databases.
Table 2: Overview about search terms, search operators and search results for English language.

Search terms and search operators | Number of search results (selected approaches)
networked AND production | 496,926
"networked production" | 608
"networked production" AND data | 475
"networked production" AND "data collection" | 52
"networked production" AND kpi | 10
data AND collection AND production | 479,650
"data collection" AND production | 108,837
data AND collection AND kpi AND production | 1,261
data AND collection AND kpi AND "production data" AND erp AND mes | 8
"data acquisition" AND production | 69,635
("data collection" OR "data acquisition") AND production | 170,952
("data collection" OR "data acquisition") AND production AND kpi | 772
("data collection" OR "data acquisition") AND production AND kpi AND "vertical integration" | 25 (Gerber)
("data collection" OR "data acquisition") AND production AND kpi AND monitoring | 526
("data collection" OR "data acquisition") AND production AND kpi AND "consumption monitoring" | 6 (Abele, FoFdation)
("data collection" OR "data acquisition") AND erp AND mes | 1,004
("data collection" OR "data acquisition") AND erp AND mes AND integration | 565
("data collection" OR "data acquisition") AND erp AND mes AND integration AND "production data" | 41
mes AND erp AND automation AND production | 502
mes AND erp AND automation AND "production data" | 59 (May, Pintzos)
transparency AND energy AND consumption AND product AND machine AND process | 2,442
transparency AND energy AND consumption AND product AND machine AND process AND factory | 640
transparency AND energy AND consumption AND product AND machine AND process AND factory AND "data acquisition" | 104 (Green Cockpit, EnHiPro)
monitoring AND energy AND consumption AND industry AND data AND collection AND erp AND mes | 125
monitoring AND energy AND consumption AND industry AND data AND collection AND erp AND mes AND kpi | 19 (Gerber, Vikhorev)
production AND data AND acquisition AND erp AND mes | 786
"production data" AND acquisition AND erp AND mes | 57 (FoFdation, FOREnergy)
energy AND resource AND efficiency AND production | 151,018
energy AND resource AND efficiency AND production AND data AND consumption | 70,453
energy AND resource AND efficiency AND "production data acquisition" | 19,512
energy AND resource AND efficiency AND "production data acquisition" AND erp AND mes | 209 (Keller, Vikhorev)
The table shows that different combinations of search terms and search operators point to the
same approaches in the cases of Gerber, FoFdation and Vikhorev. The other databases delivered
similar results regarding the relevant research activities. The further selection considered the
overlap between the existing approaches and the approach to be developed, since frequently
referenced requirements play a significant role in the desired approach. The selected approaches
are Gerber [GERB12], Pintzos [PINT13], FOREnergy [FORE16], Green Cockpit [RACK15],
FoFdation [FOFD16], EnHiPro [HERR13] and PLANTCockpit [VASY12].
(2) Definition of requirements catalog
Based on the selected approaches and the defined goals for the concept of automated KPI
calculation for resource consumptions, six main criteria with several sub criteria could be
derived. These criteria are shown in the following table and are essential to meet the defined
objectives.
Table 3: Criteria required to establish automated KPI calculation for resource consumption.

1. Target system & scope: 1.1 Entirely defined target system; 1.2 Entirely defined scope
2. Level of aggregation: 2.1 Product level; 2.2 Machine level; 2.3 Process level; 2.4 Building level; 2.5 Facility level
3. Holistic view: 3.1 Energy; 3.2 Material; 3.3 Water; 3.4 Emissions; 3.5 Waste
4. Interconnection of relevant systems: 4.1 Enterprise level; 4.2 Plant level; 4.3 Data acquisition/sensor level
5. Visualization: 5.1 Display format; 5.2 Numerical result
6. Application, individualization and integrability: 6.1 Integrability into established manufacturing environment; 6.2 Individualization of data transformation; 6.3 Manifold interfaces; 6.4 Time-invariant flexibility of KPI request; 6.5 Flexibility of KPI definition
(3) Requirements matching the selected approaches
For a proper assessment of the mentioned approaches, assessment criteria have to be defined.
1.1. Entirely defined target system:
Demand is met, if a vertical integration of participating systems generates data and these are
aggregated to KPI. If one aspect is missing, the demand is partially met, otherwise not met at
all.
1.2. Entirely defined scope:
If only the main process is considered, this criterion does not meet the demand at all. If the
approach also considers peripherals, the demand is partially met. If the second or third level of
peripherals is also included, the criterion meets the demand.
2.1-2.5.
Level of aggregation:
If a concept allows the calculation of KPI on the above-mentioned levels of aggregation,
the criterion’s demand is met. The criterion is partially met if the approach provides only vague
information but suggests that the KPI might be included.
3.1-3.5.
Holistic view:
The demand is met if the approach considers the mentioned resources; otherwise, the
demand is not met at all. In the case of energy consumption, the demand is partially met if not all
of the considered energy carriers (electricity, compressed air, oil, gas) are included.
4.1-4.3.
Interconnection of relevant systems:
The demand of these sub criteria is met, if the specific production system level is connected
to the concept.
5.1. Display format:
Richter classifies the relevant types of visualization for the display of consumption data
[RICH13, p. 223]. Based on this classification, the criterion’s demand is met if a bar or line chart
is realized and correlated with a timeline. The demand is partially met if the time correlation
is missing or if a pie chart or a Sankey diagram is used. The display of statuses or tabular
information meets the demand only slightly.
5.2. Numerical result:
If the approach supports the numerical calculation of the overall consumption, the demand
is partially met. The demand is met if additional information, such as the minimum or maximum
consumption within a period of time, is available.
6.1. Integrability into established manufacturing environment:
The demand of this criterion is not met at all, if an approach only develops a theoretical
concept. The demand is partially met, if an application in a lab environment has been conducted.
A successful application in more than one company meets the demand.
6.2. Individualization of data transformation:
If an approach allows the integration of different data types, e.g. due to the use of a parser,
the demand is met. If data have to be provided in a specific and prescribed type, the demand is
not met at all.
6.3. Manifold interfaces:
The approaches shall support an easy integration of entities such as machines or production
systems. Thus, the support of standard interfaces such as OPC-UA meets the demand. The
support of web services mostly meets the demand, provider-specific drivers meet the demand
only slightly. If an approach does not provide information about interfaces, the demand is not
met at all.
6.4. Time-invariant flexibility of KPI request:
If request cycles are pre-defined and static (e.g. quarters, months, days), the criterion’s
demand is not met at all. If request cycles can be individualized and are also available for short
periods of time, the demand is met.
6.5. Flexibility of KPI definition:
If only the calculation of the overall consumption is available, the demand is not met at all.
If the approach supports a further level of detail by calculating KPI for one additional level of
aggregation, the demand is only slightly met. If both combinations are provided, the demand is
partially met. To meet the demand entirely, the approach also has to consider the single
process level for a product.
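The assessment can be thought of as mapping each Harvey Ball degree to a numeric score and aggregating it per approach. The following sketch illustrates this idea; the numeric scale and the sample ratings are our assumptions, not the actual ratings of Table 4.

```python
# Numeric interpretation of the Harvey Ball scale (illustrative assumption)
DEGREE = {"met": 1.0, "mostly": 0.75, "partially": 0.5,
          "slightly": 0.25, "not met": 0.0}


def coverage(ratings: dict) -> float:
    """Average degree of fulfillment over all sub criteria of one approach."""
    return sum(DEGREE[d] for d in ratings.values()) / len(ratings)


# Hypothetical ratings for two sub criteria of one approach
sample = {"4.1 Enterprise level": "met", "6.3 Manifold interfaces": "slightly"}
print(coverage(sample))  # 0.625
```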
The final assessment with the above mentioned assessment criteria is shown in Table 4.
Table 4: Final assessment of approaches’ fulfillment of sub criteria. (Harvey Ball ratings of the
approaches Gerber, Pintzos, FOREnergy, Green Cockpit, FoFdation, EnHiPro and PLANTCockpit
against the sub criteria 1.1-6.5 of Table 3; the graphical ratings cannot be reproduced in this text
version.)
(4) Derivation of further research demand
As one can see in Table 4, no approach fulfills all the criteria relevant for an automated
calculation of resource consumption KPI. Only Gerber and Pintzos cover the main criterion
“interconnection of relevant systems” entirely. However, even those lack sufficient levels of
aggregation, consideration of resources or industrial applicability. Thus, the selected and
characterized approaches serve as a basis for further research towards the realization of the desired
system, but they are not yet eligible as ready-to-use role models.
4. Discussion
The research regarding the definition of criteria led to the finding that further research has
to be conducted. For the development of a concept for automated resource consumption KPI
based on data within production systems, no applicable model currently exists. Table 4
reveals gaps even in the relevant existing approaches. However, the activities described above
unveiled topics of particular importance for further research. This includes the breakdown of
information onto machine level. Since the KPI calculation is based on data stored within
production systems and mostly gathered by sensors, the resource allocation has to be evaluated
further, e.g. in the case of several machines being connected to one sensor or of data
inconsistency. Several concepts exist to solve this issue; still, their impact on inaccuracy has to be
evaluated for a proper selection. Additionally, the example of energy consumption illustrates
another challenge: only value-adding processes shall be considered, yet the gathered sensor data
include the total power consumed by a machine (including e.g. standby and setup times). The
determination of the difference between the total amount of power and the share responsible for
value addition has to be considered in further research.
Finally, the concrete conceptual design of the automated KPI calculation using relevant
production systems with focus on resource consumptions and the desired functionalities, its
practical development as well as its validation shall be the next and most important steps.
References
[FOFD16] FoFdation: Foundation for the sustainable factory of the future.
URL: http://www.fofdation-project.eu/results.asp#.V4TeeHppnNc [Accessed
on September 21, 2016].
[FORE16] FOREnergy: Die energieflexible Fabrik. Teilprojekt 1: Transparenz.
URL: http://forenergy.de/de/projektverbund/teilprojekte/tp1.html [Accessed on
September 19, 2016].
[FUNK16] Funkschau Kommunikationstechnik: Internet of Things oder die Informationsflut der
Dinge. URL: http://www.funkschau.de/datacenter/artikel/107695/ [Accessed on
September 21, 2016].
[GERB12] Gerber, T.; Bosch, H.-C.; Johansson, C.: Vertical Integration of decision
relevant production information into IT-Systems of manufacturing companies.
In: Proceedings of the 14th IFAC Symposium on Information Control Problems
in Manufacturing. Bukarest, 2012, pp. 811-816.
[HERR13] Herrmann, C.; Posselt, G.; Thiede, S.: Energie- und hilfsstoffoptimierte
Produktion. 1. Aufl. Heidelberg: Springer, 2013.
[PINT13] Pintzos, G.; Matsas, M.; Papakostas, N.; Chryssolouris, G.: Production Data
Handling Using a Manufacturing Indicators` Knowledge Model. In: 46th CIRP
Conference on Manufacturing Systems. Sesimbra, 2013, pp. 199-204.
[RACK15] Rackow, T.; Javied, T.; Donhauser, T.; Martin, C.; Schuderer, P.; Franke, J.:
Green Cockpit: Transparency on Energy Consumption in Manufacturing
Companies. In: 12th Global Conference on Sustainable Manufacturing. Bahru,
2015, pp. 498-502.
[RICH13] Richter, M.: Energiedatenerfassung. In: Neugebauer, R. (Hrsg.): Handbuch
Ressourcenorientierte Produktion. 1. Aufl. München: Carl Hanser, 2013.
[VASY12] Vasyutynskyy, V.; Hengstler, C.; Nadoveza, D.; McCarthy, J.; Brennan, K.;
Dennert, A.: Layered Architecture for Production and Logistics Cockpits.
Dresden, 2012, pp. 1-8.
Decentral Energy Control in a Flexible Production
Sebastian Weckmann1,a, Darian Schaab2,b and Alexander Sauer3,c
1 Institute for Energy Efficiency in Production EEP, Nobelstr. 12, D-70569 Stuttgart, Germany, Tel.: +497119701955
2 Institute for Energy Efficiency in Production EEP, Nobelstr. 12, D-70569 Stuttgart, Germany, Tel.: +497119703600
3 Institute for Energy Efficiency in Production EEP, Nobelstr. 12, D-70569 Stuttgart, Germany, Tel.: +497119701065
a sebastian.weckmann@eep.uni-stuttgart.de, b darian.schaab@eep.uni-stuttgart.de, c alexander.sauer@eep.uni-stuttgart.de
Keywords: Optimization, Manufacturing system, Energy flexibility
Abstract.
A volatile energy supply sector with fluctuating energy prices poses new challenges to sustainable
and cost-efficient manufacturing. Due to a growing proportion of renewable energy sources as well
as the decentralization of energy production, the energy system faces major changes and challenges.
Industrial facilities are high energy consumers and are called upon to lead the change on the
consumer side. The role of the consumer is in particular focus, since e.g. the increasing penetration
of wind and solar power is necessitating a more active role for energy management in homes,
buildings and industries. The intermittency and unpredictability of renewable power generation is in
sharp contrast to traditional power generation. With power coming entirely or almost entirely from
traditional assets, system operators have been able to keep the grid balanced by adjusting generation
in real time in response to demand variation. With unpredictability now extending to generation,
imbalances in the grid may cause grid reliability issues or energy price fluctuations. Therefore,
industrial facilities increasingly transform their infrastructure from a consumer-only to an energy
prosumer system. On-site energy production, consumption and storage on the one hand, as well as
an increasingly complex interface to the energy system on the other hand, require an advanced
on-site grid and an energy-focused production management. To ensure a stable and cost-efficient
energy supply in the industrial energy system, the energy supply as well as the demand for
manufacturing have to be balanced. The goal is to use energy when it is cheap and to provide
energy, or use less energy, during periods of high energy prices. Achieving this goal is strongly
constrained by the need to ensure production performance, especially delivery time and output, and
depends on the flexibility of the production. While smart grids provide solutions for balancing
supply and demand in regional and higher-level energy networks, solutions for an industrial energy
environment are missing. This paper presents the ongoing research concerned with the development
of a decentral system, including methods and control units, to autonomously control an industrial
energy system with fluctuating prices. The system will ensure production performance while
decreasing energy costs by balancing energy demand and supply. For this purpose, the control units
will measure the energy available inside the system. This information has to be balanced with the
actual production order situation of each single machine. Based on this comparison, the control
units will decide autonomously, considering different production-related parameters, whether to
produce or to wait for more and cheaper energy in the network.
Introduction
* Submitted by: M.Sc. Sebastian Weckmann
The energy system faces major changes and challenges due to a growing proportion of renewable
energy sources as well as the decentralization of energy production [1]. The role of the consumer is
becoming more and more important, since e.g. the increasing penetration of wind and solar power is
necessitating a more active role for energy management in homes, buildings and industries [2]. High
energy consumers like industrial facilities are called upon to lead the change on the consumer side
[3]. The intermittency and unpredictability of renewable power generation is in sharp contrast to
traditional power generation. With power coming entirely or almost entirely from traditional
power plants, system operators have been able to keep the grid balanced by adjusting generation in
real time in response to demand variation [4]. With unpredictability now extending to generation,
imbalances in the grid may cause grid reliability issues or energy price fluctuations. Therefore,
industrial facilities increasingly transform their infrastructure from a consumer-only to an
energy prosumer system. Energy consumption, storage and on-site energy production on the one
hand, as well as an increasingly complex interface to the energy system on the other hand, require
an advanced on-site grid and an energy-focused production management.
State of the Art
The energy system is historically dominated by large power plants, which produce the required
energy quantities and balance demand and supply at any time. With a growing fluctuation and
decentralization on the production side, balancing supply and demand is getting more and more
complex and dynamic. An active involvement of the consumer side is not an entirely new approach.
However, falling costs of communication infrastructure and embedded systems enable a "smart" and
controllable consumption [5]. Demand side management (DSM) is based on the assumption that it is
more cost-effective to intelligent influence a load than to build or install new power plants or energy
storage [6]. DSM includes the planning, implementation and monitoring of efficiency and flexibility
measures on the consumer side to change the load profile of the consumer. [7] The fundamental
elements of DSM are energy efficiency and energy flexibility. Energy efficiency are all permanent
system optimization to increase energy productivity. Measures for flexible adaptation of the energy
consumption to signals from the energy market are gathered under the term energy flexibility.
Energy Flexibility. Due to the increasing complexity of production tasks and a continuous increase
in product variants, production systems operate in an environment that is characterized by great
uncertainty [8]. This uncertainty presents manufacturing companies with major challenges and risks.
To be able to adapt to a changing environment, companies need to have sufficient flexibility [9]. With
uncertainty now extending to the energy supply, energy flexibility enables the energy consumer to
adapt to changing energy prices. Based on the automation pyramid, different levels of energy
flexibility in production can be categorized and described (Figure 1).
On ERP level, the central task is to incorporate the energy demand into production planning in
order to secure the energy supply in the long term [10]. On plant level, the goal is to achieve the
best possible schedule within the framework of the energy specific demand planning [11]. In this
context, sequence planning, the machine usage plan and detailed job scheduling are energetically optimized.
On control level, the energy usage of individual machines is optimized with respect to energy specific
scheduling and supply. For example, process parameters can be adjusted, job starts can be shifted or
processes can be interrupted [12].
In addition to the standard levels, a strategic factory planning level as well as an energy supply grid
level are introduced. On the strategic level, the design of the production system sets the boundaries for
its flexibility and therefore limits its energy flexibility. Since the adaptation of energy consumption to
external market signals causes not only changes in the machine control but also affects the energy
supply grid, the energy supply grid has to be taken into account in terms of feasibility and grid
stability.
[Figure 1: automation pyramid from strategic factory planning (years) over company level/ERP (energy specific demand planning, weeks-days), plant level/MES (energy specific scheduling, day-minutes), control level/SCADA (seconds) and PLC (milliseconds, supply oriented consumption, signal conversion) down to the field level (sensor/actuator, real time) at the machine/process, linked to grid management/grid stability.]
Figure 1: Automation pyramid based levels of energy flexibility in production
Implementation of Energy Flexibility. To successfully implement energy flexibility in production
systems, a continuous approach from company to grid level has to be established. Approaches to
optimize energy specific demand planning as well as energy specific scheduling are well covered
in the literature, whereas approaches to optimize consumption with respect to the energy supply are
not [13]. So far, a decentral and autonomous production control for the energy flexibility of a
production system in real time is missing.
Problem Statement and Approach
After examining the approaches on energy flexibility, this paper presents an approach on “How to
autonomously control an industrial energy system in the context of fluctuating prices with respect to
the production planning and the energy supply grid”. To address this problem, a manufacturing
process chain and its energy supply grid are simulated, while the energy consumption and the
production volume are tracked. In a case study of a plastics manufacturer, the simulation model
determines an optimized energy consumption for a planned production volume.
Methodology
A system dynamics simulation-based method was chosen to analyze the production system
behavior with respect to energy flexibility and production volume. First, a model of a production
environment is created. Then the energy supply is varied in a series of experiments, followed by an
analysis of the production volume.
Modeling the Production System. A two-process production system is modeled to analyze the
effect of an energy flexible production control (Figure 2). The production management system provides
each process with information about the orders and the associated start and mandatory end times of
each order. Furthermore, the processes as well as the storage systems are able to communicate with
each other and to assess the energy supply situation.
Figure 2: Model structure of the production system
The following essential premises are formulated for the developed model:
• Constant sequence of orders
• At the end of each day all the orders have to be processed
The working hours are described in a predefined shift schedule. Material buffers are modeled as
single-mode sinks. On process level the available flexibility is restricted by:
• Minimal continuous processing time
• Minimal time to change between operational modes
• Energy consumption to change between operational modes
• Process time variability
On production system level, the available flexibility is significantly influenced by the number of
orders, which can dramatically change the production volume, the duration it takes for each order to
be finished and the required set-up time between orders.
Modeling the Energy System. The energy system is modeled as a DC supply grid [14]. The power
supply system consists of service providers (active front-end), consumers (machines), prosumers
(energy storage systems) and the grid structure, and is set up as a line topology (Figure 3). Consumers are
participants which use the grid to provide superior services for manufacturing. A special type of
consumer is the passive prosumer, which allows energy to be fed back from recuperation. The refeed of a
passive prosumer is not controllable, since it depends on the overlying production process.
Sources are participants which provide electric power supply to the grid. The major source is the
active front-end, which is connected to the external grid structure. Active prosumers describe a
special power source as they are able to shift power draw and supply on demand. In summary, two
major tasks can be formulated for the supply system:
[Figure 3: the active front-end connects the external grid to the local DC supply grid, which feeds Machine 1, Machine 2, ..., Machine n and the ESS in a line topology.]
Figure 3. Model structure of the energy supply grid
The energy distribution network can be modeled as an equivalent network, in which every machine
is depicted with its impedance and observes the overall bus voltage (Figure 4). All power sources
are modeled as ideal voltage sources with their internal impedance (Figure 4). In stable grid operation,
supply and demand are balanced depending on the load impedances of the machines and the internal
impedances of the sources. Additionally, the voltage level of the DC bus represents the
availability of energy within the grid: the higher the voltage level, the more energy is available.
Legend for Figure 4: X_M,i = impedance of machine i; U_DC = overall DC-bus voltage; i_C = overall supply current for the machines; i_G, X_G, U_G = grid current, internal impedance and voltage; i_ESS, X_ESS, U_ESS = energy storage current, internal impedance and voltage.
Figure 4. Model of the energy system as an equivalent electric network.
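The balance described above can be made concrete in a few lines. The following is a minimal sketch, not the authors' implementation: it computes the steady-state DC-bus voltage of the equivalent network by nodal analysis, with sources modeled as ideal voltages behind internal impedances and machines as load impedances; all numeric values are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): steady-state DC-bus
# voltage of the equivalent network in Figure 4, obtained by nodal analysis.
# Sources are ideal voltages behind internal impedances; machines are loads.
def bus_voltage(sources, machine_impedances):
    """sources: list of (U, X) pairs; machine_impedances: list of X_M,i [ohm]."""
    g_sources = sum(1.0 / x for _, x in sources)        # source conductances
    g_loads = sum(1.0 / x for x in machine_impedances)  # machine conductances
    # Current balance at the DC bus: sum of injected currents equals zero.
    return sum(u / x for u, x in sources) / (g_sources + g_loads)

# Illustrative values: grid (400 V behind 0.5 ohm) and ESS (390 V behind
# 1 ohm) feed two machines with 20 and 40 ohm equivalent load impedance.
u_dc = bus_voltage([(400.0, 0.5), (390.0, 1.0)], [20.0, 40.0])
print(f"U_DC = {u_dc:.1f} V")  # heavier load -> lower bus voltage
```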
Modeling the active front-end. The active front-end is modeled with an internal impedance and
an internal ideal voltage source. The internal voltage level is assumed to be fixed at a specific value.
Since the DC-bus voltage decreases with the current through the internal impedance, the
voltage drop over the internal resistance rises if the current flow from the grid increases. A
communication-less control scheme that only uses the voltage is established [14,15,16]. As energy
prices rise on the external grid, the active front-end passes the information of a lower energy availability
to the DC bus by lowering the voltage level (Figure 5).
Figure 5. Voltage-price dependence of the active front-end: DC-bus voltage U_DC over the external price p, with and without power balancing, for the internal impedance X_G.
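A hedged sketch of this price-to-voltage translation follows; the linear droop shape, the price band and the voltage band are assumptions, since the paper does not specify the mapping.

```python
# Hedged sketch of the communication-less droop idea behind Figure 5: the
# active front-end maps the external energy price to its internal voltage
# set point, so rising prices appear on the DC bus as a falling voltage.
# The linear shape, price band and voltage band are assumptions.
def frontend_voltage(price, p_min=20.0, p_max=80.0, u_max=400.0, u_min=360.0):
    """Linear droop: price p_min maps to u_max, price p_max to u_min."""
    p = min(max(price, p_min), p_max)          # clamp to the assumed price band
    share = (p - p_min) / (p_max - p_min)      # 0 = cheap energy, 1 = expensive
    return u_max - share * (u_max - u_min)

for price in (25.0, 50.0, 75.0):               # assumed prices, e.g. EUR/MWh
    print(price, "->", round(frontend_voltage(price), 1), "V")
```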
Modeling the energy storage system (ESS). The ESS is modeled with an internal impedance and
an ideal voltage source. The voltage of the ESS is a variable value which represents the cost of
energy in the production system. The costs depend on the voltage level of the grid while
charging, on the losses while charging and discharging, and on the performance-linked costs of the
system (Figure 6). The internal resistance of the ESS has to be a function of its own state and of
predictions of the future development of the surrounding environment.
X_ESS = f(SoC, T, future power demand, ...), where SoC is the state of charge and T the temperature.
Figure 6. Internal energy storage system voltage in dependence on losses, performance-linked costs
and the internal impedance
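Since the paper leaves f(SoC, T, ...) unspecified, the following sketch only illustrates one plausible shape of such a function; all scaling factors are assumptions, not values from the study.

```python
# Illustrative sketch only: the paper states X_ESS = f(SoC, T, ...) without
# specifying f, so the shape below (reluctant when nearly empty, derated
# when hot) and all factors are assumptions for demonstration.
def ess_impedance(soc, temperature_c, x_base=1.0):
    """Higher internal impedance = the ESS offers its energy more reluctantly."""
    scarcity = 1.0 + 4.0 * (1.0 - soc)               # empty storage -> high impedance
    derating = 1.5 if temperature_c > 40.0 else 1.0  # protect a hot system
    return x_base * scarcity * derating

print(ess_impedance(soc=0.9, temperature_c=25.0))   # full and cool -> low impedance
print(ess_impedance(soc=0.2, temperature_c=45.0))   # empty and hot -> high impedance
```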
Consumer optimization. The goal of the autonomous consumer control is to achieve an energy
sensitive optimization of the production system while every consumer optimizes itself. Based on an
energy component, a logistic component and a storage component, an autonomous energy sensitive
control can be modeled for each consumer in the production system.
Logistic component. For each consumer, the flexibility potential for one day can be calculated
from the operational time capacity of that day, the working time of each order and the required
setup time (Eq. 1).
flexibility potential = otc / Σ (order working time + setup time)   (1)
otc = operational time capacity, i.e. the maximum working hours per day including setup time [hours]
To calculate the potential flexibility of each consumer at any time the remaining work time is
required (Eq.2).
remaining orders
RWT= σn=1
(WTorder, n +STn )
(2)
RWT = Remaining Work Time [hours]; WT = Working Time [hours]; ST = Setup Time [hours]
Based on the remaining work time and the remaining operational time capacity, the flexibility
pressure can be derived (Eq. 3).
FP = Cfp · remaining work time / remaining operational time capacity   (3)
FP = Flexibility Pressure; Cfp = Constant
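The logistic component can be sketched as follows; this is an illustrative reading of Eqs. 1-3, and the constant C_fp as well as the order data are assumed values, not parameters from the case study.

```python
# Sketch of the logistic component (Eqs. 1-3); the constant C_fp and the
# order data are illustrative assumptions, not values from the case study.
def remaining_work_time(orders):
    """orders: list of (working_time_h, setup_time_h) still to process (Eq. 2)."""
    return sum(wt + st for wt, st in orders)

def flexibility_pressure(orders, remaining_otc_h, c_fp=1.0):
    """Eq. 3: pressure rises as the remaining work approaches the
    remaining operational time capacity."""
    return c_fp * remaining_work_time(orders) / remaining_otc_h

orders = [(1.5, 0.25), (2.0, 0.25), (0.75, 0.25)]         # hours, assumed values
print(flexibility_pressure(orders, remaining_otc_h=8.0))  # -> 0.625
```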
Energy component. The energy component describes whether high or low energy prices (little or
much available energy) prevail in the production system. The available energy is described by the net
voltage (Eq. 4): the more energy is available in the system, the higher the net voltage.
EP = CEP · (actual net voltage − minimal net voltage) / (maximum net voltage − minimal net voltage)   (4)
EP = Energy Pressure; CEP = Constant
Stock component. In addition to the logistic and the energy component, a stock component is
established. Stock capacity can dramatically enhance the flexibility of a process by storing its output.
In this context, only stock located after the consumer, which stores the output and decouples the
process from the following consumer, is considered. The stock component is then described
by the actual stock level at a given time and the safety stock level (Eq. 5).
QP = CQP · safety stock level after process / actual stock level after process   (5)
QP = Quantity Pressure; CQP = Constant
Production Pressure. For each consumer, an autonomous production energy control function can
be modeled based on the logistic component, the energy component and the stock component (Eq. 6).
Production Pressure = FP · EP · QP   (6)
For each consumer, a threshold value of the production pressure must be established that describes
whether the consumer should be in production state or not (Eq. 7); a combined sketch of Eqs. 4-7 is
given below.
if production pressure > S → produce   (7)
S = Threshold Value, which depends on the
- machine flexibility (high flexibility → large threshold)
- volume flexibility
- costs to change between production modes
- energy intensity of the machining
- production-based personnel utilization degree
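The following sketch combines Eqs. 4-7, assuming the multiplicative reading Production Pressure = FP · EP · QP reconstructed above; constants, voltages and the threshold are illustrative, not the authors' parameters.

```python
# Combined sketch of Eqs. 4-7, assuming the multiplicative reading
# Production Pressure = FP * EP * QP; constants, voltages and the
# threshold below are illustrative, not the authors' parameters.
def energy_pressure(u_actual, u_min, u_max, c_ep=1.0):
    return c_ep * (u_actual - u_min) / (u_max - u_min)           # Eq. 4

def quantity_pressure(safety_stock, actual_stock, c_qp=1.0):
    return c_qp * safety_stock / actual_stock                    # Eq. 5

def should_produce(fp, ep, qp, threshold):
    production_pressure = fp * ep * qp                           # Eq. 6
    return production_pressure > threshold                       # Eq. 7

ep = energy_pressure(u_actual=392.0, u_min=360.0, u_max=400.0)   # 0.8
qp = quantity_pressure(safety_stock=50, actual_stock=40)         # 1.25
print(should_produce(fp=0.625, ep=ep, qp=qp, threshold=0.5))     # True
```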
Case Study
A plastics part manufacturing system with two injection molding machines and a clear coat paint
shop under controlled conditions was selected (Figure 7).
Figure 7. Production flow chart of a plastics part manufacturer
Hybrid Simulation. On the one hand, a logistic simulation of the production chain was set up in the
software Plant Simulation. All data were collected on-site and imported into the model. Furthermore,
a production planning system was programmed. The model is thus able to track logistic and energy
parameters, e.g. production volume, energy consumption on machine level or stock level.
On the other hand, the energy supply system and the energy consumers of the production system
were recreated at model scale. Every process and storage system was equipped with a machine
control as a hardware-in-the-loop sub-system. Furthermore, every process was equipped with a voltage
meter, and a communication platform was implemented. This makes it possible to vary the energy supply
situation of the production system and to track the supply system behavior for energy flexibility
approaches. To enable data exchange and to track feedback effects, both models were connected via
the communication platform.
Production Pressure Calculation and Hypothesis. Using the case study specific processing times,
set-up times and production volumes, the production pressure was calculated for each consumer of
the production system. Based on the ability of each process and stock system to communicate with
each other, the team expected to see a system-wide optimization and therefore a lower energy
consumption and a cost reduction depending on the price of the consumed energy and the number of orders.
Results. Over a period of four weeks, energy prices were imported from the German energy market
at an interval of 15 minutes. In total, the energy costs were reduced by 10 % (Figure 8). The storage
capacity as well as the human resources were not changed. On each day, all orders were processed.
The results of the simulation at the plastics manufacturer support the assumption that energy flexibility
can significantly reduce energy costs. Furthermore, the voltage level of the DC bus, used as a
communication structure, provides a real-time decision making approach. In this way, an interface
between grid stability and production energy management is implemented.
Every production system has a flexibility potential, which is not a constant but rather a dynamic
function depending, e.g., on the order situation. The introduced model provides the potential to
dynamically and automatically adapt to a changing flexibility potential of a production system.
Besides the technical aspects, a variation of the production-based personnel utilization degree can
have huge economic impacts. The results of the plastics manufacturer show a use case in which the
personnel utilization degree does not change. Typically, processes with a high degree of automation
show the highest degree of freedom in terms of energy flexibility.
Figure 8. Energy cost savings of a plastics part manufacturer operated by an autonomous and
decentral energy flexibility control
Outlook
The simulation results indicate a trade-off for manufacturers between economy and flexibility.
Even with a low energy price, energy flexibility is a promising approach in a rapidly and randomly
changing supply environment. Energy storage systems are a huge driver of energy flexibility in
production. In addition to a further development of a decentral and autonomous consumer control, an
autonomous and dynamic prosumer control is of special interest.
References
[1] Agora Energiewende, 2016. Die Energiewende im Stromsektor: Stand der Dinge 2015. Rückblick
auf die wesentlichen Entwicklungen sowie Ausblick auf 2016. http://www.agora-energiewende.de/fileadmin/Projekte/2016/Jahresauswertung_2016/Agora_Jahresauswertung_2015_web.pdf.
Accessed 5 February 2016.
[2] Elsner, P., Fischedick, M., Sauer, M.U. (Eds.), 2015. Flexibilitätskonzepte für die
Stromversorgung 2050: Technologien - Szenarien - Systemzusammenhänge. Deutsche Akademie
der Technikwissenschaften, München, 116 S.
[3] Sauer, A., Bauernhansl, T. (Eds.), 2016. Energieeffizienz in Deutschland - eine Metastudie:
Analyse und Empfehlungen, 2. Aufl. 2016 ed. Springer Vieweg, Berlin, Heidelberg, Online-Ressource (XIX, 321 S., 266 Abb.).
[4] Samad, T., Kiliccote, S., 2012. Smart grid technologies and applications for the industrial sector.
Computers & Chemical Engineering 47, 76–84.
[5] Müller-Scholz, W., 2013. Die stille Transformation: Wie Unternehmen jetzt von IT und E-Commerce profitieren. Gabler Verlag, Wiesbaden.
[6] Palensky, P., Dietrich, D., 2011. Demand Side Management: Demand Response, Intelligent
Energy Systems, and Smart Loads. IEEE Trans. Ind. Inf. 7 (3), 381–388.
[7] Kreith, F., Goswami, D.Y. (Eds.), 2007. Handbook of energy efficiency and renewable energy.
Taylor & Francis, Boca Raton.
[8] Abele, E., Liebeck, T., Wörn, A., 2006. Measuring Flexibility in Investment Decisions for
Manufacturing Systems. CIRP Annals - Manufacturing Technology 55 (1), 433–436.
[9] Sethi, A., Sethi, S., 1990. Flexibility in manufacturing: A survey. Int J Flex Manuf Syst 2 (4),
289–328.
[10] Cedric, S., Fabian, K., Gunther, R., 2014. Modellierung einer energieorientierten PPS. wt
Werkstattstechnik online 2014 (11), 771–775.
[11] Sauer, A., Weckmann, S., Zimmermann, F., 2016. Softwarelösungen für das
Energiemanagement von morgen: Eine vergleichende Studie. Universität Stuttgart, Stuttgart.
http://www.eep.uni-stuttgart.de/publikationen/studien/EMS_Studie/EMS-Studie.pdf. Accessed
24 March 2017.
[12] Graßl, M., 2015. Bewertung der Energieflexibilität in der Produktion. Utz, München, XVI,
163 S.
[13] Beier, J., Thiede, S., Herrmann, C., 2017. Energy flexibility of manufacturing systems for
variable renewable energy supply integration: Real-time control method and simulation. Journal
of Cleaner Production 141, 648–661.
[14] Augustine, S., Mishra, M.K., Narasamma, N.L., 2014. Proportional droop index algorithm for
load sharing in DC microgrid, in: IEEE International Conference on Power Electronics, Drives
and Energy Systems (PEDES), 2014. 16 - 19 Dec. 2014, Mumbai, India. 2014 IEEE International
Conference on Power Electronics, Drives and Energy Systems (PEDES), Mumbai, India. IEEE,
Piscataway, NJ, pp. 1–6.
[15] Ott, L., 2015. Overview DC-Grid Manager and Voltage Droop Control. Fraunhofer IISB.
INTELEC 2015, 2015, Osaka.
[16] Jin, Z., Sulligoi, G., Cuzner, R., Meng, L., Vasquez, J.C., Guerrero, J.M., 2016. Next-Generation Shipboard DC Power System: Introduction Smart Grid and dc Microgrid
Technologies into Maritime Electrical Networks. IEEE Electrific. Mag. 4 (2), 45–57.
Chapter: Assembly
Analyzing the impact of object distances, surface textures and interferences
on the image quality of low-cost RGB-D consumer cameras for industrial
applications
Eike Schaeffer, Alexander Beck, Jonathan Eberle, Maximilian Metzner, Andreas
Blank, Julian Seßner, Jörg Franke
Institute for Factory Automation and Production Systems, Friedrich-Alexander-University Erlangen-Nuremberg
Abstract— Low-cost RGB-D cameras are currently used in
applications in which their functionality and low price is
preferred over accuracy. As there are many approaches for
software optimization, the focus is primarily on improving the
measurement setup to increase depth image quality of popular
RGB-D sensors: we analyze the potentials of the Microsoft
Kinect v1, Kinect v2 and Intel RealSense R200 as they differ in
weight, power consumption, resolution and technology to acquire
3D-information resulting in different strengths and potentials.
Initially, we briefly explain our measurement setup, the
adjustable inputs and resulting outputs such as standard
deviation of depth values, precision and error rate. Afterwards
the results for each indicator, depending on camera sensors,
surfaces, and distances between object and camera will be
displayed. Based on our results, it is possible to quickly derive
optimal scene surroundings to improve the depth image of a
given 3D-camera before any programming effort is needed. A
more accurate depth image improves subsequent image
processing such as mapping, object or gesture tracking. This
paper states the context in which a given camera or surface
delivers the best depth quality, and how to transfer the results to
industrial environments.
I. INTRODUCTION
The popularity of low cost consumer cameras has increased in
the last years. Primarily designed for the entertainment
industry, the field of application now reaches from simple
detection tasks up to complex robotic operations. Due to the
rising demand for affordable RGB-D sensors, second
generation products are already available; the performance of
both hard- and software has improved significantly.
Besides the improvements of hard- and software, the
measurement setup not only has a major impact on the output
quality of RGB-D sensors but also cannot be influenced
directly by the manufacturers. The quality deficiencies of
collecting 3D-data are present in the form of varying or
missing depth values. Various factors are responsible
for the inconsistent output quality. On one hand, the quality
differences are caused by the components of the camera. On
the other hand, reflections of interfering infrared (IR) rays or
inconsistent lighting conditions can lead to unsuitable or
incorrect frames. Although consumer cameras are not required
to meet the same quality expectations as high standard
industrial camera sensors, there are already several existing
approaches to optimize the output quality of low cost 3D-cameras.
Most concepts so far have reached remarkable success by focusing
on software-based approaches to reduce the camera quality
deficiencies (e.g. [2], [3], [6], [7], [8], [10], [13], [14], [16],
[19], [22], [23], [25]). The approach of software-based
improvements through complex algorithms is a very specific
and multifaceted process that aims to enhance the quality of an
individual application. Therefore, software-based approaches
are restricted to reducing the negative impacts of the camera
itself, not those of the measurement setup. By customizing the
measurement setup, the output quality of the 3D-cameras can
be significantly improved; this results in less missing data
points and clearer contours. Consequently, resulting
programming effort can be decreased and further processing
of captured images is simplified leading to more reliable
results in industrial applications.
In this publication we focus on different aspects of the
measurement setup, which have a decisive influence on the
output quality of 3D-camera sensors. Our research subjects are
the influences of distance, degree of surface reflectivity and IR
interference on the quality of depth measurements and
captured point clouds.
In total, three different camera sensors are compared to each
other (Kinect v2, Kinect v1 and RealSense R200). A grid of
24 points is laid on top of the captured depth image to
compute standard deviation, precision and completeness of
point clouds.
The overall goal is to present a methodological approach for
an optimal measurement setup. The results are evaluated both
qualitatively and quantitatively, as well as summarized in
tables for a detailed comparison. In conclusion the results are
tested on industrial objects - using reflective and absorbing
objects to emphasize our results.
II. RELATED WORK
RGB-D sensors are already subject of many scientific
publications focusing on a large variety of possible
applications such as object and face recognition ([6], [25],
[12], [16]), shape estimation ([7], [14], [8]), room layout and
mapping ([15], [13], [20], [23], [9]), and benchmark suites and
comparisons ([2], [17], [24]). These works cover different
applications, but all encountered noise and holes in depth
images, a well-known problem of low-cost RGB-D sensors that
results from both the camera components and the measurement
setup, e.g. the reflection of surfaces, the interference of IR rays
and the working distance of the camera sensors.
Especially 3D-cameras using triangulation techniques, which
deliver lower precision of depth values at increasing
distances ([11], [5], [24]), are optimized by software-based
approaches to improve their suitability for various
applications (e.g. [11], [1], [19], [21], [22], [10], [3], [16]).
Dealing with reflective or absorbing surfaces is a challenging
task for RGB-D sensors but, from our point of view, did not
receive enough attention in previous publications. In the
following, the most relevant studies and their results are presented.
H. Sarbolandi et al. use three RGB-D sensors (Kinect v2,
Kinect v1 and RealSense F200) for a Scene Understanding
Benchmark Suite [20]. They identify the RealSense as the
noisiest camera sensor of the three, with the most missing values.
The Kinect v2 performs best, due to the high accuracy of its depth
measurements, despite its sensitivity to reflection and dark
colors. Regarding the Kinect v1, they confirm an observable
quantization effect.
K. Khoshelham and S. Elberink test the Kinect v1 depth data
for indoor mapping [11], where they observe that lighting
conditions influence the correlation and measurement of
disparities. In strong light the laser speckles appear in low
contrast in the IR image, which can lead to outliers or gaps in
the resulting point cloud. Using the Kinect v1 for indoor
mapping and additionally providing an analysis of the
accuracy and resolution of the Kinect's depth data, they found
discrepancies of the RGB-D sensor with increasing distance
between camera and object/plane, particularly at the
edges of the point cloud. Evaluated error sources are the
sensor itself, properties of the object surface, and the
measurement setup, which mainly concerns lighting
conditions. The error in distance measurements ranged from a
few millimeters up to 40 mm. This is also documented by
Schöning et al. [18].
In [5], the suitability of the Kinect v2 for mobile robot
navigation is examined with an analysis of the RGB-D sensor
accuracy by holding the Kinect v2 against a white wall at
distances between 0.7-2.75 m while performing 100
measurements for each distance. The result is a depth
distortion of ± 6 mm, although the offsets vary between the
center and the edges of the depth image. The maximum offset
ranges from 20-30 mm between 0.7 m and 2 m of distance.
[24] evaluated the performance of Kinect v1 and v2 for near
and far ranges. For near range, they also analyzed the effect of
artificial light. Artificial bright lighting does minimally affect
the Kinect v1 while the Kinect v2 is invariant to bright indoor
light. At a near distance of 23.6 mm the Kinect v2 obtains
about 10% more valid points compared to the Kinect v1 and
proves to be two times more accurate than its predecessor. At
far ranges, the Kinect v2 remains accurate at all distances
having a standard deviation below 100 mm at a distance of
7 m, in comparison to the Kinect v1 whose deviations increase
quadratically with the distance due to quantization and depth
computation errors, resulting in a ten times higher accuracy of
the Kinect v2 at a distance of 6 m.
Under direct sunlight, they experienced that the Kinect v1
cannot estimate any depth values, while the Kinect v2
generates a partial point cloud in the center of the image up to
3.5 m.
They did not focus on the influence of the reflectivity of surfaces
in combination with different distances and multiple 3D-sensors. Indicators such as precision and completeness of
depth images are also not covered. We present a quantitative
comparison that leads to a methodological approach for an
optimal measurement setup.
III. DATASET CONSTRUCTION & EXPERIMENTAL SETUP
The goal of our measurement setup is to analyze the impact of
IR-ray interference and of the degree of reflectivity of different
surfaces on the output accuracy of RGB-D sensors. To
achieve this goal, we obtain large datasets covering more than
24,000 measurements by varying different inputs and
evaluating their impact on our indicators (output) (Table 1).
The sensors and input and output factors are listed in the
associated columns; rows do not correlate.
TABLE 1 MEASUREMENT SETUP
Input:
- Sensors: RealSense, Kinect v2, Kinect v1
- Surfaces: high reflection (metal), medium reflection (polystyrene), low reflection, absorption
- Distances: 0.5 m, 1 m, 2 m
- Interference: multiple RGB-D sensors, sunlight
Output:
- Measurement types: single measurements, multi-measurements, ratio
- Indicators: standard deviation, precision, ratio
A. Sensors
The rising popularity of RGB-D sensors has resulted in multiple
sensors being available on the market today; they vary in weight,
power consumption and the technology used to acquire 3D-information. To cover the broad field of sensors, we chose
three camera sensors with different depth imaging techniques.
Camera sensors are used with their default parameters since
software optimization is not the focus of this research and tests
with current official drivers are easily reproducible.
The RealSense R200 is a lightweight, low power consuming
3D-camera. Originally designed for tablets, it is especially
suitable for mobile applications, weighing 35 g and requiring
2.5 W of power. In contrast to other RGB-D sensors, the
RealSense can capture both color and depth images at 60 fps.
It uses stereo matching of two IR sensors at 850 nm to obtain
depth information and projects an IR pattern to the
environment to add texture to the scene. For outdoor
environments, it can switch automatically to stereo matching
without an IR pattern. In addition, 18 parameters are available
for manual adjustments.
Although its depth image is noisier than that of other RGB-D
sensors, it can be installed in small devices like tablets.
Furthermore, it requires lower processing power and no
external power source, which allows running it on simpler
hardware.
The Kinect v2 is the second generation of the Kinect. Instead
of using triangulation for computing depth values, the
Kinect v2 uses the time-of-flight principle with three IR laser
diodes. The time-of-flight sensor measures the time the
projected IR pulses need to travel from the projector to the
surface and back to the sensor. The distance to obstacles is internally
determined by wave modulation and phase detection (indirect
time-of-flight).
The resolution of the color image is increased from
1280x1024 to 1920x1080 pixels and the depth image of the
Kinect v2 has a three times higher fidelity than its predecessor.
However, it not only requires an external power source but
also high processing power, consuming 115 W and weighing
0.7 kg.
The Kinect v1 is one of the first low cost RGB-D sensors.
Originally designed for consumer purposes (Xbox 360), it
soon became popular for research purposes because of its
detailed depth image in relation to its price.
Kinect v1 uses an IR emitter and sensor to acquire depth
information. In detail, the Kinect v1 projects a known pattern
of structured IR light that is deformed by the shape of objects;
the deformed pattern is then recorded by the IR camera and
compared to the reference pattern stored on the unit. The depth is
computed by using simple triangulation techniques between
the projected pattern seen by the IR camera and the known
pattern. The output depth image has a much better quality
compared to the RealSense, however the Kinect v1 weighs
0.44 kg and requires an extra power source.
B. Surface of objects
The degree of object reflection or absorption has a significant
influence on measurement results; we analyze the impact of
different degrees of reflectivity and evaluate how the accuracy
of depth measurements and completeness of point clouds are
affected by reflection or absorption respectively. As a
reference for industrial applications we use surfaces that
mainly occur in an industrial context: for a highly reflective
surface we use metals (e.g. body construction), for absorbing
surfaces black polyester (e.g. packaging technology), for
medium reflectivity polystyrene (e.g. logistics) and for low
reflection white polyester (e.g. textile industry).
C. Distances
All measurements were performed within multiple distances:
0.5 m for near range, as well as 1 m and 2 m for medium range
applications. The distance is measured from camera origin to
surface. Since the accuracy and precision of all camera systems
worsen with increasing distance from 1 m to 2 m, further
distances are not relevant for our desired recommendation.
D. Measurement types
We use three different types of measurements to evaluate
accuracy and quality of each RGB-D sensor on different
surfaces:
For single and multi-measurements, a grid of 24 equally
distributed points is laid on top of the captured depth image.
For both measurement series, the standard deviation is
calculated by comparing all 240 depth values: 24 points over
10 frames. In addition, the average standard deviation for each
point is calculated.
Figure 1. a) For the single measurement series, the depth of each of the 24 points
is taken into account over 10 frames; hence 240 measurements provide the
basis for the evaluation. b) For the multi-measurement series, the depth of each
of the 24 points is computed as the average of 45 close-by depth values, also over
10 frames.
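As a minimal sketch of the two indicators derived from these series, assuming depth frames arrive as NumPy arrays and using a hypothetical 4x6 grid of 24 sampling points (not the authors' exact grid coordinates or evaluation code):

```python
# Minimal sketch of the two indicators, assuming depth frames as NumPy
# arrays (in mm) and a hypothetical 4x6 grid of 24 sampling points; NaN
# would mark missing depth values. Not the authors' evaluation code.
import numpy as np

def indicators(frames, grid):
    """frames: 10 HxW depth images; grid: 24 (row, col) points."""
    samples = np.array([[f[r, c] for (r, c) in grid] for f in frames])  # 10x24
    overall_std = np.nanstd(samples)            # scatter over all 240 values
    per_point_std = np.nanstd(samples, axis=0)  # repeatability of each point
    precision = np.nanmean(per_point_std)       # average per-point deviation
    return overall_std, precision

rng = np.random.default_rng(0)
frames = [1000.0 + rng.normal(0, 3, (424, 512)) for _ in range(10)]
grid = [(r, c) for r in range(50, 424, 94) for c in range(40, 512, 86)]  # 24 pts
print(indicators(frames, grid))
```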
The ratio measurement series indicates the noise of 3D-cameras: we compare the displayed data points against the
maximum possible amount of data points. Additionally, we
examine the time to build up the point cloud; after each frame,
missing data is added in case the new frame provides
previously missing data. The results are two indicators: the
ratio as a value for the completeness of depth images, and the
build-up rate showing how many frames are required to
acquire at least 90% of the maximum achieved density.
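The following is a hedged sketch of this ratio and build-up computation, under the assumption that missing depth is encoded as 0 in the raw frames, as common RGB-D drivers do; it is not the authors' code.

```python
# Sketch of the ratio indicator and the build-up rate, assuming missing
# depth is encoded as 0 in the raw frames, as common RGB-D drivers do.
import numpy as np

def completeness(frame):
    return np.count_nonzero(frame) / frame.size       # share of valid pixels

def buildup_rate(frames, target=0.9):
    """Accumulate frames, filling previously missing pixels, and return how
    many frames are needed to reach 90% of the maximum achieved density."""
    acc = np.zeros_like(frames[0])
    densities = []
    for f in frames:
        acc = np.where(acc == 0, f, acc)              # add newly available data
        densities.append(completeness(acc))
    goal = target * densities[-1]
    return next(i + 1 for i, d in enumerate(densities) if d >= goal)

rng = np.random.default_rng(1)
frames = [(rng.random((10, 10)) > 0.3) * 1000.0 for _ in range(5)]
print(buildup_rate(frames))
```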
E. Interference
To analyze the impact of interfering IR rays we use two
different scenarios: In the first scenario we perform our
measurements with all three camera sensors running
simultaneously; the projected IR patterns are overlapped. In
the second scenario we conduct all measurements in an
outside setup to evaluate the impact of interfering sunlight.
IV. RESULTS
All values in the following tables refer to the whole picture;
the results of the performed measurements are described in the
tables below. Values are shown with three significant digits. The
results are not further discussed within this chapter; an
interpretation of the results is given in Section V.
Intensity and color of indoor illumination do not affect depth
measurements in any way.
A. Standard deviation over whole frames
For all 240 values of a measurement series, the standard deviation
in millimeters is calculated and shown in Table 2. The standard
deviation, as the degree of scatter, describes the smoothness of
a given surface: higher values indicate a lower smoothness
and therefore attest a lower depth image quality. The values
strongly depend on the orientation and alignment of the cameras
towards the surfaces; slightly nonparallel setups lead to high
standard deviations. The RealSense at 0.5 m is too noisy for
reliable data.
TABLE 2: STANDARD DEVIATION FOR 240 MEASUREMENTS IN MM
(– = no reliable values measurable)
         Sensor       Polystyrene   White   Black   Metal
0.5 m    RealSense    –             –       –       –
0.5 m    Kinect v2    6.2           5.6     6.2     11
0.5 m    Kinect v1    3.3           2.6     3.5     31
1 m      RealSense    5.4           4.3     >400    75
1 m      Kinect v2    8.9           8.0     9.7     19
1 m      Kinect v1    14            7.9     6.8     12
2 m      RealSense    26            32      5.5     22
2 m      Kinect v2    21            18      2.6     >400
2 m      Kinect v1    37            11      6.7     –
TABLE 4: COMPLETENESS OF POINT CLOUDS
(first value: share of a dense point cloud available in the first frame;
second value: frames needed until the point cloud is at least 90% complete)
         Sensor       Polystyrene   White       Black       Metal
0.5 m    RealSense    9% / >5       10% / >5    5% / >5     9% / >5
0.5 m    Kinect v2    51% / 1       49% / 1     94% / 2     87% / 1
0.5 m    Kinect v1    70% / 1       77% / 1     95% / 1     87% / 1
1 m      RealSense    97% / 1       98% / 1     3% / >5     89% / 1
1 m      Kinect v2    100% / 1      100% / 1    83% / 4     100% / 1
1 m      Kinect v1    93% / 1       93% / 1     100% / 1    84% / 3
2 m      RealSense    90% / 2       67% / 5     1% / >5     –
2 m      Kinect v2    100% / 1      99% / 1     54% / >5    –
2 m      Kinect v1    93% / 1       93% / 1     76% / 3     –
D. Difference between center and edge depth measurements
When looking closer into the data, centered measurement
points have a higher precision than values closer to the edge.
This is experienced for all cameras on all surfaces. For object
detection purposes, the object needs to fit into the centered area
to receive best results.
B. Precision as standard deviation of each point
In Table 3 we show the precision of each of the 24 points over
10 frames for each measurement series.
TABLE 3: PRECISION IN MM
(– = no reliable values measurable)
         Sensor       Polystyrene   White   Black   Metal
0.5 m    RealSense    –             –       –       –
0.5 m    Kinect v2    1.5           1.8     2.3     2.6
0.5 m    Kinect v1    1.2           0.8     1.2     4.1
1 m      RealSense    2.8           1.9     5.1     41
1 m      Kinect v2    1.2           1.3     6.9     5.7
1 m      Kinect v1    8.6           2.5     1.9     5.8
2 m      RealSense    15            22      1.8     17
2 m      Kinect v2    2.2           2.7     0.4     >400
2 m      Kinect v1    11            5.5     1.3     –
At best, 10 values are considered if no errors appear. These
results are independent of the alignment of the cameras, which
allows a comparison of the accuracy and fluctuation of each
camera sensor (Table 3). Exceptionally high values are the
result of a few outliers. No values for the RealSense at 0.5 m are
measurable.
C. Ratio and frames needed for a complete point cloud
The completeness of depth values is shown in Table 4. The
percentage shows the amount of depth values already available
in the first frame compared to a completely dense point cloud.
The number right next to the percentage value represents the
number of frames needed until the point cloud has a
completeness of at least 90%. The completeness of the metal
surface at 2 m is not measured.
Figure 2. Shows which of the 24 measurement points are considered
edge points (gray background) and which are central points.
For each frame, the relation of the standard deviation between
edge and central points is computed; the resulting number
represents the deviation of the edge points compared to the
central points. For example, at 0.5 m the standard deviation of
the Kinect v2 is 2.4 times higher at the edges than in the center,
as seen in Table 5.
TABLE 5 STANDARD DEVIATION RATIO OF EDGE POINTS OVER CENTRAL POINTS
         Sensor       Polystyrene   White   Black   Metal
0.5 m    RealSense    –             –       –       811
0.5 m    Kinect v2    2.4           1.6     0.5     0.5
0.5 m    Kinect v1    1.1           1.4     1.1     1.2
1 m      RealSense    1.4           1.4     1.8     0.8
1 m      Kinect v2    1.5           1.5     1.8     3.0
1 m      Kinect v1    1.6           1.6     2.0     1.4
2 m      RealSense    1.0           1.6     1.8     0.4
2 m      Kinect v2    2.4           2.6     0.9     3.0
2 m      Kinect v1    2.1           0.9     0.8     1.0
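The edge-over-center ratio can be sketched as follows, assuming the 4x6 grid of Figure 2 with the outer ring treated as edge points; the sample data is synthetic and the helper name is hypothetical.

```python
# Sketch of the edge-over-center ratio of Table 5, assuming the 4x6 grid of
# Figure 2 with the outer ring treated as edge points; data is synthetic.
import numpy as np

def edge_center_ratio(samples, edge_mask):
    """samples: 10x24 depth values; edge_mask: 24 booleans (True = edge)."""
    per_point_std = np.nanstd(samples, axis=0)
    return per_point_std[edge_mask].mean() / per_point_std[~edge_mask].mean()

# Outer ring of a 4x6 grid counts as edge (16 points), the rest as center (8).
edge_mask = np.array([r in (0, 3) or c in (0, 5)
                      for r in range(4) for c in range(6)])
rng = np.random.default_rng(2)
samples = rng.normal(1000.0, 3.0, (10, 24))
print(edge_center_ratio(samples, edge_mask))
```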
E. Resulting depth images for different measurement setups
Figure 3 displays depth images for further understanding.
Figure 3. Visualizes the recorded depth images for each distance, sensor and surface.
Missing data is represented by black dots, valid depth measurements by
white dots; the only exception are the black captions of the green measurement points.
The images relate to the results shown in Table 2 and Table 3; e.g. they
emphasize the missing depth measurements at a distance of 0.5 m for the
RealSense (black screen).
V. CONCLUSION AND EVALUATION
A. Common observations for all three RGB-D sensors
At a distance of 1 m all camera sensors have the highest point
cloud density for low and medium reflective surfaces
(Table 4). Additionally, at a distance of 2 m, the accuracy,
ratio, standard deviation for all points as well as the precision
are worse; it is recommended to use the tested cameras at an
application distance of no more than 1 m if the specific
application allows it.
Higher standard deviations towards the edges of the depth
image can be experienced for all camera sensors (Table 5). On
average, deviation at the edge is 1.6 times higher than for
centered points, so the region of interest should be center-focused.
For absorbing surfaces, the highest ratios, lowest standard
deviations and highest repetitive accuracies are achieved at a
distance of 0.5 m since IR intensity saturation in the center is
reduced. However, at distances beyond 0.5 m the absorption
has negative impacts on the quality of the depth image since
depth information is missing.
Precision is the only indicator that is improved by using multi-measurements instead of single measurements. By
performing multi-measurements, the standard deviation for
reflective and absorbing surfaces deteriorates; errors result
from unreliable measurements when there is hardly any 3D-data available.
B. Conclusion RealSense
Interference has major impacts on the quality of the RealSense
depth images. Any interfering IR rays from sunlight or
other RGB-D sensors mainly result in depth images without
any reliable 3D-information. For some setups, almost 30
frames are required for the depth image to build up and collect
enough points to show a complete image. Collected
points have a higher standard deviation and error rate, leading
to totally unreliable data.
The RealSense has a small range in which the measured indicators
are comparable to the Kinect sensors. Small distances such as
0.5 m (which is also not recommended by Intel) yield
completely black pictures with no depth values at all. For 1 m
and further distances, the quality is significantly better. At
around 1 m, standard deviation, error rate, accuracy and ratio
are at their optimum (Table 2, Table 3 & Table 4). With
increasing or decreasing distance, depth values become less
reliable; this manifests as noisy areas with no available
depth data. Multi-measurements increase the quality especially
at the optimal distance; for further distances, single measurements
are more reliable. However, the results differ slightly
depending on the surface. 1 m as the preferred distance is also
backed up by the best ratio values: first frames already show
more than 90% of the values and only need one more frame for a
complete picture. At other distances, initial frames show
values as low as 5%.
A second constraint, next to large and extremely low distances,
is the surface that is measured. Highly absorbing surfaces cannot
be used at any distance because information is lost and even
detecting contours is challenging. Best results are achieved
with polystyrene and white polyester; standard deviation,
errors and ratio are similar to the Kinect sensors. Metals are also
detected in great detail at 1 m but lose quality at other
distances. As a result, the RealSense can be used in its optimal range
on non-absorbing objects.
C. Conclusion Kinect v2
The depth image is not affected in any way by the IR rays of other
3D-camera sensors. IR rays of sunlight, in contrast, impact the
quality of the depth image in different ways: while passive
sunlight does not affect the quality of the depth image, direct
sunlight exposure decreases it.
The Kinect v2 is the most consistent 3D-camera among all
three, having the highest accuracy of all camera sensors for all
surfaces and distances, as well as having the highest precision
at distances of one meter and above for all surfaces (Table 3).
Additionally, for all surfaces and distances except black
polyester, the point cloud builds up to 90% of its maximum
within the first frame (Table 4). Although standard deviation
and density of the point cloud vary between different surfaces
and distances especially at 0.5 m, measurement deviations
constantly stay below 1%. The Kinect v2 has a higher standard
deviation than its predecessor at a distance of 0.5 meters
because of a wider recording angle and is more sensitive
towards nonparallel setups. Therefore, higher standard
deviations at the edge are more likely.
Using just the center data points, the Kinect v2 not only has
the highest accuracy but also the lowest standard deviation for
all surfaces. Due to the wide-angle lens it still observes a large
area, which makes it overall superior in performance.
D. Conclusion Kinect v1
In outdoor setups with high sunlight interference, depth
images are noisy with many holes, which disqualifies the
Kinect v1 for scenes with sunlight interference. At close
distances, indoor interference from other RGB-D sensors has
little to no effect on quality; there are no missing values
under either circumstance and the depth picture is built up
within the first frame at any distance.
The Kinect v1 is the best camera for close distances; it shows
the lowest standard deviation of the tested RGB-D
sensors (Table 2 & Table 3), especially on absorbing surfaces,
and its depth image has the highest consistency when comparing
edge and centered values (Table 5). With increasing distance,
surfaces with less absorbing textures should be used. A strong
advantage is the robust point cloud on non-reflective
surfaces, displaying the densest point cloud for short distances
(0.5 m) and building up its point cloud in one frame (Table 4).
For close distances, it outperforms the Kinect v2 and the
RealSense.
White and metal surfaces show decent results for all
indicators, whereas polystyrene and black polyester are
inferior compared to the other sensors, since these surfaces
absorb most of the IR rays, leading to wrong distance
measures. In total, the camera has no missing values: error
measures are zero for all surfaces and distances, as can be
seen in Table 4. Among all surfaces, white polyester shows the
best results at all distances.
VI. METHODOLOGICAL APPROACH FOR USE CASES
Results of the performed measurements lead to the
conclusions stated above. As a practical guideline, especially
for industrial purposes, camera sensor and surface are set in
relation to each other with regard to optimal measurement
results as displayed in Figure 4.
Use cases are presented for each camera sensor and for each
surface separately. As a major constraint, these
recommendations focus on object detection where depth
values play a major role and the whole picture is evaluated.
For detecting contours or focusing on a central area of interest,
other distances also lead to suitable results.
Figure 4. Visualizes the optimal measurement setup for each surface
(a) and camera sensor (b). The arrows point to the recommended surface
(camera) for a given camera (surface) for best possible results. Solid lines
mark preferred combinations; dotted lines with gray distance boxes show
alternatives that still perform comparably well. Small boxes indicate the
optimal distance.
In regard to industrial applications, Figure 4 can be used to
select the optimal camera sensor for a given surface such as
clothing of workers or materials to receive best results for
object recognition or detection as shown in Figure 5.
Figure 5. Exemplary use of the camera sensors in an industrial context for object
detection. Case a) shows the Kinect v2 performing better for white textiles
than for dark textiles or a medium reflective dark transportation box from 1.5 m.
Case b) shows the Kinect v1 and its great results, especially for dark surfaces,
at close distances; white surfaces can also be detected in great detail. Pictures
c) and d) illustrate the depth image quality of the RealSense detecting white
areas compared to dark and reflective areas, which have almost no depth data.
When turning the reflective object slightly to avoid direct reflection, the object
gets lost as a whole.
VII. SUMMARY
In this paper, we present how different surfaces and distances
affect the depth image quality of low-cost RGB-D sensors.
The demonstrated results can improve the depth image for a
given application if the optimal distance is set and right
sensors are used. The differences are crucial for selecting the
optimal camera, also considering industrial purposes. The
Kinect v2 performs best at detecting larger areas at a distance
of 1-2 m for all but highly absorbing surfaces. Quality can be
improved by cutting the edges and only using the center,
which still leaves a large image due to the wide-angle lens
and higher resolution. The Kinect v1 is strong at close
distances around 0.5 m and on highly absorbing surfaces. At
further distances, reflective objects can still be detected in
detail. The RealSense has only a short distance range of
sufficient depth data quality, at around 1 m; there, it achieves its
best results with bright, non-reflective surfaces. However, the
RealSense's light weight and low power consumption make it
interesting for mobile applications and contour detection. The
camera systems can be combined to best utilize their
individual strengths in consideration of interference. In
industrial environments, low-cost cameras offer great potential
which justifies investments in optimal setups for better depth
image quality.
[1] Adini, Y.; Moses, Y.; Ullman, S. (1997): Face recognition. The problem
of compensating for changes in illumination direction. In: IEEE Trans.
Pattern Anal. Machine Intell. 19 (7), S. 721–732. DOI:
10.1109/34.598229.
[2] Cruz, Leandro; Lucio, Djalma; Velho, Luiz: Kinect and RGBD Images:
Challenges and Applications. In: 2012 XXV SIBGRAPI Conference on
Graphics, Patterns and Images Tutorials (SIBGRAPI-T). Ouro Preto,
Brazil, S. 36–49.
[3] Dolson, Jennifer; Baek, Jongmin; Plagemann, Christian; Thrun,
Sebastian: Upsampling range data in dynamic environments. In: 2010
IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
San Francisco, CA, USA, S. 1141–1148.
[4] Endres, Felix; Hess, Jurgen; Sturm, Jurgen; Cremers, Daniel; Burgard,
Wolfram (2014): 3-D Mapping With an RGB-D Camera. In: IEEE
Trans. Robot. 30 (1), S. 177–187. DOI: 10.1109/TRO.2013.2279412.
[5] Fankhauser, Peter; Bloesch, Michael; Rodriguez, Diego; Kaestner, Ralf;
Hutter, Marco; Siegwart, Roland: Kinect v2 for mobile robot navigation:
Evaluation and modeling. In: 2015 International Conference on
Advanced Robotics (ICAR). Istanbul, Turkey, S. 388–394.
[6] Feng, Jie; Wang, Yan; Chang, Shih-Fu: 3D shape retrieval using a single
depth image from low-cost sensors. In: 2016 IEEE Winter Conference
on Applications of Computer Vision (WACV). Lake Placid, NY, USA,
S. 1–9.
[7] Han, Yudeog; Lee, Joon-Young; Kweon, In So: High Quality Shape
from a Single RGB-D Image under Uncalibrated Natural Illumination.
In: 2013 IEEE International Conference on Computer Vision (ICCV).
Sydney, Australia, S. 1617–1624.
[8] Haque, Sk. Mohammadul; Chatterjee, Avishek; Govindu, Venu Madhav:
High Quality Photometric Reconstruction Using a Depth Camera. In:
2014 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR). Columbus, OH, USA, S. 2283–2290.
[9] Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. (2012): RGB-D
mapping. Using Kinect-style depth cameras for dense 3D modeling of
indoor environments. In: The International Journal of Robotics Research
31 (5), S. 647–663. DOI: 10.1177/0278364911434148.
[10] Kerl, Christian; Souiai, Mohamed; Sturm, Jurgen; Cremers, Daniel:
Towards Illumination-Invariant 3D Reconstruction Using ToF RGB-D
Cameras. In: 2014 2nd International Conference on 3D Vision (3DV).
Tokyo, S. 39–46.
[11] Khoshelham, Kourosh; Elberink, Sander Oude (2012): Accuracy and
resolution of Kinect depth data for indoor mapping applications. In:
Sensors (Basel, Switzerland) 12 (2), S. 1437–1454. DOI:
10.3390/s120201437.
[12] Li, Billy Y.L.; Mian, Ajmal S.; Liu, Wanquan; Krishna, Aneesh: Using
Kinect for face recognition under varying poses, expressions,
illumination and disguise. In: 2013 IEEE Workshop on Applications of
Computer Vision (WACV). Clearwater Beach, FL, USA, S. 186–192.
[13] Newcombe, Richard A.; Davison, Andrew J.; Izadi, Shahram; Kohli,
Pushmeet; Hilliges, Otmar; Shotton, Jamie et al.: KinectFusion: Real-time dense surface mapping and tracking. In: 2011 IEEE International
Symposium on Mixed and Augmented Reality. Basel, S. 127–136.
[14] Or-El, Roy; Rosman, Guy; Wetzler, Aaron; Kimmel, Ron; Bruckstein,
Alfred M.: RGBD-fusion: Real-time high precision depth recovery. In:
2015 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR). Boston, MA, USA, S. 5407–5416.
[15] Park, Jaesik; Kim, Hyeongwoo; Tai, Yu-Wing; Brown, Michael S.;
Kweon, In So (2014): High-quality depth map upsampling and
completion for RGB-D cameras. In: IEEE Transactions on Image
Processing 23 (12), S. 5559–5572. DOI: 10.1109/TIP.2014.2361034.
[16] Qing Zhang; Mao Ye; Ruigang Yang; Matsushita, Y.; Wilburn, B.;
Huimin Yu: Edge-preserving photometric stereo via depth fusion. In:
2012 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR). Providence, RI, S. 2472–2479.
[17] Sarbolandi, Hamed; Lefloch, Damien; Kolb, Andreas (2015): Kinect
range sensing. Structured-light versus Time-of-Flight Kinect. In:
Computer Vision and Image Understanding 139, S. 1–20. DOI:
10.1016/j.cviu.2015.05.006.
[18] Schöning, Julius; Heidemann, Gunther: Taxonomy of 3D Sensors - A
Survey of State-of-the-Art Consumer 3D-Reconstruction Sensors and
their Field of Applications. In: International Conference on Computer
Vision Theory and Applications. Rome, Italy, S. 192–197.
[19] Shiguang Shan; Wen Gao; Bo Cao; Debin Zhao: Illumination
normalization for robust face recognition against varying lighting
conditions. In: 2003 IEEE International Workshop on Analysis and
Modeling of Faces and Gestures. Nice, France, 17 Oct. 2003, S. 157–164.
[20] Song, Shuran; Lichtenberg, Samuel P.; Xiao, Jianxiong: SUN RGB-D:
A RGB-D scene understanding benchmark suite. In: 2015 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR).
Boston, MA, USA, S. 567–576.
[21] Spinello, Luciano; Arras, Kai O.: People detection in RGB-D data. In:
2011 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2011). San Francisco, CA, S. 3838–3843.
[22] Xia Liu; Fengliang Xu; Fujimura, K.: Real-time eye detection and
tracking for driver observation under various light conditions. In:
IV'2002. IEEE Intelligent Vehicle Symposium. Proceedings. Versailles,
France, 17-21 June 2002, S. 344–351.
[23] Xiao, Jianxiong; Owens, Andrew; Torralba, Antonio: SUN3D: A
Database of Big Spaces Reconstructed Using SfM and Object Labels. In:
2013 IEEE International Conference on Computer Vision (ICCV).
Sydney, Australia, S. 1625–1632.
[24] Zennaro, S.; Munaro, M.; Milani, S.; Zanuttigh, P.; Bernardi, A.;
Ghidoni, S.; Menegatti, E.: Performance evaluation of the 1st and 2nd
generation Kinect for multimedia applications. In: 2015 IEEE
International Conference on Multimedia and Expo (ICME). Turin, Italy,
S. 1–6.
[25] Zollhöfer, Michael; Dai, Angela; Innmann, Matthias; Wu, Chenglei;
Stamminger, Marc; Theobalt, Christian; Nießner, Matthias (2015):
Shading-based refinement on volumetric signed distance functions. In:
ACM Trans. Graph. 34 (4), 96:1-96:14. DOI: 10.1145/2766887.
Multi-Criteria Classification of Logistics Value Streams by Using Cluster
Analysis
Siri Adolph1,a, Tobias Keller1, Joachim Metternich1 and Eberhard Abele1
1 Institute for Production Management, Technology and Machine Tools, Otto-Berndt-Str. 2, Darmstadt, Germany
a adolph@ptw.tu-darmstadt.de (Tel.: 06151-1620137)
Keywords: Production Logistics, Material Supply, Cluster Analysis
Abstract. A high variety of products in combination with short product life cycles requires internal
processes to be constantly adapted.
Frequently, material supply planning is intuitive and iterative, which leads to a high planning effort
and to processes that are difficult to standardize. An overview of the possibilities of material supply is
also missing in many companies; thus, often only known solutions are pursued.
This article presents an approach which systematizes logistics value streams from delivery to
supply at the point of use by applying the methods of cluster analysis. For this purpose, logistics
value streams are derived using a morphology. A cluster analysis is then carried out on the basis of
previously identified requirements for material supply. Seven clusters are characterized. This procedure
provides the whole solution space of material supply and forms the basis for a planning process
to be developed for the design of logistics value streams in assembly.
Introduction
Due to the increasing number of variants and the associated small lot sizes [1], the design of
material supply has become an area with intensive planning tasks. This often results in opaque
logistics systems [2]. In practice, the planning of material supply systems is often intuitive and based
on experiential knowledge [1], which cannot be standardized easily. Moreover, the strategic
importance of logistics is still underestimated [2]. Companies that have not pursued logistical targets
so far are assumed to have great potential for increasing their performance and reducing their
production costs [3]. However, a holistic view of the material flow is necessary in order to achieve
cost advantages while at the same time increasing performance. For that reason, logistics processes
and the design of assembly processes need to be considered simultaneously [4] and throughout the
whole chain from goods reception to the supply at the work station [5]. Due to the strong practical
orientation of previous planning procedures, which are often not based on formal models, fundamental
knowledge is lacking as to how formal methods can be transferred to this area of the engineering sciences.
Thus, this article presents an approach to plan logistics processes by applying the methods of cluster
analysis. In this way, logistics value streams are classified by the requirements on material supply processes.
Basics of logistics and material supply
Logistics and material supply
The understanding of logistics varies widely in science. Numerous definitions can be found in
scientific literature [6,7,8]. Due to its comprehensive description, the logistical understanding of this
approach is based on the following definition: Logistics includes all activities by which the spatial
transformation and the related transformations are planned, controlled, implemented or monitored
with regard to the quantities and types of goods, the handling properties and the logistical
determinateness of the goods. The interaction of these activities is intended to initiate a flow of goods
which connects a delivery point with a receiving point as efficiently as possible [9]. The focus of this
approach is on production logistics, which plans, controls and realizes the entire internal material
flow through the production system, including the associated information flow, as well as technical
and organizational control [10]. Material supply as a part of production logistics covers the whole
process from goods reception to the storage at the point of use [11]. Its task is to provide material for
the utilization during the execution of tasks in the demanded quality and quantity in the correct time
slot at the right place [12]. The execution of material supply comprises the physical activities of
storing, commissioning, transporting and handling at the workplace [12].
Approaches to material supply planning
Due to the lack of appropriate methods, logistics systems are rarely planned methodically. A
lack of methodical knowledge about logistics planning leads to operational deficits, such as an
inefficient use of space in the supply of materials, inefficient strategies for deployment, or an increase
in the logistics costs due to insufficient management of the processes [13]. Several approaches are
discussed in the scientific literature which can be classified into the following three groups:
• Generic planning procedure [e.g. 12, 14]
• Systemic approach [e.g. 15]
• Intuitive design/improvement of material supply concepts [e.g. 16, 14]
The guide to the material supply planning of Bullinger and Lung [12] is integrated into the
assembly structure planning and focuses on the consideration of personnel aspects. The process is
divided into three phases: pre-planning, target planning and system planning. Once the task and the
system limits are identified, the basic conditions are determined. Subsequently, target criteria are
defined, the task is further specified and alternative material supply concepts are developed. The
alternatives are evaluated and the optimal concept is selected. In the planning guide of Durchholz
[14], a logistical value stream is designed in the sense of lean principles. The areas of material flow,
information flow and employees are considered as "managers" of the process. Bozer and McGinnis
[15] reveal differences between kitting and line stocking. In a mathematical model, the
tradeoffs in material handling, space requirements, and inventory are presented at an early decision
stage. Drews [16] combines organizational forms of production, in-house transport and in-house
storage into production logistics types and selects them according to the specific production
requirements. The corresponding logistics types are assigned to the identified production types
(workshop manufacturing, flow processing, etc.). Finnsgard et al. [4] describe the influence of
material supply strategies on workplace performance in the dimensions of value creation, space
requirements and ergonomics. For this purpose the dimensions and their influence on the performance
are described in detail. Subsequently, a theoretical analysis model is developed.
Need for action
Although generic approach models cover a broad spectrum of the design fields of material supply,
they usually do not give specific recommendations for the design of the system. The comparison of
selected material supply concepts is often associated with lower effort, but reflects only a small part
of production logistics. The intuitive design of logistics systems also has disadvantages, since there
is often no way to compare the planned system with alternatives, resulting in uncertainty about further
improvement potentials.
Thus, this article presents a procedure for clustering logistics value streams as a basis for a
standardized material supply planning. The concept of logistics value stream is not defined in the
scientific literature. Durchholz [14] develops a procedure for a logistical value stream design without
using the term. Bauer [17], on the other hand, uses the term without defining it. However, both
approaches show the same understanding by planning logistical processes for manufacturing
environments. For this reason, the term for the following procedure is defined as all logistical
processes from delivery to supply at the workplace for the production of a product (supply, storage,
picking, transport).
Classification of logistics value streams
Basics to cluster analysis
Cluster analysis is a method for grouping objects [18]. The aim of cluster analysis is to combine
different objects on the basis of properties into groups, whereby the objects within the groups are as
similar as possible, while the groups should differ as much as possible from each other [18]. From a
heterogeneous set of objects, groups that are as homogeneous as possible are identified. An essential
characteristic of this process is the simultaneous consideration of all defined properties for the
grouping [19]. Cluster analysis is divided into two steps. To begin with, the degree of proximity must
be determined. This includes the pairwise analysis of the similarities between objects on the basis
of the property values. Based on this, a grouping method is performed to group the similar objects that
constitute the actual clusters [18].
In this approach, cluster analysis is used to systematize the whole solution space of logistics value
streams in order to obtain logistic types. At first, the solution space of practically relevant logistics
value streams is generated by combining elements of a morphology. Afterwards, classification
variables are defined and the logistics value streams are evaluated regarding these variables. Based
on this, the cluster analysis is carried out and the results are discussed. The generated groups in turn
provide the basis for a logistics planning process to be developed further.
Derivation of logistics value streams
In order to derive logistics value streams from delivery to supply at the point of use, a
morphological scheme covering the aspects delivery type, storage, supply, supply type, supply form,
place of supply, sequence is created (see fig. 1). This morphological scheme serves as the basis for
generating the solution space consisting of practically relevant logistics value streams.
The morphological scheme (Fig. 1) comprises the following characteristics and their expressions:
• delivery type: commissioned direct delivery; direct delivery of standard amount; commissioned warehouse delivery; warehouse delivery of standard amount
• storage: without warehouse; buffer stock; warehouse for commissioning material; warehouse for commissioned / sequenced material
• supply type: static; dynamic
• supply: commissioned; standard amount
• supply form: order; partial order; combined orders; single product; single parts
• place of supply: working station; close-by working station; working system
• sequence: sequence of assembly; sequence of orders; not sequenced
Marking one expression per characteristic yields a logistics value stream.
Fig. 1: Derived logistics value stream (example) according to [12, 11, 1, 1, 9, 20]
By finding all practically relevant combinations of the aspects, 132 logistics value streams can be
identified. Thus, the solution space is generated.
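For illustration, the combinatorial derivation can be sketched in a few lines of Python; the feasibility rule below is an invented stand-in for the domain-specific relevance criteria that reduce the full combinatorial space to the 132 streams mentioned above.

from itertools import product

# Morphological scheme from Fig. 1: each characteristic with its expressions.
morphology = {
    "delivery type": ["commissioned direct delivery", "direct delivery of standard amount",
                      "commissioned warehouse delivery", "warehouse delivery of standard amount"],
    "storage": ["without warehouse", "buffer stock", "warehouse for commissioning material",
                "warehouse for commissioned / sequenced material"],
    "supply type": ["static", "dynamic"],
    "supply": ["commissioned", "standard amount"],
    "supply form": ["order", "partial order", "combined orders", "single product", "single parts"],
    "place of supply": ["working station", "close-by working station", "working system"],
    "sequence": ["sequence of assembly", "sequence of orders", "not sequenced"],
}

def is_practically_relevant(stream):
    # Invented example rule: a direct delivery cannot pass through a warehouse.
    if "direct delivery" in stream["delivery type"]:
        return stream["storage"] == "without warehouse"
    return True

keys = list(morphology)
streams = []
for combo in product(*morphology.values()):
    stream = dict(zip(keys, combo))
    if is_practically_relevant(stream):
        streams.append(stream)
print(len(streams), "candidate logistics value streams")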
Solution space of logistics value streams
[Figure: network of the derived logistics value streams connecting supplier, buffer stock, warehouse for commissioning material, warehouse for commissioned / sequenced material, work station, close-by work station and working system within the system boundary.]
Fig. 2: Solution space of derived logistics value streams
Figure 2 shows the complexity of the solution space. At this point, the solution space is unclear
and difficult to manage. For that reason, a systematization would be suitable in order to decrease the
complexity of the system. A well-known instrument that helps to systematize difficult contexts is
provided by the methods of cluster analysis. In the following chapter, the Ward-procedure is used to
classify the solution space of logistics value streams by their properties.
Execution of cluster analysis
In order to systematize the logistics value streams by using a cluster analysis, classifying variables
are to be identified. Important requirements for material supply were selected as classifying variables.
As a result of the analysis, clusters which can be characterized according to their requirements are
thus obtained. The following requirements were identified in literature and a questionnaire-based
study in manufacturing companies [21].
• low control effort
• flexibility
• material availability
• clear arrangement
• handling
• reduction of inventories
Taking into account expert interviews, the analysis of logistics processes during factory tours and
the scientific literature, the elements of the value streams were assessed regarding the degree of their
requirements’ fulfillment on a scale of 0-4 (0 = weak requirements’ fulfillment, 4 = strong
requirements’ fulfillment). Each value stream was then assessed by averaging the values of its
constituent elements for each requirement. Afterwards the value streams were clustered using the
Ward-algorithm in IBM SPSS Statistics. This algorithm is suitable as it provides guidance on the
number of clusters. The degree of proximity is determined by the pairwise analysis of the similarities
between the value streams with respect to the requirements. Based on this, the algorithm is performed
to group similar value streams. As a result, homogeneous clusters of value
streams are obtained which are heterogeneous among themselves. The Dendrogram (see fig. 3) and
the Scree-diagram (see fig. 4) show that either five or seven clusters are feasible. The Dendrogram
shows that the heterogeneity value increases when fewer groups are built. The Elbow-diagram is read
from right to left and shows that the residuals (error squares) increase when seven groups are built
and again when five groups are built. The interpretation of the clusters also shows that a seven-cluster
solution seems appropriate. With fewer groups the results become less significant and the clusters
would not be homogeneous, as the scaled heterogeneity value of the Dendrogram shows.
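The clustering step itself can be reproduced with open tools; the following minimal Python sketch uses SciPy in place of the SPSS implementation named above, with random placeholder scores standing in for the 132 assessed value streams.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

# One row per logistics value stream, one column per requirement
# (control effort, flexibility, availability, arrangement, handling,
# inventories); scores on the 0-4 scale, here random placeholders.
scores = np.random.default_rng(0).uniform(0, 4, size=(132, 6))

proximity = pdist(scores, metric="euclidean")    # step 1: pairwise proximity
Z = linkage(scores, method="ward")               # step 2: Ward grouping

labels = fcluster(Z, t=7, criterion="maxclust")  # seven-cluster solution
# The merge heights in Z correspond to the heterogeneity values read off
# the Dendrogram; plotting would require matplotlib, hence no_plot here.
dendrogram(Z, truncate_mode="lastp", p=17, no_plot=True)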
[Figure: dendrogram over the scaled heterogeneity value (0–25), showing the merging of the value-stream groups up to the full set {1,…,9}.]
Fig. 3: Dendrogram of Cluster analysis (last four steps)
[Figure: scree diagram of the residuals (error squares, approx. 60–150) over the number of clusters (4–10).]
Fig. 4: Scree-diagram
In order to assess the homogeneity of the clusters, an F-test is executed. It shows that the value
streams are homogeneous within the groups but heterogeneous between the groups, as the F-values
do not exceed the level of “1”, i.e. the variance within each cluster is smaller than the variance in the
total sample (see fig. 5).
Fig. 5: F-test: F-values of the clusters
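The homogeneity check can be sketched as follows, assuming the usual definition of the F-value in cluster analysis (variance of a requirement within the cluster divided by its variance in the total sample); scores and labels are taken from the previous sketch.

import numpy as np

def f_values(scores, labels):
    # F-value per cluster and requirement; values below 1 indicate
    # homogeneity within the cluster (cf. Fig. 5).
    total_var = scores.var(axis=0, ddof=1)
    return {c: scores[labels == c].var(axis=0, ddof=1) / total_var
            for c in np.unique(labels)}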
Results
Figure 6 shows the cluster “Commissioned direct supply”. This cluster combines logistics value
streams which are characterized by a commissioned direct delivery to the working station. The
material is sequenced and delivered to the point of use without any storage processes. In order to
realize these processes the synchrony of logistics and assembly processes is a prerequisite. Figure 8
shows the six remaining clusters.
[Figure: morphological scheme with the expressions of cluster 1 marked — commissioned / sequenced direct supply from the supplier to the work station or close-by work station.]
Fig. 6: Cluster 1: “Commissioned direct supply”
Within these seven clusters, all practically relevant logistics value streams from the solution space
are systematized by their requirements’ fulfillment. The requirements’ fulfillment for all clusters is
shown in figure 7.
[Figure: radar chart of the clusters’ scores (0.0–3.5) for the six requirements: reduction of inventories, low control effort, clear arrangement, flexibility, material availability, handling.]
Fig. 7: Requirements fulfillment of the clusters
[Figure: morphological schemes of the further clusters, each marking the selected expressions from the supplier via buffer stock, warehouse for commissioning material and warehouse for commissioned / sequenced material to the place of supply — Cluster 2: multistage static material supply (close-by working station or working system); Cluster 3: multistage static material supply (working station); Cluster 5: commissioned static material supply (single-stage; supply to work station, close-by work station or working system).]
Fig. 8: Further clusters
Conclusion and outlook
The presented approach provides a foundation for material supply planning. Based on a
morphology, the whole solution space of practically relevant logistics value streams from delivery to
the supply at the point of use has been derived. By using the methods of cluster analysis, the logistics
value streams were systematized into seven clusters. From a scientific point of view, it was shown
that formal methods are applicable to engineering tasks. A task for further research is the development
of a method for planning logistics value streams for certain material types. This could, for example,
be realized by using a utility analysis. Thus, material supply planning follows a standardized
procedure and intuitive planning can be avoided.
References
[1] K. Heinz, M. Mayer, and L. Grünz, Planung der Materialbereitstellung bei optimalen Kosten und
Zeiten. wt Werkstattstechnik online 92 (2002) 531–535.
[2] D. Specht, N. Höltz, Schlanke Logistik. Adaption der Lean Production-Methodik auf die Logistik.
ZWF 106 (2011) 69–74.
[3] S. Seeck, Erfolgsfaktor Logistik. Klassische Fehler erkennen und vermeiden. Gabler, Wiesbaden,
2010.
[4] C. Finnsgard, C. Wänström, L. Medbo, and P. Neumann, Impact of materials exposure on
assembly workstation performance. International Journal of Production Research 49 (2011) 7253–
7274.
[5] L. Grünz, Ein Modell zur Bewertung und Optimierung der Materialbereitstellung. Dissertation
Dortmund. Shaker, Aachen, 2004.
[6] H. Ehrmann, Logistik. Friedrich Kiehl Verlag, Ludwigshafen, 2001.
[7] O.-E. Heiserich, Logistik: Eine praxisorientierte Einführung. Gabler, Wiesbaden, 2002.
[8] R. Jünemann, Materialfluß und Logistik. Systemtechnische Grundlagen mit Praxisbeispielen.
Springer, Berlin, 1989.
[9] H.-C. Pfohl, Logistiksysteme. Betriebswirtschaftliche Grundlagen. Springer, Berlin, Heidelberg,
2010.
[10] C. Schulte, Logistik. Wege zur Optimierung der Supply Chain. Vahlen, München, 2009.
[11] J. Golz, Materialbereitstellung bei Variantenfließlinien in der Automobilendmontage. Springer
Gabler, Wiesbaden, 2014.
[12] H.-J. Bullinger, M. Lung, Planung der Materialbereitstellung in der Montage. Teubner, Stuttgart,
1994.
[13] S. Dürrschmidt, Planung und Betrieb wandlungsfähiger Logistiksysteme in der variantenreichen
Serienproduktion, Diss. München, 2001.
[14] J. Durchholz, Wertstromdesign für die Logistik – ein Planungsleitfaden. In Lean Logistics.
Methodisches Vorgehen und praktische Anwendung in der Automobilindustrie, W. A. Günthner, J.
Boppert, Eds. Springer, Berlin (2013), 145–161.
[15] Y. A. Bozer, L. F. McGinnis, Kitting versus line stocking: A conceptual framework and a
descriptive model. International Journal of Production Economics 28 (1992) 1–19.
[16] R. Drews, Organisationsformen der Produktionslogistik. Konzeptionelle Gestaltung und Analyse
der Wechselbeziehungen zu den Organisationsformen der Teilefertigung. Dissertation Rostock.
Shaker, Aachen, 2005.
[17] W. Bauer, O. Ganschar, and S. Gerlach, Development of a method for visualization and
evaluation of production logistics in a multi-variant production. Procedia CIRP 17 (2014) 481–486.
[18] K. Backhaus, B. Erichson, W. Plinke, and R. Weiber, Multivariate Analysemethoden. Eine
anwendungsorientierte Einführung. Springer, Berlin, 2000.
[19] J. Bacher, A. Pöge, and K. Wenzig, Clusteranalyse: Anwendungsorientierte Einführung in
Klassifikationsverfahren: Anwendungsorientierte Einführung in Klassifikationsverfahren.
Oldenbourg, München, 2000.
[20] F. Klug, Logistikmanagement in der Automobilindustrie. Grundlagen der Logistik im
Automobilbau. Springer, Berlin, 2010.
[21] S. Adolph, J. Metternich, Materialbereitstellung in der Montage. Eine empirische Analyse zur
Identifikation der Anforderungen an zukünftige Planungsvorgehen, Zeitschrift für wirtschaftlichen
Fabrikbetrieb 111 (2016) 15–18.
Optimising Matching Strategies for High Precision Products by
Functional Models and Machine Learning Algorithms
Raphael Wagner1,a, Andreas Kuhnle1,b and Gisela Lanza1,c
1 wbk Institute of Production Science, Kaiserstr. 12, D-76131 Karlsruhe, Germany
a raphael.wagner@kit.edu, b andreas.kuhnle@kit.edu, c gisela.lanza@kit.edu
Keywords: Assembly, Precision, Neural Network
Abstract. Companies are confronted with increasing product quality requirements to manufacture
high quality products, close to technological limits, in a cost-effective way. Matching of assembly
components offers an approach to cope with this challenge by means of adapted production strategies.
To satisfy and optimize precise functionality requirements a model that integrates process variation
and functionality is applied to enhance existing matching strategies.
This paper demonstrates the implementation of functional models within production strategies for
fuel injector systems. The injector system must fulfil high requirements regarding the functionality,
i.e. providing a homogeneous fuel mixture at a constant level. To enhance matching strategies and
the functional models for the assembled components, a machine learning algorithm will be applied.
It is utilized to determine and quantify the functional relation between process variations and product
functionality and to optimize matching strategies by selecting the relevant features.
Introduction
Manufacturing companies in various industries have to meet rising quality requirements from their
customers. Companies need to balance keeping production costs at a low level against
fulfilling customers’ demands. Meanwhile, precision requirements increase and reach technological
production limits. High requirements occur especially within the automotive industry, for example
common-rail injectors, hydraulic transmission actuators, electric motors and precision bearings. Herein
precision requirements trend towards narrow ranges in order to realize highly accurate functions with
an optimal degree of efficiency [1 to 3].
In order to decrease production costs, it is important to focus on value-generation so that costs of
different types of waste like scrap, rework and storage are minimized [4]. Inline control of quality
features in real-time is one approach to detect errors early and prevent value creation of defective
components [5].
Recent Industrie 4.0 developments in the areas of sensor and information technology support the
realization of a cost-efficient production, while additionally ensuring given tolerances of the
components. Furthermore Cyber-Physical Systems (CPS) that monitor the production environment
and autonomously optimize the respective process are of great importance [6].
Another promising approach for cost reduction is the so-called selective assembly of components
that only meet low precision requirements but can be assembled into high precision modules. By
harmonizing associated components, narrower tolerance margins can be realized in comparison to
conventionally assembled components in injector production [7]. Additionally, adaptive
manufacturing of single components is used for selective assembly, resulting in a decreasing scrap
rate [1, 3].
This paper demonstrates the implementation of a multi-characteristic functional model to enhance
existing matching approaches. An optimization of the functional model is considered continuously
during the assembly processes by the implementation of a machine learning algorithm. The intention
of both the functional model as well as the machine learning algorithm is to enable matching
approaches for products with complex component interactions. The objective is to increase high
precision product quality while decreasing production costs through intelligent production strategies
enabled by Industrie 4.0 applications.
* Submitted by: M.Sc. Raphael Wagner, M.Sc. Andreas Kuhnle, Prof. Dr.-Ing. Gisela Lanza
Fundamentals and literature review
Matching strategies. Technological developments of Industrie 4.0 such as sensors for traceability
as well as for precise inline measurement and CPS allow the application of real-time quality-based
control cycles in the entire value stream (Figure 1). These control cycles enable new possibilities for
intelligent, robust and cost-efficient production strategies [8].
Figure 1: Quality-based control cycles for robust and cost-efficient production strategies [8]
Selective assembly describes a method to increase the quality of products while decreasing the
production costs by minimizing errors that occur during the production [9, 10]. Single components
are grouped into multiple tolerance classes based on their individual deviation from a certain set point
and subsequently paired with an appropriate corresponding component [11]. Generally, in the process
of grouping components into classes, information about the exact geometries gets lost. Both the
number of classes and the tolerance margins define the degree of information loss [11]. A high
number of classes and narrow tolerance margins are preferred for preserving precise measurement
data. On the other hand, it is essential to hold enough components of each class in stock. Therefore,
the number of components and the resulting inventory and overhead costs restrict
the number of tolerance classes. In case of lacking components, active and passive combinations of
non-corresponding classes are applied to avoid downtimes [12].
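The grouping into tolerance classes can be illustrated with a small sketch; the class count, class width and pairing rule are illustrative assumptions, not values from the injector case.

import numpy as np

def tolerance_class(deviation_um, class_width_um=2.0, n_classes=6):
    # Map a measured deviation (micrometres) to a symmetric tolerance
    # class around the set point.
    edges = (np.arange(n_classes + 1) - n_classes / 2) * class_width_um
    return int(np.digitize(deviation_um, edges)) - 1

shaft_class = tolerance_class(+1.3)  # slightly oversized shaft ...
bore_class = tolerance_class(+1.1)   # ... paired with a correspondingly sized bore
can_match = shaft_class == bore_class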
Individual assembly is a technique to reduce the loss of information since components are not
grouped into tolerance classes and the exact measurement is saved in combination with its distinct
storage place. However, high organizational and technical requirements are needed to implement
individual assembly [10].
Using the approach of adaptive manufacturing, surplus components for single tolerance classes
are prevented through consumption-oriented production of corresponding matching components.
Therefore, the statistical population of the quality-critical component is recorded and the
corresponding component is produced under set point adaption. The statistical population of
adaptively manufactured parts needs to match this population. This ensures that in total every
component can be matched and the required overall tolerance level is met [13]. Combinations of
selective / individual assembly and adaptive manufacturing seem to be promising to achieve high
rates of good parts as well as low production costs and production times [14]. Different compositions
of selective / individual assembly and adaptive manufacturing offer various new production strategies
[3].
The strategy of individual manufacturing joins the ideas of individual assembly with a minimum
storing capacity of provided assembly components. One of the components is supplied as a half-finished
product and individually finished after measuring the quality-critical component. Obviously,
a manufacturing process with lower process deviations than the overall tolerance is required for the
implementation. Savings from lower storage capacities are accompanied by a larger investment in an
appropriate machine.
A framework (Figure 2) of alternative production strategies was introduced in [1].
Figure 2: Framework of alternative production strategies [1]
Functional requirements. Technologically immanent process deviations prevent the manufacturing
of components that meet all requirements at all times. Either technological limits of the manufacturing
processes are reached, or meeting the requirements is not possible in a cost-efficient way, leading to
a high number of scrapped units in the manufacturing and assembly process.
It has been shown that the application of matching strategies for single component characteristics
with simple functional models achieves improvements in scrap rate and production cost in the
assembly of high pressure fuel injectors. The findings are based on event-driven simulation models
[8]. Compensating quality-critical process deviations for multiple component characteristics seems
promising for reaching given precision requirements. Over-fulfilment of one characteristic could
compensate quality-critical characteristics depending on their functional interaction with other
characteristics. Moreover, qualitative relations between characteristics and their functional fulfilment
have already been studied in product development, such as modelling the physical structure of a
product and the component interactions to achieve its functions [15]. Single geometrical
characteristics are analysed through well-known tolerance management tools. These tools allow the
assessment of tolerances to a certain degree of complexity and level of interaction. However,
tolerance management focuses on the choice of proper requirements under conventional assembly
[16]. Quantitative correlations between process deviations of multiple characteristics and their
functional fulfilment via enhanced matching approaches have not yet been studied.
The function of a high-pressure injector is, for example, defined by a homogeneous fuel mixture
at a constant pressure level. Therefore, internal component combinations of injector shaft and bore
must fulfil axial guidance for translational movement at a minimum hydraulic system leakage. Both
components are subject to multiple function-critical, narrow precision tolerances. For example, the shaft
(Figure 3) must meet length, diameter, roughness and cylindricity requirements. For complexity
reduction, such components are often divided into multiple zones. This simplifies the matching
process, but additional inter-zonal requirements need to be considered. The corresponding bore has
similar requirements. Matching strategies in combination with adaptive manufacturing (strategies 4
and 5) seem to be promising approaches to reach the functional fulfilment of shaft and bore. However,
a quantitative model of process deviation and functional fulfilment is necessary to implement and
operate a control cycle in a real-world application.
Figure 3: Multiple requirements for high precision shaft: length, diameter, roughness and
cylindricity
Machine learning. In order to define the quantitative correlation in a dynamic environment,
machine learning algorithms are considered in this paper to enhance existing matching strategies.
These algorithms have a long history and were first described in the middle of the last century [17].
The algorithms differ from classical ones in that they are able to learn an algorithmic procedure
without being given an explicit procedure or certain rules [18]. Hence, machine learning algorithms
are suited to applications where patterns need to be recognised [19]. First successful
applications in manufacturing have already been proposed in [20].
Günther et al. [21] present some more recent applications of machine learning in the industrial
application of laser welding. A machine learning algorithm is implemented that is based on an
Artificial Neural Network (ANN) combined with reinforcement learning to significantly increase the
stability and quality of a laser welding process. An application of Support Vector Machines (SVM)
for the fault detection of industrial steam turbines is investigated in [22], and it is shown that the
algorithm is able to outperform conventional approaches. The combination of ANN together with a
genetic algorithm is illustrated in [23] for the optimization of machining parameters to minimize the
surface roughness. These examples demonstrate the wide range and successful application of machine
learning algorithms in production engineering as well as manufacturing.
One widely used class of machine learning algorithms are ANN. ANN are commonly defined by three layer
types: input layer, hidden layer(s) and output layer [24]. The first layer represents the input signals
and values, the hidden layer(s) define the internal structure of the network and preserve the captured
knowledge and the output layer eventually returns the results. The nodes of the network are connected
via weighted edges and those build the basis of the learning algorithm. During the learning phase the
weights are continuously adjusted. The training phase is supported by learning rules such as Hebb’s
rule, delta rule, backpropagation or competitive learning [25]. One more generally distinguishes
between reinforcement, supervised and unsupervised learning [17]. The former is characterised by an
evaluation function for each action that guides the teaching phase following a certain target. The
second is based on a predefined data set which is made of pairs (x_i, y_i), where x_i represents the
input values and y_i the associated output values. In the latter case, no output values are given and
hence the weights are calculated based on the similarity of the input values. Another classification of
machine learning algorithms categorizes them with respect to the output they generate. Herein the
most common categories are classification or similarly clustering and regression [25]. It has to be
considered that these algorithms are data-driven approaches meaning that the performance is mostly
dependent on the available data set [26].
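As an illustration of the supervised case, the following sketch trains a small ANN on synthetic (x_i, y_i) pairs; the data, the layer size and the linear ground truth are invented for demonstration only.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy data: rows = component pairings, columns = measured process
# deviations; label = 1 if the assembled product met its function.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X @ np.array([0.8, -0.5, 0.3, 0.1]) > 0).astype(int)

# Input layer (4 deviations) -> one hidden layer -> output layer.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)                    # supervised learning on (x_i, y_i) pairs
print(clf.predict(X[:3]))        # predicted functional fulfilment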
In general, machine learning algorithms outperform other existing solution algorithms when
complex, non-linear inter-dependencies prevail and multiple features are considered [27]. In that case,
it is hard to obtain optimal solutions or perform optimal actions based on formulas provided e.g. by
engineering. Therefore, the previously presented use case of this paper is highly suitable for the
application of machine learning algorithms. Huge performance increases are achieved by the
utilization of GPUs for processing as well as by the inherently parallelizable setup of ANN, which
allows parallel computations.
Optimizing functionality of high quality products by matching strategies and machine
learning algorithms
The overall objective of this paper is the optimization of existing matching strategies to reach
precise functional requirements. Both the application of functional models and the use of
intelligent data mining methods enable the assembly of high precision products under
technologically induced process deviations.
Process deviation – functional model. The usage of functional models to correlate process
deviations of component characteristics with the degree of functional fulfilment is a new approach in
matching strategy applications. The aim is to produce high precision products with a high degree of
complexity by utilizing the advantages of both, matching strategies and functional models. The
advantage of matching strategies is to produce high precision assembly products out of available
low-precision components through deviation compensation. The advantage of functional models is the
quantitative assessment of the available low-precision component combinations with respect to their
functional fulfilment based on observed process deviations. Obviously, the matching assessment
needs to be done before the assembly process.
Interactions between component characteristics (Ci) in the so-called Working Surface Pairs
(WSP) [15] are analysed on a qualitative level within product development. A regression
analysis is a basic method to describe the functional relationship between complex characteristic
deviation effects and the product’s functional fulfilment. Therefore, the functional fulfilment is
evaluated quantitatively. Product experiments under variation of characteristic deviations serve as
input for the regression model for each considered product function (Eq. 1 and Eq. 2).
X_i : characteristic deviations, Y_k : functional fulfilment

Y_1 = f(X_i), i = 1..n   (1)
Y_2 = f(X_i), i = 1..n   (2)
Data are gathered from an existing assembly or in experiments. Herein it is required to have a
continuous product data traceability in place, from manufacturing processes to functional testing.
Statistical experimental design, for example, could serve as a tool to plan a minimal number of
experiments. This, however, requires a precise adjustment of the characteristic variation; otherwise,
common regression designs can also serve the regression analysis.
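A minimal sketch of such a regression model follows, with synthetic experimental data; the quadratic feature expansion merely stands in for whatever model family fits the real characteristic interactions.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Assumed experiments: X = characteristic deviations X_i, y = measured
# functional fulfilment Y_1 (cf. Eq. 1), e.g. leakage of a shaft/bore pair.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))
y = 1.0 - 0.4 * X[:, 0] ** 2 + 0.2 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=60)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)                      # Y_1 = f(X_i)
predicted = model.predict(X[:5])     # fulfilment of candidate pairings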
After validation, the gained functional model enables an individual evaluation of component pairs
with respect to their functional fulfilment. Complex interactions, which could not be solved with
common tolerance management tools, can be assessed in real-time. The model then serves as a
matching criterion for selective and individual assembly strategies to reach required functionalities.
Moreover, a prediction of the function of assembled products (Figure 4) is conducted based on the
process deviations as input quantities. A distribution of the functional fulfilment is numerically
calculated by weighted convolution of the process distributions with respect to known effects on
functional fulfilment for conventional assembly. Optimisation effects of process deviations (Figure
4), due to slower machining for example, on the product functionality in case of conventional
assembly can be also estimated. In addition, the optimisation of process deviations under a constant
degree of functional fulfilment can be evaluated via weighted deconvolution. Greater tolerances on
expensive parts and processes, for example, could be compensated through smaller tolerances and
more precise processes on cheaper corresponding components. The optimisation of process
deviations towards a more favourable combination could enhance product quality or even be more
cost-efficient.
Figure 4: Functional model for functional prediction and process deviation optimisation
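The weighted convolution mentioned above can be sketched numerically; the normal distributions, the additive effect model and the tolerance are illustrative assumptions.

import numpy as np

# Assumed additive effect model: functional deviation F = w1*D1 + w2*D2
# with independent, normally distributed process deviations D1, D2.
grid = np.linspace(-5, 5, 1001)
step = grid[1] - grid[0]
w1, w2 = 0.8, 0.4    # known effects of the deviations on the function
s1, s2 = 1.0, 1.5    # process standard deviations

def pdf(sigma):
    p = np.exp(-0.5 * (grid / sigma) ** 2)
    return p / (p.sum() * step)

# Convolve the weighted deviation distributions to predict the
# distribution of F under conventional (random) assembly.
f = np.convolve(pdf(w1 * s1), pdf(w2 * s2), mode="same") * step
within_spec = f[(grid >= -1.0) & (grid <= 1.0)].sum() * step  # assumed tolerance +/- 1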
An analytical analysis of alternative production strategies with a functional model and adaptive
manufacturing strategies is very complex. Thus, in order to model the effects of functional deviations
on production metrics such as scrap rate and production costs, an evaluation by means of event-driven
simulation is suitable.
Machine learning for modelling selective assembly decisions
The previously introduced regression model aims to describe the functional relation between the
process deviation and the functionality of the final product. As stated in the fundamentals section
such applications are highly suitable for machine learning algorithms. Furthermore, it is known that
relationships exist within the model, however, many parameters are considered and non-linear
relationships are likely. Thus, a complex setup is given for which machine learning algorithms have
been successfully applied in other domains such as engineering tasks and showed compelling results,
as aforementioned.
The suggested algorithm in this paper combines an ANN together with a reinforcement learning
algorithm. Since this algorithm design supports the overall decision and prediction whether the
product meets a certain functionality, a typical classification problem is given. The nodes and
weighted connections of the ANN are depicted schematically in Figure 5. The weights w_i,h and w_h,o
determine the parameters of the functional model, and the process deviations are represented in the
input layer. After the learning phase, the algorithm can predict the function of a product and, at the
same time, the results are used for process deviation optimization.
Figure 5: ANN representation used for the machine learning algorithm
Additionally, a reinforcement Q-learning algorithm is utilized to continuously adjust the ANN
depending on different actions and states, so-called state-action pairs (s_t, a_t). Therefore, the system
conforms to the definition of a CPPS, which is highly adaptable to changing conditions. It promises to
be a powerful tool to optimize matching strategies, even in volatile manufacturing systems. In other
words, the combination of reinforcement learning and ANN can handle the complex analytic
determination of the process deviation optimization outlined in the previous section and Figure 4.
The states of the Q-learning algorithm represent, for example, the initial process deviations. Feasible
actions are the increase or decrease of these process deviations. So, the set of actions is given by
A = {increase process deviation i, decrease process deviation i}. The algorithm iteratively learns the
optimal action by evaluating how good a certain action is and receiving a reward when a good action
is chosen. Again, the functional fulfilment is used as the indicator. Thereby the optimal state-action
pair is reached. This solution approximates the optimal solution, which could otherwise only be
determined by a computationally hard evaluation of the above-mentioned convolutions.
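A minimal tabular Q-learning sketch of this control cycle follows; the discretisation, the reward and the environment are stand-ins, and in the design described here the reward would come from the ANN's predicted functional fulfilment.

import numpy as np

rng = np.random.default_rng(3)
n_states, actions = 11, (-1, +1)   # discretised deviation levels; decrease/increase
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.2

def reward(state):
    # Stand-in for the predicted functional fulfilment (best mid-range).
    return -abs(state - n_states // 2)

state = 0
for _ in range(5000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[state].argmax())
    nxt = min(max(state + actions[a], 0), n_states - 1)
    Q[state, a] += alpha * (reward(nxt) + gamma * Q[nxt].max() - Q[state, a])
    state = nxt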
Conclusion and outlook
Functional models are presented in this paper to improve existing matching approaches. The
optimisation of the functional model could be conducted during the running assembly process by
using a machine learning algorithm design based on an ANN and a reinforcement learning algorithm.
The combination of both approaches promises a cost-efficient production of high precision products
by intelligent production strategies. The verification and validation of the suggested approach must
be demonstrated in a model for multi-characteristic assembly products and an event-driven simulation.
However, the application of matching strategies for multiple characteristics is accompanied by a
high increase of storage capacity. Each component characteristic combination needs to be provided
for any possible corresponding matching component. To reduce the number of stored components the
application of an intra- or inter-plant adaptive manufacturing control cycle should be studied in future.
References
[1] Lanza, G., Haefner, B. and Kraemer, A.: Optimization of selective assembly and adaptive
manufacturing by means of cyber-physical system based matching. CIRP Annals - Manufacturing
Technology 64 (2015) 1, p. 399–402
[2] Peter, M. and Fleischer, J.: Rotor balancing by optimized magnet positioning during algorithm-controlled
assembly process: Selection and assembly of rotor components minimizing the
unbalance. Electric Drives Production Conference (EDPC) (2014), p. 1–4
[3] Wagner, R., Haefner, B. and Lanza, G.: Paarungsstrategien für hochpräzise Produkte. Industrie
4.0 bietet Potentiale bei steigenden Präzisionsanforderungen kostengünstig zu produzieren. wt
Werkstattstechnik online 106 (2016) 11/12, p. 804–808
[4] Iyama, T., Mizuno, M., McKay, K. N., Yoshihara, N. and Nishikawa, N.: Optimal strategies
for corrective assembly approach applied to a high-quality relay production system. Computers
in Industry 64 (2013) 5, p. 556–564
[5] Schmitt, R.: Mit Inline-Messtechnik zum Erfolg. Proceedings of symposium AWK. 2005,
p. 277–305
[6] Schmitt, R., Niggemann, C., Isermann, M., Laass, K. and Matuschek, N.: Cognition-based selfoptimisation of an automotive rear-axle-drive production process. Journal of Machine
Engineering 10 (2010) 3, p. 68–77
[7] Mease, D., Nair, V. N. and Sudjianto, A.: Selective Assembly in Manufacturing: Statistical
Issues and Optimal Binning Strategies. Technometrics 46 (2004) 2, p. 165–175
[8] Lanza, G.: Resilient Production Systems by Intelligent Loop Control, Exzellenzcluster
Integrative Produktionstechnik für Hochlohnländer. Aachen 2016
[9] Warnecke, H.-J.: Die Montage im flexiblen Produktionsbetrieb: Technik, Organisation,
Betriebswirtschaft. Springer-Verlag 2013
[10] Loosen, P. and Funck, M.: Integrative Produktion von Mikro-Lasern. In: Brecher, C. (ed.):
Integrative Produktionstechnik für Hochlohnländer. VDI-Buch. Springer Berlin Heidelberg
2011, p. 1068–1113
[11] Colledani, M., Ebrahimi, D. and Tolio, T.: Integrated quality and production logistics
modelling for the design of selective and adaptive assembly systems. CIRP Annals - Manufacturing
Technology 63 (2014) 1, p. 453–456
[12] Ebrahimi, D.: Integrated quality and production logistic performance modeling for selective
and adaptive assembly systems, Politecnico di Milano PhD Thesis. Milano 2014
[13] Matsuura, S. and Shinozaki, N.: Optimal process design in selective assembly when
components with smaller variance are manufactured at three shifted means. International
Journal of Production Research 49 (2011) 3, p. 869–882
[14] Kayasa, M. J. and Herrmann, C.: A Simulation-based Evaluation of Selective and Adaptive
Production Systems (SAPS) Supported by Quality Strategy in Production. Procedia CIRP 3
(2012), p. 14–19
[15] Albers, A. and Wintergerst, E.: The contact and channel approach (C&C2-A): relating a
system’s physical structure to its functionality. In: An Anthology of Theories and Models of
Design. Springer 2014, p. 151–171
[16] Walter, M., Sprügel, T. and Wartzack, S.: Tolerance analysis of systems in motion taking into
account interactions between deviations. Proceedings of the Institution of Mechanical
Engineers, Part B: Journal of Engineering Manufacture 227 (2013) 5, p. 709–719
[17] Monostori, L.: AI and machine learning techniques for managing complexity, changes and
uncertainties in manufacturing. Engineering Applications of Artificial Intelligence 16 (2003) 4,
p. 277–291
[18] Rich, E.: Artificial Intelligence. International Student Edition. London: McGraw-Hill Book
Company 1983
[19] Ertel, W.: Introduction to Artificial Intelligence. Undergraduate Topics in Computer Science.
London: Springer-Verlag London Limited 2011
[20] Monostori, L., Váncza, J. and Kumara, S.: Agent-Based Systems for Manufacturing. CIRP
Annals - Manufacturing Technology 55 (2006) 2, p. 697–720
[21] Günther, J., Pilarski, P. M., Helfrich, G., Shen, H. and Diepold, K.: Intelligent laser welding
through representation, prediction, and control learning: An architecture with deep neural
networks and reinforcement learning. Mechatronics 34 (2016), p. 1–11
[22] Salahshoor, K., Khoshro, M. S. and Kordestani, M.: Fault detection and diagnosis of an
industrial steam turbine using a distributed configuration of adaptive neuro-fuzzy inference
systems. Simulation Modelling Practice and Theory 19 (2011) 5, p. 1280–1293
[23] Kant, G. and Sangwan, K. S.: Predictive modelling and optimization of machining parameters
to minimize surface roughness using artificial neural network coupled with genetic algorithm.
Procedia CIRP 31 (2015), p. 453–458
[24] Rey, G. D. and Wender, K. F.: Neuronale Netze. Eine Einführung in die Grundlagen,
Anwendungen und Datenauswertung. Bern: Huber 2011
[25] Russell, S. J. and Norvig, P.: Artificial intelligence. A modern approach. Always learning.
Harlow: Pearson 2014
[26] Reuter, C. and Brambring, F.: Improving Data Consistency in Production Control. Procedia
CIRP 41 (2016), p. 51–56
[27] Wuest, T., Irgens, C. and Thoben, K.-D.: An approach to monitoring quality in manufacturing
using supervised machine learning on product state data. Journal of Intelligent Manufacturing
25 (2014) 5, p. 1167–1180
PLM-supported automated process planning and partitioning for
collaborative assembly processes based on a capability analysis
Simon Storms1,a,d, Simon Roggendorf1, Florian Stamer1, Markus Obdenbusch1 and Christian Brecher1
1 WZL – Laboratory for Machine Tools and Production Engineering, Chair of Machine Tools, RWTH Aachen University, Steinbachstrasse 19, D-52074 Aachen
a s.storms@wzl.rwth-aachen.de, d +49 241 80-27448
Keywords: Assembly(ing), Lifecycle, Planning, Production planning, Robot
Abstract. The individualized production or production of many variations in general is dominated by
manual assembly. Skilled workers assemble products with different devices and universal tools.
Compared with this manually based approach, mass production of common equal goods is almost
fully automated. In this field, highly automated production facilities produce high amounts of
identical products autonomously. Automated production facilities in the production of mass
customized products are barely represented, although this is in fact a growing sector due to Industrie
4.0. The under-representation is caused by the lack of scalability between manual and fully automated
applications. Often the latter solution is not an option due to the high costs and risks of such systems
– especially in the ramp-up phase. One possible solution for a scalable production with
manual processes and automated systems is Human-Robot Collaboration (HRC).
Within the scope of this paper, possible classification methods for different assembly tasks in the
production environment will be compared. Afterwards an assembly will be exemplarily analyzed
considering the classification methods. In the next step, the potential of transforming the assembly
definition into discrete tasks will be examined. Furthermore, a concept for an automatic allocation of
the assembly tasks to the resources staff or robot will be developed.
Introduction
The main stages during a product development process (product in relation to equipment) are
engineering, production, buildup, and commissioning. Especially for automated systems, these steps
are the most cost-intensive. They are characterized by lots of manual processes and iterations. One
approach to improve this development process is Product Lifecycle Management (PLM)-supported
integrated engineering, which is an essential tool for Industrie 4.0-oriented production. PLM systems
are only suitable for fully automated process equipment but not for the HRC processes. The design
of HRC faces a lot of problems, such as process description for human tasks, the separation of tasks
for the worker and the robot, or the commissioning of the semi-automated system. To realize the
collaborative assembly process for individualized or new products with a minimal investment in cost
and time, one approach is to consult an all-encompassing and inclusive product model. The idea is to
accumulate assembly information during early phases of the product development in this model and
make it accessible during the product's lifecycle [1]. Based on this information, planning tools and
algorithms can provide assembly instructions for the staff and robot programs for new assembly tasks.
Related Work
The overview of the related work is structured in three different topics. First, a general review for the
full automation of assembly processes in combination with a skill-based analysis of assembly tasks
is given. The second topic is assembly planning in general, with the known topics and targets. In the
end, optimization methods from the field of Operations Research are applied to the exercises and
challenges in assembly planning.
* Submitted by: Simon Storms, M. Sc.
Assembly automation and skill-based analysis – One aspect to determine an assembly solution for
a given task is its suitability for automatic execution. The product design can crucially influence the
possibilities of automation as it can increase, decrease, or even enable or prevent the assembly
automation [2]. For a given design, there are different methods to estimate the automation capacities
like classification, criteria catalogues, or capability models.
In the literature, different approaches to evaluate an assembly task by classification are
presented, most of which depend on the engineer's expertise. [3] classifies the process with
a range of quantitative and qualitative questions. In [4], the dependencies between product design and
assembly automation are highlighted. A widely-used classification method named Design For
Assembly (DFA) is introduced in [2] and applied in [5]. Comparable methods are described in [6]
and [7].
To get a more quantitative evaluation of the problem, methods such as the one presented in [8] expand
the DFA method in terms of economic aspects of the problem. In this solution, expert knowledge still
has an influence but is not the dominating factor. [9] is also based on a criteria catalogue, albeit
with a more analytical computer-based solution. The calculations will end up with capability indexes
for the human and the machine to determine an automation solution for the given problem.
Assembly planning – Assembly planning is a part of the work preparation, which is in turn part of
the production planning and control (PPC). There are other related terms for work preparation that
can be used synonymously. The exact division between terms and their scope is not clearly defined.
[10] describes the goals of production planning and control as high adherence to schedules, high and
consistent capacity utilization, short cycle time, small stock levels and high flexibility. To realize
these goals, products, quantities and deadlines are defined in the PPC. In close consultation between
sales, logistics and production, the resource rough planning is done and capacity needs are estimated.
However, assembly planning can be considered as largely independent of the superordinate
processes, while still respecting the superordinate goals. In [11], assembly planning is divided into assembly
facility planning and assembly process planning. The facility planning covers the selection, construction
and arrangement of the needed equipment, while the process planning deals with material flows,
capacity planning, deadlines and assembly order.
Within the scope of this paper, the focus is on assembly process planning. As a first step, it
is useful to model order relations with well-known techniques like precedence graphs or Petri
nets (see the sketch below) to determine dependencies and deadlines [12]. To estimate cycle times of
the modeled assembly tasks, pre-defined or monitored times can be analyzed [13]. The next step is to
map assembly tasks and resources in an appropriate order. The local optimum of the mapping is
reached when all resources (stations) have the same workload and cycle times, eliminating waiting
times. The overall cycle time is determined by the slowest station.
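A minimal sketch of such a precedence graph follows; the tasks and their order relations are invented for illustration.

import networkx as nx

# Precedence graph: an edge (a, b) means task a must finish before task b.
g = nx.DiGraph()
g.add_edges_from([
    ("place base", "insert bearing"),
    ("place base", "mount bracket"),
    ("insert bearing", "press shaft"),
    ("mount bracket", "press shaft"),
    ("press shaft", "final screwing"),
])

assert nx.is_directed_acyclic_graph(g)   # order relations must be cycle-free
order = list(nx.topological_sort(g))     # one feasible assembly order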
Methods to assign tasks and resources can be matched to one of the following categories: trial
and error methods, heuristic methods, and exact methods. While trial and error is mainly based on
the scheduler’s skills and expertise, computers can support the usage of heuristic and exact methods.
Operations Research (OR) can help to realize the heuristic and exact methods. It is described in the
following.
Operations Research in assembly planning – For the allocation of tasks to resources OR can help
to find an optimal solution. A premise for the usage of OR techniques is, on the one hand, the modeling
of the optimization problem itself, and on the other the development and application of the possible
algorithms to solve the optimization problem [14]. The more options and flexibility there are, the
higher the probability of finding a better solution. However, this greatly increases the complexity of
the problem [15].
Optimization problems in production and logistics often are designed as mixed-integer programs
(MIP) to be able to describe their discrete nature. To be more specific, the task mapping is an
assignment problem. To specify the problem in an appropriate way, target values (e.g., pass-through
time) and control values (e.g., assembly alternatives, dispatching time points or dispatching order)
have to be determined [15]. MIP problems are NP-complete in general and therefore not solvable in
predictable time. The complexity of the problem typically increases exponentially with its scope,
which often results in the usage of heuristic methods. The selection of the best algorithm and model
with the right balance between specificity and generality is therefore a considerable challenge for the
designer to solve. A very specific algorithm will fail for inappropriate problems, while general
algorithms may deliver the best solution but within an infeasible calculation time.
There are approaches to help the planner find the optimal optimization method with an adaptive
learning process, so that the system can find the most suitable algorithm and settings by itself over
time [16]. Due to the exponential behavior of these optimization problems, an approach based on
decomposition can be used to decrease the solving time for a high number of orders. There, orders are
grouped and sorted by priorities before their optimal planning is solved separately [17]. This approach,
called cyclic MIP, causes the problem complexity to increase only linearly with each additional
subproblem.
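A minimal sketch of the assignment problem as a MIP follows, using the open-source modelling library PuLP; the tasks, resources and processing times are invented, and the full model would additionally cover stations, buffers and deadlines.

import pulp

tasks = ["t1", "t2", "t3"]
resources = ["worker", "robot"]
time = {("t1", "worker"): 3, ("t1", "robot"): 5,   # assumed processing
        ("t2", "worker"): 4, ("t2", "robot"): 2,   # times in minutes
        ("t3", "worker"): 6, ("t3", "robot"): 6}

prob = pulp.LpProblem("task_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, resources), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan                                   # minimise the slowest resource
for t in tasks:                                    # each task on exactly one resource
    prob += pulp.lpSum(x[t][r] for r in resources) == 1
for r in resources:                                # workload bounds the makespan
    prob += pulp.lpSum(time[(t, r)] * x[t][r] for t in tasks) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
assignment = {t: r for t in tasks for r in resources if x[t][r].value() == 1}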
Methodology for an automated assembly process planning
Association Model – Fundamental for solving the given optimization problem – the assembly
scheduling – is an accurate association model that determines possible allocations of resources and
tasks. In general, a mapping function
f : R → T   (1)
has to be defined that allocates a set of resources R to a set of tasks T. A specific resource r ∈ R is a
physical processing unit (human or automated equipment), which can solve an assembly task at a
specific location. Examples are robots, humans, or assembly machines. A specific task t ∈ T is part
of the assembly process and must be processed by at least one resource.
In the next step, a set of features E with features e ∈ E is added to define tasks and resources. A
feature contains at least a type and a value, such as “size” and “6 meter”. Now, a resource r_i can be
fully described by its set of features E_{r_i} ⊆ E, so that
∀ E_{r_i} = E_{r_j} : r_i = r_j.   (2)
The same is true for tasks t_i, so the function given in (1) can be specified as
f(r_i) = { t_j ∈ T | E_{t_j} = E_{r_i} }.   (3)
This description still cannot fulfill the requirements of the desired model, because resources can
now only be assigned to tasks if they have exactly the same features, which is not realistic in general.
To close this gap, a set of requirements Q with requirements q ∈ Q is introduced which can fully
describe a task. Requirements describe tasks as features describe resources. Now equation (3) can be
modified so that a resource r can execute a task t if the function g is one:
f(r_i) = { t_j ∈ T | g(Q_{t_j}, E_{r_i}) = 1 }.   (4)
A disadvantage of this expression is the fact that a function g for checking a set of features against
a set of requirements is needed. This can be done manually or with an automatic function. If the
requirements Q_{t_j} of the task can be fulfilled by the features E_{r_i} of the resource, the function
is one; otherwise, it is zero. The advantage is that the overall assignment problem is divided into
small, easy-to-solve problems that can be learned, combined and predicted, for example, by a machine
learning system.
Figure 1 shows the last expansion stage of the model described in equation (4). Resources own
features, tasks own requirements, while features and requirements have to be associated with each
other.
[Figure: resources 1–4 own features 1–3; tasks 1–4 own requirements 1–4; features and requirements are linked by associations.]
Figure 1: Association Model with resources, features, requirements and tasks
Relation between resource feature and task requirements – Basis for the final mapping of
recourses and tasks is the mapping of features and requirements, which were introduced earlier.
Different hypothesis can be constructed, which may disagree with each other while both have a
legitimization to exist.
The first hypothesis is that one features is associated with exactly one requirement, which is
called one-to-one relation. Following this hypothesis, a resource ‫ݎ‬௞ can adopt a task ܽ௜ from
resource‫ݎ‬௝ , if‫ܧ‬௥ೕ ‫ܧ ك‬௥ೖ . In this case, resource ‫ݎ‬௞ is abler or mightier than‫ݎ‬௝ . Analog to this, a task
‫ݐ‬௞ can be called more demanding than task‫ݐ‬௝ , ifܳ௧ೕ ‫ܳ ك‬௧ೖ . If a resource can execute task‫ݐ‬௞ , then it
also can execute‫ݐ‬௝ .
The second hypothesis assumes an n-to-n relation between features and requirements. In this hypothesis, associations can overlap with each other.
At this point in time it is not clear which assumption (one-to-one or n-to-n relation) will lead to better overall results. Nevertheless, the following example shows that an n-to-n relation is needed in some cases. Consider a case with resource features named maximum force with the different values 10 N, 20 N and 40 N. Opposite them stand requirements named needed force with the values 35 N, 15 N and 5 N.
Figure 2: n-to-n relation of features and requirements
As shown in Figure 2, it is necessary to have the possibility of n-to-n relations between features and requirements in order to express dependencies of values that can be sorted on ordinal or cardinal scales. In many cases, as in the given example, a feature fulfils every requirement whose value is lower than or equal to (in other cases, higher than or equal to) the feature's value.
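A short sketch of this ordinal rule with the values from the example; the chosen representation is an illustrative assumption.

```python
# Sketch of the n-to-n ordinal rule from Figure 2: a "max. force" feature
# fulfils every "needed force" requirement whose value does not exceed it.

features = [40.0, 20.0, 10.0]       # max. force of three resources, in N
requirements = [35.0, 15.0, 5.0]    # needed force of three tasks, in N

associations = {
    feat: [req for req in requirements if req <= feat]
    for feat in features
}
print(associations)
# -> {40.0: [35.0, 15.0, 5.0], 20.0: [15.0, 5.0], 10.0: [5.0]}
```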
Classification of the optimization problem – In order to model the problem with an appropriate level of detail – so that an optimization yields usable results in feasible calculation time – some assumptions have to be made. The chosen model consists of stocks, workstations, buffers and resources that can be assigned to workstations. This, in fact, adds a new dimension to the assignment problem: designating resources to workstations at a specific time. It is assumed that workstations are equipped in a general way, so that different variants, models and assembly steps can be executed on every workstation as long as a corresponding work plan is attached
to the station. The assembly process starts with a base and grows over different workstations to the
final product. A schedule defines deadlines for the finalization of products or product batches.
The assembly equivalent in the model can be described by a part-centered or a task-centered view. A part-centered view would describe the product by disassembling it into single assembly parts. A task-centered view, however, describes the product by the different tasks needed to assemble it. At this point it should be considered that the assembly task order constitutes a significant part of the optimization problem. Therefore, a task-centered view is favored, in which the order of the tasks can be expressed. Within one task, multiple parts can be assembled.
A production typically includes additional information and physical flows. These shall be
considered separately. Here, material flows are reduced to the transport of the growing assembly
group between the workstations and from or to the stocks and buffers. The material supply for the
workstations is not considered. The information flow is reduced to the supply of work plans for the
stations. Finally, buffer limits and deadlines have to be satisfied.
Figure 3: Description of the optimization problem as directed graph
Figure 3 shows an example mock-up of a production line resulting from the model, with the elements previously described. The assembly process begins at the source stock, passes through the workstations and ends at one of the target stocks. The direction of the arrows gives the preferred assembly flow direction in case a one-directional flow is desired. The single problems described in the classification are combined to optimize the overall assignment problem with respect to the time scale. The problem can be considered NP-hard.
Allocation of resource features and task requirements
The allocation of resources and tasks is done in two steps. The first step is the prediction of the
possible allocation of resources and tasks. This is done by the association system based on the
association model – in particular, via the allocation of the resource features and task requirements.
The result of the association system is used in the optimization system together with additional
constraints, where the final allocation with respect to a concrete order is determined. The additional constraints, which cannot be explained in detail here, take care of circumstances like the capacity of buffers, the delivery time of the assembled part, delivery or transport times between different assembly stations, degrees of freedom of employees, and many more. To solve this optimization problem, a branch and bound pattern was used as a basis.
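The paper does not detail the algorithm, so the following is only a minimal branch-and-bound sketch for assigning tasks to resources, assuming a hypothetical cost matrix and the feasibility information produced by the association system.

```python
import math

# Minimal branch-and-bound sketch: assign each task to one feasible resource
# at minimum total cost. 'feasible' encodes the association system's output
# (equation (4)); 'cost' is a hypothetical effort estimate per pairing.

def branch_and_bound(cost, feasible):
    n_tasks = len(cost)
    best = {"cost": math.inf, "plan": None}

    def lower_bound(task, partial_cost):
        # Optimistic bound: cheapest feasible resource for each open task.
        bound = partial_cost
        for t in range(task, n_tasks):
            options = [cost[t][r] for r in range(len(cost[t])) if feasible[t][r]]
            if not options:
                return math.inf
            bound += min(options)
        return bound

    def branch(task, partial_cost, plan):
        if task == n_tasks:
            if partial_cost < best["cost"]:
                best["cost"], best["plan"] = partial_cost, plan[:]
            return
        if lower_bound(task, partial_cost) >= best["cost"]:
            return  # prune: this branch cannot beat the incumbent
        for r in range(len(cost[task])):
            if feasible[task][r]:
                plan.append(r)
                branch(task + 1, partial_cost + cost[task][r], plan)
                plan.pop()

    branch(0, 0.0, [])
    return best

cost = [[2, 5], [4, 1], [3, 3]]          # 3 tasks x 2 resources
feasible = [[1, 1], [0, 1], [1, 1]]      # task 2 only fits resource 2
print(branch_and_bound(cost, feasible))  # -> {'cost': 6.0, 'plan': [0, 1, 0]}
```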
Case study
For the evaluation and validation of the developed model and the implemented solving algorithm, an exemplary assembly group was created, similar to many products assembled manually today in small and medium-sized enterprises. The structure, shown in Figure 4, consists of a housing, an electrical board, switches, a display, and a sticker.
Figure 4: Exploded drawing of the used assembly group
Two different work plans can be used as input for the solver. One plan includes a determined
assembly order for every task while the other plan only includes flexible order relations of the tasks.
Examples of tasks and their respective order requirements can be seen in Table 1. The requirements for the tasks themselves can be seen in Figure 5 (left-hand side).
Table 1: Example representation of assembly tasks and their requirements

Number   Label for assembly step            Order requirement
1        Place board on housing (bottom)    before 2
2        Mount board with 4 screws          before 3
…        …                                  …
9        Fix sticker on housing (top)       none
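Flexible order relations such as those in Table 1 can be resolved into valid sequences by a topological sort; a minimal sketch with illustrative data (only three of the nine tasks are spelled out):

```python
from graphlib import TopologicalSorter

# Sketch: resolving the flexible "before" relations of Table 1 into a valid
# assembly sequence. Data are illustrative.

before = {1: [2], 2: [3], 9: []}   # task -> tasks that must come after it

# TopologicalSorter expects predecessor sets, so invert the relation.
preds: dict[int, set[int]] = {t: set() for t in before}
for task, successors in before.items():
    for succ in successors:
        preds.setdefault(succ, set()).add(task)

order = list(TopologicalSorter(preds).static_order())
print(order)  # e.g. [1, 9, 2, 3] - any order respecting the constraints
```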
To fulfill the given tasks, different resources and workstations are available. In this example,
there are at least as many workstations as tasks to assemble the product. All stations can process all
tasks if a qualified resource is assigned to the station and the corresponding work plan is available.
This enables the widest range of possible optimization solutions. There are skilled workers who have different trainings/permissions, represented as features, and there is a robot with a fixed workstation and specific features (tools). Human and robot features are represented in Figure 5 (right-hand side).
Figure 5: Feature and requirement tree for validation. The trees cover replace and force in the directions X, Y, Z, AX, AY and AZ, stiffness (rigid/non-rigid), dexterity (low/high), screw in (crosstrip/slotted), stick, tools (glue gun; screwdriver: crosstrip, slotted, general) and max. force (100 N, 200 N, 500 N).
The output of the optimization process includes specific data: the association of resources to workstations, the mapping of tasks to resources, and a workflow definition. An important precondition for this optimization process is the information about the possible associations of resources and tasks, which is provided by the association system. Figure 6 shows an example result as a tree diagram. The system uses three different methods to predict possible allocations based on the association model introduced before. This can be seen in Table 2, where the different prediction results for the tasks (1, 2, 3, …) and resources (human 1, human 2, …, robot 1) are shown. Each cell lists, from left to right, separated by vertical lines: the general comparison of requirements and features, the knowledge database based on earlier associations, the association rules learned from the knowledge base and, on the far right, the correct desired value.
If a connection is indicated by at least one of the methods, the system overall predicts an allocation. Only if no method responds with an allocation is the overall prediction no allocation. Based on these possible associations, the final association of resources, workstations and tasks is calculated with respect to the optimization target. Results show that the association of tasks, workstations and resources depends on the batch size, while the calculation time increases exponentially with growing batch size. This can be dealt with by subdividing the batch into smaller batches and solving them sequentially. However, this method, called cyclic optimization, prohibits any conclusion about the optimality of the overall solution of the reconstructed total problem.
Figure 6: Association of requirements and features
Table 2: Association results

Task   Human 1    Human 2    Human 3    Human 4    Robot 1
1      O|X|X|X    O|X|X|X    O|X|X|X    O|X|O|X    O|X|O|X
2      O|X|O|X    O|X|O|X    O|X|O|X    O|X|O|X    O|X|O|X
3      O|O|O|O    O|O|O|O    O|O|O|O    O|O|O|X    O|O|O|O
…      …          …          …          …          …
9      O|O|O|O    O|O|O|O    O|O|O|O    O|X|X|X    O|O|O|O
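A minimal sketch of the combination rule described above, assuming the per-method results are encoded as in Table 2:

```python
# Sketch of the overall prediction rule: an allocation is predicted if at
# least one of the three methods indicates a connection. 'X' marks an
# indicated allocation, 'O' none (as in Table 2).

def overall_prediction(method_results: str) -> bool:
    """Combine per-method results such as 'O|X|O' by logical OR."""
    return "X" in method_results.split("|")

# First three columns of a Table 2 cell (comparison | knowledge base | rules):
print(overall_prediction("O|X|O"))  # -> True  (allocation predicted)
print(overall_prediction("O|O|O"))  # -> False (no allocation)
```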
Summary
Based on a capability-oriented approach, an association model that establishes the association of assembly tasks and resources was developed. A planning module was created to solve the optimization problem. The main challenge for the model preparation and the implementation of the solving algorithm was the creation of a model that represents the real world with enough detail to cover all kinds of different problems, yet is lean enough to provide a solution in a realistic amount of time. With the generic structure, users can decide for themselves how detailed the modeling should be. First attempts have confirmed the exponential relation between input size (for example, batch size) and calculation time. By dividing the problem into sub-problems and applying cyclic optimization, the calculation time can be limited to linear growth.
Acknowledgements
The support of the German National Science Foundation (Deutsche Forschungsgemeinschaft - DFG)
through the funding of the graduate program “Ramp-up Management: Development of Decision
Models for the Production Ramp-Up” is gratefully acknowledged.
This research and development project is funded by the German Federal Ministry of Education
and Research (BMBF) within the Framework Concept “Research for Tomorrow’s Production” and
managed by the Project Management Agency Karlsruhe (PTKA). The author is responsible for the contents of this publication.
References
[1] Brecher, C., Storms, S., Ecker, C. et al. (2016). An Approach to Reduce Commissioning and Ramp-up Time for Multi-variant Production in Automated Production Facilities. Procedia CIRP, 51: 128–133.
[2] Boothroyd, G. (2005). Assembly Automation and Product Design (2nd Ed.). New York: CRC Press.
[3] Nof, S.Y. (2009). Springer Handbook of Automation – With 149 Tables. Berlin [u.a.]: Springer.
[4] Riley, F. (1999). Assembly Automation – A Management Handbook. New York/Lancaster: Industrial Press/Gazelle.
[5] Boothroyd, G., Dewhurst, P., & Knight, W.A. (2011). Product Design for Manufacture and Assembly (3rd Ed.). Boca Raton: CRC Press.
[6] Lotter, B. (1989). Manufacturing Assembly Handbook. London/Boston: Butterworths.
[7] Delchambre, A. (2013). Computer-aided Assembly Planning. [Place of publication not identified]: Springer.
[8] Eskilander, S. (2001). Design for Automatic Assembly – A Method for Product Design: DFA2. Stockholm.
[9] Beumelburg, K. (2005). Fähigkeitsorientierte Montageablaufplanung in der direkten Mensch-Roboter-Kooperation. Heimsheim: Jost-Jetter.
[10] Kurbel, K.E. (2013). Enterprise Resource Planning and Supply Chain Management – Functions, Business Processes and Software for Manufacturing Companies. Berlin/Heidelberg: Springer.
[11] Eversheim, W. (2002). Organisation in der Produktionstechnik 3: Arbeitsvorbereitung (4. Aufl.). Berlin/Heidelberg: Springer. Available at: http://dx.doi.org/10.1007/978-3-642-56336-2
[12] Cao, T. & Sanderson, A.C. (1996). Intelligent Task Planning Using Fuzzy Petri Nets. Singapore/River Edge, NJ: World Scientific.
[13] Crowson, R. (2006). The Handbook of Manufacturing Engineering (2nd Ed.). Boca Raton: CRC/Taylor & Francis.
[14] Weigert, G., Henlich, T., & Klemmt, A. (2011). Modelling and Optimisation of Assembly Processes. International Journal of Production Research, 49: 4317–4333.
[15] Hillier, F.S. & Lieberman, G.J. (2010). Introduction to Operations Research (9th Ed.). Boston: McGraw-Hill.
[16] März, L. (2011). Simulation und Optimierung in Produktion und Logistik. Berlin/Heidelberg: Springer.
[17] Bourdeaud'huy, T. & Korbaa, O. (2006). A Mathematical Model for Cyclic Scheduling with Work-in-Progress Minimization. IFAC Proceedings Volumes, 39: 63–68.
A three-step transformation process for the implementation of
Manufacturing Systems 4.0 in medium-sized enterprises
Christoph Liebrecht1,a, Jonas Schwind1, Moritz Grahm1 and Gisela Lanza1,b
1 wbk Institute of Production Science, Karlsruhe Institute of Technology (KIT), Kaiserstrasse 12, 76131 Karlsruhe, GERMANY
a Christoph.Liebrecht@kit.edu, b Gisela.Lanza@kit.edu
Keywords: Digital Manufacturing System, Evaluation, Roadmapping
Abstract
Introducing Manufacturing Systems 4.0 (MS4.0) is essential for the competitiveness of industrial
companies. Nevertheless, their knowledge about the digitalization of manufacturing and the
transition process is limited. This paper shows a structured way to plan, evaluate and implement
MS4.0. This paper uses a three-step approach: in the first and second step, different MS4.0 applications are structured and the interactions between them are analyzed. The paper focusses on
the third step, where a comprehensive method to evaluate different applications of MS4.0 and the
Balanced Scorecard to support a coordinated and structured implementation of MS4.0 applications
are introduced.
Introduction
In recent years, Manufacturing Systems 4.0 (MS4.0) have gained interest in the field of
production research. The introduction of Cyber-Physical Systems should result in shorter lead times,
higher quality and increased flexibility. According to a survey by McKinsey & Company, 92% of
the German manufacturers have a positive opinion on MS4.0 and see it as a chance rather than a
threat. Nonetheless, the report shows that 44% of the companies made no or only little progress
during 2015. Only one out of five manufacturers reports having implemented an MS4.0 strategy and only one out of three has assigned clear responsibilities for the implementation. The
main reasons for the current situation are: difficulty in coordinating actions, lack of a clear business
case to justify investments in IT systems and lack of necessary know-how and talent. [1]
Literature Overview
Gaining a structured overview of available MS4.0 methods is important to support the
introduction of MS4.0 in companies. For that reason, several structurings of Industry 4.0 have been
designed following different structuring principles (e.g. the McKinsey Digital Compass [2] is
structured according to value drivers in the production). Additionally, structuring principles from
other areas like the house model for integrated production systems according to Spath [3] can be
used as basis for a MS4.0 structuring. Another way to classify MS4.0 methods is the use of maturity
levels, which evaluate the development within a MS4.0 method (internal maturity level) or classify
a MS4.0 method within the overall development of MS4.0. As an example, a generic approach to
internal maturity levels is the maturity index for lean methods by Jondral [4]. An example for
external maturity levels is acatech’s “Industrie 4.0 Maturity Index” [5], which splits the development of Industry 4.0 into six maturity levels to which methods can be matched.
Besides structuring MS4.0 methods and assessing their maturity levels, it is essential to address
interactions between individual MS4.0 methods, in order to get an extended overview on the topic
of MS4.0. An approach on identifying and evaluating interactions under uncertainty has been
designed by Aull [6]. Aull uses the concept of system dynamics to create a model that enables users
to fully understand the interplay of lean production methods and to support the definition of sets of implementation strategies [6]. System dynamics is a modelling method that includes various elements of a system and provides insight into the system's dynamic behavior under uncertainty [7].
The implementation of MS4.0 methods starts with strategic planning of new technologies, which
is supported by roadmapping methods. Roadmaps are widely applied in manufacturing companies
and usually visualize a market, product and technology perspective on a multilayer timeline [8].
However, they need to be tailored to the specific planning occasion, which is why many different
roadmapping approaches exist. One of the first roadmap approaches is the so-called technology
calendar by Westkämper, aiming at synchronizing the planning of the production program with the
introduction of new product and process technologies [9]. Subsequently, Burgstahler integrates
Westkämper’s technology calendar into a strategic planning process [10]. Other authors combine
technology roadmapping with planning processes of production resources, e.g. with factory planning
[11]. In conclusion, roadmapping is very relevant in manufacturing companies to facilitate planning
and the introduction of new technologies. However, none of the approaches fulfills the requirements
for planning the implementation of MS4.0, i.e. taking a broad perspective by additionally
integrating IT, HR and operational structure elements in the roadmap.
The research field of evaluating MS4.0 is still in its infancy. However, the evaluation methods
used for advanced manufacturing technologies (AMT), factory planning, information and work
systems offer valuable insights. Generally, evaluation methods can be classified in four groups:
economic, strategic, analytic and hybrid methods [12]. Since economic, strategic and analytic
methods have distinct shortcomings, hybrid methods are increasingly used to perform a
comprehensive evaluation. Particularly, economic and analytic methods are combined in recent
approaches. In factory planning, Kolakowski et al. combine a NPV calculation for monetary and
indirect monetary criteria with a weighted scoring models (WSM) for non-monetary criteria [13].
The idea is based on Zangmeister, who developed a three-step model to evaluate work systems. In
the first and second step, economic methods are used for monetary and indirect monetary criteria
respectively. Thereafter, non-monetary criteria are assessed in a WSM [14]. In addition to work
systems, the issues of qualitative and long-term benefits are also present for investments in information systems [15]. For instance, Chou et al. perform a fuzzy AHP based on 26 monetary and non-monetary criteria [16]. Westkämper et al. categorize the benefits into direct (monetary), indirect
(quantifiable) and strategic benefits [17]. They suggest NPV calculation, activity-based costing or
WSM and Balanced Scorecard respectively as evaluation methods for the implementation of Virtual
Reality. Isensee et al. evaluate investments in RFID technologies by monetarizing non-monetary
criteria based on cause-effect-relations to monetary criteria [18]. Yet, the discussed evaluation
approaches are not suitable for evaluating MS4.0 methods as they are missing the breadth and depth
required by small and medium-sized enterprises (MSEs).
To implement the strategy, we recommend the Balanced Scorecard, because it is a tool that is
used to align an organization’s business activities to its vision and strategy. It was developed by
Robert Kaplan and David Norton as a performance measurement system which combined non-financial and traditional financial performance measures. Thereby, it enables managers to have a
more balanced view of their organization’s performance. The Balanced Scorecard consists of four
perspectives: Innovation and Learning, Internal Business, Customer and Financial Perspective.
These are connected by bottom-up causal relationships. The viewpoints are not fixed and can be
adapted to any organization or business unit, to implement a strategy in practice. [19]
Solution
The implementation of MS4.0 requires thoughtful planning and preparation before the actual
investment decision is made. Therefore, we are proposing a three-step model (figure 1) aiming at
supporting MSEs in the transformation process towards MS4.0.
Fig. 1: Structure of transformation process (1. Structuring of Manufacturing Systems 4.0 Methods; 2. Interactions of Manufacturing Systems 4.0 Methods; 3. Implementation of Manufacturing Systems 4.0)
1. Structuring of Manufacturing Systems 4.0 Methods
To introduce MS4.0 in an efficient and structured manner, a first step for companies is a
structured overview of available technologies and methods. The structuring needs to be intuitive, close to industrial practice, and needs to define MS4.0 methods accurately and concisely while integrating internal and external maturity levels to represent developments within Industry 4.0.
To meet these requirements, the Industry 4.0 House has been designed, which uses a house
model with hierarchically ordered categories to structure methods and technologies of Industry 4.0.
The categories of the house are arranged in three areas: the base represents basic technologies, the columns include applications of Industry 4.0 in production, and the roof consists of applications combining and going beyond those of production.
Based on the Industry 4.0 House, a profile for MS4.0 methods was designed, which includes a
detailed description as well as targets, potentials and risks of the method. Additionally, internal
maturity levels (based on the maturity index by Jondral [4]) are described to show the development
stages of the method. Finally, it is stated, to which external maturity levels the method belongs.
2. Interactions of Manufacturing Systems 4.0 Methods
Before covering the implementation process of MS4.0, the following concept addresses the
interactions of MS4.0 methods. In order to enable decision makers not only to structure methods by their maturity level, but also to identify and evaluate efficient implementation strategies for MS4.0 methods, we develop an approach which provides a recommendation for the order of implementation fitted to specific frameworks. The concept is built on the basis of system dynamics.
The introduced concept takes a set of different aspects into account: interactions between MS4.0
methods in general, interactions between methods and specific key performance indicators,
production structures and basic requirements.
The first step is to choose MS4.0 methods and Key Performance Indicators (KPIs) based on
individual needs. While there is a large set of different methods that are discussed in the area of
MS4.0 and digitalization in general, the concept provides the most relevant methods and KPIs for
further analysis. To prevent a complex and misleading selection process, KPIs can be divided into
the following groups: costs, time, quality, and flexibility. To identify and quantify interactions between MS4.0 methods as well as correlations between methods and KPIs, experts familiar with the topic of MS4.0 and digitalization are interviewed.
The next step is to transfer all information into a system dynamics model that includes the aspect of uncertainty, since models of production systems cannot capture all internal and external influences exactly. This step can be supported by using advanced simulation and modelling software.
Additionally, information about production structures and basic requirements has to be taken
into consideration [20]. Depending on the individual settings of different production systems,
effective implementation strategies can vary.
3. Implementation of Manufacturing Systems 4.0
The implementation process of MS4.0 follows three distinct steps, which we address in the
following subsections. First, strategic planning sets the necessary base for the implementation of
MS4.0. Thereafter, specific MS4.0 methods are evaluated and selected to transform the
manufacturing system. Finally, a Balanced Scorecard approach aims at monitoring the realization
process.
3a. Strategic planning of Manufacturing Systems 4.0. The transformation to MS4.0 is a long-term, gradual process, which consists of the incremental implementation of MS4.0 methods. This
transformation process needs to be aligned with the overall corporate development. We suggest the
definition of a MS4.0 vision, which describes the manufacturing system’s role within the future
company. Based on the MS4.0 vision, objectives are derived and strategies, which describe the
measures to achieve the objectives, are formulated to specify the single steps along the
transformation process to MS4.0. The transformation is supported with advanced planning
techniques. We propose a MS4.0 roadmap in order to set up manufacturing companies for the future
of production.
The MS4.0 roadmap synchronizes production and product elements among each other to assure
an aligned development of the manufacturing system. The roadmap is based on Westkämper’s
technology calendar, which aims at synchronizing product program planning with the development
of product and process technologies [10]. Additionally, the elements HR, IT and process
organization are added to the roadmap. Within the production department, workers’ roles are
shifting to supervision and management activities under MS4.0. Close interaction of workers and
machines will become the norm. Thus, workers need to be technology savvy and highly qualified.
As skilled manpower is scarce, companies need to invest in the qualification of their workforce.
Integrated, state-of-the-art information systems build the foundation of a successful introduction of
MS4.0. Yet, many manufacturing companies, particularly MSEs, lack a solid IT basis. Therefore,
the IT landscape must be developed, representing an essential part of the MS4.0 strategy.
Furthermore, MS4.0 have an impact on companies’ organizational and operational structure.
Operational processes will be autonomously managed by decentralized units (Cyber-Physical
Systems), instead of being centrally controlled. Moreover, the hierarchy of operational control
changes to a flat network, where all units can communicate and interact with each other. Hence, the
organizational and operational structure must be adapted to MS4.0 requirements. IT, HR and
organizational readiness is a prerequisite for the implementation of MS4.0 and must be developed
first. The MS4.0 roadmap we suggest includes these elements in the company-wide planning process. Furthermore, the introduction of MS4.0 should be synchronized with the product program
and the introduction of new product technologies. In figure 2 we show an exemplary MS4.0
roadmap.
Fig. 2: Exemplary Manufacturing System 4.0 roadmap
3b. Evaluation of Manufacturing Systems 4.0. The evaluation process aims at identifying the best
investment alternative in terms of both financial and strategic benefit. A financial evaluation is
fundamental in the investment process and should always be performed to determine the cash in- and outflows [14]. However, acknowledging the shortcomings of economic methods in including
qualitative and long-term benefits, an additional evaluation is carried out which aims at covering
those (strategic) benefits.
The evaluation process consists of several steps, which are shown in figure 3 and explained in
detail in the following. Based on the MS4.0 vision and strategy, a MS4.0 method (investment
object) is identified and investment objectives are derived. Thereafter, the evaluation is performed
with two distinct methods. The financial evaluation is carried out applying the NPV method,
supplemented by a Monte-Carlo Simulation to account for the uncertainty of cash flows. For the
strategic evaluation, a fuzzy AHP technique is applied. The AHP supports the comparison and
ranking of the investment alternatives, while Fuzzy Set Theory accounts for the vague characteristic
of qualitative criteria.
By determining the investment object, the evaluation scope and time span are defined. An
investment object is a specific MS4.0 method, e.g. paperless production. The evaluation time span
should correspond to the investment object’s lifecycle in order to account for all cash flows
resulting from the investment. Additionally, investment alternatives are developed.
Next, the investment objectives are determined. In general, the investment should contribute to
the corporate vision and strategy. Thus, investment objectives derive from corporate and production
objectives. The most relevant objectives must be selected and ordered in an objective system. The
objective system should be complete, non-redundant, specific, operational and minimal [21].
Fig. 3: Structure of the evaluation method
Then, the financial benefit is calculated applying the NPV method. Financial objectives are cost
and income related. Therefore, the cash flows associated with each investment alternative should be
identified and forecasted over the evaluation time span. Monetary criteria over a basic life span of
machinery can be found in e.g. VDI [22]. In addition to the monetary criteria, indirect monetary
criteria are included in the NPV calculation. Indirect monetary criteria are quantitative criteria,
which can be linked to monetary criteria through a clear cause-and-effect-relationship. For instance,
the reduction of cycle time due to a new process technology can be measured and translated into a
monetary benefit. The determination of the financial impact of non-monetary criteria is called
monetarization. An overview of monetarization functions and financial impacts can be found in
Brieke [21] and VDI [22]. The financial impact of non-monetary criteria is then added to the
255
respective monetary criteria, resulting in a so called extended NPV [14]. Uncertain criteria are
modelled via probability distributions, e.g. by using non-standard beta distributions based on three
point estimations [23]. The NPV is thus calculated by the aggregation of monetary criteria (MC) and
financial impacts (FI), as shown in formula 1. Evaluation periods are represented by t with T as time
span, r is the interest rate, I and i correspond to the monetary criteria and J as well as j denote the
nonmonetary criteria. The NPV calculation is run several times with the Monte-Carlo Simulation,
resulting in a probability distribution of the NPV.
NPV = Σ_{t=0}^{T} [ (Σ_{i=1}^{I} MC_{i,t} + Σ_{j=1}^{J} FI_{j,t}) / (1 + r)^t ]    (1)
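A minimal Monte-Carlo sketch of this extended NPV calculation; the cash flow estimates are illustrative, and a triangular distribution stands in for the non-standard beta distribution mentioned above:

```python
import random

# Monte-Carlo sketch of the extended NPV of formula (1). Cash flows are
# illustrative; the paper suggests non-standard beta distributions from
# three-point estimates, approximated here by triangular distributions.

T = 5          # evaluation time span in periods
r = 0.08       # interest rate
RUNS = 10_000

# (low, high, mode) three-point estimates per period for monetary criteria
# (MC) and monetarized financial impacts (FI); period 0 is the investment.
mc_estimates = [(-120_000, -90_000, -100_000)] + [(20_000, 35_000, 30_000)] * T
fi_estimates = [(0, 0, 0)] + [(5_000, 12_000, 8_000)] * T

def sample_npv() -> float:
    npv = 0.0
    for t in range(T + 1):
        mc = random.triangular(*mc_estimates[t])
        fi = random.triangular(*fi_estimates[t])
        npv += (mc + fi) / (1 + r) ** t
    return npv

samples = sorted(sample_npv() for _ in range(RUNS))
mean = sum(samples) / RUNS
print(f"mean extended NPV: {mean:,.0f}")
print(f"5%/95% quantiles: {samples[RUNS // 20]:,.0f} / {samples[-RUNS // 20]:,.0f}")
```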
Thereafter, the strategic benefit is assessed with a fuzzy AHP technique. MS4.0 generates long-term benefits, which translate into monetary benefits over time. However, those benefits cannot be measured directly. Yet, they need to be included in a comprehensive evaluation. This is achieved by incorporating strategic criteria. The identification and selection of appropriate criteria form the core
of the strategic evaluation. As a basis, a comprehensive literature review has been conducted in
different research fields [14, 16, 21, 22, 24, 25, 26]. Evaluation criteria are deduced from the
evaluation of manufacturing systems, technology, AMT, IT/IS and RFID. The criteria are structured
with respect to production and functional area objectives. Production criteria capture the qualitative
effects of the MS4.0 method within the production department. For instance, efficiency describes
the economical use of manufacturing resources, while performance measures the speed and time of
production processes. Quality comprises product and process quality. Flexibility represents the
ability to handle variations of volumes and product mix, whereas transformability describes the
ability to adapt to a changing environment quickly and with minimal effort. Production management
summarizes the manufacturing system’s indirect benefits such as transparency or efficiency of
production management activities. However, the strategic contribution of MS 4.0 is not limited to
production. Thus, further criteria are formulated which span across different areas. These functional
criteria are grouped into technology, IT, HR, organization, customer, competition and ethics. The
HR perspective, for instance, evaluates changes of labor quality, workplace design and qualification
level resulting from the implementation of the MS4.0 method. The production and functional
criteria have been further detailed and compiled to a comprehensive catalogue of more than 100
different criteria. Only relevant criteria should be selected from the catalogue to reduce complexity
and speed up the assessment process. We suggest including at most the 20 most relevant criteria.
Since the strategic evaluation criteria are qualitative, a fuzzy AHP technique is applied which accounts for linguistic imprecision and subjectiveness in the pair-wise comparison process [27]. The fuzzy AHP can be calculated according to Chang’s [28] extent analysis method.
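A compact sketch of Chang's extent analysis under these definitions; the pairwise comparison matrix of triangular fuzzy numbers is illustrative, not data from the paper:

```python
# Sketch of Chang's extent analysis for a fuzzy AHP, assuming pairwise
# comparisons are given as triangular fuzzy numbers (l, m, u).

TFN = tuple[float, float, float]

def synthetic_extents(matrix: list[list[TFN]]) -> list[TFN]:
    row_sums = [tuple(sum(x[k] for x in row) for k in range(3)) for row in matrix]
    total = tuple(sum(rs[k] for rs in row_sums) for k in range(3))
    # Fuzzy multiplication with the inverse of the total sum (1/u, 1/m, 1/l).
    return [(rs[0] / total[2], rs[1] / total[1], rs[2] / total[0])
            for rs in row_sums]

def possibility(s2: TFN, s1: TFN) -> float:
    """Degree of possibility V(S2 >= S1)."""
    l1, m1, _ = s1
    _, m2, u2 = s2
    if m2 >= m1:
        return 1.0
    if l1 >= u2:
        return 0.0
    return (l1 - u2) / ((m2 - u2) - (m1 - l1))

def weights(matrix: list[list[TFN]]) -> list[float]:
    s = synthetic_extents(matrix)
    d = [min(possibility(s[i], s[j]) for j in range(len(s)) if j != i)
         for i in range(len(s))]
    return [di / sum(d) for di in d]

# Illustrative comparison of three investment alternatives.
m = [[(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
     [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
     [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)]]
print(weights(m))  # normalized global weights; alternative 1 dominates
```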
Finally, the results of the economic and strategic evaluation are plotted on a graph. The y-axis
denotes the strategic benefits. The outputs of the fuzzy AHP are global weights for each investment
alternative, which represent their respective importance. Since the weights are normalized, they
always lie between 0 (low) and 1 (high). On the x-axis, the economic benefits are denoted. The
economic benefit is represented by the extended NPV’s mean, which is drawn from the probability
distribution generated by Monte Carlo Simulation. Additionally, economic risk can be included
using the size of the circles. The coefficient of variation, which is a standardized measure of
dispersion of a probability distribution, can be used to compare economic risks across alternatives.
In doing so, all relevant information can be displayed on a single chart. Obviously, an investment alternative is better the larger the NPV (further to the right), the higher the strategic benefit (further to the top) and the lower the financial risk (the smaller the circle). If no investment alternative is absolutely dominant, decision makers need to carefully assess the alternatives’ benefits.
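A minimal plotting sketch of this chart with illustrative values for three alternatives:

```python
import matplotlib.pyplot as plt

# Sketch of the result chart described above: extended NPV (x-axis),
# fuzzy AHP weight (y-axis), circle size ~ coefficient of variation.
# All alternatives and values are illustrative.

alternatives = ["A", "B", "C"]
npv_mean = [120_000, 80_000, 150_000]        # from the Monte-Carlo simulation
ahp_weight = [0.55, 0.30, 0.15]              # normalized strategic weights
coeff_variation = [0.10, 0.25, 0.40]         # std. dev. / mean of the NPV

sizes = [cv * 5_000 for cv in coeff_variation]  # scale circles for visibility
plt.scatter(npv_mean, ahp_weight, s=sizes, alpha=0.5)
for name, x, y in zip(alternatives, npv_mean, ahp_weight):
    plt.annotate(name, (x, y))
plt.xlabel("extended NPV (mean)")
plt.ylabel("strategic benefit (fuzzy AHP weight)")
plt.title("Investment evaluation: larger circle = higher economic risk")
plt.show()
```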
Fig. 4: Visualization of investment evaluation results
3c. Implementation of Manufacturing Systems 4.0. To implement the applications in the everyday production process, a newly developed, modified version of the Balanced Scorecard, called Balanced Scorecard 4.0, is recommended in this paper. It is used to transform the strategic goals and selected applications into a well-balanced measurement system. Thereby, it helps to control and monitor the change process, to communicate the process to the workers and to focus on the important improvements.
The system consists of four perspectives which reflect the critical areas for a successful
introduction of MS4.0: HR Perspective, IT Infrastructure Perspective, Process Perspective and
Performance Perspective. The organization selects KPIs for every perspective to measure its
progress. To keep it simple, the number of KPIs is limited to six for each viewpoint.
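A minimal sketch of such a scorecard structure, enforcing the limit of six KPIs per perspective; the data class and the example KPI values are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Sketch of a Balanced Scorecard 4.0 data structure with the four
# perspectives named in the text and at most six KPIs per perspective.

MAX_KPIS = 6
PERSPECTIVES = ("HR", "IT Infrastructure", "Process", "Performance")

@dataclass
class Perspective:
    name: str
    kpis: dict[str, float] = field(default_factory=dict)

    def add_kpi(self, kpi: str, value: float) -> None:
        if len(self.kpis) >= MAX_KPIS and kpi not in self.kpis:
            raise ValueError(f"{self.name}: at most {MAX_KPIS} KPIs allowed")
        self.kpis[kpi] = value

scorecard = {name: Perspective(name) for name in PERSPECTIVES}
scorecard["HR"].add_kpi("trainings in digitization per worker", 2.0)
scorecard["IT Infrastructure"].add_kpi("share of MS4.0-ready machines", 0.4)
scorecard["Process"].add_kpi("degree of automation", 0.6)
scorecard["Performance"].add_kpi("lead time [days]", 12.0)
```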
Fig. 5: Structure of Balanced Scorecard 4.0
The four perspectives are linked by causal relationships. The basis is the HR Perspective, because a change in the IT infrastructure will have no effect if the workforce does not possess the ability to use it properly [29]. The prerequisite for automated and intelligent processes is the IT Infrastructure: fast data processing, data availability and networking are needed [30]. In the Performance Perspective, the impact of all changes is measured as the result of the transformation.
HR Perspective: This perspective answers the question of which abilities the production staff needs to implement MS4.0 effectively. The aim is to integrate the worker into the production system as the superordinate control instance. The implementation of MS4.0 methods has a big impact on the work of the production staff, completely changing the kind of interaction between worker and production system. [31]
The HR Perspective shows what know-how improvements the workers need for a successful
MS4.0 implementation and translates the goals into key performance indicators. Possible KPIs are the cost of developing each employee or the number of trainings in digitization per worker.
IT Infrastructure Perspective: In this viewpoint, we focus on the question of which changes in the data and IT infrastructure are necessary for the implementation of MS4.0 methods. To introduce MS4.0 methods, the organization must update its data and IT infrastructure. From a hardware point of view, intelligent sensors and a new generation of man-machine interfaces are necessary. [32]
To implement new applications like a real-time image of the production, data handling and data collection need to be enhanced. Standardized data management and media-break-free data transfer gain importance [30]. Measures that can be used to capture the progress from a data point of view are the number of media breaks, the data stock and the number of data points captured. The number of modern man-machine interfaces, the number of smart sensors or the percentage of MS4.0-ready machines are KPIs for the hardware point of view.
Process Perspective: This perspective addresses the question of how the processes need to change to make the implementation of MS4.0 more effective. Through the Internet of Things, Artificial Intelligence and Machine Learning, new forms of process automation become possible. An increasing number of processes will be decentralized and self-controlled [33]. Examples of KPIs are the degree of automation, the percentage of decentralized processes or the setup time.
Performance Perspective: The last perspective shows how the changes in all other perspectives influence the performance (time, quality, costs) of the production. In the end, all applied methods should lead to an improvement in time, quality or costs. KPIs are the cost of quality correction, lead time or productivity.
Application
The presented approach is currently being implemented in the course of the Intro 4.0 research project. The aim of the initiative is to develop implementation strategies for MSEs. One example is the transformation towards a paperless production at era-contact GmbH, a specialist for electrical rail coupling. First, a clear vision of MS4.0 was defined and translated into a clear strategy using the MS4.0 roadmap. Then the different MS4.0 methods were evaluated and, afterwards, the Balanced Scorecard 4.0 was used for the implementation process.
Summary
This paper introduces a three-step approach for a MS4.0 transformation and focusses on the third step. There, we showed how a MS4.0 vision can be translated into an implementation roadmap and presented an approach for the evaluation of different MS4.0 methods to assess each method’s strategic and monetary benefit. Finally, we focused on the implementation, recommending a Balanced Scorecard as a tool to keep track of the process.
The presented approach can be adapted to different organizations regardless of the degree of
maturity of their MS4.0. We found a solution to four of the five problems we identified in the
introduction. It closes the coordination gap of putting an existing MS4.0 method into practice by using a step-by-step approach. It enables decision-makers to put a monetary value on each MS4.0 method and encourages them to make the necessary investments in the future. Furthermore, it
helps to communicate the aims within the firm by using the Balanced Scorecard, which puts, among
other things, emphasis on the development of the workforce.
Future research has to be done along the entire MS4.0 transformation process – structuring,
planning, evaluation and implementation. It also has to be determined which software tools are useful to support the process.
We extend our sincere gratitude to the Bundesministerium für Bildung und Forschung for
supporting this research project 02P14B161 “Befähigungs- und Einführungsstrategien für Industrie
4.0 – Intro 4.0” (“Empowerment and Implementation Strategies for Industry 4.0”).
References
[1] McKinsey&Company, Industry 4.0 after the initial hype. Where manufacturers are finding
value and how they can best capture it, McKinsey Digital (2016) 1-36.
[2] McKinsey & Company, Industry 4.0: How to navigate digitization of the manufacturing sector,
München (2015)
[3] D. Spath, Ganzheitlich produzieren: Innovative Organisation und Führung, Logis, Stuttgart
(2003)
[4] A.G. Jondral, Simulationsgestützte Optimierung und Wirtschaftlichkeitsbewertung des Lean-Methodeneinsatzes, Diss., Karlsruher Institut für Technologie (2013)
[5] G. Schuh, J. Gausemeier, W. Wahlster, R. Anderl, M.H. ten Hompel, Industrie 4.0 Maturity
Index. Managing the Digital Transformation of Companies (acatech Study), Herbert Utz Verlag,
München (2017)
[6] F. Aull, Modell zur Ableitung effizienter Implementierungsstrategien für Lean-Production-Methoden, Technische Universität München (2012)
[7] V. Tang, S. Vijay, System Dynamics. Origins, development, and future prospects of a method,
Research Seminar in Engineering Systems (MIT) (2001)
[8] R. Phaal, C.J.P. Farrukh, D.R. Probert, Technology roadmapping - planning framework for
evolution and revolution, Technological forecasting and social change 71 (1) (2016) 5–26.
[9] E. Westkämper, Strategische Investitionsplanung mit Hilfe eines Technologiekalenders, in:
Horst Wildemann: Strategische Investitionsplanung für neue Technologien in der Produktion.
Tagungsband; Ges. für Management und Technologie (1986) 143–182.
[10] B. Burgstahler, Synchronisation von Produkt- und Produktionsentwicklung mit Hilfe eines
Technologiekalenders. Zugl.: Braunschweig, Techn. Univ., Diss., Essen: Vulkan-Verl, (1996)
[11] M. Eikötter, Synchronisation der Produkt-, Technologie- und Fabrikplanung durch integratives
Roadmapping. Zugl.: Hannover, Univ., Diss., 2011. Garbsen (2011).
[12] F.T.S. Chan, M.H. Chan, H. Lau, R.W.L Ip, Investment appraisal techniques for advanced
manufacturing technology. A literature review, Integrated Mfg Systems 12 (1) (2001) 35–47.
[13] M. Kolakowski, R. Schady, K. Sauer, Grundlagen für die „Erweiterte Wirtschaftlichkeitsrechnung (EWR)“. Ganzheitliche Systematik zur Integration qualitativer Kriterien in der
Fabrikplanung, wt Werkstattstechnik online 4 (2007) 226–231.
[14] C. Zangemeister, Erweiterte Wirtschaftlichkeitsanalyse (EWA). Grundlagen, Leitfaden und PC-gestützte Arbeitshilfen für ein "3-Stufen-Verfahren" zur Arbeitssystembewertung. Wirtschaftsverl.
NW Verl. für Neue Wiss, Bremerhaven (2000)
[15] T. Pietsch, Bewertung von Informations- und Kommunikationssystemen. Ein Vergleich
betriebswirtschaftlicher Verfahren. 2., neu bearb. und erw. Aufl. Berlin: Schmidt (2003)
[16] T. Chou, T. Seng-cho, G. Tzeng, Evaluating IT/IS investments. A fuzzy multi-criteria decision
model approach, European journal of operational research 173 (3) (2006) 1026–1046.
[17] E. Westkämper, H. Neunteufel, C. Runde, S. Kunst, Ein Modell zur Wirtschaftlichkeitsbewertung des Einsatzes von Virtual Reality für Aufgaben in der digitalen Fabrik,
Werkstattstechnik wt-online 3 (2006) 104–109.
[18] J. Isensee, S. Zeibig, M. Seiter, A. Martens, M. Elsweier, Ganzheitliche Wirtschaftlichkeitsbewertung von RFID-Investitionen am Beispiel der dezentralen Produktionssteuerung, IM (München) 22 (4) (2007) 57–63.
[19] R. Kaplan, D. Norton, The Balanced Scorecard. Measures that drive performance, Harvard
Business Review (1992) 70-80.
[20] M. Fuchs, O. Schell, Die digitale Transformation und das Internet of Things werden
allgegenwärtig, Diplomatic Council (2016)
[21] M. Brieke, Erweiterte Wirtschaftlichkeitsrechnung in der Fabrikplanung. Zugl.: Hannover,
Univ., Diss., 2009. Garbsen: PZH Produktionstechn. Zentrum (2009)
[22] VDI-Fachausschuss Fabrikplanung, Strategien und nachhaltige Wirtschaftlichkeit in der
Fabrikplanung. Standorte gezielt auswählen, Investitionen sicher planen. Berlin: Beuth (2012)
[23] C. Liebrecht, A. Jacob, A. Kuhnle, G. Lanza, Multi-Criteria Evaluation of Manufacturing
Systems 4.0 under Uncertainty, ScienceDirect (2017) 1-6.
[24] C.L. Heger, Bewertung der Wandlungsfähigkeit von Fabrikobjekten. Zugl.: Hannover, Univ.,
Diss., 2006. Garbsen: PZH Produktionstechn. Zentrum (2007)
[25] W. Pelzer, Methodik zur Identifizierung und Nutzung strategischer Technologiepotentiale.
Zugl.: Aachen, Techn. Hochsch., Diss., Als Ms. gedr. Aachen: Shaker (1999)
[26] S.M. Ordoobadi, N.J. Mulvaney, Development of a justification tool for advanced
manufacturing technologies. System-wide benefits value analysis, Journal of Engineering and
Technology Management 18 (2) (2001) 157–184.
[27] T. Demirel, N.C. Demirel, C. Kahraman, Fuzzy Analytic Hierarchy Process and Its Application.
In: M.P. Panos, D, Ding-Zhu, K. Cengiz (Eds.), Fuzzy Multi-Criteria Decision Making. Theory and
Applications with Recent Developments. Heidelberg: Springer 53–84. (2008)
[28] D. Chang, Extent analysis and synthetic decision, Optimization techniques and applications 1
(1) (1992) 352–355.
[29] T. Ludwig, C. Kotthaus, M. Stein, H. Durt, C. Kurz, J. Wenz, T. Doublet, M. Becker, V. Pipek,
V. Wulf, Arbeiten im Mittelstand 4.0 – KMU im Spannungsfeld des digitalen Wandels, Springer
Fachmedien Wiesbaden 53 (2016) 71-86.
[30] E. Abele, R. Anderl, J. Metternich, A. Arndt, A. Wank, Industrie 4.0 - Potentiale, Nutzen und
Good-Practice-Beispiele für die hessische Industrie, Meisenbach GmbH Verlag (2015) 1-47.
[31] A. Lüder, Integration des Menschen in Szenarien der Industrie 4.0. In: T. Bauernhansl, M. ten Hompel, B. Vogel-Heuser, Industrie 4.0 in Produktion, Automatisierung und Logistik. Anwendung,
Technologien, Migration, Springer Vieweg (2014) 493-505.
[32] D. Gorecky, M. Schmitt, M. Loskyll, Mensch-Maschine-Interaktion im Industrie 4.0-Zeitalter. In: T. Bauernhansl, M. ten Hompel, B. Vogel-Heuser, Industrie 4.0 in Produktion, Automatisierung und Logistik. Anwendung, Technologien, Migration, Springer Vieweg (2014) 525-542.
[33] D. Spath, O. Ganschar, S. Gerlach, M. Hämmerle, T. Krause, S. Schlund, Produktionsarbeit der
Zukunft. Industrie 4.0., Fraunhofer Verlag (2016) 1-155.
Dynamically Interconnected Assembly Systems – Concept Definition,
Requirements and Applicability Analysis
Guido Hüttemann1, a, Amon Göppert1, b, Pascal Lettmann2, c and Robert H.
Schmitt1, d
1 Laboratory for Machine Tools and Production Engineering (WZL) of RWTH Aachen University, Germany
2 RWTH Aachen University, Germany
a g.huettemann@wzl.rwth-aachen.de, b a.goeppert@wzl.rwth-aachen.de, c pascal.lettmann@rwth-aachen.de, d r.schmitt@wzl.rwth-aachen.de
Keywords: Assembly, Manufacturing System, Reconfiguration
Abstract. The increasing complexity in manufacturing, caused by a continuously rising number of product segments, models and variants as well as shorter product lifecycles, requires a frequent adaption of assembly systems. The consequent reconfigurations are the movement, extension or removal of assembly stations. These reconfigurations are associated with high efforts in both time and cost
of assembly stations. These reconfigurations are associated with high efforts in both time and cost
with respect to the currently used rigidly linked assembly lines and the fixed sequence of assembly
stations. A possible solution is the organisational form of dynamically interconnected assembly
systems (DIAS), which enables a flexible sequence of assembly steps for each individual product,
referred to as job route. Therefore, this new organisational form is a paradigm shift in assembly
systems design. A control system manages the resulting complex product flow by determining job
routes depending on the system and product status in an event-driven and automated manner. DIAS
allow for a mapping of variant specific processes without efficiency losses and reconfiguration
without interrupting production. As a result, they are well suited for a wide range of applications. In
the following, the concept of DIAS and the state of the art is introduced. Furthermore, the
requirements for implementing this concept derived from expert discussions as well as a general
definition are presented. Finally, an outlook onto an applicability analysis is given.
Introduction
The currently deployed assembly systems use technologies such as fixed transfer systems, which
limit the system in the space domain. In addition, line balancing and the elimination of buffers limit
the system in the time domain. This design works efficiently in a stable market environment with
fixed cycle times. However, an increasing number of variants, caused by the trend towards more specific customer needs, and volatile sales volumes demand new approaches that enable the cost-efficient, frequent reconfiguration of assembly systems [1].
The implementation of the paradigms scalability, flexibility and modifiability is necessary to
satisfy this demand. Associated paradigms have been a focus of recent research projects. Especially
Reconfigurable Manufacturing and Assembly Systems (RMS and RAS, respectively) have been the subject of extensive research, including advances in mixed-model assembly lines, modularisation and plug and produce. Nevertheless, RAS do not provide the required flexibility because of restrictions
resulting from their configuration [2].
This paper presents the concept of Dynamically Interconnected Assembly Systems (DIAS) as a
solution to the deficit of a suitable assembly system design. In the following, the state of the art is
discussed in more detail with a focus on RMS and a general definition of the DIAS concept is
provided. Furthermore, the technological and organisational requirements for implementing this
concept are presented. Subsequently, influencing factors on the applicability of DIAS are analysed
and a conclusion of the results and an outlook are given.
* Submitted by: Guido Hüttemann
State of the Art – Organisational Approaches for Flexible Production Systems
Product variety management is the most effective method to achieve flexible manufacturing
systems [1]. As this is not always possible due to specific customer requirements, manufacturing
systems need to be reconfigurable regarding two aspects. One aspect is their ability to produce
different products, the other is their production capacity [3]. On the work station level this is achieved
through universality, inherent flexibility (e.g. CNC machining centres with a tool magazine) or design for reconfigurability by defining product-family-related design and solution spaces [5]. On the line or segment level, flexibility is largely influenced by the manufacturing system's configuration (i.e. degree of parallelization, number of intersections) [6].
Conventional manufacturing systems, with a strong focus on machining, rely on dedicated manufacturing lines (DML), which are designed to produce mass-production parts at the highest efficiency using purpose-built machines. Another approach are Flexible Manufacturing Systems (FMS) [7,8], which typically use general-purpose CNC machines to produce a number of previously known, different parts at reduced efficiency. DML provide high throughput but become inefficient when product variants are required, whereas FMS can be used to produce a selection of products, but cannot be
scaled in their output without large investments in parallel FMS [3].
Therefore, Koren and Shpitalni (2010) introduced Reconfigurable Manufacturing Systems (RMS), combining both the high throughput of DML and the flexibility of FMS. Accordingly, a manufacturing system is reconfigurable when it can easily change its physical structure (i.e. configuration) and when it is designed for a part family instead of a unique product [3].

Figure 1: Example for a RMS configuration (see [3]).

DML, FMS and RMS share their general components consisting of multiple manufacturing machines and a common transfer system.
Suitable buffers and parallelisation of machines allow the decoupling of the system's cycle time (i.e. the average rate at which products exit the manufacturing system) from the cycle time of each individual work station. Research on RMS has covered line balancing (e.g. [9,10]) and possible configurations and their impact on productivity (e.g. [3,11,12]), largely with regard to machining.
RMS, like all manufacturing systems for industrial products, typically consist of multiple stages that partially process the product until it is finished. The configuration of a system determines its productivity, responsiveness, convertibility and scalability [3]. Koren and Shpitalni (2010) provide a method to classify the resulting configurations for multi-stage systems. Figure 1 gives an example of a practicable RMS [3]. Scalability of RMS is achieved by adding more machines to a cell gantry (striped box (3b) in Figure 1), providing more capacity for the task assigned to that particular branch. The machine allocation of each job is based on the availability of a machine and the job requirements. The resulting scheduling is the biggest challenge when implementing such systems [4]. Even though the solutions introduced above are largely motivated by machining systems, they are generally applicable to assembly systems as well.
Recent works focus on the adaptation of the RMS principle to assembly. Huettemann et al. (2016)
discuss the general applicability of the RMS principle to assembly and come to a positive conclusion
[13]. Greschke et al. (2014) introduce matrix structures motivated by independence from cycle times [14]. Schönemann et al. (2015) further investigate matrix structures using simulation [1]. Both works focus on automotive final assembly and refer to generic use cases. At the time of writing, there are no investigations regarding the requirements for implementing such systems or general discussions of factors that influence their applicability.
262
Concept Definition “Dynamically Interconnected Assembly System”
The general concept underlying a dynamically interconnected assembly system is defined as follows:
An assembly system is dynamically interconnected if it provides a flexible assembly
sequence for each individual product (job route), without any limitations in time and space.
A central aspect of DIAS is the job route that is planned and managed by the control system. Job routes are individually determined for each uniquely identifiable object (e.g. by serial number). Based on the product's structure, the generally available processes within the assembly system (abilities) and further restrictions (i.e. process dependencies), all possible assembly sequences are determined for each product type. This can either be done automatically or through initial user configuration. During runtime, the actual job route for each object is determined by taking into account the current (or planned) status of the assembly system. This includes e.g. the availability of resources, the physical position of the object within the assembly, the status of the transfer system and material availability. The combination of the possible assembly sequences with the current and planned status yields the optimised job route, directing the product through the system (see Figure 2).

Figure 2: Flowchart for creating job routes (the unique product and the assembly system status feed the control system, which derives the possible assembly sequences, e.g. screw – bond – mark – test, and selects the optimised job route, e.g. station A – station E – station F – station H).

Different job routes that resulted from various reasons are depicted in Figure 3. For instance, job 2 (yellow) is
different from job 1 (blue), since product B is produced
with different requirements. Furthermore, the route for job 3 (light blue) is not the same as for job 1,
although product A that has the same requirements is produced, because the availability of a
necessary station changed and the route was adapted accordingly by the control system. In addition,
the effect of a varying production scenario (red) is shown. An increase in production volume is
achieved by adding two stations (I, J) that complement the existing assembly system.
The control system is connected to the stations (grey dotted lines), so that it is able to request the
assembly system properties at any time during the assembly process. Hence, it is possible to
dynamically react to changes of the assembly system properties (e.g. availability). For instance, the
adjusted route for job 3 and the route expansion, due to scaling, are both caused by changes in the
system-wide availability.
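A minimal sketch of such status-driven route selection, assuming precedence constraints per product type and a simple availability check; all step and station names are illustrative assumptions:

```python
from itertools import permutations

# Sketch of status-driven job routing in a DIAS: enumerate admissible
# assembly sequences from precedence constraints, then pick a route whose
# stations are currently available. Data and names are illustrative.

steps = ["screw", "bond", "mark", "test"]
precedence = [("screw", "test"), ("bond", "test"), ("mark", "test")]
stations = {"screw": ["A", "B"], "bond": ["E"], "mark": ["F", "G"], "test": ["H"]}
available = {"A", "E", "G", "H"}   # current system status

def admissible(seq) -> bool:
    return all(seq.index(a) < seq.index(b) for a, b in precedence)

def job_route(availability: set[str]):
    for seq in permutations(steps):
        if not admissible(seq):
            continue
        route = []
        for step in seq:
            free = [s for s in stations[step] if s in availability]
            if not free:
                break
            route.append(free[0])
        else:
            return seq, route
    return None

print(job_route(available))
# -> e.g. (('screw', 'bond', 'mark', 'test'), ['A', 'E', 'G', 'H'])
```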
Figure 3: The control system plans the job routes and is connected to the stations.
263
Organisational and Technological Requirements towards DIAS
Two central requirement categories were identified: organisation and technology. In the following, the requirements are presented within the scope of these categories. The requirements were discussed with representatives from producing companies and system suppliers from industrial sectors such as electronics, automotive supply and automation.
Organisational Requirements. A substantial requirement for implementing DIAS is the variability of the assembly sequence, so that different possible sequences are available for a product. This variability results from the product design and requires that different assembly sequences are permitted from a quality management point of view; it allows for the computation of job routes according to the procedure shown in Figure 2 and is crucial for enabling a dynamically reacting assembly system. Moreover, a flexible production control system demands agile vertical communication between enterprise resource planning (ERP) systems and the field level.
The implementation of a dynamically interconnected assembly system affects the majority of
stakeholders from the categories long and short-term planning, as well as operation, maintenance and
production control. In this respect, it is a particular challenge to empower the employees to work with
such a system, and the requirement to train the affected employees arises from this. The complexity of the processes will increase with DIAS, since the material and product flow is non-linear and highly dynamic. To manage this increased complexity, a user- and task-specific information service is required. With such a service, the employee is able to request specific information about the product (e.g. unfulfilled assembly steps) or the assembly station (e.g. status, overall utilisation). The service must be task-specific: an operating employee who tries to solve a problem needs information about the assembly station status, whereas a planning employee who is optimising the assembly system needs information about its utilisation. A non-specific information service would result in information overload and, hence, in inefficiency.
Furthermore, for integrating DIAS in an existing production environment, the upstream and
downstream systems need to be considered with regard to the higher flexibility of DIAS. For instance,
logistical processes that succeed the assembly system are required to be as adaptable as the DIAS to
maintain global efficiency.
In comparison to conventional assembly systems, in the case of a constant production volume, a DIAS will overall result in a higher initial investment. However, the potential of this system is to decrease the financial risks and costs for reconfiguration in an unstable market environment. The resulting requirement is a scenario-based cost accounting that takes varying production scenarios into account.
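As a toy illustration of such scenario-based cost accounting (all probabilities and cost figures below are invented for the example, not taken from the paper), the expected total cost of a concept can be computed by weighting reconfiguration costs over demand scenarios:

```python
# Hypothetical scenario-based comparison: expected total cost when
# reconfigurations occur with scenario-dependent frequency.
SCENARIOS = [  # (probability, number of reconfigurations over the horizon)
    (0.4, 0), (0.3, 2), (0.3, 5),
]

def expected_cost(initial_invest, cost_per_reconfiguration):
    weighted_reconf = sum(p * n for p, n in SCENARIOS)
    return initial_invest + weighted_reconf * cost_per_reconfiguration

print(expected_cost(10e6, 2.0e6))  # rigid line (assumed): 14.2 million
print(expected_cost(12e6, 0.3e6)) # DIAS-like system (assumed): 12.63 million
```

Under these assumed numbers, the higher initial investment of the DIAS pays off once the unstable scenarios are weighted in, which is exactly the trade-off the requirement addresses.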
Technological Requirements. For rigidly linked systems, the transport is organised
deterministically, so after an initial planning no further effort is necessary. For DIAS, the transport
organisation depends on each individual job route, so that the product and components have to be
transported dynamically and with short-term adaptions through the assembly system. This demands
a flexible transfer system that is capable of transporting the products independently.
When DIAS are confronted with a high number of variants, the assembly stations are required to be capable of processing these variants. Consequently, they require flexible feeding and handling technology.

Modularisation and the concept of DIAS demand particular consideration of buffer systems. According to lean principles, the usage of buffers should generally be minimised. However, due to variant-specific process time variations, some buffers are inevitable and need to be installed. Furthermore, avoiding buffers entirely leads to a superordinate cycle time, which contradicts the concept of DIAS.
To ensure an efficient utilisation of the assembly stations and the coordination of the product flow,
a central control system is required. This system executes the job routes by connecting product,
transport system and assembly stations. Especially for controlling the product flow, frequent identification and traceability of products is essential. Additionally, the technical abilities and states of the assembly stations need to be supplied to the control system, so that it is able to plan and organise the job routes according to the flowchart shown in Figure 2. For this, horizontal communication on the field level is required to enable the exchange of assembly station properties such as abilities and states.
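A possible shape for such a field-level status message is sketched below; the field names and the aggregation function are assumptions for illustration, since the paper only requires that abilities and states can be exchanged:

```python
# Hypothetical station status record for horizontal field-level communication.
from dataclasses import dataclass

@dataclass
class StationStatus:
    station_id: str
    abilities: list       # processes the station can execute, e.g. ["Screw"]
    available: bool       # current availability, used for job route planning
    queue_length: int = 0 # waiting jobs, usable for route optimisation

def usable_stations(reports):
    """Aggregate reported states so the control system can plan job routes."""
    return {r.station_id: r for r in reports if r.available}

reports = [StationStatus("A", ["Screw"], True),
           StationStatus("F", ["Bond"], False)]  # e.g. downtime -> excluded
print(sorted(usable_stations(reports)))  # -> ['A']
```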
This control system needs to be able to react dynamically to changes in the assembly system such as station downtime, product integration or added assembly stations. The latter is supported by plug and produce technologies, which enable scaling through standardised interfaces; by reducing the cost and time of such changes, plug and produce makes the assembly system more flexible.
Summary and Classification of Requirements. The above described technological and
organisational requirements are summarised and classified in Figure 4. The requirements are
classified in two categories: primary and secondary. Primary requirements are crucial for the
implementation and realisation of DIAS. For instance, the specialised control system and the
variability of assembly sequences are essential for controlling the product flow and enabling different
job routes. In contrast, secondary requirements such as plug and produce or user-specific information
services are beneficial for the flexibility and efficiency of the DIAS, but they are not crucial for the
implementation of the system.
Figure 4: Classification of requirements for implementing DIAS. (The figure arranges the requirements – user-specific information service, up-/downstream processes, scenario-based cost accounting, variable assembly sequences, control system, adaptive assembly station technology, buffer systems, plug and produce, training of employees, vertical integration and flexible transportation – along the dimensions organisation/technology and primary/secondary.)
Applicability Analysis for Dynamically Interconnected Assembly Systems
In the following, the factors influencing the applicability of DIAS are identified and categorised.
Based on theoretical analysis, the impact of key influencing factors and their manifestations on the
applicability of DIAS are outlined.
Influencing Factors on Applicability. Besides meeting the organisational and technological
requirements listed above, the successful application of DIAS relies on the concept’s general
applicability for a specific production scenario. While applicability in a competitive scenario is
largely evaluated by the economic feasibility of an assembly system concept, it is also influenced by
technical characteristics and design decisions. Following the principle of cause-effect diagrams, four
categories of influencing factors were identified – product, process, production scenario and
environment. In addition, the influencing factors and the corresponding tendencies were determined (see Figure 5). In the following, only factors that are derived from the technological environment in which the assembly system is located, as well as factors that can be directly derived from the product and the associated process, are considered.
The category product relates to factors that are directly derived from the intrinsic characteristics
of the product. Product size is considered to be the key influencing factor for the amount of effort that
is required for product transportation. The number of components affects the effort required for
logistical processes within the assembly system. Product variance is a prerequisite for DIAS as it is
the main motivational factor. The flexibility of the assembly sequence is determined by both product
design and process requirements. The more flexible the assembly sequence is for each product, the
higher the potential benefit of DIAS as the number of possible job routes increases.
The category process summarises all influencing factors that result from the designed assembly
technology as required by the product design. The number of assembly stations and the degrees of
freedom during job route decision making are influenced by the number of process steps, process
universality (e.g. universal robot spot welding station for multiple variants) and process divisibility
(i.e. dividing a process into sub-processes and allocating them to different stations). Process time
spread relates to different processing times for each job and affects the complexity of the job route
determination. The degree of automation affects the applicability through the level of inherent process
flexibility (i.e. manual processes are more flexible than automated ones).
The production scenario summarises all factors that are market driven. Cycle time affects the
frequency of transport between assembly stations within DIAS (i.e. with decreasing cycle time the
transport frequency increases). Lot sizes affect the process of determining job routes, as with larger
lot sizes a more stable operation is expected. The applicability of DIAS is expected to be higher with
increasing product integration frequency, as the effect of a product integration in DIAS is
compartmentalised to affected assembly stations only. Similarly, higher scaling factors at higher
uncertainty of expected production volume development are in favour of DIAS applicability.
Lastly, the environment category includes employee-related factors such as training level and cost, as DIAS require more highly skilled employees. Furthermore, the availability of floor space and the associated cost need to be considered in brownfield planning scenarios, as DIAS have similar floor space requirements as conventional assembly systems but do not require a continuous area. The supply network relates to up- and downstream processes.
Figure 5: Cause-effect diagram for influencing factors on the applicability of DIAS. (Categories and factors: product – product size, number of components, geometry, degree of pre-assembly, product design, variance, flexibility of the assembly sequence; process – number of process steps, universality, divisibility of processes, process time spread, degree of automation, process requirements; production scenario – lot size, number of variants, cycle time, frequency of product integrations, expected scaling factor, expected production volume, uncertainty; environment – employees (training level, pay level), floor space, supply network (availability, cost); tendencies are marked as positive, negative or complex.)
Qualitative Impact Analysis on Influencing Factors. From the previously identified influencing factors, six were selected as being of the highest significance and impact, based on their manifestations. As the effort required for the transfer of products along the job routes largely depends on product size, this parameter was chosen as well; it is superposed with all other factors where applicable.
The following hypotheses are based on the assumption that for large and very large products (e.g.
cars, machine tools, airplanes) the required transportation effort is very high. For small products (e.g.
electronic components) transportation efforts are low, however, for very small products single
product transfer is considered to be ineffective. Medium sized products (e.g. automotive power train
components, home appliances) are considered to be of average transport effort and suitable for the
transportation of one product at a time. Furthermore, it is generally assumed that for larger products
a higher number of processes can be done at one assembly station (e.g. several screwing operations)
resulting in a lower frequency of transport operations. The applicability of DIAS is estimated for each
influencing factor by product size and is depicted in Figure 6. The applicability threshold is indicated
by a dashed line. An estimated applicability above the threshold indicates that DIAS are considered applicable regarding the specific criterion. All indications are estimations for a general use case. Individual requirements of a specific production scenario may result in deviations.
By defining the rate at which products need to be processed within the assembly system, the cycle time has the largest immediate impact on the transportation effort. With increasing product size, the time required for transporting a product into an assembly station increases, rendering short cycle times less suitable for DIAS applications, especially for small products. With increasing cycle time, the applicability remains constant. For medium-sized products, the applicability threshold is estimated to be at around 1 minute, based on an expected 5-8 seconds of transfer time.
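The threshold can be made plausible with a small calculation using the transfer times stated above (the interpretation of the resulting shares as overhead is our reading, not a figure from the paper):

```python
# Transport overhead as a share of the cycle time, for the 5-8 s transfer
# times assumed in the text: at a 1 min cycle it stays below roughly 13 %,
# while shorter cycles are quickly dominated by transport.
for transfer_s in (5, 8):
    for cycle_s in (15, 30, 60):
        share = transfer_s / cycle_s
        print(f"transfer {transfer_s} s / cycle {cycle_s} s -> {share:.0%}")
```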
For large lot sizes (above 100 to 1000 based on product size) the applicability of DIAS is not given,
as in such a scenario the time between setups is sufficiently long to not have a negative impact on
productivity. DIAS are generally applicable for small to singular lot sizes. However, for very small
products, singular lot sizes may result in extensive transportation efforts, thus requiring batch transfer
(e.g. set of four products).
With an increase of the number of variants the applicability increases. For small numbers of
variants (< 10) there is no indication for the use of DIAS as it is assumed that this can be compensated
by using process flexibility. For very high numbers the applicability recedes, as the complexity of the
control system becomes very large.
The applicability of DIAS does not require known expected future scaling of the production
volume. However, the extent to which scaling is feasible depends on the product size and the
associated transportation efforts. When scaling, the space required for the transportation system
grows disproportionally to the space used for assembly stations for small products, whereas large
products require large amounts of floor space for intersections resulting in reduced applicability.
Regardless of the actual factor, the process time spread, indicated by the signed standard deviation
of the process times, does not reveal a situation in which DIAS are not applicable. DIAS can also be
used in scenarios of constant cycle time but highly varying process sequence. However, DIAS
become more beneficial the more the process times vary for each process step and variant.
The applicability of DIAS is largely independent of the degree of automation. However, with an
increase of the degree of automation, assembly stations tend to be of less inherent flexibility resulting
in limitations regarding the applicability for fully automated scenarios depending on the type of
process required.
Figure 6: Applicability estimations for DIAS with influencing factors relative to product size. (Panels: cycle time [min], lot size [#], number of variants [#], expected scaling factor [%], process time spread [signed standard deviation of the process time] and degree of automation [0 % manual to 100 % automated], each estimated for small, medium and large products; the applicability threshold is marked by a dashed line.)
Conclusion and Outlook
By introducing the concept of job routes, DIAS enable flexible assembly sequences for each product type and each individual job. The job routes are dynamic and can be changed in response to unforeseen changes in the assembly system such as machine downtime or blocked paths. By
decoupling assembly stations from each other, restrictions resulting from temporal or spatial
constraints are resolved. Accordingly, DIAS form a paradigm shift in assembly system design.
Emerging technologies such as smart devices, learning algorithms and a higher degree of connectivity
(i.e. Internet-of-Things / Industry 4.0 technologies) allow meeting the organisational and technical
requirements for DIAS. Based on the conducted applicability analysis and the discussions with experts from research and industry regarding the requirements, DIAS appear to be a promising concept for tackling challenges such as an increasing number of variants and shorter lifecycles.
As an outlook, the communication and control architecture to organise the job routes needs to be
developed. Furthermore, job routing algorithms need to be developed. Simulation
studies that provide an extensive scenario analysis are required for evaluating the applicability and
design criteria. The results from the simulation can also be used for comparing DIAS with a
conventional assembly system and, as a next step, simulations can be used to plan and optimise the
implementation of DIAS, which involves the layout of the assembly stations, the assignment of
assembly steps to the stations and design of the transportation system.
Acknowledgement
This research is funded by the German Federal Ministry of Education and Research (BMBF)
within the Program “Innovations for Tomorrow’s Production, Services, and Work” and managed by
the Project Management Agency Karlsruhe (PTKA). The author is responsible for the contents of
this publication.
Flexibility through mobility: the e-mobile assembly of tomorrow
Achim Kampker1,a,g, Peter Burggräf2,b,h, Kai Kreisköther3,c,i, Matthias Dannapfel4,d,j, Sebastian Bertram5,e,k and Johannes Wagner6,f,l

1,3 Chair of Production Engineering of E-Mobility Components, RWTH Aachen University, Campus-Boulevard 30, 52074 Aachen, Germany
2 Chair of International Production Engineering and Management, University of Siegen, Paul-Bonatz-Straße 9-11, 57068 Siegen, Germany
2,4,5,6 Laboratory for Machine Tools and Production Engineering, RWTH Aachen University, Steinbachstraße 19, 52074 Aachen, Germany

a A.Kampker@pem.rwth-aachen.de, b Peter.Burggaef@uni-siegen.de, c K.Kreiskoether@pem.rwth-aachen.de, d M.Dannapfel@wzl.rwth-aachen.de, e S.Bertram@wzl.rwth-aachen.de, f J.Wagner@wzl.rwth-aachen.de
g,i +49 241 80-27397, h +49 271 740 2630, j,k,l +49 241 80-27427
Structure

1 Trends and Challenges for the Future of Automotive Assembly
1.1 Shift from Traditional Markets to Globally Distributed Metropolitan Markets
1.2 Growing Requirements through Market-Specific Framework Conditions
1.3 Mass Customization for the Fulfilment of Customer Demands
1.4 Reduced Project Durations through Shortened Innovation Cycles
2 New Production Form to Secure Competitiveness
2.1 Need for Action for Future Automotive Assembly
2.2 Status quo: Global Production Networks and Rigidly Linked Assembly
2.3 Three Objectives for the Fulfilment of Future Requirements
3 Agile Low-Cost Assembly
3.1 Self-Driving Vehicle Chassis
3.2 Augmented Reality
3.3 Rapid Fixture
3.4 Tolerance-compensation elements
3.5 Smart Logistics
3.6 Assembly-control Cockpit
3.7 Summary
4 Application examples of the Agile Low-Cost Assembly
4.1 The Agile Low-Cost Assembly for the StreetScooter Special Vehicles
4.2 Aachen Demonstrator for the Agile Low-Cost Assembly
5 Conclusion and Outlook
Abstract

Agile Low-Cost Assembly

As a result of increasing market dynamics, a shift from few core markets into globally distributed metropolitan markets as well as an increasing product diversity and decreasing innovation cycles can be observed, which lead to changing requirements for automotive production. Market-specific restrictions such as high import taxes on finished products complicate the conquest of emerging markets. For this reason, the distribution of value added amongst few central production sites and several smaller decentralized locations is gaining importance. The decentralized locations have to be strongly adapted to the respective target market. In order to secure long-term competitiveness in spite of those challenges, automotive manufacturers will need low-investment, highly flexible and adaptable assembly structures. Conventional rigid flow assembly lines no longer fully meet the increasing flexibility requirements and additionally require extensive structural investment. The Factory Planning department of the WZL and the chair PEM of RWTH Aachen University are developing the "Agile Low-Cost Assembly", an innovative assembly concept which is characterized on the one hand by a high degree of agility and on the other hand by low investment structures. The assembly concept is empowered by new production engineering approaches, which include self-driving chassis, tolerance-compensating elements, networked assembly stations and 3D-printed fixture elements. Further elements include augmented reality as well as autonomous and cross-station logistics and equipment supply in connection with a decentral networked control via neural networks. The feasibility of these elements was demonstrated in an application at the Aachen Demonstrator for the Agile Low-Cost Assembly.
1 Trends and Challenges for the Future of Automotive Assembly
Technological progress, new competitors, global production networks and the restructuring of markets constitute the most significant challenges for the automotive industry. Target markets are subject to steady dynamics. The importance of traditional markets decreases while new growth markets arise. Different market-specific restrictions regarding
legislation, sales structure and production impede the conquest of rising markets. Additionally, an increasing individualization and a growing environmental awareness enforce
sustainable adjustments of the product portfolio, e.g. through the introduction of alternative drive concepts like electric mobility. The results are a growing product variety as well
as significantly shortened innovation cycles, which manifest in increasing requirements
on project partners. These challenges and their consequences are explained in detail in
the following sections.
1.1 Shift from Traditional Markets to Globally Distributed Metropolitan Markets
Global markets are constantly changing. Currently, a shift from few core markets into
globally distributed metropolitan markets can be noticed [1]. Besides the growth opportunities offered by markets in Eastern Europe and the BRIC states (Brazil, Russia, India
and China), especially smaller regions in North Africa, Southeast Asia and South America
are gaining in importance. These markets remain unexploited so far and therefore offer
high growth opportunities for the automotive industry compared with traditional markets
[2] [3].
Fig. 1: Structural changes of global markets [4] [5]. (Panels: market shift – global vehicle sales of the triad states vs. the rest of the world, rising from 57.8 million in 2008 to a forecast 91.4 million in 2020; local market volatility – new registrations in Russia fell by 36 %, from 2.5 million in 2014 to 1.6 million in 2015; metropolitan markets – five large urban centres with more than 550 million people in total exist in the People's Republic of China.)
The global passenger car sales are expected to grow to 91.4 million vehicles annually by 2020. Starting from 73.2 million vehicles sold in 2015, the forecast sales will be reached at the current average growth rate of 4.5 %. This growth will particularly be driven by the BRIC states and future sales markets in North Africa, Southeast Asia and South America, whereas the established markets of the triad states1 are characterized by low growth or stagnation [3]. Besides the BRIC states, in which 35 % of all new vehicles are currently registered [5], mainly smaller regions gain in importance. According to studies, 20 % of all new vehicles are expected to be registered in the growth regions North Africa, Southeast Asia and South America in 2020. At about 6 %, the growth in these metropolitan markets will be four times higher than in the established triad states, and it will significantly surpass the growth of the BRIC states [3].
1 Triad states: USA, Canada, Europe, Japan, Australia and New Zealand
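The quoted figures can be checked with a one-line compound-growth calculation (a plausibility check, not part of the source):

```python
# 73.2 million vehicles sold in 2015, growing at 4.5 % per year until 2020.
sales_2020 = 73.2 * (1 + 0.045) ** 5
print(f"{sales_2020:.1f} million")  # -> 91.2 million, close to the 91.4 forecast
```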
The development into globally distributed metropolitan markets is not only driven by new, smaller growth markets. A market diversification can also be noticed in existing markets. Increasing urbanization leads to large metropolitan regions and megacities. Five of these large urban centres exist in the People's Republic of China alone, constituting important metropolitan markets with 100 million inhabitants or more each [4]. The competition in existing as well as in future markets is intense and is further increased by ambitious, high-performing and innovative competitors, especially from Asia [7]. In order to permanently secure existing market shares and to gain new ones, it is necessary to focus on the individual demands of the customer groups in metropolitan markets.
1.2 Growing Requirements through Market-Specific Framework Conditions
The shift to globally distributed metropolitan markets implicates an increasing importance
of market-specific restrictions. Different metropolitan markets have specific requirements
regarding legislation, sales structure and production, which automobile manufacturers
must fulfil and should use to their advantage.
High import taxes and import barriers for the protection and development of a domestic automotive industry significantly impede the implementation of a pure export strategy from a few main factories in the home markets, and therefore the market entrance to emerging growth markets [8]. The local completion of vehicles prefabricated in the home market is a strategy to avoid country- and market-specific taxes and duties on the import of finished automobiles by performing parts of the value-adding process directly in the target market. This strategy is known as Completely Knocked Down (CKD). Volkswagen, for instance, disassembles finished vehicles to export them CKD to countries like Indonesia, Malaysia or Russia [9]. Furthermore, different regulations regarding the taxation of production in a country or municipality exist and need to be explicitly considered in the choice of location [10].
Moreover, studies have shown that local production significantly increases the local awareness of a manufacturer and thus the loyalty to it. In this way, manufacturers can gain additional market shares in the respective sales markets [8]. Vehicles must meet different customer requirements in different metropolitan markets. Therefore, the same models need to have varying properties, e.g. regarding the design or motorization.
A local production is subject to the local economic, social and political framework. The cost structures in industrial countries are burdened by high labour costs, which lead to increasing unit costs of vehicles. Local production enables the use of cost advantages in the target countries; in this way, a reduction of production costs can be achieved. Additionally, a local production offers the potential of including market-specific know-how. Furthermore, there are different stages of development regarding the infrastructure as well as the quality and availability of services [8].
1.3 Mass Customization for the Fulfilment of Customer Demands
Nowadays, the conventional consumer is influenced by many trends, which inhibits the allocation to only one market segment. A new customer type emerges who regards the car as an expression of his individual personality. This type is referred to as the hybrid consumer [11]. The high availability of information on the internet supports comparability and encourages the wish for individualization. Companies react to these developments with a massive increase in variant diversity in order to satisfy individuality demands in the market [12].
The result is a comprehensive extension of the model range of automobile manufacturers with a simultaneous decrease of quantities per model. The number of US American models with less than 10,000 sales a year amounted to 54 in 1999; this number more than doubled to 117 models by 2005 [13]. Additionally, a significant increase of product variants per model can be noticed since the turn of the millennium. The growth of product variants at Audi AG, for instance, is twice as high as the growth of production quantities in the same period of time (compare Fig. 2).
Fig. 2: Development of product variants and production quantities at Audi AG [14]
The variants of vehicles result, inter alia, from the number of body and design variants as well as from technology and component variants. Especially variant differences like different drive technologies or body variants are complexity drivers in the assembly. Different characteristic values regarding design, color or material, however, mainly lead to increasing logistics efforts. In summary, a decrease in the share of standard variants, i.e. vehicles with the same variant configuration, can be noticed.

The increased product variety leads to a higher complexity of product and production as well as to increasing costs for automobile manufacturers. Moreover, customers' price acceptance is stagnating, so that complexity costs due to the individualization offerings of the manufacturers and the increased market demands cannot be passed on to the customer [15]. This causes growing cost pressure on automobile manufacturers and their suppliers.
1.4 Reduced Project Durations through Shortened Innovation Cycles
The efforts of manufacturers to respond to customer trends in ever shorter intervals with new models, derivatives and equipment components lead to decreasing innovation cycles. This development is further intensified by increasing technological progress and the growing importance of information and communication technology. This is especially true as, from a customer perspective, the manufacturer who is the first to introduce new technologies is perceived as the innovation leader, which makes him more successful compared with other manufacturers [2] [17] [18].

Fig. 3 exemplarily shows the innovation cycles of the model generations of the VW Golf over the quantity of delivered vehicles. The innovation cycles as well as the produced quantities have roughly halved compared with the first product generation. Regarding the increasing expenses on research and development combined with decreasing production quantities, a reduction of the investment per derivative is necessary to be able to offer the manufactured vehicles at competitive prices in the market [19] [20].
Fig. 3: Product lifecycle of the VW Golf models [21]. (Axes: deliveries in millions over the duration of market availability in years, for the generations Golf I to Golf VI.)
Because of the shortened innovation cycles of automobile manufacturers, the project durations of suppliers have shortened from more than five years to partly less than two years2. Many manufacturers do not want to be exposed to high risks due to large investments or to commit to long-term service or supplier relationships because of uncertain market developments.

Simultaneously with the reduction of project durations, the requirements of the project partners of the OEMs become more and more heterogeneous. The claims of the "New Players" from Silicon Valley differ from those of the traditional OEMs, as do the related collaboration and cooperation models. Thus, suppliers are increasingly put under pressure because they are no longer able to reliably target their strategy and production to specific manufacturers, but have to adapt more quickly and extensively to the different requirements.

2 WZL project: interview with a first-tier manager
2 New Production Form to Secure Competitiveness
The changing challenges make it necessary for automobile manufacturers to shift parts of their production from few central plants into decentralized sites close to the markets in the future. In this way, advantages of local content can be used and vehicles can be produced matching the respective market requirements. This results in the demand for higher agility in future automotive assemblies due to decreased quantities and increased product varieties compared with conventional automotive production.
2.1 Need for Action for Future Automotive Assembly
The current and future challenges of the automotive industry, as explained in chapter 1, already cause a shift of structural requirements in automotive production.

Because of the increasing relevance of local metropolitan markets, small and local assembly sites will gain in importance over central main plants in the future. At present, global markets are served by few main plants with quantities on the scale of 500,000 vehicles. Audi AG for example produces a large part of its annual automotive production of more than 1.8 million vehicles in the three plants Ingolstadt, Neckarsulm and Győr [22]. A decentralized manufacturing strategy leads to a reduction of produced quantities per plant. Japan, with approx. 127 million inhabitants [23], for instance constitutes an independent market with specific customer requirements, comparable to a metropolitan market. The total vehicle sales of Audi AG in Japan accounted for approx. 30,000 units in 2015; South Korea is another example, with vehicle sales of approx. 31,000 units in 2015 [24]. Hence, an annual production volume of 20,000 - 50,000 vehicles in a decentralized plant can be assumed. This is not a firm boundary, but an interval derived from real examples, so that the decentralized production of annual quantities of up to 100,000 or 200,000 vehicles can also be reasonable.
The investment costs of an assembly system are allocated proportionately to the vehicles produced per year, so that the costs per unit strongly depend on the production volume. Structural investments are usually degressive and not proportional to the produced quantity. Thus, high investments in the assembly system against the background of low quantities can endanger the profitability of a plant. In the future, shortened innovation cycles and project durations will increase the number of necessary adaptations of an assembly system. Therefore, an economic implementation of adjustments regarding production volumes and product range needs to be feasible without losing competitiveness. This includes costs for equipment and for the setup or modification of the necessary infrastructure.
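A minimal sketch with invented numbers shows why degressive structural investments burden low-volume plants: the same investment is spread over far fewer units.

```python
# Hypothetical: identical structural investment amortized over plant volumes.
INVESTMENT_EUR = 100e6   # assumed one-off structural investment
LIFETIME_YEARS = 10
for annual_volume in (500_000, 50_000):  # central vs. decentralized plant
    per_unit = INVESTMENT_EUR / (annual_volume * LIFETIME_YEARS)
    print(f"{annual_volume} units/year -> {per_unit:.0f} EUR per vehicle")
# -> 20 EUR per vehicle at central-plant volume, 200 EUR at decentralized volume
```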
A versatile product portfolio is not only challenging in terms of costs, but also places high requirements on the production technology. Electric mobility has a special role in this context. Electric vehicles have a unique product architecture compared with vehicles with a conventional powertrain, making fewer but also new assembly extents necessary3. Because of the currently low demand, the deployment of specialized production facilities for the accomplishment of these assembly extents is often not efficient [25]. Consequently, new and conventional vehicles must be assembled with the same systems for a short transition period, so that economies of scale and the decreasing cost trend remain usable. A high system adaptability is required to produce as many vehicle models and their variants as possible in one assembly system [26].
Besides short-term flexibility for different products and production volumes, adaptability needs to be ensured against the background of unplanned changes. Through adjustments on a structural level exceeding the corridor of provided flexibility, an adaptable assembly system enables the exploitation of further capacities and changes of the product portfolio regarding the introduction of new products. In terms of adaptability, necessary changes are feasible with little effort regarding time, costs and the impairment of ongoing operations. This includes the conversion of assembly stations for the implementation of new products. A flexible and adaptable assembly system, which enables the production of a variety of products and their variants and which is configurable regarding future requirements, is necessary to ensure competitiveness [27].
The demanded flexibility and adaptability include the scalability of the system besides the adjustment to a new or enlarged product portfolio. Times of high demand require a short-term increase of production performance, whereas a weak market demands a quick reduction to save costs [26]. This development is reinforced by the structural change of markets as well as by progressive urbanization. These trends lead to a growing importance of decentralized production in local assembly sites within metropolitan markets.

3 Assembly extents of hybrid vehicles are even larger, since the assembly of both the combustion engine and the electric drive is required.
Decentralized sites need to benefit from local content on the one hand, and they need to map the market-specific requirements onto the product program on the other hand. This requires enablement on the production system level, through which decentralized assembly units become feasible in the respective metropolitan markets.
Market shifts and shortened innovation cycles result in an increase in the number and frequency of production ramp-ups. During the production ramp-up phase, a prototype is transferred from the design stage into serial production [28]. This period is significantly determined by the temporal expenditure for the adjustment of the logistics chain and of the production processes. During the production ramp-up phase, the related products are not profitable, but mainly cause costs. Hence, the period in which manufacturers can make profits shortens with decreasing durations of product life cycles. Accordingly, an efficient production ramp-up holds time as well as cost advantages for manufacturers. According to KUHN ET AL., additional potentials of up to five percentage points of the model rate of return become accessible through a steeper ramp-up curve with regard to the total product running time. In comparison, around 2 % to 15 % model return is currently achieved in the automotive industry [29]. Consequently, the control and efficient handling of production ramp-ups are of growing importance for reacting to changed customer requirements in metropolitan markets.
2.2 Status quo: Global Production Networks and Rigidly Linked Assembly
Automobile manufacturers are organized in complex production networks. Through a systematic expansion of procurement networks, they try to benefit from the advantages of global procurement sources and low labour costs. The development of global production networks significantly increases the complexity of logistics activities [18].

Automobile production is characterized by vehicles with great customer individuality and complex product structures. For this reason, a continuous coordination of production resources based on market demand is required. This procedure is broadly consistent among all manufacturers, as production almost exclusively takes place in flow assembly lines with sequenced production programs [30]. For the European target markets, most products are produced customer-specifically based on existing orders or demands, whereas in the USA, for example, a higher proportion of the production program is determined by the manufacturer. Regardless of the approach, the aim is to realize low buffer stocks by creating a flow throughout the whole assembly system. Often, the reason for this principle is not inventory costs or unfavorable storage properties of parts or modules, but the risk of a changing market situation [31].
The traditional automotive assembly is currently based on a rigid line structure. Production, which is specialized on a few models and their variants, takes place in this linear structure in few large main plants. This form of unitized variant production in a flow assembly line is referred to as "Mixed Model Assembly" [33]. Based on these plants, the needs of the markets are satisfied through global supply and distribution networks. The organization of line balancing in a line structure offers good conditions for high efficiency at large quantities and consistent quality. Yet, a flow assembly line requires high planning and control efforts as well as high investment and operating costs. Main cost drivers are complex means of conveyance, like overhead conveyors or plate conveyors, to link the assembly stations. Current flow assembly lines are designed for operation over a long term in order to enable the amortization of the high initial investments.

In addition to high investment costs, the flow assembly line is limited in its flexibility regarding products and production volumes due to its rigid links (Fig. 4) [33]. Different variants partially have different process times at the same assembly stations. Manufacturers have developed different approaches, like modular building sets or platforms, which facilitate the realization and optimization of the assembly as well as a partial reduction of costs, to master the high complexity and variance of vehicles [32]. Due to the tact balancing of assembly stations, the levelling of the production program in terms of different assembly times, which is made more and more difficult by the high product variety, is necessary despite modular strategies. Since the applied conveyor systems can only be adjusted to new models to a certain extent via adapter systems, the integration of altered products into the existing infrastructure is limited. Moreover, the conveyor system restricts the short- and long-term flexibility of the assembly system in terms of the production volume. Short-term quantity adjustments are inter alia limited by the pace of the conveyor system, whereas sustainable structural changes of the assembly system are limited due to high adjustment efforts. Furthermore, the linked structure causes a high vulnerability, as disruptions at individual assembly stations often lead to standstills of the whole assembly and to costly production outages.
Fig. 4: Schematic representation of the status quo in the automotive industry4. (Depicted: a production network with centralized production and high structural investment; a production system with an inflexible linked flow line structure and limited production of different models.)
The processes of the final assembly, with a degree of automation of only three to ten percent, are the most personnel-intensive stages of automobile production [33]. Due to diverse and complex assembly processes, automation is often limited by technological restrictions combined with an inadequate return on investment as well as the limited availability of contact surfaces.
2.3 Three Objectives for the Fulfilment of Future Requirements
In chapter 2.1, the necessity of flexible and adaptable assembly systems, the economic production of small quantities and the efficient handling of an increasing number of production ramp-ups have been emphasized as central requirements for mastering current and future challenges. For these requirements to be attainable within the future automotive assembly, they are transferred into a target image (Fig. 5) regarding the current production structure.

Flexibility and adaptability are basic prerequisites for the controllability of a high product variety and dynamics on the one hand, and for enabling the implementation of decentralized locations by focussing on local content and market-specific requirements on the other hand. In the following, the property of an assembly system to adapt proactively is referred to as agility, which includes flexibility and adaptability [34]. With regard to the current production structures in the automotive industry, agility is restricted in particular by existing inflexible structures like cycle times and conveyor systems. The achievement of a maximum degree of agility, through which requirements within the assembly system as well as outside, on the level of decentralized locations, become attainable, is essential. Yet, an agile assembly system only represents an efficient solution approach if it is feasible without, or with minimal, additional costs, in order to inhibit an increase in the Total Cost of Ownership (TCO). Otherwise, the advantages of agility are dominated by the counterweight of additional costs. Thus, the aim is the achievement of agility at zero cost.

4 Picture source: https://industriemagazin.at
With regard to the increasing frequency and number of production ramp-ups, the period needed for the ramp-up, the ramp-up time, is a critical success factor. The high implementation effort for new products and the complexity of the existing structures of traditional automotive assemblies impede an increase in the efficiency of product ramp-ups. To ensure the future competitiveness of automobile manufacturers, a significant reduction of the non-profitable ramp-up time is necessary in order to extend the profitable production period of a product as far as possible [35]. This goal has to be achieved within the scope of the production of a variety of products in decentralized locations. Hence, it must be the aim to reduce the required ramp-up time of products by at least half of the time that is customary in the automotive industry. A reduced ramp-up time can be achieved by deploying new methods, so that potentially cost-causing deficiencies in the assembly process do not endanger the effectiveness.
Fig. 5: Target definition for the future automotive assembly. (Compared with the conventional automotive assembly, the target image comprises agility at zero cost, a ramp-up time halved and an investment reduced to a tenth.)
The first step, to successfully transfer a central location into several decentralized locations, requires low investment costs. Achieving this is only feasible to a limited extent with conventional flow assembly lines. Besides minimizing initial investments, it is necessary to minimize follow-on investments for the adjustment to changed market conditions. As the plant-specific production quantities of decentralized assembly systems are considerably lower compared with central plants, an economic production is only possible with correspondingly adjusted investment costs for the implementation and a reduction of the total investment. Consequently, an equally significant reduction of the total investment is required. A central plant could be replaced by several decentralized plants in the future. Based on interviews with experts of the automobile industry5, a cost reduction to a tenth of the current investments is an appropriate aim for implementing several productive decentralized plants [36].
5 In preparation for the "Aachener Werkzeug Kolloquium", a conference with over 1,000 visitors, up to 10 experts of the automobile industry were interviewed on the topic "automobile assembly of the future".
Fig. 6: Shift of production quantities from central into decentralized assembly units. (From product orientation – one plant per model (coupé, estate, van), each serving Europe, America and Asia – to sales orientation: several smaller plants with market-specific quantities in regions such as the USA, Japan, Southeast Asia, North Africa, Russia and the Shanghai cluster.)
The defined targets show that the traditional flow assembly faces limits regarding the consequences of increasing individualization and shortening innovation cycles in connection with a local production in metropolitan markets [17]. This is illustrated by the presented development of the production of fictive models in market examples in Fig. 6. The traditional flow assembly line has a design optimized for the production of large quantities and few models in order to enable an economic production despite considerable initial investments.

A shift from the production of few models in high quantities to the production of various models in comparably low quantities means that optimization for a single operating point is no longer appropriate. Instead, the efficient representation of different operating points in the same assembly system is necessary in the future. Thus, a conversion of the automotive industry towards a concept which allows the fulfilment of the target image is required.
3 Agile Low-Cost Assembly
The Factory Planning Department of the WZL of RWTH Aachen University and the Chair of PEM of RWTH Aachen University are developing a new assembly concept called "Agile Low-Cost Assembly". The target of this concept is the development of an economical and agile assembly that takes account of future challenges in the automotive sector. For this reason, the rigidly linked line structure is dissolved in favour of a new flexible form of organization. Because of the growing importance of electric mobility, the development is focussed on the production of electric vehicles. In addition, alternative vehicle concepts and different configurations are facilitated by electric vehicles, which enable a higher degree of freedom in the adaptable vehicle architecture [37]. In consideration of appropriate technological and organizational adjustments, an application to vehicles with a conventional powertrain is still possible.

The concept of the Agile Low-Cost Assembly contains various partial solutions to manage the different requirements of future automobile production. The combination of different partial solutions enables an ideal adjustment of the Agile Low-Cost Assembly to an individual range of requirements. Furthermore, the expandability of the concept and its later use in other industrial sectors are enabled.
The assembly consists of technological components like machines and equipment as well as employees as the social component, and thus represents a socio-technical system [39]. For a holistic description of a socio-technical system, STROHM ET AL. developed the MTO-concept [40]. This concept contains the three perspectives human, organization and technology, which, in combination with the target vision from chapter 2.3, form the regulatory framework for the Agile Low-Cost Assembly in terms of a 3x3 matrix. The matrix is illustrated schematically in Fig. 7.
Fig. 7: Framework for the Agile Low-Cost Assembly. (3x3 matrix spanned by the targets agility at zero cost, ramp-up time halved and investment reduced to a tenth, and by the perspectives human, technology and organization.)
The three targets, which need to be fulfilled, are located on the horizontal axis: agility at zero cost, reduction of the ramp-up time by half and reduction of the investment costs to a tenth. The vertical axis contains the perspectives human, technology and organization for a holistic description of the assembly system, as used in the MTO-concept.
The term "Agile Low-Cost Assembly" is based on the target vision for the assembly of the future and on the partial solutions compiled in this context, which enable the achievement of the target. In contrast to the traditional line assembly, the Agile Low-Cost Assembly abandons a fixed conveyor system. The substitution of the current complex conveyor systems enables a departure from fixed chained stations and allows the organization in physically decoupled and digitally networked assembly stations [25]. An economical implementation in a local metropolitan market is feasible in spite of low production figures, because structural investments are saved at the same time. In this way, a high level of agility at low costs is achieved.
The individual assembly processes of the Agile Low-Cost Assembly are performed without a fixed cycle time prescribed by the entire system. Instead, each vehicle has its own process time, depending on the extent of the assembly. The sequence of the processes is not fixed, so the sequence can be unique for each vehicle depending on model and version. The boundary conditions of the route are defined by the respective assembly precedence diagram, which defines the general sequence of the assembly stations as well as the flexibility of the vehicle routes. This procedure enables a continuous re-sequencing of the vehicles, the adjustment of the initially planned route within the final assembly as well as the ideal use of the available total capacity. The flexibility of the route enables a high operative flexibility: by skipping stations during temporary disruptions, the assembly is able to continue, as sketched below. Model-specific stations are frequented only by vehicles with the appropriate assembly extents.
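A sketch of this skipping logic under an assembly precedence diagram (assumed stations and prerequisites, not the actual control logic) could look as follows:

```python
# Hypothetical precedence diagram: station -> prerequisite stations.
PRECEDENCE = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}

def next_station(done, pending, disrupted):
    """Pick any pending station whose prerequisites are met and that is up."""
    for station in pending:
        if station not in disrupted and PRECEDENCE.get(station, set()) <= done:
            return station
    return None  # nothing feasible -> wait until a disruption is resolved

# Station B is temporarily disrupted; the vehicle continues at C and
# revisits B later, since C only requires A to be finished.
print(next_station(done={"A"}, pending=["B", "C", "D"], disrupted={"B"}))  # 'C'
```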
The decoupled form of organization of the Agile Low-Cost Assembly enables a structural change of the assembly system without affecting current operations. As a result, the development of new sections of the assembly, for adapting the system to new models or versions as well as for changes of the production volume, is feasible. This agility enables a significant reduction of ramp-up times. The phase-out of previous products and the start of production of the following products can be performed in parallel to the ongoing production, so there is no interruption during a product change. The specific requirements of metropolitan markets can be mapped by the assembly system due to its extensive scalability with regard to product and volume flexibility.
Different partial solutions were identified to realize the Agile Low-Cost Assembly in the presented way. With regard to the fulfillment of the defined target vision, six solutions are evaluated as especially relevant. These are presented below.
3.1 Self-Driving Vehicle Chassis
The substitution of the conveyor system primarily enables the resolution of the rigidly linked line structure by placing the transport function in the vehicle itself. Electric vehicles basically possess nearly every drive component necessary to move through the assembly on their own. The vehicles are equipped with suitable information and communication technology and sensor systems. As a result, the fixed conveyor system is abandoned and a variable, real-time capable routing of the vehicle chassis is enabled instead.
After body shop and paintwork, the powertrain, energy storage and steering system are installed in the final assembly. The steering system and the power unit can either be installed as temporary components or as systems which are intended to be part of the final product. A control unit takes over the data processing and communication tasks, so that, with an appropriate sensor system, the vehicle is able to move through the final assembly on its own. Every vehicle navigates to the individual assembly stations on an individual route depending on its version and the situational condition of the assembly system. A situational and dynamic adjustment of the route is achieved by networking all participating components and by a real-time capable production management. In this way, unnecessary waiting periods of the vehicles and routes to non-required assembly stations are avoided. As a result, the ideal capacitive condition of the whole system is achieved [25].
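To make the routing logic tangible, the following minimal Python sketch selects the next admissible station for a chassis from an assembly precedence graph while skipping disrupted stations. The station names, the precedence graph and the queue-based selection rule are purely illustrative assumptions, not part of the concept in [25].

```python
# Illustrative sketch of dynamic routing for a self-driving chassis.
# Station names, the precedence graph and the queue-based cost model
# are hypothetical assumptions for demonstration purposes only.

# Assembly precedence diagram: station -> stations that must be finished first
PRECEDENCE = {
    "powertrain": set(),
    "doors": {"powertrain"},
    "cockpit": {"powertrain"},
    "wheels": {"doors", "cockpit"},
}

def next_station(done, disrupted, queue_lengths):
    """Pick the admissible station with the shortest queue.

    done          -- set of stations already completed for this vehicle
    disrupted     -- set of temporarily unavailable stations (to be skipped)
    queue_lengths -- dict station -> number of vehicles currently waiting
    """
    candidates = [
        s for s, preds in PRECEDENCE.items()
        if s not in done and s not in disrupted and preds <= done
    ]
    if not candidates:
        return None  # wait until a disruption is resolved
    return min(candidates, key=lambda s: queue_lengths.get(s, 0))

# Example: 'doors' is disrupted, so the chassis is routed to 'cockpit' first.
print(next_station({"powertrain"}, {"doors"}, {"cockpit": 1, "doors": 0}))
```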
The implementation of autonomous vehicle chassis requires mastering diverse challenges relating to the product and the assembly system. With regard to the product structure, the assembly sequence should be designed in such a way that an early commissioning of the drive train is possible. Access to every area of the vehicle has to be ensured for the subsequent assembly processes despite the installed drive components. Because drive components which are needed for the future use of the vehicles anyway are used for the in-house transport, additional production effort and costs are saved. Since high-voltage batteries are frequently used, appropriate measures should be taken to ensure high-voltage safety in the assembly. For example, only some of the battery modules can be used for the movement instead of the whole battery pack, enabling low-voltage operation [25].
Further requirements for safety at work arise from the autonomous movement. The use of an appropriate environment detection prevents collisions with employees or machines as well as collisions between the vehicles themselves. There is also a need for a tracking system which is able to locate the exact position of the vehicles inside the assembly area. For this purpose, an integration into the vehicle based on RTLS6 technology or a complete external monitoring by camera systems is possible [25].
6 RTLS: Real-Time Locating System
[Figure: solution space spanning energy storage (early commissioning), sensors (early commissioning, positioning, environmental perception), actuators (early commissioning of drive, brake and steering) and equipment (changing vehicle position), followed by solution evaluation and selection]
Fig. 8: Solution space for the implementation of the autonomous movement in the final assembly
3.2 Augmented Reality
The application possibilities of Augmented Reality in assembly and in organizational areas close to assembly, such as logistics, are multifaceted. The systems can be used for direct support in the assembly process or in logistics supermarkets close to production. The fact that particular stations in the Agile Low-Cost Assembly are designed product-specifically would lead to a low utilization of these stations if a fixed staff allocation were used. For this reason, assembly operators are employed across different stations for greater efficiency. These assembly operators basically have to possess sufficient knowledge to work at different assembly stations. The visual support of the assembly process by means of Augmented Reality assists the worker in complicated tasks. Depending on the vehicle, workers receive all important information about the assembly process and about the components and tools which are needed for their tasks [41]. This approach saves personnel costs by reducing the training effort significantly. A scientific study at the WZL of RWTH Aachen University showed that learning times can be reduced by up to 35% by using Augmented Reality devices. With regard to the required quality, this approach only works for comparatively simple processes or requires additional training.
Training efforts for employees require a high capital outlay because the employees cannot be productive during the training. In addition, investments have to be made in training facilities [42]. Through the application of Augmented Reality systems in the Agile Low-Cost Assembly, the training efforts can be reduced. Instead of teaching the employees at learning facilities, they are trained during the assembly operation after a short introductory training. Special glasses enable the overlay of all information necessary for the performance of unknown processes. Deviations from the target process are displayed, so quality defects and errors can be corrected immediately. In contrast to a cycle-time assembly, this approach is well suited for the Agile Low-Cost Assembly, because the initially slower performance has no negative influence on adjacent assembly stations. The avoided training efforts as well as the good learning effect lead to a high efficiency of this method despite the initial process losses.
Examples of methods and devices which can be used to support the workers are glasses such as Google Glass or the Microsoft HoloLens, which show 3D-animated assembly instructions, or 3D assembly instructions projected onto the workstation table.
3.3 Rapid Fixture
Short product lifecycles and, as a result, the acceleration of product development lead to increasing pressure on production equipment planning and manufacturing. Because of their reliance on the product design, both processes take place just before or even in the middle of the series launch instead of after the product development. An early integration into the product development process is only possible to a limited extent because of the given timeline of production equipment planning and manufacturing. In the Agile Low-Cost Assembly this problem is resolved by the Rapid Fixture concept. The approach targets the acceleration of the provision of production equipment through automated design processes as well as the use of additive manufacturing. The development focuses on central applications such as special load carriers and assembly fixtures, which are initially considered with a simplified range of functions like supporting, holding or directing [43].
The approach includes an application-specific architecture for production equipment which serves to represent the relation between the functions and the workpiece of the respective production equipment type. In this regard, a distinction is made between the categories standard construction kit element, additively manufactured locator and conjunction element. The standard construction kit elements are kept in a modular design in various geometries. In the prototype stage they are connected to the support structure by additively manufactured conjunction elements. Due to the conjunction elements and the related plug-in principle, the production of all device geometries is possible without regard to the restrictions of conventional manufacturing processes. The likewise additively manufactured locator elements, which are the negative form of the respective workpiece geometry, form the interface between equipment and workpiece. This approach is made possible by deriving rule-based data from the CAD workpiece data and combining this data with automatically captured requirements. This fixture solution consisting of modular and additively manufactured elements provides fixtures with low expenditure as well as a fast adaptation to other product variants, if necessary. As a result and depending on the products, the required fixtures can be provided at every assembly station and in a batch size of one, if needed [43][44].
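As a rough illustration of how a locator geometry could be derived rule-based from workpiece data, the following Python sketch offsets surface contact points along their normals to obtain the negative-form mating surface. The point/normal representation and the fixed clearance rule are simplifying assumptions, not the published design algorithm of [43][44].

```python
# Illustrative sketch: deriving a negative-form locator surface from
# CAD-like workpiece data. The point/normal representation and the fixed
# clearance rule are hypothetical simplifications for demonstration.

from dataclasses import dataclass

@dataclass
class SurfacePoint:
    position: tuple  # (x, y, z) on the workpiece surface
    normal: tuple    # outward unit normal at that point

def locator_surface(points, clearance=0.1):
    """Offset each contact point outward by a small clearance to obtain
    the locator's mating surface (the 'negative' of the workpiece)."""
    offset_points = []
    for p in points:
        x, y, z = p.position
        nx, ny, nz = p.normal
        offset_points.append((x + clearance * nx,
                              y + clearance * ny,
                              z + clearance * nz))
    return offset_points

# Example: two contact points on a flat workpiece face (normal +z).
contacts = [SurfacePoint((0.0, 0.0, 10.0), (0.0, 0.0, 1.0)),
            SurfacePoint((5.0, 0.0, 10.0), (0.0, 0.0, 1.0))]
print(locator_surface(contacts))
```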
A concept-based selection of the additive manufacturing technologies can avoid the removal of support material and thus complex rework processes. For this purpose, an appropriate constructive design of the components is provided which permits suitable procedures such as FDM technology as well as the largely complete omission of support materials. Tie rods, which do not restrict the simple convertibility due to their easy detachability, are used additionally to ensure sufficient stability despite the plug-in principle. Fig. 9 shows an exemplary fixture [44].
[Figure: fixture consisting of an additively manufactured locator element, standard construction kit elements and additively manufactured conjunction elements]
Fig. 9: Exemplary assembly device according to the Rapid Fixture Approach [44]
Within the scope of industrial application, an integration into the existing IT infrastructure is necessary. In addition to the automation of the production equipment development process, a connection to the product data management system is necessary to ensure the effectiveness of the implemented design algorithms through feedback loops. For this reason, the system will be implemented as a learning system which generates different selectable options for the user [44].
The advantage of the Rapid Fixture approach is the temporal reduction of production equipment manufacturing. The designs are directly available due to the automated design processes, so they do not have to be created manually in an elaborate procedure. In addition to the design freedom of the elements, processes like the setup of machines can be omitted in production. Production equipment based on the plug-in principle can be converted right in manufacturing, which enables an additional saving of time during use and ramp-up [44]. Within the scope of the Agile Low-Cost Assembly this approach is of great importance for the assembly of different models as well as the launch of new products.
3.4 Tolerance-compensation elements
Components with geometry-dependent functions require adjustment processes to ensure their orientation within the tolerance zones. The adjustment is a critical assembly process that is used to adapt customer-relevant features, for instance the appearance of an automobile's outer skin as defined by the joint pattern. This process causes significant costs because of its implementation in the final assembly. In the traditional assembly, the compliance with strict tolerances causes high investment costs, e.g. through the use of expensive pressing tools as well as complex assembly processes. This is why the identification of the tolerance values represents a trade-off between production costs and quality standards [45].
In the Agile Low-Cost Assembly the concept of the Tolerance-compensation elements pursues another approach. Instead of aligning the assembly and production with strict tolerances, the tolerance deviation is measured and corrected individually for each component and without a high expenditure of time. This approach is enabled by the Tolerance-compensation elements. These elements are manufactured individually depending on the component and its tolerances and are integrated into the linkage mechanism. In this way the components can be perfectly aligned to each other due to the tolerance compensation.
[Figure: four-step process chain – (1) measurement of the components with a laser system, (2) deduction of target-actual differences by differential analysis software, (3) calculation of the Tolerance-compensation element geometry by an algorithm, (4) printing of the Tolerance-compensation element; example: bushing in a door hinge]
Fig. 10: Presentation of the process chain for the production of a Tolerance-compensation Element
The production process of the element consists of four steps (Fig. 10). These steps essentially comprise three-dimensional measurement methods as well as an algorithm to calculate the compensation element. The first step contains the three-dimensional measurement of the components which have to be assembled, using an optical measuring method, so that complex geometrical relations can be documented within a short period of time. The documented real data are then compared with the CAD data to determine the deviations of the component. To calculate the Tolerance-compensation element it is necessary to determine the deviations of the functional surfaces. In this way, for example, defined measurement points make angular deviations visible, which may lead to a rotatory shift of the components.
An algorithm performs the calculation of the compensation element. The geometrical relations of the documented deviations are transferred to an element whose basic structure is predefined. The generated data can be used immediately for continuous production with additive or conventional production methods. The Tolerance-compensation element is suitable for adjustment in all three dimensions. Depending on the chosen material, it can be used for static or dynamic components [45].
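The target-actual comparison underlying such an element can be sketched as follows in Python; the point correspondences and the purely translational, mean-offset compensation are simplifying assumptions, whereas the real algorithm also covers rotatory deviations.

```python
# Illustrative sketch of the target-actual comparison behind a
# Tolerance-compensation element. The point lists and the mean-offset
# rule are simplifying assumptions for demonstration.

import numpy as np

def compensation_offset(cad_points, measured_points):
    """Return the translation that a compensation element must provide so
    that the measured functional surface matches its CAD target.

    cad_points, measured_points -- (n, 3) arrays of corresponding points
    """
    cad = np.asarray(cad_points, dtype=float)
    measured = np.asarray(measured_points, dtype=float)
    deviations = cad - measured        # per-point target-actual difference
    return deviations.mean(axis=0)     # mean translational compensation

# Example: a hinge surface sitting 0.4 mm too low in z.
cad = [[0, 0, 10.0], [50, 0, 10.0], [0, 30, 10.0]]
measured = [[0, 0, 9.6], [50, 0, 9.6], [0, 30, 9.6]]
print(compensation_offset(cad, measured))  # approx. [0. 0. 0.4]
```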
One example where this method could be used is the door assembly. Normally, automobile doors are complicated to assemble with respect to a proper fit of the closing and opening mechanism and with respect to the joint dimensions. With this method, a Tolerance-compensation element can be installed in the door hinge to allow an adjustment-free assembly.
3.5 Smart Logistics
A part of the logistical material supply of the clocked line assembly, especially the supply of A-parts, takes place on the basis of a long-term production program. A detailed planning enables principles of direct delivery like Just-in-Sequence and contributes significantly to cost savings due to the reduction of stocks. An assembly concept based on dynamic paths and resequencing, however, cannot rely on a detailed planning which ensures a sufficient lead time for a reliable supply. For this reason, an adaptable and flexible logistics concept, which ensures timely external as well as internal delivery, is required [46].
Even the logistical system of the Agile Low-Cost Assembly will not forgo the principles of direct stockless delivery, which are needed to reduce cost and space requirements. This requires a decoupling of external and internal provision by means of decentralized sequencing buffers within the assembly area. The variant-specific components are delivered just-in-time for a determined range of the production program and sequenced or separated in the buffer afterwards, as soon as the final demand of an assembly station is fixed. The
provision at the station is realized via the communication of driverless transport systems with assembly stations and vehicles, which achieves a high route flexibility as well as an ideal capacity utilization of the logistics units [41]. In this way, components and materials can be provided to the stations completely autonomously by an automatic supply car, with a robotic arm taking over the handling7.
7 The robot in the sketched example was developed by Zacobria Robots in collaboration with MiR Mobile Industrial Robots and Universal Robots.
A high product variety leads to the use of many different tools, some of which are applied only model- or variant-specifically. This leads to high investment costs because these partly complex tools need to be available at several stations. In the Agile Low-Cost Assembly this problem is solved by automated tool trolleys which enable an efficient use of the same tools at several stations. If a vehicle needs a product-specific tool, an impulse with the required position and time of arrival is sent by a control system to one of the tool trolleys. In this way, the trolley is able to carry the tools automatically to the required locations at the best time. Because many tools do not have to be procured multiple times, the needs-based provision of tools at several stations leads to a significant reduction of the investment costs.
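The dispatch impulse described above can be illustrated with a short Python sketch; the trolley data, the straight-line distance metric and the earliest-arrival rule are hypothetical assumptions used only to demonstrate the control idea.

```python
# Illustrative sketch of the tool-trolley dispatch impulse. Trolley data,
# the distance metric and the earliest-arrival rule are hypothetical
# assumptions used only to demonstrate the control idea.

import math

TROLLEYS = {
    "T1": {"pos": (0.0, 0.0),   "busy": False, "speed": 1.2},  # m/s
    "T2": {"pos": (40.0, 10.0), "busy": True,  "speed": 1.2},
    "T3": {"pos": (15.0, 25.0), "busy": False, "speed": 1.2},
}

def dispatch(tool_position, required_arrival_s):
    """Pick the free trolley that reaches the station soonest, if any
    trolley can make it before the required arrival time."""
    best, best_eta = None, math.inf
    for name, t in TROLLEYS.items():
        if t["busy"]:
            continue
        eta = math.dist(t["pos"], tool_position) / t["speed"]
        if eta < best_eta:
            best, best_eta = name, eta
    if best is not None and best_eta <= required_arrival_s:
        return best, best_eta
    return None, None  # no trolley can serve the impulse in time

print(dispatch((20.0, 20.0), required_arrival_s=30.0))
```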
3.6 Assembly-control Cockpit
Within the scope of the Agile Low-Cost Assembly, the importance of production planning and production control changes. Production planning traditionally takes place medium- to long-term and includes the production program, the material requirements as well as the production process planning. Production control regulates the triggering as well as the monitoring of the orders on the basis of the production planning and controls the entire production and process flow. Furthermore, the initiation of measures against short-term production disruptions and failures is part of its scope [47].
To achieve the short-term flexibility needed for an ideal capacity utilization of the Agile Low-Cost Assembly, it is necessary to turn away from the existing fixed planning process as well as from the predetermined production sequence. Instead, a high-frequency, dynamic and situational assembly planning is targeted, in which production planning and production control increasingly merge. This form of combined assembly planning and assembly control is called „assembly cybernetics“. The merger of these fields of action is inevitable for the implementation of a decentralized non-linear assembly and for a significant reduction of the planning effort. Changes in the assembly structure, the layouts and short-term process adjustments can be realized by using digital innovations. In this way the system is enabled to perform a dynamic capacity alignment as well as a resequencing [41].
The basis of this approach is the connectivity between all components of the assembly system, such as production equipment, stations and vehicles, as well as the ability to self-optimize through the use of feedback and real-time data. The digital shadow, i.e. the sum of all data from the different sources of information, enables complete transparency of the production, so that a real-time image of the assembly can be created [41]. Data on assembly and logistics processes as well as production equipment and status information of the assembly objects are taken into account.
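A digital shadow of this kind can be pictured as a simple state store that fuses status events from heterogeneous sources, as in the following Python sketch; the event fields and source names are illustrative assumptions.

```python
# Illustrative sketch of a digital shadow: fusing status events from
# different sources into one transparent real-time image. Event fields
# and object names are hypothetical assumptions for demonstration.

from collections import defaultdict

class DigitalShadow:
    """Keeps the latest known state of every assembly object."""

    def __init__(self):
        self._state = defaultdict(dict)

    def ingest(self, event):
        """Merge one status event, e.g. from a station, vehicle or trolley."""
        obj = event["object_id"]
        self._state[obj].update(event["data"])
        self._state[obj]["last_update"] = event["timestamp"]

    def snapshot(self, object_id):
        return dict(self._state[object_id])

shadow = DigitalShadow()
shadow.ingest({"object_id": "vehicle_7", "timestamp": "10:27:03",
               "data": {"station": "doors", "progress": 0.6}})
shadow.ingest({"object_id": "vehicle_7", "timestamp": "10:27:09",
               "data": {"progress": 0.7}})
print(shadow.snapshot("vehicle_7"))
```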
Fig. 11 shows an exemplary production control cockpit which prepares and visualizes the collected data. The cockpit supports the user in getting an overview of the assembly processes by presenting the data at any level of detail depending on the use case.
[Figure: cockpit dashboard (as of 10:27, 18.05.2017) with views for total production, body shop, pre-assembly, final assembly, testing and rework, showing produced quantities per calendar week (target vs. actual), capacity utilization and lead times per vehicle (actual vs. maximum)]
Fig. 11: Exemplary presentation of the Assembly-control Cockpit
In the first place, the control cockpit presents information for the management level by condensing it into clear key figures. It is also possible to provide detailed condition data on the vehicles or the production equipment for decision-makers on the shop-floor level. The extent of the collected data already exceeds the human ability to process information. Furthermore, the required reaction speed rises constantly. In the future, automated data evaluation and intelligent control algorithms will therefore be of vital importance to optimally support the production controller, who will continue to be primarily a human.
3.7 Summary
The classification of the previously explained modules of the Agile Low-Cost Assembly within the framework presented in chapter 3 is shown below.
[Figure: the six solution modules – (1) self-driving chassis, (2) Augmented Reality, (3) Rapid Fixture, (4) Tolerance-compensation elements, (5) Smart Logistics, (6) Assembly-control Cockpit – each rated against the targets agility, ramp-up time ÷ 2 and investment ÷ 10 on the levels human, technology and organization]
Fig. 12: Classification of the solution modules in the regulatory framework of the Agile Low-Cost
Assembly
The individual solution modules are evaluated qualitatively with regard to their contribution to fulfilling the targets in the fields of agility, ramp-up time and investment costs.
Furthermore, the solutions are assigned to the levels human, technology and organization (see Fig. 12). The classification and evaluation shown in Fig. 12 is based on discussions about the automobile assembly of the future, which were held with researchers and automobile experts in preparation for the “Aachener Werkzeug Kolloquium” [36].
The structure of the argumentation is the same for all modules, so the approach is subsequently presented using the self-driving chassis as an example.
Self-driving chassis enable individual, vehicle-dependent routes. Furthermore, the transport function is placed in the vehicle itself, so the concept is applicable regardless of the model. In addition, the system is self-scaling because there is no need for additional material handling in the form of driverless transport systems for a higher vehicle throughput. This is why the solution module enables a higher agility at the technical as well as the organizational level. The assembly of the vehicles is independent of a conveyor system; as a result, such a system neither needs to be implemented at the beginning nor adapted to changes of the assembly itself. For this reason, the concept of the self-driving chassis achieves a reduction of the ramp-up time at the technical and organizational level. The omission of cost-intensive conveyor systems as well as the use of drive components, which are needed for the operation of the vehicles anyway, lead to significant savings of investment costs at the technological and organizational level.
4 Application examples of the Agile Low-Cost Assembly
While addressing the challenges described above will become increasingly important in the future, the implementation of alternative and flexible assembly structures, such as the Agile Low-Cost Assembly, is already necessary within the framework of the production of the StreetScooter in Aachen. The electric transporter has so far been produced in accordance with the requirements of package delivery [48]. Extending the application to other areas requires the modification of currently produced derivatives or the implementation of additional derivatives. This requires the integration of derivatives which diverge significantly from each other into the existing assembly system. This application is described below. Furthermore, the Aachen Demonstrator for the Agile Low-Cost Assembly shows that an implementation on the prototype level is already possible today with the available technologies.
4.1 The Agile Low-Cost Assembly for the StreetScooter Special Vehicles
Cost-effective solutions for light electric utility vehicles which are designed specifically for the urban environment are currently only available to a limited extent. With a range of approximately 80 km per battery charge and a possible payload of 710 kg, the StreetScooter, the delivery vehicle of the Deutsche Post DHL Group, has high potential as a cost-effective solution in this market segment [51]. Within the framework of Deutsche Post's delivery service, the StreetScooter will be manufactured in 2017 in a quantity of 10,000 units. The StreetScooter focuses on efficiency by taking into account product specifications that provide real value for the user in everyday application situations. In addition to the high potential for delivery companies, there is a great deal of interest on the part of municipal enterprises, craftsmen and private individuals in the context of different applications. Currently, the StreetScooter is optimized to meet the needs of delivery companies. The vehicle concept can be opened up for further application areas via the modification of the loading area, drive train, battery or driver's cab.
For example, the StreetScooter can be equipped with a three-sided tipper or a flatbed rather than the box body customary in mail delivery. This requires, on the one hand, the adaptation of additional elements to the existing standardized chassis. On the other hand, the realisation of the corresponding assembly processes is necessary. For this purpose, the vehicles have to be adapted individually in a customizing area following the assembly process of the driver's cab. The individual adaptation means that a majority of variant-specific assembly stations is required, which are not suitable within a line organization. Accordingly, the respective vehicle variants either drive only to those stations which contain relevant assembly contents, or the different assembly contents are assigned to the same station. Depending on the production program, no equal station utilization can be achieved, which is why a cross-station deployment of assembly operators is necessary. Through the use of Augmented Reality as an assembly support for the employees, training efforts can be minimized and complex processes facilitated for the assembly operators. Furthermore, the use of a few rapidly convertible fixture constructions based on Rapid Fixture ensures a high efficiency through investment cost savings. The use of a uniform chassis results in partly high tolerance deviations with respect to the different superstructures. Through the use of Tolerance-compensation elements, these deviations can be overcome without a high allocation of resources.
4.2 Aachen Demonstrator for the Agile Low-Cost Assembly
The Aachen Demonstrator for the Agile Low-Cost Assembly is a practical demonstration of the described concepts. On an area of approx. 1,500 m², the Agile Low-Cost Assembly is presented in the Production Engineering Cluster at the RWTH Aachen Campus. Instead of a main assembly line with separate pre-assembly stations, the concept is demonstrated with the principle of independent assembly stations. Due to the process-oriented shifting of the drive train assembly to the early stages of the final assembly, the vehicle moves autonomously in the sense of a driverless transport system. Guided by a control system and environmental detection, the chassis moves automatically to the assembly stations. An automated tool trolley follows the vehicle to the stations and provides the necessary tools. The Agile Low-Cost Assembly is demonstrated at four assembly stations using the innovative assembly concepts. Fig. 13 shows a section of the layout with the positioning of the individual assembly stations as well as the driving route and direction of the chassis.
[Figure: layout with stations (1) Rapid Fixture, (2) Tolerance-compensation elements, (3) connection elements and (4) disassembly along the driving route of the chassis]
Fig. 13: Coarse layout of the Aachen Demonstrator for the Agile Low-Cost Assembly
At the first station, the functionality of the Rapid Fixture concept is demonstrated by mounting a component with the assistance of a fixture construction made of additively manufactured elements. Station 2 shows the principle of tolerance compensation for the realization of an optimum gap dimension by installing the door using additively manufactured elements. In addition to the solution modules of the Agile Low-Cost Assembly, further innovation aspects from the assembly environment are demonstrated at the third station. Innovative connection elements simplify the installation of the radiator grille and fender, which optimizes the process and also enables remanufacturing, i.e. the extension of the product life cycle by means of updateable vehicle concepts. Since this demonstrator is a showcase, all components are disassembled at the last station and then transported back to the respective stations by an autonomous transport car.
5 Conclusion and Outlook
There is currently a profound structural change of the global market structure in the automotive environment. The formerly dominant triad markets are being replaced by globally distributed and in themselves volatile metropolitan markets. External access to these markets is made more difficult by different market-specific restrictions, such as the intensification of political requirements. Manufacturers have to consider different market needs and customer wishes for more individual products. Therefore, they react to these increasing customer requirements with almost unlimited configuration possibilities. The result is, on the one hand, an exponential increase in the variety of models and variants. On the other hand, the innovation cycles are continually shortened, while the number of units per model is reduced and the demands on the project partners of the OEMs become more heterogeneous. Future production strategies therefore must be characterized by the decentralized production of a large number of different models at one production site.
Given the increased requirements, the conventional line assembly of the automotive industry is reaching its limits. The rigid linking and the spatial restrictions of the assembly
stations as well as the dependency on permanently installed conveyor systems not only
limit the flexibility but also cause high investment costs. To address these challenges, the
Factory Planning Department of WZL of RWTH Aachen University and the Chair of PEM
of RWTH Aachen University are developing a new assembly concept, which is especially
characterized by high flexibility and low initial investment.
The core element of the concept is the substitution of the line assembly in favor of a low-investment structure with independent assembly stations. This is made possible by the replacement of stationary conveyor technology. The transport function is moved into the vehicle itself by an early commissioning of the drive train, so that the vehicle can navigate on an individual path through the assembly area. A dynamic, real-time control allows a continuous adaptation of the assembly system to ensure an optimal overall condition. The use of Augmented Reality supports the employees in complex work processes and reduces the necessary training effort. Furthermore, the use of additive manufacturing methods results in a cost-effective and timely production of fixtures as well as cost savings due to lower tolerance requirements. Autonomous and networked provisioning units enable a high utilization of the logistics system and a safe supply of the assembly stations through situational adaptation. With the introduction of the so-called Assembly-control Cockpit, there is always transparency about the assembly processes at the required level of detail.
Future research must concentrate on the efficient realization of the Agile Low-Cost Assembly. It requires primarily the management of the complex control effort caused by the
interaction of vehicles, assembly stations and logistics. In addition, it is important to identify for which applications the Agile Low-Cost Assembly is the optimal solution and offers advantages over other forms of assembly organization. The applicability of the Agile Low-Cost Assembly has already been verified in the framework of the StreetScooter special vehicles and the Aachen Demonstrator for the Agile Low-Cost Assembly.
6 Literature
[1] Michalos, G.; Makris, S.; Papakostas, N.; Mourtzis, D.; Chryssolouris, G.: Automotive assembly technologies review. Challenges and outlook for a flexible and adaptive approach. In: CIRP Journal of Manufacturing Science and Technology, Vol. 2, 2010, Issue 2, p. 81-91.
[2] Günthner, W.: Neue Wege in der Automobillogistik. Die Vision der Supra-Adaptivität. Springer, Berlin, Heidelberg, 2007.
[3] Lang, N.; Dauner, T.; Frowein, B.: Beyond BRIC. Winning the Rising Auto Markets. Boston Consulting Group (ed.), 2013.
[4] Die Welt 10.04.2015. https://www.welt.de/wirtschaft/article139351977/Mega-CityChina-baut-schon-an-Giga-Staedten.html (downloaded 02/15/2017).
[5] Koers, M.: Die deutsche Automobilindustrie - Status Quo und künftige Herausforderungen. VDA, Frankfurt, 2012.
[6] Verband der Automobilindustrie e.V. (VDA): Annual reports 2013-2016.
[7] Göpfert, I.; Braun, D.; Schulz, M.: Automobillogistik. Springer Gabler, Wiesbaden, 2012.
[8] Wolters, H.; Hocke, R.: Auf dem Weg zur Globalisierung - Chancen und Risiken. In: Wolters, H.; Landmann, R.; Bernhart, W.; Karsten, H.; Arthur D. Little International (ed.): Die Zukunft der Automobilindustrie. Herausforderungen und Lösungsansätze für das 21. Jahrhundert. Gabler, Wiesbaden, 1999.
[9] Schulz, R.; Hesse, F.: Das Produktionsnetzwerk des VW-Konzerns und die Versorgung der Überseewerke: ein Beitrag des Volkswagen-Konzerns. In: Goepfert, I.: Logistik der Zukunft. Gabler, Wiesbaden, 2009.
[10] Schelhas, J.: Die Kaluga-Rundlaufverkehre der DB Schenker AG - ein innovatives Praxisbeispiel für die Materialversorgung des Volkswagen-Werkes im russischen Kaluga. In: Goepfert, I.; Braun, D.; Schulz, M.: Automobillogistik. Springer Gabler, Wiesbaden, 2013.
[11] Huettenrauch, M.: Effiziente Vielfalt. Die dritte Revolution in der Automobilindustrie. Springer, Berlin, 2008.
[12] Grinninger, J.: Schlanke Produktionssteuerung zur Stabilisierung von Auftragsfolgen in der Automobilproduktion. TU München, 2012.
[13] Meichsner, T.: Migrationskonzept für einen modell- und variantenflexiblen Automobilkarosseriebau. PZH, Garbsen, 2007.
[14] Kern, W.; Rusitschka, F.; Kopytynski, W.; Keckl, S.; Bauernhansl, T.: Alternatives to assembly line production in automotive industry. In: Proceedings of the 23rd International Conference on Production Research (ICPR). Manila, August 2-5, 2015.
[15] Diez, W.: Automobil-Marketing. Navigationssystem für neue Absatzstrategien. mi-Fachverlag, Landsberg am Lech, 2006.
[16] Klauke, A.: Zukunftsorientierte Fabrikstrukturen in der Automobilindustrie. In: wt Werkstattstechnik online, Vol. 92, 2002, Issue 4, p. 144-148.
[17] Kampker, A.; Kreisköther, K.; Hollah, A.; Wagner, J.; Fluchs, S.: Kleinserien- und releasefähige Montagesysteme. Der Schlüssel zur wettbewerbsfähigen Elektromobilproduktion. In: Zeitschrift für wirtschaftlichen Fabrikbetrieb (ZWF), Vol. 111, 2016, Issue 10, p. 608-610.
[18] Klug, F.: Logistikmanagement in der Automobilindustrie. Grundlagen der Logistik im Automobilbau. Springer, Berlin, 2010.
[19] Rumpelt, T.; Winterhagen, J.: Volkswagen-Konzernlagebericht 2013. Nachhaltige Wertsteigerung. 2013.
[20] Ebel, B.; Hofer, M.: Automotive Management. Springer, Berlin, 2014.
[21] Statista.de: Absatz des VW Golf im Zeitraum der Jahre 1974 bis 2012 nach Modell (in Millionen). https://de.statista.com/statistik/daten/studie/240184/umfrage/absatzdesvw-golf-nach-modell/ (downloaded 02/17/2017).
[22] Audi AG: Produktion am Standort Ingolstadt. Kennzahlen. http://www.audi.com/corporate/de/unternehmen/produktionsstandorte/audiproduktion-weltweit/ingolstadt.html (downloaded 02/23/2017).
[23] Statistics Bureau: News Bulletin December 2016. http://www.stat.go.jp/english/info/news/20161227.htm (downloaded 03/04/2017).
[24] Audi AG: Annual report 2015. 2015.
[25] Kampker, A.; Deutskens, C.; Kreisköther, K.; Schumacher, M.: Selbstfahrende Fahrzeugchassis in der Fahrzeug-Endmontage. In: VDI-Z integrierte Produktion, Vol. 157, 2015, Issue 6, p. 23-26.
[26] Richter, M.: Gestaltung der Montageorganisation. In: Lotter, B.; Wiendahl, H.-P. (ed.): Montage in der industriellen Produktion. Ein Handbuch für die Praxis. Springer, Berlin, p. 95-125, 2014.
[27] Aurich, J.; Barbian, P.; Wagenknecht, C.: Prozessmodule zur Gestaltung flexibilitätsgerechter Produktionssysteme. In: Zeitschrift für wirtschaftlichen Fabrikbetrieb (ZWF), 2003, Issue 5, p. 214-218.
[28] Nyhuis, P.; Heinen, T.; Reinhart, G.; Rimpau, C.; Abele, E.; Wörn, A.: Wandlungsfähige Produktionssysteme. Theoretischer Hintergrund zur Wandlungsfähigkeit von Produktionssystemen. In: wt Werkstattstechnik, Vol. 98, 2008, Issue 1/2, p. 85-91.
[29] Kuhn, A.; Wiendahl, H.-P.; Eversheim, W.; Schuh, G.: Fast Ramp-Up - Schneller Produktionsanlauf von Serienprodukten. Verlag Praxiswissen, Dortmund, 2002.
[30] Claus, T.; Herrmann, F.; Manitz, M.: Produktionsplanung und -steuerung. Forschungsansätze, Methoden und deren Anwendungen. Springer Gabler, Berlin, 2015.
[31] Becker, C.: Abstimmung flexibler Endmontagefließbänder in der Automobilindustrie. Universität Jena, Norderstedt, 2007.
[32] Klauke, A.; Schreiber, W.; Weißner, R.: Zukunftsorientierte Fabrikstrukturen in der Automobilindustrie. In: wt Werkstattstechnik, Vol. 92, 2002, Issue 4, p. 144-148.
[33] Shimokawa, K.; Jürgens, U.; Fujimoto, T.: Transforming Automobile Assembly. Springer, Berlin, 1997.
[34] Wiendahl, H.-P.: Wandlungsfähigkeit. Schlüsselbegriff zur zukunftsfähigen Fabrik. In: wt Werkstattstechnik, Vol. 92, 2002, Issue 4, p. 122-127.
[35] Koren, Y.: The global manufacturing revolution. Product-process-business integration and reconfigurable systems. Wiley, Hoboken, 2010.
[36] Kampker, A.; Bartl, M.; Bertram, S.; Burggräf, P.; Dannapfel, M.; Fischer, A.; Grams, J.; Knau, J.; Kreisköther, K.; Wagner, J.: Agile Low-Cost Montage. In: Internet of Production für agile Unternehmen, 1st edition, chapter 2.3, Apprimus Verlag, Aachen, 2017, p. 231-259.
[37] Popp, J.; Wehking, K.-H.: Neuartige Produktionslogistik für eine wandelbare und flexible Automobilproduktion. In: Logistics Journal: Proceedings, 2016.
[38] Gabler Wirtschaftslexikon, Online-Edition. http://wirtschaftslexikon.gabler.de/Archiv/72556/baukastensystem-v7.html (downloaded 02/18/2017).
[39] Kiefer, J.: Mechatronikorientierte Planung automatisierter Fertigungszellen im Bereich Karosserierohbau. Dissertation, Saarland University, Saarbrücken, 2007.
[40] Strohm, O.; Ulich, E.: Unternehmen arbeitspsychologisch bewerten. Ein Mehr-Ebenen-Ansatz unter besonderer Berücksichtigung von Mensch, Technik, Organisation. Vdf Hochschulverlag, Zürich, 1997.
[41] Kampker, A.; Kreisköther, K.; Wagner, J.; Fluchs, S.: Mobile Assembly of Electric Vehicles: Decentralized, Low-Invest and Flexible. In: International Scholarly and Scientific Research & Innovation, Vol. 10, 2016, Issue 12, p. 1920-1926.
[42] Braun, S.; Käfer, T.; Koriath, D.; Harth, A.: Individualisiertes Gruppentraining mit Datenbrille für die Produktion. In: Mayr, H.; Pinzger, M. (ed.): INFORMATIK 2016. Gesellschaft für Informatik, Bonn, 2016.
[43] Kampker, A.; Kreisköther, K.; Wagner, J.; Förstmann, R.: Automation of production equipment design - Additively manufactured elements and a construction kit approach as enabler for automated design processes. 2017.
[44] Kampker, A.; Kreisköther, K.; Wagner, J.; Förstmann, R.: Automatisierung der Betriebsmittelgestaltung. Additiv gefertigte Elemente und Baukastenansatz als Befähiger automatisierter Gestaltungsprozesse. To be published in: wt Werkstattstechnik, edition April 2017.
[45] Kampker, A.; Kreisköther, K.; Wagner, J.; Hoffmann, A.: Toleranzausgleichselemente für eine justagefreie Montage. Messen - Ausgleichselement herstellen - Montieren. In: Zeitschrift für wirtschaftlichen Fabrikbetrieb (ZWF), Vol. 111, 2016, Issue 11, p. 736-739.
[46] Reil, H.: Smart Faction. In: Audi Dialoge Smart Factory, 2015, p. 26-31.
[47] Seliger, G.: Montage und Demontage. In: Grote, K.-H.; Feldhusen, J. (ed.): Dubbel. Springer, Berlin, 2014.
[48] Ingenieur.de 24.08.2016. http://www.ingenieur.de/Themen/Elektromobilitaet/Post-Fahrzeugbauer-1000sten-StreetScooter-ausgeliefert (downloaded 02/23/2017).
[49] StreetScooter GmbH: Online webpage. http://www.streetscooter.eu/modelle/work (downloaded 02/23/2017).
Evaluation of technology chains for the production of all-solid-state
batteries
Joscha Schnell1,a, Andreas Hofer1,b, Célestine Singer1,c, Till Günther1,d and
Gunther Reinhart1,e
1 Institute for Machine Tools and Industrial Management (iwb), Technical University of Munich, Boltzmannstr. 15, 85748 Garching, Germany
a joscha.schnell@iwb.mw.tum.de, b andreas.hofer@iwb.mw.tum.de, c celestine.singer@tum.de, d till.guenther@iwb.mw.tum.de, e gunther.reinhart@iwb.mw.tum.de
Keywords: Innovation Management, Production Planning, Technology Identification
Abstract. Solid electrolytes are the key to safer batteries with higher energy density. However, little is known to date about the fabrication of large-format all-solid-state batteries (ASSBs). In this
paper, a method for the generation and evaluation of technology chains for mass production of
ASSBs is presented. Based on the development of a product structure, requirements for the
fabrication of ASSBs are identified by means of expert elicitation. Subsequently, search fields for
the identification of suitable manufacturing technologies are deduced, regarding, for example,
ceramic process technologies for fuel cell and capacitor fabrication. By a systematic comparison of
the identified technologies with the requirements of ASSBs, technology chains can be generated.
Finally, different material combinations and technology chains can be compared using an
assessment of performance indicators, such as technology readiness and cost efficiency. The
applicability of the method is illustrated for the evaluation of a tape casting process for oxide based
ASSBs applying a Monte-Carlo simulation for the assessment of the technology readiness.
Introduction and objective
Limited driving range is still among the main reasons for poor market acceptance of battery
electric vehicles. The energy density of lithium-ion cells, which significantly affects the driving
range, has been increased by a factor of almost four during the last 25 years. However, the current
technology may soon reach a limit, governed by the theoretical energy density of the materials used
[1]. A lithium-ion cell typically consists of a graphite anode, a porous separator membrane, a
lithium-metal-oxide cathode, and a liquid electrolyte for ionic transport, as depicted in Fig. 1a. By
replacing the graphite anode with a lithium metal anode, the volumetric energy density could be
enhanced by up to 70 % compared to conventional lithium-ion cells [1]. However, inhomogeneous
deposition of lithium during charge of the battery leads to the growth of dendrites that can penetrate
the porous separator, resulting in failure of the battery by short-circuiting [2]. This safety hazard
can be circumvented by the use of a dense solid electrolyte acting as a separator and ion conductor
at the same time (Fig. 1b) and representing a physical barrier for dendrites. Sulphide [3] and oxide
[4] based solid electrolytes seem to be most suitable for electric vehicle applications due to their
high ionic conductivities – some of them even exceeding those of conventional liquid electrolytes
[5].
Despite the intensive research activities on the respective chemistries and materials, the
replacement of the liquid electrolyte to form a bulk-type all-solid-state battery (ASSB) has turned
out to be challenging, as illustrated in Fig. 1c. Although solid electrolytes can have high ionic
conductivities, the interface resistance between solid electrolyte and electrodes can preclude high
rate capabilities necessary for fast charging [6]. Sufficient mechanical contact between solid
particles must be ensured, especially with regard to expansion and shrinkage of electrode materials
during charge and discharge [7]. The limited electrochemical stability of most solid electrolytes
against cathode and anode potentials can lead to decomposition of the materials [8]. Hence,
protective coatings of cathode particles [9] and lithium anodes [10] may be necessary for proper
functionality of the ASSB. A protective layer can also be used to homogenise the lithium flux at the
anode [10], reducing the risk of dendrite creeping along grain boundaries [11].
[Figure: (a) conventional lithium-ion cell with aluminum collector, lithium-metal-oxide cathode, porous membrane, graphite anode and copper collector; (b) solid battery with a ceramic ion conductor and lithium metal anode, yielding up to 70 % higher energy density; (c) ASSB with cathode composite and protection layer, with challenges at the interfaces]
Figure 1: Advantages and challenges of all-solid-state batteries, adapted from [1]
In contrast to solid batteries based on polymer electrolytes [12] and thin film technologies [13], the fabrication of bulk-type ASSBs has, to our knowledge, so far only been realised on a laboratory scale. Currently, powder pressing is one of the most commonly used fabrication methods in research laboratories [14]. Here, the powders are pressed and heated to form highly densified pellets. However, this process is difficult to scale up, and competitive energy densities can hardly be reached with the large amounts of solid electrolyte required [15]. Hence, alternative processing methods need to be considered, taking into account the needs of commercial mass production. Only a few publications have investigated the fabrication of ASSBs using easily scalable production processes, such as sheet coating [16] and screen printing [17]. A detailed description of the lab-scale fabrication steps for a sulphide based large-format pouch-bag cell with multiple layers was given by Ito et al. (2014) [18]. Troy et al. (2016) [19] outlined a possible production chain for the fabrication of oxide based ASSBs. However, in order to allow for a realistic comparison of ASSB systems and fabrication technologies, a comprehensive overview of possible industrial production scenarios and the respective challenges is necessary.
Therefore, the scope of this paper is a method to generate and evaluate technology chains for the
large-scale production of bulk-type sulphide and oxide based ASSBs. The method consists of the
following five steps, on which the structure of this paper is based: At first, a generalised product
structure for ASSBs is developed. This product structure is utilised to generate a reference
technology chain and to identify requirements for industrial production of ASSBs by means of
expert interviews. Subsequently, search fields for the identification of production technologies are
deduced. By connecting the different technologies according to their technology functions and
comparing the identified requirements with the technologies, technology chains can be generated.
Finally, the created technology chains are evaluated and critical process steps can be detected.
Results and discussion
Product structure of all-solid-state batteries. In order to enable a systematic generation and
evaluation of technology chains for ASSBs, a detailed analysis of the corresponding product
properties is required, taking into account the needs of mass production. Lithium-ion battery
production is a complex matter due to the high number and diversity of processes, as well as
interactions between process parameters and product properties [20]. In contrast to conventional
lithium-ion batteries, little is known about possible product parameters of ASSBs, such as electrode composition or cell geometry. The identification of requirements can be supported by developing a product structure, in order to allow for a generalised, model-based description of ASSBs, similar to the product model presented by Reinhart et al. (2012) [21] for conventional lithium-ion cells.
Based on a literature and patent research, the main differences of ASSBs compared to
conventional lithium-ion batteries were identified. The product structure is composed of four
hierarchy levels from the product level to the module level, the component level, and the material
level, as depicted in Fig. 2. The product level represents the ASSB, which, on the module level,
consists of a cell stack, housing, isolation, etc. The cell stack consists of a multiplicity of anode,
separator, and cathode layers, which can be stacked with electrodes connected in series (bipolar
stacking) or in parallel. The major change compared to conventional lithium-ion cells is the
replacement of the porous, electrolyte soaked separator by a dense solid electrolyte layer. A lithium
metal layer with a protective film is assumed for the anode (cf. Fig. 1c) [10]. On the cathode side, in
order to allow for sufficient ionic transport, a composite electrode is necessary. Here, on the
material level, solid electrolyte particles are included into the electrode structure. Additionally, a
protective coating can be applied onto the cathode active material particles to ensure
electrochemical stability of the components [9]. Further attributes are, for example, the amount of
binders and additives, the particle size and distribution, the homogeneity, porosity, etc.
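Such a hierarchical, model-based description can be encoded compactly, as the following Python sketch shows; the selected attributes are a simplified assumption based on Fig. 2, not the authors' complete product model.

```python
# Illustrative sketch of the four-level ASSB product structure from Fig. 2.
# The chosen attributes are simplified assumptions for demonstration.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    level: str                                  # product/module/component/material
    characteristics: dict = field(default_factory=dict)
    requirements: list = field(default_factory=list)
    children: list = field(default_factory=list)

electrolyte = Node("solid electrolyte particles", "material",
                   {"avg_particle_size_um": 1.0, "ionic_conductivity_mS_cm": 1.2},
                   ["narrow particle size distribution"])
cathode = Node("cathode composite", "component",
               {"porosity": 0.05}, ["sufficient ionic transport"], [electrolyte])
cell_stack = Node("cell stack", "module", {"stacking": "bipolar"}, [], [cathode])
assb = Node("all-solid-state battery", "product", {}, [], [cell_stack])

def walk(node, indent=0):
    """Print the hierarchy from product level down to material level."""
    print(" " * indent + f"{node.level}: {node.name}")
    for child in node.children:
        walk(child, indent + 2)

walk(assb)
```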
[Figure: product structure with levels (1) product (all-solid-state battery), (2) module (cell stack; housing, isolation, etc.), (3) component (cathode composite, solid electrolyte separator, anode, etc.) and (4) material (cathode particles, solid electrolyte particles, additives, binders, etc.), each annotated with characteristics and requirements such as active material, protective coating, average particle size, particle size distribution, binder solubility, conductive agents, and ionic or electronic conductivity]
Figure 2: Excerpt of the product structure of a bulk-type large-format all-solid-state battery
This product structure can be used as a basis to generate technology chains for the production of
ASSBs, as illustrated in Fig. 3: At first, the process chain for conventional lithium-ion cell
production [22] is abstracted using so-called technology functions. Technology functions describe
the fundamental task a technology needs to perform in order to enable the manufacturing of a
certain product. These technology functions represent a solution neutral description of the
respective process steps [23], such as “electrode compression” instead of “electrode calendering”.
The resulting technology function chain is then compared with the product structure presented in
Fig. 2 to deduce a reference technology function chain for the production of ASSBs. While the
material level is mainly defined by chemical process engineering, the component level can be
attributed to process engineering disciplines, such as “components mixing”, “electrode forming”,
and “electrode compression”, as well as manufacturing technology, such as “electrode cutting”. On
this level, the technology function “solid electrolyte application” needs to be added into the
technology function chain. On the module and product level, assembly technologies, such as “cell
stacking”, “current collector joining”, and “cell stack packaging” are predominant. In contrast to
conventional lithium-ion cells, no electrolyte filling process is required. The technology function
“cell formation” may be rendered void since an ASSB containing a lithium-metal anode is already
in a charged state directly after assembly of the cell stack. The resulting reference technology chain
can then be used for the generation and evaluation of technology chains for the production of
ASSBs, as will be explained in the following sections.
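A minimal software representation of this abstraction is sketched below in Python; the candidate technologies listed per function are assumptions based on the examples named in the text.

```python
# Illustrative sketch: a reference technology function chain for ASSBs with
# solution-specific candidate technologies per function. The candidate
# lists are assumptions based on the examples mentioned in the text.

REFERENCE_FUNCTION_CHAIN = [
    "components mixing",
    "electrode forming",
    "electrode compression",
    "electrode cutting",
    "solid electrolyte application",
    "cell stacking",
    "current collector joining",
    "cell stack packaging",
]

# Solution-neutral function -> solution-specific candidate technologies
CANDIDATES = {
    "electrode forming": ["tape casting", "screen printing", "extrusion"],
    "electrode compression": ["calendering", "sintering"],
    "solid electrolyte application": ["vapour deposition", "wet chemical coating"],
}

for function in REFERENCE_FUNCTION_CHAIN:
    options = CANDIDATES.get(function, ["<to be identified>"])
    print(f"{function}: {', '.join(options)}")
```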
[Figure: abstraction of the technology chain for lithium-ion cells (e.g. electrode coating, electrode calendering, jelly-roll winding) into a solution-neutral technology function chain (electrode forming, electrode compression, cell stacking), deduction of a reference technology function chain for ASSBs with the added function "solid electrolyte application", and generation of possible solution-specific technology chains A, B, C, ... for ASSBs]
Figure 3: Generation of technology chains for ASSB using a reference technology function chain
Identification of requirements. In order to identify technologies for the fabrication of ASSBs, a
deep understanding of the electrochemical and mechanical characteristics is required. Due to the
novelty of the materials and components used for ASSBs, expert knowledge is fundamental to gain
insight into the respective properties and requirements [24]. For the preparation of the expert
interviews, a morphological box was prepared for each technology function to facilitate the
identification of material and production requirements. Here, different production technologies
were taken into account for the respective technology functions, such as “tape casting”, “screen
printing”, or “extrusion” for the technology function “electrode forming”. For each technology,
additional process parameters and intermediate product properties were included into the
morphological box, for example “processing atmosphere”, “processing temperature”, or “bending
stiffness”.
Interviews were performed with 21 experts from automotive manufacturers, chemical industry,
and research institutes. During the interviews, twelve different cell designs were suggested by the
experts. Subsequently, technology functions were defined and combined by the experts according to
the respective cell designs. For each technology function, a specific production technology was
selected from the morphological box, with the possibility to add new technologies if necessary. Up
to eight different technologies were suggested, e.g., for fabrication of the solid electrolyte layer.
Finally, requirements and properties of the technologies and intermediate products were surveyed.
For evaluation of the morphological boxes, a clustering of requirements for the different material
combinations enables the condensation of information. For this purpose, a product model can be
developed for each material combination according to the product structure presented in Fig. 2.
Technology identification. To enable a systematic development of possible production scenarios, a
comprehensive overview of suitable manufacturing technologies is required. Based on the approach
for systematic technology identification by Greitemann et al. (2016) [25], potential technologies
can be scouted and described. Potential and promising search fields are identified based on the
defined technology functions and production requirements. Due to the early research and
development stage of ASSBs, information sourcing was based on a literature research concerning
manufacturing technologies for conventional lithium-ion cells [26], fuel cells [27], and ceramic
capacitors [28], as well as elicitation of experts on lithium-ion cell production and ceramics
processing.
Generation and evaluation of technology chains. Figure 4 shows the procedure for the generation
and evaluation of technology chains, which was adapted from Reinhart & Schindler (2012) [29] to the specific characteristics of ASSBs.
[Figure: three-phase procedure – suitability of technologies (analysis of requirements, comparison of requirements and technology characteristics, evaluation of technical feasibility, selection of suitable technologies), generation of technology chains (consideration of interdependencies between technologies, development of a morphological box, linkage to technologies), and evaluation of technology chains (evaluation of technology readiness, evaluation of economic characteristics, categorization and prioritization, selection of technology chains)]
Figure 4: Procedure for the generation and evaluation of technology chains for ASSBs
The method starts with a systematic analysis of the product requirements. The technologies identified for each of the search fields described above are documented with a special focus on technology performance. The technical feasibility is summarised in a matrix against the requirements in order to select suitable technologies for each search field. However, the evaluation of technology suitability and technology potential is currently a research topic of its own. The second part begins with the consideration of interdependencies between technologies and product features. This reflection of reciprocal influences or possible exclusions of technologies informs the subsequent development of a morphological box. By a combinatorial analysis of possible paths through the morphological box, technology chains can be generated. In order to allow for a comparison of production scenarios for ASSBs, the generated technology chains need to be evaluated in terms of performance indicators. A promising approach for technology evaluation has been described by Reinhart et al. (2011) [30], where a multi-criteria evaluation considering product feasibility, competitive potential, resource efficiency, technology maturity and profitability can be performed. Finally, technologies are compared and selected.
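The combinatorial generation of technology chains from the morphological box can be sketched in a few lines of Python; the box contents and the exclusion rule are illustrative assumptions.

```python
# Illustrative sketch: enumerating technology chains as paths through a
# morphological box, with a simple exclusion rule for incompatible pairs.
# Box contents and the exclusion are assumptions for demonstration.

from itertools import product

MORPHOLOGICAL_BOX = {
    "electrode forming": ["tape casting", "screen printing", "extrusion"],
    "electrode compression": ["calendering", "sintering"],
    "solid electrolyte application": ["vapour deposition", "wet chemical coating"],
}

# Hypothetical interdependency: a high-temperature sintering step excludes
# a wet chemical coating of the solid electrolyte (cf. the application
# example later in this paper).
EXCLUSIONS = {("sintering", "wet chemical coating")}

def generate_chains(box, exclusions):
    functions = list(box)
    for combo in product(*box.values()):
        pairs = {(a, b) for a in combo for b in combo if a != b}
        if pairs & exclusions:
            continue  # skip chains containing an excluded technology pair
        yield dict(zip(functions, combo))

for chain in generate_chains(MORPHOLOGICAL_BOX, EXCLUSIONS):
    print(chain)
```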
Based on this approach, the technology evaluation criteria within the underlying questionnaire
[31] were adapted to ASSB production. The questionnaire consists of specific questions concerning
seven maturity stages, which range from basic research in stage 1, feasibility study in stage 2,
technology development in stage 3, technology demonstration in stage 4, integration into
manufacturing resources in stage 5, integration in the production system in stage 6, to complete
integration into series production lines in stage 7 [31]. The questionnaire was extended by the
consideration of uncertainty for each maturity stage and specific evaluation criteria in order to
represent technology maturity more precisely. The results from the questionnaire can be evaluated
using a Monte-Carlo simulation to account for the uncertainty of the experts’ statements, as
described in Fig. 5. To this end, the expert assesses the certainty of each statement, which yields a probability distribution with the expert's statement as mean value and a standard deviation depending on the level of uncertainty. Subsequently, a Monte-Carlo simulation is performed for each statement, and the results are cumulated to identify the level-based readiness values as well as the overall technology readiness level.
[Figure 5 shows the path from data (fulfilment and knowledge uncertainty per question Q1, Q2, ...) via the modelling of a probability distribution per TRL stage and a Monte-Carlo simulation to the readiness by development stage and the overall readiness.]
Figure 5: Evaluation of the questionnaire for determination of technology readiness levels (TRL)
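The following Python sketch mirrors this procedure. The questionnaire data, the mapping of stated uncertainty to a standard deviation of a normal distribution, and the equal weighting of all seven stages in the overall readiness are hypothetical assumptions for illustration only.

```python
# A minimal sketch of the Monte-Carlo evaluation of the maturity questionnaire,
# assuming invented fulfilment values and uncertainties per statement.
import numpy as np

rng = np.random.default_rng(42)
N_RUNS = 1000

# Hypothetical data: stage -> list of (fulfilment, uncertainty) per question,
# both in [0, 1]; the uncertainty is used as the standard deviation.
statements = {
    1: [(0.95, 0.02), (0.90, 0.05)],
    2: [(0.90, 0.05)],
    3: [(0.85, 0.08), (0.80, 0.10)],
    4: [(0.75, 0.10)],
    5: [(0.40, 0.15), (0.35, 0.15)],
    6: [(0.25, 0.15)],
    7: [(0.10, 0.10)],
}

stage_readiness = {}
for stage, answers in statements.items():
    # One sample per run and statement, clipped to a valid fulfilment level.
    samples = np.clip(
        np.column_stack([rng.normal(mu, sd, N_RUNS) for mu, sd in answers]),
        0.0, 1.0)
    stage_readiness[stage] = samples.mean(axis=1)  # level-based readiness

# Overall readiness per run as the mean over the seven stages (one possible
# aggregation; the exact weighting is not specified in the text).
overall = np.mean(np.column_stack(list(stage_readiness.values())), axis=1)

for stage, r in stage_readiness.items():
    print(f"stage {stage}: {r.mean():.0%} +/- {r.std():.0%}")
print(f"overall readiness: {overall.mean():.0%}, "
      f"90% of runs in [{np.percentile(overall, 5):.0%}, "
      f"{np.percentile(overall, 95):.0%}]")
```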
Application of the method. In order to illustrate the applicability of the method, one possible
production scenario is described for an oxide based half-cell consisting of cathode composite and
solid electrolyte layer, as depicted in Fig. 6a. For each technology function, one suitable technology
was chosen: After mixing of the cathode active material (AM) with the oxide solid electrolyte
particles and additives, such as conductive agents and binders, a tape casting process is used for
electrode forming. After evaporation of the solvent, the sheet is punched to the desired shape. A
sintering step is used for electrode compaction, followed by a laser cutting process for final
shaping. A vapour deposition process is suggested for the solid electrolyte application, since, in this
case, no subsequent high-temperature sintering process is required. Interactions of a high-temperature sintering process with the cathode materials [32] inhibit the use of a possibly cheaper wet chemical coating process.
[Figure 6 shows a) the technology chain: mixing of cathode AM, oxide and additives; tape casting onto a polymer tape; drying; punching; sintering; laser cutting; vapour deposition of the solid electrolyte; b) the readiness by stages 1 to 7; c) the overall technology readiness as a histogram of hits per 2% readiness interval (]40%; 42%] to ]68%; 70%]) from a Monte-Carlo simulation with n = 1000 runs.]
Figure 6: Evaluation of a tape casting technology for production of a cathode composite
Exemplary outcomes of the evaluation of the tape casting technology are illustrated in Fig. 6b
and c. The results were obtained in a workshop with three production experts for coating
technologies, using a Monte-Carlo simulation for the evaluation of the questionnaire considering
uncertainty of expert knowledge. In Fig. 6b, the technology readiness is illustrated by the seven
technology stages. The bars each represent the fulfilment level of the stage, supplemented by error
bars which show the standard deviation as an expression of the level of data uncertainty. In the case of tape casting, stages 1 to 4 are sufficiently fulfilled, which corresponds to a successful demonstration of the technology. Development needs can be identified in stages 5 to 7, where mainly the integration of the technology into manufacturing resources (stage 5) and into the production system (stage 6) needs to be optimized in the next steps. Hence, the overall technology readiness can be estimated within the range of 54% to 60% with high certainty (Fig. 6c); 816 out of 1000 runs of the Monte-Carlo simulation fall within this range.
The presented method will allow for a classification and prioritisation of technology chains for
different applications in the industrial environment. This will be a prerequisite for a well-founded
selection and conclusion about technology chains for the large-scale production of ASSBs.
Summary and conclusion
In conclusion, a method to generate and evaluate technology chains for the production of ASSBs
was presented. First, a product structure for ASSBs was developed. An abstract reference
technology function chain is utilised to identify production requirements using expert elicitation.
Subsequently, search fields for the identification of suitable production technologies are deduced.
By comparing the identified technologies with the weighted requirements, technology chains can be
generated. Finally, the generated technology chains are evaluated according to their performance
characteristics, such as technology readiness and economic aspects. The exemplary generation of a
technology chain for the fabrication of oxide-based ASSBs was used to illustrate the applicability of
the method, complemented by a technology readiness assessment of the tape casting process.
Further interdisciplinary research will be required to build up detailed product models for bulk-type
ASSBs and to identify corresponding technology function chains and technology alternatives to
facilitate a comprehensive overview of possible production scenarios and the respective challenges.
Acknowledgements
We extend our sincere thanks to the Federal Ministry for Education and Research
(Bundesministerium für Bildung und Forschung) for the funding of our research. The results
presented in this paper have been achieved within the scope of the project “FELIZIA” (grant
number 03XP0026I).
References
[1] J. Janek, W.G. Zeier, A solid future for battery development, Nat. Energy 1 (2016) 16141.
[2] A. Varzi, R. Raccichini, S. Passerini, B. Scrosati, Challenges and prospects of the role of solid
electrolytes in the revitalization of lithium metal batteries, J. Mater. Chem. A 4 (2016) 17251-17259.
[3] Y.S. Jung, D.Y. Oh, Y.J. Nam, K.H. Park, Issues and Challenges for Bulk-Type All-Solid-State
Rechargeable Lithium Batteries using Sulfide Solid Electrolytes, Isr. J. Chem. 55 (2015) 472-485.
[4] Y. Ren, K. Chen, R. Chen, T. Liu, Y. Zhang, C.-W. Nan, B. Vyas, Oxide Electrolytes for
Lithium Batteries, J. Am. Ceram. Soc. 98 (2015) 3603-3623.
[5] Y. Kato, S. Hori, T. Saito, K. Suzuki, M. Hirayama, A. Mitsui, M. Yonemura, H. Iba, R.
Kanno, High-power all-solid-state batteries using sulfide superionic conductors, Nat. Energy 1
(2016) 16030.
[6] A.C. Luntz, J. Voss, K. Reuter, Interfacial challenges in solid-state Li ion batteries, J. Phys. Chem. Lett. 6 (2015) 4599-4604.
[7] A. Sakuda, A. Hayashi, M. Tatsumisago, Sulfide solid electrolyte with favorable mechanical
property for all-solid-state lithium battery, Sci. Rep. 3 (2013) 2261.
[8] Y. Zhu, X. He, Y. Mo, Origin of Outstanding Stability in the Lithium Solid Electrolyte
Materials: Insights from Thermodynamic Analyses Based on First-Principles Calculations,
ACS Appl. Mater. Interfaces 7 (2015) 23685-23693.
[9] N. Machida, J. Kashiwagi, M. Naito, T. Shigematsu, Electrochemical properties of all-solid-state batteries with ZrO2-coated LiNi1/3Mn1/3Co1/3O2 as cathode materials, Solid State Ionics 225 (2012) 354-358.
[10] W. Zhou, S. Wang, Y. Li, S. Xin, A. Manthiram, J.B. Goodenough, Plating a Dendrite-Free
Lithium Anode with a Polymer/Ceramic/Polymer Sandwich Electrolyte, J. Am. Chem. Soc. 138
(2016) 9385-9388.
[11] Y. Ren, Y. Shen, Y. Lin, C.-W. Nan, Direct observation of lithium dendrites inside garnet-type lithium-ion solid electrolyte, Electrochem. Commun. 57 (2015) 27-30.
[12] Information on https://www.smartgrid.gov/files/Seeo_SolidStateBatteries_FTR_DEOE0000223_0.pdf
[13] A. Patil, V. Patil, D. Wook Shin, J.-W. Choi, D.-S. Paik, S.-J. Yoon, Issue and challenges
facing rechargeable thin film lithium batteries, Mater. Res. Bull. 43 (2008) 1913-1942.
[14] Y.-S. Hu, Batteries: Getting solid, Nat. Energy 1 (2016) 16042.
[15] Y.J. Nam, S.-J. Cho, D.Y. Oh, J.-M. Lim, S.Y. Kim, J.H. Song, Y.-G. Lee, S.-Y. Lee, Y.S.
Jung, Bendable and thin sulfide solid electrolyte film: a new electrolyte opportunity for free-standing and stackable high-energy all-solid-state lithium-ion batteries, Nano Lett. 15 (2015)
3317-3323.
[16] T. Inada, T. Kobayashi, N. Sonoyama, A. Yamada, S. Kondo, M. Nagao, R. Kanno, All
solid-state sheet battery using lithium inorganic solid electrolyte, thio-LISICON, J. Power
Sources 194 (2009) 1085-1088.
[17] S. Ohta, S. Komagata, J. Seki, T. Saeki, S. Morishita, T. Asaoka, All-solid-state lithium ion
battery using garnet-type oxide and Li3BO3 solid electrolytes fabricated by screen-printing, J.
Power Sources 238 (2013) 53-56.
[18] S. Ito, S. Fujiki, T. Yamada, Y. Aihara, Y. Park, T.Y. Kim, S.-W. Baek, J.-M. Lee, S. Doo,
N. Machida, A rocking chair type all-solid-state lithium ion battery adopting Li2O–ZrO2 coated
LiNi0.8Co0.15Al0.05O2 and a sulfide based electrolyte, J. Power Sources 248 (2014) 943-950.
[19] S. Troy, A. Schreiber, T. Reppert, H.-G. Gehrke, M. Finsterbusch, S. Uhlenbruck, P.
Stenzel, Life Cycle Assessment and resource analysis of all-solid-state batteries, Applied
Energy 169 (2016) 757-767.
[20] M. Westermeier, G. Reinhart, T. Zeilinger, Method for quality parameter identification and
classification in battery cell production quality planning of complex production chains for
battery cells, in: 2013 3rd Int. Electric Drives Prod. Conference (EDPC), IEEE, 2013, pp. 1-10.
[21] G. Reinhart, J. Kurfer, M. Westermeier, T. Zeilinger, Integrated Product and Process Model
for Production System Design and Quality Assurance for EV Battery Cells, AMR 907 (2014)
365-378.
[22] T. Günther, N. Billot, J. Schuster, J. Schnell, F.B. Spingler, H.A. Gasteiger, The
Manufacturing of Electrodes: Key Process for the Future Success of Lithium-Ion Batteries,
AMR 1140 (2016) 304-311.
[23] Deutsches Institut für Normung e.V, Value Management - Vocabulary - Terms and
definitions, DIN EN 1325:2014-07, Beuth Verlag, 2014 (2014-07).
[24] U. Flick (Ed.), Designing Qualitative Research, SAGE Publications, 2007.
[25] J. Greitemann, M. Hehl, D. Wagner, G. Reinhart, Scenario and roadmap-based approach for
the analysis of prospective production technology needs, Prod. Eng. Res. Devel. 10 (2016) 337-343.
[26] A. Kampker, P. Burggraf, C. Deutskens, H. Heimes, M. Schmidt, Process alternatives in the
battery production, in: Electrical systems for aircraft, railway and ship propulsion (ESARS),
IEEE, Piscataway, NJ, 2012, pp. 1-6.
[27] N.H. Menzler, F. Tietz, S. Uhlenbruck, H.P. Buchkremer, D. Stöver, Materials and
manufacturing technologies for solid oxide fuel cells, J Mater Sci 45 (2010) 3109-3135.
[28] M.-J. Pan, C.A. Randall, A brief introduction to ceramic capacitors, IEEE Electr. Insul.
Mag. 26 (2010) 44-50.
[29] G. Reinhart, S. Schindler, Strategic Evaluation of Technology Chains for Producing
Companies, in: H.A. ElMaraghy (Ed.), Enabling Manufacturing Competitiveness and Economic
Sustainability: Proceedings of the 4th International Conference on Changeable, Agile,
Reconfigurable and Virtual production (CARV2011), Springer Berlin Heidelberg, Berlin,
Heidelberg, 2012, pp. 391-396.
[30] G. Reinhart, S. Schindler, P. Krebs, Strategic Evaluation of Manufacturing Technologies, in:
J. Hesselbach, C. Herrmann (Eds.), Glocalized Solutions for Sustainability in Manufacturing:
Proceedings of the 18th CIRP International Conference on Life Cycle Engineering, Springer
Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 179-184.
[31] G. Reinhart, S. Schindler, A strategic evaluation approach for defining the maturity of
manufacturing technologies, World Academy of Science, Engineering and Technology 47
(2010) 730-735.
[32] T. Kato, R. Yoshida, K. Yamamoto, T. Hirayama, M. Motoyama, W.C. West, Y. Iriyama,
Effects of sintering temperature on interfacial structure and interfacial resistance for all-solid-state rechargeable lithium batteries, J. Power Sources 325 (2016) 584-590.
Scalable assembly for fuel cell production
Tom Stähr1,a, Florian Ungermann1,b and Gisela Lanza1,c
1 Karlsruhe Institute of Technology, wbk Institute of Production Science, Kaiserstraße 12, 76131 Karlsruhe, Germany
a tom.staehr@kit.edu, b florian.ungermann@kit.edu, c gisela.lanza@kit.edu
Keywords: Production planning, fuel cell, scalability
Abstract. The reduced time-to-market and multiple innovations lead to a rising number of emerging
technologies and new products. Production systems for emerging technologies are subject to high
stress from highly volatile influencing factors such as volume and variants. In order to react to these
factors and to achieve cost-efficient production, companies need to establish scalable production
systems. This paper introduces a methodology which supports the production planner with an iterative
planning method for a scalable production system focussing on the scalability of the level of
automation. The methodology consists of four steps. Its basis is a scenario analysis of the influencing factors for the production system. In the next step, alternative configurations of the
production system are generated. From the different configurations, possible scaling paths are derived
in accordance with the scenarios. The final step focusses on identifying the optimal scaling paths
according to production cost and risk. The methodology will be demonstrated with the use case of
fuel cell production within the European research project INLINE.
Introduction
Assembly, as the final step in many production processes and the tool for realising customer
specific diversification, is highly affected by globalisation and technological progress. Consequently,
assembly is confronted with growing variant diversity and shortened product life cycles [1]. In fact,
many different stresses are having an impact on assembly systems. They overlap and mutually
influence one another, thus creating a turbulent production environment [2]. Especially in the
production of emerging technologies, the receptors affecting the production system are extremely
volatile and difficult to predict. In order to react to these volatile receptors, companies need to
establish scalable production systems. During the planning phase, a scalable system leads to an
increase of cost because planning becomes more complex. However, if the life cycle of the system is
taken into account, a scalable system will not only reduce total production cost. It will also reduce
the investment risk due to the gradual implementation of the investment. The aim of this paper is to
propose a methodology that enables production planners to design a scalable assembly system and to
choose the right system configuration at a given moment in time.
The production of proton exchange membrane fuel cells (PEMFC) for mobile applications is an
example of an emerging technology with a high uncertainty regarding future demands and variants.
Due to the high degree of uncertainty, investing in a highly automated, high volume production line
bears a high risk. Consequently, suppliers of PEMFC need a scalable production line that allows them to adapt quickly to changes in volume and variants in order to plan a cost-efficient production system
under high uncertainty. Within the European research project INLINE [3], dedicated to the production
of PEMFC, the methodology will be demonstrated and further adapted to the needs of industry.
The paper is structured into five sections. The introduction is followed by an overview of current
works in the field of production planning for changeable production systems. Section 3 introduces
the use case of PEMFC production within the INLINE project and provides an overview of the value
stream. The main section is dedicated to the introduction of the methodology for the planning of a
scalable assembly system and the application in the INLINE project. Section 5 concludes the paper
with a short summary.
State of the art in scalable production systems
Volatile influencing factors are the key driver for scalable production systems. In literature,
different names can be found for production systems that are able to react to these influencing factors.
Scalable, changeable, reconfigurable or agile systems have been proposed. First contributions focused
on the description and categorisation of influencing factors. A common understanding is to view the
changes in time, quality, cost, variants and volume as receptors which impact one another as well as
the production system [2].
Further research has dealt with the clear distinction and definition of the concepts of agile,
changeable or scalable systems. The authors in [4] define adaptability as the sum of flexibility and
changeability. Flexibility is the ability of an assembly system to adjust quickly and without additional
investments to changes in the five receptors. The changes are therefore limited by a predefined
flexibility corridor. Changeability allows, on the other hand, an organisational and technical
adjustment of the assembly system beyond this flexibility corridor. Scalability can be part of
flexibility and changeability. Based on this definition, [5] developed a framework of measuring the
degree of changeability in a production system. The profitability and cost assessment of adaptable
production systems is evaluated in [6].
More recent approaches suggest a broad variety of concepts to actually plan and optimise adaptable
production systems. An early approach to this topic considers the planning of reconfigurable
manufacturing systems with a focus on intralogistics in a system of machine tools [7]. A planning
approach based on reconfiguration - not through exchanging stations but through modifying stations
within a production system - is introduced in [8]. Within the field of reconfigurable manufacturing
system, approaches have been published to design these systems [9], optimise existing systems [10]
and to develop new layout principles [11]. Some approaches also focus on the adjustment of the
production planning process in order to develop modular systems [12] and adaptable systems [13].
With the evolvement of new production equipment, such as light weight robots for human robot
interaction, novel options for adaptable production systems have been developed. An example of a
dual-armed robot for real human-robot interaction has been developed by [14]. An important topic in this field is the safety of participating workers, which is ensured by soft links and a limited
force of the robot in [14]. Different scaling mechanisms used to scale adaptable production systems
are characterised by [15]. A first approach to the realisation of a planning method for scalable
automation was published by [16]. The adaptation of existing approaches to the practical case of a
learning factory was performed by [17].
Numerous authors dedicate themselves to changeable assembly planning. A clear focus on scaling
the level of automation and supporting the planner with a methodology to identify an optimal scaling
path has not yet been thoroughly addressed. The methodology proposed in this paper aims at closing
this research gap. The approach pursued by this paper builds on existing approaches in scenario
analyses for the detection of the needed level of scalability and Markov decision problems for the
selection of optimal system configurations. A good overview of scenario analyses for production
planning can be found in [18]. An application of scenario analyses to change drivers and the use of
Markov decision problems in global production networks is conducted by [19]. An example of
Markov decision problems in capacity planning of production systems is provided by [20]. The author
uses backward induction to solve the problem of choosing the system configuration with the lowest
cost.
Production of PEMFC
In the context of the INLINE project, the entire production process of a PEMFC for the use in
mobile applications is considered. According to [21], the industrialisation of the PEMFC production
process for high volumes bears potential for reducing the cost of fuel cells to the respective level of
combustion engines. Consequently, the objective is the establishment of a globally
competitive PEMFC produced in Europe. As part of the INLINE project, a fuel cell is designed for
replacing lead-batteries in material handling equipment. The project partner provides a complete
solution including AC/DC inverters for the use of solar power, an electrolysis station for the
generation of hydrogen, the filling station and the actual fuel cell. This system offers lower life cycle
costs compared to battery-based systems, at zero emissions. The lower cost is mainly based on the long lifetime of the fuel cell and a significantly shorter refilling process for the hydrogen tank, compared to the exchange of an empty battery [22].
The production process of the fuel cell consists mainly of eight steps: (step 1) production of the
fuel cell stack, (step 2) production of the tank valve, (step 3) production of the control unit, (step 4)
battery assembly, (step 5) production of the filling interface, (step 6) production of the power box,
(step 7) production of the fuel cell housing and (step 8) final assembly. All of these process steps
need to be designed with respect to scalability of volume and variants. Production of the fuel cell
stack (step 1) and tank valve (step 2) are performed by suppliers who are part of the project. The
remaining process steps are executed at the fuel cell OEM. The project places specific focus on the
final assembly. The prior process steps are already quite mature and the production equipment can be
used for various other products, smoothing the lines in terms of volume volatility. In the final
assembly of the PEMFC, the volatility of variants and volume have the highest impact.
Presently, the final assembly of the fuel cell is a purely manual task. A key challenge is the secure
sealing of all parts containing hydrogen. Accordingly, the technologies applied in the process are
mainly screw fitting and installing tubes and cables. The automation of handling operations with limp
parts is extremely challenging. For that reason, the installation of tubes and cables will most likely
not be automated in the near future. Since the space within the fuel cell housing is very limited, the
screw fitting operations also constitute a great challenge for the final assembly.
Methodology for the planning of a scalable assembly line
The methodology is subdivided into four steps that are carried out during the rough-planning phase
and repeated iteratively before each decision to scale the assembly system (Figure 1). The first step
is the identification of possible scenarios for the volatile receptors. Step 2 comprises the generation of line configurations fitting the different ranges of expected values for the receptors, based on the
scenario analysis. The scaling paths are developed by a timewise connection of different line
configurations in step 3. In the fourth step, the uncertainty described in the scenario analysis is used
as the basis for a risk analysis of the scaling paths. As a result, one initial configuration is
recommended as well as an ideal strategy for future scaling steps, considering different possible
developments of the receptors. This recommendation will be revised periodically due to the repetition
of the methodology based on new information. The following paragraphs describe the methodology
and the application to the use case in greater detail. The application of the methodology for PEMFC
production in the INLINE project has only been completed for the scenario analysis and started for
the generation of configurations. Consequently, the application of the methodology to the use case
cannot be covered in this article for steps 3 and 4.
[Figure 1 shows the four steps: 1. scenario analysis, 2. generation of configurations, 3. consideration of scaling cost, 4. selection of the optimal configuration.]
Figure 1: Four steps of the proposed methodology
Scenario Analysis. The first step is the analysis of volatility within the receptors of the
assembly system (compare [2]). An interdisciplinary scenario team, consisting of the production
planner, product manager, sales representative and additional members if required, carries out this
analysis. The first step of the scenario analysis deals with the identification of volatile receptors. In
the case of PEMFC final assembly in Western Europe, the scenario team expects the receptors time,
quality and cost to remain stable over the considered time span. Accordingly, the team needs to predict
only the expected scenario for the development of the production volume and variants. Basically, two
main variants determine the production. These are the small 24 volt and the considerably bigger 80
volt fuel cells. Since these variants are at different development stages and address two different
markets, the volume scenario has been carried out for both variants. Figure 2 shows a simplified
model of the scenario of production volumes of the 24 volt fuel cell in the INLINE project.
[Figure 2 shows a project phase from the start, a transition to a growing-market phase if H2 technology breaks through, and an end of production if Li-ion technology succeeds over H2.]
Figure 2: Simplified scenario analysis for the PEMFC market
The scenarios are modelled using an adaptation of the Business Process Model and Notation
(BPMN). During interviews with the scenario team, characteristic phases of the future development
of a receptor are developed. Each phase is described by a trend and a start value. The different phases
are connected by stochastic events that represent disruptive events within the planning horizon. Each
event is described by an occurrence probability and a stochastically distributed time of occurrence.
In the example of PEMFC production, the scenario starts out in a project phase with low volumes. In
the future, hydrogen technology might have its breakthrough leading to a transition towards the
growing market phase with an elevated starting value and a positive trend. On the other hand,
hydrogen technology might be replaced by Li-ion batteries leading to the end of the production phase
with a production volume of zero.
After a complete modelling of phases, trends and occurrence probabilities, the scenario model is
transferred into a simulation tool. Within the tool, a high number of possible realisations for the actual
scenario is computed as part of a Monte Carlo simulation. The result of the simulation is a collection
of possible future developments of a receptor over a defined time span. In the case of PEMFC
production, a collection of production volume over time trends for the next ten years has been
computed, both for the 24 volt fuel cell and the 80 volt fuel cell.
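A minimal sketch of such a scenario simulation is given below. The phase start values, trends, event probabilities and event times are invented for illustration and do not reproduce the project's actual scenario model.

```python
# A minimal sketch of the scenario Monte Carlo simulation, assuming
# hypothetical phases, trends and stochastic events for the 24 V fuel cell.
import random

HORIZON = 10  # planning horizon in years

def simulate_volume_path(rng):
    """One realisation of the production volume over the planning horizon."""
    breakthrough = rng.random() < 0.5           # H2 technology breaks through
    replaced     = rng.random() < 0.2           # Li-ion succeeds over H2
    t_break   = max(1, round(rng.gauss(4, 1)))  # stochastic time of occurrence
    t_replace = max(1, round(rng.gauss(7, 1)))

    volume, path = 200, []                      # project phase: low start value
    trend = 0.05                                # phase trend per year
    for t in range(HORIZON):
        if breakthrough and t == t_break:
            volume, trend = 2000, 0.20          # growing market: new start value
        if replaced and t == t_replace:
            volume, trend = 0, 0.0              # end of production
        path.append(volume)
        volume = round(volume * (1 + trend))
    return path

rng = random.Random(0)
scenarios = [simulate_volume_path(rng) for _ in range(1000)]  # Monte Carlo
print("mean volume per year:",
      [round(sum(s[t] for s in scenarios) / len(scenarios))
       for t in range(HORIZON)])
```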
Generation of Configurations. Once the scenarios of the future developments of the relevant
receptors have been established, the planner knows for what degree of uncertainty the production
system must be planned. An important indicator is the frequency of expected changes in
the receptors. If major changes can be expected in a short time span – on a weekly basis for example
– a high level of flexibility is required. For changes with lower frequencies, scaling in terms of
changeability is needed. The reaction to these changes is achieved by applying the scaling
mechanisms as described in [15] to the assembly system. In order to broaden the toolbox for
scalability in this approach, the scaling mechanisms of [15] are extended by the concept of
reallocation of assembly tasks. This mechanism reallocates assembly tasks to a higher or lower
number of assembly stations to scale the tact time of the production system. The considered scaling
mechanisms can be subdivided into three categories (compare Table 1).
Table 1: Scaling mechanisms by impact (compare [15])

| Changeability: System     | Changeability: Station         | Flexibility: Organisational |
|---------------------------|--------------------------------|-----------------------------|
| Duplication of bottleneck | Adjustment of automation level | Adjusting # of workers      |
| Duplication of system     | Reallocation of assembly tasks | Adjusting shift model       |
The scaling mechanisms either have an impact on the station configuration of production
equipment (category 1), the system configuration (category 2) or they are purely organisational
mechanisms (category 3). In the case of PEMFC, the volatility is expected to stem mostly from mid-term to long-term effects such as the product life cycle, strategic decisions of large customers and subsidies.
Accordingly, the focus is set on the mechanisms in category 1 and category 2 having an impact on
the changeability of the system or single stations. All these mechanisms are considered while planning
the PEMFC line. In the following, the focus shall rest on scalable automation to provide a detailed
insight into one of the mechanisms.
Basically, any task can be automated with the use of modern automation technology. The effort, and hence the cost, of automation, however, differs greatly depending on the specific task to automate.
Accordingly, it is necessary to quickly identify the right tasks to automate. Planning technical
solutions of automated configurations of all modules on CAD level obviously requires too much
effort in the planning phase. Thus, a framework for the selection of the correct tasks to automate has
been developed. The framework consists of two dimensions: automation potential and automation
barrier. In an interview with process experts, the planner determines quantitative values for each
possible task to automate. The automation potential is determined by the expected reduction of tact
time, reduction of necessary workers and the improvement of the quality rate. On the other hand, the
automation barrier is determined by the variance, the handleability of parts and the reachability of the
station. The consolidated values for automation potential and barrier of all considered tasks are
compared in an automation graph. The planner can define the relation of potential over barrier that a
task needs to fulfil in order to classify as a task to be automated. Figure 3 shows an example of the
automation matrix. Only the tasks above the potential-barrier line will be planned in a level of detail that gives the planner values for investment, process time and number of workers of the station.
[Figure 3 shows the automation potential plotted over the automation barrier; tasks above the potential-barrier line are to be automated, the remaining tasks are neglected.]
Figure 3: Automation matrix
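A possible reading of this framework in code follows; the task scores and the equal weighting of the potential and barrier criteria are invented assumptions for illustration.

```python
# A minimal sketch of the automation matrix: each task gets an automation
# potential and an automation barrier aggregated from expert scores, and is
# classified by a planner-defined potential-over-barrier threshold.
def automation_potential(tact_reduction, worker_reduction, quality_gain):
    # Equal weighting is an assumption; the planner may weight differently.
    return (tact_reduction + worker_reduction + quality_gain) / 3

def automation_barrier(variance, handleability, reachability):
    return (variance + handleability + reachability) / 3

THRESHOLD = 1.0  # slope of the potential-barrier line in the matrix

# Hypothetical tasks with expert scores in [0, 1].
tasks = {
    "screw fitting":     {"potential": automation_potential(0.6, 0.5, 0.4),
                          "barrier":   automation_barrier(0.3, 0.3, 0.5)},
    "tube installation": {"potential": automation_potential(0.4, 0.3, 0.2),
                          "barrier":   automation_barrier(0.8, 0.9, 0.7)},
}

for name, score in tasks.items():
    automate = score["potential"] / score["barrier"] > THRESHOLD
    print(f"{name}: {'to be automated' if automate else 'neglected'}")
```

With these invented scores, the screwing task clears the line while tube installation does not, which matches the qualitative assessment of the final assembly given above.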
Within the INLINE project, the current state of production in the demonstrator is in a phase
between prototype and small series production. The bill of materials (BOM) and assembly
instructions are finalised. The final assembly is performed completely manually at five separate
assembly stations. This line setup is the starting point for the development of different system
configurations. The first step in creating new system configurations is to divide the existing assembly
stations into four different modules similar to those described in [12]. In this approach, stations are
divided into base, process, transport and feeding module (compare Figure 4).
Feeding
Module
Transport
Module
Process
Module
Transport
Module
Base Module
Figure 4: Station modules
Each of these modules can be realised at an individual level of automation. In the case of the final
assembly in the demonstrator, all four modules are realised in the manual option. Because of the
automation analysis, only the automation of the screwing processes will be considered. In the
following steps, the example of PEMFC production can no longer be used since the last two steps of
the methodology have not yet been applied to the production line.
With the application of the remaining scaling mechanisms to the initial configuration, the planner
ends up with many different line configurations, each defined by tact time, number of workers needed,
investment cost and personnel cost. Based on this information, the relation of production volume to
unit cost is calculated as the result of the second step in the methodology “Generation of
Configurations”.
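The paper does not spell out the cost model behind this calculation; as one plausible sketch, the unit cost of a configuration could be computed from linear depreciation of the investment plus personnel cost, limited by the tact-time capacity. All parameter values below are hypothetical.

```python
# A minimal sketch of a unit cost over production volume calculation for a
# line configuration; the cost model itself is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class Configuration:
    name: str
    tact_time_s: float      # seconds per unit
    workers: int
    investment: float       # EUR, depreciated linearly
    depreciation_years: int
    annual_wage: float      # EUR per worker and year

    def capacity(self, seconds_per_year=3600 * 8 * 220):
        """Maximum annual volume given the tact time (one-shift assumption)."""
        return seconds_per_year / self.tact_time_s

    def unit_cost(self, volume):
        """EUR per unit at a given annual volume, or None if not feasible."""
        if volume > self.capacity():
            return None
        fixed = (self.investment / self.depreciation_years
                 + self.workers * self.annual_wage)
        return fixed / volume

manual    = Configuration("manual", 1800, 5, 50_000, 10, 60_000)
automated = Configuration("screwing automated", 1200, 3, 450_000, 10, 60_000)

for volume in (500, 2000, 10_000):
    row = {c.name: c.unit_cost(volume) for c in (manual, automated)}
    print(volume, {k: f"{v:.0f} EUR" if v is not None else "infeasible"
                   for k, v in row.items()})
```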
Consideration of Scaling Cost. After the calculation of unit cost over production volume for each
line configuration it is possible to determine the configuration with the lowest unit cost for each value
range within the receptors. Of course, this does not include the cost of scaling to a new configuration.
When considering the scaling cost, it might be better to skip a configuration and scale directly to the
next. Also, it might be ideal to install a configuration that never has the lowest unit cost but offers
fundamental savings in the scaling cost. Accordingly, it is necessary to identify these additional
configurations that might be part of an ideal scaling path. Since the requirement for becoming one of
these configurations is a reduction of scaling cost, only a configuration that has lower scaling cost
than the successor and predecessor of a configuration constitutes an option. The scaling cost $SC_{a,b}$ of scaling from configuration $a$ to configuration $b$ is defined as the sum of start-up cost $SUC_{a,b}$, downtime cost $DC_{a,b}$ and the extraordinary write-off $r_i$ of all production equipment $i$ that is removed from configuration $a$ (Eq. 1).

$SC_{a,b} = SUC_{a,b} + DC_{a,b} + \sum_{i=1}^{n} r_i$ .    (1)
The start-up and downtime cost need to be estimated by an expert for each pair of configurations.
For the determination of extraordinary write-off of production equipment, the degree of changeability
of each component is examined. This is done by the integrative evaluation of changeability of [5],
which is similar to a utility analysis. The assessment is based on a catalogue of different identically
weighted changeability characteristics which cover the areas universality, neutrality, mobility,
modularity, compatibility and object-specific potential for change. For each production equipment $i$,
the degree of fulfilment of each changeability characteristic is determined in form of a percentage
value. The weighted sum of the degrees of fulfilment is the component’s degree of changeability. If
this value is below or equal to 25 %, it is assumed that the component must be scrapped because it is
too product-specific to be used in other assembly systems. Therefore, the extraordinary write-off is
estimated as the remaining book value of the production equipment. If the degree of changeability is above 25 %, the extraordinary write-off is the component's book value weighted with (1 − degree of changeability). Since the calculation of scaling cost requires some effort, it is not economically
feasible to determine it for each possible combination of configurations. Therefore, a similarity index
(Eq. 2) for configurations has been developed to facilitate an automated estimation of scaling cost
that allows a relative comparison between two configurations.
$\text{similarity index} = 1 / (\#\text{removed stations} + \#\text{added stations} + \#\text{relocated stations})$ .    (2)
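A direct transcription of Eq. 1 and Eq. 2, together with the 25 % changeability rule for the extraordinary write-off described above; the numeric example values are hypothetical.

```python
# A minimal sketch of Eq. 1 and Eq. 2 with the 25 % changeability rule.
def extraordinary_write_off(book_value, degree_of_changeability):
    """Write-off r_i for one piece of removed production equipment."""
    if degree_of_changeability <= 0.25:
        return book_value                 # too product-specific: scrapped
    return book_value * (1 - degree_of_changeability)

def scaling_cost(start_up_cost, downtime_cost, removed_equipment):
    """Eq. 1: SC_ab = SUC_ab + DC_ab + sum of write-offs r_i."""
    return start_up_cost + downtime_cost + sum(
        extraordinary_write_off(bv, doc) for bv, doc in removed_equipment)

def similarity_index(removed_stations, added_stations, relocated_stations):
    """Eq. 2: reciprocal of the number of changed stations."""
    return 1 / (removed_stations + added_stations + relocated_stations)

# Hypothetical example: two removed components (book value, changeability).
print(scaling_cost(20_000, 5_000, [(30_000, 0.2), (15_000, 0.6)]))  # 61000.0
print(similarity_index(1, 2, 1))                                    # 0.25
```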
Based on the similarity index, only configurations whose sum of similarity index from successor
and predecessor is higher than that of a configuration in the original solution set are added to the
selection. For the purpose of simplification, only the production volume will be considered as a
volatile receptor. The final collection of configurations is plotted on a common production volume-unit cost graph. All intersections of unit cost curves represent possible scaling points. Based on this
information, the value range of a receptor can be divided into characteristic subranges that will remain
without scaling. These subranges are visualised by the dotted line connecting the two graphs in Figure
5. Based on the results of the Monte Carlo simulation carried out in Step 1 “Scenario Analysis”, the
transition probabilities p(t_t+1) from each subrange into the other subranges from time increment t to time increment t+1 can be calculated for each time increment in the planning horizon (compare
Figure 5).
[Figure 5 shows the unit cost over the production volume for configurations 1 to 3, the resulting subranges of the production volume, and the transition probabilities p(t_t+1) between the subranges over time.]
Figure 5: Scaling points and transition probabilities
As a result of step 3 “Consideration of Scaling Cost”, the production planner receives a complete
transition matrix of possible scaling points including transition probabilities and scaling cost for all
scaling possibilities between the considered configurations.
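As a sketch of how these transition probabilities could be estimated from the Monte-Carlo trajectories of step 1; the subrange boundaries and the trajectories below are illustrative assumptions.

```python
# A minimal sketch of estimating p(t_t+1) between subranges from simulated
# production volume trajectories; boundaries come from the unit-cost graph.
from collections import Counter
import bisect

boundaries = [1000, 4000]  # hypothetical volume boundaries between subranges

def subrange(volume):
    return bisect.bisect_right(boundaries, volume)  # subrange index 0, 1 or 2

def transition_probabilities(trajectories, t):
    """Probability of moving between subranges from increment t to t+1."""
    counts = Counter((subrange(tr[t]), subrange(tr[t + 1]))
                     for tr in trajectories)
    # elements() repeats each (from, to) pair by its count, so the second
    # Counter holds the total number of transitions leaving each subrange.
    totals = Counter(s for s, _ in counts.elements())
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}

# Illustrative stand-ins for the Monte-Carlo trajectories of step 1:
trajectories = [[800, 1200, 3500, 5000], [900, 950, 1100, 1200]]
print(transition_probabilities(trajectories, t=1))
```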
Selection of optimal configuration. Considering the complete planning horizon, different
configurations of the production line will be ideal at different times. The combination of
configurations installed over the course of the planning horizon is called a scaling path. With all
necessary information gathered, it is now possible to identify optimal scaling paths. The competing
scaling paths are compared in terms of their total cost over the life cycle considering the sum of
operating cost and scaling cost. When not considering any uncertainties, it is possible to directly
calculate the most economical scaling path of configurations over the planning horizon for a defined
scenario of receptors. However, when considering the uncertain development of the receptors,
different scaling paths could be optimal. Since the scenarios of receptors and cost of configurations
in dependence of the values within the receptors are well described, this problem can be modelled as
a Markov decision problem (MDP).
Depending on the frequency of changes in the receptors predicted during the scenario analysis, the
time is modelled as discrete moments in time, for example months or years. For each time increment,
the possible states of the system are defined by the product of considered configurations and
subranges of the receptors. The decision set in each time increment consists of the possibilities to
scale to any of the configurations as well as the decision to stick to the current configuration. The cost
function is defined by the operating cost plus the scaling cost between the configurations. By applying
a backward induction algorithm to the problem, the economically ideal decision is calculated starting
at the last time increment and going backwards until the ideal initial configuration has been detected.
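A compact sketch of this backward induction follows, with hypothetical operating costs, scaling costs and subrange dynamics; states are (configuration, subrange) pairs and the decision in each increment is which configuration to scale to.

```python
# A minimal sketch of backward induction for the scaling-path MDP.
T = 5                   # number of time increments in the planning horizon
configs = [0, 1]        # line configurations
subranges = [0, 1]      # demand subranges of the volatile receptor

op_cost = {(0, 0): 10, (0, 1): 25, (1, 0): 18, (1, 1): 12}   # per increment
scale_cost = {(0, 0): 0, (0, 1): 30, (1, 0): 15, (1, 1): 0}  # SC_ab
p = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}               # subrange dynamics

# value[t][(config, subrange)] = expected cost-to-go under optimal decisions
value = [dict() for _ in range(T + 1)]
policy = [dict() for _ in range(T)]
for s in configs:
    for r in subranges:
        value[T][(s, r)] = 0.0                               # terminal value

for t in reversed(range(T)):                                 # backward in time
    for s in configs:
        for r in subranges:
            best = None
            for a in configs:  # decision: scale to a (a == s means "stick")
                cost = scale_cost[(s, a)] + op_cost[(a, r)] + sum(
                    p[r][r2] * value[t + 1][(a, r2)] for r2 in subranges)
                if best is None or cost < best[0]:
                    best = (cost, a)
            value[t][(s, r)], policy[t][(s, r)] = best

# Ideal initial configuration for a start in subrange 0:
print(min(configs, key=lambda s: value[0][(s, 0)]))
```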
Because of the complete calculation of ideal decisions for each state throughout the planning
horizon, the optimal configuration can be identified at any time independent of the actual realisation
of the receptor scenario. Still, an iterative approach is needed over the product life cycle. All changes
in receptor scenarios, cost structure, configuration set, etc., generate new information that must be
considered in a new cycle of the four steps, possibly leading to new ideal scaling strategies.
Summary
High stresses affecting production systems of emerging technologies lead to a need for scalability.
In this paper, a methodology has been introduced that enables production planners to consider future
volatility in the production receptors during the planning phase. With the use of the proposed
methodology, it is possible to anticipate various stresses and predevelop the solution on a
technological and conceptual level. If applied correctly, risks of investment can be reduced and the
production in a volatile environment can be executed at stable and competitive production cost. The
method is demonstrated within a publicly funded research project on the use case of PEMFC
production and a focus on the final assembly. In the course of the project, the method shall be further
developed and adjusted to the needs of industry.
Acknowledgements
The project leading to this article has received funding from the Fuel Cells and Hydrogen 2 Joint
Undertaking under the grant agreement No. 735367. This Joint Undertaking receives support from
the European Union’s Horizon 2020 research and innovation programme and Hydrogen Europe and
N.ERGHY.
References
[1] B. Lotter, H.-P. Wiendahl, Montage in der industriellen Produktion. Ein Handbuch für die Praxis,
Springer-Verlag, Berlin, 2012.
[2] R. Cisek, C. Habicht, P. Neise, Gestaltung wandlungsfähiger Produktionssysteme, Zeitschrift für
wirtschaftlichen Fabrikbetrieb, Bd. 97, Nr. 9 (2002) 441–445.
[3] Information on http://www.inline-project.eu
[4] P. Nyhuis, G. Reinhart, E. Abele, Wandlungsfähige Produktionssysteme. Heute die Industrie von
morgen gestalten, PZH Produktionstechnisches Zentrum, Hannover, 2008.
[5] C.L. Heger, Bewertung der Wandlungsfähigkeit von Fabrikobjekten, PZH, Produktionstechn.
Zentrum, 2006.
[6] N. Moeller, Bestimmung der Wirtschaftlichkeit wandlungsfähiger Produktionssysteme, Utz,
München, 2008.
[7] Y. Koren, U. Heisel, F. Jovane, T. Moriwaki, G. Pritschow, G. Ulsoy, H. Van Brussel,
Reconfigurable Manufacturing Systems, CIRP Annals - Manufacturing Technology 48.2 (1999) 527-540.
[8] Y. Rao, P. Li, X. Shao, K. Shi, Agile manufacturing system control based on cell re-configuration,
International Journal of Production Research 44 (2006) 1881–1905
[9] Y. Koren, M. Shpitalni, Design of reconfigurable manufacturing systems, Journal of
Manufacturing Systems 29 (2010) 130–141
[10] W. Wang, Y. Koren, Scalability planning for reconfigurable manufacturing systems, Journal of
Manufacturing Systems 31 (2012) 83-91
[11] G. Rosati et al., Fully flexible assembly systems (F-FAS): a new concept in flexible automation,
Assembly Automation 33 (2013) 8-21
[12] S. Kluge, Methodik zur fähigkeitsbasierten Planung modularer Montagesysteme, IPA-IAO-Bericht, Jost-Jetter-Verlag, Heimsheim, 2011.
[13] J. Pachow-Frauenhofer, Planung veränderungsfähiger Montagesysteme, Tewiss, Hannover,
2012.
[14] S. Kock et al, Robot concept for scalable, flexible assembly automation: A technology study on
a harmless dual-armed robot, IEEE International Symposium on Assembly and Manufacturing
(ISAM), 2011.
[15] J. Eilers, Methodik zur Planung skalierbarer und rekonfigurierbarer Montagesysteme, Apprimus
Wissenschaftsverlag, Aachen, 2015.
[16] G. Lanza, T. Stähr, S. Sapin, Planung einer Montagelinie mit skalierbarem Automatisierungsgrad, Zeitschrift für wirtschaftlichen Fabrikbetrieb 10 (2016) 614-617
[17] J. Buergin et al, Demonstration of a Concept for Scalable Automation of Assembly Systems in
a Learning Factory, Procedia Manufacturing 9 (2017) 33-40
[18] J. Ude, Entscheidungsunterstützung für die Konfiguration globaler Wertschöpfungsnetzwerke Ein Bewertungsansatz unter Berücksichtigung multikriterieller Zielsysteme, Dynamik und
Unsicherheit, Forschungsberichte aus dem Wbk, Aachen, 2010.
[19] R. Moser, Strategische Planung globaler Produktionsnetzwerke - Bestimmung von Wandlungsbedarf und Wandlungszeitpunkt mittels multikriterieller Optimierung, Forschungsberichte aus dem Wbk, Aachen, 2014.
[20] S. Peters, Markoffsche Entscheidungsprozesse zur Kapazitäts- und Investitionsplanung von
Produktionssystemen, Forschungsberichte aus dem Wbk, Aachen, 2013.
[21] H. Tsuchiya, O. Kobayashi, Mass production cost of PEM fuel cell by learning curve,
International Journal of Hydrogen Energy 29 (2004) 985-990
[22] S. Barrett, Fronius, Linde MH show HyLOG-Fleet fuel cell tow tractor, Fuel Cells Bulletin 7
(2011) 3-4
Conception of Generative Assembly Planning in the Highly Iterative
Product Development
Marco Molitor1,a, Jan-Philipp Prote1,b, Stefan Dany1,c, Louis Huebser1,d and
Günther Schuh1,e
1 Laboratory for Machine Tools and Production Engineering (WZL), RWTH Aachen University, Steinbachstraße 19, 52074 Aachen, Germany
a M.Molitor@wzl.rwth-aachen.de, b J.Prote@wzl.rwth-aachen.de, c S.Dany@wzl.rwth-aachen.de, d L.Huebser@wzl.rwth-aachen.de, e G.Schuh@wzl.rwth-aachen.de
Keywords: Assembly Planning, Highly Iterative Product Development, Agile Product Development
Abstract. Highly iterative product development processes are designed in order to meet customer
needs at high speed. The iterative procedure allows creating a functional prototype within a short
period of time. This involves a situation-driven interplay between scheduled phases in which requirements are defined as well as agile phases in which the customer requirements are detailed.
The challenges described above also have an impact on the discipline of assembly planning since
an iterative development method meets a sequential assembly planning method. The aim of this
paper is the introduction of a concept which adapts assembly planning to the requirements of highly
iterative product development. This conception enables a step-by-step creation of the assembly plan
in the prototype production – the generative assembly planning.
Introduction
Nowadays, most of the products compete in saturated markets. Overcapacity, globalization, price
pressure, as well as a variety of products, are drivers for continuous changes in the market environment. This results in a shortening of the product life cycle as well as a further division of customers
into smaller market segments [1]. The trend towards individual and customized products leads to an
increasing number of product variants [2]. In order to be able to react to continuously rising volatilities in the market and the associated customer needs, internal processes have to become more agile
[3].
One way to meet these demands are highly iterative development processes. The approach of
highly iterative product development divides the entire process into several smaller iteration cycles,
in order to be able to respond more quickly to possible design changes or customer requirements
[4]. At the end of each iteration cycle, a functional prototype for test and validation purposes is
available [5]. The related change requests of the test phase are input for the next iteration phase [6].
The high number of iterative loops allows a continuous adaptation to the customer specification by
an adaptation of the construction [7].
This results in considerable challenges for assembly planning as a discipline of serial production.
Whereas a largely finished design is transferred to assembly planning after each stage gate in the
traditional process, iteration cycles permit a continuous adjustment of the construction in the highly
iterative product development. The continuous modification of the design results in a continuous
change of the engineering and manufacturing bill of material (E-BOM/M-BOM) as well as the assembly plan. As the existing approaches to assembly planning are not adequately suited for highly iterative product development, an increased workload is generated during assembly planning prior to serial production.
* Submitted by: M. Sc. Marco Molitor
Research aim and approach. The presented research approach deals with the question how the
discipline of assembly planning can be adapted to highly iterative product development. This leads
to the following research questions:
I.
II.
III.
What are the differentiating features of highly iterative product development?
What are the requirements for assembly planning resulting from the differentiating features?
What is the concept for generative assembly planning in the highly iterative product development?
Classification of highly iterative product development approaches
Various product development methods were investigated to derive classification characteristics
for sequential, simultaneous and agile development processes. Based on this research, a classification model with four main categories, deduced from apparent differences between the product development methods, is introduced. Each main category is segmented into several attributes with corresponding characteristics. In the following section, various product development methods are described briefly in the same succession as the main categories of the subsequent classification model. The outline roughly follows the historical course.
Phase-oriented methods. The waterfall model, first described by H. Benington [8] in 1956 and
formally published by W. Royce [9] in 1970, is closely related to software development but is also
widely adapted across various industries. This method is based on a strict sequential procedure with
defined phases and hand offs. It is quite inflexible due to its dependency on results of previous
phases.
In the 1960s, phased project planning, often referred to as the phased review process, was thoroughly practised by NASA [10]. This method consists of “Phases A-F”. The overall focus is on the corresponding review process, which only allows proceeding if previous results are approved.
In 1986, the VDI 2221 standard [11] was introduced as a systematic approach to the development and design of technical systems and products; it contains seven distinct phases with defined types of interim results and includes iterative returns.
The Stage-Gate-Model [12] was introduced by R.G. Cooper in 1986. Since then it became a
widely used standard for development processes. This method consists of distinctive phases and
gates which may vary from industry to industry. The main characteristic is the review and approval
of further progress within each gate, whereas each gate focuses on different topics ranging from
feasibility to product launch.
Simultaneous methods.
Simultaneous Engineering, often referred to synonymously as Concurrent Engineering [13], arose in the 1980s [14]. Eversheim states that the main target is the parallelisation of phases, team-oriented working approaches across departments and an intensive information flow in order to
adjust production and product requirements to market requirements.
Agile methods. In 1986, Nonaka and Takeuchi published their article The New Product Development Game where they introduced the idea that product development may occur iteratively and
dependent on a team’s learning behaviour in contrast to the conventional highly structured and organised approaches [15].
In 2001, 17 leading developers of lightweight-methodologies signed the Agile Manifesto as a
guideline for agile software development [16]. The Agile Manifesto arose in a counter reaction to
the non-flexible conventional development approaches which were unable to cope with the new and
volatile market and customer requirements.
In the same year, Schwaber and Beedle published their book Agile Software Development with
Scrum [17]. Scrum is based on the values of the Agile Manifesto and is considered as an empirical,
incremental and iterative development approach mainly focusing on aspects of project management.
314
In a desire to apply agile software methods to physical product development, hybrid approaches
were developed in recent times. In 2005, Karlström and Runeson published their article “Combining Agile Methods with Stage-Gate Project Management” [18] and in 2014 R.G. Cooper proposed the
so-called Agile-Stage-Gate methodology to incorporate Scrum methods within the very phases of
the Stage-Gate model. [19]
Furthermore, in 2015 Schuh et al. published the approach of highly iterative product development with the aim of fitting agile methodologies into engineering projects by combining deductive
methods from conventional development approaches and different types of sprint cycles as well as
their anchoring within the overall development process [20].
Based on the results, it is noticeable that the Agile Manifesto states the apparent differences between conventional and agile development methods. Hence, the four main categories of the classification model are derived from the four categories of the “Agile Manifesto”: “collaboration”, “development process”, “customer” and “change”. Each main category contains certain attributes and
three different degrees of their corresponding characterisation (refer to Table 1).
| Cat. | Attributes for Classification | Phase-oriented | Simultaneous | Agile |
|---|---|---|---|---|
| Collaboration | Project Management | a-priori planning: detailed work breakdown structure | a-priori planning: certain degree of freedom | short-cycled micro planning |
| Collaboration | Organisational approach | low diversified teams: cohesive and fixed structures | task-actuated teams: defined, task-oriented structures | highly diversified teams: very flat hierarchy |
| Collaboration | Concept of human social behaviour | hierarchical decision and work assignment structures | collaboration within settled structures | autonomous and participatory structures |
| Development process | Process Flow | sequential and straightforward flow with pre-defined phases | parallelization of flow sequences | iterative approach |
| Development process | Documentation | high degree of documentation | reasonable degree of documentation | demand-actuated degree of documentation |
| Development process | Synchronisation of development tasks and product requirements | not considered, as requirements are well pre-defined | synchronisation of development process | continuous synchronization |
| Customer | Definition of product requirements | pre-defined by functional specification documents | requirements of end customer are mainly defined | continuous specification by collaborating with customer |
| Customer | Customer Orientation | requirement specification and at the end for approval | periodically, especially by approval | active participation in the development process |
| Customer | Time-to-Market | long, strict sequential approach | medium, sequential phases in parallel | short, short-cycled iteration phases |
| Change | Intention of changes | mistakes and error corrections, shall occur seldom | part of the development process | chance to improve the product and customer satisfaction |
| Change | Responsiveness to changing customer requirements | fairly low: lack of customer feedback possibilities | medium scaled and slightly delayed | high 'refresh rate' of customer requirements |
| Change | Course of product maturity level | slow continuous increase | accelerated compared to a sequential increase | fast implementation of product features |
Table 1: Classification model for product development methods with exemplary profile line of
a hybrid product development
The attributes of each of these overriding categories can be determined through an observant review of the literature. Especially when authors of recent literature emphasise certain points, these may constitute appropriate characteristics, as differences between the conventional phase-oriented and more recent approaches evidently emerge from them and can be subsumed into attributes of the classification model. The derivation of these attributes within their categories is based on a literature review.
Collaboration. Nonaka and Takeuchi describe some important modifications in The New Product
Development Game [21] regarding the application of more flexible approaches. One of those is the
concept of “shared division of labour” wherein each team member feels responsible and can work in
any aspect of the project. This is in contrast to phase-oriented approaches where certain tasks of the
product development are only conducted by specialists e.g. software engineers for code related
tasks. As a result, the organisational approach can be identified as an attribute of this category. Furthermore, Nonaka and Takeuchi emphasize the importance of learning and cross-fertilization within
a team which shall be highly diversified. In addition, Scrum uses very participatory methods like
Scrum Poker. The shift in the concept of human social behaviour from a mechanistic and hierarchical view to an individual and participatory one is evidently taking place within agile product development. To successfully incorporate the aforementioned attributes, a team's autonomy must be increased and conventional management approaches therefore have to be changed. Accordingly, Nonaka and Takeuchi suggest a more subtle form of management control with less management intervention instead of a rigorous review process as in phase-oriented approaches.
This different kind of management is incorporated into the classification model as project management.
Development Process. The outline of the process flow is also described by Nonaka and Takeuchi as
they differentiate between a relay-like phased approach versus an overlapping and a rugby-like
scrum approach. Further differences in the process flow are given by Cooper by combining iterative
characteristics with a phased-oriented approach [19] and by Schwaber within the Scrum method
based on iterations [17]. This leads to the classifying attribute of a process flow within the herein
proposed classification model. Eckstein’s model of an integrated development process exemplifies
synchronisation by coupling tasks to the progress and information uncertainty of related tasks [22].
Cooper mentions the real-time and adaptive planning ability of agile approaches in software development, which is a remarkable difference to conventional phased and a-priori planning approaches
[19]. The combination of these two aspects leads to the classifying attribute synchronisation of
development tasks and product requirements. The last classifying attribute documentation in this
category can be directly derived from the Agile Manifesto as it states the importance of a working
product over comprehensive documentation which can be interpreted as a demand-actuated
documentation style [16].
Customer. The classifying attribute customer orientation can also be directly derived from the Agile Manifesto, as it literally emphasizes customer collaboration [16]. Furthermore, the project team shall respond to changes instead of rigidly following a plan. Combining this with the Scrum method [17], which, as mentioned above, includes reviews of every sprint cycle with a variety of stakeholders and the planning of the next sprint cycle, leads to the classifying attribute definition of product requirements, as product requirements may change or may be further specified from each sprint to the next. With many examples, Nonaka and Takeuchi point to shorter time-to-market as a result of more flexible development approaches [21]. Because time-to-market is a crucial competitive advantage for firms, it is also included as a classifying attribute.
Change. The Scrum method illustrates that changes are conducted in coordination with the feedback of the customer, which increases customer orientation and finally results in higher customer satisfaction. In contrast, the VDI 2221 approach does allow returns within the process sequence, but only for the purpose of specifying or correcting previously given information, thereby overall sticking to the static process flow of this approach [11]. This shows that the intention of changes – i.e. the reason why changes are conducted – may vary, so that this finding can be utilized as a classifying attribute. Building on the intention of changes, Cooper states the benefit of implementing agile methods as they increase responsiveness to changing customer needs [19], which eventually causes changing product requirements. This underlines the contrast to phase-oriented approaches like the waterfall model, which is nearly incapable of incorporating changes during the process sequence. As a result of this consideration, responsiveness to changing customer requirements is integrated into the classification model as an attribute. The last classifying attribute – course of product
maturity – can be derived from the differences between phase-oriented and agile methods regarding how results are accomplished during product development. As an example, the waterfall model is built on the definition of requirements in the first phase of its model and generates results in comparatively late phases [9], whereas agile methods produce a potentially releasable version of the product after each sprint [19]. The point in the development process at which, and the manner in which, results are generated determines the course of product maturity.
With these attributes defined, development processes can be mapped into the classification model. Hybrid models like the highly iterative product development from Schuh et al. appear to incorporate characteristics of all three types of classes, so that their profile line shows a “zig-zag” path.
Derivation of Requirements for Assembly Planning
Based on the classification model, certain requirements for assembly planning methods within a highly iterative product development environment can be derived. This is accomplished by examining the attribute manifestations of highly iterative product development processes from the classification model.
In a second step, a pair-by-pair comparison of those requirements is conducted in order to prioritise the requirements of assembly planning within the highly iterative product development environment. The evaluation is based on an expert interview conducted in the prototype shop of the demonstration factory in Aachen. The demonstration factory is currently producing electric vehicles, which have been developed highly iteratively. The sum of all weights is scaled to a total of 100 (refer to Figure 1).
[Figure 1 residue – the comparison matrix itself is not recoverable from the extraction. Recoverable content: twelve classifying attributes, grouped into the categories Collaboration, Process, Planning, Customer, Change and Flexibility, are mapped to requirements for assembly planning. Pairs that appear adjacent in the residue: project management (weight 5.00 %) – short-cycled micro planning: team-related autonomous project steering; organisational approach (5.42 %) – highly diversified teams: very flat hierarchy; concept of human social behaviour (2.50 %) – autonomous and participatory structures; process flow – iterative approach without highly structured and guided flow; documentation – demand-actuated degree of documentation; synchronisation of assembly planning to product maturity level and SOP – continuous synchronization and ability to cope with undefined requirements; definition of product requirements – continuous specification by collaborating with development and series production; intention of changes – change requests must be considered as a major input and output element; responsiveness to changing customer requirements – high 'refresh rate' of design, reacting instantaneously; course of product maturity level – fast implementation of product features: degressive increase during early stages. Further entries without clear pairing: customer orientation, time-to-market (10.0 %), short-cycled iteration phases, quick adaptable assembly planning processes; further weights 5.0, 9.6, 10.0, 2.1, 9.6, 9.2, 9.2. Scoring legend: '+' scores +1 in the downward diagonal and -1 in the upward diagonal; '-' scores -1 in the downward diagonal and +1 in the upward diagonal; '0' scores 0. Weights are computed as Weight_i = ((Σ_j score_ij + 12) / 12²) · 100.]
Figure 1: Pair-by-pair comparison of requirements for assembly planning methods within highly iterative product development
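To make the weighting scheme concrete, the following minimal Python sketch applies the weight formula from Figure 1 to a hypothetical antisymmetric score matrix; the actual scores of the comparison are not recoverable from the figure, so the matrix below is an assumption for illustration only.

import numpy as np

# Hypothetical antisymmetric score matrix for 12 requirements:
# entry [i, j] is +1 if requirement i wins over j, -1 if it loses, 0 if equal.
rng = np.random.default_rng(0)
raw = rng.integers(-1, 2, size=(12, 12))
scores = np.triu(raw, 1) - np.triu(raw, 1).T  # antisymmetric, zero diagonal

# Weight_i = (sum of row scores + 12) / 12^2 * 100; by construction the
# twelve weights always sum to 100, matching the scaling stated in the text.
weights = (scores.sum(axis=1) + 12) / 12**2 * 100
print(weights.round(2), weights.sum())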
The analysis demonstrates that the requirements of time-to-market and synchronization of assembly planning to the maturity level are the constituent features for developing a concept for generative assembly planning.
Concept of the Generative Assembly Planning
The concept for generative assembly planning enables a situation-driven and incremental development of the assembly plan depending on the degree of product maturity and the start of production (SOP). An essential element is the situation-driven interplay between deterministic-normative planning phases and empirically adaptive iteration cycles. The aim is to provide a cost-effective assembly planning system that adapts the level of planning detail to the product maturity as well as to the time gap to the SOP.
Plan value trade-off. The aim of each iteration cycle is to assemble a functioning prototype, to test it and to learn from its problems for the next iteration phase. The prototyping phase represents an important source of information. However, there is a conflict between value- and plan-oriented approaches. While the objective of development is to build a testable prototype (value-oriented), serial assembly is interested in the planning data that is generated during the assembly process (plan-driven).
Situation Driven Interplay. The low degree of product maturity at the beginning of product development leads to correspondingly high degrees of freedom in assembly planning. Therefore, the concept of generative assembly planning takes a minimum of planning in a deterministic-normative phase into account, which allows the assembly of the prototype. Subsequently, the assembly plan can be completed in the empirically adaptive phase by the technology-oriented externalization of the employee's knowledge and the technologically-driven recording of the activities on the shop floor (refer to Figure 2).
Figure 2: Conception of generative assembly planning
Plan value coefficient. The plan value coefficient CPV is an essential component of the presented concept. It represents the quotient of the operationalised values for the planning share XPlan and the value share YValue. CPV thus forms a basis for deriving the planning portion of the respective phase and provides a statement about the maturity of assembly planning. The concept represents a qualitative design of the correlation between the characteristics ∆T_SOP, product maturity level and the plan value coefficient (refer to Figure 3).
CPV = XPlan / YValue
Figure 3: Derivation of correlation between ∆T_SOP and Product Maturity
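As an illustration of how this quotient might be handled in software – the operationalisation of both variables is explicitly left to further research, so the shares below are purely hypothetical assumptions:

def plan_value_coefficient(x_plan: float, y_value: float) -> float:
    """CPV = XPlan / YValue: quotient of the operationalised planning
    share and value share of a phase; higher values indicate a more
    plan-driven, i.e. more mature, assembly planning state."""
    if y_value == 0:
        raise ValueError("value share YValue must be non-zero")
    return x_plan / y_value

# Hypothetical phase: 30 % of the effort yields planning data,
# 70 % goes into building and testing the prototype.
print(f"CPV = {plan_value_coefficient(0.3, 0.7):.2f}")  # ~0.43, value-oriented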
Further research. Further research is needed on the operationalisation of the two variables. An important step is the analysis of the correlation of both variables as a function of the plan value coefficient in order to subsequently derive a phase model. The phase model describes the planning phases and the associated planning granularity.
Summary
The aim of this paper was the introduction of a methodology for assembly planning which adapts to the iteration cycles of highly iterative product development. The four categories of the Agile Manifesto were combined into a classification model. Based on this classification, a pair-by-pair comparison was used to prioritise the requirements of assembly planning within the highly iterative product development. The derived conception for generative assembly planning enables a stepwise creation of the assembly plan in the prototype production.
Acknowledgements
The authors would like to thank the German Research Foundation DFG for the kind support
within the Cluster of Excellence – “Integrative Production Technology for High Wage Countries”.
Literature References
[1] van Iwaarden, J.; van der Wiele, T.: The effects of increasing product variety and shortening product life cycles on the use of quality management systems. In: International Journal of Quality & Reliability Management. 29. Jg., 2012, Nr. 5, S. 470–500.
[2] Festge, R.; Malorny, C.: The future of German mechanical engineering. Operating successfully in a dynamic environment, 2014.
[3] Schuh et al.: Lean Innovation. Auf dem Weg zur Systematik. In: Brecher, C.; Klocke, F. (Hrsg.): Wettbewerbsfaktor Produktionstechnik. Aachener Perspektiven. Aachen: Apprimus Verl., 2008, S. 473–512.
[4] Diels, F.; Riesener, M.; Schuh, G.: Methodology for the Suitability Validation of a Highly Iterative Product Development Approach for Individual Segments of an Overall Development Task. In: Advanced Materials Research. 1140. Jg., 2016, S. 513–520.
[5] Cooper, R. G.; Sommer, A. F.: Agile-Stage-Gate. New idea-to-launch method for manufactured new products is faster, more responsive. In: Industrial Marketing Management. 59. Jg., 2016, S. 167–180.
[6] Gartzen, T.; Brambring, F.; Basse, F.: Target-oriented Prototyping in Highly Iterative Product Development. In: Procedia CIRP. 51. Jg., 2016, S. 19–23.
[7] Boston Consulting Group: The Lean Advantage in Engineering, 2015.
[8] Benington, H.: Production of Large Computer Programs. In: Annals of the History of Computing, 1983, S. 350–361.
[9] Royce, W.: Managing the development of large software systems. In: ICSE '87: Proceedings of the 9th International Conference on Software Engineering, 1987, S. 328–338.
[10] National Aeronautics and Space Administration Headquarters: NASA Space Flight Program and Project Management Handbook. URL: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150000400.pdf [Stand: 17.03.2017].
[11] VDI. Verein Deutscher Ingenieure 2221 (Mai, 1993). Methodik zum Entwickeln und Konstruieren technischer Systeme und Produkte.
[12] Cooper, R. G.; Edgett, S. J.: Lean, rapid and profitable. New product development. Ancaster: Product Development Inst., 2005.
[13] Schuh, G. (Hrsg.): Innovationsmanagement. Handbuch Produktion und Management 3 (Reihe: VDI-Buch). 2. Aufl. Berlin, Heidelberg: Springer, 2012.
[14] Lee, D.: Concurrent engineering as an integrated approach to fast cycle development. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=183623 [Stand: 17.03.2017].
[15] Takeuchi, H.; Nonaka, I.: The New New Product Development Game. In: Harvard Business Review, 1986.
[16] Beck et al.: Manifesto for Agile Software Development. URL: http://nlp.chonbuk.ac.kr/SE/ch05.pdf [Stand: 17.03.2017].
[17] Schwaber, K.; Beedle, M.: Agile software development with Scrum. Upper Saddle River: Prentice Hall, 2002.
[18] Karlström, D.; Runeson, P.: Combining Agile Methods with Stage-Gate Project Management. In: IEEE Software, 2005, S. 43–49.
[19] Cooper, R. G.: Agile-Stage-Gate Hybrids. In: Research-Technology Management. 59. Jg., 2016, Nr. 1, S. 21–29.
[20] Schuh, G.; Diels, F.; Rudolf, S.: Highly Iterative Product Development Process for Engineering Projects. In: Applied Mechanics and Materials, 2015, Nr. 794, S. 532–539.
[21] Takeuchi, H.; Nonaka, I.: The New New Product Development Game, 1986.
[22] Eckstein, H.; Eichert, J.: Konstruktionsintegrierte Arbeitsvorbereitung. In: Westkämper, E.; Spath, D.; Constantinescu, C.; Lentes, J. (Hrsg.): Digitale Produktion. Berlin, Heidelberg: Springer, 2013, S. 201–221.
Automated calibration of a lightweight robot using machine vision
David Barton a, Jonas Schwab and Jürgen Fleischer b
Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe, Germany
a david.barton@kit.edu, b juergen.fleischer@kit.edu
Keywords: Robot, Calibration, Commissioning
Abstract. Calibration of industrial robots can greatly reduce commissioning time by avoiding
expensive online programming. This article presents an approach to automating the measurement and
compensation of kinematic errors, applied to a lightweight robot arm. A measurement station is
designed to automate this calibration process. The measurement is based on the detection of a
chequerboard pattern in camera images of the end-effector. An electronic description of the robot
including its individual compensation parameters is then provided in the standardised format
AutomationML and transmitted via OPC UA. The approach requires only minimal manual input and
is shown to significantly improve the positioning accuracy of the robot.
Introduction
Significant gains in efficiency can be expected in the commissioning of production equipment
thanks to digitalisation. Commissioning time in production systems is often prolonged due to missing
component information. This information has to be either entered manually based on a data sheet, or
determined through measurements with the component in its application environment. One approach
to reducing these costs is to provide the component with an electronic description including
calibration data, based on measurements of the specific component instance at the manufacturer’s
plant. This is demonstrated in [1] for ball screws. Thus a component can become a cyber-physical
system with plug-and-work capability [2].
In a typical industrial robot, the position repeatability is sufficient to allow a task to be reliably fulfilled once it has been programmed accordingly. However, the absolute positioning accuracy is much lower, which means that the predefined trajectory has to be adjusted to each individual robot in order to compensate for systematic errors. This is often achieved through online programming: the robot is physically “taught” the desired positions within the production system. Online programming leads to increased downtime when commissioning robots and reduces the potential for reusing a program when replacing a robot or using several robots to perform the same task [3].
Positioning errors can be divided into geometric (or kinematic) and non-geometric errors such as
compliance [4]. Static geometric errors are systematic and can be determined without knowledge
about the environment and task the robot is to be used for. This allows the measurement to be
performed by the robot manufacturer before delivery and commissioning for example. The
positioning errors can then be reduced through calibration and compensation. Robot calibration can
be divided into four steps: kinematic modelling, pose measurement, kinematic identification and
kinematic compensation [3].
Various types of measurement systems have been shown to be suitable for measuring positioning
errors in order to calibrate robots [5]: telescopic ballbars, a laser tracker or an optical CMM. It is also
possible to determine the position of a robot using a multi-camera system or a single camera [6].
Camera systems can either observe the robot from a fixed position or use moving cameras attached
to the robot hand [7].
To increase productivity, the calibration process must require little manual effort and calibration
time. A cost-efficient method combining an automatic calibration in the robot manufacturer’s factory
and electronic transmission to a control system in the field was not found in the available literature.
Approach. This paper aims to develop a low cost automatic calibration station that can be integrated
in a robot manufacturer’s production. The proposed concept is shown in Fig. 1. A single fixed camera
is used for the position measurement to allow greater flexibility while keeping the physical structure
of the calibration station simple. When considering the cost of calibrating a robot, the integration of
this process into the component lifecycle must also be taken into account. For this reason the results
of the calibration are integrated into a digital representation of the component in AutomationML. This
description is subsequently combined with live information from the component’s sensors and control
electronics and made available via OPC UA. The approach is demonstrated using the lightweight
robot LWA 4P manufactured by SCHUNK.
Figure 1: Concept for the measurement station
Kinematic modelling
The LWA 4P is a serial link manipulator with 6 degrees of freedom. It consists of three spherical
units each containing two perpendicular revolute joints, connected by two arms. Before designing the
calibration station, the errors of an LWA 4P are analysed in order to evaluate the potential for
improving the accuracy through calibration. To this end a measurement arm is fixed at a predefined
distance from the base of the robot. The robot is fed a desired pose in the form of a set of joint angles
and the Cartesian coordinates of defined points on the robot flange are measured. This is repeated
once for each of 29 poses and the measured positions are compared to the positions predicted by an
ideal kinematic model. The distance between measured and predicted positions is on average 5 mm.
The measured positions are used to fit two different error models:
- A reduced 6-parameter model considering only an offset in the angle of each joint,
- A more detailed model that additionally considers errors in the orientation of joint axes and the distances between joints.
The reduced model exhibits an average deviation of 1.5 mm from the measured poses, whereas the
detailed model reduces the deviation to an average of 1.1 mm. Both models allow a significant
accuracy improvement compared with the ideal kinematic model.
Based on this preliminary study, the simpler offset-angle model is chosen. The axes of the second and third joints are parallel. In the classic Denavit-Hartenberg model, this would mean that small errors in the alignment of the axes lead to large errors in the model parameters. To avoid this difficulty, the kinematic model used in this project is based on the model formulated by Hayati and Mirmirani [8].
The positioning errors are modelled as an offset φ_i in each of the 6 joint angles θ_i (θ_i* designates the desired joint angle set by the control unit):

θ_i = θ_i* + φ_i      (1)
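A minimal sketch of how such an offset model could be applied for compensation in software follows; the offsets and angles are hypothetical, since the identified parameters of the robot are not published here.

from typing import Sequence

def compensate_joint_angles(theta_desired: Sequence[float],
                            phi: Sequence[float]) -> list[float]:
    """Invert Eq. (1): command theta* = theta_desired - phi, so that the
    physical joint, which realises theta = theta* + phi, ends up at the
    originally desired angle."""
    return [t - p for t, p in zip(theta_desired, phi)]

# Hypothetical identified offsets (rad) for the six joints of the arm
phi = [0.002, -0.001, 0.003, 0.000, -0.002, 0.001]
theta_desired = [0.0, 0.5, -0.3, 1.2, 0.0, 0.7]
print(compensate_joint_angles(theta_desired, phi))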
Design of a measurement station
The measurement station is designed to fulfil the following requirements:
- Fast and easy mounting and dismounting of the robot;
- Well-defined and repeatable positioning of the robot;
- Insignificant deformation of the structure for all relevant positions of the robot;
- Manually adjustable distance between robot and camera;
- Minimal restrictions to the robot’s movement.
These properties depend on the mechanical structure, the camera system and the selected target.
Interfaces. The first joint of the LWA 4P is attached to a base that provides electrical energy (24V
DC) and field bus connectivity (CANopen). The mechanical interface of the measurement station
consists of an aluminium plate with threaded holes to accommodate bolts for fixing the robot and two
pins to ensure a well-defined position. The bolts can be inserted and tightened from above. Thus the
robot can be mounted and dismounted quickly. The electric interface can also be connected easily by
plugging one signal cable and one energy cable into the robot base.
Figure 2: Measurement station for automated calibration
Mechanical structure. The mechanical structure consists of a base frame, a robot platform and a
camera support, as shown in Fig. 2. The base frame is built out of aluminium profiles designed to be
especially stiff with respect to bending loads. The robot platform consists of aluminium profiles and
aluminium plates. It is designed to withstand loads due to overhanging robots with no significant
deformation, while intruding as little as possible in the robot’s workspace. The camera support is
adjustable in three directions to allow for experimentation with different distances and positions
relative to the robot.
Image acquisition. The camera Basler ace2500-14u is used to acquire the images needed for the
position measurement. This camera has a CMOS sensor with a resolution of 2590 x 1942 pixels and
a pixel size of 2.2 μm x 2.2 μm. In choosing the lens, the trade-off between a wider field of view at lower focal lengths and lower distortion at higher focal lengths must be considered, in order to ensure sufficient precision of each measurement while allowing many different robot positions to be measured. The Basler lens C125-0618-5M with a focal length f = 6 mm is chosen, leading to a horizontal angle of view of 52.4° to 53.1° and a vertical angle of 39.6° to 40.1°.
Target. In order to measure the position of the robot, features need to be recognised and localised within the camera images. These can either be pre-existing features on the robot or part of an end-effector that is mounted on the robot for measurement purposes. In this project a specially designed end-effector is used as a target. The target is an aluminium cube with an edge length of 80 mm, of which 5 faces are covered with a black and white 3x3 chequerboard pattern (Fig. 3). The 6th face is
provided with a mechanical interface that can easily be centred and fixed to the robot flange. The
central square in each pattern is larger in order to increase the distance between the corner points
while maintaining a sufficient distance from the corners on the adjacent faces of the cube. Thus each
face carries 4 points that can be used as features for position measurement. Depending on the
orientation of the cube relative to the camera, up to 3 faces of the cube and 12 corner points can be
visible in one image.
Figure 3: End-effector with chequerboard target
Pose measurement
The target features corners where two white fields and two black fields meet, also known as X-corners. First the image is rectified based on a previous camera calibration in order to compensate for lens distortion. The end-effector position predicted by the non-compensated forward kinematics is used to determine a disc-shaped region of interest in the image for each point on the cube. The X-corners in these regions of interest are detected using the subpixel algorithm described in [9] (a code sketch follows after Fig. 4):
- The image is smoothed using a Gauss operator (with σ_G = 10);
- Saddle points in the intensity are located using the second directional derivatives (Fig. 4);
- Sub-pixel accuracy in the position of the corners is achieved by applying a Taylor polynomial to the local intensity around each saddle point.
Figure 4: Corner detection based on saddle points in the intensity function
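The three steps can be condensed into a rough Python sketch. This is a simplified reading of the detector in [9] under stated assumptions (no region-of-interest handling, crude thresholding), not the authors' implementation:

import numpy as np
from scipy.ndimage import gaussian_filter

def detect_x_corners(image: np.ndarray, sigma: float = 10.0):
    """Saddle-point based X-corner detection with Taylor refinement."""
    img = gaussian_filter(image.astype(float), sigma)  # Gauss smoothing
    gy, gx = np.gradient(img)                          # first derivatives
    ixx = np.gradient(gx, axis=1)                      # second derivatives
    iyy = np.gradient(gy, axis=0)
    ixy = np.gradient(gx, axis=0)
    det_h = ixx * iyy - ixy ** 2                       # < 0 at saddle points
    corners = []
    for y, x in zip(*np.where(det_h < 0.5 * det_h.min())):
        hess = np.array([[ixx[y, x], ixy[y, x]],
                         [ixy[y, x], iyy[y, x]]])
        grad = np.array([gx[y, x], gy[y, x]])
        # Stationary point of the local quadratic (Taylor) model:
        dx, dy = -np.linalg.solve(hess, grad)
        corners.append((x + dx, y + dy))
    return corners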
The pose of the target cube then needs to be reconstructed from the image points by taking into account the projection from a three-dimensional world onto a two-dimensional image. This can be expressed as a perspective-three-point problem (P3P). Following an approach presented in [10], the known distances between three points on the surface of the cube are used to determine up to four possible solutions for the position of the cube. An arbitrary fourth point can then be used to select the most plausible cube position.
Given that there are up to 12 points in each image and the computing time is not subject to any
strong constraints in this application, the P3P problem is solved for all possible combinations of three
visible points. Out of the computed positions (up to 220 alternatives), the solution that fits the image
points best in terms of mean square errors is selected.
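This exhaustive selection can be sketched as follows; solve_p3p and reproject are placeholders for a P3P solver such as the algorithm of [10] and a camera projection function, neither of which is an API specified by the paper:

from itertools import combinations
import numpy as np

def best_cube_pose(image_pts, model_pts, solve_p3p, reproject):
    """Solve P3P for every triple of visible points and keep the pose
    whose reprojection fits all image points best (mean square error)."""
    best_pose, best_mse = None, np.inf
    for triple in combinations(range(len(image_pts)), 3):
        for pose in solve_p3p([image_pts[i] for i in triple],
                              [model_pts[i] for i in triple]):
            residuals = reproject(pose, model_pts) - np.asarray(image_pts)
            mse = float(np.mean(np.sum(residuals ** 2, axis=1)))
            if mse < best_mse:
                best_pose, best_mse = pose, mse
    return best_pose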
The measurement algorithm is tested by placing the target directly on the platform in known positions. The results show an average Euclidean distance of 0.51 mm between the exact position and that measured using the image processing algorithm. The standard deviation of the calculated Euclidean distances is 0.32 mm.
Identification and compensation
Using the measurement method described above, 15 different poses are each measured once for a given robot. The measurements are compared with the expected poses based on the ideal kinematic model (Fig. 5). Based on the results of this measurement, the average bias or offset φ_i of each joint i is estimated. These offsets are combined to form a model for error compensation.
The process is validated by measuring 35 poses not used for determining the error model. Each of these poses is measured first without error compensation and then using the offsets φ_i. The errors are compared using the Euclidean distance between the position calculated using the kinematic model and that measured using the camera.
Before calibration the 50 poses show an average absolute error of 5.05 mm, as measured by the Euclidean distance, with a standard deviation of 1.96 mm. After calibration, there remains an average error of 3.62 mm (standard deviation 0.98 mm) among the poses used for calibration. The test poses not used for calibration also improve, achieving an average error of 3.34 mm with a standard deviation of 1.46 mm.
The results show that a significant improvement in accuracy can be achieved using the described
approach for compensation of geometric errors. The calibration of the same robot using a
measurement arm, as described above, led to a higher accuracy than when using a camera. This
suggests that an improvement in the measurement setup and image processing could lead to a better
accuracy after calibration.
Figure 5: Image after processing
Digital representation and integration in component lifecycle
In order to further reduce manual effort when commissioning the robot, the communication of the
calibration information into the control system must also be automated. This can be achieved using a
plug-and-work approach. The component is provided with a digital representation in the form of a
description file in the standardised format AutomationML. The description has a hierarchical
structure, as shown in Fig. 6. Each joint is represented by an internal element that is described by
attributes. Attributes include type data as well as instance-specific data (i.e. calibration parameters).
The digital representation can be hosted on a single-board computer, serving as a plug-and-work
adapter. This device is equipped with an SD card to save the static information and a CANopen
interface to exchange live information with the component. In order to make the component
description available to control units and other systems, the adapter provides an OPC UA server via
TCP/IP. The OPC UA server combines the information from the AutomationML file with live data
from the component’s control electronics and sensors. Thus an up-to-date digital representation of
the robot is available within the local network.
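A minimal sketch of how such a description could be generated with Python's standard library; the element and attribute names here are illustrative assumptions, not the actual AutomationML schema used in the project:

import xml.etree.ElementTree as ET

# Build a minimal CAEX/AutomationML-style fragment for one joint,
# carrying the identified calibration offset as an instance attribute.
robot = ET.Element("InternalElement", Name="LWA4P")
joint = ET.SubElement(robot, "InternalElement", Name="Joint1")
attr = ET.SubElement(joint, "Attribute", Name="AngleOffsetPhi",
                     AttributeDataType="xs:double")
ET.SubElement(attr, "Value").text = "0.002"  # hypothetical offset in rad
print(ET.tostring(robot, encoding="unicode"))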
Figure 6: Digital representation of a robot in AutomationML, as shown in AML Editor
Summary and conclusion
Expensive online programming during the commissioning of robots can be avoided by calibrating
the kinematic model beforehand. This paper presents a method for automating the calibration of a
lightweight robot and an appropriate measurement station design. In order to calibrate the robot, its
geometric positioning errors must be measured before delivery to the end-user. A camera-based
measurement station is designed and the corresponding software is developed. Corners in a
chequerboard pattern on a specially designed end-effector are used to determine the robot’s pose.
Thus the geometric errors can be estimated and integrated in a digital representation of the robot, so
that they can be compensated by the control system. The calibration method is shown to significantly
reduce positioning errors. Further work should focus on increasing the accuracy, for example by
improving the measurement setup and the image processing.
Acknowledgements
This paper is based on results from “Secure Plug and Work”, a research and development project
that was funded by the German Federal Ministry of Education and Research (BMBF) within the
framework concept “Research for Tomorrow’s Production”. The project was managed by the Project
Management Agency Karlsruhe (PTKA). The authors are responsible for the contents of this
publication.
References
[1] S. Dosch, A. Spohrer, J. Fleischer, Reduced commissioning time of components in machine tools
through electronic data transmission, Procedia CIRP 29 (2015) 311-316.
[2] M. Schleipen, A. Lüder, O. Sauer, H. Flatt, J. Jasperneite, Requirements and Concept for Plugand-Work, at-Automatisierungstechnik 63 (2015) 801-820.
[3] H. Zhuang, Z. Roth, Camera-aided robot calibration, CRC Press, Boca Raton, 1996.
[4] Z. Roth, B. Mooring and B. Ravani, An overview of robot calibration, IEEE J. Robot. Autom. 3
(1987) 377–385.
[5] A. Nubiola, M. Slamani, A. Joubair, I.A. Bonev, Comparison of two calibration methods for a
small industrial robot based on an optical CMM and a laser tracker, Robotica 32 (2014) 447-466.
[6] J. M. Motta, G.C. de Carvalho, R.S. McMaster. Robot calibration using a 3D vision-based
measurement system with a single camera, Robotics and Computer-Integrated Manufacturing 17.6
(2001) 487-497.
[7] H. Zhuang, K. Wang, Z. Roth, Simultaneous Calibration of a Robot and a Hand-Mounted Camera,
IEEE Transactions on Robotics and Automation 11.5 (1995) 649-660.
[8] S. Hayati, M. Mirmirani, Improving the Absolute Positioning Accuracy of Robot Manipulators,
J. Robotic Syst., 2 (1985) 397-413.
[9] D. Chen, G. Zhang, A New Sub-Pixel Detector for X-Corners in Camera Calibration Targets,
WSCG SHORT papers proceedings (2005) 97-100.
[10] X. Gao, H. Chen, New Algorithms for the Perspective-Three-Point Problem, J. Comput. Sci. &
Technol. Vol. 16 No. 3 (2001) 194-207.
Chapter
Organization of Manufacturing
Monetary and Quality-Feature-Based Quantification of Failure Risks
in Existing Process Chains
Kevin Nikolai Kostyszyn 1,a,d, Robert Schmitt 2,b,e
1 Fraunhofer Institute for Production Technology IPT, Steinbachstr. 17, 52074 Aachen, Germany
2 Laboratory for Machine Tools and Production Engineering (WZL), RWTH Aachen University, Steinbachstr. 17, 52074 Aachen, Germany
a kevin.kostyszyn@ipt.fraunhofer.de, b r.schmitt@wzl.rwth-aachen.de
d +49 (0)241 8904-603, e +49 (0)241 80-20283
Keywords: Quality Assurance, Process Control, Monitoring
Abstract:
Common approaches for the analysis and optimisation of processes, such as Statistical Process Control (SPC) or the Failure Mode and Effects Analysis (FMEA), do not support a systematic and reproducible prioritisation of quantified, quality-feature-based failure risks. For example, process capability indices can only be interpreted from a process-specific and technical perspective but do not convey any information that describes the importance for the company. In contrast, the Risk Priority Number (RPN) includes a factor that describes the importance. However, this factor can be considered subjective and can therefore affect the reproducibility. Moreover, common approaches do not give any instructions on how to aggregate the quantifying values of several risks. This complicates a comparison of existing process chains based on their quantified failure risks.
To overcome these deficiencies, a new method is introduced in this paper. It enables a
monetary quantification of failure risks related to selected quality features. This includes the
systematic design of quality-feature-based cost functions that quantify expected failure costs as
well as so-called near misses. Furthermore, it supports the aggregation of risk-quantifying
values. This enables a simple risk-related comparison of whole existing process chains.
Acknowledgement
The authors gratefully acknowledge that the presented method is a result of the research project “Qua2Pro – Quality-Feature-Based Quantification of Failure Risks in Production” (SCHM1856/52-1), which was granted by the German Research Foundation (DFG).
Introduction
A well-established risk management can support producing companies in enhancing the effectiveness and efficiency of their processes. Based on systematic identifications, analyses and evaluations of risks, it promotes a continuous development of process knowledge as well as the derivation of effective risk treatments. Thus, high product quality with low variance can be achieved. This can be viewed as an important benefit, as it can ensure the company's success and let the company remain competitive in the modern economy. [1-3] For risk
management, there exist several standards and frameworks, such as the ISO 31000, the IRM
(Institute of Risk Management) Standard or the COSO ERM (Enterprise Risk Management)
Framework. [3-4]
Before quality-feature-based failure risks can be objectively evaluated, prioritised and treated, they have to be quantified. Until now, existing methods have focused on applications in different areas, e.g. in the financial sector or in supply chains. However, the consideration of the value added chain has not been established yet. [5]
Existing approaches, such as the Hazard and Operability Method HAZOP or the Failure
Mode and Effects Analysis FMEA and the associated Risk Priority Number RPN, provide
information that often bases on subjective estimations. An objective, quantified description of
quality-feature-based failure risks, e.g with relation to monetary consequences, is not provided
[6-7]. The determination of the so-called Value at Risk VaR and Conditional Value at Risk
CVaR is widely established in the financial and insurance sector. These values enable monetary
descriptions of failure risks. Their calculations require preliminary definitions of threshold
probability values [8-10]. However, focusing on quality features in production, those values
cannot be provided. Instead, quality-feature-based tolerance levels are thresholds that have to
be considered. Other approaches, such as sensitivity and scenario analyses or Monte Carlo simulations, only describe experimental and simulative quantification methods [11]. Their applications are either limited or they require high preparation efforts, since they cannot be integrated into running production or since they require complex simulation models as a basis of calculation. Furthermore, common methods do not provide any instructions on how to aggregate failure-risk-quantifying values based on a selection of quality features. Therefore, evaluations of failure risks can only be based on single quality features generated by single processes but not on process chains.
In the application of Statistical Process Control (SPC), process capability indices are
determined to quantify existing failure risks. However, these values can only be interpreted
from a technical perspective and do not imply any information about the importance for the
company. Moreover, they cannot be aggregated. As an alternative, so-called loss functions can
be defined. Depending on their definition, they can be used to describe expected failure costs
and, therefore, they can provide comparable values that can be taken into consideration for risk
priorisations. The most traditional approach is the step-loss function [12-13]. This function is
equal to zero within the tolerance field of a considered quality feature. Outside the tolerance
field, it describes a constant cost value that can be interpreted as a scrap cost value. Hence, a
measurable process output, characterised by its mean value and its variance, exceeding given
tolerance levels will generate costs according to the function. However, those costs are mostly
not realistic as they do not consider all possible types of failure costs such as rework costs.
Among more complex loss function designs, the Taguchi function is supposed to be the most
popular one. According to this function, every value that is not equal to the process setpoint
generates costs. Those costs are expressed by a parabolic function. Since the function is quadratically increasing in both directions and since all process outputs except for the setpoint are supposed to generate costs, the loss function values cannot be interpreted as expected failure costs. According to Taguchi, the function rather describes a quality loss for society. [14-16]
For the design of cost functions representing expected costs, different alternatives exist in the
scientific community. In one approach, the step-loss function outside the tolerance field is combined with the Taguchi function inside the tolerance field. Other approaches enable the
asymmetric and free loss function design, e. g. by using piecewise linear functions or mirror
images of standard Gaussian functions. [17-21] However, all those approaches do not provide
any detailed instructions for quantifications of quality-feature-based failure risks based on
expected failure costs.
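For reference, the two classic loss function shapes discussed above can be written down compactly; the parameter names are generic, not taken from the cited works:

def step_loss(y: float, lsl: float, usl: float, scrap_cost: float) -> float:
    """Step-loss function [12-13]: zero inside the tolerance field,
    a constant scrap cost outside of it."""
    return 0.0 if lsl <= y <= usl else scrap_cost

def taguchi_loss(y: float, setpoint: float, k: float) -> float:
    """Taguchi loss [14-16]: quadratic loss for any deviation from
    the process setpoint, with cost coefficient k."""
    return k * (y - setpoint) ** 2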
To overcome these deficiencies, a new method is introduced that supports a monetary and
reproducible quantification of quality-feature-based failure risks. Different steps of the method
include the definition of piece-wise linear cost functions for the description of expected failure
costs and of so-called near misses. Furthermore, they include two different ways to aggregate
quantified failure risks to enable risk-related comparisons of process chains. The application of
the method is described by means of an example.
Method Steps
The introduced method for the monetary and quality-feature-based quantification of failure
risks in existing process chains can be divided into six different steps as shown in Fig. 1.
Figure 1: Method steps for the monetary quantification of quality-feature-based failure risks
Identification of Risks. In the first step, an identification of quality-feature-based failure risks is performed. Common methods, such as the Ishikawa diagram, can be applied. To save time and to prevent high efforts, it is recommended to apply the method only to critical quality features. For quality feature expressions that always meet the requirements, the priority of risk analysis and treatment can be defined as low. On the other hand, if the expressions often do not meet the requirements, the priority should be set to high. In the introduced example, see Fig. 2, the focus is put on a shaft production.
Figure 2: Production of cut and turned shaft
Two critical quality features have been identified. The first one is the cut length of the shaft
that is required to be between 1498.8 mm and 1501.2 mm. The subsequent turning process has
to provide a diameter that is between 29 mm and 30 mm. It is assumed that within a predefined
cycle time, only one shaft can be cut and only one shaft can be turned. The failure risks, related
to mentioned quality features, and their monetary consequences will be quantified in the
following. Since the required calculations and visualisations can be very complex, a software
tool has been developed giving support during the following method steps (see Fig. 1).
Classification of Risks. The last four steps of the introduced method differ depending on the scale type that is used to describe the expressions of the considered quality features [22]. Selectable scale types are the nominal scale, the ordinal scale and the metric scale. In the example and in the following step descriptions, only the metric scale will be taken into consideration.
Quantification of Risks. In this step, the quality-feature-based failure risks are quantified. At first, this includes the calculation of the probabilities of occurring failures; monetary consequences will be quantified in a later step. For every considered quality feature, a control chart, a histogram and an approximating distribution curve are generated. Fig. 3 illustrates the results relating to the diameter of the produced shaft.
Figure 3: Control chart, histogram and distribution curve of quality feature diameter
According to the central limit theorem, it can be stated that the means of large numbers of independent and identically distributed sample values are approximately normally distributed [23]. In the context of this method, the sample values are measured values describing the expressions of the produced quality features that have to be analysed. The values in the illustrated control chart represent the means of 10 sample values each. Solid lines represent the mean of the measured values as well as the upper and lower tolerance levels of the quality feature. A dashed line marks the half distance of the tolerance field. Since control limits do not play a role in this quantification method, they are not shown. The histogram on the right side in Fig. 3 is approximated by a normal distribution curve. Based on the integration of a normalised distribution curve and on the consideration of the tolerance levels, the software tool is able to calculate the probabilities of occurring failures.
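Under this normality assumption the failure probability is simply the tail mass of the fitted distribution outside the tolerance field. A minimal sketch with scipy, using illustrative values for the shaft diameter rather than the paper's measured data:

from scipy.stats import norm

# Hypothetical fitted process parameters and the tolerance levels (mm)
mu, sigma = 29.45, 0.25
lsl, usl = 29.0, 30.0

# Probability of a failure: mass below the lower or above the upper level
p_fail = norm.cdf(lsl, mu, sigma) + norm.sf(usl, mu, sigma)
# Probability of a near miss: mass inside the 0.1 mm edge intervals
p_near = (norm.cdf(29.1, mu, sigma) - norm.cdf(lsl, mu, sigma)
          + norm.cdf(usl, mu, sigma) - norm.cdf(29.9, mu, sigma))
print(f"P(failure) = {p_fail:.4f}, P(near miss) = {p_near:.4f}")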
Quantification of Near Misses. Referring to common definitions, a near miss can be understood as a narrowly avoided mishap [24]. In this method, a near miss describes an expression of a quality feature that belongs to a preliminarily defined near miss interval inside the tolerance field. As a typical characteristic of near miss intervals, one boundary is equal to one of the tolerance levels, while the second boundary is located inside the tolerance field. A slight change of the process can already cause a shift of the near-miss-representing percentage of products to the rework and discard area outside the tolerance field. That is why the quantified percentage of near misses serves as a warning value, as it could generate real failure costs in the future. The integration of near misses in the monetary quantification is described in the next step. The near miss intervals for the quality feature diameter in the example are marked in Fig.
3. It is recommended to set the widths of the near miss intervals to 1-15 % of the tolerance
field’s width. In this example, all length values between 1498.8 mm and 1499.0 mm as well as
all values between 1501.0 mm and 1501.2 mm are considered as near misses. Near misses of
the diameter are between 29.0 mm and 29.1 mm as well as between 29.9 mm and 30.0 mm.
Quantification of Failure Costs. Expressions of quality features that lie outside the tolerance levels generate failure costs. These costs can be divided into internal failure costs, e.g. caused by rework, repeat examination, reduction in value or scrap, and into external failure costs, such as warranty, goodwill and external rework or scrap costs. The value of failure costs can depend on the specific expression of the related quality feature. For example, if the cut length of the shaft is too small, rework is not possible and the shaft has to be viewed as scrap. Thereby, it generates failure costs of 25 €. If the length is too large, the shaft can be cut to the right length. Due to the additional personnel engagement and tool wear, resulting failure costs of 10 € are defined.
The cost function for the quality feature diameter is more complex. If the turned diameter is too small, the shaft has to be defined as scrap. On the other hand, if the diameter is too big, it can be rectified by rework. Since the processing time for this rework depends on the distance to the desired diameter, the rework costs depend on the measured diameter size. Fig. 4 shows the description of the cost function.
Figure 4: Failure cost function for quality feature diameter
The cost function design, supported by the implemented software tool, includes the definition of piecewise linear cost function segments. With an increasing absolute distance to the tolerance level, the cost function is monotonically increasing. The leftmost and rightmost segments have to be horizontal, since infinitely increasing cost functions can be viewed as unrealistic. The costs defined for the near miss intervals are equal to the failure cost value at the corresponding tolerance level. Within the tolerance field and outside the near miss intervals, no costs occur.
Quality-feature-based failure risks can be described by the integral, over the expression of the quality feature, of the probability density function multiplied by the cost function. For the quality feature length, the resulting failure and near miss costs are 0.63 € and 0.96 € per single shaft. The diameter generates failure and near miss costs of 1.97 € and 1.34 € per single shaft.
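In other words, the expected cost is E[C] = ∫ f(y)·C(y) dy, with f the fitted probability density and C the piecewise linear cost function. A numerical sketch, again with hypothetical parameters for the diameter rather than the paper's fitted values:

import numpy as np
from scipy.stats import norm

def expected_cost(cost_fn, mu, sigma):
    """Numerically integrate pdf(y) * cost(y) over the feature range."""
    y = np.linspace(mu - 8 * sigma, mu + 8 * sigma, 20001)
    return np.trapz(norm.pdf(y, mu, sigma) * cost_fn(y), y)

def diameter_cost(y):
    """Hypothetical piecewise linear cost function: scrap (25 €) below
    29.0 mm, rework rising from 10 € above 30.0 mm, capped at 25 €."""
    return np.where(y < 29.0, 25.0,
                    np.where(y > 30.0,
                             np.minimum(25.0, 10.0 + 15.0 * (y - 30.0)),
                             0.0))

print(f"expected failure costs: "
      f"{expected_cost(diameter_cost, 29.45, 0.25):.2f} € per shaft")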
A more detailed description of the resulting failure costs can be provided by calculating
probability values for pre-defined cost values or cost value intervals. The resulting probabilities
for a monetary resolution of 0.1 € are shown in Fig. 5 for both quality features length and
diameter.
Figure 5: Probabilities of occurring costs
For example, the diagrams show that the turning process of one shaft will generate failure costs of 11.00-11.10 € with a probability of 9.28 %. Near miss costs of 11.00 € will occur with a probability of 9.02 %.
Cost Aggregation and Evaluation. In the last step of the method, the quantified monetary consequences of failure risks can be evaluated by comparing them with each other. High monetary consequences imply a stronger need for risk treatments and, therefore, higher priorities. In the example, the turning process appears to be more critical than the cutting process, since it generates higher failure costs. However, for the quality feature length, the near miss costs are higher than the failure costs. A slight change of the cutting process can lead to a high increase of costs. That is why those failure risks should not be neglected in the future.
If the whole chain of cutting and turning processes is intended to be compared to other process chains, the monetary consequences of failure risks can be expressed in an aggregated form. The introduced method provides two different ways of calculation. The first one is a simple summation of the calculated failure cost values. For example, the aggregated failure costs of the cutting and turning process are described by the sum of 0.63 € and 1.97 €, which equals 2.60 €.
In the second way, the detailed cost descriptions presented in Fig. 5 can be aggregated into one single detailed description for both considered quality features. The required mathematical operation is denoted as convolution. Through convolution, the probabilities of all possible combinations of costs caused by both quality features of one shaft are calculated. For example, the probability of no occurring costs is equal to the product of 97.41 % and 84.26 % (see Fig. 5). The results of the convolution are shown in Fig. 6.
Figure 6: Probabilities of occurring aggregated cost intervals
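The aggregation step can be sketched with numpy; the discretised cost distributions below are hypothetical stand-ins at a 0.1 € resolution, chosen only so that the zero-cost entries reproduce the 97.41 % and 84.26 % from Fig. 5:

import numpy as np

# Hypothetical per-shaft cost distributions, index k = costs of 0.1*k €
p_length = np.zeros(300)
p_length[0], p_length[100], p_length[250] = 0.9741, 0.0109, 0.0150
p_diameter = np.zeros(300)
p_diameter[0], p_diameter[110], p_diameter[250] = 0.8426, 0.0928, 0.0646

# Convolution yields the distribution of the summed costs of one shaft.
# (The special rule that length scrap excludes diameter costs is omitted.)
p_total = np.convolve(p_length, p_diameter)
print(f"P(no costs) = {p_total[0]:.4f}")  # = 0.9741 * 0.8426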
As a special requirement of this application field, the percentage of occurring scrap costs of 25 €, caused by a too small shaft length, cannot be combined with occurring failure costs caused by the shaft diameter. The reason is that, due to their scrap status, those shafts will not be processed by the turning lathe anymore.
Summary
With the application of the introduced method, a monetary and reproducible quantification of quality-feature-based failure risks can be realised based on the definition of piecewise linear cost functions. Thus, risk-related saving and optimisation potentials become visible and can be considered for risk prioritisations. Calculated near miss costs describe cost increases that can be expected in the future in the case of minimal process changes.
At the current state of the method development, no guidelines are provided yet that describe which specific cost types have to be integrated into the cost function. In the course of future research, a systematic sequence of cost information requests will be developed to provide detailed guidance to the user.
References
[1] J. Lam, Implementing Enterprise Risk Management, John Wiley & Sons Inc., Hoboken,
New Jersey, 2017, pp. 11-12.
[2] T. Meyer, G. Reniers, Engineering Risk Management, second ed., de Gruyter, Berlin, 2016,
pp. 30-31.
[3] P. Hopkin, Fundamentals of Risk Management, Understanding, evaluating and
implementing effective risk management, fourth ed., Kogan Page Ltd., London, 2017, pp. 4-5.
[4] R.J. Chapman, The Rules of Project Risk Management, Implementation Guidelines for
Major Projects, Routledge, New York, 2014.
[5] L. Condamin, J.-P. Luisot, P. Naim, Risk Quantification: Management, Diagnosis and
Hedging, John Wiley & Sons, West Sussex, 2006, pp. 43-117.
[6] B. Chatterjee, Applying Lean Six Sigma in Pharmaceutical Industry, Routledge, New York,
2016, pp. 37-41.
[7] M. Tortorella, Reliability, Maintainability and Supportability, Best Practices for System
Engineers, Wileys & Sons, Hoboken, New Jersey, 2015, p. 248.
[8] A. Bensoussan, D. Guegan, C. S. Tapiero, Future Perspectives in Risk Models and Finance,
Springer Cham, Heidelberg, 2015, p. 71.
[9] P. Jorion, Value at Risk: The New Benchmark for Managing Financial Risk, third ed.
McGraw-Hill, New York, 2007, pp. 105-138.
[10] P. Best, Implementing Value at Risk. John Wiley and Sons: West Sussex, 1999; pp. 1-13.
[11] D. Dash Wu, Modeling Risk Management in Sustainable Construction, Springer Verlag,
Berlin Heidelberg, 2011, p. 277.
[12] CT. Su, Quality Engineering: Off-Line Methods and Applications. CRC Press, Broken
Sound Parkway, 2012, pp. 77-78.
[13] T. Hill, P. Lewicki, Statistics: Methods and Applications, Statsoft, Tulsa, 2006, p. 208.
[14] WD. Mawby, Integrating Inspection Management: Into Your Quality Improvement
System, ASQ Quality Press, Milwaukee, 2006, pp. 24-27.
[15] G. Keller, Statistics for Management and Economics, ninth ed., Cengage Learning, Mason,
2012, p. 580.
[16] RP. Mohanty, Quality Management Practices, Excel Books, New Delhi, 2008, pp. 282-284.
[17] F. Spiring, An Alternative to Taguchi’s Loss Function, Annual Quality Congress,
American Society for Quality ASQ, Milwaukee, 1991, pp. 660-665.
[18] CH. Chen, Determining the optimum process mean for a mixed quality loss function, In: The
international journal of Advanced Manufacturing Technology, (2006) 571, DOI:
10.1007/s00170-004-2375-1.
[19] J-N. Pan, J. Pan, A Comparative Study of Various Loss Functions in the Economic
Tolerance Design, IEEE International Conference of Management of Innovation and
Technology, (2006) 783-787, DOI: 10.1109/ICMIT.2006.262327.
[20] A-B. Shaibu, B. R. Cho, Development of realistic quality loss functions for industrial
applications, Journal of Systems Science and Systems Engineering, (2006) 385-398,
DOI: 10.1007/s11518-006-6048-5.
[21] L. Morel-Guimaraes, TM. Khalil, YA. Hosni, Management of Technology: Key Success
Factors for Innovation and Sustainable Development, Elsevier, Amsterdam, (2005) 448.
[22] R. Schmitt, K. Kostyszyn, Fehlerrisiken in der Produktion, In: wt Werkstattstechnik online,
11/12-2015, pp. 775-780.
[23] O. Johnson, Information Theory and the Central Limit Theorem, Imperial College Press,
London, 2004, p. 30
[24] R. C. McKinnon, Safety Management, Near Miss Identification, Recognition, and
Investigation, CRC Press, Boca Raton, Florida, 2012, p. 1.
Development of a cost-based evaluation concept for production
network decisions including sticky cost aspects
Julian Ays 1,a, Jan-Philipp Prote 1,b, Bastian Fränken 1,c, Torben Schmitz 1,d and Günther Schuh 1,e
1 Laboratory for Machine Tools and Production Engineering (WZL), RWTH Aachen University, Steinbachstraße 19, 52074 Aachen, Germany
a J.Ays@wzl.rwth-aachen.de, b J.Prote@wzl.rwth-aachen.de, c B.Fränken@wzl.rwth-aachen.de, d T.Schmitz@wzl.rwth-aachen.de, e G.Schuh@wzl.rwth-aachen.de
Keywords: Manufacturing network, Cost, Sticky costs
Abstract. As a consequence of continuously rising competitive pressure, effectively configured global production networks are becoming an increasingly important factor for corporate success. However, production network decisions about a relocation of production capacities to another location often lack the determination of all necessary cost information due to the usage of simplistic approaches. In this paper, the aim is to design a cost-based evaluation concept to support such relocation decisions. The approach includes pagatoric costs, dynamic cost effects over a longer time horizon and sticky cost effects on a remaining location after a production relocation has been realised.
Introduction
Nowadays, the effects of globalisation have become reality for many companies worldwide [1].
They relocate their production capacities to other countries to benefit from cost reduction and market
exploitation effects. The objective is to stay competitive in the increasingly global, connected world.
In this context, various production network decisions need to be made to assess whether or not a
relocation is profitable enough to compensate the needed expenses. Due to the rising network
complexity, those decisions are getting more and more difficult for companies. Challenges arise from
the correct and complete depiction of all effects and costs. To reduce the efforts of data gathering,
many companies use simplistic approaches that do not capture all connections and costs throughout
the entire network and its environment [2]. As a result, an incorrect cost evaluation of a network decision can occur. Cost advantages are often not achieved to the desired extent and some decisions even need to be reversed. Recent studies show that in the last years, one out of four German location decisions needed to be revoked due to incorrect cost or quality expectations [3]. Many different approaches for
the feasibility calculation of a relocation exist. Nevertheless, most of them just concentrate on the
possible cost advantages at a new location. Effects on the remaining location, such as remaining costs,
so called sticky costs, are usually not considered.
To overcome those challenges and especially assess a possible relocation, the aim of this paper
is to design a cost-based evaluation concept to support such decisions. This evaluation should include
pagatoric costs, dynamic cost effects over a longer time horizon and effects on the remaining location.
In particular, sticky costs – costs that unintendedly remain after the decision – and their effect on the network are identified.
Deficiencies of existing approaches
The topic of evaluating and designing production networks in terms of cost flows is widely
discussed in literature. Many approaches use quantitative optimisation models to minimise various
cost targets. Schilling’s mathematical optimisation model is especially designed to support decision
making in global production networks [4]. Vahdani and Mohammadi suggest a bi-objective
optimisation model to minimise the total landed costs and waiting times in service [5]. Mourtzis
directly simulates a production network by creating and using a software tool to get to a sufficient
network configuration [6]. In several cases, not only quantitative but also certain qualitative criteria
of a production network decision are taken into consideration. For example, Yang includes both quantitative and qualitative factors to support location decisions, but does not provide a sufficient cost modelling approach [7]. Lanza and Moser, on the other hand, develop an approach for configuring
and planning a production network based on a multi-objective optimisation and future scenarios.
Although certain multiple criteria are taken into account, not all decision relevant costs are considered
and the modelling effort is very high [8].
The problem of many optimisation or simulation approaches lies in their transparency and usability for a decision maker. The resulting decision suggestions of the models and tools often lack comprehensibility. Furthermore, the variability and flexibility of their usage in different situations is usually limited. Thus, they are often not applicable in practice.
Other authors use concepts in their approaches that are more practical. In their cost evaluation,
Buhmann and Schön present a net-present-value (NPV) for three different future scenarios (a
pessimistic, a realistic and an optimistic case) [9]. Kinkel & Zanker combine static and dynamic
methods to get a holistic consideration of pagatoric and imputed costs [10]. Christodoulou et al. use
a multi-stage approach to continuously configure and improve production networks, but focus rather
on the decision-making process than on the cost model [11]. Nevertheless, none of the approaches
considers sticky costs or the effects of a relocation of production capacities on the remaining location.
Although sticky costs have been researched in recent decades [12], they are mostly just discussed in
theory or examined for their appearance in studies [13]. Usually, no method to quantify sticky costs
is provided [14]. Beltz suggests a formula to define sticky costs, but does not focus on the usage
of sticky costs in industrial practice [15]. Thus, sticky costs have not yet been included in production
network decisions, apart from first considerations of the mentioned authors [16].
To compensate these deficiencies, the evaluation concept presented in this paper is designed as a
comprehensible, flexible cost assessment of a relocation decision. In addition, sticky costs are also
included in the evaluation.
Concept
The evaluation concept is based on a previously developed cost modelling method of the authors,
which depicts the basic, necessary pagatoric costs [17]. It is divided into two phases, cost analysis
and decision support (cf. Fig. 1).
Figure 1: Methodology of the concept. Phase 1 (cost analysis): quantification of the sticky costs for buildings (C_{Sticky,B,h}), machines (C_{Sticky,M,h}), staff (C_{Sticky,S,h}) and energy (C_{Sticky,E,h}) and their removal. Phase 2 (decision support): unit cost calculation at the new location from pagatoric and imputed costs (including dynamic process costs and sticky costs), evaluation of the weakening of the remaining location (change of the marginal return, change of the remaining location by the sticky costs) and dynamic NPV calculation with investment amortisation over the periods 0 to n.
The first cost analysis phase concentrates on the evaluation and determination of sticky cost effects
during or after a relocation. For their determination, three steps are followed. First, the possible
locations in which these costs could occur are described. Second, all identified cost categories are
quantified into formulae. Third, the possible removal of sticky costs by reorganisation is discussed.
The second phase is the actual decision support. First, the unit costs of the relocated production
capacities at a new location are evaluated. This provides the possibility not only to include pagatoric
but also imputed costs. Here, especially the effects of the sticky costs are considered. After that, the
focus shifts to the evaluation of a possible weakening of the remaining location. The NPV calculation
concludes the decision support. It assesses the profitability of executing the decision, taking into
account the triggered cost savings per period.
Sticky cost analysis. Sticky costs are generally described as costs which increase more when activity
rises than they decrease when activity falls by an equivalent amount [12]. For the purpose of
production network decisions in this paper, this definition is slightly adjusted. Here, sticky costs are
defined as costs which stay unintendedly at the remaining location after a relocation of production
capacities. Possible origins can be fixed or variable costs. Sticky costs based on variable effects
mainly occur due to a possible discontinuation of economies of scale after an implemented relocation
process. By that, the variable costs of the remaining production capacities change. Although this
effect will probably occur, in most cases it seems to be dominated by the effect of fixed costs, which
is likely to emerge in a higher amount [18].
Sticky costs in remaining fixed costs can be identified in four main cost categories. Fixed sticky
costs can originate from building, staff, energy and machinery costs. Building costs can be either
rental costs or non-recurring purchase costs that become noticeable through depreciation. Sticky costs
can emerge in this cost category due to vacant areas after a relocation process. These areas still cause
unnecessary costs. Furthermore, they are normally non-reducible if the empty areas do not comprise a
whole building, and they can only be re-planned over a longer time horizon. In the second category,
staff that is still required but not working to capacity after a relocation, or staff that cannot be
dismissed, creates sticky staff costs. The problem here is that the payment usually is not connected to
a certain amount of workload. Hence, every idle or only partly utilised employee causes more costs
than necessary, which stick to the remaining location and often cannot be removed. For example, a
plant manager still receives his full salary after relocating half of his production capacities.
Machinery costs occur due to the same principle, mainly because of a decreased utilization. Fixed
energy costs, on the other hand, are strongly connected to the used machinery or buildings. They
depend on the further operation of the respective object. However, only the energy consumption used
to ensure functionality of the plant is included here, e.g. energy for lights, air conditioning etc. This
energy consumption can be seen as fixed. In contrast, the variable energy necessary to produce a
component is excluded. Not all categories are equally important for every company. Thus,
every decision maker can concentrate on the most relevant sticky cost categories.
In the following, all of these categories are described by equations which are loosely based on the
same principle: a ratio to measure the unwillingly remaining building area, staff, machinery or energy
in comparison to the total area etc. This ratio is multiplied by the total costs of the examined object.
Furthermore, the "initiated" sticky costs can be divided into a pagatoric and an imputed part for the
building and the machinery category. Staff and energy costs, on the other hand, only occur as
pagatoric costs. All of these equations are displayed below for each category.
The building costs can be divided into rental costs, which are pagatoric, and depreciation and interest
costs, which are imputed. Both are measured by an area ratio of free or partly free square metres per
total area. The partly free areas are further defined by a ratio of how much of them was used by the
relocated production capacities. An example could be a break room that is used less after a
relocation of production capacities and its employees. The sticky staff costs are divided into direct
workers of the production line and indirect employees. This is done because direct production workers
are more closely connected to a production than indirect employees, e.g. in the administration, for
whom it is potentially harder to measure the workload directly connected to a production. The
sticky machinery costs are calculated according to the same principle as the building costs. Only
maintenance costs are additionally included in the pagatoric sticky machinery costs due to their
frequent occurrence in this category. Finally, the sticky fixed energy costs depend on the regarded
machine or building and their partial usage by other still remaining productions.
C_{pSticky,B} = \sum_i C_{run,i} \cdot \frac{A_{free,i} + A_{partfree,i} \cdot \frac{x_{part,i}}{x_{total,part,i}}}{A_{total,i}}    (1)

C_{iSticky,B} = \sum_i \left( C_{depr,i} + C_{int,i} \right) \cdot \frac{A_{free,i} + A_{partfree,i} \cdot \frac{x_{part,i}}{x_{total,part,i}}}{A_{total,i}}    (2)

C_{pSticky,S} = \sum_j C_{w,j} \cdot \frac{y_j}{y_{total,j}} + \sum_k C_{s,k} \cdot \frac{y_k}{y_{total,k}}    (3)

C_{pSticky,M} = \sum_l \left( C_{run,l} + C_{m,l} \right) \cdot \frac{z_l}{z_{total,l}}    (4)

C_{iSticky,M} = \sum_l \left( C_{depr,l} + C_{int,l} \right) \cdot \frac{z_l}{z_{total,l}}    (5)

C_{pSticky,E} = \sum_i C_{e,i} \cdot \frac{A_{free,i} + A_{partfree,i} \cdot \frac{x_{part,i}}{x_{total,part,i}}}{A_{total,i}} + \sum_l C_{e,l} \cdot \frac{z_l}{z_{total,l}} \cdot u_l    (6)
C_{pSticky,B/S/M/E} = pagatoric sticky building/staff/machine/energy costs
C_{iSticky,B/S/M/E} = imputed sticky building/staff/machine/energy costs
C_{run/depr/int/w/s/m/e,i} = running/depreciation/interest/wage/salary/maintenance/energy costs of object i
A_{free/partfree/total,i} = free/partly free/total area of object i
x/x_{total}, y/y_{total}, z/z_{total} = variable ratios which describe the occupancy of the area/staff/machine/energy
u_l = binary indicator of whether machine l is still running
In this context, the parameters x, y and z are used as variables which can be determined by the
decision maker. For example, a turnover, quantity or time ratio could be used here to describe how
much of an area or staff was occupied by the relocated production capacities and is now (unwillingly)
free. Fig. 2 shows the connection of the formulated costs and the summarised pagatoric, imputed and
total sticky costs.
Pagatoric: C_{pSticky,h} = C_{pSticky,B,h} + C_{pSticky,S,h} + C_{pSticky,M,h} + C_{pSticky,E,h}
Imputed: C_{iSticky,h} = C_{iSticky,B,h} + C_{iSticky,M,h}
Figure 2: Determination of the overall sticky costs C_{Sticky,h}, which arise per product h and per year without further reorganisation, from the building, staff, machinery and fixed energy categories
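To make the aggregation tangible, the following Python sketch evaluates Eqs. (1)-(6) for one product h and sums the results as in Fig. 2. The helper area_ratio and all cost figures and ratios are hypothetical illustrations, not data from the concept.

    # Hypothetical evaluation of Eqs. (1)-(6); all figures are assumed.
    def area_ratio(a_free, a_partfree, x_part, x_total_part, a_total):
        # Share of unwillingly free area: fully free area plus the relocated
        # fraction of the partly free areas, relative to the total area.
        return (a_free + a_partfree * x_part / x_total_part) / a_total

    # One building object i: 400 m2 free, 100 m2 partly free (half of which was
    # used by the relocated capacities), 2000 m2 in total.
    r_b = area_ratio(400, 100, 0.5, 1.0, 2000)

    c_p_sticky_b = 120_000 * r_b                      # Eq. (1): running costs
    c_i_sticky_b = (80_000 + 20_000) * r_b            # Eq. (2): depreciation + interest
    c_p_sticky_s = 300_000 * 0.3 + 150_000 * 0.2      # Eq. (3): wages (y_j) and salaries (y_k)
    c_p_sticky_m = (40_000 + 10_000) * 0.25           # Eq. (4): running + maintenance, ratio z_l
    c_i_sticky_m = (60_000 + 15_000) * 0.25           # Eq. (5): depreciation + interest
    u_l = 1                                           # machine l is still running
    c_p_sticky_e = 30_000 * r_b + 5_000 * 0.25 * u_l  # Eq. (6): fixed energy costs

    c_p_sticky = c_p_sticky_b + c_p_sticky_s + c_p_sticky_m + c_p_sticky_e  # pagatoric total
    c_i_sticky = c_i_sticky_b + c_i_sticky_m                                # imputed total
    print(f"pagatoric {c_p_sticky:,.0f}, imputed {c_i_sticky:,.0f}, "
          f"total {c_p_sticky + c_i_sticky:,.0f}")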
As a last step, the removability of sticky costs is examined (cf. Fig. 3). Three different cost types
could be identified. In a realistic time horizon, non-removable sticky costs are the easiest to include
in further calculation because they do not change over time. Removable sticky costs, on the other
hand, can be distinguished in terms of planned and unplanned removal and thus determined by a
known removal rate or estimated with an assumed removal rate. Both removable types of sticky costs
require a development of removal steps for further investigation and to improve the transparency of
the situation.
Figure 3: Removability of sticky costs (in a realistic time horizon). Sticky costs are either non-removable or removable. Removable sticky costs are subject to either a planned prospective removal with a known removal rate and a defined time of execution (expiry of contracts, retraining of employees, time to conversion) or an unplanned prospective removal of costs that are likely to be removed over time, estimated with an assumed removal rate. For both removable types, removal stages are developed to estimate the possible removal process.
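A minimal sketch of this distinction, assuming a constant removal rate per period for the removable part (the function name, rate and cost values are hypothetical):

    # Non-removable sticky costs stay constant within a realistic time horizon;
    # removable ones decline with a known or estimated removal rate per period.
    def sticky_costs_over_time(c_nonremovable, c_removable, removal_rate, periods):
        costs = []
        remaining = c_removable
        for _ in range(periods):
            costs.append(c_nonremovable + remaining)
            remaining *= (1.0 - removal_rate)
        return costs

    print(sticky_costs_over_time(50_000, 100_000, removal_rate=0.25, periods=5))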
Decision support. Building on the sketched evaluation of sticky costs, the actual decision support
process consists of three steps (cf. Fig. 4). First, the decision maker analyses the change of
the unit costs with and without a relocation of a certain volume of production, including all imputed
and especially sticky cost effects. One of the main reasons for a relocation in the first place is often
the factor cost advantage of a new location. Therefore, it is examined whether a long-term reduction of
the unit costs can be achieved by a relocation decision when sticky costs are taken into account.
Although sticky costs are originally caused by the remaining location, in this context they are assigned
to the new location because they only occur due to the relocation decision. If a long-term cost reduction
cannot be achieved, the decision should be adjusted or rejected.
Figure 4: Methodological structure of decision support. Step I, unit cost calculation at the new location: if no (long-term) reduction of the unit costs can be achieved by the decision, it is rejected or adjusted. Step II, weakening of the remaining location: if the remaining products after the relocation are not profitable enough, the decision is rejected or adjusted. Step III, dynamic NPV calculation: if no positive NPV can be generated in the predetermined duration, the decision is rejected or adjusted; otherwise, the project is executed.
After looking at the situation of the new location, the focus shifts to the remaining location. For that
purpose, the weakening of this location is analysed and evaluated. The main question is whether the
remaining production is sufficiently profitable to keep the complete remaining location open. If so,
the dynamic NPV calculation can be initiated. If the situation is not as desired, an adjustment should
be considered. For example, a relocation of other production capacities could be beneficial if, as a
result, a whole building could be sold or re-planned. By that, fewer sticky costs and a more stable
situation for the remaining location could be achieved. Although a rejection is also possible in this
step, the main purpose is to enhance the transparency of the situation, not to enforce a decision.
In the dynamic NPV calculation, the decision maker assesses whether or not it is possible to
generate a positive NPV within a predetermined duration with the relocation decision. Here, especially
the investment expenses are taken into account and compared to the continuous cost savings over
time. An affirmation triggers the execution process or leads to next possible steps such as the
examination of qualitative factors. A negation rejects the decision as a whole or gives the decision
maker feedback about the cost situation and adjustment possibilities.
In the first process step, all pagatoric and imputed costs, including the sticky costs, can and should
be used to calculate the unit cost at the “new” location after the relocation of production capacities
(cf. Fig. 5). These basic pagatoric and imputed costs could be gathered by e.g. using the mentioned
cost modelling method of the authors [17] or any other comparable method. Although the focus lies
on the long-term perspective, the short-term development of the unit costs can also be examined
for further enhancement of cost transparency. For that reason, the unit costs at certain points in time
of the migration process, e.g. synchronised with changes of the removable sticky costs, can be
evaluated.
Figure 5: Unit cost calculation of the relocated product. The unit costs at the new location are composed as C_h = C_{i,bas,h} + C_{Sticky,nr,h} + C_{Sticky,r,h} + C_{p,h}, where the basic imputed costs C_{i,bas,h} and the non-removable sticky costs C_{Sticky,nr,h} are constant over time, the removable sticky costs C_{Sticky,r,h} decrease over time and C_{p,h} denotes the pagatoric costs. The decision criterion is C_{h,x} - C_{h,y} > 0, where C_{h,x} are the unit costs without and C_{h,y} the unit costs with the relocation.
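The decision criterion of this first step can be sketched as follows; the function and the unit cost values are hypothetical and only mirror the composition shown in Fig. 5.

    # Hypothetical check of step I: proceed only if the unit costs with the
    # relocation (including all sticky cost effects) undercut those without it.
    def unit_costs_new_location(c_i_bas_h, c_sticky_nr_h, c_sticky_r_h, c_p_h):
        return c_i_bas_h + c_sticky_nr_h + c_sticky_r_h + c_p_h

    c_h_x = 42.0                                          # unit costs without the relocation
    c_h_y = unit_costs_new_location(8.0, 3.5, 2.5, 24.0)  # unit costs with the relocation
    if c_h_x - c_h_y > 0:
        print("long-term unit cost reduction -> continue with step II")
    else:
        print("no reduction -> reject or adjust the decision")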
The weakening of the remaining location can be assessed by concentrating on either the product
or the location level. In both, the effects of sticky costs are evaluated (cf. Fig. 6). On the product level,
the sticky costs are separated and attached to the different remaining products. Thereby, the drifting
costs increase and, at some point, might exceed the target costs. A possible target cost gap is the result
which the decision maker needs to close through different sanctions. In the location
consideration, all sticky costs are summarised and their influence on the margin of the total location
is examined. The question is whether or not the resulting costs are still acceptable compared to a
defined objective or the costs of other locations. As described, an affirmation leads to the NPV
calculation.
The last step of the decision support is the dynamic NPV calculation [19]. Here, the amortisation
and the NPV are calculated based on the investment expenses, normally incurred at the beginning, and
the cost savings caused by the decision over time. The sticky costs cannot be included here due to
their imputed cost character, as they are not based on actual, new payment transactions. This step
concludes the concept and further enhances the transparency of the overall decision situation,
especially including dynamic cost effects.
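A minimal sketch of such a dynamic NPV calculation, assuming a one-off investment at the beginning and constant pagatoric cost savings per period (all figures are hypothetical):

    # Dynamic NPV: initial investment versus discounted savings over time.
    # Imputed sticky costs are excluded, as they cause no new payment transactions.
    def npv(investment, savings_per_period, discount_rate, periods):
        return -investment + sum(savings_per_period / (1.0 + discount_rate) ** t
                                 for t in range(1, periods + 1))

    value = npv(investment=1_000_000, savings_per_period=280_000,
                discount_rate=0.08, periods=5)
    print(f"NPV after 5 periods: {value:,.0f}")  # positive -> execute the project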
Figure 6: Evaluation of the possible weakened location. The non-removable and removable sticky costs (C_{Sticky,nr,h}, C_{Sticky,r,h}) are distributed to the remaining products resp. to the location. On the product level, the drifting costs are compared with the target costs; a target cost gap leads to the initiation of sanctions, otherwise the investment calculation follows. On the location level, the margins before and after the relocation are compared to each other and to other locations; if the result is acceptable, the NPV calculation follows.
Summary
The developed evaluation concept of relocation decisions includes a detailed analysis of sticky
costs and a decision support separated into three parts. First, a unit cost calculation should be
conducted. Second, the profitability of the remaining location is evaluated. Third, a dynamic NPV
calculation concludes the evaluation.
The concept with its two phases of cost analysis and decision support including sticky costs is
highly adaptable to many different business situations without exceeding a certain complexity. It
enhances the transparency of the cost situation. By the integration of both static and dynamic
calculations, the concept is also able to consider imputed and dynamic effects. Furthermore, the
tripartite decision support enables the decision maker to reflect on the cost consequences of a decision
by looking at both the new and the remaining location. In addition, the sticky costs are analysed and
quantified in the context of production network decisions, and their removal is included.
For the final confirmation of practicability, a validation of the concept by using a practical example
is necessary. Additionally, although the consideration of cost is one of the main parts of a decision,
qualitative factors, such as cultural aspects, infrastructure etc., should also be considered for a
decision. Therefore, these could be included in further investigations. In this sense, the paper can be
used as a basis for further research in the relevant fields of study to ultimately develop a holistic decision
support for production network decisions which not only concentrates on cost effects but also includes
qualitative characteristics.
Acknowledgements
The authors would like to thank the German Research Foundation DFG for the kind support within
the Cluster of Excellence "Integrative Production Technology for High-Wage Countries".
References
[1] R. Hayes, G. Pisano, D. Upton, S. Wheelwright, Operations, Strategy, and Technology: Pursuing
the Competitive Edge, Wiley, Hoboken, 2005.
[2] McKinsey & Company, How to Go Global – Designing and Implementing Global Production
Networks, PTW, 2004.
[3] C. Zanker, S. Kinkel, S. Maloča, Globale Produktion von einer starken Heimatbasis aus:
Verlagerungsaktivitäten deutscher Unternehmen auf dem Tiefstand, Modernisierung der Produktion
63 (2013), Fraunhofer ISI.
[4] R. Schilling, Manufacturing network development in fast-moving consumer goods industries, in:
Schriftenreihe Logistik-Management in Forschung und Praxis 42, Kovač, Hamburg, 2012.
[5] B. Vahdani, M. Mohammadi, A bi-objective interval-stochastic robust optimization model for
designing closed loop supply chain networks with multi-priority queuing system, International
Journal of Production Economics 170/A/1 (2015) 67-87.
[6] D. Mourtzis, M. Doukas, F. Psarommatis, A multi-criteria evaluation of centralized and
decentralized production networks in a highly customer-driven environment, CIRP Annals – Manufacturing Technology 61/1 (2012) 427-430.
[7] J. Yang, H. Lee, An AHP decision model for facility location selection, Facilities 15 (1997) 241-254.
[8] G. Lanza, R. Moser, Strategic Planning of Global Changeable Production Networks, Procedia
CIRP 3/1 (2012) 257-262.
[9] M. Buhmann, M. Schön, Dynamische Standortbewertung - Denken in Szenarien und Optionen,
in: S. Kinkel (Ed.), Erfolgsfaktor Standortplanung, In- und ausländische Standorte richtig bewerten,
2nd ed., Springer, Berlin, 2009, pp. 279–299.
[10] S. Kinkel, C. Zanker, G. Lay, S. Maloc̆a, P. Seydel, Globale Produktionsstrategien in der
Automobilzulieferindustrie: Erfolgsmuster und zukunftsorientierte Methoden zur
Standortbewertung, Springer, Berlin, New York, 2007.
[11] P. Christodoulou, D. Fleet, P. Hanson, R. Phaal, D. Probert, Y. Shi, Making the right things in
the right places, University of Cambridge Institute for Manufacturing, Cambridge, 2007.
[12] M.C. Anderson, R.D. Banker, S.N. Janakiraman, Are Selling, General, and Administrative Costs
"Sticky"?, Journal of Accounting Research 41/1 (2003) 47–63.
[13] R. Balakrishnan, E. Labro, N.S. Soderstrom, Cost Structure and Sticky Costs, Journal of
Management Accounting Research 26/2 (2014) 91–116.
[14] D. Baumgarten, The cost stickiness phenomenon: Causes, characteristics, and implications for
fundamental analysis and financial analysts' forecasts, Gabler Verlag, Wiesbaden, 2012.
[15] P. Beltz, Analyse des Kostenverhaltens bei zurückgehender Beschäftigung in Unternehmen:
Kostentheoretische Fundierung und empirische Untersuchung der Kostenremanenz, Springer
Fachmedien Wiesbaden, Wiesbaden, 2014.
[16] C. Reuter, J.P. Prote, T. Schmitz, Cost modelling approach for the source specific evaluation of
alternative manufacturing networks, Proceedings APMS 2016 Advances in Production Management
Systems: Production Management Initiatives for a Sustainable World, September 3-7, 2016 Iguassu
Falls, Brazil.
[17] C. Reuter, J.P. Prote, T. Schmitz, A top-down/bottom-up approach for modeling costs of a
manufacturing network, Proceedings of the 23rd EurOMA Conference 2016, 17th-22nd June 2016,
Trondheim, Norway.
[18] W. Kilger, J. Pampel, K. Vikas, Flexible Plankostenrechnung und Deckungsbeitragsrechnung,
Wiesbaden, 2007.
[19] R.H. Garrison, E.W. Noreen, P.C. Brewer, Managerial Accounting for Managers, McGraw-Hill
Irwin, Boston, 2014.
The effect of different levels of information exchange on the
performance of resource sharing production networks
Marit Hoff-Hoffmeyer-Zlotnik1,a, Daniel Sommerfeld1,b and Michael Freitag1,2,c*
1 University of Bremen, Faculty of Production Engineering, Badgasteiner Straße 1, 28359 Bremen, Germany
2 BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen, Hochschulring 20, 28359 Bremen, Germany
a hhz@biba.uni-bremen.de, b som@biba.uni-bremen.de, c fre@biba.uni-bremen.de
Keywords: Distributed manufacturing, Simulation, Resource sharing
Abstract. Resource sharing is becoming increasingly attractive as it opens up opportunities for
savings on investment costs and higher utilization of production resources. In order to coordinate a
shared usage of production resources, it is necessary that companies exchange certain information.
Companies, however, are generally sceptical about information exchange, and research has not yet
revealed how much information exchange is actually needed for efficient resource sharing. This
paper investigates exactly this issue by examining the performance of a resource sharing production
network under different levels of information exchange. Furthermore, different dynamics of the
network's environment are considered and a dynamics key figure is introduced for comparison. The
investigations are carried out by means of a discrete event simulation and reveal that sharing basic
information already has a large impact. A further increase in information sharing can, but need not,
contribute to a better performance of the resource sharing network. The benefit also varies between
stakeholders. In addition, the value of information sharing strongly depends on the
dynamics of the system's environment.
The necessity of resource and information exchange
A shared usage of production machines among different companies (resource sharing) is
becoming increasingly attractive in times of increasing competition and decreasing product
lifecycles [1]. It opens up opportunities for savings on investment costs as well as higher utilization
of a company’s resources through subcontracting. Industry 4.0 allows for an easy and user-friendly
implementation of the resource sharing concept as machine data can be made available in real time
to any device within a network [2]. Therefore, information such as availability of free capacities can
be published automatically on sharing platforms where users in need can book these capacities.
A main impediment to putting resource sharing into practice, however, is the unreadiness of
companies to exchange information. Sharing a company's information with others usually poses a
threat as it allows for conclusions on its economic condition. Nevertheless, it is necessary to
exchange a certain amount of information so that resource capacities can be allocated effectively.
Using the exchanged information would be the foundation of new methodologies for the coordination
of resource sharing [3]. A first step to increase the readiness of companies to participate in
information sharing is to examine how much and which information is actually needed in order to
improve the performance of resource sharing systems. Additionally, the benefits of information
sharing should be quantified to build the basis of motivation for an increase in information
exchange [4]. The paper at hand contributes to this very topic.
Previous work on information exchange in the context of resource sharing
The combination of resource sharing and information exchange is a new field of study. So far,
only Freitag et al. [3] have explicitly studied this issue. They set up a scenario with resource
requesters, shared production resources and different levels of information exchange between both.
They examined the performance of the system in terms of inventory level and throughput times and
found that initially an increase in information exchange leads to an increase in performance. From a
certain level onwards, however, a further increase in information exchange can result in a
decreasing performance of the resource sharing system.
More studies on information exchange have been conducted in the related field of supply chain
management. Huang et al. [4] review research on the possible impacts of sharing production
information. They establish guidelines for supply chain planning and the usage of information
sharing in order to reach lower prediction failures and logistical costs. An example is better
ordering decisions by manufacturers when they know the capacity of each supplier. This idea builds
a fundamental part of our study. Ryu et al. [5] have conducted another review in which they
especially focus on demand information sharing. The studies measure the performance of the supply
chains in terms of inventory levels and cost savings. In particular, Yu et al. [6] find that
information sharing can achieve a Pareto improvement in the performance of the entire chain.
Moreover, in contrast to Freitag et al. [3], the study finds that the performance of supply chains
improves not only up to a certain level of information exchange but across all considered levels of
information exchange. Huang et al. [7] highlight that information sharing can improve the
efficiency of inventory holding through better demand predictions. Zhao et al. [8] measure the value
of information sharing and find that it varies under different circumstances, for instance for different
demand patterns. They research forecasting errors and their impact on the performance of supply
chains for retailers and suppliers, identify possible disadvantages for single partners, for instance
the retailers, under higher information exchange, and see further research potential in case of more
dynamic demand, additional types of information or non-constant capacity in combination with the
impact of information sharing. These ideas are incorporated in our study.
Research questions and case study description
Freitag et al. [3] found interesting and counterintuitive results in their simulation study.
However, they assumed environmental conditions that involve stochasticity. Therefore, it is not
clear whether their results actually arise from the different levels of information exchange or
whether the stochastic environment biased the system’s behaviour. In order to examine this issue
more profoundly, the paper at hand will investigate a similar scenario but assume strictly
deterministic behaviour of the system and its environment. Furthermore, it will extend the study
with aspects already researched and deemed interesting in the field of supply chain management,
namely a variation of the order dynamics [8]. The paper will investigate how the response of the
resource sharing network changes depending on the order dynamics and whether this environmental
condition also influences the value of information sharing in resource sharing production networks.
Like Freitag et al. [3], this paper relates its scenario to the steel industry. In the steel industry,
resource sharing is already in practice today, however, only among companies of the same corporate
group. This is owed to scepticism about exchanging information with potential competitors. It
would nevertheless be very favourable to extend these first approaches to resource sharing, as the steel
industry involves very cost-intensive and often overdimensioned resources. Moreover, other
branches of industry are also expected to benefit largely from the concept of resource sharing.
Simulation Study
A resource sharing scenario is designed and implemented in terms of a discrete event simulation.
Meaningful key figures are chosen and used to evaluate (i) the behaviour of the system for different
levels of information exchange, (ii) the extent to which the dynamics are transferred throughout the
resource sharing system and (iii) the behaviour of the system for different order dynamics.
Scenario Description. The resource sharing network consists of 3 steel companies (resource
requesters) that share 2 galvanizing lines (shared resources; Fig. 1). The scenario is set up
symmetrically such that two companies (A and C) have a galvanizing line in close vicinity and by
default deliver their coils to galvanizing line 1 (GL1) and GL2, respectively. Company B is located
equally distant from both galvanizing lines and therefore has no geographic preference for one of the
GLs. However, it always delivers its steel coils to the GL with fewer coils already planned in for
processing. In cases where the number of planned-in coils is equal for both GLs, the affected
deliveries are sent to GL1 and GL2 in an alternating fashion, starting with GL1.
Companies A and C can make exceptions from their default delivery to the closest GL in case their
inventory for outgoing goods exceeds a certain level (100 coils) and the alternative GL has a lower
number of planned-in coils than their default one. In this case, company A or C will deliver a batch
of 100 coils by ship to their alternative GL.
The default delivery to the nearby GLs happens by truck. Company B always delivers by train.
Trucks and trains run at a fixed frequency. Trucks are adjusted exactly to the average production
frequency of the companies; trains are adjusted to 1.5 times the average production frequency. A
truck can transport 1 coil, a train always transports 36 coils. Ships only run when required. The
transportation times are 1 hour for trucks, 24h for trains and 3 days for ships.
Figure 1: Resource sharing scenario consisting of 3 resource requesters (steel companies) and
2 shared resources (galvanizing lines). For the different modes of transportation, the
transportation capacities and transportation durations are indicated.
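The delivery rules described above can be sketched in a few lines of Python; the function names, the tie_toggle mechanism and the dictionary layout are illustrative assumptions.

    # Company B routes each delivery to the GL with fewer planned-in coils;
    # ties alternate between GL1 and GL2, starting with GL1.
    def route_company_b(planned_in, tie_toggle):
        if planned_in["GL1"] < planned_in["GL2"]:
            return "GL1", tie_toggle
        if planned_in["GL2"] < planned_in["GL1"]:
            return "GL2", tie_toggle
        return ("GL1" if tie_toggle else "GL2"), not tie_toggle

    # Companies A and C ship a 100-coil batch to their alternative GL only if
    # their outgoing inventory exceeds 100 coils and the alternative GL has
    # fewer planned-in coils than the default one.
    def ship_batch_from_a_or_c(outgoing_inventory, planned_default, planned_alternative):
        return outgoing_inventory > 100 and planned_alternative < planned_default

    gl, toggle = route_company_b({"GL1": 180, "GL2": 180}, tie_toggle=True)
    print(gl)  # GL1 on the first tie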
In order to model the order dynamics of a production system, it is common to introduce seasonal
dynamics in the form of a sine wave [9]. Thus, the production of all three companies is modelled as a
sine wave with an average production rate of 200 coils per day. The default amplitude is S = 0.2
and the default periodicity accounts for P = 60 days. The three companies are phase-shifted to one
another by φ = 120° (φA = 0°, φB = 120°, φC = 240°).
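Under these parameters, the daily production of each company could be modelled as below; the exact functional form is an assumption consistent with the stated average rate, amplitude, periodicity and phase shifts.

    import math

    # Seasonal order dynamics: a sine wave around 200 coils per day with
    # amplitude S, periodicity P (days) and company-specific phase shift phi.
    def production_rate(t_days, rate_avg=200, S=0.2, P=60, phi_deg=0.0):
        return rate_avg * (1.0 + S * math.sin(2.0 * math.pi * t_days / P
                                              + math.radians(phi_deg)))

    for name, phi in [("A", 0), ("B", 120), ("C", 240)]:
        print(name, round(production_rate(t_days=15, phi_deg=phi), 1))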
The capacities of the GLs are initially set to cover the overall production of the three companies.
Throughout the simulation, the capacities are regularly adjusted to the amount of upcoming orders.
Upcoming orders can represent (i) coils that are located in the inventory for incoming goods in the
GL, (ii) coils that are en-route and (iii) coils that are in the inventory for outgoing goods in company
A or C. The number of upcoming orders is recorded once a day and averaged after one week. On
this basis, the capacity is updated according to the equation by Freitag et al. [3]
c_a = c_p + k_c (I - I_p)    (1)

where c_p is the expected daily amount of coils per GL and set to 0.8 here. The capacity adjustment
parameter k_c accounts for 0.036 day^-1 in order to guarantee a smooth behaviour in case of sudden
disturbances, and I is the average number of upcoming orders. The planned number of upcoming
orders, Ip, accounts for 20 days of work, i.e. 6000 coils. The update is implemented with a 1-week
delay in order to make the capacity adjustments realistic in terms of work plan adaptations.
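A literal sketch of this update rule with the stated parameters (k_c = 0.036 day^-1, I_p = 6000 coils, c_p = 0.8, in the units of the original scenario); the weekly averaging is made explicit and the daily records are hypothetical.

    # Capacity adjustment of Eq. (1): the upcoming orders are recorded once a
    # day, averaged over one week, and the update takes effect one week later.
    def adjusted_capacity(c_p, daily_upcoming_orders, k_c=0.036, I_p=6000):
        I = sum(daily_upcoming_orders) / len(daily_upcoming_orders)
        return c_p + k_c * (I - I_p)

    week = [6100, 6200, 6150, 6050, 6000, 5950, 6080]  # hypothetical daily records
    print(round(adjusted_capacity(c_p=0.8, daily_upcoming_orders=week), 2))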
All three companies have inventories for outgoing goods. Both galvanizing lines have
inventories for incoming goods. All inventories are sufficiently dimensioned.
Levels of information exchange. Simulations with 6 different levels of information exchange
(L0-L5; Table 1) are investigated. The level of available information varies for the resource
requesters as well as for the shared resources:
L0: Company A and C only deliver to their default GL, company B delivers to GL1 and GL2 in
an alternating fashion. The capacity adjustment is inactive. The capacities of GL1 and GL2 are set
to the sum of the average production of company A, B and C.
L1: The companies decide where to send their coils based on the amount of coils already in
queue in the inventory of GL1 and GL2. Capacity adjustments are active and only such coils that are
already located in the inventory of the GLs are taken into consideration as upcoming orders.
L2: In addition to L1, the companies also consider coils en-route for their shipping decision.
L3: In addition to L2, the GLs also count coils en-route as upcoming orders for their capacity
adjustments.
L4: In addition to L3, the companies also plan in the capacity adjustments of the GLs that are
planned in but not implemented yet.
L5: In addition to L4, GL1 and GL2 also count coils in the inventories of company A and C,
respectively, as upcoming orders for their capacity adjustments.
Table 1: Different levels of information exchange (L0-L5). The level of available information
(inf.) varies for the resource requesters (RR) as well as for the shared resources (SR).

Level | Inf. available to RR                      | Inf. for capacity adjustments in SR
L0    | None                                      | None
L1    | Inventory of SR                           | Inventory of SR
L2    | L1 + orders en-route                      | L1
L3    | L2                                        | L2 + orders en-route
L4    | L3 + planned capacity adjustments of SR   | L3
L5    | L4                                        | L4 + inventories of associated RR
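For implementation purposes, Table 1 can be encoded as a set of boolean flags per level; the class and attribute names below are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InfoLevel:
        rr_sees_sr_inventory: bool       # L1 and above
        rr_sees_orders_en_route: bool    # L2 and above
        sr_counts_en_route: bool         # L3 and above
        rr_sees_planned_capacity: bool   # L4 and above
        sr_counts_rr_inventory: bool     # L5 only
        capacity_adjustment: bool        # inactive only at L0

    LEVELS = {f"L{i}": InfoLevel(i >= 1, i >= 2, i >= 3, i >= 4, i >= 5, i >= 1)
              for i in range(6)}
    print(LEVELS["L3"])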
Variation of Order Dynamics. In order to investigate the effect of different order dynamics on
the resource sharing system, the amplitude and periodicity of the coil production in the companies
are varied and account for S = [0.1, 0.2, 0.4] and P = [30, 60, 120] days. While a variation of the
amplitude simulates different strengths of dynamics, a variation of the periodicity simulates different
time scales of dynamics.
Simulation. The scenario is implemented in Tecnomatix Plant Simulation 13 and simulated for a
time period of 2000 days. In order to avoid influences of the transient phase, only data from day 200
onwards are analysed. In total, all combinations of amplitude and periodicity are simulated for all
levels of information exchange.
Data Analysis. First of all, the paper analyses the behaviour of the system with S = 0.2 and P =
60 days for each level of information exchange. To this end, the paper evaluates (i) the inventory
levels in GL1 and GL2, (ii) the capacities of GL1 and GL2, (iii) the overall lead times for coils of
type A, B and C as well as their composition, i.e. waiting time in the company's warehouse,
transportation time and waiting time in the galvanizing line’s warehouse and (iv) how many coils of
each company are directed to GL1 and GL2, respectively. It is analysed how these values change
depending on the level of information exchange.
Furthermore, the paper analyses the time courses of the upcoming orders in GL1 and GL2 for
each level of information exchange. This is to investigate to which extent the dynamics within the
coil production propagate to the level of capacity planning within the galvanizing lines.
Last, the paper compares the performance of the system for the different order dynamics, i.e.
combinations of S and P. The performance is measured in terms of standard deviation – i.e. amount
of temporal variation – of inventory levels and capacities of GL1 and GL2, which allow for
conclusions on the planning security (inventory levels) and effort in terms of work plan adaptations
(capacities). In addition, the mean lead times of A, B and C type coils are analysed and the
dynamics key figure D = S·P is introduced for better comparison of the different dynamics.
Results and Discussion
Variation of information exchange.
Inventory Levels. The inventory levels of both GLs are in the range of 6000 coils (Fig. 2a, top)
which corresponds to the planned number of upcoming orders (Ip). In case of L1-L2, the value of
6000 coils is reached exactly because only those coils that are already present in the inventories of
the GLs are counted as upcoming orders. For L3-L5 also coils that are en-route (and in the
warehouses for outgoing goods of the associated companies) are considered as upcoming orders.
Therefore, the number of coils already physically present in the GLs is slightly below 6000. In L0,
the capacity adjustment is not active and can thus not balance the inventory levels.
The standard deviation of the inventory levels generally decreases with increasing information
exchange (Fig. 2a, bottom). Only in the case of L4 for GL1 and L5 for GL2 does the standard deviation
slightly increase compared to the previous level of information exchange. A decrease in standard
deviation and thus variation within the inventory levels is beneficial for the GL operators as less
variation in inventory levels contributes to their planning security.
Capacity Adjustments. The average capacity is in the range of 300 coils per day (Fig. 2b, top).
Its standard deviation is relatively high for L1-L2, reaches its minimum for L3 and then slightly
increases for L4-L5 (Fig. 2b, bottom). From L4 on, the resource requesters plan in the capacity
adjustments of the shared resources that are planned in but not implemented yet. The fact that now
both groups of stakeholders (resource requesters and shared resources) are aiming to adapt to each
other is a disadvantage for the shared resources because the higher variance in capacity implies that
a larger effort is necessary in order to adapt the work plans of the employees in the GLs.
Lead Times. The lead times lie in the range of 20 days and vary with the level of information
exchange (Fig. 2c). For coils of company A and C the lead times are decreasing with increasing
information exchange. The largest decrease occurs from L0 to L1. The reason is that there are no
ship transports involved in L0 and this increases the waiting time at the exit of the warehouses. For
L1-L5, the decrease in lead time is due to a decrease in inventory levels and thus shorter waiting
times within the GLs. For coils of type B the lead time first of all increases. This is because for L0,
company B has an advantage over companies A and C since transportation by train offers a capacity
of 1.5 times the average production, while transportation by truck is adjusted to exactly the average
production. Once companies A and C also deliver by ship, the coils that used to accumulate at the
exits of the warehouses in L0 now accumulate and cause higher waiting times at the GLs. For L3-L5, the lead times decrease again due to the smaller inventory levels in the GLs.
In conclusion, the effects of information sharing can differ between stakeholders. While
company B bears disadvantages from information sharing, companies A and C as well as all three
companies combined benefit in terms of lead times.
Shipping Decisions. The level of information exchange does not affect the percentage of how
many coils of companies A, B and C are directed towards GL1 and GL2 (Fig. 2d).
Discussion. As one considers all results together, one can see that initially an increase in
information exchange clearly improves the situation in the shared resources (Fig. 2a, b, bottom) and
in sum also for the resource requesters (Fig. 2c). Up to L3 the dynamics of the inventory levels as
well as those of the capacities decrease. Especially the latter is beneficial as less effort is necessary
in order to adapt the work plans of the employees in the galvanizing lines. At the same time, the
lead times of all three resource requesters combined decrease, which makes the production system
more efficient. A further increase in information exchange, however, does not immediately result in
additional benefits. The increase from L3 to L4 enlarges the variation of the capacities but does not
yet lead to lower lead times for the resource requesters. Only with another increase of information
exchange an improvement in lead times is achieved. Thus, the value of an increase in information
exchange depends on the stakeholder as well as on the information that is added.
Another insight is that the inclusion of upstream information for upcoming orders makes it
possible for the shared resources to lower the level of safety stock in their own inventory (Fig. 2a,
top). The reduction in safety stocks reduces warehousing costs as warehouses can be designed
smaller. This also allows for faster processing times as smaller inventory levels mean shorter
waiting times.
Figure 2: Mean and standard deviation (std.) of inventory levels (a) and capacities (b) for GL1
and GL2. Mean lead times (c) and shipping decisions (d) for A-, B- and C-type coils.
Propagation of order dynamics.
Results. The upcoming orders vary in the form of quasi-periodic behaviour (Fig. 3). For L0-L1
the amplitude accounts for app. 450 coils, for L2 for app. 250 coils and for L3-L5 for app. 200 coils.
In all cases, the periodic length is about 60 days, according to the order dynamics. For L0, the phase
shift is determined by the phase shift of the order dynamics of company B. Since for companies A
and C the amount of trucks is adjusted to the average production frequency, the variation of order
dynamics cannot propagate towards the galvanizing lines. Also for L1-L5, the phase shift in
upcoming orders is mostly determined by the order dynamics of company B. In addition, the ship
transports add noise to the time courses. For L1, L3, L4 and L5, in GL1 the upcoming orders are
noisier on the increase, in GL2 noise levels are higher on the decrease. This is due to the timing of
the ship transports that the galvanizing lines receive from companies A and C. While company C,
which delivers to GL1, has its peak of production just before company B, the peak of production of
company A, which delivers to GL2 occurs after the peak of company B. This explains why GL1 is
affected on the increase and GL2 is affected on the decrease. For L4-L5, the time courses deviate
from the mean for larger periods of time. This explains the increase in standard deviation of the
capacity in Fig. 2b. Whether this happens on a regular basis cannot be deduced from the simulated
time span. For L5 the dynamics within upcoming orders are phase shifted by 180° as now the
inventory level of the warehouses of companies A and C are also considered. Therefore, GL1 is
affected by train deliveries from company B, ship deliveries from company C and by inventory
levels of company A. The latter shifts the peaks in upcoming orders of GL1 to a later time point.
The opposite holds for GL2. The time courses of the capacities (not shown) reflect those of the
upcoming orders, but then time shifted by app. 2 weeks.
Discussion. The order volume dynamics within the coil production clearly propagate to the level
of the galvanizing lines. However, how strong the influence is depends more on available means of
transportation than on the level of information exchange. Only when the inventory levels of the
companies are directly considered (L5), all three companies affect the dynamics within the
galvanizing lines to a similar extent.
Figure 3: Upcoming orders for GL1 and GL2 and all levels of information exchange. For week
100-125 both curves are shown, then only that for GL1 and then only for GL2.
Variation of order dynamics.
Results and Discussion. The performance under different order dynamics D is most widespread
for levels of low information exchange and tends to converge for levels of higher information
exchange (Fig. 4). This especially holds for the standard deviation of inventory levels (Fig. 4a) and
the lead times of A-type coils (Fig. 4c). The standard deviation of capacities (Fig. 4b) shows a similar
tendency; however, the data are most converged for L3 and then diverge again. The lead times of
B-type coils (Fig. 4d) are most diverse for L0 and least diverse for L1-L2. The results for GL2 and C-type
coils (not shown) are similar to those of GL1 and coils of type A, respectively. In addition, it holds
that in cases of low order dynamics a variation of information exchange has the least effect, while it has
the largest effect for high order dynamics. This means that for production networks that experience
environmental dynamics which are strong and act on long time scales, an increase in information
exchange is more beneficial than for production networks that experience environmental dynamics
that are weak and act on shorter time scales. In conclusion, the value of information exchange
depends on the environmental dynamics of a system.
Conclusion and Outlook
Overall, the authors find that an increase in information exchange most often, but not always,
yields benefits to a resource sharing system. This depends on the situation of the single stakeholder
as well as on the specific information that is added to the information sharing system. These
findings are in line with Freitag et al. [3]. In addition, this paper shows that the propagation of order
dynamics from the level of resource requesters to the level of shared resources depends on the mode
of transportation rather than on the level of information exchange. Furthermore, in accordance with
Zhao et al. [8] the authors show that the value of information sharing strongly depends on
environmental conditions of the resource sharing system. For future work, the authors are planning
to analyse the propagation of order dynamics in more detail and further investigate the performance
of the resource sharing system under different environmental conditions.
Figure 4: Standard deviation (std.) of inventory level (a) and capacity (b) in GL1 and mean lead
time of A- (c) and B-type coils (d) over the levels of information exchange (L0-L5).
References
[1] J.R. Duflou, J.W. Sutherland, D. Dornfeld, C. Herrmann, J. Jeswiet, S. Kara, M. Hauschild,
K. Kellens, Towards energy and resource efficient manufacturing: A processes and systems
approach, CIRP Annals – Manufac. Technol. 61 (2012) 587-609.
[2] N. Jazdi, Cyber physical systems in the context of Industry 4.0, IEEE International
Conference on Automation, Quality and Testing, Robotics, Cluj-Napoca, 2014, 3 pages.
[3] M. Freitag, T. Becker, N.A. Duffie, Dynamics of resource sharing in production networks,
CIRP Annals – Manufac. Technol. 64 (2015) 435-438.
[4] G.Q. Huang, J.S.K. Lau, K.L. Mak, The impacts of sharing production information on
supply chain dynamics: A review of the literature, Int. J. Prod. Res. 41 (2003) 1483-1517.
[5] S-J. Ryu, T. Tsukishima, H. Onari, A study on evaluation of demand information-sharing
methods in supply chain, Int. J. Production Economics 120 (2009) 162-175.
[6] Z. Yu, H. Yan, T.C.E. Cheng, Benefits of information sharing with supply chain
partnerships, Ind. Manag. Data Syst. 101 (2001) 114-121.
[7] Y-S. Huang, J-S. Hung, J-W. Ho, A study on information sharing for supply chains with
multiple suppliers, Comput. Ind. Eng. 104 (2017) 114-123.
[8] X. Zhao, J. Xie, Forecasting errors and the value of information sharing in a supply chain,
Int. J. Prod. Res. 40 (2002) 311-335.
[9] B. Scholz-Reiter, M. Freitag, C. de Beer, T. Jagalski, Modelling and simulation of a
pheromone based shop floor control, Proc. CIRP-DET (2006) 1-7.
Evaluation of Planning and Control Methods for the Design of Adaptive
PPC Systems
Susanne Schukraft1,a, Marius Veigt1,b and Michael Freitag1,2,c,*
1 BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen, Hochschulring 20, 28359 Bremen, Germany
2 University of Bremen, Faculty of Production Engineering, Badgasteiner Straße 1, 28359 Bremen, Germany
a skf@biba.uni-bremen.de, b vei@biba.uni-bremen.de, c fre@biba.uni-bremen.de
Keywords: Production planning, Scheduling, Autonomous control
Abstract. Due to high market volatility and individual customer demands, high flexibility and
reactivity are important requirements for production planning and control. Producing companies
have to cope with dynamically changing production situations. Examples are changing predictability
of customer demands or differences in order characteristics regarding batch sizes and processing
times. Thereby, the logistics efficiency of planning and control systems depends on the extent to
which the applied methods support the specific characteristics of the current production situation. A
possibility to achieve consistently high logistics efficiency despite changing requirements is the
selection of planning and control methods depending on the production situation. This paper
provides a simulation-based analysis of the logistics performance of different planning and control
methods for production situations with different levels of complexity. The results show
the high potential of a situation-dependent method application and thus of the design of adaptive
planning and control systems.
Introduction
Nowadays, producing companies have to cope with increasing dynamics and complexity. High
market volatility, individual customer demands and various other factors pose high challenges to
production planning and control (PPC) [1]. The efficiency of PPC depends on the applied methods’
ability to take into account the specific requirements of the production scenario. Consequently, PPC
methods are often designed for specific production situations. However, in highly volatile production
environments, there are permanent changes of the initial situation. Examples are changing order
characteristics considering type and quantity of customer orders or fluctuating order arrivals due to
the unpredictability of customer demands. These influences lead to changing requirements for PPC and
thus require the adaptation of PPC methods to the modified production situation in order to sustain a
high achievement of logistics objectives. However, in practical application, commonly used enterprise
resource planning (ERP) systems are generally based on a centralised and deterministic planning
approach [2]. These systems normally provide detailed production schedules in advance, which
enable high efficiency for the assumed situation. These systems, however, show deficits in adapting the
applied planning methods to changes in the production environment.
In this context, this paper provides a simulation-based comparison of different planning and
control methods considering the methods' logistics efficiency for production situations with
different levels of complexity. The defined production situations consider different characteristics of
order-specific criteria as well as dynamic influences. The simulation study is based on a job shop
environment with real data from a medium-sized company in the aviation industry. The results show the
potential of an adaptive PPC system which is able to flexibly vary the applied methods due to
changes of the initial production situation.
Characterisation of Production Situations
Real production systems are systems of high complexity which can be described by a wide range
of criteria. In literature, there are several approaches for the description and classification of
production systems. These approaches systemise production systems either in the context of
production scheduling, e.g. [3] or production control, e.g. [4]. Furthermore, other approaches focus
on the complexity of production systems in general, e.g. [5] or deal with the classification of
disturbances, e.g. [6]. Generally, basic categories of classifications in the field of PPC contain
criteria to describe the machine environment and the characteristics of production orders.
The machine environment of production systems can be basically differentiated into single
machine, flow shop, flexible flow shop, job shop and open shop environments. The arrangement of
machines on the shop floor also basically determines the material flow, leading to flow patterns with
different levels of complexity (e.g. material flow with or without backflows). Further criteria to
specify the machine environment are the number of machines and workshops, processing and setup
times as well as the description of related buffers for the temporary storage of unfinished products
[7]. The characterisation of production orders contains various aspects. Examples are the definition
of order availability, the number and type of production orders as well as demanded batch sizes.
Furthermore, PPC has to consider possible restrictions such as sequence restrictions or product
dependent process routes [8].
In practical application, the machine environment will usually remain constant for a longer
period of time. There might be a temporary unavailability of individual machines, e.g., due to
machine breakdowns or maintenance, but the quantity, types and arrangement of machines will
only change if basic adaptations on the shop floor, e.g., the purchase of a new machine, are carried
out. Contrary to the machine environment, the production orders' characteristics are mainly influenced
by the product mix and thus depend on customer demands and requirements. Especially in
highly volatile environments, these characteristics will therefore undergo permanent changes.
Besides these basic elements of production systems, the consideration of dynamic influences is
also an important issue for the selection of adequate PPC methods. These influences can basically
be differentiated into internal and external ones [6]. Internal influences result from internal failures
of the production system such as machine breakdowns or process time deviations due to incorrect
information in working schedules. External influences are often caused by cooperating companies
and result, e.g., in delayed material supplies, rush orders or demand changes due to changing
customer requirements.
Production Planning and Control Methods
The major tasks of PPC are the planning of the production programme, the production
requirements planning, the planning of in-house production and the planning of external
procurements [9]. The production programme contains the quantities which have to be produced for
each product type and planning period. The production requirements planning subsequently derives
the necessary material and resource demands. In case of in-house production, production planning
includes the scheduling and sequencing of production orders. The main task of production control is
the implementation of the planning results and the achievement of the planned objectives despite
occurring disturbances [10].
On the side of production planning, the focus of this paper is on scheduling in the context of in-house production planning. Generally, scheduling methods provide detailed production schedules
including a spatiotemporal allocation of production orders to production resources [11]. In
literature, there is a wide range of scheduling approaches which can be classified according to
several criteria. Basically, it can be differentiated into optimisation and heuristic solution
approaches. Optimisation methods are used for the calculation of production schedules that
optimally fulfil a predefined target function. However, the calculation of optimal schedules goes
along with high computational effort and thus in practical application mainly heuristic solution
approaches are used for production scheduling [12]. The resulting production schedules offer a
high planning accuracy but are of limited suitability in dynamic production environments.
Therefore, there are different strategies to cope with occurring dynamic influences. A common
approach in dynamic scheduling is partial or complete rescheduling in case of deviations from the
predefined schedule. In contrast, robust scheduling approaches try to avoid rescheduling by
generating production schedules which are insensitive to disturbances. Finally, reactive approaches
do not provide a detailed production schedule at all but control the material flow locally via
priority rules [13].
An alternative to the described dynamic scheduling approaches is the application of autonomous
control methods. These methods are based on decentralised decision making by individual logistic
objects within a heterarchical organisation structure. Logistic objects are able to interact with each
other, to exchange information about the current system state and to make decisions for themselves
based on the gathered information [14]. Therefore, autonomous control methods are able to
recognise changing conditions and to incorporate occurring influences immediately into their
decision making. Their main disadvantage is a lack of planning accuracy, e.g. regarding workstation
assignments or the sequencing of production orders. A simulation-based comparison showed that
central planning methods achieve a higher logistic efficiency in static production environments,
whereas autonomous control methods reach better results in rather complex environments [15].
Simulation Study
Problem Description. The simulation study is based on a real job shop environment of a medium-sized
supplier in the aviation industry. The experimental setup comprises 28 workstations grouped into
five workshops. The average processing time per workshop varies between 30 and 80 minutes.
Workstations within a workshop are heterogeneous: the exact processing times vary by around 25%
of the average processing time depending on workstation and product type. Setup times are
sequence-dependent and vary between 10 and 120 minutes depending on processing step, product type,
workstation and predecessor. The simulation covers a time period of 320 hours, which represents a
planning period of one month with 20 working days and 16 working hours per day. The release of
production orders depends on the arrival rate and is modelled as a Poisson process. The due dates
are internally determined as 1.3 times the scheduled throughput time, which is obtained with the
Giffler & Thompson algorithm described below.
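As an illustration of this order release mechanism, the following minimal Python sketch samples release times from a Poisson process (i.e. exponential interarrival times) and derives the due dates as 1.3 times the scheduled throughput time. The function names and the simplified interface are assumptions for illustration, not the implementation used in the study.

import random

SIM_HOURS = 320           # one month: 20 working days x 16 working hours
RATE_PER_HOUR = 48 / 16   # e.g. characteristic 1: 48 orders per day

def release_times(rate_per_hour, horizon_hours, seed=0):
    """Sample order release times from a Poisson process."""
    rng = random.Random(seed)
    t, releases = 0.0, []
    while t <= horizon_hours:
        t += rng.expovariate(rate_per_hour)
        if t <= horizon_hours:
            releases.append(t)
    return releases

def due_date(release, scheduled_throughput_hours):
    """Due date = release + 1.3 x scheduled throughput time (from the G&T schedule)."""
    return release + 1.3 * scheduled_throughput_hours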
Considered Production Situations. The simulation study focuses on comparing the methods'
performance depending on the characteristics of the production situation. As depicted in Fig. 1, the
definition of production situations is based on criteria which have been derived from a detailed
analysis of the considered use case. Each criterion can be specified in the form of specific
characteristics, which serve as a morphological pattern to describe different production situations.
The considered situations focus on the variation of order-related criteria that are exposed to
permanent changes, e.g. due to varying product mixes and changes of customer demands.
Furthermore, different levels of dynamic influences are considered. For each criterion, the
complexity of the production situation increases from characteristic 1 to 3 (cf. Fig. 1). For instance,
the complexity of a situation generally increases with an increasing number of product types [11].
First, the considered use case faces production situations with different sizes of production
orders. The order size depends on the number of processing steps (varying between 1 and 5) and the
batch size (varying between 1 and 10). The simulation study assigns the number of processing steps
and the batch size based on the percentage probabilities depicted in Fig. 1 (a sampling sketch is
given below). Furthermore, the arrival rate of production orders varies between 24 and 48 orders
per day and decreases with an increasing size of production orders.
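The percentage probabilities of Fig. 1 can be realised with simple weighted sampling. The following sketch encodes the characteristic 1 distribution of the number of processing steps; the constant and helper names are hypothetical.

import random

# Characteristic 1 of the number of processing steps (cf. Fig. 1):
# 1: 35%, 2: 35%, 3: 20%, 4: 5%, 5: 5% (average: 2)
STEPS_CHAR_1 = {1: 0.35, 2: 0.35, 3: 0.20, 4: 0.05, 5: 0.05}

def sample_discrete(dist, rng=random):
    """Draw one value according to its percentage probability."""
    values = list(dist)
    return rng.choices(values, weights=[dist[v] for v in values], k=1)[0]

The batch size and, further below, the number of related production orders can be sampled from their respective distributions in the same way.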
Second, the simulation study considers production situations with different numbers of product
types (varying between 15 and 72). Generally, the product type defines the number, type and
sequence of processing steps for each production order. The settings with 15 and 25 product types
represent 3 and 5 product types, respectively, for each possible number of processing steps.
Production situations with 72 product types consider all identified product types of the use case.
Criteria and their characteristics (char. 1 to 3; the level of complexity increases from char. 1 to char. 3):

Size of production orders
  Number of processing steps:
    char. 1: 1: 35%, 2: 35%, 3: 20%, 4: 5%, 5: 5% (average: 2)
    char. 2: 1: 20%, 2: 20%, 3: 20%, 4: 20%, 5: 20% (average: 3)
    char. 3: 1: 5%, 2: 5%, 3: 20%, 4: 35%, 5: 35% (average: 4)
  Batch size:
    char. 1: 1: 60%, 2-5: 30%, 6-10: 10% (average: 2.5)
    char. 2: 1: 35%, 2-5: 30%, 6-10: 35% (average: 4.0)
    char. 3: 1: 10%, 2-5: 30%, 6-10: 60% (average: 6.0)
  Arrival rate:
    char. 1: 48 orders/day; char. 2: 38 orders/day; char. 3: 24 orders/day

Number of product types
  char. 1: 15; char. 2: 25; char. 3: 72

Number of related production orders
  char. 1: 1: 100%
  char. 2: 1: 50%, 2-5: 20%, 6-10: 20%, 11-20: 10% (average: 4)
  char. 3: 1: 0%, 2-5: 40%, 6-10: 20%, 11-20: 20% (average: 8)

Dynamic influences
  Workstation availability: char. 1: 100%; char. 2: 90%; char. 3: 80%
  Ratio of rush orders: char. 1: 0%; char. 2: 10%; char. 3: 20%
  Ratio of product type deviations: char. 1: 0%; char. 2: 10%; char. 3: 20%
  Station-dependent deviation interval of processing times: char. 1: 0%; char. 2: 10%; char. 3: 20%
  Variant-dependent deviation interval of processing times: char. 1: 0%; char. 2: 10%; char. 3: 20%

Figure 1: Criteria for the definition of production situations
Third, a major challenge of the use case is the handling of related production orders. Related
production orders belong to a superior main order and thus have to be consolidated after production
and prior to delivery. The simulation study determines the number of related orders (varying
between 1 and 20) based on a percentage probability, analogously to the size of production orders.
Furthermore, the assigned due date is identical for all related production orders and is set to the
maximum of their throughput times calculated as described above.
Finally, the simulation study considers different levels of dynamic influences. In the case of
workstation breakdowns, the mean time to repair is set to 120 minutes. Production orders have a
percentage probability of becoming rush orders when entering the production system. In the case of
a rush order, the due date of the order is reduced by 30% of the sum of the average processing
times. Product type deviations represent the probability that the scheduled product type changes
when the order is released. This dynamic influence represents short-term changes of customer
orders due to changing product requirements. Deviations of planned processing times specify the
interval within which the processing times differ from the planned values; these deviations are
considered to be station- and product-type-dependent. Such deviations may occur, e.g., if workers
have different levels of experience and thus require more or less time for order processing.
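The following sketch illustrates how these dynamic influences could be sampled per production order. The parameter and attribute names are assumptions for illustration, not the study's implementation.

import random

def apply_dynamic_influences(order, params, avg_proc_time):
    """Sample rush orders, product type deviations and processing time
    deviations for one order (attributes due, product_type, steps and
    proc_times are assumed)."""
    # Rush order: due date reduced by 30% of the sum of average processing times
    if random.random() < params["rush_ratio"]:          # 0.0, 0.1 or 0.2
        order.due -= 0.3 * sum(avg_proc_time[s] for s in order.steps)
    # Product type deviation when the order is released
    if random.random() < params["type_deviation_ratio"]:
        order.product_type = random.choice(params["product_types"])
    # Processing times deviate within the given interval around the planned values
    dev = params["proc_time_interval"]                  # e.g. 0.1 for 10%
    order.proc_times = [t * random.uniform(1.0 - dev, 1.0 + dev)
                        for t in order.proc_times]
    return order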
The simulation study considers all possible combinations of the four superordinate criteria. Thus, in
total 81 different production situations (3 sizes of production orders x 3 numbers of product types x
3 numbers of related production orders x 3 levels of dynamic influences) are considered.
Applied Planning and Control Methods. Since the exact scheduling algorithm of the use case is
unknown, a production schedule was generated using the Giffler & Thompson (G&T) algorithm [16].
It was implemented as an active schedule with a timeframe of five parts and a rolling time horizon.
The control methods are divided into methods for sequencing and methods for workstation
assignment. For sequencing, the simulation uses different dispatching rules: depending on the rule,
the production orders are sequenced according to their arrival time (FIFO), their earliest due date
(EDD), the shortest processing time (SPT) or slack-based according to the critical ratio (PRIO). For
workstation assignment, the queue length estimator method (QLE) is used, which selects the
next workstation based on the expected waiting time. The pheromone-based method (PHE) decides
based on the job-type-specific mean processing time of the last processed parts [17]. Based on the
described methods, the simulation considers the following method combinations: each method for
workstation assignment (G&T, QLE, PHE) is applied in combination with each dispatching rule
(FIFO, EDD, SPT, PRIO), leading to 12 method combinations. For reasons of comparability, the
simulation study also considers both sequencing and workstation assignment according to the G&T
algorithm. Thus, in total 13 method combinations are applied.
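The dispatching rules and the QLE assignment can be expressed in a few lines. The sketch below uses assumed order and station attributes; in particular, the slack-based PRIO rule is rendered as the common critical ratio (remaining time until the due date divided by the remaining work content), which is an assumption about the exact formula used.

from dataclasses import dataclass, field

@dataclass
class Order:
    arrival: float         # time of entry into the current queue
    due: float             # due date
    next_proc: float       # processing time of the next step
    remaining_proc: float  # total remaining processing time

@dataclass
class Station:
    name: str
    queue: list = field(default_factory=list)

def priority(order, now, rule):
    """Return a sort key; the order with the smallest key is processed next."""
    if rule == "FIFO":
        return order.arrival
    if rule == "EDD":
        return order.due
    if rule == "SPT":
        return order.next_proc
    if rule == "PRIO":  # critical ratio: remaining slack per remaining work
        return (order.due - now) / max(order.remaining_proc, 1e-9)
    raise ValueError(f"unknown rule: {rule}")

def qle_assign(stations, order):
    """Queue length estimator: choose the station with the smallest queued
    work content as a proxy for the expected waiting time."""
    return min(stations, key=lambda s: sum(o.next_proc for o in s.queue))

An idle station would then process min(station.queue, key=lambda o: priority(o, now, rule)); the pheromone-based assignment would replace the key in qle_assign by the job-type-specific mean processing time reported by previously finished parts.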
Performance Measurements. The methods' performance is measured by the basic logistics
objectives due date reliability, throughput time, work in process (WIP) and utilisation. The due date
reliability gives the percentage of production orders finished on time or earlier. As described above,
the due date is calculated as 1.3 times the scheduled throughput time. The calculation therefore
contains a time buffer between the scheduled completion time and the due date, which means that
orders can still be delivered on time even if they miss the scheduled completion time. The mean
throughput time is the mean time between the release and the completion of production orders. The
calculation of WIP is based on the work content of production orders and thus considers the
scheduled processing and setup times. The utilisation is defined as the ratio of average to maximal
possible output and is given as a mean value over all workstations.
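Based on these definitions, the four objectives can be computed from the simulation output roughly as follows. The attribute names (release, finish, due, work_content) and the time-averaged WIP formula are assumptions, not the authors' exact calculation.

def logistics_kpis(finished_orders, busy_hours_per_station, horizon_hours):
    """Rough computation of the four logistics objectives for one run."""
    n = len(finished_orders)
    due_date_reliability = sum(o.finish <= o.due for o in finished_orders) / n
    mean_throughput_time = sum(o.finish - o.release for o in finished_orders) / n
    # time-averaged WIP in hours of work content (scheduled processing + setup)
    mean_wip = sum(o.work_content * (o.finish - o.release)
                   for o in finished_orders) / horizon_hours
    # mean utilisation over all workstations
    utilisation = sum(busy_hours_per_station) / (len(busy_hours_per_station)
                                                 * horizon_hours)
    return due_date_reliability, mean_throughput_time, mean_wip, utilisation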
Results
As depicted in Fig. 2, the analysis first focuses on the methods' logistics efficiency depending on
the complexity of the production situation. For this purpose, three production scenarios with low
(char. '1' for all criteria, cf. Fig. 1), medium (char. '2' for all criteria) and high complexity
(char. '3' for all criteria) are considered. For reasons of comparability, the values of each scenario
are normalised between the best and worst value. In addition to the methods' performance for each
production situation, Fig. 2 also shows the average performance over all three levels of complexity.
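The normalisation presumably maps, per scenario and objective, the worst observed value to 0.0 and the best to 1.0; a minimal sketch under this assumption:

def normalise(values, minimise=False):
    """Min-max normalisation: best method -> 1.0, worst method -> 0.0.
    Set minimise=True for objectives such as the mean throughput time."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against identical values
    if minimise:
        return [(hi - v) / span for v in values]
    return [(v - lo) / span for v in values]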
Figure 2: Logistics efficiency depending on the level of complexity of the production situation
Generally, the results confirm the motivation for a situation-dependent selection of methods, as is
visible from the changing ranking of the methods across production situations. For example, the
best performing methods for the mean throughput time are the combinations Plan/SPT (low
complexity) and QLE/EDD (medium and high complexity). However, due to the low performance of
the combination Plan/SPT in production situations of medium and high complexity, the average
performance of this combination is comparatively low with a value of 0.44.
The highest performance of 1.00 can be reached by switching between the method combinations
Plan/SPT and QLE/EDD depending on the current production situation. If the PPC system does not
support changing planning and control methods, the best performing method over all planning
situations would be the combination Plan/EDD with an average performance of 0.84.
Concerning the efficiency of the different method combinations, the results show that
combinations with central workstation assignment mostly perform better in situations of low
complexity. In contrast, combinations with autonomous workstation assignment (QLE and PHE)
are especially promising for planning situations with higher levels of complexity. This confirms the
results of previous simulation studies, which show that the logistic efficiency increases with an
increasing autonomy of the applied methods [18]. Furthermore, the methods' efficiency seems to be
mainly determined by the method for workstation assignment (Plan, QLE, PHE), since all
dispatching rules mostly show a similar trend for each workstation assignment method. An
exception are method combinations with the dispatching rule SPT, which is most obvious when
comparing the efficiency of the combinations QLE/SPT and Plan/SPT for the mean throughput
time. For utilisation, autonomous control methods clearly show a higher performance than
combinations with planned workstation assignment.
The comparison of the normalised efficiency values generally shows that, relative to all applied
methods, a method's efficiency changes with the complexity of the production situation. In
practical applications, however, the situation-dependent switching of methods implies a
considerable organisational effort. Thus, switching should only be implemented if significant
improvements can be expected. Therefore, Fig. 3 shows the methods' performance, exemplarily for
throughput time and due date reliability, as the percentage deviation from the respective best
performing method for each production situation.
Figure 3: Percentage deviation of logistics efficiency depending on the level of complexity of the production situation
The percentage deviation gives the performance difference between the best and worst performing
method for each production situation and thus serves as an indicator of the importance of an
adequate method selection. Fig. 3 shows that an adequate selection of methods is especially
important in production situations of medium and high complexity. For example, the selection of an
inappropriate method in the worst case leads to a throughput time of 242% (QLE/SPT) of that of
the best performing method (QLE/PRIO). For due date reliability in production situations of
medium and high complexity, workstation assignment and sequencing according to the G&T
algorithm achieves only 20% of the value of the best performing method (QLE/SPT).
So far, the results show the general potential of an adequate method selection depending on the
level of complexity. However, it can be assumed that the considered criteria describing the
complexity of production situations have different impacts on the methods' performance. To
analyse the impact of the individual criteria, the production situations are grouped into situations of
low, medium and high complexity for each of the considered criteria. The results of this analysis,
depicted in Fig. 4, are based on two indicators. First, the percentage deviation of the methods'
performance shows the deviation range for each level of complexity and illustrates the importance
of an adequate method selection: the higher the values for the mean throughput time, and the lower
the values for due date reliability, the higher the potential of an adequate method selection. Mostly,
the deviation range increases with increasing complexity of the respective criterion. Exceptions are
the values for the 'number of product types' (mean throughput time) and the 'size of production
orders' (due date reliability). The second indicator is the methods' performance difference. It is
calculated from the normalised values of each method as the difference between the best and worst
value over all three complexity levels and is averaged over all methods. The higher the performance
difference, the higher the possibility of a changed method ranking, i.e. that different methods lead
to the best results in different production situations. A performance difference of 0.33, for example,
indicates that, on average, the methods' performance for at least one production situation is 0.33
lower than the best reached performance. Consequently, a high average performance difference
indicates a high possibility that a high logistics objective achievement requires switching between
different methods, whereas a low average performance difference indicates that the efficiency
differences between the methods remain rather constant.
Referring to the values given in Fig. 4, especially different sizes of production orders, different
numbers of product types (due date reliability) and dynamic influences (due date reliability) seem to
cause different method rankings depending on the complexity level. In contrast, the throughput time
performance difference for the number of product types is comparatively low, which indicates that
the methods' relative performance is rather independent of the exact number of product types.
Figure 4: Impact of single criteria on the methods' performance. For each criterion (size of production orders, number of product types, number of related production orders, dynamic influences), the figure reports the percentage deviation of the methods' performance at the complexity levels (CL) low, medium and high as well as the average methods' performance difference, each for the mean throughput time and the due date reliability.
Summary and Outlook
This paper introduced a simulation-based comparison of different planning and control methods
considering production situations with different levels of complexity. The analysis showed that the
methods' efficiency depends on the characteristics of the situation. The use of adaptive PPC
systems which are able to flexibly switch between different methods can lead to a significant
increase in logistics objective achievement. The criteria for the definition of production situations
have been derived from a real use case in the aviation industry but are also part of existing
classification patterns in the literature. However, the simulation study focuses on criteria that are of
particular interest for the considered use case. Therefore, further simulation studies are required to
establish the general applicability of the results and their transferability to other use cases.
Acknowledgement
This work is part of the project JobNet 4.0, funded by the German Federal Ministry of Education
and Research (BMBF) under the reference number 02P14K530.
References
[1] P. Nyhuis, H.-P. Wiendahl, Fundamentals of Production Logistics: Theory, Tools and
Applications, Springer, Berlin, 2009.
[2] D. Spath (Ed.), O. Ganschar, S. Gerlach, M. Hämmerle, T. Krause, S. Schlund,
Produktionsarbeit der Zukunft - Industrie 4.0, Fraunhofer Verlag, Stuttgart, 2013.
[3] R.L. Graham, E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, Optimization and approximation
in deterministic sequencing and scheduling: A survey, Annals of Discrete Mathematics 5 (1979)
287-326.
[4] H.-P. Wiendahl, Produktionsplanung und -steuerung, in: W. Eversheim, G. Schuh (Eds.),
Produktion und Management: Betriebshütte, Springer, Berlin, 1996, pp. 14-1-14-130.
[5] H.-P. Wiendahl, P. Scholtissek, Management and Control of Complexity in Manufacturing,
CIRP Annals 43 (1994) 533-540.
[6] G. Yu, X. Qi, Disruption Management, World Scientific, Singapore, 2004.
[7] S. Schukraft, S. Grundstein, B. Scholz-Reiter, M. Freitag, Evaluation approach for the
identification of promising methods to couple central planning and autonomous control,
International Journal of Computer Integrated Manufacturing 29 (2015) 438-461.
[8] M.L. Pinedo, Scheduling: Theory, Algorithms, and Systems, Springer, New York, 2008.
[9] G. Schuh, A. Gierth, Aachener PPS-Modell, in: G. Schuh (Ed.), Produktionsplanung und
-steuerung: Grundlagen, Gestaltung und Konzepte, Springer, Berlin/Heidelberg, 2006, pp. 11-28.
[10] H.-P. Wiendahl, 