Instructions for Oral/Poster Presentations
Welcome, and thank you for your contribution to the 2012 HKICEAS in Hong Kong! To ensure
that each session runs smoothly and that all attendees enjoy the presentations and have the
opportunity to interact with presenters, please observe the following guidelines.
Oral Presentation
Equipment Provided by the Conference Organizer:
 Laptop Computer (MS Windows Operating System with MS PowerPoint & Adobe Acrobat
Reader)
 Built-In LCD Projector with Screen
 Cable Microphone
 Whiteboard, Flip Chart
Materials Provided by the Presenters:
 File(s) in Microsoft PowerPoint or PDF format
Duration of each Presentation:
 Regular Oral Session: approximately 15 minutes for each oral presentation, including 3
minutes for Q&A.
 Please check the conference program online for the precise date and time of your
presentation session.
 Presenters are advised to arrive at the designated session room at least 5 minutes before the
session begins.
Poster Presentation
Materials Provided by the Conference Organizer:
 X Racks & Base Fabric Canvases (60cm×160cm, see the figure below)
 Adhesive Tapes or Clamps
Materials Prepared by the Presenters:
 Self-prepared Poster(s)
Requirements for the Posters:
 Material: the abstract/full paper printed as a poster or on A4 sheet(s) that can be mounted
on the canvases
 Size: no larger than 60cm×160cm
 Content: the research findings of the poster presenter’s abstract/full paper
Contents
Instructions for Oral/Poster Presentations ................................................................................. i
Poster Presentation .....................................................................................................................ii
Contents ................................................................................................................................... iii
Tentative Program for Oral Presentation ................................................................................... 1
Tentative Program for Poster Presentation ................................................................................ 7
Program- Oral Sessions............................................................................................................ 11
Biomedical Engineering....................................................................................................... 11
241.................................................................................................................................... 13
Automatic Segmentation of Cardiac Tumor in Echocardiogram based on Sparse
Representation and modified ACM ................................................................................. 13
256.................................................................................................................................... 22
Model-based photoacoustic image reconstruction based on polar coordinates ............... 22
347.................................................................................................................................... 29
Correction of Light Source Fluctuation error of Quantitative Spectroscopic Tomography
for the Non-invasive Measurement of the Biogenic-substances ....................................... 29
400.................................................................................................................................... 39
Fabrication and Characterization of Glass Spheres & SU-8 Mixture Mold and Its
Microchannel for an Electrochemical Immunosensor ..................................................... 39
412.................................................................................................................................... 47
Coercivity weighted Langevin magnetisation; A new approach to interpret
superparamagnetic and nonsuperparamagnetic behaviour in single domain magnetic
nanoparticles .................................................................................................................... 47
377.................................................................................................................................... 58
Synthesis and Characterization of Carbonated Hydroxyapatite Mesoporous Particles ... 58
Chemical Engineering & Fundamental and Applied Sciences ............................................ 66
255.................................................................................................................................... 59
Biomarker Signatures and Depositional Environment Study of Source Rocks in the
Boxing Area of Dongying Depression, Eastern China .................................................... 59
285.................................................................................................................................... 68
Platinum-supported Nanoporous Carbon (Pt/CMK-3) as Electrocatalyst for Direct
Methanol Fuel Cell ........................................................................................................... 68
333.................................................................................................................................... 69
Conjugate conduction convection and surface radiation in the annulus of two concentric
vertical cylinders .............................................................................................................. 69
070.................................................................................................................................... 79
Path-finding in a Maze-like Puzzle using Multipartite Graph Algorithm ....................... 79
304.................................................................................................................................... 88
Wetting characteristics on patterned surfaces by Lattice Boltzmann method ................. 88
315.................................................................................................................................... 97
Students’ Response on the Detailed Handout Lecture Notes .......................................... 97
Thian Khoon TAN ........................................................................................................... 97
349.................................................................................................................................... 98
Scale-up of Polymethacrylate Monolithic Column: Understanding Pore Morphology by
Electron Microscopy ........................................................................................................ 98
Civil Engineering ............................................................................................................... 102
056.................................................................................................................................. 103
Preliminary study on the use of microwave permittivity in the determination of asphalt
binder penetration and viscosity .................................................................................... 103
068.................................................................................................................................. 113
Durability enhancement of the strengthened structural system joints ........................... 113
307.................................................................................................................................. 121
Statistical Evaluation of Redundant Steel Structural Systems Maximum Strength and
Elastic Energy ................................................................................................................ 121
353.................................................................................................................................. 131
Risk analysis of cost overrun in multiple Design and Build projects ............................ 131
Computer and Information Sciences I ............................................................................... 138
228.................................................................................................................................. 139
Design and Implementation of Microcontroller based Computer Interfaced Home
Appliances Monitoring and Control System.................................................................. 139
242.................................................................................................................................. 147
Design and Implementation of Computer Interfaced Voice Activated Switch using
Speech Recognition Technology ................................................................................... 147
251.................................................................................................................................. 157
Geo-databases: Expanding Geo-expert Modeling Perspectives .................................... 157
252.................................................................................................................................. 166
A Non-Voice based Digital Speech Watermarking ....................................................... 166
332.................................................................................................................................. 173
Autonomous UAV support for rescue forces using Onboard Pattern Recognition ....... 173
243.................................................................................................................................. 181
Neural Network Modeling of Breaking Force in Variable Road and Slope Conditions . 181
Computer and Information Sciences II & Electrical and Electronic Engineering ............. 190
257.................................................................................................................................. 191
2D to stereoscopic 3D conversion for photos taken from single camera ...................... 191
309.................................................................................................................................. 199
Native and Detection-Specific Log Auditing for Small Network Environments .......... 199
336.................................................................................................................................. 211
Thailand Museum Data Exchange via XML Web Service ............................................ 211
378.................................................................................................................................. 219
A New Villain: Investigating Steganography in Source Engine Based Video Games .. 219
239.................................................................................................................................. 233
The Design of Gain Controllable Current-mode First-order All-pass Filter for Analog
Integrated Circuit ........................................................................................................... 233
240.................................................................................................................................. 240
Electronically Tunable Low-Component-Count Current-Mode quadrature Oscillator
Using CCCFTA ............................................................................................................. 240
Environmental Science ...................................................................................................... 246
059.................................................................................................................................. 247
Sulfonated Poly Ether Ether Ketone/TiO2 Nano composites as Polymer Electrolyte for
Microbial Fuel Cell ........................................................................................................ 247
073.................................................................................................................................. 252
Mathematical modeling of industrial water systems ..................................................... 252
232.................................................................................................................................. 261
Design on low-noise tire groove using the CFD simulation technique ......................... 261
301.................................................................................................................................. 270
An Exploratory study of Green Supply Chain in SMEs Cluster in India ...................... 270
Material Science and Engineering ..................................................................................... 278
186.................................................................................................................................. 280
Parameters affecting Voids Elimination in Manufacturing of Polyester Impregnated
High Voltage Automotive Ignition coils........................................................................ 280
272.................................................................................................................................. 289
Nanostructured metal oxides for dye sensitized solar cell from transparent conductors to
photoanode materials ..................................................................................................... 289
305.................................................................................................................................. 291
Performance evaluation and preventive measures for photo-thermal coupled aging of
different bitumens .......................................................................................................... 291
308.................................................................................................................................. 300
Platinum nanoparticles/graphene composite catalyst as a novel composite .................. 300
counter electrode for high performance dye-sensitized solar cells ................................ 300
318.................................................................................................................................. 301
Recovery of Valuable Metal from Waste-Computer ..................................................... 301
329.................................................................................................................................. 307
Photocatalytic Degradation of Acid Orange 7 by ZnO and Sm-Doped ZnO ................ 307
335.................................................................................................................................. 315
Preparation and properties of layered double hydroxides/SBS modified bitumen for
waterproofing membrane ............................................................................................... 315
Mechanical Engineering .................................................................................................... 323
283.................................................................................................................................. 324
Multi-dimensional condition-based maintenance for gearboxes: A review of methods
and prognostics ............................................................................................................... 324
314.................................................................................................................................. 341
Effects of Gas Velocity and Pressure in the Serpentine 3-dimensional PEMFC Model
........................................................................................................................................ 341
372.................................................................................................................................. 349
Application of Numerical Optimization Techniques to shaft design............................. 349
392.................................................................................................................................. 360
Development of a Single-Wheel Test Rig for Traction Dynamics and Control of Electric
Vehicles.......................................................................................................................... 360
411.................................................................................................................................. 368
Material removal rate prediction for blind pocket milling of SS304 using Abrasive
Water Jet Machining Process ......................................................................................... 368
312.................................................................................................................................. 376
Design and Experimental Implementation of Time Delay Control for Air Supply in a
PEM Fuel Cell................................................................................................................ 376
Program - Poster Sessions ...................................................................................................... 385
Biomedical Engineering..................................................................................................... 385
258.................................................................................................................................. 386
The Effect of Coronary Tortuosity on Coronary Pressure: A Patient-Specific Study ... 386
316.................................................................................................................................. 393
Modification of Poly(3-Hydroxybutyrate-Co-4-Hydroxybutyrate)/Chitosan Blend Film
by Chemical Crosslinking .............................................................................................. 393
Chemical Engineering ........................................................................................................ 394
276.................................................................................................................................. 395
Aromatic-turmerone’s anti-inflammatory and neuroprotective effects in microglial cells
are mediated by PKA and HO-1 signaling .................................................................... 395
282.................................................................................................................................. 405
The anti-inflammatory effect of Nardostachys chinensis on LPS or LTA stimulated BV2
microglial cells ............................................................................................................... 405
302.................................................................................................................................. 415
Synthesis and characterization of ZnS nanopowder prepared by microwave-assisted
heating ............................................................................................................................ 415
417.................................................................................................................................. 421
Bridging the gap between gas permeation properties at the transient and steady states 421
Civil Engineering ............................................................................................................... 422
292.................................................................................................................................. 424
The Analysis for Service Quality of Internet Traffic Information ................................. 424
319.................................................................................................................................. 431
Performance Evaluation of Penetration Reinforcing Agent for Power Plants Facilities
........................................................................................................................................ 431
330.................................................................................................................................. 437
A study on the improvement for measuring accuracy about the high speed
Weigh-In-Motion ........................................................................................................... 437
331.................................................................................................................................. 445
A study on the method for measuring the field density ................................................. 445
Computer and Information Sciences .................................................................................. 453
231.................................................................................................................................. 454
A Chinese Text Watermarking Algorithm Resistive to Print-Scan Process ................... 454
277.................................................................................................................................. 463
Traversal Method for Connected Domain Based on Recursion and The Usage in Image
Treatment ....................................................................................................................... 463
Lanxiang Zhu; Yaowu Shi; Lifei Deng .................................................................................. 463
397.................................................................................................................................. 471
Quick Calculation for Multi-Resolution Bag-of-Colors Features ................................. 471
409.................................................................................................................................. 478
Speaker-Independent Isolated Word Recognition Based on Enhanced Cross-Words
Reference Templates for Embedded Systems ................................................................ 478
414.................................................................................................................................. 487
Intelligent framework for heterogeneous wireless networks ......................................... 487
Electrical and Electronic Engineering I ............................................................................. 494
054.................................................................................................................................. 495
Prototype of A Microcontrolled Device for Measuring Vibrating Wire Sensors .......... 495
260.................................................................................................................................. 504
Numerical Study of Thin-GaN LED with Holes on n-electrode ................................... 504
Farn-Shiun Hwu ................................................................................................................. 504
Environmental Science ...................................................................................................... 513
248.................................................................................................................................. 514
Removal of Nickel Ions from Industrial Wastewater Using Sericite ............................ 514
Electrical and Electronic Engineering II ............................................................................ 515
268.................................................................................................................................. 516
Coordination of voltage regulation control between STATCOM and OLTC ............... 516
299.................................................................................................................................. 525
Maximising Incident Diffuse and Direct Solar Irradiance on Photovoltaic Panels
Through the Use of Fixed Tilt and Limited Azimuth Tracking..................................... 525
343.................................................................................................................................. 532
A Modified Correlation Coefficient of Circularly Polarized Waves for the UMCW
Radar Frequency Band ................................................................................................... 532
Material Science and Engineering ..................................................................................... 538
263.................................................................................................................................. 539
The effect of sericin content on the morphology, structural characteristics, and properties
of electrospun regenerated silk web ............................................................................... 539
291.................................................................................................................................. 544
An analysis of creep deformation in Inconel 617 by small punch creep test .................. 544
293.................................................................................................................................. 546
The Effects of Chloride on Durability of Concrete Mixed with Sea Sand .................... 546
296.................................................................................................................................. 554
Deep sub-nanosecond reversal of magnetic vortex in ferromagnetic nano-pits ............ 554
402.................................................................................................................................. 555
Top-Down Processed Nanowire Field Effect Transistors in Wide Bandgap
Semiconductors .............................................................................................................. 555
Mechanical Engineering .................................................................................................... 557
184.................................................................................................................................. 558
Numerical Simulation for Shape Optimization of Sheet–formed Part .......................... 558
334.................................................................................................................................. 565
Water Consumption of Carbon Dioxide Capture System for Coal-fired Power Plants . 565
375.................................................................................................................................. 575
Adaptive Electronic Converter of Linear Acceleration with Output Characteristics ..... 575
306.................................................................................................................................. 581
Experimental Study on Detonation Characteristics of Acetylene and Oxygen
Mixture ............................................................................................................................ 581
Tentative Program for Oral Presentation
Friday, December 14, 2012
Committee Meeting (Committee Only)
Saturday, December 15, 2012
08:00-16:00 Registration (G/F, Hong Kong SkyCity Marriott Hotel)
Time: 08:45-10:15
Meeting Room 5
Session Field:
Environmental Science & Electrical and Electronic Engineering
Session Chair:
059: Sulfonated Poly Ether Ether Ketone / TiO2 Nano Composites As Polymer
Electrolyte For Microbial Fuel Cell
Sangeetha Dharmalingam & Prabhu Narayaswamy Venkatesan (Anna University, India)
073: Mathematical Modeling of Industrial Water Systems
Pavel Gotovtsev (Moscow Power Engineering Institute, Russia), Julia Tikhomirova (JSC
Power-Engineering Schemes and Technologies, Russia) & Ekaterina Khizova (BWT
Company, Russia)
232: Design on Low-Noise Tire Groove Using the CFD Simulation Technique
Min-Feng Sung, Yean-Der Kuan (National Chin-Yi University of Technology, Taiwan),
Shi-Min Lee (Tamkang University, Taiwan) & Rong-Juin Shyu (National Taiwan Ocean
University, Taiwan)
301: An Exploratory study of Green Supply Chain in SMEs Cluster in India
Shubham Gandhi, Abhishek Dwivedi, Aishwarya Mor (Delhi Technological University,
India)
10:15 - 10:30
Welcome Reception
Group Photography I
10:30-12:00
Opening Ceremony & Invited Speech @ Meeting Room 2
Professor Victor F. S. Sit (Hong Kong Baptist University)
Topic: "Hong Kong 1997–2012: A Report on the HKSAR Since the Handover"
Time: 10:30-12:00
Meeting Room 5
Session Field:
Chemical Engineering & Fundamental and Applied Sciences
Session Chair:
255: Biomarker Signatures and Depositional Environment Study of Source
Rocks in the Boxing Area of Dongying Depression, Eastern China
Ying WANG & Luofu LIU (China University of Petroleum, Beijing)
285: Platinum-supported Nanoporous Carbon (Pt/CMK-3) as Electrocatalyst
for Direct Methanol Fuel Cell
Parasuraman Selvam & Balaiah Kuppan (National Centre for Catalysis Research and
Department of Chemistry, Indian Institute of Technology-Madras)
333: Conjugate Conduction Convection and Surface Radiation in the Annulus
of Two Concentric Vertical Cylinders
Abhay K Sahu, R.K. Saini & M. Bose (Indian Institute of Technology, India)
349: Scale-up of Polymethacrylate Monolithic Column: Understanding Pore
Morphology by Electron Microscopy
Clarence M Ongkudon, Ratna Dewi Sani (Universiti Malaysia Sabah, Malaysia)
070: Path-finding in a Maze-like Puzzle using Multipartite Graph Algorithm
Nien-Zheng, Yew (Universiti Malaysia Sabah, Malaysia), Kung-Ming, Tiong & Su-Ting,
Yong (The University of Nottingham Malaysia Campus)
304: Wetting characteristics on patterned surfaces by Lattice Boltzmann
method
Ping Chen, Guo Tao (China University of Petroleum, Beijing), Mingzhe Dong (University of
Calgary, Canada) & Bing Wang (China University of Petroleum, Beijing)
315: Students’ Response on the Detailed Handout Lecture Notes
Thian Khoon Tan (The University of Nottingham Malaysia Campus)
12:00 - 13:00
Luncheon
Time: 13:00-14:30
Meeting Room 5
Session Field:
Computer and Information Sciences
Session Chair:
228: Design And Implementation Of Microcontroller Based Computer
Interfaced Home Appliances Monitoring And Control System
Nwankwo Nonso Prince, Nwankwo Vincent (Federal Polytechnic Oko, Nigeria) & Azubuike
Onuorah Patrick (Patech Electronics Co., Nigeria)
242: Design And Implementation of Computer Interfaced Voice Activated
Switch Using Speech Recognition Technology
Azubuike Onuorah Patrick (Patech Electronics Co., Nigeria) & Nwankwo Nonso Prince
(Federal Polytechnic Oko, Nigeria)
251: Geo-databases: Expanding Geo-expert Modeling Perspectives
Elzbieta Malinowski (University of Costa Rica, Costa Rica)
252: A Non-Voice based Digital Speech Watermarking
Mohammad Ali Nematollahi, S.A.R Al-Haddad, S.A.R Al-Haddad, Shayamala Doraisamy,
Laith Emad Hamid (University Putra Malaysia, Malaysia)
332: Autonomous UAV support for rescue forces using Onboard Pattern
Recognition
Florian Segor, Chen-Ko Sung (Fraunhofer IOSB, Germany)
243: Neural Network Modeling of Breaking Force in Variable Road and Slope
Conditions
Recai KUS, Ugur Taskiran (Selcuk University, Turkey) & Huseyin Bayrakceken (Afyon
Kocatepe University, Turkey)
14:30-14:45
Afternoon Coffee Break
Time: 14:45-16:15
Meeting Room 5
Session Field:
Material Science and Engineering
Session Chair:
186: Parameters Affecting Voids Elimination in Manufacturing of Polyester
Impregnated High Voltage Automotive Ignition Coils
Ahmad Nawaz, Sahar Noor (U.E.T Peshawar, Pakistan) & Bilal Islam (U.M.T Lahore,
Pakistan)
272: Nanostructured Metal Oxides for Dye Sensitized Solar Cell from
Transparent Conductors to Photoanode Materials
Ghim Wei Ho (National University of Singapore, Singapore)
305: Performance Evaluation and Preventive Measures for Photo-thermal
Coupled Aging of Different Bitumens
Zhengang Feng, Jianying Yu & Bo Zhou (Wuhan University of Technology, China)
308: Platinum Nanoparticles/Graphene Composite Catalyst as a Novel Composite
Counter Electrode for High Performance Dye-Sensitized Solar Cells
Chen-Chi Ma & Li-hsueh Chang (National Tsing Hua University, Taiwan)
318: Recovery of Valuable Metal from Waste-computer
Busayamas Phettong, Pitsanu Bunnaul & Manoon Masniyom (Prince of Songkla
University, Thailand)
329: Photocatalytic Degradation of Acid Orange 7 by ZnO and Sm-Doped ZnO
Pongsathorn Sathorn, Nattiya Reungtip & Apisit Songsasen (Kasetsart University,
Thailand)
335: Preparation and Properties of Layered Double Hydroxides/SBS
Modified Bitumen for Waterproofing Membrane
Song Xu, Zhengang Feng, Jianying Yu & Lian Li (Wuhan University of Technology, China)
Sunday, December 16, 2012
08:00-16:00 Registration (G/F, Hong Kong SkyCity Marriott Hotel)
Time: 08:45-10:15
Meeting Room 5
Session Field:
Biomedical Engineering
Session Chair:
241: Automatic Segmentation of Cardiac Tumor in Echocardiogram based on
Sparse Representation and modified ACM
Yi Guo, Yuanyuan Wang (Fudan University, China) & Dehong Kong, Xianhong Shu
(Zhongshan Hospital of Fudan University, China)
256: Model-based photoacoustic image reconstruction based on polar
coordinates
Yan Zhang, Yuanyuan Wang & Chen Zhang (Fudan University, China)
347: Correction of Light Source Fluctuation error of Quantitative
Spectroscopic Tomography for the Non-invasive Measurement of the
Biogenic-substances
Pradeep Kumara (Kagawa University, Japan)
400: Fabrication and Characterization of Glass Spheres & SU-8 Mixture Mold
and Its Microchannel for an Electrochemical Immunosensor
Yoonkyung Nam (Korea University, Korea), Youngmi Kim Pak (Kyung Hee University, Korea)
& James Jungho Pak (Korea University, Korea)
412: Coercivity weighted Langevin magnetization; A new approach to
interpret superparamagnetic and nonsuperparamagnetic behaviour in
single domain magnetic nanoparticles
Dhanesh Kattipparambil Rajan, Jukka Lekkala (Tampere University of Technology, Finland)
377: Synthesis and Characterization of Carbonated Hydroxyapatite
Mesoporous Particles
Fei-Yee Yeoh, Nur Farahiyah Mohammad, Radzali Othman (Universiti Sains Malaysia,
Malaysia)
10:15 - 10:30
Morning Coffee Break
Group Photography II
Time: 10:30-12:00
Session Field:
Computer and Information Sciences & Electrical and Electronic
Engineering
Session Chair:
257: 2D to Stereoscopic 3D Conversion for Photos Taken from Single Camera
Chwee Keng Tan (Temasek Polytechnic, Singapore)
309: Native and Detection-Specific Log Auditing for Small Network
Environments
Brittany Wilbert & Lei Chen (Sam Houston State University, USA)
336: Thailand Museum Data Exchange via XML Web Service
Pobsit Kamolvej, Usa Sammapun, Nitirat Iamrahong, Guntapon Prommoon (Kasetsart
University, Thailand) & La-or Kovavisaruch (NECTEC, Thailand)
378: A New Villain: Investigating Steganography in Source Engine Based
Video Games
Christopher Hale, Lei Chen & Qingzhong Liu (Sam Houston State University, USA)
239: The Design of Gain Controllable Current-mode First-order All-pass Filter
for Analog Integrated Circuit
Winai Jaikla (King Mongkut’s Institute of Technology Ladkrabang, Thailand) & Totsaporn
Nakyoy (Rajabhat University, Thailand)
240: Electronically Tunable Low-Component-Count Current-Mode
quadrature Oscillator Using CCCFTA
Chaiya Tanaphatsiri (Rajamangala University of Technology Srivijaya, Thailand) & Narong
Narongrat (Suan Sunandha Rajabhat University, Thailand)
12:00 - 13:00
Luncheon
Time: 13:00-14:30
Meeting Room 5
Session Field:
Civil Engineering
Session Chair:
056: Preliminary Study on the Use of Microwave Permittivity in the
Determination of Asphalt Binder Penetration and Viscosity
Ratnasamy Muniandy (Universiti Putra Malaysia, Malaysia)
068: Durability Enhancement of the Strengthened Structural System Joints
Bassam A. Tayeh (Universiti Sains Malaysia, Malaysia)
307: Statistical Evaluation of Redundant Steel Structural Systems Maximum
Strength and Elastic Energy
Amanullah Rasooli (Nagoya Institute of Technology, Japan)
353: Risk Analysis of Cost Overrun in Multiple Design and Build Projects
Ramanathan Chidambaram (Kumpulan Liziz Sdn. Bhd, Malaysia) & Narayanan Sambu
Potty (Universiti Teknologi PETRONAS, Malaysia)
14:30-14:45
Afternoon Coffee Break
14:45-16:15
Meeting Room 5
Session Field:
Mechanical Engineering
Session Chair:
283: Multi-dimensional Condition-based Maintenance for Gearboxes: A
Review of Methods and Prognostics
Trent Konstantinu, Muhammad Ilyas Mazhar, Ian Howard (Curtin University, Australia)
314: Effects of Gas Velocity and Pressure in the Serpentine 3-dimensional
PEMFC Model
Woo Joo YANG, Hong Yang WANG & Young Bae KIM (Chonnam National Univ., Korea)
372: Application of Numerical Optimization Techniques to Shaft Design
Abdurahman M Hassen, Neffati M Werfalli & Abdulaziz Y Hassan (University of Tripoli,
Libya)
392: Development of a Single-Wheel Test Rig for Traction Dynamics and
Control of Electric Vehicles
Apirath Kraithaisri, Suwat Kuntanapreeda & Saiprasit Koetniyom (King Mongkut’s
University of Technology North Bangkok, Thailand)
411: Material Removal Rate Prediction for Blind Pocket Milling of SS304
Using Abrasive Water Jet Machining Process
V K Gupta Thammana (PDPM IIITDM Jabalpur, India)
312: Design and Experimental Implementation of Time Delay Control for Air
Supply in a PEM Fuel Cell
Ya Xiong Wang (Chonnam National University, Korea), Dong Ji Xuan (Wenzhou University,
China) & Young Bae Kim (Chonnam National University, Korea)
Tentative Program for Poster Presentation
Saturday, December 15, 2012
08:00-16:00 Registration (G/F, Hong Kong SkyCity Marriott Hotel)
Meeting Room 3
10:30-12:00
Session Field:
Computer and Information Sciences
231: A Chinese Text Watermarking Algorithm Resistive to Print-Scan
Process
Xing Huang, Lingwei Song, Jianyi Liu & Ru Zhang (Beijing University of Posts and
Telecommunications, China)
277: Traversal Method for Connected Domain Based on Recursion and the
Usage in Image Treatment
Lanxiang Zhu, Yaowu Shi, Lifei Deng & Hongwei Shi (Jilin University, China)
397: Quick Calculation for Multi-Resolution Bag-of-Colors Features
Yu Ma & Yuanyuan Wang (Fudan University, China)
409: Speaker-independent Isolated Word Recognition Based on Enhanced
Cross-words Reference Templates for Embedded Systems
Chih-Hung Chou & Guan-Hong He (National Cheng Kung University, Taiwan)
414: Intelligent Framework for Heterogeneous Wireless Networks
Yu-Chang Chen (Shu-Te University, Taiwan)
Session Field:
Electrical and Electronic Engineering
054: Prototype of a Microcontrolled Device for Measuring Vibrating Wire
Sensors
Guilherme Natsutaro Descrovi Nabeyama, Rolf Massao Satake Gugisch, João Carlos
Christmann Zank & Carlos Henrique Zanelato Pantaleão (State University of Western
Parana, Brazil)
260: Numerical Study of Thin-GaN LED with Holes on n-electrode
Farn-Shiun Hwu (Taoyuan Innovation Institute of Technology, Taiwan)
Session Field:
Environmental Science
248: Removal of Nickel Ions from Industrial Wastewater Using Sericite
Choong Jeon & Taik-Nam Kwon (Gangneung-Wonju National University, Korea)
12:00-13:00
Luncheon
Session Field:
Electrical and Electronic Engineering
268: Coordination of Voltage Regulation Control between STATCOM and
OLTC
San-Yi Lee, Jer-Ming Chang (Taipei Chengshih University of Science and Technology,
Taiwan)
299: Maximising Incident Diffuse and Direct Solar Irradiance on
Photovoltaic Panels Through the Use of Fixed Tilt and Limited Azimuth
Tracking
Yun Fun Ngo, Fu Song & Benjamin Kho (National University of Singapore, Singapore)
343: A Modified Correlation Coefficient of Circularly Polarized Waves for
the UMCW Radar Frequency Band
Deock-Ho Ha (Pukyong National Univ., Korea)
14:30-16:15
Session Field:
Chemical Engineering
276: Aromatic-turmerone’s anti-inflammatory and Neuroprotective Effects
in Microglial Cells are Mediated by PKA and HO-1 Signaling
Sun Young Park, Mei Ling Jin, Young Hun Kim & Sang-Joon Lee (Pusan National
University, Korea)
282: The Anti-inflammatory Effect of Nardostachys Chinensis on LPS or LTA
stimulated BV2 Microglial Cells
Ah jeong Park, Sun Young Park, Meiling Jin, Hye Won Eom, Young Hun Kim & Sang Joon
Lee (Pusan National University, Korea)
302: Synthesis and Characterization of ZnS Nanopowder Prepared by
Microwave-assisted Heating
Wei Huang & Min-Hung Lee (National Taiwan Normal University, Taiwan), San Chan,
Yueh-Chien Lee, Ming-Kwen Tsai (Tungnan University, Taiwan), Sheng-Yao Hu (Tungfang
Design University, Taiwan), Jyh-Wei Lee (Ming Chi University of Technology, Taiwan)
417: Bridging the Gap Between Transient and Steady State Gas Permeation
Kean WANG (The Petroleum Institute, United Arab Emirates)
Sunday, December 16, 2012
08:00-16:00 Registration (G/F, Hong Kong SkyCity Marriott Hotel)
Meeting Room 3
10:15-12:00
Session Field:
Material Science and Engineering
263: The Effect of Sericin Content on the Morphology, Structural Characteristics,
and Properties of Electrospun Regenerated Silk Web
In Chul Um & Jae Sang Ko (Kyungpook National University, Korea)
291: An Analysis of Creep Deformation in Inconel 617 by Small Punch Creep Test
Jong Gu Lee & Jong Hoon Lee (Sungkyunkwan University, Korea), Bum Joon Kim (Dongyang Mirae
University, Korea), Moon Ki Kim & Byeong Soo Lim (Sungkyunkwan University, Korea)
293: The Effects of Chloride on Durability of Concrete Mixed With Sea Sand
Hojae Lee & Jongsuk Lee (Korea Institute of Construction Technology, Korea), Myunk-Sug Cho (KHNP Central
Research Institute, Korea Hydro & Nuclear Power Co., LTD) & Dogyeum Kim (Korea Institute of Construction
Technology, Korea)
296: Deep Sub-nanosecond Reversal of Magnetic Vortex in Ferromagnetic
Nano-pits
Ruifang Wang, Xinwei Dong & Zhenyu Wang (Xiamen University, China)
402: Top-down Processed Nanowire Field Effect Transistors in Wide Bandgap
Semiconductors
Sang-Mo Koo, Min-Seok Kang & Jung-Ho Lee (Kwangwoon University, Korea), Hyung-Seok Lee
(Massachusetts Institute of Technology, USA)
Session Field:
Mechanical Engineering
184: Numerical Simulation for Shape Optimization of Sheet–formed Part
Kyu-Taek Han (Pukyong National University, Korea)
334: Water Consumption of Carbon Dioxide Capture System for Coal-fired Power
Plants
Xin He, Pei Liu & Zheng Li (Tsinghua University, China)
375: Adaptive Electronic Converter of Linear Acceleration with Output Characteristics
Natalya Korobova, Sergey Timoshenkov, Aleksey Timoshenkov, Andrey Shalimov (National
Research University of Electronic Technology (MIET), Russia)
306: Experimental Study on Detonation Characteristics of Acetylene and Oxygen
Mixture
Min Son, Chanwoo Seo (Graduate School of Korea Aerospace University, South Korea)
& Chanwoo Seo, Jaye Koo (Korea Aerospace University, South Korea)
12:00-13:00
Luncheon
Session Field:
Civil Engineering
292: Analysis on Service Quality of Internet Traffic Information
Feng Li, Weoneui Kang, Bumjin Park, Hyokyoung Eo (Korea Institute of Construction Technology,
Korea)
319: Performance Evaluation of Penetration Reinforcing Agent for Power Plants
Facilities
Ki Beom Kim & Jong Suk Lee (Korea Institute of Construction Technology, Korea), Myong Suk Cho
(Hydro & Nuclear Power CO.LTD Central Research Institute, Korea) & Do Gyeum Kim (Korea
Institute of Construction Technology, Korea)
330: A Study on the Improvement for Measuring Accuracy about the High Speed
Weigh-In-Motion
Hyo Kyoung EO, Bum Jin PARK, Weon Eui KANG, Feng Li (Korea Institute of Construction
Technology, Korea)
331: A Study on the Method for Measuring the Field Density
Weoneui Kang, Bumjin Park, Feng Li, Hyokyoung Eo (Korea Institute of Construction Technology,
Korea)
14:30-16:15
Session Field:
Biomedical Engineering
258: The Effect of Coronary Tortuosity on Coronary Pressure: A Patient-Specific
Study
Xinzhou Xie, Yuanyuan Wang & Hu Zhou (Fudan University, China)
316: Modification of Poly (3-Hydroxybutyrate-Co-4-Hydroxybutyrate)/Chitosan
Blend Film by Chemical Crosslinking
Amirul A. A, Rennukka M. (Universiti Sains Malaysia, Malaysia)
Program- Oral Sessions
Biomedical Engineering
8:45-10:15, December 16, 2012 (Meeting Room 5)
Session Chair:
241: Automatic Segmentation of Cardiac Tumor in Echocardiogram based on Sparse
Representation and modified ACM
Yi Guo
Fudan University
Yuanyuan Wang
Fudan University, China
Dehong Kong
Zhongshan Hospital of Fudan University
Xianhong Shu
Zhongshan Hospital of Fudan University
256: Model-based Photoacoustic Image Reconstruction Based on Polar Coordinates
Yan Zhang
Fudan University
Yuanyuan Wang
Fudan University
Chen Zhang
Fudan University
347: Correction of Light Source Fluctuation error of Quantitative Spectroscopic
Tomography for the Non-invasive Measurement of the Biogenic-substances
Pradeep Kumara
Kagawa University
400: Fabrication and Characterization of Glass Spheres & SU-8 Mixture Mold and Its
Microchannel for an Electrochemical Immunosensor
Yoonkyung Nam
Korea University
Youngmi Kim Pak
Kyung Hee University
James Jungho Pak
Korea University
412: Coercivity Weighted Langevin Magnetization; A New Approach to Interpret
Superparamagnetic and Nonsuperparamagnetic Behaviour in Single Domain
Magnetic Nanoparticles
Dhanesh Kattipparambil Rajan
Tampere University of Technology
Jukka Lekkala
Tampere University of Technology
377: Synthesis and Characterization of Carbonated Hydroxyapatite Mesoporous
Particles
Fei-Yee Yeoh
Universiti Sains Malaysia
Nur Farahiyah Mohammad
Universiti Sains Malaysia
Radzali Othman
Universiti Sains Malaysia
241
Automatic Segmentation of Cardiac Tumor in Echocardiogram based on
Sparse Representation and modified ACM
Yi Guo a, Yuanyuan Wang a,*, Dehong Kong b, Xianhong Shu b
a Department of Electronic Engineering, Fudan University, Shanghai 200433, China
E-mail addresses: guoyi@fudan.edu.cn, yywang@fudan.edu.cn
b Department of Echocardiography, Zhongshan Hospital of Fudan University,
Shanghai 200032, China
E-mail address: kongdh99@hotmail.com
Abstract
The automatic segmentation of cardiac tumors in echocardiograms is essential for the
quantitative evaluation of cardiovascular disease. This paper presents an
automatic segmentation approach for the cardiac tumor and atrial wall based on the sparse
representation and the modified active contour model (ACM). The K-Singular Value
Decomposition (K-SVD) algorithm is applied to represent the echocardiogram sparsely to
obtain the initial contour. With the sparse coefficient as the external force, a modified active
contour model is implemented to refine the final boundary. The feasibility of this approach is
evaluated on echocardiogram image sets by comparing with the traditional ACM and
manually defined contour. Results demonstrate that this algorithm greatly outperforms the
traditional ACM and its segmentation result is close to the manual boundary.
Keywords: echocardiogram segmentation, cardiac tumor, sparse representation, active contour
model
1. Introduction
Primary cardiac tumors are rare entities in cardiovascular disease. It has been reported that
cardiac tumors are present in 0.001%-0.28% of autopsy cases. Approximately 75% of them in
adults are benign, with the majority composed of myxomas [1]. Primary cardiac tumors
are generally asymptomatic, and about 12% of cases are found incidentally during the evaluation
of an unrelated medical condition [2].
With a detection rate of 95.2%, transthoracic echocardiography is one of the most widely used
medical imaging modalities to diagnose intracardiac tumors, for its noninvasive nature, low
cost, and continuing improvements in the image quality. It provides plenty of cardiac
information, including the size and shape of the heart and the tumor, its pumping capacity and
the location and extent of any damage to its tissues [3]. In general, echocardiogram shows
that most myxomas have a stalk, are gelatinous and have a broad base. The surface may be
friable or villous. The internal echoes are heterogeneous. The tumors show continuity with
the atrial wall, with a high degree of mobility, as shown in Fig. 1 [4]. These ultrasonic features
help doctors to diagnose cardiac tumors. Since myxomas are attached to the atrial wall by a
stalk, the segmentation of the tumor and the atrial wall is an important first step, forming
the basis of the further feature estimation. Manual segmentation carried out by
experts is time consuming and tedious. The results are highly dependent on the experience of
experts and suffer from variability among different observers. Hence, the demand for fully
automatic cardiac tumor and atrial wall segmentation is increasing. Automatic segmentation
of echocardiograms is a challenging task because of the poor image contrast, large amount of
speckle noise, signal drop-out, artifacts and missing contours. It is especially
difficult in an echocardiogram with a cardiac tumor. This is because the chamber changes
greatly between the systolic and diastolic phases. In the systolic stage, the chamber shrinks so
much that it is almost filled by the cardiac tumor, leading to the overlap of the atrial wall and
tumor boundary.
Fig.1 Primary cardiac myxomas in an echocardiogram.
In this paper, a novel automatic segmentation method is proposed for the cardiac tumor by
using sparse representation and active contour model (ACM). Firstly, the K-Singular Value
Decomposition (K-SVD) algorithm is applied to represent the echocardiogram sparsely to
obtain the initial contour. Then after defining an external force, a modified active contour
model is implemented to refine the final atrial wall and cardiac tumor boundary.
2. Method
2.1 Preprocessing
The speckle in echocardiograms is a form of locally correlated multiplicative noise, which
degrades the image contrast resolution, limiting the detectability of small, low-contrast
lesions and thus making images generally difficult for the non-specialist to interpret. Hence, it is
essential to remove the speckle [5].
Here, a modified non-local-based filtering algorithm (MNL) is used to eliminate the speckle
[6]. It uses region comparison instead of pixel comparison, extending the “neighborhood
window” to the “whole image”. The MNL method can effectively reduce the speckle without
affecting important image features or destroying anatomical details, which is useful for the
subsequent boundary and feature extraction.
2.2 The K-SVD decomposition
The K-SVD algorithm is then introduced as a method for sparse signal representation. It is
a generalization of the K-means algorithm for the iterative construction of a
dictionary of prototype signal atoms and the sparse coding of an image [7]. Through the
overcomplete dictionary, the original image can be decomposed into a sparse coefficient
matrix populated primarily with zeros. Only a few non-zero coefficients reveal the nature of
the image, greatly reducing the complexity of the original image and saving computer
memory. More details are given in [7]. Here, we concentrate on the decomposition of the
echocardiograms.
For an original echocardiogram image $Y \in \mathbb{R}^{M \times N}$, the K-SVD algorithm defines an
overcomplete dictionary matrix $D$ that contains $K$ prototype atoms as its columns $\{d_j\}_{j=1}^{K}$.
The image $Y$ can be represented as a sparse linear combination of these atoms. The K-SVD is
designed to seek the sparsest representation coefficients $X$ of $Y$ that give the best image
representation [8]:

$$\min_{X} \|Y - DX\|_2^2 \quad \text{s.t.} \quad \|X\|_0 \le T_0 \qquad (1)$$

where $\|\cdot\|_0$ counts the non-zero coefficients, $\|\cdot\|_2$ is the $l_2$-norm, and $T_0$ is a sparsity threshold.
The K-SVD replaces the pixel intensities in the image patches with the sparse coefficients. It
divides the echocardiogram $Y$ into $L$ overlapping image patches of size $b \times b$, ordered
lexicographically as column vectors $y_i \in \mathbb{R}^{b^2}$, with $L = (M-b+1) \times (N-b+1)$. The sparse
representation of the $L$ image patches is:

$$\min_{x_i} \|y_i - Dx_i\|_2^2 \quad \text{s.t.} \quad \|x_i\|_0 \le T_0, \quad i = 1, 2, \dots, L \qquad (2)$$
The parameters $D$ and $X$ are unknown and may be optimized through a number of
iterations, each comprising two steps. In the sparse coding step, according to (2), $D$ is fixed
while the coefficient matrix $X$ is optimized through an orthogonal matching pursuit (OMP)
method. During the dictionary updating step, a better dictionary $D$ is obtained sequentially,
with $X$ fixed. In order to reduce the mean square error, we fix all other columns in $D$
while only one column $d_k$ is updated at a time, finding a new column $\tilde{d}_k$ and new
coefficient values. Let $X_T^k$ denote the $k$-th row of $X$, which holds the coefficients of $d_k$
in the current iteration. Then (2) can be rewritten as:

$$\|Y - DX\|_2^2 = \left\|Y - \sum_{j=1}^{K} d_j X_T^j\right\|_2^2 = \left\|\left(Y - \sum_{j \neq k} d_j X_T^j\right) - d_k X_T^k\right\|_2^2 = \left\|E_k - d_k X_T^k\right\|_2^2 \qquad (3)$$

Define $\omega_k$ as the group of indices of the patches $\{y_i\}$ that use the atom $d_k$. Then, we have

$$\omega_k = \{i \mid 1 \le i \le L,\ X_T^k(i) \neq 0\} \qquad (4)$$

$\Omega_k$ is a matrix of size $L \times |\omega_k|$, with ones on the $(\omega_k(i), i)$-th entries and zeros elsewhere. We
define $Y_k^R = Y\Omega_k$ and $E_k^R = E_k\Omega_k$. Then, (3) is rewritten as:

$$\left\|E_k\Omega_k - d_k X_T^k\Omega_k\right\|_2^2 = \left\|E_k^R - d_k X_R^k\right\|_2^2 \qquad (5)$$

The SVD decomposes $E_k^R$ into $U\Delta V^T$; $d_k$ is replaced by the first output basis vector of $U$,
and the non-zero coefficients in $X_T^k$ are updated accordingly.
After several iterations of sparse coding and dictionary updating, all atoms in $D$ are optimized.
Then, according to (1), the original echocardiogram image $Y$ is represented by the sparse
coefficient matrix $X \in \mathbb{R}^{b^2 \times L}$.
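To make the alternation between (1), (2) and (3)-(5) concrete, the following minimal Python sketch implements the two K-SVD steps. The paper's experiments were done in Matlab, so this is only an illustration written for clarity, not efficiency: b, K and T0 follow the paper's symbols, but the numeric defaults and helper choices (OMP via scikit-learn, atoms initialized from random patches) are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.linear_model import orthogonal_mp

def ksvd(image, b=8, K=128, T0=5, n_iter=10, seed=0):
    """Minimal K-SVD sketch: alternate OMP sparse coding (Eq. 2) and rank-1
    SVD dictionary updates (Eqs. 3-5) over overlapping b-by-b patches."""
    # L overlapping patches, vectorized as the columns of Y (b^2 x L).
    Y = extract_patches_2d(image.astype(float), (b, b)).reshape(-1, b * b).T
    rng = np.random.default_rng(seed)
    D = Y[:, rng.choice(Y.shape[1], size=K, replace=False)]  # init atoms from patches
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(n_iter):
        # Sparse coding step: at most T0 non-zero coefficients per patch.
        X = orthogonal_mp(D, Y, n_nonzero_coefs=T0)
        # Dictionary update step: revise one atom at a time.
        for k in range(K):
            omega = np.flatnonzero(X[k])                      # Eq. (4)
            if omega.size == 0:
                continue
            # Residual without atom k, restricted to the patches that use it.
            E = Y[:, omega] - D @ X[:, omega] + np.outer(D[:, k], X[k, omega])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                                 # first basis vector of U
            X[k, omega] = S[0] * Vt[0]                        # updated coefficients
    return D, X
```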
2.3 Initial contour
As shown in Fig. 1, the echocardiogram is composed of two kinds of areas. The cardiac
chamber is in the middle of the image, with lower intensities and a uniform distribution. By
contrast, the cardiac tumor and the atrial wall have much higher intensities, varying greatly.
Hence, after several iterations, the K-SVD decomposition yields an overcomplete dictionary
$D$ containing these two types of atoms, corresponding to the texture areas (the cardiac tumor,
the atrial wall and the myocardium) and the homogeneous areas (the cardiac chamber),
respectively. The sparse coefficients $X$ of these two areas are quite different. For the texture
areas, the coefficient columns contain a small number of non-zero coefficients, while the
coefficient columns of the homogeneous areas are all zeros. Then, we can classify the two
sorts by the sum of each column in $X$, denoted $X\_sum \in \mathbb{R}^{1 \times L}$:
$$X\_sum_i = \sum_{j=1}^{b^2} X_{ji}, \quad i = 1, \dots, L \qquad (6)$$
The $X\_sum$ values of the texture areas are non-zero, while those of the homogeneous parts are
zero. According to the classification result, the original image is converted into a binary image
in which the homogeneous areas have intensity 1, as shown in Fig. 2(a).
Because of the intensity inhomogeneity in echocardiograms, some regions in the myocardium
are misclassified. We apply the mathematical morphology method to choose 8-connected
sub-regions and calculate the Euclidean distance from each sub-region to the center of the
cardiac rectangle, to remove the irrelevant ones and obtain the chamber, as shown in Fig.
2(b). The Canny edge operator is utilized to get the rough initial contour of the cardiac tumor
and the atrial wall, as shown in Fig. 2(c).
Fig. 2. The processing steps: (a) the binary image of Fig. 1 after the classification, (b) the
cardiac chamber after the mathematical morphology, (c) the initial boundary.
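The pipeline of this section can be sketched as follows. This is a minimal Python illustration, not the authors' code; the patch-grid shape, the structuring element, and the cardiac-center argument are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage import feature, morphology

def initial_contour(X_sum, grid_shape, center):
    """Sketch of Section 2.3: patches whose coefficient-column sum (Eq. 6) is
    zero are homogeneous (chamber); keep the 8-connected sub-region nearest
    the cardiac center and take its Canny edges as the rough initial contour."""
    binary = (X_sum.reshape(grid_shape) == 0)            # Fig. 2(a) analogue
    binary = morphology.binary_opening(binary, morphology.disk(2))
    labels, n = ndimage.label(binary, structure=np.ones((3, 3)))  # 8-connectivity
    if n == 0:
        return None
    centroids = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    dists = [np.hypot(r - center[0], c - center[1]) for r, c in centroids]
    chamber = labels == (int(np.argmin(dists)) + 1)      # Fig. 2(b) analogue
    return feature.canny(chamber.astype(float))          # Fig. 2(c) analogue
```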
2.4 Refine the boundary with modified Active Contour Model
The ACM, also called the snake, is based on curve evolution and energy function
minimization. It is a set of vertices moving in reaction to two components: the internal forces,
derived from the assumed properties of the shape, and the external forces, stemming from the
image features [9]. The ACM can be formulated as:

$$E = \int_0^1 \left[ \frac{1}{2}\left( \alpha |X'(s)|^2 + \beta |X''(s)|^2 \right) + E_{ext}[X(s)] \right] ds \qquad (7)$$
where X (s) and X (s) are first and second derivative of the image, α and β are their weighting
constants, a measure of the elasticity and the stiffness respectively. X (s) and X (s) constitute
the internal energy caused by bending contours. Eext[X(s)] is called as the external energy,
attracting to image features, usually the high gradient. Through the internal and external
forces, the contour iteratively approaches the boundary of an object through the minimization
of the energy.
Though the ACM has been successfully applied to medical image segmentation, it suffers
from two key difficulties. First, it is sensitive to the initial contour. The initial contour must be
close to the true boundary, or it is difficult to converge to the desired segmentation. Here, from
Fig. 2(c), we can see that the sparse representation can easily provide a close initial contour,
offering a solution.
Another weakness lies in the external forces, which may fail to pull the contour into boundary
concavities. Usually, the gradient or the gradient vector flow (GVF) [10] is utilized as the
external force, which is vulnerable to image noise; this is especially challenging in
echocardiograms with poor image contrast and inhomogeneous intensities. As mentioned in
Section 2.3, $X\_sum$ represents local characteristics of each image patch, including the
information of local intensities, variances, neighborhood distributions and texture features. It
is more robust than the gradient, the traditional external force. Therefore, the ACM is
modified here by adopting the sum of sparse coefficients as a new class of external force.
The modified ACM helps the deformable contour to move iteratively in a desired manner,
finally eliminating the ambiguity and converging to the true boundary.
3. Experiments and Results
For evaluating the algorithm, we obtained 15 transthoracic echocardiogram image sets from Zhongshan Hospital, Shanghai. The data was collected without any regard to image quality during routine echocardiographic examinations. All experiments were implemented in Matlab 2010 on a PC with a 3.10 GHz Intel Core 2 processor. The reference standard is the manual segmentation result by an expert. To validate the efficacy and robustness of our algorithm, its performance is compared with the traditional ACM. Both methods used the same preprocessing and initial contour. Here the parameters α and β are both 0.15, and the snake iteration number is 50. For evaluation, the mean absolute difference (MAD), the maximum difference (MAXD) and the area overlap (AO) [11] are taken into account.
3.1 Visual Comparison
Fig. 3 is one example of atrial wall and cardiac tumor segmentation in an echocardiogram. Fig. 3(a) is manually segmented by an expert, while (b) and (c) are the results of the traditional ACM and our algorithm, respectively. Clearly, the contour extracted by our algorithm is the smoothest and closely approximates that of the expert. It has a broad capture range and superior convergence properties. As the external force of the traditional ACM depends merely on the gradient, it is quite sensitive to noise. In Fig. 3(b), the left region and the top right corner of the contour are incapable of progressing to the true boundary.
Fig. 3. The segmentation results by (a) the manual method, (b) the traditional ACM, (c) our method.
3.2 Analysis of algorithm performance
Let A be the algorithm-segmented contour with vertices {aᵢ: i = 1…K} and M the manually segmented contour with vertices {mₙ: n = 1…N}. The distance between a vertex aᵢ and the contour M is defined as:
d(aᵢ, M) = min_n | aᵢ − mₙ |        (8)

Based on it, MAD and MAXD are calculated as:

MAD = (1/K) Σ_{i=1}^{K} d(aᵢ, M)        (9)

MAXD = max_{i∈[1,K]} d(aᵢ, M)        (10)
Another quantitative measurement is the area-based metric, which compares the areas enclosed by the two contours. The different regions are shown in Fig. 4.
Fig. 4. Area-based metrics: the regions defined by the manually segmented and the algorithm-segmented contours.
Here TP is the true positive, FP is the false positive, TN is the true negative, FN is the false
negative. Then, the parameter AO can be calculated as:
AO = [TP/(TP+FN+FP)]*100%
(11)
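All three metrics are straightforward to compute from the contours and segmentation masks. A minimal sketch (assuming contours as point arrays and segmentations as boolean masks) might read:

    import numpy as np

    def mad_maxd(A, M):
        # d(a_i, M) = min_n |a_i - m_n| for every vertex of A, eq. (8).
        d = np.linalg.norm(A[:, None, :] - M[None, :, :], axis=2).min(axis=1)
        return d.mean(), d.max()      # MAD (eq. (9)) and MAXD (eq. (10))

    def area_overlap(seg_A, seg_M):
        # AO = [TP / (TP + FN + FP)] * 100%, eq. (11).
        tp = np.logical_and(seg_A, seg_M).sum()
        fp = np.logical_and(seg_A, ~seg_M).sum()
        fn = np.logical_and(~seg_A, seg_M).sum()
        return 100.0 * tp / (tp + fn + fp)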
Table 1 shows the mean and standard deviation of the segmentation results on the 15 transthoracic echocardiogram images. MAD represents the mean error between two contours, while MAXD is the maximum error.
Table 1. The MAD, MAXD and AO comparison (mean | standard deviation).

                      MAD (pixel)     MAXD (pixel)    AO (%)
our algorithm         2.733 | 0.642   12.31 | 13.71   74.04 | 0.002
the traditional ACM   2.930 | 1.005   14.83 | 48.88   73.41 | 0.003
Our algorithm has the smaller MAD and MAXD. Meanwhile, the standard deviation of our algorithm is far lower, especially for MAXD, which indicates that our method is more robust. AO measures the proportion of the area correctly identified by the algorithm. The mean AO of our method is higher than that of the traditional ACM. This verifies that our contour is more similar to the manually segmented contour.
4. Conclusion
In this paper, we introduce a novel segmentation methodology for the cardiac tumor and the atrial wall in echocardiograms. The algorithm is a combination of sparse representation and a modified ACM. Evaluations on 15 transthoracic echocardiogram image sets show that our method is capable of mimicking the expert's manual segmentation, therefore providing highly robust and accurate results.
Our method will be further applied to the segmentation of other kinds of atrial masses in echocardiograms, including malignant cardiac tumors and thrombi. In addition, future work will explore more advanced ultrasonic features of benign cardiac tumors, malignant cardiac tumors and thrombi, which will help doctors diagnose and distinguish these three masses.
5. Acknowledgement
This work was supported by the National Natural Science Foundation of China (Grant No.
61271071 and No. 11228411), the National Key Technology R&D Program of China (No.
2012BAI13B02) and Specialized Research Fund for the Doctoral Program of Higher
Education of China (No. 20110071110017).
6. References
[1] S. Maraj, G.S. Pressman, and V.M. Figueredo, Primary cardiac tumors, International Journal of Cardiology, Vol. 133, No. 2, pp. 152-156, 2009.
[2] K. Reynen, Cardiac myxomas, The New England Journal of Medicine, Vol. 333, No. 24, pp. 1610-1617, 1995.
[3] Y. Guo, Y. Wang, D. Kong and X. Shu, Automatic Endocardium Extraction for Echocardiogram, Proceedings of Biomedical Engineering and Informatics (BMEI), Vol. 1, pp. 155-159, Shanghai, 2011.
[4] J.M. Sarjeant, J. Butany, and R.J. Cusimano, Cancer of the heart: epidemiology and management of primary tumors and metastases, American Journal of Cardiovascular Drugs: drugs, devices and other interventions, Vol. 3, No. 6, pp. 407-421, 2003.
[5] J.M. Sanches, J.C. Nascimento, and J.S. Marques, Medical image noise reduction using the Sylvester-Lyapunov Equation, IEEE Transactions on Image Processing, Vol. 17, No. 9, pp. 1522-1539, 2008.
[6] Y. Guo, Y. Wang, and T. Hou, Speckle filtering of ultrasonic images using a modified non local-based algorithm, Biomedical Signal Processing and Control, Vol. 6, No. 2, pp. 129-138, 2011.
[7] M. Aharon, M. Elad, and A. Bruckstein, K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation, IEEE Transactions on Signal Processing, Vol. 54, No. 11, pp. 4311-4322, 2006.
[8] M. Elad and M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Transactions on Image Processing, Vol. 15, No. 12, pp. 3736-3745, 2006.
[9] M. Kass, A. Witkin, and D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision, Vol. 1, No. 4, pp. 321-331, 1988.
[10] C. Xu and J. Prince, Snakes, shapes, and gradient vector flow, IEEE Transactions on Image Processing, Vol. 7, No. 3, pp. 359-369, 1998.
[11] N. D. Nanayakkara, J. Samarabandu and A. Fenster, Prostate segmentation by feature enhancement using domain knowledge and adaptive region based operations, Physics in Medicine and Biology, Vol. 51, No. 7, pp. 1831-1848, 2006.
256
Model-based photoacoustic image reconstruction based on polar
coordinates
Yan Zhang a, Yuanyuan Wang a,*, Chen Zhang b
a Department of Electronic Engineering, Fudan University, No. 220 Handan Road, Shanghai, China
* E-mail address: yywang@fudan.edu.cn
Abstract
In this paper, we present a novel model-based reconstruction algorithm for 2D
circular-scanning photoacoustic imaging (PAI). In the proposed algorithm, the forward model
is established based on polar coordinates. For circular-scanning PAI, the geometric symmetry property is neglected in Cartesian coordinates, while in the polar coordinate system this geometric property can be fully utilized. By taking advantage of this geometric property, much unnecessary calculation can be avoided and storage can also be saved in the proposed algorithm. Through numerical simulation, the proposed algorithm is verified and compared to another reconstruction method. It is revealed that the proposed algorithm costs less calculation time and provides an equivalent reconstructed image. With the demonstration of the high performance of this algorithm, it can be concluded that the proposed algorithm may be an efficient reconstruction algorithm for 2D circular-scanning PAI.
Keyword: photoacoustic imaging; model-based reconstruction; forward model; polar
coordinates.
1. Introduction
Photoacoustic imaging (PAI) is a noninvasive medical imaging technique with great potential in many clinical applications, e.g. early cancer detection [1], vascular imaging [2, 3], and structural and functional brain imaging [4]. It is effective for imaging deeper tissues compared with traditional optical imaging. On the other hand, PAI can also reconstruct images with higher contrast and resolution than traditional ultrasound imaging. So it combines the advantages of optical imaging and ultrasound imaging [5].
In photoacoustic imaging, a laser pulse is used to irradiate the biological tissue, and an ultrasound wave is excited. The ultrasound signal is received by a scanning transducer or a transducer array. The photoacoustic images can be reconstructed by using reconstruction algorithms. Many algorithms have been proposed for PAI image
reconstruction [6]-[8]. Among these methods, the filtered back-projection (FBP) method is widely used for its simplicity and ease of implementation. Temporal back-projection reconstruction algorithms have been developed for various geometries. However, the FBP reconstruction is not exact enough, so artifacts often exist in the FBP-reconstructed image, leading to a degradation of the reconstruction quality. A typical artifact is negative optical absorption pixels at the sharp edges or the detailed features of the image. These negative optical absorption pixels have no physical interpretation, and such artifacts cannot be prevented by using the FBP method [9].
Different from the analytical reconstruction methods, another way to accomplish the
reconstruction is to establish a forward model for the photoacoustic wave propagation.
Instead of solving the photoacoustic equations analytically, we obtain the PAI image by
minimizing the error between the measured signals and the signals theoretically predicted by
the forward model [9]. The choice of an appropriate forward model is important during the
reconstruction. The forward model should be established preserving the inherent physical properties, so that the signal fidelity can be guaranteed. Along with the forward model, image regularization can be involved in the reconstruction [10], and many regularization techniques from image processing can be employed. In this way, the algorithm requires less data than conventional ones and the reconstruction error can be reduced; meanwhile, the computational complexity increases correspondingly. Besides, the practical applicability should also be considered. For example, a forward model may be accurate but so sophisticated that it takes an unacceptably long time to accomplish the reconstruction; such a model must be simplified for practical use. The model-based PAI image reconstruction is expected to be accelerated while the image is preserved at a high quality level.
In this paper, we propose a novel model-based reconstruction algorithm for 2D circular-scanning PAI. The proposed algorithm is based on a novel forward model, which is established in polar coordinates. In circular-scanning PAI, the scanning trajectory is centrosymmetric. Different from the forward model in Cartesian coordinates, the proposed model takes advantage of this symmetry. In this way, we need to calculate the measurement matrix at only one detection point, and the measurement matrices at the other detection points can be obtained by reordering the columns of the matrix. Therefore, the measurement matrix need not be repeatedly loaded, and the calculation of the measurement matrices can be saved. Through numerical simulation, the proposed algorithm is verified and compared to other reconstruction algorithms. The performance of the proposed algorithm is shown in the simulation results, and the calculation time is also compared.
2. Theory and method
This paper concerns 2D circular-scanning PAI. The scheme of 2D circular-scanning
PAI is shown in Fig.1.
Fig.1 The scheme of the 2D circular-scanning PAI.
For the spatially uniform laser pulse, the relationship between the photoacoustic signals and
the optical absorption distribution is [6]
(1/c²) ∂²p(r, t)/∂t² − ∇²p(r, t) = (β/C_p) A(r) ∂I(t)/∂t ,        (1)
where p(r, t) is the acoustic pressure at the position r and the time t, c is the sound speed in
the tissue, β is the isobaric expansion coefficient, Cp is the specific heat, A(r) is the spatial
deposition of the absorbed laser energy and I(t) is the temporal profile of the laser pulse.
The equation (1) can be solved by using Green's function; p is obtained as

p(r₀, t) = (β/(4πC_p)) ∂/∂t ∬_{|r′−r₀|=ct} ( A(r′) / t ) d²r′ ,        (2)

where r₀ is the norm of the vector r₀, meaning the radius of the scanning circle.
Fig.2 Discretization of integral path in the forward model.
The acoustic pressure signal is a curvilinear integral of the energy deposition along a certain
arc. As the optical deposition is discretized in polar coordinates, the integral path is also
discretized in the forward model. Half of the integral path is uniformly divided into M parts
as shown in Fig. 2. With this method, the calculation for the other half can be deduced by using the axisymmetry. Then the curvilinear integral is converted into the summation of the absorbed energy deposition at the points S₁, S₂, …, S_{M+1}.
As depicted in Fig. 2, O is the center of the polar coordinates and D is the detection point, with coordinates (0, r₀). The length of SᵢD is ct.
According to the law of cosines, the angular coverage θ can be calculated as

θ = arccos( (2r₀² − c²t²) / (2r₀²) ) .        (3)
The coordinates of S₁, S₂, …, S_{M+1} can be calculated correspondingly; assume that they are (φ₁, ρ₁), (φ₂, ρ₂), …, (φ_{M+1}, ρ_{M+1}). By defining a new variable g, the equation (2) can be discretized as

g(t) = ∫₀ᵗ p(τ) dτ = Σ_{i=1}^{M+1} A(φᵢ, ρᵢ) Δs .        (4)
Here, the bilinear interpolation is used, and A(φᵢ, ρᵢ) becomes the weighted sum of the optical deposition values at the four neighboring points of the discretized grid. Here A is given as

A(φᵢ, ρᵢ) = A(φ_floor, ρ_floor)·(φ_ceil − φᵢ)·(ρ_ceil − ρᵢ) + A(φ_floor, ρ_ceil)·(φ_ceil − φᵢ)·(ρᵢ − ρ_floor)
          + A(φ_ceil, ρ_floor)·(φᵢ − φ_floor)·(ρ_ceil − ρᵢ) + A(φ_ceil, ρ_ceil)·(φᵢ − φ_floor)·(ρᵢ − ρ_floor) ,   (5)

where (φ_floor, ρ_floor) is the floor of the coordinates (φᵢ, ρᵢ) in the grid and (φ_ceil, ρ_ceil) is the ceiling of the coordinates (φᵢ, ρᵢ) in the grid.
After interpolation, the relationship between the variable g and the energy deposition A can
be expressed as
N
g (t )  W ( k ,  k )  A( k ,  k ) ,
(6)
k 1
where N is the total number of the grids’ elements in the polar coordinates.
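To illustrate the discretization, the sketch below assembles one row of the measurement matrix W of equation (6) from the arc geometry of equation (3) and the bilinear weights of equation (5). It is one possible reading of the model rather than the authors' code; the grid parameters and the parameterisation of the arc by the aperture angle at the detector are assumptions.

    import numpy as np

    def measurement_row(t, r0, c, n_phi, n_rho, rho_max, M=64):
        # One row of W for the detector at polar angle 0 and radius r0.
        # The constant arc-length element ds is left out as a scale factor.
        ct = c * t
        # Half of the integral path, parameterised by the aperture angle at
        # the detector; it ends where the arc meets the scanning circle.
        psi_max = np.arccos(np.clip(ct / (2.0 * r0), -1.0, 1.0))
        psi = np.linspace(0.0, psi_max, M + 1)
        x = r0 - ct * np.cos(psi)              # Cartesian positions of S_i
        y = ct * np.sin(psi)
        phi = np.mod(np.arctan2(y, x), 2.0 * np.pi)
        rho = np.hypot(x, y)
        # Fractional grid indices and the four-neighbour weights of eq. (5).
        fi = phi / (2.0 * np.pi) * n_phi
        fj = rho / rho_max * (n_rho - 1)
        i0 = np.floor(fi).astype(int) % n_phi
        j0 = np.minimum(np.floor(fj).astype(int), n_rho - 2)
        wi, wj = fi - np.floor(fi), fj - j0
        row = np.zeros(n_phi * n_rho)
        for ii, jj, w in [(i0, j0, (1 - wi) * (1 - wj)),
                          (i0, j0 + 1, (1 - wi) * wj),
                          ((i0 + 1) % n_phi, j0, wi * (1 - wj)),
                          ((i0 + 1) % n_phi, j0 + 1, wi * wj)]:
            np.add.at(row, ii * n_rho + jj, w)
        return row  # the other half of the arc follows by the axisymmetry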
At the other sampling points, because the scanning trajectory is centrosymmetric, the corresponding measurement matrix can be obtained as

W_α(φᵢ, ρ) = W( mod(φᵢ − α, 2π), ρ ) ,        (7)

where α is the biased angle of the sampling point and mod(·) is the remainder operator.
Converting A and g into column vectors, the equation (6) can be converted into the matrix multiplication form

g = W · A .        (8)
This is the photoacoustic forward model and can be used for the image reconstruction. We set
the objective function as the error between the measured signals and the signal theoretically
predicted using the forward model. The reconstructed image can be obtained by minimizing
the objective function.
Â = arg min_A ‖ g − W · A ‖ .        (9)
Usually, iteration is implemented to accomplish the image reconstruction. In this paper, the iteration formula is given as

A ← A − Wᵀ (W · A − g) / ‖W‖² .        (10)

The image A is updated at each sampling point. The iteration is then repeated until the exit criterion is met. Generally, the exit criterion can be set as the error being under a certain level or the iteration number exceeding a pre-specified number. After the iterations, the photoacoustic image reconstruction is accomplished.
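Written out, the reconstruction loop of equations (8)-(10) might look as follows. This is a sketch under stated assumptions: perm(k) is a hypothetical permutation implementing the column reordering of equation (7), and the step size uses the spectral norm of W.

    import numpy as np

    def reconstruct(W, g_all, perm, n_iter=50):
        # W      : measurement matrix computed once, at detection point 0 (T x N).
        # g_all  : list of time-integrated measured signals, one per detector.
        # perm(k): column permutation mapping W to detector k (eq. (7)).
        A = np.zeros(W.shape[1])
        step = 1.0 / (np.linalg.norm(W, ord=2) ** 2)   # 1 / ||W||^2 of eq. (10)
        for _ in range(n_iter):
            for k, g in enumerate(g_all):
                Wk = W[:, perm(k)]                     # reorder, do not recompute
                A = A - step * (Wk.T @ (Wk @ A - g))   # update of eq. (10)
        return A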
3. Simulation
The numerical simulation is utilized to validate the proposed model-based photoacoustic image reconstruction algorithm. The simulation experiments are performed using Matlab (version 7.8.0.0347) on a PC with a 2.0 GHz quad-core CPU and 2.93 GB memory.
In the simulation, the original absorbed laser energy deposition image is given in Fig. 3(a). The radius of the scanning circle is 50 mm, and the location of the scanning circle is depicted in Fig. 3(a) with the yellow circle. The sound speed is 1500 m/s, which is kept constant in the simulation.
By using the equation (2), we calculate the photoacoustic signals at 60 different detection points. These detection points are uniformly distributed on the scanning circle. The image reconstructions are accomplished by using the iterative algorithm (IR) [11] in Cartesian coordinates and the proposed algorithm in polar coordinates. In the IR method in Cartesian coordinates, the interval of the discretized grid is 0.2 mm, while in the proposed method in polar coordinates, the angular interval is 2° and the radial interval is 0.2 mm.
The reconstruction results are shown in Fig. 3(b) and Fig. 3(c).
Fig. 3. (a) A given optical deposition image, (b) the reconstructed image by the IR method in Cartesian coordinates, and (c) the reconstructed image by the proposed method in polar coordinates.
Fig. 3(b) and Fig. 3(c) show that there is no substantial difference between these two images. The proposed algorithm is able to offer reconstruction quality equivalent to the IR method in Cartesian coordinates. The strong energy absorber is clearly shown in the reconstructed image and the edge information is preserved well. For both reconstructed images, the background noise is suppressed effectively. The reconstructed image is consistent with the original image.
However, when we recorded the calculation times of both algorithms, differences emerged. First, we compare the time cost of calculating the measurement matrices: the IR algorithm in Cartesian coordinates costs 73.9 s, while the proposed algorithm costs 35.7 s. Then we compare the time cost of reconstructing the image: the IR algorithm costs 189.3 s, while the proposed algorithm costs 61.0 s. The better efficiency of the proposed algorithm is revealed by this comparison.
4. Conclusion
In this paper, we present a novel model-based reconstruction algorithm for 2D
circular-scanning photoacoustic imaging. This algorithm is based on the forward model
established in the polar coordinates. In this forward model, the detected signal is represented
as a summation of the energy deposition at a series of points. We minimize the error between
the detected signal and the signal calculated by the proposed forward model to obtain the
photoacoustic image. Because the proposed model fully utilizes the centrosymmetric property of the scanning circle, the calculation can be reduced correspondingly. The proposed algorithm is verified through numerical simulations. The results demonstrate that the proposed method is able to provide an equivalent reconstructed image with less calculation time. The reconstructed image remains at a quality level as high as that of the previously proposed iterative algorithm in Cartesian coordinates. It can be concluded that this model-based algorithm may be an efficient algorithm for circular-scanning PAI.
5. Acknowledgement
This work was supported in part by the National Natural Science Foundation of China under
Grant 10974035 and in part by the Program of Shanghai Subject Chief Scientist under Grant
10XD1400600.
6. References
[1] M. Pramanik, G. Ku, C. Li, and L. V. Wang, Design and evaluation of a novel breast cancer detection system combining both thermoacoustic (TA) and photoacoustic (PA) tomography, Medical Physics, vol. 35, no. 6, pp. 2218-2223, 2008.
[2] E. Z. Zhang, J. G. Laufer, R. B. Pedley, and P. C. Beard, In vivo high-resolution 3D photoacoustic imaging of superficial vascular anatomy, Physics in Medicine and Biology, vol. 54, no. 4, pp. 1035-1046, 2009.
[3] J. J. Niederhauser, M. Jaeger, R. Lemor, P. Weber, and M. Frenz, Combined ultrasound and optoacoustic system for real-time high-contrast vascular imaging in vivo, IEEE Transactions on Medical Imaging, vol. 24, no. 4, pp. 436-440, 2005.
[4] X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain, Nature Biotechnology, vol. 21, no. 7, pp. 803-806, 2003.
[5] C. Li and L. V. Wang, Photoacoustic tomography and sensing in biomedicine, Physics in Medicine and Biology, vol. 54, no. 19, pp. R59-R97, 2009.
[6] M. Xu and L. V. Wang, Universal back-projection algorithm for photoacoustic computed tomography, Physical Review E, vol. 71, no. 1, p. 016706, 2005.
[7] P. Burgholzer, J. Bauer-Marschallinger, H. Grun, M. Haltmeier, and G. Paltauf, Temporal back-projection algorithms for photoacoustic tomography with integrating line detectors, Inverse Problems, vol. 23, no. 6, pp. S65-S80, 2007.
[8] M. Xu, Y. Xu, and L. V. Wang, Time-domain reconstruction algorithms and numerical simulations for thermoacoustic tomography in various geometries, IEEE Transactions on Biomedical Engineering, vol. 50, no. 9, pp. 1086-1099, 2003.
[9] X. L. Dean-Ben, V. Ntziachristos, and D. Razansky, Acceleration of optoacoustic model-based reconstruction using angular image discretization, IEEE Transactions on Medical Imaging, vol. 31, no. 5, pp. 1154-1162, 2012.
[10] Y. Zhang, Y. Wang, and C. Zhang, Total variation based gradient descent algorithm for sparse-view photoacoustic image reconstruction, Ultrasonics, 2012, doi:10.1016/j.ultras.2012.08.012.
[11] G. Paltauf, J. A. Viator, S. A. Prahl, and S. L. Jacques, Iterative reconstruction algorithm for optoacoustic imaging, Journal of the Acoustical Society of America, vol. 112, no. 4, pp. 1536-1544, 2002.
347
Correction of Light Source Fluctuation Error of Quantitative Spectroscopic
Tomography for the Non-invasive Measurement of the Biogenic-substances
Pradeep K.W. Abeygunawardhana a, Wei Qi a, Akira Nishiyama b, Ichirou Ishimaru a
a Dept. of Intelligent Mechanical Systems Engineering, Faculty of Engineering, Kagawa University, 2217-20 Hayashi-cho, Takamatsu, Kagawa-pref., 761-0396, Japan
b Faculty of Medicine, Kagawa University, 1750-1 Miki-cho, Kita, Kagawa-pref., 761-0793, Japan
Abstract
The non-invasive blood sugar sensor using imaging-type 2-dimensional Fourier spectroscopy is to be realized in this work. The spectroscopic imaging, which observes the biological tissue through the dark-field image, can measure biogenic substances such as the glucose concentration quantitatively. For quantitative analysis with high accuracy, the correction of the background effects, such as the light-source fluctuation and the phase-shift uncertainty, is an inevitable issue. Thus, the quantitative band-pass plate on which the grating is locally formed has already been proposed in [1]. In that paper, the diffractive light, whose diffraction angle depends on the wavelength, was used as the reference light. The object lens is used to narrow down the reference light, and a narrowed band-pass diffraction light is obtained. The changes of the imaging intensities with the interference phenomenon over the whole area of the observation image can be confirmed using the quantitative band-pass filter. This paper proposes a light-source fluctuation error correction method for the interferogram in spectroscopic tomography. Compared to the signal frequency, the light-source fluctuation error is a low frequency signal. This low frequency error signal is estimated by an envelope detection method. The estimated signal is used to correct the interferogram measured by imaging-type 2-dimensional Fourier transform spectroscopy.
1. Introduction
Imaging-type 2-dimensional Fourier spectroscopy [2] has been used for the development of a non-invasive blood sugar sensor and wide-field-of-view spectroscopic imaging. The spectroscopic tomography of a mouse's ear using imaging-type 2-dimensional Fourier spectroscopy was obtained at the Ishimaru laboratory at Kagawa University, Japan, for the first time [3]. The spectroscopic imaging, which observes the biological tissue through the dark-field image, can measure biogenic substances such as the glucose concentration quantitatively. However, for quantitative analysis with high accuracy, the correction of the background effects, such as the light-source fluctuation and the phase-shift uncertainty, is an inevitable issue. Thus, the quantitative band-pass plate on which the grating is locally formed was proposed previously. The diffractive light, whose diffraction angle depends on the wavelength, is used as the reference light. The object lens is used to narrow down the reference light, and a narrowed band-pass diffraction light is obtained. This paper proposes a light-source fluctuation error correction method for the interferogram in spectroscopic tomography. Compared to the signal frequency, the light-source fluctuation error is a low frequency signal. This low frequency error signal is estimated by an envelope detection method. The estimated signal is used to correct the interferogram measured by imaging-type 2-dimensional Fourier transform spectroscopy.
This paper is organized as follows. Section II describes the imaging-type 2-dimensional Fourier spectroscopy, Section III explains the spectroscopic tomography of the biological tissues, and Section IV describes the background correction. The paper closes with conclusions.
2. Imaging-type 2-dimensional Fourier Spectroscopy
The proposed imaging-type 2-dimensional Fourier spectroscopy is a wavefront-division interferometer, as shown in Figure 1. The rays from a single bright point on the object surface interfere with each other on the imaging plane. Meanwhile, the rays from the out-of-focus plane, which are illustrated as dotted lines, cannot intersect on the imaging plane and cannot interfere with each other. So, our proposed method can limit the measuring depth to the focal plane. If we scan the measurement plane mechanically in the depth direction, the spectroscopic tomography can be obtained. Next, the principle of the proposed method is explained. A variable phase-filter was installed in the infinity optical system on the Fourier transform plane. This can give an arbitrary phase difference to the half wavefront. The variable phase-filter consists of two mirrors: one is a fixed mirror, the other is a moving mirror. The phase difference is given between the rays reflected from each mirror. In accordance with the phase-shift value caused by the mirror movement, cyclic intensity changes are observed on the imaging plane at every pixel simultaneously. The summation of the cyclic interference intensity changes over all wavelengths is detected as the interferogram on the light-receiving device. As in Fourier transform spectroscopy, the spectrum is analytically acquired from the interferogram. Hence, the spectrum distribution on the focal plane can be acquired.
Figure 1. Imaging-type 2-D Fourier spectroscopy by phase-shift interference between object beams
3. The Spectroscopic Tomography of the biological tissues
The spectroscopic tomography of the living biological tissue in the near-infrared region was
successfully acquired. Based on the distinct feature of our method, the spectroscopic
tomography near the skin surface was obtained.
Figure 2 shows the experimental optics, whose reflection-illumination is also applied for the light-path-length clarification that will be explained in the next chapter. However, the regular reflected light from the tissue surface is much stronger than the back-scattering light from the inner structure. Thus, to detect the weak back-scattering light by eliminating the regular reflected light, the dark-field imaging optics was introduced for the tomographic image.
reflected light, the dark-field imaging-optics was introduced for the tomographic image.The
2-D arrayed device
Light source
Sample: Mouse’s ear
Halogen
lamp
Back-scatteringlight
InGaAs
Camera
Imaging lens
Fixed
mirror
Regular reflected
light
Imaging
lens
Half mirror
Object lens
N.A.:0.39
Relay lens
Variable phase
Movable mirror
filter
Stroke:30μm, Resolution:0.5nm
Sample
Focal
plane
Variable phase
filter
Relay lens
Half
mirror
Object
lens
Sample
100[mm]
Figure 2. Schematic diagrams for the dark-field spectroscopic-tomography of the biological tissue with the near-infrared light
The fixed and moving mirrors were set up with a spaced gap. The regular reflected light was passed through the gap between these two mirrors so as not to reach the imaging device. The sample is illuminated through the condenser lens (Material: BK7) using the halogen lamp (Maker: KLV, Inc., Type: 64528) as the light source. The object lens (N.A.: 0.39, Material: BK7) is used as the observation optics for the detection of the weak back-scattering light. The interference of the weak back-scattering light is formed by the imaging lens using the relay lens on the InGaAs camera (Maker: Hamamatsu Photonics, Type: C10633-13).
The upper left-hand photo in Figure 3(a) shows the observed image of the mouse's ear with the near-infrared light. The computed false-colored images were generated from the spectral absorptance in each pixel. The false color is determined by the area of the spectral absorptance within a certain wavelength region, as shown in the upper right of Figure 3. Figures 3(b-1), (b-2) and (b-3) show the absorptance ratio within 1100 nm to 1300 nm, 1300 nm to 1500 nm, and 1500 nm to 1700 nm, respectively. The different textures indicated by the solid circle and the dotted circle can be confirmed in the different wavelength regions. The quantitative cluster analysis of the biological components from these spectroscopic tomographic images will be carried out in future studies to realize the non-invasive blood sugar sensor.
4. Background Correction Method
A. Why the Background Correction is Necessary
Our research aims at the realization of the non-invasive measurement of biogenic substances, such as the blood glucose concentration, by using imaging-type 2-dimensional Fourier spectroscopy. Here, high accuracy of the quantitative analysis and precise measurement of the components are crucial. The background effects, such as the light-source fluctuation (vertical axis, about 2%) and the phase-shift uncertainty (horizontal axis, about 0.2%), are inevitable issues when aiming at high accuracy. Reference light is necessary to correct the errors due to the light-source fluctuation and the phase-shift uncertainty. The quantitative band-pass plate on which the grating is locally formed has been proposed to create the reference light on the measurement plane. The reference photon detection area can be established on the spectroscopic image, which can be obtained by using imaging-type 2-dimensional Fourier spectroscopy. Also, the mirror reflection light from a skin surface has about 1,000 times the light volume of the scattering light. So, we use the dark-field optics with oblique lighting to eliminate the mirror reflection light (as shown in Figure 6).
Figure 6. Proposed method of the quantitative band pass plate
For the achievement of high accuracy measurement, the error has to be kept below 0.01%. As explained earlier, the diffractive light, whose diffraction angle depends on the wavelength, was used as the reference light. This reference light is the narrowed band-pass diffractive light, which has passed through the objective lens. Thus, the light-source fluctuation can be corrected from the amplitude of the reference light intensity, and the phase-shift uncertainty from the interference phase.
However, the reference light's bandwidth and light volume are in a trade-off relation. That is, if the reference light's bandwidth is narrowed too much, the light volume decreases and amplitude compensation becomes difficult. So, it is necessary to balance the bandwidth and the light volume of the reference light.
B. The Narrowed Bandpass Reference Light
The quantitative band-pass plate is used for the correction of the light-source fluctuation (vertical axis) and the phase-shift uncertainty (horizontal axis). Because the reference light and the weak scattering light can be obtained at the same time from the image, the reference light's interferogram is used for the correction of both the vertical and the horizontal axes. The mimetic diagram of the optical system for the biological measurement using the quantitative band-pass plate is shown in Figure 6.
The reference area with the grating is designed in such a way that it is distinct from the measurement area. Moreover, since the diffraction angle differs for each wavelength (the long wavelength has a large diffraction angle, the short wavelength a small one), the wavelength band can be limited by the N.A. (Numerical Aperture) of the object lens. The diffraction angles of the longest and the shortest wavelengths bound the band, so a narrow wavelength band can be chosen for the reference light. The reference light and the weak scattering light pass through the phase shifter and the imaging lens, and are then imaged on the image plane. The interferogram can be obtained from the image plane, and the correction of the light-source fluctuation and the phase-shift uncertainty can then be conducted with the interferogram at the same time.
C. Design of the grating period
Proper design of the grating period is very important for the selection of the reference light wavelength. The grating is established on the quantitative band-pass plate, and the grating period is designed to obtain a suitable bandwidth. A narrow bandwidth was obtained using the optical system shown in Figure 8.
Initially, the focal distance (f) and the diameter (D) of the object lens are decided based on the resolution of the measurement object. Thus, the angle of aperture (ψ) can be calculated as ψ = arctan(D/(2f)). Then, the incident angle (θi) is decided. From the difference between the incident angle (θi) and the angle of aperture (ψ), the short wavelength's first-order diffraction angle (θs) can be calculated:

θs = θi − 3ψ        (1)

And the long wavelength's first-order diffraction angle (θl) can be obtained as shown in equation (2):

θl = θi − ψ        (2)

The relation of the grating period (d), the diffraction angle (θ) and the wavelength (λ) is the first-order grating equation:

λ = d·sin θ        (3)

Then, based on the wavelengths (λs, λl) and the diffraction angles (θs, θl), the grating period (d) can be calculated.
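As a quick numerical check of equations (1)-(3) as reconstructed above (the input values are those reported in the next subsection; the relations reproduce the reported angles and wavelengths to within rounding):

    import numpy as np

    f, D = 400e-3, 25e-3                      # focal distance and diameter [m]
    psi = np.degrees(np.arctan(D / (2 * f)))  # angle of aperture, ~1.8 deg
    theta_i = 44.21                           # incident angle [deg]
    theta_s = theta_i - 3 * psi               # eq. (1): ~38.8 deg
    theta_l = theta_i - psi                   # eq. (2): ~42.4 deg
    d = 0.83e-6                               # grating period [m]
    lam_s = d * np.sin(np.radians(theta_s))   # eq. (3): ~522 nm
    lam_l = d * np.sin(np.radians(theta_l))   # eq. (3): ~562 nm
    print(psi, theta_s, theta_l, lam_s * 1e9, lam_l * 1e9)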
D. Feasibility demonstration of obtaining narrow band reference light from the halogen lamp
The feasibility of obtaining a narrow band reference light from the wide wavelength band of a halogen lamp is explained in this section. As explained in section C, the object lens's focal distance (f) and diameter (D) are selected as 400 mm and 25 mm respectively. Thus, the angle of aperture (ψ) was calculated as 1.8 [deg.]. Moreover, the incident angle (θi) was obtained as 44.21 [deg.]. Then, using equation (1), the short wavelength's first-order diffraction angle was calculated as 38.78 [deg.]. From equation (2), the long wavelength's first-order diffraction angle was calculated as 42.41 [deg.].
In this experiment, the grating period d = 0.83 [µm]. The short wavelength λs = 522 [nm] and the long wavelength λl = 562 [nm] can be calculated by equation (3). So, the wavelength band is 522~562 [nm], with a width of 40 nm. The optical system is shown in Figure 9.
The halogen lamp with the wide wavelength band was used as the light source. The light from the halogen lamp with the incident angle (θi = 44.21 [deg.]) is diffracted by the grating. Then, the first-order diffracted light passes through the object lens and is reflected by the variable phase filter that is installed at an angle of 45 degrees on the Fourier transform plane. The reflected beams then form an image on the camera (Maker: SONY, Type: XC-75) through the imaging lens.
The experimental results of the narrow wavelength band for the reference light by the quantitative band-pass plate are shown in Figure 4. Phase differences are given to the half flux of the objective rays by the variable phase filter. We confirmed the changes of the imaging intensities with the interference phenomenon over the whole area of the observation image. Because the curve of the interferogram is nearly the same as a sine curve, a single wavelength is approached. A bright-line spectrum is obtained from the interferogram by the Fourier transform.
Figure 4. The experimental results of the narrow wavelength band for the reference light by the quantitative band-pass plate
The bright-line spectrum has a half bandwidth of 14 nm and a wavelength band of about 36 nm, which is very close to the calculated value of 40 nm. So, the narrow wavelength band was obtained from the wide wavelength band by the proposed quantitative band-pass plate. Thus, the nearly single wavelength obtained by the quantitative band-pass plate can be expected to serve as the reference light.
E. Correction of light source fluctuation
The light source fluctuation error is a low frequency signal compared to the signal frequency. Therefore, in this approach, the low frequency signal included in the interferogram is to be extracted. The following steps were used to extract the low frequency signal (a code sketch of the whole pipeline follows equation (5)):
1. Apply a moving average filter.
2. Calculate the square of the intensity.
3. Filter the result with a Butterworth low-pass filter.
4. Take the square root of the filtered data.
The moving average filter was applied to reduce random noise without deteriorating the original signal shape. The moving average filter is given in equation (4):

y[i] = (1/M) Σ_{j=0}^{M−1} x[i + j]        (4)
In our interferogram, 12 data points were collected during one cycle; therefore, we selected M = 3. Then the squares of the intensity values were taken. This makes all values positive, so that the envelope can be easily detected.
The Butterworth filter is a recursive filter which has an excellent passband response:

x̂_n = a₀·x_n + a₁·x_{n−1} + a₂·x_{n−2} + b₁·x̂_{n−1} + b₂·x̂_{n−2} ,        (5)

where x̂_n is the filtered output data, x_n is the unfiltered data, a₀ ~ b₂ are the filter coefficients and n is the sample number. The coefficients a₀ ~ b₂ are functions of the cut-off frequency and the sampling frequency. By applying the Butterworth low-pass filter, the high frequency components can be eliminated. Taking the square root of the filtered signal, the low frequency signal can be extracted. This extracted signal is used to correct the light source fluctuation error in the interferogram.
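A compact sketch of the four-step extraction is given below, using SciPy's Butterworth design in place of the explicit recursion of equation (5); the cut-off and sampling frequencies are hypothetical placeholders.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def light_source_envelope(x, fs, fc, M=3, order=2):
        # Step 1: moving average filter, eq. (4), to reduce random noise.
        y = np.convolve(x, np.ones(M) / M, mode='same')
        # Step 2: squaring makes all values positive for envelope detection.
        y = y ** 2
        # Step 3: Butterworth low-pass keeps only the slow fluctuation.
        b, a = butter(order, fc / (fs / 2.0))
        y = filtfilt(b, a, y)
        # Step 4: square root returns to the original intensity scale.
        return np.sqrt(np.maximum(y, 0.0))

The corrected interferogram can then be formed, for example, by dividing the measured interferogram by this envelope (up to a normalisation constant).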
F. Experimental results
Intensity values were collected using the imaging-type 2-dimensional Fourier transform spectroscopy, and Figure 5 shows the interferogram. The light source was a He-Ne laser.
Figure 5
Figure 6
Figure 6 shows the extracted low frequency signal, and Figure 7 shows the corrected interferogram together with the original interferogram. From Figure 7, it is clear that the error has been reduced compared with the original interferogram.
Figure 7
5. Conclusion
The spectroscopic tomography of the living biological tissue in the near-infrared region was successfully acquired. Based on the distinct feature of our method, the spectroscopic tomography near the skin surface was obtained. To realize the non-invasive blood sugar sensor, the theoretical accuracy of the Fourier spectroscopy was calculated with a numerical simulation. The background correction is very important for the accuracy. There are two error sources which affect the accuracy of the system: the light source fluctuation and the phase shift error. This paper proposed a light source fluctuation error correction through the estimation of the low frequency error signal. The results showed that the method is effective. As future work, we are working to correct the error due to the phase shift and to develop a solid background correction algorithm.
6. Acknowledgment
This project is supported by the Regional Innovation Strategy Support Program of Kagawa University, Japan.
7. References
[1] Wei Qi, Daisuke Kojima, Shun Sato, Satoru Suzuki, Pradeep K.W. Abeygunawardhana, Akira Nishiyama and Ichirou Ishimaru, "Quantitative Spectroscopic Tomography for the Non-invasive Measurement of the Biogenic-substances", Mechatronics REM 2012, November 2012, Paris, France. (Accepted)
[2] Y. Inoue, I. Ishimaru, T. Yasokawa, K. Ishizaki, M. Yoshida, M. Kondo, S. Kuriyama, T. Masaki, S. Nakai, K. Takegawa, and N. Tanaka, "Variable phase-contrast fluorescence spectrometry for fluorescently stained cells", Applied Physics Letters, 89, 121103 (2006).
[3] D. Kojima, T. Takuma, A. Inui, W. Qi, R. Tsutsumi, T. Yuzuriah, H. Kagiyama, A. Nishiyama and I. Ishimaru, "Spectroscopic tomography of biological tissues with the near-infrared radiation for the non-invasive measurement of the biogenic-substances", SPIE BiOS, Optical Diagnostics and Sensing XII, Vol. 8229, pp. 82290M1-82290M7.
[4] S.N. Yao, T. Collins, P. Jancovic, "Hybrid method for designing digital Butterworth filters", Computers and Electrical Engineering, vol. 38 (2012), pp. 811-818.
400
Fabrication and Characterization of Glass Spheres & SU-8 Mixture Mold
and Its Microchannel for an Electrochemical Immunosensor
Yoonkyung Nam a, Youngmi Kim Pak b, James Jungho Pak c,*
a Korea Univ., # 534 Engineering Bldg., Anam-dong, Seongbuk-gu, Seoul, Rep. of Korea
E-mail address: namazm@korea.ac.kr
b Kyung Hee Univ., Hoegi-dong #1, Dongdaemoon-gu, Seoul, Rep. of Korea
E-mail address: ykpak@khu.ac.kr
c Korea Univ., # 504 Engineering Bldg., Anam-dong, Seongbuk-gu, Seoul, Rep. of Korea
E-mail address: pak@korea.ac.kr
Abstract
This paper suggests a simple and low cost technique for PDMS microchannel modification
using glass spheres. The glass spheres & SU-8 mixture mold is made by the same process as a conventional SU-8 mold, but with glass sphere and SU-8 mixtures of various volume ratios instead of SU-8 alone. Glass spheres on the mold surface make the inside surface of the PDMS microchannel rough. At low glass sphere concentrations, this roughness increases as the amount of glass spheres increases. The rough surface increases the inner surface area and the contact angle, which improves the immobilization of biomolecules in the microchannel. To confirm the improvement of the biomolecule immobilization in such microchannels, an antibody conjugated with FITC fluorescence was injected into the fabricated microchannel made with the mixture mold. As a result, the fluorescent intensity increased in proportion to the content of glass spheres, with a high correlation coefficient (0.96). In order to demonstrate that the fabricated microchannel can improve the sensitivity of an electrochemical immunosensor, H5N1 was used as an analyte with the ELISA method. After bonding the fabricated microchannels to substrates with gold electrodes, the sandwich structure was fabricated in the microchannels. They were then characterized through cyclic voltammetry. Compared to the curve of the immunosensor without H5N1, that with H5N1 shows a high reduction current peak at 0.17 V. This confirms the improvement of an electrochemical sensor using a microchannel with such a rough internal surface, which can be made with a simple glass spheres & SU-8 mixture mold.
Keyword: microchannel, glass spheres, immunosensor, electrochemical sensor, ELISA
1. Introduction
Immunoassay is currently one of the predominant analytical techniques for the quantitative
determination of a wide variety of analytes of clinical, medical, biotechnological and
environmental significance [1]. The high specificity of the analysis is provided by the
antibody molecule, which recognizes the corresponding analyte. Among the most important
advantages of immunoassays are their speeds, sensitivity, selectivity, and cost effectiveness.
Electrochemical enzyme immunoassay is based on the measurement of the small amounts of
the enzyme-generated product which can be detected electrochemically. Electrochemical
enzyme immunosensors have several advantages of instrumental simplicity, moderate cost,
portability, accuracy, and high sensitivity for medical diagnosis [2]. For fabricating an electrochemical enzyme immunoassay, such as an enzyme-linked immunosorbent assay (ELISA), the biomolecule immobilization technique is very important. Self-assembled
monolayers (SAMs) are widely used to immobilize biomolecules on gold electrodes, but
SAMs and biomolecular layers formed can cause electrode fouling [3],[4]. Electrode fouling
may lead to a reduced analytical signal by blocking the direct electron transfer between electroactive species. It can result in a reduced sensitivity of electrochemical immunosensors. To overcome this problem, we employed a microchip with a modified microchannel.
Recently, a microchip-based immunoassay technology has been introduced using
microelectrodes. When considering cost, time and labor, polymers and elastomers are
becoming more and more attractive in microchip applications. Moreover, microchip-based
immunosensors need only extremely small volume of samples for analysis. Some of the
various polymers typically used include poly (dimethylsiloxane) or simply PDMS [5],
poly(methyl methacrylate) or simply PMMA [6], polyethylene terephthalate (PET) [7], or
polycarbonate (PC) [8]. Among these polymers, PDMS is widely used in microchip
fabrication. This can be attributed to its various salient features including its elastomeric
properties, biocompatibility, gas permeability, optical transparency, ease of molding into
(sub)micrometer features, ease of bonding to itself and glass, relatively high chemical inertness,
and low manufacturing cost [9]. For successful immobilization of biomolecules on the inner surface of a PDMS channel, various techniques have been reported for modifying the PDMS surface [10]. Unfortunately, most of them are time-consuming and expensive processes due to the large number of manipulations needed for fabrication. Some of them need complicated chemical processes which are hard to control and require elaborate chemical treatments.
This paper suggests a simple and low cost technique for PDMS microchannel modification
using glass spheres. A glass spheres and SU-8 mixture was employed to make the mold of the microchannel. Since the specific gravity of the glass spheres is lower than that of SU-8, some of the glass spheres float to the SU-8 layer surface after the spin coating process. The mold made with this mixture has a very rough surface due to the glass spheres floating on the SU-8 surface. Moreover, the glass spheres do not interfere with the light path during the photolithography process because of their transparency. Using these attractive properties of glass spheres, the mixture mold can produce a microchannel with suitable conditions for immobilizing biomolecules.
This paper shows the fabrication and characterization results of an electrochemical immunosensor based on the above-mentioned glass spheres and SU-8 mold. The volume ratio of the glass spheres to SU-8 was varied in order to obtain an optimal condition for the proposed mold and microchannel. The fabricated microchannels were characterized through optical image analysis, contact angle measurement, and measurement of the fluorescent intensity of the immobilized antibody. As an application demonstration, the H5N1 virus was used as a target analyte for a sandwich immunoassay to verify the proposed microchip immunosensor performance.
2. Materials and methods
2.1 Chemicals
A bottle of glass spheres was purchased from Sigma-Aldrich, and the SU-8 used was SU-8 2035 from MICROCHEM. PDMS pre-polymer and curing agent (Sylgard 184) were purchased from Dow Corning. Anti-E. coli O157 antibody (FITC), used as the fluorescent conjugated antibody, was purchased from Abcam. The H5N1 virus pair set for ELISA was purchased from Sino Biological Inc.
2.2 Fabrication of the Glass Spheres & SU-8 Mixture Mold and PDMS Microchannel
Fig. 1 A schematic of the fabrication process of the glass spheres & SU-8 mixture mold and the PDMS microchannel
Fig. 1 shows the fabrication process of the glass spheres & SU-8 mixture mold and its microchannel. The procedure for the glass spheres & SU-8 mixture mold is very similar to that of an SU-8 mold. Glass spheres with 9 µm ~ 13 µm diameters and SU-8 were mixed at the volume ratios of 1:4, 1:5, 1:10, and 1:20 at 1000 rpm for 2 hours. These mixtures were spin-coated on a Si wafer pretreated with O2 plasma. The thickness of the mixture layer was 50 µm. After that, the mixtures were exposed to UV light. The UV exposure normally requires 200 mJ/cm² of exposure energy; however, the glass spheres interfere with the light path, so the exposure energy recommended in the manual must be increased to complete the photolithography. Therefore, the mixture was exposed to UV light with 300 mJ/cm² of exposure energy and then developed. The PDMS pre-polymer and curing agent were mixed at a ratio of 10:1. The mixed PDMS was poured on the fabricated mold and cured in an oven at 70°C for 30 minutes before removing the mold. After making the inlet and outlet at both ends of the channel for the connections, the channel was completed.
2.3 Immobilization of the fluorescent conjugated antibody
To evaluate the fabricated microchannels, the fluorescent conjugated E. coli antibody was employed. The antibody samples were injected into each channel by a syringe pump (KDS 200; KD Scientific Inc.). The concentration of the antibody was 50 ng/mL. Then, each channel was rinsed with PBS solution to remove the unabsorbed antibody.
2.4 Fabrication of the sandwich structure of H5N1
To confirm the possibility of application to an electrochemical immunosensor, the sandwich structure of H5N1 was fabricated and characterized with cyclic voltammetry. First of all, gold electrodes were patterned on the glass substrate and bonded to the fabricated PDMS microchannel using O2 plasma treatment. After microchip formation, the capture antibody with 1 µg/mL concentration was injected and immobilized overnight. Then, the H5N1 antigen was immobilized for 2 h. The concentration of H5N1 was 4 ng/mL. For electrochemical detection, horseradish peroxidase (HRP) enzyme-conjugated detection antibody was injected into the microchannel and reacted with the H5N1. After each process step, the microchannel was rinsed with 0.05% Tween 20 solution to remove unbound molecules and other protein impurities. Finally, the sandwich structure of H5N1 was fabricated on the microchannel surface, as shown in Fig. 2.
Fig. 2 A schematic view of the H5N1 ELISA on the modified microchannel surface
3. Results and discussions
3.1 Surface characterization of the fabricated molds and microchannels
PDMS has recently been demonstrated to be an excellent material for microfluidic immunoassay applications because of its ease of casting, low cost, optical transparency, and hydrophobicity leading to its ability for protein adsorption [11]. Hydrophobicity depends on both the chemical composition and the surface morphology. In the proposed structure, the rough surface created by the glass spheres can improve the hydrophobicity. To observe the inner surface of the fabricated microchannels, an optical microscope was used. Table 1 shows that the higher the content of glass spheres, the rougher the surface, as can be expected from many previous experiments.
Table 1. Optical microscope images of the inner surface of the fabricated PDMS channel with different mixture (glass spheres : SU-8) volume ratios: 1:4, 1:5, 1:10, and 1:20 (inner surface, ×100).
To confirm this, the contact angles of DI water on the inner surfaces of the fabricated microchannels were measured. Droplets of 15 µL volume were dropped on five different sites of the microchannel inner surface, and their contact angles were measured with a contact angle analyzer (Phoenix 300, SEO). Fig. 3 shows that the contact angle increases as the content of glass spheres increases. In particular, the contact angle of the inner surface at the volume ratio of 1:4, which had the highest hydrophobicity, was 10° larger than that of a flat PDMS channel (105°).
Fig. 3 Contact angle on the surface of the fabricated microchannel versus the glass sphere content
3.2 Fluorescent intensity measurements
In order to confirm the possibility as an immunosensor, FITC-conjugated E. coli antibody was injected into the fabricated microchannel, and its intensity was observed by a fluorescence microscope (GX71, Olympus Corp.) (Fig. 4(a)). The fluorescent images of the bottom of the microchannel were analyzed by an image analysis program (i-solution, IMTechnology). Using the program, the fluorescent intensity was obtained by multiplying the fluorescent brightness (0~255) by the detected fluorescent area. As a result, the fluorescent intensity increased in proportion to the content of glass spheres, as shown in Fig. 4 (correlation coefficient = 0.96). This means that the largest number of antibodies was immobilized in the microchannel made by the mold with the highest content of glass spheres. The reason most of the antibody was immobilized in the microchannel made by the 1:4 volume ratio mixture mold may be attributed to the largest inner surface area and the highest hydrophobicity of this microchannel.
Fig. 4 (a) Fluorescent image of the fabricated microchannel with FITC-conjugated E. coli antibody; (b) the calibration graph of the fluorescent intensity versus the glass spheres and SU-8 volume ratio.
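The brightness-times-area measure described above can be sketched for an 8-bit fluorescence image as follows; the detection threshold is a hypothetical stand-in for the internal settings of the i-solution software.

    import numpy as np

    def fluorescent_intensity(img, thresh=20):
        # Pixels brighter than the threshold count as detected fluorescence.
        detected = img > thresh
        if not detected.any():
            return 0.0
        # Mean brightness (0~255) of detected pixels times the detected area.
        return float(img[detected].mean() * detected.sum())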
3.3 Electrochemical Measurement for H5N1 using fabricated microchannel
The fabricated immunosensor was evaluated for its electrochemical response via cyclic voltammetry, which was carried out at potentials ranging from 0.7 to 0.1 V versus the Ag/AgCl reference electrode in a 0.1-M acetate buffer containing a 0.1-mM TMB substrate and 2-mM H2O2. The chemical reactions can be described as follows:

TMB --(HRP, H2O2)--> TMB*        (1)

TMB* + e⁻ --> TMB        (2)
The TMB would be oxidized by the HRP enzyme to generate the TMB radical, TMB*, as shown in Eq. (1). The consumed TMB would then be regenerated from TMB* through its reduction by the electrochemical reaction of the three-electrode system, as shown in Eq. (2). Figure 5 shows the cyclic voltammograms of the fabricated immunosensor with and without the H5N1 antigen in the proposed microchannel. The curve of the immunosensor with H5N1 shows a high reduction current peak at 0.17 V (Fig. 5), compared with the immunosensor without H5N1. The reduction current peak is attributed to the reduction reaction of the activated TMB radical, TMB*, which may exist as an intermediate substance before being completely oxidized by the HRP enzyme. This result confirms that the fabricated microchannel using the glass spheres & SU-8 mixture mold is applicable to electrochemical immunosensors.
Fig. 5 Cyclic voltammograms of the fabricated immunosensor with (red line) or without H5N1
antigen (black line) in the proposed microchannel
4. Conclusions
A simple and low-cost method using glass spheres to fabricate a new type of SU-8 mold and PDMS microchannel is proposed. This method is very similar to the existing method of making an SU-8 mold; it only needs a glass spheres & SU-8 mixture instead of SU-8 alone. Not only does the transparency of the glass spheres leave the light path undisturbed, but the glass spheres on the surface also make the inner surface of the microchannel rough for immobilizing biomolecules. Glass spheres and SU-8 were mixed at the volume ratios of 1:4, 1:5, 1:10, and 1:20 to find an optimal condition for the immunoassay. From the optical images of the fabricated microchannels, it was observed that as the content of glass spheres increases, the surface becomes rougher. Contact angle measurements also showed higher contact angles as the content of glass spheres increases. In order to evaluate the characteristics as an immunosensor, FITC-conjugated E. coli antibody was injected into the fabricated microchannel. As a result, the fluorescent intensity increased in proportion to the content of glass spheres. To demonstrate the fabricated microchannel as an electrochemical immunosensor microchip, the H5N1 ELISA method was employed. Compared to the curve of the immunosensor without H5N1, that with H5N1 showed a high reduction current peak at 0.17 V. This result confirms that the proposed microchannel can be used for the electrochemical sensing of a variety of proteins that have two different binding sites for the sandwich ELISA method.
5. Acknowledgements
This work was supported by the Korea Ministry of Environment grant "Projects for Developing
Eco-Innovation Technologies" (GT-11-G-02-001-1), the Seoul R&BD Program (No. 10920), and a
National Research Foundation of Korea Grant funded by the Korean Government (Ministry
of Education, Science and Technology) (No. K20903001812-11E0100-01700).
6. References
[1] R. P. Ekins, "Immunoassay, DNA Analysis, and Other Ligand Binding Assay
Techniques: From Electropherograms to Multiplexed, Ultrasensitive Microarrays on a
Chip," Journal of Chemical Education, vol. 76, p. 769, 1999.
[2] E. Bakker and M. Telting-Diaz, "Electrochemical Sensors," Analytical Chemistry, vol.
74, pp. 2781-2800, 2002.
[3] A. Ulman, "Formation and Structure of Self-Assembled Monolayers," Chemical
Reviews, vol. 96, pp. 1533-1554, 1996.
[4] S. J. Kwon, et al., "An electrochemical immunosensor using ferrocenyl-tethered
dendrimer," Analyst, vol. 131, pp. 402-406, 2006.
[5] E. Eteshola and D. Leckband, "Development and characterization of an ELISA assay in
PDMS microfluidic channels," Sensors and Actuators B: Chemical, vol. 72, pp. 129-133,
2001.
[6] T.-K. Lim, et al., "Microfabricated On-Chip-Type Electrochemical Flow Immunoassay
System for the Detection of Histamine Released in Whole Blood Samples," Analytical
Chemistry, vol. 75, pp. 3316-3321, 2003.
[7] J. S. Rossier and H. H. Girault, "Enzyme linked immunosorbent assay on a microchip
with electrochemical detection," Lab on a Chip, vol. 1, pp. 153-157, 2001.
[8] Y. Liu and C. B. Rauch, "DNA probe attachment on plastic surfaces and microfluidic
hybridization array channel devices with sample oscillation," Analytical Biochemistry,
vol. 317, pp. 76-84, 2003.
[9] A. Mata, et al., "Characterization of Polydimethylsiloxane (PDMS) Properties for
Biomedical Micro/Nanosystems," Biomedical Microdevices, vol. 7, pp. 281-293, 2005.
[10] J. Zhou, et al., "Recent developments in PDMS surface modification for microfluidic
devices," ELECTROPHORESIS, vol. 31, pp. 2-16, 2010.
[11] Y. Gao, et al., "Development of a novel electrokinetically driven microfluidic
immunoassay for the detection of Helicobacter pylori," Analytica Chimica Acta, vol. 543,
pp. 109-116, 2005.
412
Coercivity weighted Langevin magnetisation: A new approach to interpret
superparamagnetic and non-superparamagnetic behaviour in single domain
magnetic nanoparticles
Dhanesh Kattipparambil Rajan^a and Jukka Lekkala^b
a,b Department of Automation Science and Engineering,
Tampere University of Technology, Tampere, P.O. Box 692, FIN-33101 Finland
E-mail: a dhanesh.kr@tut.fi, b jukka.lekkala@tut.fi
Abstract
Superparamagnetism (SPM) is an attractive material property often appearing in nanoscaled
single domain (SD) configurations. However, not all SD particles are superparamagnetic;
this depends on a few parameters including material type, temperature, measurement time
and magnetocrystalline anisotropy. The non-linear magnetisation response of magnetic
particles can be interpreted by the classical Langevin approach, but its applicability is limited to
pure SD-SPM behaviour. The classical Langevin equation lacks parameters to account for
possible remanence and coercivity in the SD regime; as a result, the SD-nonSPM possibility is
left untreated. To solve this issue, we propose a new model that includes SD coercivity
parameters in the classical Langevin equations. The new model 1) combines steady or time-varying
magnetisation dynamics with temperature- or particle-size-dependent coercivity and 2)
helps to calculate coercivity-compensated magnetisations and susceptibilities directly. The
model covers the full spectrum of SD diameters and defines the switching between
superparamagnetic and non-superparamagnetic states more precisely.
Keywords: Magnetic nanoparticles, superparamagnetism, single domain coercivity,
temperature dependent coercivity, time dependent coercivity
1. Introduction
Superparamagnetic particles have been widely utilised in recent years for their applications in
biosensors, targeted drug delivery, therapeutic hyperthermia and tomographic imaging
[23][24][25][26]. Superparamagnetism (SPM) is often directly interpreted as a material
property achieved by scaling the particle volume down to nanoscale dimensions with the
formation of a single domain (SD) configuration. But in reality, besides the particle volume, a
few other parameters including the available thermal energy, magnetocrystalline anisotropy and
measurement period together determine whether the unique magnetic dipole moment
fluctuates randomly, ending up in classical superparamagnetic behaviour [27][28][29][30].
Therefore, depending on the proportion of these parameters, SD particles might appear as
superparamagnetic (SD-SPM) or non-superparamagnetic (SD-nonSPM). Experimentally, the
SPM behaviour of any material particle is often monitored by magnetisation
hysteresis plots and the associated susceptibility measurements [31][32]. There exist a few
theoretical models to predict the non-linear magnetisation response with high-field saturation,
mostly using the Langevin approach [31][33][34][35][38]. But the Langevin approach is
strictly applicable only in pure SD-SPM cases since it never considers the SD-nonSPM
formulation. The SPM to non-SPM transition in the SD configuration, and the SD remanence and
coercive force observed in many experiments [28][29][30], also cannot be interpreted by the
conventional Langevin approach. Particle samples from most vendors are not strictly
monodisperse, so the probability of having volume-dependent SD remanence in room-temperature
applications and an SPM to non-SPM transition in below-room-temperature
applications is high. In this context we propose a new model, developed from the classical
Langevin equations, which combines steady or time-varying magnetisation dynamics with
temperature- or particle-size-dependent coercive force. The new model helps to calculate
coercivity-compensated DC and AC component magnetisations and susceptibilities directly
from particle and suspension medium properties. It covers the full spectrum of SD
diameters and defines the switching between superparamagnetic and non-superparamagnetic
states more precisely. The calculations have been carried out using the material properties of
the most used magnetic particle materials, magnetite and maghemite.
2. The distinctive SD and SPM configurations
2.1. Single domain and superparamagnetic radii
In the absence of an external field, the critical diameter for single domain configuration is a
function of the exchange length $l_{ex}$ as follows [27]:

$$d_{SD} \approx 72\,\kappa\,l_{ex} \qquad (1)$$

where $\kappa$ is the dimensionless hardness parameter. Substituting for $\kappa$ and $l_{ex}$ yields

$$d_{SD} \approx 72\,\sqrt{\frac{K}{\mu_0 M_s^2}}\,\sqrt{\frac{A}{\mu_0 M_s^2}} \qquad (2)$$
where $K$ is the first anisotropy constant, $\mu_0$ the vacuum permeability, $M_s$ the saturation magnetisation
and $A$ the exchange stiffness constant. For a given particle, even though its diameter is below $d_{SD}$, it
need not necessarily be superparamagnetic below a certain transition temperature, since the
surrounding thermal energy is not sufficient to flip the dipole moment randomly
inside the domain within the considered observation time. This leads to the critical diameter
$d_{SPM}$ [27] for superparamagnetic behaviour as a function of temperature and magnetocrystalline
anisotropy:

$$d_{SPM} = 2\,\sqrt[3]{\frac{6 k_b T}{K}} \qquad (3)$$

where $k_b$ is Boltzmann's constant and $T$ the absolute temperature. The $d_{SD}$ and $d_{SPM}$ calculated for
magnetite and maghemite spherical particles at 300 K using equations (2) and (3) are given
in Table 1. The variation of $d_{SPM}$ with temperature is shown in Fig. 1.
Table 1. Anisotropy and crystalline parameters defining SD and SPM critical diameters at 300 K [27][36]

Material | First anisotropy constant, K (kJ/m³) | Exchange stiffness constant, A (pJ/m) | Saturation magnetisation, Ms (kA/m) | Single domain critical diameter, dSD (nm) | Superparamagnetic critical diameter, dSPM (nm)
Magnetite | 13.5 | 13.3 | 446 | ~103 | ~24
Maghemite | 4.6 | 10 | 380 | ~85 | ~35
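For readers who wish to reproduce the critical diameters, a minimal Python sketch of equations (2) and (3) follows (constants from Table 1; function names are ours). It recovers dSPM ≈ 24 nm for magnetite, while the single-domain estimate comes out somewhat above the quoted ~103 nm, presumably because slightly different constants are used in [27]:

```python
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability (T*m/A)
KB  = 1.380649e-23            # Boltzmann constant (J/K)

def d_sd(K, A, Ms):
    """Single-domain critical diameter, Eq. (2)."""
    kappa = math.sqrt(K / (MU0 * Ms**2))   # dimensionless hardness parameter
    l_ex  = math.sqrt(A / (MU0 * Ms**2))   # exchange length (m)
    return 72 * kappa * l_ex

def d_spm(K, T=300.0):
    """Superparamagnetic critical diameter, Eq. (3): 2*(6 kB T / K)^(1/3)."""
    return 2 * (6 * KB * T / K) ** (1.0 / 3.0)

# Magnetite parameters from Table 1: K = 13.5 kJ/m^3, A = 13.3 pJ/m, Ms = 446 kA/m
print(d_sd(13.5e3, 13.3e-12, 446e3) * 1e9)   # ~122 nm (same order as the quoted ~103 nm)
print(d_spm(13.5e3) * 1e9)                   # ~24.5 nm, matching Table 1
```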
Fig. 1 Single domain critical diameter dSD and superparamagnetic diameter dSPM as a function of temperature for magnetite and maghemite particles
2.2. Relaxometric parameters and complex susceptibility
The magnetic moment flips between parallel and antiparallel easy axes, and the effective
relaxation time constant for a magnetic particle suspension is

$$\tau_{eff} = \frac{\tau_N\,\tau_B}{\tau_N + \tau_B} \qquad (4)$$

where $\tau_N = \tau_0\exp(KV/k_bT)$ is the Néel relaxation time by the Néel-Arrhenius formulation [35]
and $\tau_B = \eta K_r V/(2 k_b T)$ is the Brownian relaxation time due to the Brownian rotational diffusion of the
suspended particles in the carrier medium. $1/\tau_0$ is the attempt frequency characteristic of the material, $V$
the particle volume, $K_r$ the geometric rotational shape factor and $\eta$ the carrier medium viscosity. The
$\tau_{eff}$ for magnetite and maghemite spherical particles at different SD diameters is shown in Fig. 2.
Fig. 2 The effective relaxation time τeff for magnetite and maghemite spherical particles at different SD diameters
In an external alternating field, the absolute susceptibility of a particle suspension is
exclusively determined by the effective relaxation time. The Debye convention to predict the
frequency-dependent complex susceptibility [31][33] in this case can be written as

$$\chi = \chi' - i\chi'' = \frac{\chi_0}{1+\omega^2\tau_{eff}^2} - i\,\frac{\chi_0\,\omega\tau_{eff}}{1+\omega^2\tau_{eff}^2} \qquad (5)$$

where $\chi_0$ is the DC susceptibility and $\omega$ the angular frequency. For equation (5) to be
theoretically useful, another approximation, e.g. the Langevin approximation for $\chi_0$, is essential.
For a given volume fraction $\phi$, the Langevin magnetisation [34][35] can be expressed as

$$M_{DC} = \phi M_s\left[\coth(\alpha) - \frac{1}{\alpha}\right] \qquad (6)$$

where $\alpha = \pi\mu_0 M_s d^3 H_x/(6 k_b T)$, $d$ is the particle diameter and $H_x$ the intensity of the applied field.
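As a numerical illustration of equation (6), a minimal Python sketch follows (the field strength, particle diameter and volume fraction below are illustrative values, not taken from the paper):

```python
import math

KB  = 1.380649e-23            # Boltzmann constant (J/K)
MU0 = 4 * math.pi * 1e-7      # vacuum permeability (T*m/A)

def m_dc(phi, Ms, d, Hx, T=300.0):
    """DC Langevin magnetisation, Eq. (6): M = phi*Ms*(coth(a) - 1/a),
    with a = pi*mu0*Ms*d^3*Hx / (6*kB*T)."""
    a = math.pi * MU0 * Ms * d**3 * Hx / (6 * KB * T)
    if a == 0:
        return 0.0
    return phi * Ms * (1.0 / math.tanh(a) - 1.0 / a)

# Illustrative: magnetite (Ms = 446 kA/m), 20 nm particles,
# volume fraction 1e-4, applied field 10 kA/m
print(m_dc(1e-4, 446e3, 20e-9, 10e3))   # magnetisation in A/m
```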
2.3. Langevin magnetisation with relaxometric parameters
For an AC field of strength $H_x \sin\omega t$, the Langevin variable in equation (6) can be modified
with the notations $\alpha' = \alpha\cos\omega t$ and $\alpha'' = \alpha\sin\omega t$, to include the real and imaginary susceptibility
and frequency components, as [35][37][38]

$$M_{AC} = \phi M_s\left\{\frac{1}{1+\omega^2\tau_{eff}^2}\left[\coth(\alpha\cos\omega t) - \frac{1}{\alpha\cos\omega t}\right] + \frac{\omega\tau_{eff}}{1+\omega^2\tau_{eff}^2}\left[\coth(\alpha\sin\omega t) - \frac{1}{\alpha\sin\omega t}\right]\right\} \qquad (7)$$
At 0 Hz, equation (7) converges to equation (6). This equation is useful for predicting volume
magnetisation at high temperature and only in the SD-SPM regime; it never predicts the
coercivity or remanence observed in many SD magnetisation experiments [28][29][30].
2.4. Langevin magnetisation with relaxometric and coercivity parameters
The temperature-dependent SD magnetic coercivity for a randomly oriented, non-interacting
particle system can be expressed as

$$H_c = H_{co}\left[1 - \left(\frac{T}{T_B}\right)^{\frac{1}{2}}\right] \qquad (8)$$

where $H_{co} = 2K/\mu_0 M_s$ is the coercivity at 0 K according to the Stoner–Wohlfarth theory
[39] and $T_B = KV/[k_b\ln(\tau_m/\tau_o)]$ is the critical superparamagnetic transition temperature (blocking
temperature) [36][40]. By substituting for $H_c$ [41] and $T_B$, the volume dependence of
coercivity is derived:

$$H_c = H_{co}\left[1 - \left(\frac{k_b T}{K V}\ln\frac{\tau_m}{\tau_o}\right)^{\frac{1}{2}}\right] \qquad (9)$$
where $1/\tau_m$ is the measurement frequency. Equation (8) is valid when $T < T_B$ since $H_c$ cannot have
negative values in forward magnetisation. When substituting for $T_B$ in equation (9), the same
approximation is followed, hence the coercivity $H_c \geq 0$. The temperature and frequency
dependence of the coercivity of magnetite particles at different single domain diameters is plotted
in Fig. 3.
Fig. 3 Coercivity as a function of particle diameter a) at different temperatures and b) at different field frequencies. The zero coercivity corresponds to the superparamagnetic transition, which is clearly a function of temperature (blocking temperature) and measurement frequency.
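A sketch of equations (8) and (9) in Python (our illustration; the measurement time τm and attempt time τ0 are assumed values, and Hc is clamped at zero above the blocking condition, as stated above):

```python
import math

KB  = 1.380649e-23            # Boltzmann constant (J/K)
MU0 = 4 * math.pi * 1e-7      # vacuum permeability (T*m/A)

def hc(d, K, Ms, T=300.0, tau_m=1e-2, tau0=1e-9):
    """Volume/temperature-dependent SD coercivity, Eq. (9):
    Hc = Hco*(1 - sqrt((kB*T/(K*V))*ln(tau_m/tau0))), clamped at 0 in the
    superparamagnetic regime. Hco = 2K/(mu0*Ms) per Stoner-Wohlfarth."""
    V = math.pi / 6.0 * d**3
    hco = 2 * K / (MU0 * Ms)
    x = (KB * T / (K * V)) * math.log(tau_m / tau0)
    return max(0.0, hco * (1.0 - math.sqrt(x)))

# Magnetite: below d_SPM the coercivity vanishes, above it grows
for d in (20e-9, 30e-9, 50e-9):
    print(d * 1e9, hc(d, 13.5e3, 446e3))
```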
To account for the coercive force in the magnetisation, equation (7) can be modified by including $H_c$
in $\alpha$ and rewritten for forward and backward measurements as

$$\alpha_{eff} = \frac{\pi\mu_0 M_s d^3 \left(H_x \pm H_c\right)}{6 k_b T} \qquad (10)$$

$$M_{AC} = \phi M_s\left\{\frac{1}{1+\omega^2\tau_{eff}^2}\left[\coth(\alpha_{eff}\cos\omega t) - \frac{1}{\alpha_{eff}\cos\omega t}\right] + \frac{\omega\tau_{eff}}{1+\omega^2\tau_{eff}^2}\left[\coth(\alpha_{eff}\sin\omega t) - \frac{1}{\alpha_{eff}\sin\omega t}\right]\right\} \qquad (11)$$
Equation (11) accounts for the frequency-dependent volume magnetisation and the volume- and
temperature-dependent coercive force. The equation covers all diameters (SPM and nonSPM)
in the complete SD regime. The MAC plots using equation (11) for SD magnetite and
maghemite particles at different temperatures are shown in Fig. 4.
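To make equation (11) concrete, a minimal time-domain sketch follows (our Python, for a forward measurement with Heff = Hx + Hc; the field, coercivity and relaxation values are illustrative):

```python
import math

KB, MU0 = 1.380649e-23, 4 * math.pi * 1e-7

def m_ac(t, phi, Ms, d, Hx, Hc, omega, tau, T=300.0):
    """Coercivity-weighted AC Langevin magnetisation, Eq. (11), with
    alpha_eff = pi*mu0*Ms*d^3*(Hx + Hc)/(6*kB*T) per Eq. (10), forward branch."""
    a_eff = math.pi * MU0 * Ms * d**3 * (Hx + Hc) / (6 * KB * T)
    L = lambda x: (1.0 / math.tanh(x) - 1.0 / x) if x != 0 else 0.0  # Langevin fn
    w = 1.0 / (1.0 + (omega * tau) ** 2)
    return phi * Ms * (w * L(a_eff * math.cos(omega * t))
                       + w * omega * tau * L(a_eff * math.sin(omega * t)))

# Illustrative: 30 nm magnetite, f = 10 Hz, tau_eff ~ 1e-5 s, Hc ~ 2e4 A/m
omega = 2 * math.pi * 10
print(m_ac(0.01, 1e-4, 446e3, 30e-9, 10e3, 2e4, omega, 1e-5))
```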
The equation for the instantaneous volume susceptibility can be derived by differentiating
equation (11) with respect to the effective field, either for forward ($H_{eff} = H_x + H_c$) or backward
($H_{eff} = H_x - H_c$) magnetisation measurements, as follows:
$$\chi_{inst} = \frac{d(M_{AC})}{dH_{eff}} = \frac{\phi M_s}{1+\omega^2\tau_{eff}^2}\left\{\frac{k_2 + k_1\omega\tau_{eff}}{k_1 k_2 H_{eff}} - \frac{k_1}{H_{eff}}\left[\coth^2(k_1)-1\right] - \omega\tau_{eff}\,\frac{k_2}{H_{eff}}\left[\coth^2(k_2)-1\right]\right\} \qquad (12)$$
where $k_1 = \alpha_{eff}\cos\omega t$ and $k_2 = \alpha_{eff}\sin\omega t$. The $\chi_{inst}$ plots for magnetite and maghemite based on
equation (12) are given in Fig. 5.
Fig. 4 The magnetisation plots for a) SD magnetite and b) SD maghemite particles at different temperatures. Two diameters 10% above and below the critical dSPM are considered. For diameters above dSPM a large coercivity appears. Also, a superparamagnetic particle at room temperature is not superparamagnetic at a lower temperature. (For computations, f = 10 Hz, particle concentration = 0.1 mmol/L, suspension medium = distilled water.)
Fig. 5 The instantaneous susceptibility (full volume susceptibility) plots for two diameters 10% above and below the critical dSPM for a) SD magnetite and b) SD maghemite at different temperatures. The maximal influence of the coercive field at low temperature (blue) and above the critical dSPM (magenta) is seen as peaks in the full susceptibility measurement. As the strength of the applied field increases, the peak susceptibility is seen when the maximum magnetic energy is used to overcome the demagnetising coercive field. Thereafter the superparamagnetic behaviour dominates.
A very useful application of equation (12) is to approximate the DC susceptibility (0 Hz), for which it
reduces to

$$\chi_{DC} = \phi M_s\left\{\frac{1}{\alpha_{eff} H_{eff}} - \frac{\alpha_{eff}}{H_{eff}}\left[\coth^2(\alpha_{eff}) - 1\right]\right\} \qquad (13)$$
In reality, equation (13) consists of real and imaginary components, which can be separately
redefined as

$$\chi' = \frac{\phi M_s}{1+\omega^2\tau_{eff}^2}\left\{\frac{1}{\alpha_{eff} H_{eff}} - \frac{\alpha_{eff}}{H_{eff}}\left[\coth^2(\alpha_{eff}) - 1\right]\right\} \qquad (14)$$

$$\chi'' = \frac{\omega\tau_{eff}\,\phi M_s}{1+\omega^2\tau_{eff}^2}\left\{\frac{1}{\alpha_{eff} H_{eff}} - \frac{\alpha_{eff}}{H_{eff}}\left[\coth^2(\alpha_{eff}) - 1\right]\right\} \qquad (15)$$
The ʹ and ʺ plots for SD- SPM and SD- nonSPM
54
particles for magnetite and
maghemite at different frequencies are given inFig. 6.
Fig. 6
Theʹand” plots for SD- SPM and SD- nonSPM
maghemite at different frequencies.
particles for magnetite and
Finally, the cusp observed in experimental χ′ versus T plots [42] can be effectively predicted
by our model, as shown in Fig. 7.
Fig. 7 χ′ versus T curve for a magnetite particle of diameter equal to 90% of dSPM
3. Conclusion
A new model to interpret superparamagnetic and non-superparamagnetic behaviour in single
domain magnetic nanoparticles, weighted by the coercivity influence, is presented. Equations for
directly computing coercivity-weighted stationary or time-varying magnetisation and
susceptibility for non-interacting nanoparticle samples are derived. All equations are derived
for monodisperse particles, but in reality most particle samples from different vendors
are polydisperse. Polydispersity can be included in the presented model by replacing the
volume fraction ϕ by the log-normal diameter distribution of the particles. Direct calculation
of magnetisation and susceptibility would be helpful in many biomedical areas where
parameters like magnetisation-dependent voltage, magnetisation-dependent polarisation, the
magneto-optic effect, etc. are to be estimated.
4. Acknowledgements
The authors acknowledge Hugues de Crémiers, Merck Millipore, for helpful
discussions. This work was carried out during the tenure of funding from the project
MPI-SPARE of the Human Spare Parts Project supported by the Finnish Funding Agency for
Technology and Innovation (TEKES) and the Council of Tampere Region.
5. References
[23] Juergen Weizenecker, Bernhard Gleich and Joern Borgert, Magnetic particle imaging using a field free line, J. Phys. D: Appl. Phys. 41, 2008, pp. 105009.
[24] Rudolf Hergt, Silvio Dutz, Robert Müller and Matthias Zeisberger, Magnetic particle hyperthermia: nanoparticle magnetism and materials development for cancer therapy, J. Phys.: Condens. Matter 18, 2006, pp. S2919–S2934.
[25] Jarkko Mäkiranta, Jukka Lekkala, Optimization of a novel magnetic nanoparticle sensor, XVIII IMEKO World Congress, 2006, Brazil.
[26] M. Babincova, P. Babinec, Magnetic drug delivery and targeting: principles and applications, Biomed Pap Med Fac Univ Palacky Olomouc Czech Repub, 153(4), 2009, pp. 243–250.
[27] Nguyen Thi Kim Thanh, Magnetic Nanoparticles: From Fabrication to Clinical Applications, CRC Press, 2012, pp. 16-30.
[28] G. Herzer, Grain size dependence of coercivity and permeability in nanocrystalline ferromagnets, IEEE Transactions on Magnetics, vol. 26, 1990, pp. 1397-1402.
[29] Juan C. Diaz Ricci, Joseph L. Kirschvink, Magnetic domain state and coercivity predictions for biogenic greigite (Fe3S4): A comparison of theory with magnetosome observations, Journal of Geophysical Research, vol. 97, no. B12, 1992, pp. 17309-17315.
[30] V. Franco, C. F. Conde, A. Conde and L. F. Kiss, Relationship between coercivity and magnetic moment of superparamagnetic particles with dipolar interaction, Physical Review B 72, 2005, pp. 174424-1-4.
[31] P. C. Fannin, Investigating magnetic fluids by means of complex susceptibility measurements, Journal of Magnetism and Magnetic Materials, 2003, pp. 446–451.
[32] Anit K. Giri, Krishna M. Chowdary and Sara A. Majetich, AC magnetic properties of compacted FeCo nanocomposites, Mater. Phys. Mech. 1, 2000, pp. 1-10.
[33] P. C. Fannin, S. W. Charles and T. Relihan, On the use of complex susceptibility data to complement magnetic viscosity measurements, J. Phys. D: Appl. Phys. 27, 1994, pp. 189-193.
[34] S. Biederer, T. Knopp, T. F. Sattel, K. Lüdtke-Buzug, B. Gleich, J. Weizenecker, J. Borgert and T. M. Buzug, Magnetization response spectroscopy of superparamagnetic nanoparticles for magnetic particle imaging, J. Phys. D: Appl. Phys. 42, 2009, pp. 205007-205014.
[35] Adam M. Rauwerdink, John B. Weaver, Viscous effects on nanoparticle magnetization harmonics, Journal of Magnetism and Magnetic Materials 322, 2010, pp. 609–613.
[36] R. E. Rosensweig, Ferrohydrodynamics, Dover Publications, pp. 57-63.
[37] O. Petracic, A. Glatz, W. Kleemann, Models for the magnetic ac susceptibility of granular superferromagnetic CoFe/Al2O3, Physical Review B 70, 2004, pp. 214432-37.
[38] R. M. Ferguson, K. R. Minard, A. P. Khandhar, K. M. Krishnan, Optimizing magnetite nanoparticles for mass sensitivity in magnetic particle imaging, Med. Phys. 38(3), 2011, pp. 1619-1626.
[39] Qi Chen and Z. John Zhang, Size-dependent superparamagnetic properties of MgFe2O4 spinel ferrite nanocrystallites, Applied Physics Letters, volume 73, 1998, pp. 3156-3158.
[40] J. P. Vejpravova and V. Sechovsky, Superparamagnetism of Co-Ferrite Nanoparticles, WDS'05 Proceedings of Contributed Papers, Part III, 2005, pp. 518–523.
[41] Clara Pereira, André M. Pereira, Carlos Fernandes, Mariana Rocha, Ricardo Mendes, Maria Paz Fernández-García, Alexandra Guedes, Pedro B. Tavares, Jean-Marc Grenèche, João P. Araújo and Cristina Freire, Superparamagnetic MFe2O4 (M = Fe, Co, Mn) Nanoparticles: Tuning the Particle Size and Magnetic Properties through a Novel One-Step Coprecipitation Route, Chemistry of Materials, American Chemical Society, 2012, pp. 1496-1504.
[42] Dinesh Martien, Introduction to AC Susceptibility: Application Notes, Quantum Design.
377
Synthesis and Characterization of Carbonated Hydroxyapatite
Mesoporous Particles
Nur Farahiyah Mohammad^{a,b}, Radzali Othman^a, YEOH Fei Yee^{b,*}
a School of Materials and Mineral Resources Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Penang, Malaysia
b Biomedical Electronic Engineering Programme, School of Mechatronic, Pauh Putra Campus, Universiti Malaysia Perlis, 02600 Arau, Perlis, Malaysia
E-mail address: *srfeiyee@eng.usm.my
Abstract
Mesoporous carbonated hydroxyapatite (CHA) has great potential in nanoscaled delivery
systems such as gene, protein and drug delivery due to its excellent biocompatibility,
bioactivity and bioresorbability. Previously, mesoporous hydroxyapatite (HA) was found to
be a potential implantable drug carrier for treating diseases; however, achieving high drug
loading capacity and a more consistent drug-release profile with mesoporous HA is still a
challenge. Therefore, it is expected that finer mesoporous CHA is able to perform better as
drug carrier particles than HA. In this study, mesoporous carbonated hydroxyapatite (CHA) particles
were synthesized using non-ionic surfactants (F127 and P123). The mesoporous
structures were introduced into the samples via a self-assembly mechanism between CHA and
the surfactants by a co-precipitation synthesis method. SEM images revealed that sphere-like
CHA particles were produced after calcination for all samples. A pure CHA phase was
obtained for all samples. FTIR spectra revealed the substitution of carbonate ions into the
apatite and confirmed the formation of CHA. The FTIR results also demonstrated that the
surfactants had been removed completely through the calcination process. Mesoporous structure
was observed from the directions parallel and perpendicular to the mesochannels of the CHA
particles. It was found that both F127 and P123 gave similar pore diameters (15.6
nm) but different surface areas (F127 = 68.4 m2/g, P123 = 84.4 m2/g). The results imply that
mesoporous carbonated hydroxyapatite was successfully synthesized using the proposed
technique and that non-ionic surfactants are suitable pore templates.
Keywords: carbonated hydroxyapatite, mesoporous, surfactant, co-precipitation, pluronics
1. Introduction
Mesoporous materials are materials with pore diameters of 2-50 nm. They demonstrate
extremely large surface areas and narrow pore size distributions, which are promising
properties for applications as catalysts, adsorbents, separation media and host materials [1]. Recently,
their application has been extended to biomedical applications such as nanoscaled delivery
systems for drugs, genes and proteins, and tissue engineering [1]. Previously, macroporous HA
(pore size > 50 nm) has been used as a drug delivery particle in studies of drug delivery
systems using bioceramics [2]. However, it is likely to exhibit a 'burst' release profile, which is
the main limitation for controlled-release applications [3]. A desirable biomedical delivery
system should be biocompatible, able to load and release a large quantity of guest molecules,
and have a controllable rate of release. Palazzo et al. and Kim et al. previously demonstrated
that lower-porosity HA showed a more significant initial burst release due to the tendency of
the drug molecules to bind loosely and concentrate on the external macropore walls rather
than in the internal pores [4, 5]. Thus, mesoporous CHA, which has excellent biocompatibility,
bioactivity, bioresorbability and high porosity, is seen as a capable new candidate for a drug
delivery agent. Our study therefore aimed to synthesize mesoporous CHA using non-ionic
surfactants (F127 and P123) and to investigate the effect of different types of surfactant on the
morphology and pore characteristics of the CHA powders.
2. Methodology
2.1 Synthesis of Mesoporous CHA
Mesoporous CHA was synthesized using the triblock copolymers F127 and P123 as structure-directing
agents, with calcium nitrate tetrahydrate (Ca(NO3)2·4H2O) and diammonium hydrogen
phosphate ((NH4)2HPO4) as the calcium and phosphorus sources, respectively. The
surfactant-calcium solution was prepared by dissolving F127 or P123 in 100 ml of deionised
(DI) water, followed by the addition of 9.45 g of Ca(NO3)2·4H2O. The solution was
stirred for 30 minutes to ensure that the self-organization mechanism was completed.
Separately, 3.17 g of (NH4)2HPO4 was dissolved in 60 ml of DI water. Then, 3.793 g of
ammonium hydrogen carbonate (NH4HCO3) was added to the solution as the carbonate source.
The phosphate-carbonate solution was then added dropwise into the surfactant-calcium
solution under continuous stirring. The solution was then stirred homogeneously for another
15 minutes. The pH of the solution was maintained at 11 using 1 M NaOH solution
throughout the mixing process. The solution was then transferred into a Teflon bottle and
aged at 120 ºC for 24 hours. After ageing, the solution was centrifuged for 15 minutes at 3000
rpm to obtain the white precipitate. The precipitate was then washed five times using DI
water, dried in an oven for 24 hours, and calcined at 550 ºC for 6 hours to remove the
organic template completely. The control sample was synthesized without adding any
surfactant using the same process as described above. The samples were named according to
the preparation method as shown in Table 1.
Table 1: Sample descriptions

Sample | Sample description
Control-P | CHA precursor (without surfactant template)
Control | CHA (without surfactant template)
CHA-F127-P | F127 templated CHA precursor
CHA-F127 | F127 templated calcined CHA
CHA-P123-P | P123 templated CHA precursor
CHA-P123 | P123 templated calcined CHA
2.2 Characterization
The phase of the synthesized mesoporous CHA was characterized by X-ray diffraction
(XRD) analysis (Bruker AXS D8, CuKα radiation) over the range 10° ≤ 2θ ≤ 90°. A Zeiss
SUPRA 35VP field emission scanning electron microscope (FESEM) was used to study the
surface morphologies of the particles, and the powder particle size was measured by a Zetasizer
Nano ZS (Malvern Instruments). The existence of the mesoporous structure of the CHA was
verified using a Philips CM12 transmission electron microscope (TEM). The nitrogen
adsorption-desorption isotherms were measured using a Quantachrome Autosorb® IQ gas
sorption analyzer at 77 K after degassing the samples at 300 °C for 3 hours. The surface area
and pore size distribution of the sample powders were calculated from the nitrogen sorption
data based on the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) models,
respectively. The functional groups of the samples were studied by Fourier transform infrared
(FTIR) spectroscopy (Perkin Elmer Spectrum One spectrophotometer) in the frequency range
4000-400 cm-1 using the KBr pellet method, scanned 16 times.
3. Results and Discussion
3.1 X-ray Diffraction
Fig. 1 presents the XRD patterns of the calcined samples synthesized with F127 and P123
surfactants. The XRD results indicate that the phase of the samples was highly pure CHA
(hexagonal P63/m space group, lattice constants a = 9.3892 Å and c = 6.9019 Å, PDF
01-089-7834). The XRD results show that the addition of F127 and P123 surfactants did not
induce any secondary phase in the CHA. The typical diffraction peaks of CHA, especially the
three distinct peaks at 2θ = 25.8°, 31.8° and 32.9° [6], which can be indexed as
(002), (211) and (300), respectively, can be observed in Fig. 1. The peaks within the small-angle
range provide information on the arrangement presented by the porous structure [7].
Fig. 1 shows that small peaks at low angles are observed for all samples; these indicate
that an ordered hexagonal array of parallel pore tubes is present in these samples, although at lower
regularity. Furthermore, the low-angle peaks are not characteristic of any calcium
phosphate based material [8].
Fig. 1: XRD patterns of the samples after calcination at 550°C for 24 hours (curves: CHA-P123, CHA-F127 and Control; the CHA (002), (211) and (300) reflections are marked).
3.2 Microstructure Study
The SEM images of the CHA nanoparticles prepared using different surfactants are shown in Fig.
2. There is no significant difference in surface morphology between the samples
synthesized with F127 and P123 surfactants. All samples, synthesized either with or without
surfactants, consist of fine agglomerated nanoparticles. Agglomeration is probably due to van
der Waals attraction between the fine nanoparticles [9, 10]. The mean particle sizes (in
radius) of Control, CHA-F127 and CHA-P123 were 514 nm, 777 nm and 294 nm,
respectively. The measured mean particle size was rather large, probably because powder
agglomerates were measured instead of individual particles. The results were in good
agreement with the SEM and TEM images, as the powders observed consist of collections of
aggregates held together at point-to-point contacts by weak van der Waals forces (Fig. 2,
Fig. 5 and Fig. 6).
Fig. 2: SEM images of the CHA samples synthesized (a) without surfactant (control), and using (b) F127 and (c) P123.
3.3 Surface area and pore size from N2 adsorption studies
The nitrogen adsorption/desorption isotherms of CHA-F127 and CHA-P123 are shown in Fig.
3. Both samples displayed typical type IV isotherms according to the IUPAC classification,
with a distinct hysteresis loop, demonstrating that mesoporous CHA particles had been
successfully synthesized. Sample CHA-P123 shows a slightly higher adsorption amount
compared to CHA-F127, although both samples have a similar pore volume of 0.57 cc/g. The
BET surface area is calculated to be 68.4 m2/g for CHA-F127 and 84.4 m2/g for CHA-P123
(Fig. 4). Even though sample CHA-F127 had a lower surface area than CHA-P123, both
exhibit a similar pore size of 15.6 nm. Based on these results, it can be concluded that there was
no significant difference in surface area and pore size between samples synthesized with
surfactant F127 or P123.
Fig. 3: Nitrogen adsorption-desorption isotherms of samples CHA-F127 and CHA-P123.
Fig. 4: Surface area and pore size of the synthesized mesoporous CHA.
3.4 Pore observation
TEM analysis was performed on samples CHA-F127 and CHA-P123 to confirm the
existence of mesopores within the nanoparticles. Mesopores in sample CHA-F127 were
observed from directions parallel and perpendicular to the long axis of the mesochannels
(Fig. 5, within the red circle). Similarly, mesopores can be observed in sample CHA-P123 from
directions parallel and perpendicular to the long axis of the mesochannels (Fig. 6, within the
red circle). The pore sizes measured from the TEM images of both samples were in the range of 2
nm to 12 nm, consistent with the results derived from the nitrogen adsorption test and
within the mesopore range. The nanoparticles of both samples have a
sphere-like shape. Interestingly, the parallel pore structures that can be seen in the TEM
images indicate that two-dimensional pore channels similar to those of mesoporous silica [11]
were formed in the CHA particles. This also indicates that the ordered pore arrangement
was brought about by the formation of cylindrical micelles of F127 and P123 through a
self-assembly mechanism; the micelles were then removed by the calcination process.
Fig. 5: TEM images of sample CHA-F127 recorded from the directions (a) parallel and (b) perpendicular to the long axis of the mesochannels.
Fig. 6: TEM images of sample CHA-P123 recorded from the directions (a) parallel and (b) perpendicular to the long axis of the mesochannels.
3.5 FTIR characterization
The FTIR spectra of the precursor and calcined samples of CHA synthesized with F127
surfactant are shown in Fig. 7, while those of the CHA synthesized with P123 surfactant are
shown in Fig. 8. The phosphate absorption bands for all of the samples (Figs. 7 and 8) occurred at
1092, 1034, 962, 603, 567 and 470 cm-1 [12, 13], and the hydroxyl absorption band at 3568 cm-1
is characteristic of a typical HA FTIR spectrum. The bands at 1467, 1416 and 872 cm-1 were
ascribed to carbonate ions [12-14], indicating that B-type CHA had been successfully
synthesized. The incorporation of F127 into the particles was confirmed by the band at 2925 cm-1
(Fig. 7(a)). Complete removal of F127 and P123 during the calcination process was proven by the
spectra in Fig. 7(b) and Fig. 8(b): no absorption bands other than the characteristic bands of
CHA were found in the spectra after calcination.
Fig. 7: FTIR spectra of the CHA synthesized with F127 surfactant: (a) precursor sample CHA-F127-P, (b) calcined sample CHA-F127. (Labelled bands: OH- at 3568 cm-1; CO32- at 1467, 1416 and 872 cm-1; PO43- at 1092, 1034, 962, 603, 567 and 470 cm-1; F127 at 2925 cm-1 in the precursor.)
Fig. 8: FTIR spectra of the CHA synthesized with P123 surfactant: (a) precursor sample CHA-P123-P, (b) calcined sample CHA-P123. (Labelled bands: OH- at 3568 cm-1; CO32- at 1467, 1416 and 872 cm-1; PO43- at 1092, 1034, 962, 603, 567 and 470 cm-1.)
4. Conclusion
Mesoporous CHA was successfully prepared using the non-ionic surfactants F127 and P123
through a wet co-precipitation technique. The synthesized CHA was classified as Type B. The CHA
particles had an average pore diameter of 15.6 nm, with specific surface areas of 68.44 m2/g
(CHA-F127) and 84.43 m2/g (CHA-P123). The pore sizes observed under TEM are consistent
with those measured by the BJH method, and fall within the mesopore range. The results
imply that the proposed technique is able to produce mesoporous carbonated hydroxyapatite
and that non-ionic surfactants are suitable pore templates.
5. References
[1] Li, Y., W. Tjandra, and K.C. Tam, Synthesis and characterization of nanoporous hydroxyapatite using cationic surfactants as templates. Materials Research Bulletin, 2008. 43(8-9): p. 2318-2326.
[2] Uchida, A., et al., Slow release of anticancer drugs from porous calcium hydroxyapatite ceramic. Journal of Orthopaedic Research, 1992. 10(3): p. 440-445.
[3] Radin, S., et al., In vivo tissue response to resorbable silica xerogels as controlled-release materials. Biomaterials, 2005. 26(9): p. 1043-1052.
[4] Palazzo, B., et al., Controlled drug delivery from porous hydroxyapatite grafts: An experimental and theoretical approach. Materials Science and Engineering: C, 2005. 25(2): p. 207-213.
[5] Kim, H.-W., J.C. Knowles, and H.-E. Kim, Hydroxyapatite/poly(ε-caprolactone) composite coatings on hydroxyapatite porous bone scaffold for drug delivery. Biomaterials, 2004. 25(7–8): p. 1279-1287.
[6] Zhang, H., et al., An efficient method to synthesize carbonated nano hydroxyapatite assisted by poly(ethylene glycol). Materials Letters, 2012.
[7] Zhao, Y.F. and J. Ma, Triblock co-polymer templating synthesis of mesostructured hydroxyapatite. Microporous and Mesoporous Materials, 2005. 87(2): p. 110-117.
[8] Chen, J.D., et al., Self-organization of hydroxyapatite nanorods through oriented attachment. Biomaterials, 2007. 28(14): p. 2275-2280.
[9] Sanosh, K.P., et al., Synthesis of nano hydroxyapatite powder that simulate teeth particle morphology and composition. Current Applied Physics, 2009. 9(6): p. 1459-1462.
[10] Juang, H.Y. and M.H. Hon, Effect of calcination on sintering of hydroxyapatite. Biomaterials, 1996. 17(21): p. 2059-2064.
[11] Ślósarczyk, A., Z. Paszkiewicz, and C. Paluszkiewicz, FTIR and XRD evaluation of carbonated hydroxyapatite powders synthesized by wet methods. Journal of Molecular Structure, 2005. 744-747: p. 657-661.
[12] Zhang, H., et al., An efficient method to synthesize carbonated nano hydroxyapatite assisted by poly(ethylene glycol). Materials Letters, 2012: p. 26-28.
[13] Ślósarczyk, A., Z. Paszkiewicz, and C. Paluszkiewicz, FTIR and XRD evaluation of carbonated hydroxyapatite powders synthesized by wet methods. Journal of Molecular Structure, 2005: p. 657-661.
[14] Li, Y., W. Tjandra, and K.C. Tam, Synthesis and characterization of nanoporous hydroxyapatite using cationic surfactants as templates. Materials Research Bulletin, 2008. 43: p. 2318-2326.
Chemical Engineering & Fundamental and Applied Sciences
10:30-12:00, December 15, 2012 (Meeting Room 5)
Session Chair:
255: Biomarker Signatures and Depositional Environment Study of Source Rocks in the
Boxing Area of Dongying Depression, Eastern China
Ying Wang
China University of Petroleum
Luofu Liu
China University of Petroleum
285: Platinum-supported Nanoporous Carbon (Pt/CMK-3) as Electrocatalyst for Direct
Methanol Fuel Cell
Parasuraman Selvam
National Centre for Catalysis Research and Department of Chemistry, Indian Institute of Technology-Madras
Balaiah Kuppan
National Centre for Catalysis Research and Department of Chemistry, Indian Institute of Technology-Madras
333: Conjugate Conduction Convection and Surface Radiation in the Annulus of Two
Concentric Vertical Cylinders
Abhay K Sahu
Indian Institute of Technology
R.K. Saini
Indian Institute of Technology
M. Bose
Indian Institute of Technology
070: Path-finding in a Maze-like Puzzle using Multipartite Graph Algorithm
Nien-Zheng, Yew
Universiti Malaysia Sabah
Kung-Ming, Tiong
The University of Nottingham Malaysia Campus
Su-Ting, Yong
The University of Nottingham Malaysia Campus
304: Wetting Characteristics on Patterned Surfaces by Lattice Boltzmann Method
Ping Chen
China University of Petroleum
Guo Tao
China University of Petroleum
Mingzhe Dong
University of Calgary
Bing Wang
China University of Petroleum
315: Students’ Response on the Detailed Handout Lecture Notes
Thian Khoon Tan
The University of Nottingham Malaysia Campus
349: Scale-up of Polymethacrylate Monolithic Column: Understanding Pore
Morphology by Electron Microscopy
Clarence M Ongkudon
Universiti Malaysia Sabah
Ratna Dewi Sani
Universiti Malaysia Sabah
255
Biomarker Signatures and Depositional Environment Study of Source
Rocks in the Boxing Area of Dongying Depression, Eastern China
Ying WANG^{a,b}, Luofu LIU^{a,b,*}
a State Key Laboratory of Petroleum Resources and Prospecting, China University of Petroleum (Beijing), Beijing 102249, China
b Basin and Reservoir Research Center, China University of Petroleum (Beijing), Beijing 102249, China
* Corresponding author: liulf@cup.edu.cn
Abstract
In this research, the saturated hydrocarbons of 16 mudstone samples from two major source
rocks in the Boxing area, namely the upper 4th and lower 3rd Members of the Eocene Shahejie
Formation (Es4u and Es3l), were analyzed by gas chromatography (GC) and gas
chromatography-mass spectrometry (GC-MS), respectively. The biomarker signatures are
summarized as follows: for the Es4u source rock, there is an obvious predominance of phytane
(Pr/Ph is mostly lower than 0.5, Ph/n-C18 > 1.52) and gammacerane (gammacerane/C31
homohopane is mainly 1.16~5.11), a higher tricyclic terpanes/17α-hopanes ratio (0.05~0.79)
and a very low concentration of 4-methyl steranes; for the Es3l source rock, there is a pristane
predominance (Pr/Ph > 1), very low ratios of tricyclic terpanes/17α-hopanes (< 0.07) and
gammacerane/C31 homohopane (< 0.20), and an obviously higher concentration of 4-methyl
steranes. Besides, the Es4u source rocks can be divided into two types according to their gas
chromatograms: n-alkanes of Type A Es4u source rock extracts show a normal distribution with
n-C23 or n-C25 as the main peak; n-alkanes of Type B Es4u source rock extracts show a double peak
pattern (n-C14—n-C20 and n-C24—n-C30). Finally, the depositional environments of the two
source rocks in the Boxing area were identified: the Es4u source rock was deposited in
saline—hypersaline semi-deep (Type A Es4u source rock in the sag center) to shallow (Type B
Es4u source rock on the sag edge) lacustrine environments; the Es3l source rock was deposited
in freshwater—brackish semi-deep—deep lacustrine environments.
Keywords: biomarker, depositional environment, source rock, Boxing Sag, Dongying Depression
1. Introduction
Biomarkers have been widely used in reconstructing paleoenvironments. So far, there
have been some studies on distinguishing Chinese lacustrine environments with different salinities
through the biomarker signatures of source rock extracts or oils (e.g. as discussed by Philp et al.
[1], Wang [2], Fu et al. [3, 4] and Zhang et al. [5]).
The Boxing area, containing the Boxing Sag, which is one of the main sags in the Dongying
Depression, is located in the southwest of the Dongying Depression in the southern Bohai Bay
Basin, eastern China. Many earlier papers mentioned that there are two major source rocks in
the Boxing Sag, namely the upper 4th Member of the Eocene Shahejie Formation (Es4u) and the lower 3rd
Member of the Eocene Shahejie Formation (Es3l) (as discussed by Rong and Wang [6], Yang and
Chen [7] and Han et al. [8]). However, these conclusions were inferred from studies on the
Dongying Depression, and few of the samples used in those studies were from the Boxing area.
In this paper, the biomarker signatures of newly collected samples in the research area are
summarized. Then, combined with many earlier studies on the source and environmental
meanings of these biomarkers, the depositional environments of the two major source rocks
in the Boxing area are further identified.
2. Samples and Experimental Methods
In this research, 16 mudstone samples were collected, and the sampling wells are
representative of the whole study area.
Representative portions of the samples were crushed, powdered and passed through a 100 mesh
sieve successively. Then, the powdered samples were extracted with chloroform in a Soxhlet
apparatus for 72 hours, and the rock extracts were further treated with chloroform for 12
hours to precipitate asphaltenes. The filtrates were then fractionated by column
chromatography (silica gel/alumina, 3:1) into saturated and aromatic hydrocarbons, and
nonhydrocarbons. The elution solvents used were hexane, dichloromethane-hexane (7:3) and
chloroform-methanol (1:1), sequentially.
The saturated hydrocarbons of the rock extracts were analyzed by gas chromatography (GC) and
gas chromatography-mass spectrometry (GC-MS). The GC analysis was performed on an
Agilent 6890N equipped with a 30 m × 0.25 mm × 0.25 µm HP-5 fused silica capillary
column. The GC-MS analysis was performed on an Agilent 6890GC/5975iMS equipped with
a 60 m × 0.25 mm × 0.25 µm HP-5MS fused silica capillary column. Helium
carrier gas (99.99%) flowed at a constant rate of 1.0 mL·min-1. The MS was operated in both
full scan mode and selected ion monitoring mode with an ionization energy of 70 eV.
3. Results and Discussion
3.1 Normal Alkanes
The Es4u source rocks can be divided into two types according to their gas chromatograms:
n-alkanes of the Type A Es4u source rock (southern Boxing Sag) extracts show a normal
distribution with n-C23 or n-C25 as the main peak, and n-C21-/n-C22+ is 0.51~0.62; n-alkanes of
the Type B Es4u source rock (north and south edges of the Boxing Sag) extracts show a double
peak pattern (n-C14—n-C20 and n-C24—n-C30), except one sample from Well Liang-212
that shows a single peak (n-C22—n-C28), and n-C21-/n-C22+ is 0.31~0.97 (Table 1, Figure 1a).
Meanwhile, n-alkanes of the Es3l source rock (northern Boxing Sag) extracts show a normal
distribution or a weak double peak pattern, with n-C22—n-C25 predominant, and
n-C21-/n-C22+ is 0.71~0.77 (Table 1, Figure 1a).
Many earlier studies indicate that n-C23 or n-C25 as the main peak of the n-alkanes reflects
aquatic macrophyte input, that n-alkanes with carbon numbers smaller than 20 reflect lower
algae/bacteria input (as discussed by Cranwell [9], Wakeham [10] and Canuel et al. [11]), while
those bigger than 25 reflect terrestrial higher plant input (as discussed by Eglinton and Hamilton [12],
Cranwell et al. [13], Volkman et al. [14], Jaffé et al. [15] and Huang et al. [16]). Thus, the parent
materials of the Es3l and Type A Es4u source rocks are mainly from aquatic macrophytes of the
lacustrine environment, and the lower algae/bacteria input of Es3l is higher than that of the Type A Es4u
source rock; the parent material of the Type B Es4u source rock is a mixture of lower
algae/bacteria and terrestrial higher plants, while that of the sample from Well Liang-212 is
mainly from higher plants.
3.2 Acyclic Isoprenoids
Didyk et al. [17] discovered that the concentrations of pristane and phytane can reflect the
redox condition of an environment. Peters et al. [18] further state that Pr/Ph < 0.8 is more
closely related to anoxic, commonly saline to hypersaline environments, and Lu and Zhang
[19] summarized that Pr/Ph = 1~2 indicates a weak reducing—weak oxidizing environment.
The Pr/Ph ratio of the Es4u source rock is mostly lower than 0.5, while that of the Es3l source
rock is 1.05~1.28 (Table 1). Thus, the former had a higher salinity and stronger reducing
conditions than the latter. Meanwhile, the Pr/Ph of some Type A Es4u source rock samples
(0.48~1.27) is higher than that of the Type B Es4u source rock samples (0.16~0.47) due to
their higher thermal maturity, which is reflected by their deeper burial depths (Table 1).
Furthermore, for the Es4u source rock samples, Pr/n-C17 is higher than 1 except for one
sample from Well Gao-31, and Ph/n-C18 > 1.52. Li et al. [20] pointed out that a phytane
predominance can be used as a signature of a palaeoenvironment with high salinity. The
situation is the opposite for the Es3l source rock samples (Pr/n-C17 < 0.63, Ph/n-C18 < 0.6) (Table
1, Figure 1a), and Fu et al. [4] summarized that Pr/n-C17 and Ph/n-C18 ratios both smaller
than 1 are in accordance with freshwater—brackish lacustrine environments in China.
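The Pr/Ph criteria quoted above lend themselves to a simple screening helper (a Python sketch of the cited thresholds of Peters et al. [18] and Lu and Zhang [19]; the function name is ours):

```python
def classify_pr_ph(pr_ph: float) -> str:
    """Screen a depositional environment from the Pr/Ph ratio using the
    criteria cited above: < 0.8 suggests anoxic, commonly saline to
    hypersaline conditions; 1-2 suggests weak reducing to weak oxidizing."""
    if pr_ph < 0.8:
        return "anoxic, saline-hypersaline"
    if 1.0 <= pr_ph <= 2.0:
        return "weak reducing - weak oxidizing"
    return "indeterminate by this criterion"

# Es4u samples (mostly Pr/Ph < 0.5) vs Es3l samples (1.05-1.28), per Table 1
for ratio in (0.30, 0.47, 1.05, 1.28):
    print(ratio, classify_pr_ph(ratio))
```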
3.3 Terpanes
As shown in Table 1 and Figure 1b, the ratio of tricyclic terpanes/17α-hopanes is highest
in the Type A Es4u source rock samples (0.07~0.79), lower in the Type B Es4u source rock samples
(0.05~0.19) and lowest in the Es3l source rock samples (0.02~0.07). A high concentration of
tricyclic terpanes compared with hopanes has been claimed to be a feature of saline lacustrine
environments in China (as discussed by Philp et al. [1]); therefore the environment close to the
sag center, where the Type A Es4u source rock was deposited, has the highest salinity.
A high concentration of gammacerane is related to water density stratification associated with
high salinity (as discussed by Peters et al. [18] and Zhang et al. [21]). From Table 1, the
gammacerane/C31 homohopane ratio of the Es4u source rock samples (0.37~8.10, mainly
1.16~5.11) is much higher than that of the Es3l source rock samples (< 0.2). So the environment
where the Es4u source rock was deposited had higher salinity.
3.4 Steranes
For both the Es4u and Es3l source rocks, the ααα-20R C28 sterane shows an obviously lower
concentration than the ααα-20R C27 and C29 steranes; the former rock is dominated by ααα-C29
sterane while the latter has a higher ααα-20R C27 sterane concentration (Figure 1c).
Generally, aquatic organisms such as algae are rich in ααα-20R C27 sterane while
higher plants are rich in ααα-20R C29 sterane in continental sedimentation (as discussed by
Huang and Meinschein [22]). Accordingly, the original source of these two source rocks is
characterized by mixed input; the Es4u source rock is richer in higher plant input while
the Es3l source rock is richer in aquatic organism input, indicating that the Es4u source rock was
deposited in shallower water than the Es3l source rock.
Table 1. Biomarker parameters of the source rocks in the Boxing area of Dongying Depression, eastern China.

Sample type | Well | Depth (m) | Main peak of n-alkanes (carbon number) | Pr/Ph | Pr/n-C17 | Ph/n-C18 | n-C21-/n-C22+ | Tricyclic terpanes/17α-hopanes | Gammacerane/C31 homohopane*
Es3l source rock | Fan-118 | 3032.20 | 22 | 1.22 | 0.47 | 0.37 | 0.77 | 0.03 | 0.12
Es3l source rock | Liang-219 | 3009.00 | 25 | 1.05 | 0.63 | 0.60 | 0.71 | 0.07 | 0.20
Es3l source rock | Fan-137 | 2851.85 | 23 | 1.28 | 0.63 | 0.46 | 0.71 | 0.02 | 0.11
Type A Es4u source rock | Fan-137 | 3214.50 | 23 | 0.30 | 1.57 | 4.81 | 0.51 | 0.79 | 5.03
Type A Es4u source rock | Fan-143 | 3112.00 | 25 | 1.27 | 1.95 | 1.52 | 0.57 | 0.35 | 1.40
Type A Es4u source rock | Fan-138 | 3072.90 | 25 | 0.74 | 2.33 | 2.83 | 0.61 | 0.69 | 3.77
Type A Es4u source rock | Gao-89 | 3016.50 | 23 | 0.48 | 1.28 | 2.56 | 0.62 | 0.13 | 0.81
Type A Es4u source rock | Gao-891 | 2813.00 | 23 | 0.94 | 1.72 | 1.71 | 0.62 | 0.07 | 0.62
Type A Es4u source rock | Liang-216 | 3064.00 | 20 | 0.37 | 1.01 | 2.13 | 0.66 | 0.19 | 1.55
Type B Es4u source rock | Liang-212 | 2684.80 | 24 | 0.36 | 2.99 | 6.50 | 0.31 | 0.05 | 2.10
Type B Es4u source rock | Gao-890 | 2610.00 | 25 | 0.37 | 1.54 | 3.40 | 0.41 | 0.14 | 1.16
Type B Es4u source rock | Gao-31 | 2501.26 | 18 | 0.32 | 0.70 | 1.59 | 0.80 | 0.09 | 8.10
Type B Es4u source rock | Gao-351 | 2440.15 | 14 | 0.25 | 2.01 | 7.04 | 0.86 | 0.14 | 1.99
Type B Es4u source rock | Fan-127 | 2391.60 | 25 | 0.47 | 1.46 | 3.19 | 0.51 | 0.07 | 3.78
Type B Es4u source rock | Gao-38 | 2396.66 | 16 | 0.36 | 1.60 | 3.72 | 0.79 | 0.07 | 0.37
Type B Es4u source rock | Bo-104 | 2153.30 | 17 | 0.16 | 2.46 | 24.85 | 0.97 | 0.08 | 2.99

* Gammacerane/C31 homohopane = gammacerane/(22S 17α,21β-homohopane + 22R 17α,21β-homohopane).
Figure 1. Biomarkers of the source rocks in the Boxing area of Dongying Depression, eastern China: (a) GC: normal alkanes and isoprenoids; (b) GC-MS: terpanes (m/z 191); (c) GC-MS: steranes (m/z 217). Representative chromatograms are shown for the Es3l source rock (Wells Fan-137, 2851.85 m; Fan-118, 3032.20 m; Liang-219, 3009.00 m), the Type A Es4u source rock (Wells Gao-89, 3016.50 m; Fan-143, 3112.00 m; Gao-891, 2813.00 m) and the Type B Es4u source rock (Wells Gao-351, 2440.15 m; Liang-216, 3064.00 m; Fan-127, 2391.60 m); labelled peaks include the n-alkanes, Pr, Ph, tricyclic terpanes, Ts/Tm, C27–C35 hopanes, gammacerane and 4-methyl steranes.
For the Es4u source rock, the concentration of 4-methyl steranes is very low and cannot even be
detected in most samples (Table 1, Figure 1c); for the Es3l source rock, the concentration of
4-methyl steranes is very high in the sample from Well Liang-219 (Figure 1c), moderate in Well
Fan-118 and low in Well Fan-137. Generally, 4-methyl steranes mainly come from
dinoflagellates that lived in freshwater environments (as discussed by Fu et al. [3]). Thus, the Es3l
source rock in the north was deposited in a freshwater—brackish environment.
4. Depositional Environment Distribution
Combining all the above analyses with the distribution of samples in this research, the
depositional environment distributions of the Es4u and Es3l source rocks in the Boxing area were
modified from the earlier sedimentary facies maps of the Research Institute of
Geological Science (as discussed in [23]; Figure 2).
Figure 2. Depositional Environments of (a) Es4u and (b) Es3l source rocks in the Boxing area
of Dongying Depression, eastern China (according to the Research Institute of Geological
Science, Shengli Oilfield Company Limited, 2007).
5. Conclusions
According to the biomarker signatures of the Es4u and Es3l source rocks in the Boxing area, their
depositional environments were analyzed as follows:
(1) The Es4u source rock, which has an obvious phytane and gammacerane predominance, a higher
tricyclic terpanes/17α-hopanes ratio and a very low concentration of 4-methyl steranes, was
deposited in saline—hypersaline semi-deep (Type A Es4u source rock in the sag center) to
shallow (Type B Es4u source rock on the sag edge) lacustrine environments.
(2) The Es3l source rock, which has a pristane predominance, very low concentrations of tricyclic
terpanes and gammacerane, and a higher concentration of 4-methyl steranes, was deposited in
freshwater—brackish semi-deep—deep lacustrine environments.
65
6. Acknowledgements
The authors are very grateful to the Research Institute of Geological Science, Shengli
Oilfield Company Limited for funding this research, especially to Kefeng Wu and Zhiyong Liu
for sampling, and to Guanghua Jia, Dong Tang, Honglei Sun and Dongxu Wang for materials
collection.
7. References
[1] R.P. Philp, J. Li and C.A. Lewis, An organic geochemical investigation of crude oils from Shanganning, Jianghan, Chaidamu and Zhungeer basins, People's Republic of China, Organic Geochemistry, 14, 447-460, 1989.
[2] T. Wang, A contribution to some sedimentary environmental biomarkers in crude oils and source rocks in China, Geochimica, 3, 256-262, 1990.
[3] J. Fu, G. Sheng, J. Xu, G. Eglinton, A.P. Gowar, R. Jia, S. Fan and P. Peng, Application of biological markers in the assessment of paleoenvironments of Chinese non-marine sediments, Organic Geochemistry, 16, 769-779, 1990.
[4] J. Fu, G. Sheng, J. Xu, R. Jia, S. Fan and P. Peng, Application of biomarker compounds in assessment of paleoenvironments of Chinese terrestrial sediments, Geochimica, 1, 1-12, 1991.
[5] Z. Zhang, F. Yang, D. Li and C. Fang, Biomarker assemblage characteristics of source rocks and associated crude oil in saline lake facies of Cenozoic in China, Acta Sedimentologica Sinica, 16, 119-123, 1998.
[6] Q.H. Rong and G.Z. Wang, Geochemical signature of oil migration in Southeast Boxing Depression, Jiyang, China, Journal of Chengdu University of Technology (Science and Technology Edition), 31, 517-521, 2004 (in Chinese with English abstract).
[7] C. Yang and J. Chen, Petroleum genetic types and in depth exploration potential in the Boxing subsag, Petroleum Geology and Recovery Efficiency, 11, 34-36, 2004 (in Chinese with English abstract).
[8] D. Han, Z. Li, S. Li, C. Xu and L. Long, Geochemical characteristics of Paleogene mudstones in the Boxing Sag north of the west Shandong Rise and their tectonic implications, Chinese Journal of Geology, 42, 678-689, 2007 (in Chinese with English abstract).
[9] P.A. Cranwell, Lipid geochemistry of sediments from Upton Broad, a small productive lake, Organic Geochemistry, 7, 25-37, 1984.
[10] S.G. Wakeham, Algal and bacterial hydrocarbons in particulate matter and interfacial sediment of the Cariaco trench, Geochimica et Cosmochimica Acta, 54, 1325-1336, 1990.
[11] E.A. Canuel, K.H. Freeman and S.G. Wakeham, Isotopic composition of lipid biomarker compounds in estuarine plants and surface sediments, Limnology and Oceanography, 42, 1570-1583, 1997.
[12] G. Eglinton and R.J. Hamilton, The distribution of alkanes, in: T. Swain (Ed.), Chemical Plant Taxonomy, Academic Press Inc., 87-217, 1963.
[13] P.A. Cranwell, G. Eglinton and N. Robinson, Lipids of aquatic organisms as potential contributors to lacustrine sediments—II, Organic Geochemistry, 11, 513-527, 1987.
[14] J.K. Volkman, R.B. Johns, F.T. Gillan, G.J. Perry and H.J. Bavor Jr, Microbial lipids of an intertidal sediment—1. Fatty acids and hydrocarbons, Geochimica et Cosmochimica Acta, 44(8), 1133-1143, 1980.
[15] R. Jaffé, G.A. Wolff, A.C. Cabrera and H.C. Chitty, The biogeochemistry of lipids in rivers of the Orinoco Basin, Geochimica et Cosmochimica Acta, 59, 4507-4522, 1995.
[16] Y. Huang, F.A. Street-Perrott, R.A. Perrott, P. Metzger and G. Eglinton, Glacial-interglacial environmental changes inferred from the molecular and compound-specific δ13C analyses of sediments from Sacred Lake, Mt Kenya, Geochimica et Cosmochimica Acta, 63, 1383-1404, 1999.
[17] B.M. Didyk, B.R.T. Simoneit, S.C. Brassell and G. Eglinton, Organic geochemical indicators of palaeoenvironmental conditions of sedimentation, Nature, 272, 216-222, 1978.
[18] K.E. Peters, C.C. Walters and J.M. Moldowan, The Biomarker Guide Volume 2: Biomarkers and Isotopes in Petroleum Exploration and Earth History, Cambridge University Press, Cambridge, 499-614, 2005.
[19] S. Lu and M. Zhang, Oil and Gas Geochemistry, Petroleum Industry Press, Beijing, 178-194, 2010.
[20] R. Li, D. Lin, Z. Wang, M. Xin and Z. Huang, The new criteria used by judging high-salinity environment, Chinese Science Bulletin, 8, 604-607, 1986.
[21] L. Zhang, D. Huang and Z. Liao, Gammacerane—geochemical indicator of water column stratification, Acta Sedimentologica Sinica, 17, 136-140, 1999 (in Chinese with English abstract).
[22] W. Huang and W.G. Meinschein, Sterols as Ecological Indicators, Geochimica et Cosmochimica Acta, 43, 739-745, 1979.
[23] Research Institute of Geological Science, Depositional environment distribution of Es4u and Es3l source rocks in the Boxing Sag of Dongying Depression, Shengli Oilfield Company Limited, 2007 (internal project sources).
285
Platinum-supported Nanoporous Carbon (Pt/CMK-3) as
Electrocatalyst for Direct Methanol Fuel Cell
Balaiah Kuppan and Parasuraman Selvam*
National Centre for Catalysis Research and Department of Chemistry,
Indian Institute of Technology-Madras, Chennai 600 036, India
Abstract
Platinum-supported ordered mesoporous carbon catalysts were prepared by reduction of colloidal platinum using four different reducing agents, viz., paraformaldehyde, sodium borohydride, ethylene glycol and hydrogen, followed by deposition over ordered mesoporous carbon, CMK-3. The resulting platinum-nanoparticle-supported nanoporous carbon, designated Pt/CMK-3, was tested for the electrocatalytic oxidation of methanol. In this study, the effect of the various reduction methods on particle size, and in turn on the electrocatalytic performance, is systematically investigated. All the catalysts were characterized by various analytical, spectroscopic and imaging techniques. The results of the synthesis methods, characterization techniques and electrocatalytic measurements indicate that the Pt/CMK-3 catalysts are superior to the catalyst prepared with activated carbon (Pt/AC) as well as to the commercial platinum-supported carbon catalyst (Pt/E-TEK). In particular, the Pt/CMK-3 catalyst prepared using paraformaldehyde-reduced platinum showed much higher activity and long-term stability compared with the other reduction methods.
333
Conjugate conduction, convection and surface radiation in the annulus of two concentric vertical cylinders
A. K. Sahu, R. K. Saini, M. Bose*
Department of Energy Science and Engineering, Indian Institute of Technology Bombay, Mumbai 400076, India
*Corresponding author. E-mail: manaswita.bose@iitb.ac.in
Abstract
In many industrial applications, especially in solar and nuclear thermal power plants, one often encounters systems where all three modes of heat transfer play vital roles and influence the overall heat transfer efficiency. In this work, the effect of surface radiation on conjugate conduction–convection heat transfer is investigated using the techniques of computational fluid mechanics and heat transfer. The present investigation reveals that surface radiation has a significant influence on the overall convective heat transfer rate in a fluid flowing through the annulus of vertical tubes. The local Nusselt number is found to be enhanced twofold when the effect of radiation is considered, compared to when the radiative heat transfer between the walls is ignored.
Keywords: solar and nuclear thermal power plants; conjugate heat transfer; CFD; surface radiation; Nusselt number.
1. Introduction
In most technological applications, especially in the solar and nuclear thermal power sectors, the interaction of surface radiation with convection and conduction is important. The influence of conjugate conduction and convection in different flow configurations is well studied in the literature [1-8]. In recent years, the focus has shifted toward problems where the combined effects of all three modes of heat transfer are important [9-15].
Weng and Chu have studied the effects of thermal radiation on the natural convection heat transfer with a participating fluid inside a cylindrical annulus with both ends closed and insulated [15]. Ramesh and Merzkirch have investigated the combined effect of convection and radiation heat transfer in side-vented open cavities [12]. Rao et al. have numerically analyzed conjugate laminar mixed convection with surface radiation in a vertical channel [16]. Flush-mounted discrete heating sources are provided on the wall, and the effect of surface emissivity, aspect ratio, discrete heat source and modified Richardson number (Ri = Gr/Re²) on the behaviour of the fluid and the heat transfer within the channel is determined. Premachandran and
Balaji have studied the effect of surface radiation on conjugate convection in horizontal
channels with protruding heat sources and have proposed a correlation between the maximum
temperature and the dimensionless quantities such as Reynolds and Grashof numbers and
ratios of emissivities and thermal conductivities [11]. Sharma et al. have studied the influence of radiation on the transient turbulent convection heat transfer from a source placed inside a cylindrical enclosure [13]. In a more recent study, Kim and Choi have analysed the conduction–natural convection conjugate heat transfer in the gap between concentric cylinders under solar irradiation [10] and have reported that the higher the thermal conductivity, the smaller the Nusselt number of the surface. The influence of radiation on
conjugate natural convection in a horizontal annulus with an inner heat generating solid
cylinder have been studied by Shaija and Narasimham [14].
Nouanegue and Bilgen have studied conjugate heat transfer in solar chimney systems for heating and ventilation of dwellings using finite differences, and have observed that the Nusselt number is enhanced by surface radiation, which improves the ventilation performance of the chimney [17]. Recently, Al-Amri and El-Shaarawi have investigated the effect of radiation on the convective flow between heated vertical plates. The effects on temperature, fluid properties and Nusselt number are studied numerically. They have proposed a new dimensionless parameter called the radiation number, defined in terms of the Stefan–Boltzmann constant σ, the heat flux q, the channel width b and the thermal conductivity of the fluid kf [9].
Heat transfer in a fluid flowing through the annuli of two concentric vertical cylinders has been studied quite extensively in the literature [2, 3, 4, 8, 18]. Reddy and Narasimham have numerically simulated conjugate natural convection in a vertical annulus with a heat-generating rod located at the centre and have proposed correlations between the Nusselt number and the dimensionless temperatures [8]. Sankar and Do have studied the effect of discrete heating on convective heat transfer in a vertical cylindrical annulus [18]. To the best of the authors' knowledge, the analysis of conjugate conduction, convection and surface radiation in the annuli of two concentric vertical cylinders has not been reported in the literature. The objective of the present work is to investigate the influence of surface radiation on the overall heat transfer rate in a fluid flowing through the annuli of two concentric vertical cylinders. The outer wall of the outer cylinder is considered to be isothermal and the inner wall of the inner cylinder is assumed to be adiabatic. Both ends are open; fluid flows into the annulus from the bottom and exits from the upper end, as shown in Figure 1.
Figure 1: Schematic of the flow geometry, where r1 = 6.223, r2 = 8.636, r3 = 17.526 and r4 = 21.082. All dimensions are in mm. These values correspond to standard schedule 40 pipes of 3/8 inch and 5/4 inch as the inner and the outer cylinder, respectively [4].
2. Governing equations and numerical solution
Macroscopic mass, momentum and energy conservation laws with constant physical properties (thermal conductivity, viscosity, expansion coefficient) form the set of governing equations (1)–(4):

∇·u = 0    (1)

ρ₀[∂u/∂t + (u·∇)u] = −∇p + μ∇²u + ρg    (2)

ρ₀cp[∂T/∂t + (u·∇)T] = k∇²T    (3)

ρscs ∂Ts/∂t = ks∇²Ts    (4)
The buoyancy term in the momentum equation is simplified using the Boussinesq approximation, i.e. ρ = ρ₀[1 − β(T − T₀)], where β is the coefficient of volume expansion and ρ₀ is the density evaluated at the average temperature [22]. The fluid in the annulus is assumed to be transparent. Surface radiative heat transfer between the inner surface of the outer wall and the outer surface of the inner cylinder is considered. The radiative heat exchange between two grey surfaces is modeled using the Stefan–Boltzmann relation (5) [19]:

Q₁₂ = σ(T₁⁴ − T₂⁴) / [(1 − ε₁)/(ε₁A₁) + 1/(A₁F₁₂) + (1 − ε₂)/(ε₂A₂)]    (5)

where σ is the Stefan–Boltzmann constant, F is the shape factor and ε is the emissivity. Equation (4) can also be written as ∂Ts/∂t = αs∇²Ts, where αs = ks/(ρscs).
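As a quick numerical illustration of equation (5), the sketch below evaluates the net radiative exchange per unit length between two long concentric grey cylinders with shape factor F₁₂ = 1; the function name and the temperatures, radii and emissivities in the example are illustrative choices of ours, not the paper's results.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def grey_exchange_concentric(T1, T2, r1, r2, eps1, eps2):
    """Net radiation from surface 1 (outer face of the inner cylinder,
    radius r1) to surface 2 (inner face of the outer cylinder, radius r2),
    per unit length, using the standard two-grey-surface network with F12 = 1."""
    A1 = 2.0 * math.pi * r1          # area per unit length of surface 1
    A2 = 2.0 * math.pi * r2          # area per unit length of surface 2
    resistance = (1 - eps1) / (eps1 * A1) + 1.0 / A1 + (1 - eps2) / (eps2 * A2)
    return SIGMA * (T1**4 - T2**4) / resistance   # W per metre of annulus

# Illustrative call only: inner surface at 313 K, outer wall at 393 K
# (values of Table 2), radii r2 and r3 of Figure 1, black surfaces.
# The negative result is the net heat gained by the inner surface.
print(grey_exchange_concentric(313.0, 393.0, 0.008636, 0.017526, 1.0, 1.0))
```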
The relevant dimensionless groups identified by El-Shaarawi and Negm are listed in Table 1. The numerical values for which the present analysis is performed are also given in the same table.
Table 1: Dimensionless parameters used in the analysis [4]

Description | Numerical value
Grashof number | 0.5962
Modified Grashof number | 457.4982, 635, 887, 1230, 257, 1830, 2771
Prandtl number | 0.7053
Nusselt number | 14.2393
Modified Nusselt number | 0.22–0.30
Dimensionless radial coordinate | ---
Dimensionless axial coordinate | ---
Dimensionless temperature | ---
Reynolds number | 389.64, 500, 700, 1000, 1500
2.1 Boundary Conditions
The inlet velocity and the temperature of the fluid are specified at the bottom of the annulus. The annulus is open to the atmosphere. The outer wall of the large cylinder is maintained at a constant temperature of 393 K. The inner wall of the core cylinder is assumed to be adiabatic. The boundary conditions are: u = Uinlet and T = Tf,in at the inlet (Z = 0); T = Thot at the outer wall; ∂T/∂r = 0 at the inner wall; and p = P0 at the outlet. The numerical values of the inlet and outlet conditions are listed in Table 2.
Table 2: Boundary conditions for which the simulations are performed

Parameter | Numerical value | Unit
Thot | 393 | K
Uinlet | 0.4589, 1.1778, 11.777 | m/s
Tf,in | 313 | K
P0 | 1 | atm
2.2 Numerical method
The finite volume method is employed as the computational technique to simulate the fluid flow along with the heat transfer [20, 21]. The commercially available CFD software ANSYS Fluent is used as the simulation tool. The pressure-based solver is used, as the fluid is assumed to be incompressible. The second-order upwind scheme is used as the discretization method for the momentum and energy equations. The flow is also assumed to be axisymmetric. The convergence criterion is set at 10⁻⁶ in the respective units for the present simulations.
3. Results
A structured mesh of size 300 × 30, chosen on the basis of a grid convergence test, is used for the simulations. The temperature and velocity profiles within the annulus are determined. The overall heat transfer rates are obtained for the entire range of modified Grashof number. The variation of the modified Nusselt number with the modified Grashof number as a function of dimensionless axial position in the annulus is obtained for both cases, i.e. considering the effect of radiation and pure convection. The influence of radiation on the total heat transfer rate is determined, and a correlation between the modified Nusselt and Grashof numbers in the fully developed region is obtained.
3.1 Temperature profile
The dimensionless temperature profiles (θ) of the fluid at various axial positions within the annulus, for the cases with and without the effect of radiation, at Gr* = 457.4984, are shown in Figure 2.
The temperature of the fluid increases as it flows upwards in the annulus. The temperature of the fluid is significantly higher when radiation is considered. This is because the outer surface of the inner cylinder gets heated up by the radiative heat transfer from the inner surface of the outer wall, and the fluid then receives heat from both the inner and the outer cylinders through convection. The linear nature of the temperature profile in the radial direction between R = 1 and 1.2 is due to pure conduction in the solid wall. The temperature is constant between R = 0.4 and 0.5, as the heat flux through the wall of the inner cylinder is zero as a consequence of the adiabatic boundary condition at the inner surface of the hollow cylinder. The effect of surface radiation is captured more clearly in Figure 3(a), where the temperature of the interface between the inner cylinder and the fluid is plotted as a function of height. The wall temperature is lower when the heat transfer is due to pure convection than when the surface radiation between the two concentric cylinders is considered.
Figure 2: Temperature profiles at different non-dimensional axial positions (Z = 0.02L, 0.25L and 0.75L) for pure convection and with radiation, at modified Grashof number Gr* = 457.4984
The bulk temperature of the fluid, determined as Tb = ∫uT dA / ∫u dA over the annulus cross-section, is shown in Figure 3(b). The mean temperature of the fluid increases as it travels upwards. The bulk fluid temperature is also higher when the influence of radiation is considered than in the case of purely convective heat transfer.
Figure 3: (a) Temperature of the interface between the fluid and the inner cylinder along the annulus, for pure convection and with radiation, together with the temperature of the outer wall, for Gr* = 457.4984; (b) bulk fluid temperature in the annulus for the combined and purely convective modes of heat transfer, for Gr* = 457.4984
3.2 Correlation between modified Nusselt and Grashof numbers
Next, we have determined the influence of radiation on the modified local Nusselt number at the interface between the fluid and the outer cylinder. The modified Nusselt number, whose defining expression follows [4], incorporates the variation in the aspect ratio of the geometry. The local Nusselt number is determined from the local heat flux and the temperature difference between the bulk fluid and the wall, at various vertical positions in the annulus. The variation of the modified local Nusselt number with the modified Grashof number, for both cases, with and without the influence of radiation, is plotted in Figure 4.
Figure 4: Variation of the modified Nusselt number Nu* as a function of Gr* at different axial positions (Z = 0.02L, 0.25L and 0.75L) for the purely convective and combined modes of heat transfer
The modified Nusselt number increases with the modified Grashof number, implying an enhancement in the convective heat transfer rate. The heat transfer rate in the case where radiation is considered is higher than where only convective heat transfer is assumed. It is also observed that the Nusselt number decreases along the vertical direction. The percentage enhancement in Nu* due to radiative heat transfer, however, increases as the fluid moves upward, as shown in Figure 5(a). The modified Nusselt number is observed to have a power-law relationship, Nu* = k Gr*ⁿ, with the modified Grashof number in the fully developed region, both with and without the effect of radiation (Figure 5(b)). The numerical values of k and n are obtained by fitting the computed points to a straight line on a logarithmic scale. The slopes of the curves for the purely convective and combined modes of heat transfer are close to each other (1.23 without and 1.05 with the effect of radiation), but the value of k is about three times larger when the radiative heat transfer between the cylinder walls is considered.
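To illustrate this fitting step, the sketch below recovers k and n of a power law Nu* = k·Gr*ⁿ by a straight-line least-squares fit in log–log space; the sample arrays are synthetic placeholders, not the paper's data.

```python
import numpy as np

def fit_power_law(gr_star, nu_star):
    """Fit Nu* = k * Gr*^n by a straight-line fit in log-log space.
    Returns (k, n)."""
    logx, logy = np.log10(gr_star), np.log10(nu_star)
    n, log_k = np.polyfit(logx, logy, 1)   # slope = n, intercept = log10(k)
    return 10.0 ** log_k, n

# Placeholder data for illustration only (not the paper's results):
gr = np.array([457.4982, 635.0, 887.0, 1230.0, 1830.0, 2771.0])
nu = 0.0003 * gr ** 1.05                   # synthetic points on a known law
k, n = fit_power_law(gr, nu)
print(f"k = {k:.4g}, n = {n:.3f}")         # recovers k = 0.0003, n = 1.050
```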
Figure 5: (a) Percentage difference in the modified Nusselt number Nu* at different axial positions (Z = 0.02L to 0.75L) as a function of Gr*; (b) variation of Nu* as a function of Gr* in the fully developed region (purely convective: Nu* = 0.0001(Gr*)^1.2255, R² = 0.9981; with radiation: Nu* = 0.0003(Gr*)^1.05032, R² = 0.9966)
Figure 6: (a) Overall heat transfer rate per unit length as a function of modified Grashof number Gr* for pure convection and with radiation; (b) percentage difference in the overall heat transfer rate per unit length as a function of Gr*
3.3 Overall heat transfer rate per unit length
The overall heat transfer rate per unit length, obtained as Q/L = ṁcp(Tb,out − Tb,in)/L, where ṁ is the mass flow rate of the fluid, is determined next for both cases, with and without the effect of radiation. Figure 6(a) compares the overall heat transfer rate per unit length between the purely convective and the combined modes. The percentage difference in the overall heat transfer rate per unit length increases with the modified Grashof number (Figure 6(b)). The enhancement in the heat transfer rate is thus greater at higher modified Grashof number, i.e., the effect of surface radiation becomes more relevant as the Grashof number increases.
3.4 Effect of Emissivity and Reynolds number
The effect of the emissivity ε on the modified Nusselt number Nu* is also studied. The Nusselt number decreases with the reduction in emissivity (Figure 7(a)), as expected: the radiative heat transfer rate is lower for grey surfaces than for a pure black body. However, it is of interest to note that even for an emissivity as low as 0.6 (typical of most steel alloys), the Nusselt number is nearly twice the value obtained when the influence of radiation is not considered. Next, the influence of radiation on the modified Nusselt number for different Reynolds numbers is discussed. In the flow developing region, i.e., near the inlet, the modified Nusselt number increases with the mass flow rate, i.e., the Reynolds number (Figure 7(b)). In the fully developed region, an increase in the flow rate has no further effect on the modified Nusselt number. This observation is consistent with our earlier correlation, Nu* = k Gr*ⁿ, indicating that the Nusselt number in the fully developed region is independent of the Reynolds number within the range of parameters studied in the current work.
Figure 7: (a) Effect of emissivity (E = 0, 0.6, 0.8, 1) on Nu* as a function of axial position for Gr* = 457.4984; (b) effect of Reynolds number (Re = 389.64, 500, 700, 1000, 1500) on Nu* as a function of axial position for Gr* = 457.4984, with radiation
4 Conclusion
We have numerically investigated the influence of surface radiation on the overall convective heat transfer in a fluid flowing through the annulus of two vertical concentric cylinders. Radiation is observed to play an important role in the overall heat transfer rate even when the temperature of the heated wall is in a moderate range of about 120 °C and the difference between the wall and the inlet fluid temperature is about 80 °C. We have shown that the overall heat transfer rate is enhanced by up to 28% when the effect of radiation is considered, compared to the cases where heat transfer is assumed to be purely convective. Using a simple geometry, the present work demonstrates the importance of the effect of surface radiation on the design of heat exchange equipment.
5 References
[66] Choukairy, K., Bennacer, R. and Vasseur, P., Natural convection in a vertical annulus bounded by an inner wall of finite thickness, International Communications in Heat and Mass Transfer, 2004, Vol. 31, pp.501–512.
[67] Desai, C. P. and Vafai, K., Experimental and numerical study of buoyancy-induced flow and heat transfer in an open annular cavity, International Journal of Heat and Mass Transfer, 1996, Vol. 39 (10), pp.2053–2066.
[68] El-Shaarawi, M. and Al-Nimr, M., Fully developed laminar natural convection in open-ended vertical concentric annuli, International Journal of Heat and Mass Transfer, 1990, Vol. 33 (9), pp.1873–1884.
[69] El-Shaarawi, M. A. I. and Negm, A. A. A, Conjugate natural convection heat transfer in
an open-ended vertical concentric annulus, Numerical Heat Transfer, 1999, Vol. 36A,
pp.639–655.
[70] Khan, J. A. and Kumar, R., Natural convection in vertical annuli: A numerical study for constant heat flux on the inner wall, Journal of Heat Transfer, 1989, 111/915.
[71] Lacroix, M. and Joyeux, A., Coupling of wall conduction with natural convection from heated cylinders in a rectangular enclosure, International Communications in Heat and Mass Transfer, 1996, Vol. 23, pp.143–151.
[72] Mobedi, M, Conjugate natural convection in a square cavity with finite thickness
horizontal walls, International Communications in Heat and Mass Transfer, 2008, Vol.
35, pp.503–513.
[73] Reddy, P. V. and Narasimham, G. S. V. L., Natural convection in a vertical annulus driven by a central heat generating rod, International Journal of Heat and Mass Transfer, 2008, Vol. 51, pp.5024–5032.
[74] Al-Amri, F. G. and El-Shaarawi, M. A. I, Combined forced convection and surface
radiation between two parallel plates, International Journal of Numerical Methods of
Heat and Fluid Flow, 2010, Vol. 20 (2), pp.218–239.
[75] Kim, D. C. and Choi, Y. D., Analysis of conduction–natural convection conjugate heat transfer in the gap between concentric cylinders under solar irradiation, International Journal of Heat Transfer, 2009, Vol. 48, pp.1247–1258.
[76] Premachandran, B. and Balaji, C, Conjugate mixed convection with surface radiation
from a horizontal channel with protruding heat sources, International Journal of Heat
and Mass Transfer, 2006, Vol. 49, pp.3568–3582.
[77] Ramesh, N. and Merzkric, W, Combined convective and radiative heat transfer in side
vented open cavities, International Journal of Heat and Mass Transfer, 2001, Vol. 22,
pp.180–187.
[78] Sharma, A. K., Velusamy, K., Balaji, C. and Venkateshan, S. P, Conjugate turbulent
natural convection with surface radiation, International Journal of Heat and Mass
Transfer, 2007, Vol. 50, pp.625–639.
[79] Shaija, A. and Narasimham, G, Effect of surface radiation on conjugate natural
convection in a horizontal annulus driven by inner heat generating solid cylinder,
International Journal of Heat Transfer, 2009, Vol. 52, pp.5759–5769.
[80] Weng, L. C. and Chu, H. S., Combined natural convection and radiation in a vertical annulus, Heat and Mass Transfer, 1996, Vol. 31, pp.371–37.
[81] Rao, C. G., Balaji, C. and Venkateshan, S, Effect of surface radiation on conjugate
mixed convection in a vertical channel with a discrete heat source in each wall,
International Journal of Heat and Mass Transfer, 2002, Vol. 45, pp.3331–3347.
[82] Nouanegue, H. and Bilgen, E., Heat transfer by convection, conduction and radiation in solar chimney systems for ventilation of dwellings, International Journal of Heat and Fluid Flow, 2009, Vol. 30, pp.150–157.
[83] Sankar, M. and Do, Y, Numerical simulation of free convection heat transfer in a
vertical annular cavity with discrete heating, International Communications in Heat and
Mass Transfer, 2010, Vol. 37, pp.600–606.
[84] Sukhatme, S. P, A Textbook on Heat Transfer, Universities Press (India) Private Ltd,
India, 2006.
[85] Ansys Fluent, Ansys fluent user’s guide 13.0, ANSYS, Inc., 2010.
[86] Patankar, S. V, Numerical heat transfer and fluid flow, Taylor Francis, 2007.
[87] Bird, R. B., Stewart, W. E. and Lightfoot, E. N, Shell energy balances and temperature
distribution in solids and laminar flow, Transport phenomena, John Wiley and Sons Pvt
Ltd, Singapore, 2007.
070
Path-finding in a Maze-like Puzzle using Multipartite Graph Algorithm
Nien-Zheng Yew a, Kung-Ming Tiong b, Su-Ting Yong c
a (Formerly at) Universiti Malaysia Sabah, Jalan UMS, 88999 Kota Kinabalu, Sabah, Malaysia. E-mail address: yewnz27@gmail.com
b The University of Nottingham Malaysia Campus, Jalan Broga, 43500 Semenyih, Selangor, Malaysia. E-mail address: KungMing.Tiong@nottingham.edu.my
c The University of Nottingham Malaysia Campus, Jalan Broga, 43500 Semenyih, Selangor, Malaysia. E-mail address: Su-Ting.Yong@nottingham.edu.my
Abstract
Number Link is a Japanese maze-like logic puzzle where pairs of same numbers arranged in a
rectangular grid are distinctly connected using continuous and non-intersecting lines. A
closer investigation reveals that Number Link can be represented as a graph, whereby vertices
represent the numbered or unnumbered cells and edges represent the paths connecting the
cells. A simple reorientation then results in a multipartite graph where solvability can be
determined and the solution, including instances of multiple solutions, can be found using
appropriate algorithms that utilize the rules of the Number Link puzzle. In this paper, the
multipartite graph algorithm is used to solve Number Link puzzles ranging from size 6 x 6 to
27 x 27. The results of this multipartite graph algorithm are then compared with the
modified Tremaux’s algorithm previously studied.
Keywords: Number Link; Multipartite Graph Algorithm; Modified Tremaux's Algorithm
1. Introduction
Number Link is a Japanese maze-like logic puzzle that involves connecting pairs of same
numbers arranged in a rectangular grid using distinct continuous and non-intersecting lines.
According to the publishers of Number Link, solving a Number Link puzzle requires inspired
guessing and puzzle sense, and there is no perfect logical method to solve this puzzle [1].
There are, however, useful insights and tips on how to approach a Number Link puzzle, e.g.
in [2]. Solving Number Link puzzles programmatically was put forward as a Mathematica challenge in 2007 [3]. Algorithmically, there have been previous efforts on solving Number
Link puzzles using: (i) zero-suppressed binary decision diagrams (ZDDs) [4], (ii) integer
programming solver, CPLEX [5], and (iii) Sugar, a SAT-based constraint solver [6]. In a
previous study [7], Number Link puzzles were treated as maze-like constructs and a
well-known maze path finding algorithm was altered into the Modified Tremaux’s Algorithm
to solve the Number Link puzzles. However, there were limitations to this method: limited efficiency in finding the solution, applicability to only puzzles of smaller sizes, significantly larger memory and longer processing-time requirements, and an inability to find possible alternative solutions. Taking a closer look, it can be seen that a Number Link
puzzle can be represented as a multipartite graph. In this paper, the multipartite graph
algorithm is used to find the solution for Number Link puzzles and this method proves to be
superior to the Modified Tremaux’s Algorithm in all aspects.
2. Solving Number Link Puzzles using the Multipartite Graph Algorithm
2.1 Number Link
Number Link was developed by Issei Nodi and it was first published in 1987 by Nikoli Co.
Ltd. Nikoli publishes puzzle magazines and develops their own puzzle games, many of
which are subjects of research [1]. Number Link puzzles come in sizes of m x n, where m =
n or m ≠ n (with m representing columns and n representing rows as used by Nikoli’s
puzzles). An example of Number Link 7 x 7 is shown in Figure 1. The objective of the
puzzle is to connect the number pairs (e.g. 1 pairs up with 1, 2 pairs up with 2, and so on),
each with its own continuous line.
Figure 1: Example of Number Link and its solution [1]
There are three ground rules governing the solution of Number Link. Firstly, pairs of the
same numbers are connected with a single continuous line. Secondly, lines go through the
center of the cells, horizontally, vertically, or changing direction, and never twice through the
same cell. Thirdly, lines cannot cross, branch off or go through the cells with numbers [1].
Additionally, all cells must be marked by a line (although not mentioned explicitly by
Nikoli’s rules).
2.2 Conversion of Number Link to Multipartite Graph
A Number Link puzzle can be denoted by a graph. In the graph, vertices represent cells and edges correspond to the available paths between two adjacent cells. A simple 45-degree
anti-clockwise rotation would then transform the graph into a multipartite graph with k
partitions, i.e. a k-partite graph (Figure 2). A k-partite graph is a graph whose graph vertices
can be partitioned into k disjoint sets so that no two vertices within the same set are adjacent
[8].
Figure 2: A Number Link puzzle is denoted as a k-partite graph
The k-partite graph for a Number Link puzzle can be viewed as a collection of bipartite
graphs. A partition only connects to one or two partition(s) (e.g. partition 1 only connects to
partition 2, partition 2 only connects to partition 1 and 3). This observation is important in
the implementation of the methods in Sections 2.3 and 2.4. Based on the example in Figure
3, the multipartite graph for this puzzle consists of nine vertical (i = 1, 2, 3, … , 9) and nine
horizontal (j = 1, 2, 3, …, 9) partitions.
Figure 3: A 9-partite graph for a Number Link puzzle
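For concreteness, this reorientation can be sketched as anti-diagonal indexing; the function names below are ours, not the paper's, and the 5 x 5 example size is our inference from the nine partitions per direction seen in Figure 3.

```python
def partition_i(row, col):
    """Anti-diagonal index after the 45-degree rotation: cells with equal
    row + col share a partition, and no two of them are adjacent."""
    return row + col

def group_partitions(m, n):
    """Group the cells of an m x n grid into the m + n - 1 partitions
    in one direction; the perpendicular direction is analogous."""
    parts = [[] for _ in range(m + n - 1)]
    for r in range(m):
        for c in range(n):
            parts[partition_i(r, c)].append((r, c))
    return parts

# A 5 x 5 grid yields 9 partitions in each direction, matching Figure 3.
print(len(group_partitions(5, 5)))   # 9
```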
2.3 Solvability Test
Before solving the multipartite graph of a Number Link puzzle, a solvability test is implemented to check whether the puzzle is solvable. The following equations are used:
ui = 2vi - si
(1)
ai + bi = ui
(2)
where ui is the number of edges used by partition i, vi is the number of vertices at partition i,
si is the number of starting points (i.e. numbered vertices) at partition i, ai is the number of
edges used to connect partition i with partition i-1 and bi is the number of edges used to
connect partition i with partition i+1.
Since each unnumbered vertex must lie on a through-path (two edges) and each numbered vertex is a line end (one edge), the number of edges used by partition i is ui = 2vi − si. Given the k-partite graph of a Number Link puzzle, the puzzle's solvability can then be checked through the calculation of the ai and bi values. A puzzle is solvable if the calculation starting with a1 = 0 ends with ak = uk, and the calculation starting with b1 = u1 ends with bk = 0. Figure 4 shows an example of a solvable 9-partite graph of a Number Link puzzle with the calculated values of ui, ai and bi at each partition i and j.
Figure 4: Calculated values of ui, ai and bi for a 9-partite graph of a Number Link puzzle
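A minimal sketch of this solvability test, assuming (as the boundary values a1 = 0, b1 = u1, ak = uk and bk = 0 imply) that the edges counted by bi are the same edges counted by ai+1, so ai+1 = bi = ui − ai:

```python
def is_solvable(v, s):
    """Solvability test for the k-partite graph of a Number Link puzzle.
    v[i]: number of vertices in partition i; s[i]: number of numbered
    (starting-point) vertices in partition i."""
    u = [2 * vi - si for vi, si in zip(v, s)]  # eq (1): u_i = 2 v_i - s_i
    a = 0                                      # a_1 = 0: nothing to the left
    b = 0
    for ui in u:
        b = ui - a                             # eq (2): a_i + b_i = u_i
        a = b                                  # edges to the right become a_{i+1}
    return b == 0                              # solvable iff b_k = 0 (then a_k = u_k)
```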
2.4 Selection of Possible Paths: e-value of Vertices and the Corner Rule
To find the possible paths in a puzzle, the edges used by each partition are determined by
identifying the right edge values (e-values) that represent the edges used by each vertex. In
the multipartite graph, every vertex is assigned e-values that correspond to distinct path
directions with NE (e-value = 1), SE (e-value = 2), SW (e-value = 4) and NW (e-value = 8).
The final e-value of a vertex is the sum of e-values for all the edge(s) used with six possible
path selections (Figure 5).
Figure 5: Six different path selections of edges (represented in binary and decimal)
Possible path selections at each partition can be deduced based on the possible path selections
of the previous partition (e.g. Figure 6). However, every possible path selection that
violates the “hidden” rules of Number Link will be eliminated. The rules that apply are: (i)
an edge cannot make a “U-turn” along its adjacent edge, (ii) an unnumbered vertex must
connect to exactly two unnumbered vertices, and (iii) a numbered vertex must connect to
only one vertex.
Figure 6: Possible paths in the second partition
In the process of finding possible paths at each vertex, the selection of possible paths is hastened through an inbuilt corner rule that follows from the rules of the Number Link puzzle. Under the corner rule, an unnumbered vertex cornered by a right-angle path must be filled by another right-angle path. In terms of e-values, when the e-value of a vertex is determined to be 3 or 12, the corner rule applies in the path selection. The corner rule continues to apply until the newly cornered vertex is a numbered vertex (e.g. Figure 7).
Figure 7: Corner rule stops at a numbered vertex
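The e-value bookkeeping can be expressed compactly as a bitmask; a small sketch under that reading, with names of our own choosing:

```python
# Each diagonal direction gets one bit; a vertex's e-value is the sum of the
# e-values of the edges it uses.
NE, SE, SW, NW = 1, 2, 4, 8

# The six legal selections of two edges out of four directions
# (e-values 3, 5, 9, 6, 10 and 12, as in Figure 5):
PATH_SELECTIONS = [NE | SE, NE | SW, NE | NW, SE | SW, SE | NW, SW | NW]

def corner_rule_applies(e_value):
    """The corner rule fires for the right-angle e-values 3 (NE+SE) and 12 (SW+NW)."""
    return e_value in (NE | SE, SW | NW)
```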
2.5 Algorithm
A Number Link puzzle is converted into a k-partite graph with i and j partitions. The following algorithm is then used to solve the Number Link puzzle.
1. Calculate the u, a and b values for every partition, with initial values a1 = 0 and b1 = u1. If ak = uk and bk = 0 (solvable puzzle), move to step 2; else move to step 6.
2. Start with partition i = 1. Move to step 3.
3. Find the possible paths of partition i using the corner rule (e-value = 3 or e-value = 12) and the other Number Link rules. If paths are found, move to step 4. If there are no possible paths in this partition, move to step 6 when i = 1; else if i ≠ 1, assign i = i−1 and repeat step 3.
4. Record the current paths. If i = k, the puzzle is solved; move to step 5. If i ≠ k, assign i = i+1 and move to step 3.
5. Display solution.
6. No solution.
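The sweep-and-backtrack structure of steps 2-6 can be sketched as follows; possible_paths is a hypothetical stand-in for the path enumeration of Sections 2.3-2.4, not a routine from the paper:

```python
def solve(k, possible_paths):
    """Backtracking sweep over partitions 0..k-1 (steps 2-6).
    possible_paths(i, state) yields the path selections for partition i
    that satisfy the corner rule and the other Number Link rules, given
    the selections recorded so far in state."""
    state = [None] * k                      # recorded paths per partition
    iters = [iter(possible_paths(0, state))]
    i = 0
    while True:
        choice = next(iters[i], None)
        if choice is None:                  # no (more) paths in partition i
            if i == 0:
                return None                 # step 6: no solution
            state[i] = None
            iters.pop()
            i -= 1                          # step 3 fallback: i = i - 1
            continue
        state[i] = choice                   # step 4: record the current paths
        if i == k - 1:
            return state                    # step 5: solved
        i += 1
        iters.append(iter(possible_paths(i, state)))
```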
2.6 Example of a Number Link Puzzle Solved using Multipartite Graph Algorithm
This section explains the use of the multipartite graph algorithm, coded into the Number Link Solver, to solve an example Number Link puzzle (Figure 8). The number in brackets indicates the e-value of a vertex.
Figure 8: A Number Link puzzle solved step by step, panels (a)-(i), using the multipartite graph algorithm
The algorithm starts from the left and proceeds to the right, i.e. from i = 1 to i = 9. When i = 1, there is only one possible answer, e-value = 3, where the corner rule applies (Figure 8a). When i = 2, the two e-values are 5 and 10 respectively, because all vertices must be used in a Number Link puzzle (Figure 8b). Then, when i = 3, the corner rule applies (since e-value = 3) for the vertex at location j = 5, and this is followed by the other two e-values of 5 and 10 respectively (Figure 8c). Next, when i = 4, the e-values of the vertices at j = 4 and j = 6 can only be 5 and 10 respectively, and therefore the e-values for the vertices at j = 2 and j = 8 can only be 5 and 10 respectively (Figure 8d). The algorithm continues for i = 5 (Figure 8e), i = 6 (Figure 8f), i = 7 (Figure 8g) and i = 8 (Figure 8h), and the puzzle is finally solved (Figure 8i).
2.7 Multipartite Graph Number Link Solver
The Multipartite Graph Number Link Solver was developed using Visual Basic with a simple interface design. The Number Link solver (Figure 9a) has a default size of 15 x 20, but the size of the puzzle can be adjusted from 6 x 6 to 27 x 27.
Figure 9: (a) Default size is 15 x 20, (b) Solution drop box, (c) Solution 0, and (d) Solution 1
The Multipartite Graph Number Link solver implements the ideas in Sections 2.2-2.4 using
the algorithm outlined in Section 2.5. The Number Link solver has functions to determine a
puzzle’s solvability and to store multiple solutions (if any). When a puzzle is not solvable, it
displays “This puzzle has no solution”. If the puzzle is solvable, the solver proceeds to find
the solution(s). When multiple solutions occur, a drop box pops out after the puzzle is
solved (e.g. Figure 9b) and the solution displayed depends on the selection made (e.g. Figure
9c and 9d).
The Multipartite Graph Number Link solver is superior to the earlier Modified Tremaux's Algorithm Number Link solver in [7]. Both algorithms were tested with puzzles from [9] on an Intel® Core™ 2 Duo 3 GHz processor with 4 GB RAM. The following table shows the comparison between the two solvers.
Table 1: Comparison between the Modified Tremaux's Algorithm and Multipartite Graph Algorithm

Characteristic | Modified Tremaux's Algorithm Solver | Multipartite Graph Algorithm Solver
Output | Numerical | Lines (making it easier to see the paths)
Solution | Finds only a single solution | Able to find multiple solutions (when they exist)
Solving speed | Slow (for larger sizes, e.g. 15 x 20, it runs out of memory and hangs) | Fast (less than a second for smaller puzzle sizes; for larger ones, e.g. 15 x 20, within a few seconds)
Solvability (i.e. detection of an improper puzzle) | Cannot identify; continues to find and output a "solution" that does not traverse all the cells | Able to identify and tell the user that the puzzle has no proper solution that traverses all the cells
2.8 Conclusion
The Multipartite Graph Algorithm is an efficient method to solve Number Link puzzles. The present algorithm enables the solvability of a puzzle to be determined and multiple solutions to be found if they exist. In previous methods [4-6], the questions of solvability and the existence of multiple solutions were not considered. The current performance results could not be properly compared with the results of CPLEX, Sugar and ZDDs as presented in [4], as the Number Link puzzles used in [4] differ from the puzzles tested in this paper. We suspect, however, that the performance of the Multipartite Graph Algorithm is superior to ZDDs, as [4] reported a solving time of more than 175 s for puzzles of size 20 x 15, whereas our algorithm took merely a few seconds at most. It could be argued, though, that the puzzles considered, although of the same size, may have different levels of difficulty due to different path matchings, and the solving-time difference could also be due to differences in computer processor speed.
3 References
[1] Nikoli Co., Ltd. Nikoli Web. Retrieved July 12, 2012 from http://www.nikoli.co.jp/en/.
[2] Melon's Puzzles. A Numberlink Solving Primer. Retrieved July 13, 2012 from http://mellowmelon.wordpress.com/2010/07/24/numberlink-primer/
[3] Pegg Jr E. 2007. Number Link. The Mathematica Journal, 10 (3). Retrieved July 12, 2012 from http://www.mathematica-journal.com/data/uploads/2012/05/NumberLink.pdf
[4] Yoshinaka R, Saitoh T, Kawahara J, Tsuruma K, Iwashita H, Minato S. 2012. Finding all Solutions and Instances of Numberlink and Slitherlink by ZDDs. Algorithms, 5, pp.176-213. doi:10.3390/a5020176
[5] GLPK deno Pencil Puzzle Koryakuho (Solving Pencil Puzzles Using GLPK; in Japanese). Retrieved July 11, 2012 from http://www21.tok2.com/home/kainaga11/glpk/glpk.htm
[6] Tamura N. Solving Number Link Puzzles with Sugar Constraint Solver (in Japanese). Retrieved July 11, 2012 from http://bach.istc.kobe-u.ac.jp/sugar/puzzles/numberlink.html
[7] Yew N Z, Tiong K M, Yong S T. Recursive Path-finding in a Dynamic Maze with Modified Tremaux's Algorithm. In Proc. of WASET Issue 60: International Conference on Applied Mathematics and Engineering Mathematics (ICAMEM), Phuket, Thailand, December 21-23, 2011, pp.845-847.
[8] Weisstein E W. k-Partite Graph. MathWorld--A Wolfram Web Resource. Retrieved July 12, 2012 from http://mathworld.wolfram.com/k-PartiteGraph.html
[9] Nikoli Co., Ltd. 2006. Number Link (Chinese translation). Grimm Press, Taiwan.
304
Wetting characteristics on patterned surfaces by Lattice Boltzmann
method
Ping Chen a, Guo Tao a,*, Mingzhe Dong b, Bing Wang a
a State Key Laboratory of Petroleum Resource and Prospecting, China University of Petroleum (Beijing), 18 Fuxue Road, Changping, Beijing, China
* Corresponding author. E-mail address: taoguo@vip.sina.com
b Department of Chemical and Petroleum Engineering, University of Calgary, 2500 University Dr. NW, Calgary, Alberta, Canada. E-mail address: mingzhe.dong@ucalgary.ca
Abstract
The influence of surface texture on wettability is one of the most interesting research topics, from biology to petroleum industry applications. In this paper the Lattice Boltzmann method (LBM) is employed to model the wetting changes of a drop sitting on patterned surfaces. The applicability of the LBM for characterizing the effects of surface topology on wettability is demonstrated first by the consistency of the apparent contact angle (APCA) predicted by the LBM with the APCA measured in previous physical laboratory experiments. The simulated contact angles show the same trends as the theoretical predictions and laboratory measurements in terms of wettability alteration versus surface roughness. The drop distributions on patterned surfaces are mimicked as they would appear in a physical laboratory measurement. Quantitative LBM simulation could be a significantly more efficient tool, considering the time and money costs of measuring drop distributions and APCAs on patterned surfaces in a physical laboratory.
Keywords: patterned surfaces; wetting; Lattice Boltzmann method; apparent contact angles.
1. Introduction
Wetting and spreading are universal surface phenomena occurring when a liquid comes into contact with a solid surface. Wetting on a solid surface is affected by two factors: the surface energy and the surface roughness [1]. The surface energy is an intrinsic chemical property of the material, and wetting can be manipulated directly through surface properties, especially the roughness of the surface. It has been observed that the structure of surfaces can significantly influence the wetting behavior [2-6]. Therefore, it is necessary to study how surface structure influences wettability, and hence to help design surface topologies that promote desired wetting behavior. Such studies may lead to applications in many fields, such as microfluidic systems, bio-sensing, heat exchange and self-cleaning surfaces [7-10].
The apparent contact angle (APCA) has been commonly used to characterize rough-surface wetting. The effects of surface roughness on wettability have been explored for more than half a century [11-16]. Onda et al. prepared fractal surfaces and compared the measured APCA with the APCA predicted by Wenzel's theory [3]. Bico et al. prepared substrates with specified surface roughness and verified that the APCA is in good agreement with the results predicted by the Cassie theory [17]. Bico et al. also proposed that, in the hydrophilic case, a liquid film can invade the texture of a solid wall decorated with pillars, a state intermediate between spreading and imbibition called hemi-wicking [18]. The aforementioned experiments and previous research are not conclusive regarding which of the theories correctly models the APCA on a rough surface; all states are possible on the same rough surface, and more investigation is required. Because it is time-consuming and costly to perform such experiments in the laboratory, owing to the requirement for sophisticated apparatus [19], numerical modeling is needed to explore the apparent contact angles on patterned surfaces.
In this paper the Lattice Boltzmann method (LBM) is employed for meso-scale modeling of the wetting phenomena of a drop resting on topologically patterned surfaces. The predicted, measured and simulated APCAs on patterned surfaces are compared. Finally, the APCAs on a variety of patterned surfaces are modeled and the relationship between the APCA and the Young contact angle is discussed.
2. Theory
2.1 Contact angles on patterned surfaces
The basis for studying wetting on rough surfaces was established by Wenzel [11] and Cassie [12] over half a century ago. The Wenzel equation describes the wetting state where a liquid drop penetrates into the corrugations of the patterned surface, as seen in Fig. 1(a). The Wenzel apparent contact angle θw is written as a function of the Young contact angle θ and the roughness ratio r:

cos θw = r cos θ    (1)

The roughness ratio is defined as the ratio of the true area of the solid surface to its nominal area. The Wenzel model is valid for θ* < θ < π/2, where θ* satisfies:

cos θ* = (1 − φ) / (r − φ)    (2)

where φ is the fraction of the solid/liquid interface. If the contact angle is smaller than θ*, the penetration front spreads beyond the drop and a liquid film forms over the surface.
Fig. 1(b) depicts the transition from the Wenzel state to the surface-film state, called hemi-wicking [18]. The apparent contact angle θhw is written as:

cos θhw = φ cos θ + (1 − φ)    (3)

Cassie theory describes the wetting situation where a drop sits on the patterned surface and air remains trapped in the grooves below the drop, forming air pockets, as illustrated in Fig. 1(c). The apparent contact angle θC is defined as:

cos θC = f (1 + rf cos θ) − 1    (4)

In this equation, f is the fraction of the projected area of the solid surface that is wet by the liquid, and rf is the roughness ratio of the wet area. When f = 1, rf = r, and the Cassie equation reduces to the Wenzel equation.
Fig. 1 (a) Wenzel state; (b) hemi-wicking state; (c) Cassie state.
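A small sketch evaluating equations (1)-(4) for given surface parameters; the function names and the example inputs are our own illustrative choices, not the paper's.

```python
import math

def wenzel_apca(theta_deg, r):
    """Wenzel angle, eq (1): cos(theta_w) = r cos(theta).
    Valid while |r cos(theta)| <= 1."""
    return math.degrees(math.acos(r * math.cos(math.radians(theta_deg))))

def hemiwicking_apca(theta_deg, phi):
    """Hemi-wicking angle, eq (3): cos(theta_hw) = phi cos(theta) + 1 - phi."""
    return math.degrees(math.acos(phi * math.cos(math.radians(theta_deg)) + 1.0 - phi))

def cassie_apca(theta_deg, f, rf):
    """Cassie angle, eq (4): cos(theta_C) = f(1 + rf cos(theta)) - 1."""
    return math.degrees(math.acos(f * (1.0 + rf * math.cos(math.radians(theta_deg))) - 1.0))

# Illustrative only: for theta = 117.3 deg with hypothetical f = 0.3, rf = 1,
# the Cassie angle comes out near 147 degrees.
print(cassie_apca(117.3, 0.3, 1.0))
```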
2.2 Basic theory of LBM
The LBM has been broadly used to reveal the physical mechanisms of pore-scale fluid-fluid and fluid-solid interactions with high computational efficiency [20,21]. The particle movements in an LB algorithm are separated into collision and propagation steps, described by the following equation:

fi(x + ei, t + 1) = fi(x, t) + Ωi(f(x, t))    (5)

where fi(x, t) is the particle distribution at position x along direction i at time t, ei is the local particle velocity, and Ωi(f(x, t)) is the collision operator, which gives the rate of change of fi(x, t) resulting from collision [22]:

Ωi(f(x, t)) = −(1/τ)(fi(x, t) − fi^eq(x, t))    (6)

where τ is the relaxation parameter and fi^eq is the local equilibrium distribution function, written as:

fi^eq(x, t) = wi ρ [1 + 3(ei·u) + (9/2)(ei·u)² − (3/2)u²]    (7)

where u is the velocity and ρ is the density. The most popular model in two-dimensional media is D2Q9, where "D2" stands for "two dimensions" and "Q9" stands for "9 speeds"; the weights are w0 = 4/9, wi = 1/9 for i = 1, 2, 3, 4 and wi = 1/36 for i = 5, 6, 7, 8.
Microscopically, for the two-phase model the segregation of a fluid system into different phases is due to interparticle forces. The fluid-fluid interaction force is written as [23]:

Ff(x) = −ρ^σ(x) Gf Σi ρ^σ̄(x + ei) ei    (8)

where Gf is the fluid-fluid cohesive parameter and the superscripts σ and σ̄ denote the two components, respectively. When attractive forces exist between the particles, Gf is negative; ρ^σ̄ is the density function of the fluid σ̄. Similarly, the interaction force between the fluid σ and the solid walls is represented as:

Fw = −ρ^σ(x) Gw^σ Σi s(x + ei) ei    (9)

where s represents the pore distribution in the porous medium and Gw^σ is the adhesive parameter between the fluid σ and the solid wall; the indicator function s(x + ei) is set to 0 for matrix and 1 for pores. A positive (negative) value of the adhesive parameter represents the adhesive force of the wetting (non-wetting) fluid. The new particle velocity u*(x) is then modified to incorporate the fluid-fluid and fluid-solid interaction forces above:

u*(x) = u(x) + τ(Ff + Fw)/ρ^σ(x)    (10)
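As a concrete illustration of equations (5)-(7), a minimal D2Q9 sketch computing the equilibrium distributions for a single lattice node; the array names are ours.

```python
import numpy as np

# D2Q9 lattice: the 9 discrete velocities e_i and their weights w_i
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def f_eq(rho, u):
    """Equilibrium distributions f_i^eq of eq (7) for one node with
    density rho and velocity u = (ux, uy)."""
    eu = E @ u                                    # e_i . u for every direction
    return W * rho * (1.0 + 3.0 * eu + 4.5 * eu**2 - 1.5 * (u @ u))

# Sanity check: the equilibrium moments recover rho and rho*u.
rho, u = 1.0, np.array([0.05, -0.02])
f = f_eq(rho, u)
print(f.sum())          # ~1.0 (density)
print(E.T @ f)          # ~[0.05, -0.02] (momentum = rho*u)
```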
3. Simulation Results and Discussions
The approach of modeling contact angles on a flat surface by selecting the parameters Gf and Gw^σ in the LBM has been investigated previously [24]. Using the parameters in Table 1, the modeled contact angles θM listed in the same table are obtained, and the corresponding drop distributions are illustrated in Fig. 2. It can be seen from the table that the results essentially cover the range from 0 to 180 degrees, which indicates that the LBM is capable of modeling contact angles on a flat surface. Next, we check the reliability of the LBM for investigating apparent contact angles on patterned surfaces.
The patterned surfaces employed in the simulations are assumed to be textured with regular micro-pillars, as shown in Fig. 3. The parameters a and H in Fig. 3 represent the side length and the height of a pillar, respectively, and the parameter b is the width of the grooves. These three pillar parameters can be selected to texture the patterned surface to any desired roughness.
The previously published experimental results in the literature [25,26] are taken as references. The pillars are distributed evenly on the patterned surface, with the following dimensions measured by scanning electron microscopy: a = 23.73 μm, b = 13.6 μm and H = 45.2 μm. The APCA predicted by the Cassie equation and the APCA measured in the laboratory are 149.7 degrees and 147.4 degrees, respectively, at a Young contact angle θ = 117.3 degrees.
The APCA modeled by the LBM is 146.6 degrees. The results from the three methods are thus very close to each other. Another example is presented by comparing the APCA on a patterned surface with pillars of a = 8 μm, b = 16 μm and H = 32 μm. When the Young contact angle is 105 degrees, the predicted and measured APCA are 144.6 degrees and 144.5 degrees, respectively, and the APCA obtained by the LBM is 142.7 degrees. Again, there are only minor differences (1.3%) among the three APCA results. Therefore, it can be concluded that the LBM is capable of modeling the APCA on patterned surfaces quite precisely.
Table 1. Contact angles θM (degrees) and adhesive parameters (Gw = Gw^σ = −Gw^σ̄, Gf = 0.0252)

Case | Gw | θM (degrees)
a | −0.0114 | 170.9
b | −0.0112 | 158.3
c | −0.008 | 135.1
d | −0.005 | 117
e | −0.002 | 103.2
f | 0.002 | 75.3
g | 0.005 | 59.5
h | 0.008 | 40.6
i | 0.011 | 18.9
k | 0.0118 | 7.9
Fig. 2 Contact angles modeled by the LB method
Fig. 3 The pillars on the patterned surface
Subsequently, the LBM is used to analyze the distributions of a drop on patterned surfaces of distinct roughness. In this numerical experiment the surface roughness is taken as 2, 3, 3.7 and 4, from left to right, by picking appropriate pillar sizes and separation distances. When the Young contact angle is 120 degrees, the drop distributions on the patterned surfaces are as illustrated in Fig. 4. The APCAs of the drop on the rough surfaces are, in sequence, 125.2, 131.0, 138.6 and 146.4 degrees from Fig. 4(a) to Fig. 4(d), i.e. clearly greater than 120 degrees and increasing with roughness. Looking at further details, the grooves in the rough surface of Fig. 4(a) are filled with the liquid, which corresponds to the so-called Wenzel state. As the surface roughness increases further, the liquid withdraws from the grooves and is replaced by air, as illustrated in Figs. 4(b)-(d). The drop sits on top of the textured surface with trapped air underneath: the liquid is excluded from the grooves and air remains, producing a solid-air interface beneath the droplet. At this point, the Cassie state appears. Both the Cassie state and the Wenzel state of the drop on patterned surfaces are mimicked as they would appear in a physical laboratory measurement.
In addition, it also confirms that the patterned surface increases the hydrophobic property of the drop when the Young contact angle is larger than π/2, which is well known in this field.

Fig. 4 Fluid distribution on patterned surfaces, panels (a)-(d), at Young contact angle = 120 degrees

Fig. 5 Fluid distribution on patterned surfaces, panels (a)-(d), at Young contact angle = 60 degrees
Following up on these studies, a series of simulations is performed with the Young contact angle at 60 degrees. The structures of the patterned surfaces selected for these simulations are the same as those in the previous simulations. Fig. 5 shows that the resulting drops on these patterned surfaces are no longer rounded. The roughness is smoothed by a liquid film covering the patterned surface in these cases, and therefore hemi-wicking occurs instead of the Wenzel state. The apparent contact angles are 49.1, 34.4, 32.7 and 31.1 degrees, respectively. These results show that although the film formed around the drop improves the wetting, it does not change the wetting as much as predicted by the Wenzel model. In fact, the APCA on the patterned surface would be zero only if the Young contact angle were zero on the flat surface.
So far, we have performed a series of simulations and shown the resulting drop states to be consistent with the existing theories [11,12,18]. To further test the applicability of LBM modeling, extensive simulations of the APCA θa of a drop sitting on a variety of patterned surfaces are carried out for different Young contact angles θ. The correspondence between θa and the Young contact angle is shown in Fig. 6. It can be seen that θa is always smaller than θ as long as θ is less than π/2, while the opposite is true if θ > π/2. This implies that the wetting is enhanced at θ < π/2 and the non-wetting is
strengthened at θ > π/2, owing to the surface textures. The trends of the cosine of the apparent contact angle θa against the cosine of the Young contact angle θ are found to be consistent with previously published experiments [3,27]. This corroborates that surface roughness is the major factor controlling and regulating the apparent contact angle.
4. Conclusion
In this paper, the LBM is employed to model the wetting changes of a drop due to surface roughness. The APCA is first simulated with the LBM and then compared to the predicted and laboratory-measured APCA; the applicability of the LBM is proven by the consistency of these results. Application of the LBM is then extended to analyze the drop distributions on a variety of patterned surfaces and the relationship between the APCA and the Young contact angle. The simulated contact angles are found to show the same trends as the previously published experimental results. Quantitative LBM simulation could be a significantly more efficient tool, considering the time and money costs of measuring drop distributions and APCAs on patterned surfaces in a physical laboratory.
Fig. 6 Cosine of the apparent contact angle θa vs. cosine of the Young contact angle on patterned surfaces with roughness r = 2, 3, 11/3, 4.
5. Acknowledgement
This study is supported by the NSFC (No. 41174118). The author would like to thank the China Scholarship Council for its sponsorship as a visiting student (No. 2009644010) at the University of Calgary, and also thanks Dr. Zhu of China Petroleum Logging Co. Ltd. for his help.
6. Reference
[1] B. He, Neelesh A. Patankar, and Junghoon Lee, Multiple equilibrium droplet shapes and
design criterion for rough hydrophobic surfaces, Langmuir, 2003, 19, 4999-5003.
[2] D. Quéré, Wetting and roughness, Ann. Rev. Mater. Res. 2008, 38, 71-99.
[3] T. Onda, S. Shibuichi, N. Satoh and K. Tsujii, Super-water-repellent fractal surfaces,
Langmuir, 1996,12 (9), 2125–2127.
[4] W. Barthlott and C. Neinhuis, Purity of the sacred lotus, or escape from contamination in biological surfaces, Planta, 1997, 202(1), 1-8.
[5] R. Blossey, Self-cleaning surfaces—virtual realities. Nat. Mater.2003, 2, 301–306
[6] K. K. S. Lau, J. Bico, K. B. K. Teo, M. Chhowalla, G. A. J. Amaratunga, W. I. Milne, G.
H. McKinley and K. K. Gleason, Superhydrophobic carbon nanotube forests,Nano Lett.
2003, 3, 1701–1705.
[7] C. I. Park, H. E. Jeong, S. H. Lee, H. S. Cho and K. Y. Suh: Wetting transition and
optimal design for microstructured surfaces with hydrophobic and hydrophilic
materials, J. Colloid Interf. Sci. 2009, 336, 298-303.
[8] Z. Burton and B. Bhushan, Hydrophobicity, adhesion and friction properties with
nanopatterned roughness and scale dependence . Nano Lett. 2005, 5, 1607-1613.
[9] H. Ren, R. B. Fair and M. G. Pollack, Automated on-chip droplet dispensing with volume
control b electro-wetting actuation and capacitance metering,Sensors and. Actuators B,
2004, 98 , 319-327.
[10] A. Marmur. Solid-surface characterization by wetting, Annu. Rev. Mater. Res. 2009, 39,
473-489.
[11] R. N. Wenzel, Resistance of solid surfaces to wetting by water, Ind. Eng. Chem.
1936,28 (8), 988-994.
[12] A.B.D. Cassie and S. Baxter, Wettability of porous surfaces, Trans. Faraday Soc.
1944, 40, 546-551.
[13] Z.Yoshimitsu, A. Nakajima, T. Watanabe and K. Hashimoto, Effects of surface on the
hydrophobicity and sliding behavior of water droplets, Langmuir 2002, 18, 5818-5822.
[14] A. Lafuma and D. Quere, Superhydrophobic states, Nat. Mater., 2003, 2, 457-460.
[15] C. W. Extrand, Model for Contact angle and hysteresis on rough and ultraphobic
Surfaces, Langmuir, 2002, 18, 7991-7999.
[16] A. Marmur, Wetting on hydrophobic rough surfaces: to be heterogeneous or not to
be?, Langmuir, 2003, 19, 8343-8348.
[17] J. Bico, C. Marzolin and D. Quéré, Pearl drops, Europhysics Letters, 1999, 47, 220-226.
[18] J. Bico, U. Thiele and D. Quéré, Wetting of textured surfaces, Colloids and Surfaces A: Physicochemical and Engineering Aspects, 2002, 206, 41-46.
[19] Y. Xia and G.M. Whitesides, Replica molding with a polysiloxane mold provides patterned microstructures, Angew. Chem. Int. Ed., 1998, 37, 550-575.
[20] M.G. Schaap, M.L. Porter, B.S.B. Christensen and D. Wildenschild, Comparison of
pressure-saturation characteristics derived from computed tomography and Lattice
Boltzmann simulations, Water Resour. Res., 2007, 43, W12S06.
[21] M.C. Sukop and D.T. Thorne, Jr., Lattice Boltzmann Modeling: An Introduction for
Geoscientists and Engineers, Springer, Heidelberg, Berlin, New York, 2006.
[22] Y. H. Qian, D. D'Humières and P. Lallemand, Lattice BGK models for Navier-Stokes
equation, Europhysics Letters, 1992, 17, 479-484 .
[23] X. Shan and H. Chen, Lattice Boltzmann model for simulating flows with multiple
phases and components, Phys. Rev. E, 1993, 47(3), 1815-1819.
[24] B. He, N. A. Patankar and J. Lee, Multiple equilibrium droplet shapes and design
criterion for rough hydrophobic surfaces, Langmuir, 2003,19, 4999-5003.
[25] M. C. Sukop and D. T. Thorne, Lattice Boltzmann Modeling: An Introduction for
Geoscientists and Engineers, 2006,Springer, Heidelberg, Berlin, New York.
95
[26] K. A. Wier and T. J. McCarthy, Condensation on ultrahydrophobic surfaces and its effect
on droplet mobility: Ultrahydrophobic surfaces are not always water repellant, Langmuir,
2006, 22, 2433-2436.
[27] S. Shibuichi, T. Onda, N. Satoh and K. Tsujii. Super water-repellent surfaces resulting
from fractal structure, J. Phys. Chem., 1996,100 (50), 19512-19517.
315
Students’ Response on the Detailed Handout Lecture Notes
Thian Khoon TAN
The University of Nottingham Malaysia Campus, Faculty of Engineering, Jalan Broga,
43500 Semenyih, Selangor Darul Ehsan, Malaysia.
E-mail address: TK.Tan@nottingham.edu.my
Abstract
Lecture notes provide the content of a lecture; they are therefore among the important course materials that assist students in understanding the lesson. The current study looked into students' responses to detailed handout lecture notes. Handout lecture notes vary among lecturers, especially in content layout, level of detail (point form or lengthy sentences), and the presence or absence of diagrams and figures. The way they are presented may form students' first impression of the lecturer and the lectures: it may fail to excite students' interest, or it may ignite their curiosity, which could indicate early and positive responses before the lecture starts. This study was carried out by distributing different handouts for the F40 BMB module during the lectures. The handout lecture notes were separated into two parts: the first part comprised 4 lectures given with detailed lecture notes, and the second part, the last 4 lectures, was given with less detailed lecture notes. At the end of the module, the students were given a questionnaire to find out their preferences. They were also asked whether they took down any notes during lectures, and a few other categories of questions were added to the questionnaire. It was concluded that all students preferred detailed handout lecture notes. Nevertheless, when the students' overall performance was compared with that of earlier batches, they did better than their earlier counterparts.
349
Scale-up of Polymethacrylate Monolithic Column: Understanding Pore
Morphology by Electron Microscopy
Clarence M. Ongkudon a,*, Ratna Dewi Sani a
a Biotechnology Research Institute, Universiti Malaysia Sabah, Jalan UMS, 88400, Kota Kinabalu, Sabah, Malaysia
* E-mail address: clarence@ums.edu.my
Abstract
This paper describes the pore morphology of polymethacrylate monoliths prepared via a bulk polymerisation process. The pore size and globule size were estimated using a Scanning Electron Microscope (SEM). Nine different sections of the monolith were each studied under the SEM. Interestingly, the results showed that the pore size was more widely distributed at the top section of the monolith than at the bottom section, whilst the globule size was fairly homogenously distributed across the monolith. A bimodal pore size distribution was evident at the top section of the monolith, whilst the globule size distribution was generally unimodal. This study suggests that the polymerisation proceeds in a unique pattern correlating to the combined effect of internal heat build-up and external heating.
Keywords: Polymethacrylate; Scale-up; Polymerisation; Monolith; Electron Microscopy
1. Introduction
A monolith is a continuous phase consisting of a piece of highly porous organic or inorganic solid material, most commonly used as a chromatographic adsorbent. The most essential feature of this adsorbent is that all the mobile phase is forced to flow through its large pores. As a consequence, mass transport is driven by convection, reducing the long diffusion time required by particle-based supports [1]. Surprisingly, the number of reported applications of large-volume monolith adsorbents is fairly low despite the successful implementation of small-scale monolith adsorbents in biomolecule purification [2]. This is largely due to the extensive heat formed during polymerisation, which obstructs the attainment of a homogeneous pore size in large-scale monoliths; a ∆Tpoly of 8 °C reflects a shift of one order of magnitude in pore size [2]. As a consequence, rod-type monoliths thicker than 5 cm with homogeneous pore geometries have not been practically produced to date. In large-scale monolith preparation, increasing the column length increases the column back pressure during the chromatographic process. In organic-based monolith preparation, polymer shrinkage leads to a phenomenon known as 'wall channeling', in which the samples being purified effectively bypass the monolith uncaptured [3]. It is hypothesised that the gravitational pressure exerted during polymerisation might influence the pore size distribution of a long monolith column. Specifically, in a long rod-like monolith column (>1 m), the pore size at the upper end of the column is thought to be significantly larger than that at the bottom end. This paper reports a fundamental study of the monolith's pore morphology in an attempt to understand pore formation during a continuous bulk polymerisation process.
2. Materials and Methods
2.1 Chemicals
Ethylene glycol dimethacrylate (EDMA), Glycidyl methacrylate (GMA), Cyclohexanol and
AIBN (1% weight with respect to monomer)
2.2 Methodology
The monolith was prepared via free-radical co-polymerisation of EDMA and GMA monomers. About 10.5 ml of GMA and 4.5 ml of EDMA were combined with 15 ml of cyclohexanol, making a solution with a total volume of 30 ml. Then, 0.15 g of AIBN (1% weight with respect to monomer) was added to initiate the polymerisation reaction. The polymer mixture was sonicated for 15 min and sparged with N2 gas for 15 min to expel dissolved O2. The mixture was gently transferred into a test tube. The top end was sealed with a parafilm sheet and the tube placed in a water bath for 4 hours at a fixed temperature. The polymer resin was washed with methanol to remove all porogens and other soluble matter and placed in an incubator shaker at 40 °C, 40 rpm overnight. The polymer resin was then washed with deionised water and incubated again. The polymer resin was dried in the incubator shaker overnight at 50 °C. The polymer resin was sliced in the radial direction into three parts: top, middle and bottom. Each part was further sliced into three sections (outer, middle and inner) and observed under the Scanning Electron Microscope.
3. Results and Discussion
The morphology of the polymethacrylate monoliths was examined under the Scanning Electron Microscope (SEM). In this study, the pore and globule sizes of the monolith were measured.
The highest distribution of pore size on the top outer section of the monolith (Fig. 1) was 46.15% (3-4 µm), whilst that of the top middle part was 25% (1-5 µm) and that of the top inner part was 50% (2-3 µm). The highest distributions of globule size for the top outer, middle and inner parts were 63.64% (2-3 µm), 66.67% (2-3 µm) and 44.44% (4-5 µm), respectively.
For the middle section of the monolith, the highest distributions of pore size and globule size in the outer part were 50% (2-3 µm) and 54.55% (2-3 µm). The middle part recorded highest distributions of 33.33% (1-2 µm) and 66.67% (2-3 µm). In the middle inner part, the highest distribution of pore size was recorded at 1-2 µm with 62.5%, and that of globule size at 2-3 µm with 85.71%.
For the bottom section of the monolith, the highest distributions of pore size and globule size in the outer part were 62.5% (1-2 µm) and 37.5% (1-2 µm). Meanwhile, in the middle part, the highest percentages of distribution of pore size and globule size were similar at 25% (1-2 µm). In the inner part, the highest distributions of pore size and globule size were 75% (1-2 µm) and 71.43% (2-3 µm).
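The "highest distribution" figures quoted above are the modal bins of the measured size histograms. As a small illustrative sketch of that calculation (the measurements below are made-up placeholders, since the raw SEM data are not reproduced here), the modal 1-µm bin and its percentage share can be computed as follows:

```python
import numpy as np

def modal_bin(sizes_um, bin_width_um=1.0):
    """Return the most populated size bin and its share of all counts.

    `sizes_um` are individual pore (or globule) diameters measured from
    SEM images; the values used below are placeholders for illustration.
    """
    sizes = np.asarray(sizes_um)
    edges = np.arange(0.0, sizes.max() + bin_width_um, bin_width_um)
    counts, _ = np.histogram(sizes, bins=edges)
    k = counts.argmax()
    share = 100.0 * counts[k] / counts.sum()
    return (edges[k], edges[k + 1]), share

# Hypothetical pore diameters (µm) from one monolith section.
pores = [3.2, 3.5, 3.8, 3.1, 2.6, 2.9, 4.2, 3.4, 3.6, 1.8, 2.2, 3.3, 3.9]
(lo, hi), pct = modal_bin(pores)
print(f"Highest distribution: {pct:.2f}% in the {lo:.0f}-{hi:.0f} µm bin")
```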
Figure 1: (left) Pore morphology of the polymethacrylate monolith as viewed under the SEM; (right) sections of the monolith that were studied.
Table 1: Pore and globule size of each section of the monolith. Each size value was taken from the highest peak of the size distribution data. Size ranges of pores and globules are in µm; values in brackets are the percentage (%) of the size distribution.

          OUTER                 MIDDLE                INNER
          Pore      Globule     Pore      Globule     Pore      Globule
TOP       3-4 (46)  2-3 (64)    2-3 (25)  2-3 (67)    2-3 (50)  4-5 (44)
MIDDLE    2-3 (50)  2-3 (50)    1-2 (33)  2-3 (67)    1-2 (63)  2-3 (86)
BOTTOM    1-2 (63)  1-2 (38)    1-2 (25)  1-2 (25)    1-2 (75)  2-3 (71)
Monolithic structures have generally been shown to exhibit good mechanical stability, though the level of stability depends on the material of construction. The size distribution of pores within a polymethacrylate monolithic matrix may span up to two orders of magnitude. The morphology of polymethacrylate monoliths consists of interconnected globules that are partly aggregated. The pores in the polymer actually consist of irregular voids existing between clusters of globules, between globules of a given cluster, or even within globules. The pore size distribution reflects the internal organisation of both globules and clusters within the polymer matrix, and this depends mainly on the composition of the polymerisation mixture and the reaction conditions. This makes it possible to fine-tune the pore size.
The composition of the porogen and/or the ratio of monomers to porogen in the initial polymerisation mixture can change the pore characteristics of the monolith. In addition, temperature plays a crucial role in tailoring the pore size in thermally initiated polymerisation. This can be explained by the effect of temperature on the nucleation rate; a general rule is that higher reaction temperatures lead to monoliths with smaller pores, mostly as a result of an increased initiation rate.
4. Conclusion
The main findings of the study suggest that the globule size distribution is less influenced by the reaction conditions, whilst the pore size distribution is highly dependent on the polymerisation conditions. Generally, the pore size becomes less homogenously distributed with increasing monolithic column volume.
5. Acknowledgments
This project was funded by Universiti Malaysia Sabah, Malaysia, under an SGPUMS grant.
6. References
[1] Podgornik, A., Jancar, J., Mihelic, I., Barut, M., Strancar, A., (2010) Large Volume Monolithic Stationary Phases: Preparation, Properties, and Applications. Acta Chim. Slov. 57, 1-8.
[2] Urthaler, J., Schlegl, R., Podgornik, A., Strancar, A., Jungbauer, A., Necina, R., (2005) Application of monoliths for plasmid DNA purification: development and transfer to production. J. Chromatogr. A 1065, 93-106.
[3] Feng, Q., Yan, Q.-z., Ge, C.-c., (2009) Synthesis of Macroporous Polyacrylamide and Poly(N-Isopropylacrylamide) Monoliths via Frontal Polymerisation and Investigation of Pore Structure Variation of Monoliths. Chin. J. Polym. Sci. 27, 747-753.
Civil Engineering
13:00-14:30, December 16, 2012 (Meeting Room 5)
Session Chair:
056: Preliminary Study on the Use of Microwave Permittivity in the Determination of
Asphalt Binder Penetration and Viscosity
Ratnasamy Muniandy
Universiti Putra Malaysia
068: Durability Enhancement of the Strengthened Structural System Joints
Bassam A. Tayeh
Universiti Sains Malaysia
307: Statistical Evaluation of Redundant Steel Structural Systems Maximum Strength
and Elastic Energy
Amanullah Rasooli
Nagoya Institute of Technology
353: Risk Analysis of Cost Overrun in Multiple Design and Build Projects
Ramanathan Chidambaram
Kumpulan Liziz Sdn. Bhd.
Narayanan Sambu Potty
Universiti Teknologi Petronas
056
Preliminary study on the use of microwave permittivity in the
determination of asphalt binder penetration and viscosity
Ratnasamy Muniandy a, Md. Manniruzaman b, Salihudin Hassim c
a Professor/Dr, Department of Civil Engineering, Universiti Putra Malaysia, Malaysia. ratnas@eng.upm.edu.my
b PhD Graduate/Dr, Department of Civil Engineering, Universiti Putra Malaysia, Malaysia. maniruzaman@yahoo.com
c Associate Professor, Department of Civil Engineering, Universiti Putra Malaysia, Malaysia. hsalih@eng.upm.edu.my
Abstract
Asphalt binder analysis and grading have gone through many stages of change, starting from conventional tests such as penetration and viscosity to the recent use of the Superpave binder testing and grading system. Although these approaches have laid the foundation for the testing and grading of asphalt binders, the time frame involved and the cost of testing and equipment are considerable. As such, a new approach using a microwave technique was investigated to see whether any correlation could be established between penetration-viscosity values and measured microwave permittivity. This paper looks into the use of microwave frequencies from 8 to 12 GHz on binder specimens of various viscosities. An experimental matrix was developed using 3 different binders and 3 additives. Microwave permittivity measurements were used to determine the dielectric constants of the various blends at temperatures of 25, 30, 35, 40, and 45 °C. Several sets of nomographs were established based on the observed correlation, which can be used to determine the penetration and viscosity of asphalt binders from the measured dielectric constant of the binder material. Such an approach is expected to be more accurate, repeatable, contactless and, to some extent, non-destructive. It shows great potential for use as a binder grading method besides the conventional and SHRP methods.
Keywords: Asphalt; Rheology; Microwave; Permittivity; Dielectric Constant
1. Introduction
Billions of dollars are spent worldwide on the rehabilitation of roads with problems that are primarily binder related. Despite continuous efforts to ensure the quality, accuracy and grading of asphalt binders, which are expected to reduce the dominant road pavement problems such as rutting and fatigue cracking, those problems are still prevalent. The present conventional testing methods, such as penetration and viscosity, are very old procedures: the H. C. Bowen penetrometer is nearly 120 years old, and we still depend on it for binder penetration and viscosity values [1]. On the other hand, the price of asphalt binder has risen tremendously, which in turn has pushed up the cost of road construction sharply. Malaysia is no exception: the cost of maintaining Malaysian roads was more than RM 5 billion even 10 years ago [2]. It could be much higher now, and this trend seems to be a worldwide phenomenon. It is an accepted fact that we still spend considerable time, effort and money to test and grade our binders to ensure quality and precision. However, the world economy and people's ways of life are transforming at a rapid rate. As such, an alternative approach for grading and determining the properties of asphalt at push-button speed is highly warranted. A study was undertaken at Universiti Putra Malaysia to explore the potential of using microwave permittivity techniques to measure the penetration and viscosity of asphalt binders.
The microwave approach is a non-destructive procedure that can be performed in two ways. One is the free-space method, whereby two spot-focusing horn lens antennas are placed in the far-field zone; the other is carried out using waveguide methods. In both cases a Vector Network Analyzer (VNA) can be used. Musil and Zacek, who in 1986 worked on such microwave principles because they are contactless and non-destructive, found that this method may be suitable at high and low temperatures, in strong magnetic and electric fields, and even in hostile environments; however, they did not work on any binders [3].
The term permittivity relates to whether, and to what extent, a medium permits electrical lines of force; it depends on the material's ability to transmit or permit an electric field. Permittivity describes the interaction of a material with an electric field. The dielectric constant is equivalent to the relative permittivity (εr), i.e. the absolute permittivity (ε) relative to the permittivity of free space (ε0). The real part of the permittivity (εr′) is a measure of how much energy from an external electric field is stored in a material. The imaginary part of the permittivity (εr″) is called the loss factor and is a measure of how dissipative a material is to an external electric field. The imaginary part (εr″) is always greater than zero and is usually much smaller than εr′; for a low-loss material, εr″ is much smaller than εr′. These concepts formed the basis for much of Zoughi and Bakhtiari's research in 1990 [4].
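In symbols, the quantities described above are related as follows (standard definitions, restated here only for clarity; the loss tangent tan δ is the usual shorthand for the ratio of the loss factor to the stored-energy part):

```latex
\varepsilon = \varepsilon_0 \, \varepsilon_r , \qquad
\varepsilon_r = \varepsilon_r' - j\,\varepsilon_r'' , \qquad
\tan\delta = \frac{\varepsilon_r''}{\varepsilon_r'} \ll 1 \ \ \text{for a low-loss material.}
```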
2. Materials and Method
In this study the Microwave Free-Space Method (MWFSM) was used to determine the dielectric constants of asphalt binders, since the permittivity of any liquid or semi-solid medium varies with the dielectric constant of the material. To be able to do this, a new sample-preparation protocol was developed. Because binders must be tested at elevated temperatures, a high-temperature-resistant material was selected and tested. Teflon was found to be suitable for the study, since its melting point was determined to be approximately 327 °C, and work done by Musil and Zacek in 1986 showed that Teflon has a loss factor of nearly zero (0.00028 at 3 GHz). A set of accessory apparatus was designed and fabricated for use in the permittivity tests of the selected binders. Firstly, the binder specimen container size was determined to be not more than 150 mm by 150 mm, based on the VNA's two spot-focusing horn lens antenna setup. The binder thickness in the Teflon holder was chosen to be small compared with the wavelength of the microwave frequency used [5].
Final dimensions of 140 mm x 140 mm with a side-wall thickness of 5 mm were selected, and the Teflon specimen containers were made using a CNC machine, as shown in Figures 1 and 2 below. These dimensions are deemed appropriate and in line with the free-space testing procedure. The internal dimensions of the Teflon container are 100 mm x 100 mm x 5 mm. It was ensured that the specimen dimensions are greater than the minimum dimensions required for the type of VNA model selected.
Secondly, a Teflon sample-container holding rack, as shown in Figure 3, was designed and fabricated to allow 16 specimens to be conditioned in the temperature chamber at any time. A heat-resistant syringe with a capacity of 170 ml was fabricated from Teflon. The syringe was fitted with an injecting Teflon needle big enough (but smaller than 5 mm in diameter) to draw and inject hot asphalt binders to fill the binder specimen containers. The dimensions of the fabricated syringe are shown in Figure 4.
Figure 1: Fabricated Teflon Sample Container
Figure 2: Teflon Binder Specimen Containers
Figure 3: Teflon Specimen Holding Rack
Figure 4: Teflon Temperature-resistant Syringe
3. Sample preparation
Three binders, namely the 80-100 and 60-70 penetration-graded binders and the PG 76 binder, were selected along with three different additives for the modification of the neat asphalts. The objective of the binder modification was to create as many binder blends as possible with a wide range of viscosity. The commonly available additives Cellulose Oil Palm Fiber (COPF), #40 Tire Rubber Powder (TRP) and Ethylene-vinyl Acetate (EVA) were selected and blended with the neat asphalt binders. The COPF proportion ranged from 0.2% to 1.0% in increments of 0.2%, while the other additives ranged from 2% to 10% in increments of 2%. The blended asphalt binders were then tested over a frequency range of 8 to 12 GHz and a temperature range of 25 °C to 45 °C, for a total of 1,200 specimens. The complete experimental matrix and test plan are shown in Table 1 below.
Table 1: Experimental matrix for the conventional and microwave tests.

80-100 Bitumen
  Cellulose (COPF): Sample 1 (0%, Control), 2 (0.2%), 3 (0.4%), 4 (0.6%), 5 (0.8%), 6 (1.0%)
  EVA:              Sample 1 (0%, Control), 7 (2%), 8 (4%), 9 (6%), 10 (8%), 11 (10%)
  TRP:              Sample 1 (0%, Control), 12 (2%), 13 (4%), 14 (6%), 15 (8%), 16 (10%)

60-70 Bitumen
  Cellulose (COPF): Sample 17 (0%, Control), 18 (0.2%), 19 (0.4%), 20 (0.6%), 21 (0.8%), 22 (1.0%)
  EVA:              Sample 17 (0%, Control), 23 (2%), 24 (4%), 25 (6%), 26 (8%), 27 (10%)
  TRP:              Sample 17 (0%, Control), 28 (2%), 29 (4%), 30 (6%), 31 (8%), 32 (10%)

PG 76 Bitumen
  Cellulose (COPF): Sample 33 (0%, Control), 34 (0.2%), 35 (0.4%), 36 (0.6%), 37 (0.8%), 38 (1.0%)
  EVA:              Sample 33 (0%, Control), 39 (2%), 40 (4%), 41 (6%), 42 (8%), 43 (10%)
  TRP:              Sample 33 (0%, Control), 44 (2%), 45 (4%), 46 (6%), 47 (8%), 48 (10%)
4. Test Protocol
Preliminary investigations carried out prior to the actual tests revealed that X-band frequencies of 8 GHz to 12 GHz gave a good ballpark range. The tests were carried out at intervals of 500 MHz, capturing 1601 data points for each test, at 5 temperatures from 25 °C to 45 °C at intervals of 5 °C. The X-band corresponds to a wavelength range of 37.5 mm to 25 mm. The two horn lens antennas were set on a wooden table measuring 1.825 m x 0.915 m, with a wooden platform thickness of 25 mm, to avoid any unwanted reflection. A calibration and gating technique was used in the vector network analyzer to reduce measurement errors.
The binder-filled Teflon containers were heated to the specified temperatures mentioned above and tested individually by placing each specimen on a special holder between the VNA antennas at their focal point. Each specimen was tested at 5 different frequencies. The test setup is shown in Figure 5.
Figure 5: PVC Mounting Frame for Lens Horn Antenna (lens horn antenna, model 18820-F, on a base PVC frame)
5. Results and Discussion
A total of 1,200 specimens were tested, which generated 200 sets of data for each condition. The penetration values obtained at various temperatures were matched with the dielectric constants of each specimen measured through the permittivity principles using the VNA device. Figure 6 below shows the correlation between the dielectric constants of COPF-modified PG 76 binder tested at 8 GHz and the penetration values obtained conventionally. This preliminary work shows that binders tested at the 8 and 9 GHz frequencies displayed a good trend. As such, if an asphalt binder is tested for its dielectric constant through the microwave permittivity process, the established plots can be used to predict the penetration value of the binder and perhaps the penetration grade of the bitumen. A similar data match was done between the dielectric constant of the same blended binder and its viscosity, whereby the viscosity and grade can be determined. The established trend is shown in Figure 7 below.
It was observed that all of the neat and blended asphalt binders had dielectric constants in the range of 2.3000 to 2.8000. Since far too many plots were established for the binders tested at the 8 and 9 GHz frequencies and at temperatures from 25 °C to 45 °C, only sample plots are shown here. However, similar plots can be generated to incorporate both the penetration and viscosity values in one plot. Some sample plots are shown in Figures 8a to 8l.
Figure 6: Penetration vs. Dielectric Constant of COPF-PG 76 @ 8 GHz
Figure 7: Viscosity vs. Dielectric Constant of COPF-PG 76 @ 8 GHz
Figure 8a-l: Binder penetration and viscosity grading charts (penetration and viscosity vs. dielectric constant, plotted for temperatures of 25-45 °C):
(a) 80-100 grade bitumen, neat, 8 GHz; (b) 80-100 grade bitumen, neat, 9 GHz;
(c) 60-70 grade bitumen, neat, 8 GHz; (d) 60-70 grade bitumen, neat, 9 GHz;
(e) PG 76 grade bitumen, neat, 8 GHz; (f) PG 76 grade bitumen, neat, 9 GHz;
(g) 80-100 grade bitumen, 0.2% COPF, 8 GHz; (h) 80-100 grade bitumen, 0.2% COPF, 9 GHz;
(i) 60-70 grade bitumen, 0.2% COPF, 8 GHz; (j) 60-70 grade bitumen, 0.2% COPF, 9 GHz;
(k) PG 76 grade bitumen, 0.2% COPF, 8 GHz; (l) PG 76 grade bitumen, 0.2% COPF, 9 GHz.
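The charts in Figures 6 to 8 are linear fits of penetration (or viscosity) against dielectric constant at a fixed frequency. As a rough illustration of how such a nomograph could be applied in software, the following sketch fits the same kind of linear model and then inverts it for prediction; the calibration numbers are placeholders for illustration, not measured values from this study.

```python
import numpy as np

# Hypothetical calibration pairs for one binder blend at one frequency and
# temperature: dielectric constants and conventionally measured penetrations.
dielectric = np.array([2.45, 2.50, 2.55, 2.60, 2.65])
penetration = np.array([62.0, 70.5, 79.0, 88.0, 96.5])  # units: 1/10 mm

# Least-squares fit of the linear trend used in the paper's charts.
slope, intercept = np.polyfit(dielectric, penetration, 1)

# Coefficient of determination, as quoted (R^2) on each chart.
pred = slope * dielectric + intercept
ss_res = np.sum((penetration - pred) ** 2)
ss_tot = np.sum((penetration - penetration.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Predict the penetration for a newly measured dielectric constant.
measured_dc = 2.58
estimated_pen = slope * measured_dc + intercept
print(f"pen = {slope:.1f} * DC + {intercept:.1f}, R^2 = {r_squared:.4f}")
print(f"Estimated penetration at DC {measured_dc}: {estimated_pen:.1f} (1/10 mm)")
```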
6. Conclusion and Recommendation
The preliminary investigation of the potential use of microwave permittivity in the determination of penetration and viscosity proved positive. For any tested binder permittivity, the penetration and viscosity values can be found using the established nomographs. A similar effort can be extended in the future to determine the rheological properties and grading of asphalt binders. Besides that, additional research can be undertaken to grade bitumen. Such an approach may allow one to determine binder properties and grades at the push of a button, since the entire test took less than 10 minutes.
7. Acknowledgment
The authors wish to thank Universiti Putra Malaysia (UPM), the Ministry of Higher
Education, Malaysia for making the grant available for this research, and Universiti
Teknologi MARA for assisting in the areas of microwave principles.
8. References
[1] Roberts, F. L., Kandhal, P. S., Brown, E. R., Lee, D. Y., & Kennedy, T. W. (1996). Hot mix asphalt materials, mixture design, and construction (2nd ed.). Lanham, Maryland: NAPA Education Foundation.
[2] EPU (2001). Eighth Malaysian Plan, 2001-2005. Economic Planning Unit, Putrajaya, Malaysia.
[3] Musil, J., & Zacek, F. (1986). Microwave Measurements of Complex Permittivity by Free Space Methods and Their Applications (Translation). Amsterdam - Oxford - New York - Tokyo: Elsevier.
[4] Zoughi, R., & Bakhtiari, S. (1990). Microwave nondestructive detection and evaluation of disbonding and delamination in layered-dielectric slabs. 39(6), 1059-1063.
[5] Ida, N. (1992). Microwave NDT. Dordrecht / Boston / London: Kluwer Academic Publishers.
068
Durability enhancement of the strengthened structural system joints
Bassam A. Tayeh a, B. H. Abu Bakar b, M. A. Megat Johari c
a Engineering Division, Islamic University of Gaza, Gaza, Palestine; School of Civil Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia. btayeh@iugaza.edu
b School of Civil Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia. cebad@eng.usm.my
c School of Civil Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia. cemamj@eng.usm.my
Abstract
The number of existing structures under repair and rehabilitation has increased extensively over the past two decades; these structures typically require performance enhancements, including durable and safe repair and strengthening. The experimental program aimed to investigate the bond strength at the joint surfaces between conventional concrete substrate as old concrete and reactive powder concrete (RPC) as new overlay concrete. The pull-off test was used to quantify the bond strength in direct tension. Different surface roughnesses were used for the old concrete. The results clearly showed that RPC bonds excellently to the old concrete at an early age; as a result, all failures occurred through the old concrete, regardless of its surface roughness. RPC could therefore be used as an excellent overlay concrete for increasing the durability at the joint surfaces of a strengthened structural system.
Keywords: Reactive powder concrete (RPC); Bond strength; Overlay concrete; Pull off test;
Old concrete; Durability.
1. Introduction
Concrete overlays are widely used to repair or strengthen concrete structures such as pavements, industrial floors and bridge decks. The technique consists of replacing the damaged concrete, or increasing the cross section of the existing concrete structure, with the repair material, which is fully bonded to the substrate concrete. However, concrete overlays experience a serious performance problem: although many studies have been carried out to enhance the durability of concrete overlays, failures are often observed. The bond strength at the joint surfaces between the old concrete and the new overlay concrete generally presents a weak link in repaired structures [1]. Thin bonded concrete repair overlays are among the promising rehabilitation approaches to extend the service life of old concrete structures. To be effective, a good bond must be developed between the old concrete and the new overlay concrete, for better resistance of the concrete structure against the penetration of harmful substances [2], [3].
Reactive powder concrete (RPC) has remarkable flexural strength and very high ductility; its ductility is greater than 250 times that of conventional concrete [4]. Its extremely low porosity gives it low permeability and high durability, making it potentially suitable for use in a new technique for retrofitting reinforced concrete structures [5]. RPC could be used as a repair material because a strong mechanical bond is formed between the old concrete and the RPC overlay [6].
This paper describes recent research efforts to assess the bond strength at the joint surfaces between conventional concrete (CC) as old concrete and reactive powder concrete (RPC) as new overlay concrete, using the pull-off test to quantify the bond strength in direct tension. Different surface roughnesses were used for the conventional concrete.
2. Experimental programme
2.1 (CC) substrate and (RPC) properties
The mix design of the conventional concrete (CC) substrate used as old concrete in this study ensures that an average characteristic compressive strength of 45 MPa is achieved at 28 days. The CC used contains Type-I ordinary Portland cement, river sand with a fineness modulus of 2.4, coarse aggregate (granite) with a maximum size of 12.5 mm, a water-to-cement ratio of 0.5 and a slump value of between 150 and 180 mm. The mix proportions of the CC substrate are presented in Table 1.
The mix proportions of the RPC, which was used as the repair material, are given in Table 2 according to [7]. The RPC contains Type-I Portland cement, densified silica fume, well-graded sieved and dried mining sand, high-strength micro-steel fiber and polycarboxylate ether based (PCE) superplasticizer. The steel fiber used has a fiber length of 10 mm, a fiber diameter of 0.2 mm, and an ultimate tensile strength of 2500 MPa. The RPC achieved an average 28-day cube compressive strength, fcu, of 170 MPa.
Table 1: (CC) substrate mix design
Item               Mass (kg/m3)   Remark
OPC                400            Type I
Coarse Aggregate   930            Max 12.5 mm
Fine Aggregate     873
Water              200
Superplasticizer   4

Table 2: (RPC) mixture proportions
Item               Mass (kg/m3)
OPC                768
Silica Fume        192
Sieved Sand        1140
Micro-Steel Fiber  157
Superplasticizer   40
Free Water         144
2.2 Specimen preparation
Each of the tested specimens comprised two different materials: the (CC) as a substrate and the (RPC) as a repair material. The fresh (CC) was covered and left to set in its mould for 24 hours after casting. After 24 hours, the (CC) specimens were demoulded, cleaned, and cured for another two days in a water curing tank. At the age of three days, the (CC) substrate specimens were taken out of the water tank for surface preparation. In this study, the experimental parameter is the surface texture of the substrate. Three different types of surface were prepared: (i) as-cast (AC), i.e., without surface preparation, as reference; (ii) wire-brushed (WB), without exposing the aggregates; and (iii) sand-blasted (SB), to purposely expose the aggregates.
Figure 1 shows the different surfaces of the (CC) substrate specimens after undergoing surface preparation. The (CC) specimens were further cured in a water tank until the age of 28 days from the casting date. At the age of 28 days, the (CC) substrate specimens were left to dry for 60 days.
Before casting the (RPC), the surfaces of the (CC) substrate specimens were moistened for 10 minutes and wiped dry with a damp cloth. The (CC) substrate specimens were then placed into steel moulds. Mixing of the (RPC) was carried out using a pan-type mixer, and the moulds were then filled with the (RPC). The composite specimens were steam cured for 48 hours at a temperature of 90 °C; after the steam curing, they were cured in a water tank. The tests were performed on the 3rd and 7th day after casting the composite specimens.
Figure 1: Conventional concrete (CC) substrate specimens with different surface textures for the pull-off test: a. as-cast, b. sand-blasted, c. wire-brushed.
2.3 Pull-off test
The pull-off test method is a common tensile test method used to assess the adhesion between a repair overlay and the existing concrete substrate. Following the ASTM D4541 standard [8], the pull-off test was chosen for two reasons: it evaluates the bond strength in tension at the interface, and it can be carried out in situ [9].
The adopted geometry for the (CC) substrate specimens was unreinforced concrete slabs (300 mm × 300 mm × 80 mm thick). About 10 mm of (RPC) was cast as an overlay on the (CC) substrate. A core with a diameter of 75 mm was drilled into the composite specimens and extended 15 mm beyond the interface into the (CC) substrate. A circular steel disc was bonded to the core surface using epoxy glue, as shown in Figure 2a, and a tension force was applied to the disc, as shown in Figure 2b.
The pull-off bond strength (Spo) was calculated by dividing the tensile (pull-off) force at failure (FT) by the area of the fracture surface (Af), as shown in Eq. (1):

    Spo = FT / Af    (1)

The pull-off test provides the most conservative bond measurement because it is not influenced by friction at the substrate surface.
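As a quick worked example of Eq. (1) for the geometry used here (a 75 mm core), a failure load of 9.6 kN corresponds to roughly 2.17 MPa, matching specimen AC1 in Table 3. The sketch below reproduces that arithmetic; it assumes the fracture surface area Af is the circular cross-section of the drilled core, which is the usual reading of Eq. (1) for this setup.

```python
import math

def pull_off_strength(force_kn: float, core_diameter_mm: float = 75.0) -> float:
    """Pull-off bond strength Spo = FT / Af (Eq. 1), returned in MPa."""
    area_mm2 = math.pi * (core_diameter_mm / 2.0) ** 2  # fracture area Af
    force_n = force_kn * 1e3                            # kN -> N
    return force_n / area_mm2                           # N/mm^2 == MPa

# Specimen AC1 from Table 3: FT = 9.6 kN on a 75 mm core -> about 2.17 MPa.
print(f"Spo = {pull_off_strength(9.6):.2f} MPa")
```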
Figure 2: Composite RPC/(CC) substrate specimens for the pull-off test: a. coring and gluing the disc, b. applying the tension force, c. failure occurring in the (CC) substrate.
3. Discussion of results
The pull-off test results are shown in Table 3. In all specimens and at the different test ages, failure occurred in the (CC) substrate, indicating a strong bond with the (RPC). This result is significant in view of the following points: (i) the (CC) substrate used in this study was of high quality, with a compressive strength of 45 MPa at 28 days; (ii) this study did not use any bonding agent, which has been used in other studies to improve the bond between the old concrete substrate and the new concrete [10-13]; and (iii) different types of substrate surface roughness, namely as-cast, wire-brushed, and sand-blasted surfaces, were used, with low, medium, and high roughness parameters, respectively.
All these points considered, the results of the pull-off tests prove that the (RPC) can interlock and form a strong mechanical bond with the (CC), as shown in Figure 2c, because all failures in the pull-off test occurred in the (CC) substrate, regardless of substrate roughness. Therefore, the bond between the (RPC) and the (CC) substrate was stronger than the tensile strength of the (CC) substrate. This result indicates that bonding agents would not be needed if (RPC) is used as the overlay material. The pull-off bond strength reported in a number of studies is zero for the as-cast substrate surface [14, 15]. In this study, however, the bond strength for the as-cast substrate surface was stronger than the tensile strength of the (CC) substrate and similar to that for the sand-blasted surface. The pull-off bond strengths were 2.20, 2.33, and 2.26 MPa at a test age of three days, and 2.27, 2.23, and 2.22 MPa at a test age of seven days, for the as-cast, wire-brushed, and sand-blasted surfaces, respectively.
The ACI Concrete Repair Guide specifies minimum acceptable bond strength ranges in direct tension of 0.5-1.0 MPa and 1.0-1.7 MPa at ages of one and seven days, respectively. This guide is useful in the selection of appropriate repair materials [16]. Based on this guide and on [17], as shown in Table 4, the (RPC) surpasses a typical repair material, because the bond strengths obtained in this study are all above 2.0 MPa. Thus, the (RPC) in all pull-off tests in this study can be categorized as "excellent", since the bond strength is stronger than the tensile strength of the (CC) substrate [18].
Table 3: Pull-off bond strength and failure mode.

Test results at 3 days:
Surface Treatment      Sample   FT (kN)   Spo (MPa)   Failure Mode
As-cast surface        AC1      9.6       2.17        Substrate
                       AC2      9.1       2.06        Substrate
                       AC3      10.5      2.38        Substrate
                       Mean 2.20, COV 7.29 (Excellent)
Wire-brushed surface   WB1      11        2.49        Substrate
                       WB2      9         2.04        Substrate
                       WB3      10.9      2.47        Substrate
                       Mean 2.33, COV 10.94 (Excellent)
Sand-blasted surface   SB1      8.9       2.01        Substrate
                       SB2      10.2      2.31        Substrate
                       SB3      10.9      2.47        Substrate
                       Mean 2.26, COV 10.15 (Excellent)

Test results at 7 days:
Surface Treatment      Sample   FT (kN)   Spo (MPa)   Failure Mode
As-cast surface        AC1      9.6       2.17        Substrate
                       AC2      10.4      2.35        Substrate
                       AC3      10.1      2.29        Substrate
                       Mean 2.27, COV 4.03 (Excellent)
Wire-brushed surface   WB1      8.9       2.01        Substrate
                       WB2      9.7       2.19        Substrate
                       WB3      11        2.49        Substrate
                       Mean 2.23, COV 10.74 (Excellent)
Sand-blasted surface   SB1      9.2       2.08        Substrate
                       SB2      9.8       2.22        Substrate
                       SB3      10.4      2.35        Substrate
                       Mean 2.22, COV 6.12 (Excellent)

Bond quality rated according to Table 4 [ACI Concrete Repair Guide] [16] and [17].
Table 4: Quantitative bond quality in terms of bond strength [17]
Bond Quality    Bond Strength (MPa)
Excellent       ≥ 2.1
Very Good       1.7 - 2.1
Good            1.4 - 1.7
Fair            0.7 - 1.4
Poor            0 - 0.7
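The classification in Table 4 is easy to apply programmatically. The small sketch below encodes the thresholds exactly as tabulated (with the handling of values falling exactly on the 0.7/1.4/1.7/2.1 MPa boundaries assumed, since the table does not specify it):

```python
def bond_quality(spo_mpa: float) -> str:
    """Classify a pull-off bond strength according to Table 4 [17]."""
    if spo_mpa >= 2.1:
        return "Excellent"
    if spo_mpa >= 1.7:
        return "Very Good"
    if spo_mpa >= 1.4:
        return "Good"
    if spo_mpa >= 0.7:
        return "Fair"
    return "Poor"

# Mean 3-day results from Table 3: all classify as Excellent.
for s in (2.20, 2.33, 2.26):
    print(s, bond_quality(s))
```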
4. Conclusions
The following conclusions can be drawn from the present study.
The results of the pull-off test showed that failure occurred in the substrate in all specimens at an early age (the third and seventh days). These results prove that the (RPC) bonded very well with the old concrete, regardless of the type of surface roughness. All pull-off test results could be classified as "excellent".
The results indicate that RPC enhances the carrying capacity and stiffness of repaired structures and blocks harmful materials at the early stage of repair. As a result, high durability is achieved for future repair systems, and the service life of the structure is extended. Therefore, the number and extent of interventions are minimized to the lowest possible level, and only preventative maintenance will be necessary; "repairing the repairs", a problem that often exists in most repair systems, will no longer be needed. This high early-age bond strength between the (RPC) and the (CC) substrate can be exploited in several important operations for repairing and strengthening concrete structures, including bridges and pavements. It is particularly important for rehabilitation processes that require fast, strong, and high-quality bonding at the early stage of applying the repair material. A particular application is the rehabilitation of roads and bridges, where the rehabilitation process should be finished as soon as possible to reduce the time that traffic lanes are closed to the public. This fast repair process will greatly benefit bridge users.
5. Acknowledgments
The authors gratefully acknowledge the Universiti Sains Malaysia for providing the financial
support for this study.
6. References
[1] A. Momayez, M.R. Ehsani, A.A. Ramezanianpour, H. Rajaie, Comparison of methods for evaluating bond strength between concrete substrate and repair materials, Cement and Concrete Research, 35 (4), pp. 748-757, 2005.
[2] N. Gorst, L. Clark, Effects of thaumasite on bond strength of reinforcement in concrete, Cement and Concrete Composites, 25 (8), pp. 1089-1094, 2003.
[3] B.A. Tayeh, B.H. Abu Bakar, M.A. Megat Johari, Mechanical properties of old concrete - UHPFC interface, in: International Conference on Concrete Repair, Rehabilitation and Retrofitting III (ICCRRR 2012), 02-05 September 2012, Cape Town, South Africa, 2012, pp. 1035-1040.
[4] P. Richard, M. Cheyrezy, Composition of reactive powder concretes, Cement and Concrete Research, 25 (7), pp. 1501-1511, 1995.
[5] F.J. Alaee, Retrofitting of reinforced concrete beams with CARDIFRC, Journal of Composites for Construction, 7 (3), pp. 174-186, 2003.
[6] B.A. Tayeh, B.H. Abu Bakar, M.A. Megat Johari, Y.L. Voo, Utilization of ultra high performance fibre concrete (UHPFC) for rehabilitation - a review, in: The 2nd International Conference on Rehabilitation and Maintenance in Civil Engineering (ICRMCE-2), 8-10 March 2012, Solo, Indonesia, 2012, pp. 533-546.
[7] B.A. Tayeh, B.H. Abu Bakar, M.A. Megat Johari, Characterization of the interfacial bond between old concrete substrate and ultra high performance fiber concrete repair composite, Materials and Structures, DOI 10.1617/s11527-012-9931-1, 2012.
[8] ASTM D4541, Standard Test Method for Pull-Off Strength of Coatings Using Portable Adhesion Testers, American Society for Testing and Materials, West Conshohocken, PA 19428-2959, United States, 1992.
[9] K.R. Hindo, In-place bond testing and surface preparation of concrete, Concrete International, 12 (4), pp. 46-48, 1990.
[10] G. Li, A new way to increase the long-term bond strength of new-to-old concrete by the use of fly ash, Cement and Concrete Research, 33 (6), pp. 799-806, 2003.
[11] G. Xiong, J. Liu, G. Li, H. Xie, A way for improving interfacial transition zone between concrete substrate and repair materials, Cement and Concrete Research, 32 (12), pp. 1877-1881, 2002.
[12] G. Li, H. Xie, G. Xiong, Transition zone studies of new-to-old concrete with different binders, Cement and Concrete Composites, 23 (4), pp. 381-387, 2001.
[13] R. Abbasnia, M. Khanzadi, J. Ahmadi, Mortar mix proportions and free shrinkage effect on bond strength between substrate and repair concrete, in: CRC, 2008, pp. 353.
[14] P.M.D. Santos, E.N.B.S. Julio, V.D. Silva, Correlation between concrete-to-concrete bond strength and the roughness of the substrate surface, Construction and Building Materials, 21 (8), pp. 1688-1695, 2007.
[15] E.N.B.S. Julio, F.A.B. Branco, V.D. Silva, Concrete-to-concrete bond strength. Influence of the roughness of the substrate surface, Construction and Building Materials, 18 (9), pp. 675-681, 2004.
[16] G. Chynoweth, R.R. Stankie, W.L. Allen, R.R. Anderson, W.N. Babcock, P. Barlow, J.J. Bartholomew, G.O. Bergemann, R.E. Bullock, F.J. Constantino, Concrete Repair Guide, ACI Committee, Concrete Repair Manual, 546, pp. 287-327, 1996.
[17] M.M. Sprinkel, C. Ozyildirim, Evaluation of high performance concrete overlays placed on Route 60 over Lynnhaven Inlet in Virginia, Virginia Transportation Research Council, Charlottesville, 2000, pp. 1-11.
[18] B.A. Tayeh, B.H. Abu Bakar, M.A. Megat Johari, Y.L. Voo, Mechanical and permeability properties of the interface between normal concrete substrate and ultra high performance fiber concrete overlay, Construction and Building Materials, 36, pp. 538-548, 2012.
307
Statistical Evaluation of Redundant Steel Structural Systems Maximum
Strength and Elastic Energy
Amanullah Rasooli a,*, Hideki Idota b
a Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya 466-8555, Building #24, Japan. lalistani@yahoo.com
b Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya 466-8555, Building #24, Japan. idota@nitech.ac.jp
Abstract
In this investigation, the failure of basic redundant steel structural systems is studied. By assuming that each member of the system has brittle, semi-brittle, or perfectly plastic properties, the statistical behavior of perfectly brittle, semi-brittle and perfectly plastic systems is evaluated, and the effects of member strength variability on the system maximum strength and elastic energy are investigated. By carrying out Monte Carlo simulation (MCS), the maximum strength and energy of the redundant steel structural systems are evaluated. The member strength is defined as a random variable, and a normal distribution is selected to represent it.
Keywords: Monotonic load, Maximum Strength, Energy, Deformation, and Variability
1. Introduction
Two basic, simple categories, non-redundant and redundant, are commonly used to build complicated structural systems. The basic non-redundant system is the series system, in which the failure probability of one component equals the failure probability of the whole system. The basic redundant system is the parallel structural system; in a parallel system, one or more components reaching the limit state does not necessarily indicate a system failure, as shown in Fig. 1. A parallel system is also called a redundant system. The mechanism differences between a parallel system with brittle components and one with ductile components are significant. The state of the art report on connection performance (FEMA-355D, September 2000) on damaged steel structural systems shows that semi-brittle failure is an important behavior of steel structural systems. In related work, Idota and Ito (2003) investigated the brittle failure of a parallel steel structure system, and Hohenbichler, Gollwitzer and Rackwitz (1981) studied parallel structural systems. Hendawi and Frangopol (1994) researched system reliability and redundancy in evaluations of structural design. Redundancy of structural systems was the subject of Kou-Wei Liao's master's thesis (2004). Narashimhan (2007) examined system reliability and robustness. Kirkegaard and Sorensen (2011) presented a paper about the ductile behavior of structures, and in 1994 Hendawi studied structural systems. Building on the abovementioned literature, this paper presents a comprehensive study of the characteristics and failure of parallel steel structural systems as samples of basic redundant systems. From the central limit theorem (CLT), the system strength follows a normal distribution; the effects of the member strength coefficient of variation (CoV) on the maximum strength and elastic energy of parallel steel structural systems are investigated. Although random variables can have many types of distributions, only a normal distribution is used here, and uncorrelated member strengths are considered. The Monte Carlo simulation method is used with 10,000 trials. The results demonstrate that a multiplicity of system effects is possible, and the effects depend on the abovementioned factors. The primary purpose of this study is the assessment of the characteristics of parallel steel structures, as basic redundant systems, under a monotonic load, to obtain results that will contribute to the development of more rational structural system designs and evaluations of mechanical properties.
1.1 Parallel steel structural systems and possible states of member behavior
Parallel steel structural systems with N elements can have different failure paths. The following figures show the force-deformation characteristics of the members and of the parallel steel structural systems.

Fig. 1 Parallel steel structural system

Fig. 3 Force-deformation characteristics of a parallel steel structural system member: (a) perfectly ductile, (b) perfectly brittle, (c) semi-brittle
Fig. 3 Characteristics of parallel steel structural systems (N = 3): (a) perfectly brittle system, (b) semi-brittle system, (c) perfectly ductile system
In Fig. 4, Rmax represents the ultimate strength, Ry represents the yield strength, Rres represents the residual strength, dmin represents the displacement of the system corresponding to the yield strength, and dmax is the displacement of the system corresponding to the ultimate strength of the system.

Fig. 4 Ideal force-deformation curve of a parallel steel structure system with two members

When the force-deformation diagram of a system is given (Fig. 4), the elastic energy of the system can be calculated as the area under the linear elastic branch of the curve, E = Ry dmin / 2.
2. Mathematical Equations and Numerical Investigations
Parallel systems with N members having normally distributed random tensile strengths are considered. Herein, the member strength and member deformation are non-dimensionalized (Fig. 3), and the standard deviation of the member strength is assumed to take the deterministic values 0, 0.1, 0.2, 0.3, 0.4 and 0.5. In this study of basic redundant steel structural systems with N ductile, brittle and semi-brittle members, a system fails when all of its components fail. The maximum strength of a system with ductile members is given by the sum of the member strengths:

    Rmax = R1 + R2 + ... + RN

The mean value and variance of the system strength are then given by

    μRmax = N μRi,   σ²Rmax = N σ²Ri   (uncorrelated members)

From the CLT, the system strength is normally distributed and independent of the distribution of the individual components if N is sufficiently large. Then, the CoV is as follows:

    νRmax = σRmax / μRmax = νRi / √N

According to the CLT, the perfectly ductile system maximum strength distribution function is

    F(x) = Φ((x − μRmax) / σRmax)

where x is the random variable and Φ is the standard normal distribution function.
The results of the system assessment are shown in the following figures. The effects of the member strength CoV are plotted for parallel steel structural systems with N = 2, 4, 6, 8 and 10 members, where N is the number of parallel steel structural system members. The member linking the components is perfectly rigid and constrained to remain horizontal, so that the axial deformations of the components are equal; the force-deformation characteristics of the members are as shown in Fig. 3. A component fails if its axial force reaches its resistance level. The failure may be brittle, in which case the load supported prior to failure by the failed member is redistributed to the other members, or the failure may be ductile, in which case the failed member continues to support a load equal to its resistance. The resistance of the system statistically depends on the resistances of the members, so variation of the member strength CoV directly affects the system. The effects of the member strength coefficient of variation (CoV) on the system variability can be found from the numerical assessments of the parallel steel structural systems. The cases of ductile, brittle, and semi-brittle behavior are presented in this order in the figures below.
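To make the simulation procedure concrete, the following minimal sketch (an illustration consistent with the description above, not the authors' code) estimates the mean and CoV of the system maximum strength for perfectly ductile and perfectly brittle parallel systems with i.i.d. normal member strengths of mean 1 (non-dimensionalized). For equal-stiffness brittle members under growing displacement, the system strength takes the classical fiber-bundle form used below.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_parallel_system(n_members: int, member_cov: float,
                             n_trials: int = 10_000):
    """Monte Carlo estimate of the maximum strength of a parallel system.

    Returns (mean, CoV) of the system maximum strength for the perfectly
    ductile and perfectly brittle idealizations, respectively.
    """
    r = rng.normal(1.0, member_cov, size=(n_trials, n_members))
    r = np.clip(r, 0.0, None)  # strengths cannot be negative

    # Ductile members keep carrying their resistance after yielding, so
    # the system maximum strength is simply the sum of member strengths.
    ductile = r.sum(axis=1)

    # Brittle members (equal stiffness) fail one by one as displacement
    # grows; with strengths sorted ascending, the load carried just before
    # the k-th weakest member fails is R_(k) times the number of survivors.
    r_sorted = np.sort(r, axis=1)
    survivors = np.arange(n_members, 0, -1)  # N, N-1, ..., 1
    brittle = (r_sorted * survivors).max(axis=1)

    stats = lambda x: (x.mean(), x.std() / x.mean())
    return stats(ductile), stats(brittle)

for cov in (0.1, 0.3, 0.5):
    (du_mean, du_cov), (br_mean, br_cov) = simulate_parallel_system(10, cov)
    print(f"member CoV {cov:.1f}: ductile mean {du_mean:.2f} (CoV {du_cov:.3f}), "
          f"brittle mean {br_mean:.2f} (CoV {br_cov:.3f})")
```

Running this reproduces the trends discussed below: the ductile system mean stays near N regardless of the member CoV, while the brittle system mean strength drops as the member CoV grows.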
2.1 Effects of Member Strength CoV on the Maximum Strength of the System
Fig. 5 Perfectly ductile system maximum strength: (a) mean maximum strength μRmax and (b) maximum strength CoV νRmax vs. member strength CoV νRi, for N = 2, 4, 6, 8 and 10 (with Eqs. (10) and (11))
Fig. 6 Perfectly brittle system maximum strength: (a) mean maximum strength and (b) maximum strength CoV vs. member strength CoV, for N = 2, 4, 6, 8 and 10
Fig. 7 Semi-brittle system maximum strength: (a) mean maximum strength and (b) maximum strength CoV vs. member strength CoV, for N = 2, 4, 6, 8 and 10
where νRi is the member strength coefficient of variation, νRmax is the system maximum strength coefficient of variation, and μRmax is the system maximum strength mean value.
This section presents the results found from the figures that show the effects of the member strength CoV on the maximum strength of a parallel steel structure system. Fig. 5(a) shows that the maximum strength of the perfectly plastic system does not change with the member strength CoV, while Fig. 5(b) indicates that increasing the member strength CoV increases the variability of the system maximum strength. From Fig. 6(a), it is found that when the member strength CoV increases, the perfectly brittle system maximum strength decreases; from Fig. 6(b), increasing the member strength CoV increases the variability of the system maximum strength. Figs. 7(a) and 8(a) show that when the member strength CoV increases, the maximum strength of the semi-brittle system and of the combination system decreases, and increasing the member strength CoV also increases the CoV of the system maximum strength. A comparison of the figures indicates that when the member strength CoV increases, the system maximum strength decreases for the brittle, semi-brittle, and combination systems. If the system has a large number of elements, the maximum strength decreases more; if the components of the system are few, the maximum strength decreases less. When the system is plastic, the relation between the member strength CoV and the system strength CoV is likely to be linear; for brittle systems, the relation between the member strength CoV and the system variability tends to be curved.
2.2 Effects of Member Strength CoV on the Elastic Energy of the System
Fig. 3 Effects of the member strength CoV on the perfectly plastic system's elastic energy: (a) mean elastic energy and (b) elastic energy CoV vs. member strength CoV, for N = 2, 4, 6, 8 and 10
Fig. 4 Effects of the member strength CoV on the perfectly brittle system's energy: (a) mean energy and (b) energy CoV vs. member strength CoV, for N = 2, 4, 6, 8 and 10 (with Eq. (x))
Fig. 5 Effects of the member strength CoV on the semi-brittle system's elastic energy: (a) mean elastic energy and (b) elastic energy CoV vs. member strength CoV, for N = 2, 4, 6, 8 and 10
where: δRi = the member strength coefficient of variation, δE = the coefficient of variation of the system elastic energy, and μE = the system elastic energy mean value.
From the observations of the effects of the member strength CoV on the basic redundant systems,
the following is found. Fig. 3(a) shows that the elastic energy of the perfectly plastic system
changes as the member strength CoV increases or decreases. The member strength CoV affects the
variability of the system elastic energy, and the relation between the member strength CoV and the
CoV of the system elastic energy is likely to be linear for the perfectly plastic and perfectly
brittle systems (Fig. 3(b) and Fig. 4(b)). When the number of system components increases, the
variability of the system elastic energy decreases (Fig. 3(b)). For a perfectly brittle system
(Fig. 4(a)), the system energy does not change with the member strength CoV, but when the member
strength CoV increases, the variability of the perfectly brittle system energy increases; the
variability is smaller when the number of structural members is larger (Fig. 4(b)). For a
semi-brittle system, the system elastic energy increases as the member strength CoV increases and
decreases as it decreases (Fig. 5(a)). The variability of the system energy increases when the
member strength CoV increases (Fig. 6(b)): when the number of members is small, the variability is
small; when the number of members is higher, the variability is also higher. A comparison of
Fig. 3(b) and Fig. 4(b) indicates that the variability of the brittle system is greater than that
of a plastic system when the member strength CoV increases.
3. New Equations
For estimation of the perfectly ductile system maximum strength and its variability, the following
equations are proposed, respectively. The following equations were derived for estimating the
perfectly brittle system energy and its variability, respectively. [Equations not recoverable from
the source.]
4. Conclusion
After investigation and study of parallel steel structural systems, the following conclusions
were obtained:
(1) The effects of the member CoV on the system energy and maximum strength are not the same for
brittle, ductile, and semi-brittle systems.
(2) The maximum strength and energy of the brittle system decrease as the member strength CoV
increases. For the ductile system, the maximum strength does not change.
(3) Increasing the member strength CoV increases the variability of the system maximum strength
and energy.
(4) The number of members affects the system strength and energy.
(5) The CoV of the member strength, the maximum strength of the system, and the elastic energy are
very important. Current design and evaluation specifications can be improved by including them in
the design and evaluation of steel structural systems.
(6) New models for estimating the maximum strength and elastic energy of parallel steel structural
systems are proposed.
The results presented here supplement the existing statistical evaluations of parallel steel
structural systems and other types of steel structural systems.
5. References
[1] R. C. Hibbeler, Mechanics of Materials, Fourth Edition, New Jersey, 1999.
[2] FEMA-355D, State of the Art Report on Connection Performance, September 2000.
[3] FEMA-350, Recommended Seismic Design Criteria for New Steel Moment-Frame Buildings, Program to
Reduce the Earthquake Hazards of Steel Moment-Frame Structures, 2000.
[4] Hideki Idota and Takanori Ito, Reliability of Building Structures Considering Random
Occurrence of Brittle Failure, May 2003.
[5] Samer Hendawi and Dan M. Frangopol, System reliability and redundancy in structural design and
evaluation, Structural Safety, Vol. 16, 1994, pp. 47-71.
[6] Kuo-Wei Liao, Redundancy in steel moment frame systems under seismic excitations, M.S. Thesis,
Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign,
Urbana, Illinois, 2004.
[7] Poul Henning Kirkegaard, John Dalsgaard Sorensen, Dean Cizmar and Vlatka Rajcic, System
reliability of timber structures with ductile behavior, Engineering Structures, Vol. 33, 2011,
pp. 3093-3098.
[8] S. Hendawi, Structural system reliability with applications to bridge analysis, design and
optimization, Ph.D. Thesis, Department of Civil Engineering, University of Colorado, Boulder,
Colo., 1994; Fred Moses, Structural System Reliability and Optimization, Computers & Structures,
Vol. 7, 1975, pp. 283-290.
[9] Hideki Idota and Kenji Yamazaki, A study on energy absorption capacity of steel frames
considering uncertainty of member ductility, J. Struct. Constr. Eng., AIJ, No. 575, Dec. 2003,
pp. 189-195.
353
Risk analysis of cost overrun in multiple Design and Build projects
Chidambaram Ramanathan a,*, Narayanan Sambu Potty b
a Kumpulan Liziz Sdn. Bhd., 33-11-1, 1 Lrg. Ruang Grace Square, Kota Kinabalu, Malaysia
E-mail address: ramoo_ctr@yahoo.com
b Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Tronoh, Malaysia
E-mail address: nsambupotty@yahoo.com
Abstract
Nowadays projects are more complicated, involving huge contract values, participants from multiple
disciplines, more specialized works, tighter schedules, stringent quality standards, etc.
Ultimately, cost and time are the two key parameters that play a significant role in a project's
success. The study focuses on multiple Design and Build (D&B) projects, which carry complicated
risks governed by a fixed contract sum (lump sum). There is no such specific study in Malaysia.
The objective of this study is to identify the risk-causing factors contributing to cost overrun
and to enable preparation of a realistic risk response plan. The research investigates through a
questionnaire survey and a case study. The study is specific in conducting the analysis on three
distinct regions of Malaysia: Sabah, Sarawak and Peninsular Malaysia. This helps the industry
manage risk proactively with an appropriate risk response plan for each region.
Keywords: multiple projects; D&B projects; risk analysis; cost overrun
1. Introduction
For years the engineering and construction industry has had a very poor reputation for coping with
risk, with many major projects failing to meet deadlines, cost targets, and specifications. New
product development demands shorter project durations owing to the tremendous amount of
competition and the fluctuation of customers' demand [1]. Recent trends in the industry indicate
continuous use of alternative procurement methods such as design-build. Only specialised
development projects are procured in D&B mode, in which the D&B contractor has the combined
responsibility for both the design and construction components as defined in the contract. D&B
projects have a fixed contract sum and contract period, and variation orders are not considered
for contracts procured in this method. These new procurement schemes require us to be more careful
about risks, because new schemes increase the complexity of projects [2]. A multi-project setting
is one in which several projects are performed at the same time in parallel, with differing start
and completion dates. This multi-project setting has the benefit of sharing human resources and
equipment from a common resource pool, further enabling certain expertise to be used and shared
efficiently with less idle time. In construction projects not only is on-time delivery important,
it also translates directly into whether the contractor will meet the client's requirements and
quality expectations and provide a return on investment. Delivering a project on time does not
occur by hoping that the required completion date will be met [3]. The majority of D&B projects
encounter events and/or changes that affect the original plan of executing a project [4]. Further,
resources such as labour, material, and equipment may be scarce and in high demand, and as a
result may hamper project execution. Attempting to solve these unforeseen issues during a project
without a plan in place to determine the immediate impact is a major risk which can often lead to
delayed projects and disputes between the parties [5]. Studies show that 84% of Malaysian public
projects face cost overrun. This potential problem has motivated the authors, as professionals and
scholars, to undertake an extensive research study to meet these challenges.
2. Risk Management Review
The understanding of risk is not consistent; different researchers can have different
understandings of risk [6]. Unexpected events will usually occur during a project [7], [8]. Risk
management is considered to be a tool that limits the impact of these unexpected events, or
prevents them from happening. Accordingly, it is generally assumed that risk management
contributes to the success of the project [9]. Risks have a significant impact on a construction
project's performance in terms of cost, time and quality [10].
Attempts have been made to improve the construction industry's poor reputation. In 1994 Michael
Latham [11] said that "No construction project is risk free. Risk can be managed, minimised,
shared, transferred, or accepted. It cannot be ignored". This work was followed in 1998 by a
report by Sir John Egan [12], "Rethinking Construction" (DETR, 2000b), and the Construction Best
Practice Programme was subsequently set up, supported by the Construction Industry Board (CIB).
Its main focus was to transform outdated management practices and business cultures; risk
management is one of its fifteen business improvement themes.
In 2007, Rossi [13] stated that risk monitoring and control is the process of keeping track of the
identified risks, monitoring residual risks and identifying new risks. Risk monitoring and control
is an ongoing process during the project life cycle: risks change as the project develops, new
risks arise, and some disappear.
Dey and Ogunlana [2] studied the risk management process in BOT (Build-Operate-Transfer) projects,
following a risk management framework which included risk identification, risk classification,
risk analysis, risk attitude and risk response (risk allocation).
3. Risk Factors and Risk Owners
The list of risk factors that were identified from the previous studies and those identified
through the experience of the local practitioners was used in the quantitative analysis. All
these risk factors were compiled and placed in the right sequence in Table 1. The owners
responsible for these risk factors are also shown in the table.
Table 1 Risk Factors and Risk Owners
No  Risk Factors                                                       Risk owners         Effects
1   Wrong / inappropriate choice of site                               Project             Cost overrun
2   Incomplete design at time of tender                                Design              Cost overrun
3   Lack of cost planning / monitoring at pre & post contract stages   Owner               Cost overrun
4   Design changes                                                     Owner / Consultant  Cost overrun
5   Lack of coordination at design phase                               Owner / Consultant  Cost overrun
6   Unpredictable weather conditions                                   External            Cost overrun
7   Price fluctuations of raw materials                                External            Cost overrun
8   Remeasurement of provisional sum                                   Consultant          Cost overrun
9   Extension of time                                                  Contractor          Cost overrun
10  Omissions and errors in the bills of quantities                    Consultant          Cost overrun
11  Inaccurate quantity take-off                                       Consultant          Cost overrun
12  Lack of project related experience                                 Contractor          Cost overrun
13  Lack of knowledge in local regulations                             Owner / Consultant  Cost overrun
14  Inadequate project preparations, planning and implementation       Contractor          Cost overrun
15  Delay in supply of raw materials and equipment                     Contractor          Cost overrun
16  Long period of the project maintenance period                      Contractor          Cost overrun
17  Bad allocation of workers at site                                  Contractor          Cost overrun
18  Project materials monopoly by some suppliers                       Material            Cost overrun
4. Data and Analysis
A convenience sampling method with the snowball sampling technique was used to distribute the
questionnaire set, with a Likert scale from 1 to 5 representing 'Continually Occur' to 'Never
Occur', respectively. In total, 136 valid responses were used for the analysis of the Relative
Importance Index (RII), computed using expression (1):

RII = ΣW / (A × N) ………….. (1)

where
W = rating given to each factor by the respondents (i = 1 to 5)
A = the highest rating (i.e. 5 in the study)
N = total number of respondents (i.e. 136 in the study)
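As an illustration only (not the authors' SPSS workflow), the following minimal sketch computes the RII of expression (1) for one factor from hypothetical Likert ratings:

def relative_importance_index(ratings, highest_rating=5):
    """RII = sum(W) / (A * N): ratings are Likert scores 1-5 for one factor."""
    return sum(ratings) / (highest_rating * len(ratings))

# Hypothetical ratings from six respondents (the study used N = 136):
print(relative_importance_index([4, 3, 5, 4, 4, 3]))  # 0.7666...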
The data are processed to find the results of ranking in various viewpoints of contract
location (place wise):
(1) Sabah in East Malaysia,
(2) Sarawak in East Malaysia and
(3) Peninsular Malaysia.
As the data collected in this study are non-parametric, ordinal variables, a powerful method of
examining the relationship between pairs of variables is Spearman's rank correlation [14]. It is
calculated using equation (2):

rs = 1 − (6 Σd²) / (n(n² − 1)) ……………. (2)

where:
rs = Spearman's rank correlation coefficient
d = the difference between the ranks assigned to the variables for each cause
n = the number of variables (ranks), equal to 79 and 9 for all the factors and groups,
respectively.
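For concreteness, here is a small sketch of equation (2) applied to two hypothetical rank lists (not the study's data):

def spearman_rho(ranks_a, ranks_b):
    """Equation (2): rs = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1.0 - (6.0 * d2) / (n * (n ** 2 - 1))

region_a = [1, 2, 3, 4, 5]   # hypothetical factor ranks from one region
region_b = [2, 1, 3, 5, 4]   # hypothetical ranks from another region
print(spearman_rho(region_a, region_b))  # 0.8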
5. Results and Discussion
Eighteen (18) factors were formulated after a critical review of previous studies relevant to the
Malaysian construction industry. These 18 risk factors were ranked by calculating the RII from the
responses of the three regions, Sabah, Sarawak and West Malaysia, with the concluding results
shown in Table 2 and discussed as follows:
5.1 Sabah region view
The analysis of responses from the Sabah region gives 'Extension of Time' as the 1st rank with RII
0.743, Mean 3.72 and SD 0.84; 'Price fluctuation of raw materials' is in the 2nd ranking with RII
0.721, Mean 3.61 and SD 0.80; whereas the least-ranked factor is 'long period of the project
maintenance period' at the 18th ranking with RII 0.523, Mean 2.62 and SD 0.77.
5.2 Sarawak region view
The analysis of responses from the Sarawak region gives 'Design change' as the 1st rank with RII
0.680, Mean 3.40 and SD 0.95; 'Price fluctuation of raw materials' is in the 2nd ranking with RII
0.665, Mean 3.33 and SD 0.87; whereas the least-ranked factor is 'long period of the project
maintenance period' at the 18th ranking with RII 0.498, Mean 2.49 and SD 0.81.
5.3 West Malaysia region view
The analysis of responses from the West Malaysia region gives 'Price fluctuation of raw materials'
as the 1st rank with RII 0.694, Mean 3.47 and SD 0.87; 'Design change' is in the 2nd ranking with
RII 0.667, Mean 3.33 and SD 0.97; whereas the least-ranked factor is 'bad allocation of workers at
site' at the 18th ranking with RII 0.500, Mean 2.50 and SD 0.80.
Table 2 Combined views of RII and factor ranks of the factors causing cost overrun for
respondents from the three regions
Factor  Factor causing cost overrun                                       Sabah       Sarawak     West Malaysia
Code                                                                      RII (Rank)  RII (Rank)  RII (Rank)
D.1     Wrong / inappropriate choice of site                              0.57 (15)   0.55 (15)   0.53 (16)
D.2     Incomplete design at time of tender                               0.61 (12)   0.60 (7)    0.63 (5)
D.3     Lack of cost planning / monitoring during pre and post
        contract stages                                                   0.64 (8)    0.65 (4)    0.62 (7)
D.4     Design changes                                                    0.67 (4)    0.68 (1)    0.67 (2)
D.5     Lack of coordination at design phase                              0.63 (10)   0.63 (5)    0.60 (9)
D.6     Unpredictable weather conditions                                  0.70 (3)    0.60 (8)    0.64 (4)
D.7     Price fluctuations of raw materials                               0.72 (2)    0.67 (2)    0.69 (1)
D.8     Remeasurement of provisional sum                                  0.65 (6)    0.58 (13)   0.63 (6)
D.9     Extension of time                                                 0.74 (1)    0.66 (3)    0.67 (3)
D.10    Omissions and errors in the bills of quantities                   0.62 (11)   0.59 (11)   0.62 (8)
D.11    Inaccurate quantity take-off                                      0.61 (13)   0.58 (14)   0.56 (12)
D.12    Lack of project related experience                                0.61 (14)   0.59 (12)   0.55 (15)
D.13    Lack of knowledge in local regulations                            0.57 (16)   0.53 (16)   0.56 (14)
D.14    Inadequate project preparations, planning and implementation      0.66 (5)    0.59 (9)    0.60 (10)
D.15    Delay in supply of raw materials and equipment                    0.65 (7)    0.59 (10)   0.56 (13)
D.16    Long period of the project maintenance period                     0.52 (18)   0.50 (18)   0.50 (17)
D.17    Bad allocation of workers at site                                 0.55 (17)   0.53 (17)   0.50 (18)
D.18    Project materials monopoly by some suppliers                      0.64 (9)    0.63 (6)    0.58 (11)
Count: 18 factors
Table 3 shows that there is a high degree of agreement between the opinions of the respondents
from the three regions, Sabah, Sarawak and West Malaysia. The correlation coefficient between
Sabah and Sarawak is 0.802, between Sabah and West Malaysia 0.851, and between Sarawak and West
Malaysia 0.835; the P-value (Sig.) is 0.000, which is less than the 0.01 level of significance, so
there is a significant relationship between the regions.
Table 3 Correlation of cost overrun between Sabah, Sarawak and West Malaysia
                                         Sabah    Sarawak  West Malaysia
Sabah          Correlation Coefficient   1.000    .802**   .851**
               Sig. (1-tailed)           .        .000     .000
               N                         18       18       18
Sarawak        Correlation Coefficient   .802**   1.000    .835**
               Sig. (1-tailed)           .000     .        .000
               N                         18       18       18
West Malaysia  Correlation Coefficient   .851**   .835**   1.000
               Sig. (1-tailed)           .000     .000     .
               N                         18       18       18
**. Correlation is significant at the 0.01 level (1-tailed).
6. Overall Results and Discussion
Eighteen factors were formulated after a critical review of previous studies relevant to the
Malaysian construction industry. The analysis of responses from all respondents (overall category)
gives 'Extension of time' the 1st rank with RII 0.712, Mean 3.6 and SD 0.90; 'Price fluctuation of
raw materials' is in the 2nd ranking with RII 0.701, Mean 3.5 and SD 0.85; whereas the
least-ranked factor is 'long period of the project maintenance period' at the 18th ranking with
RII 0.510, Mean 2.6 and SD 0.80.
There is agreement between the three regions, Sabah, Sarawak and West Malaysia. The correlation
coefficient values obtained show agreement ranging from 80.2% (between Sabah and Sarawak) to 85.1%
(between Sabah and West Malaysia), and the P-value (Sig.) is 0.000, which is less than the 0.01
level of significance, so there is a significant relationship. The reliability of the
questionnaire data was tested with Cronbach's alpha coefficient, calculated using SPSS ver.18 for
the cost overruns; the internal consistency reliability test for cost overrun gives a value of
0.964, which is close to 1. This shows good reliability of the questionnaire data.
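For readers unfamiliar with the statistic, the following is a minimal sketch of the standard Cronbach's alpha computation; the study itself used SPSS ver.18, and the demo matrix here is made up:

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1).sum() # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical 5 respondents x 4 Likert items:
demo = [[4, 4, 5, 4], [3, 3, 3, 2], [5, 4, 5, 5], [2, 2, 3, 2], [4, 3, 4, 4]]
print(round(cronbach_alpha(demo), 3))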
The results and conclusion of the risk analysis for cost overrun will help the organisations to
prepare risk response plan for all their future projects [15]. The completed research provides
the process of developing options and actions to enhance opportunities and reduce threats for
the project success. The construction organisation dealing with multiple D&B projects may
be benefited with these results.
7. References
[1] R. R. Vega and R. J. Vokurka, "New product introduction delays in the computer industry",
Industrial Management & Data Systems, vol. 100, no. 4, pp. 157-263, 2000.
[2] P. K. Dey and S. O. Ogunlana, "Selection and application of risk management tools and
techniques for build-operate-transfer projects", Industrial Management & Data Systems, vol. 104,
no. 4, pp. 334-346, 2004.
[3] C. T. Ramanathan, N. S. Potty and A. B. Idrus, "Analysis of time and cost overrun in Malaysian
construction", Advanced Materials Research Journal, vol. 452-453, pp. 1002-1008.
[4] C. T. Ramanathan, N. S. Potty and A. B. Idrus, "Risk factors influencing time and cost overrun
in multiple D&B projects in Malaysia: A case study", IEEE International Conference on Industrial
Engineering and Engineering Management, pp. 854-859.
[5] P. D. Galloway, "Comparative Study of University Courses on Critical-Path Method Scheduling",
Journal of Construction Engineering and Management ASCE, vol. 132, no. 7, pp. 712-722, 2006.
[6] H. Zhang, "Two schools of risk analysis: A review of past research on project risk", Project
Management Journal, Wiley, vol. 42, no. 4, pp. 5-18, 2011.
[7] J. K. Pinto, Project Management: Achieving Competitive Advantage. Upper Saddle River, NJ:
Pearson Prentice Hall, 2007.
[8] J. R. Turner and R. A. Cochrane, "Goals-and-methods matrix: coping with projects with ill
defined goals and/or methods of achieving them", International Journal of Project Management,
vol. 11, no. 2, pp. 93-102, 1993.
[9] R. Olsson, "In search of opportunity management: is the risk management process enough?",
International Journal of Project Management, vol. 25, pp. 745-752, 2007.
[10] A. Ahmed, B. Kayis and S. Amornsawadwatana, "A review of techniques for risk management in
projects", Benchmarking: An International Journal, vol. 14, no. 1, pp. 22-36, 2007.
[11] M. Latham, "Constructing the Team", HMSO, London, United Kingdom, pp. 87-92, 1994.
[12] J. Egan, "Rethinking construction: The report of the Construction Task Force to the Deputy
Prime Minister, John Prescott, on the scope for improving the quality and efficiency of UK
construction", Department of the Environment, Transport and the Regions Construction Task Force,
London, UK, 1998.
[13] P. Rossi, "How to link the qualitative and quantitative risk assessment", in Proceedings of
the 2007 PMI Global Congress EMEA, Budapest, Hungary, 2007.
[14] A. Bryman and D. Cramer, "Quantitative Data Analysis with SPSS Release 10 for Windows", 2nd
edition, Taylor and Francis Inc., USA, 2002.
[15] C. T. Ramanathan, N. S. Potty and A. B. Idrus, "Construction delays causing risks on time and
cost - A critical review", Australasian Journal of Construction Economics and Building, vol. 12,
no. 1, pp. 37-57.
Computer and Information Sciences I
13:00-14:30, December 15, 2012 (Meeting Room 5)
Session Chair:
228: Design And Implementation Of Microcontroller Based Computer Interfaced Home
Appliances Monitoring And Control System
Nwankwo Nonso Prince,
Federal Polytechnic Oko
Nwankwo Vincent
Federal Polytechnic Oko
Azubuike Onuorah Patrick
Patech Electronics Co
242: Design And Implementation of Computer Interfaced Voice Activated Switch Using
Speech Recognition Technology
Azubuike Onuorah Patrick
Patech Electronics Co.
Nwankwo Nonso Prince
Federal Polytechnic Oko
251: Geo-databases: Expanding Geo-expert Modeling Perspectives
Elzbieta Malinowski
University of Costa Rica
252: A Non-Voice based Digital Speech Watermarking
Mohammad Ali Nematollahi
University Putra Malaysia
S.A.R Al-Haddad
University Putra Malaysia
M. Iqbal Bin Saripan
University Putra Malaysia
Shayamala Doraisamy
University Putra Malaysia
Laith Emad Hamid
University Putra Malaysia
332: Autonomous UAV support for rescue forces using Onboard Pattern Recognition
Florian Segor
Fraunhofer IOSB
Chen-Ko Sung
Fraunhofer IOSB
243: Neural Network Modeling of Braking Force in Variable Road and Slope Conditions
Recai KUS
Selcuk University
Ugur Taskiran
Selcuk University
Huseyin Bayrakceken
Afyon Kocatepe University
228
Design and Implementation of Microcontroller based Computer Interfaced
Home Appliances Monitoring and Control System
Nwankwo Nonso Prince a,*, Nwankwo Vincent b, Azubuike Onuorah Patrick c
a Department of Computer Engineering, Federal Polytechnic Oko, Anambra State, Nigeria.
princetechfoundation@yahoo.com
b Department of Computer Engineering, Federal Polytechnic Oko, Anambra State, Nigeria.
ifechinwankwo@ovi.com
c Patech Electronics Co. / Research Institute Onitsha, Anambra State, Nigeria.
pazubuike207@yahoo.com
Abstract
This paper presents the design and implementation of a microcontroller-based, computer-interfaced
home appliances monitoring and control system. The system uses an interface program running on a
personal computer (PC) to monitor the magnitude of the current drawn by the loads from each socket
and to switch two electrical sockets ON/OFF. The system makes use of a microcontroller and an
analogue-to-digital conversion (ADC) technique to monitor and switch the loads on these sockets.
Every unit that makes up the system was considered. The microcontroller stores all the machine
code of the system; the Top universal programmer was used to transfer the machine code to the
microcontroller. The computer and the microcontroller were interfaced using a serial port for
proper monitoring, control and switching of the system. The PC acts as an input and output device
and also stores the Visual Basic program of the system, with the major control and switching being
done by the microcontroller. The other components that make up the system were connected to the
microcontroller in one way or another to achieve the desired system.
Keywords: Analogue to Digital Converter (ADC), Control System (CS), Microcontroller,
Personal Computer (PC), Serial Port (SP).
1. Introduction
1.1. Motivation
The motivations of this work are the desire to explore and expand the capability and compatibility
of the PC, the desire to gain practical experience in using microcontrollers to design
applications, and the question of how the two (the microcontroller and the PC) can be interfaced
to monitor and control physical processes.
1.2. Objectives
1. Communicate with a microcontroller via a PC serial port using VB 6.0.
2. Monitor the state of the 2 sockets from the PC via the microcontroller:
   i) monitor the ON/OFF state of the sockets;
   ii) monitor the load, i.e. the amount of current drawn through each socket.
3. Switch the sockets ON or OFF from the PC.
4. Provide over-voltage and under-voltage protection for connected loads.
5. Provide over-current protection for each socket without the aid of a fuse.
1.3. Scope of Work
This project is restricted to the control of power from the consumer end and it is limited to the
control of two loads, whose individual power rating is below 100VA. Power control is
restricted to ON/OFF control only.
1.4. Concepts of the Project
There is a need to measure and control the amount of power consumed by electronic appliances. This
need can effectively be met by a computerized power consumption meter / control system: a device
that records the amount of power being consumed by electronic appliances and sets a limit on the
maximum power that can be drawn from the mains supply. In this system a computer is interfaced
with a consumption meter. The meter has a plug which is connected to the power supply and a socket
into which the appliances are plugged. As the meter is loaded by the appliances, their voltage,
amperage and wattage are recorded and displayed on the computer. The computer is programmed using
Visual Basic 6.0 to carry out the above tasks and also to set the limit to which the meter can be
loaded.
2. Design Methodology
[Fig. 1. Block Diagram of the Microcontroller Based Computer Interfaced Home Appliances Monitoring and Control System: the PC (interface program) communicates through its serial port and a serial communication interface with the microcontroller, which drives the actuator switching the sockets; a measurement technique feeds the socket states back to the microcontroller.]
The system uses a PC to switch ON/OFF sockets and also to monitor the state (voltage and
current) of the sockets through the microcontroller. The PC communicates with the
microcontroller via the PC’s serial port with the aid of a program written in Microsoft’s
Visual Basic 6.0.
2.1 The Microcontroller Unit (MCU)
[Fig. 2. Pin Connection Diagram of the PIC16F877A: the 40-pin device with its 4.5 MHz crystal (22 pF capacitors), analogue inputs AN0-AN2 from the measurement circuits, port B driving the NOKIA 3310 LCD, RC6/TX and RC7/RX to the MAX232, and RD0/RD1 to the ULN2003A.]
2.2. Nokia 3310 LCD
The NOKIA LCD was used in the early design phase for calibrating the voltage and current measuring
circuits via the PIC's ADC. The LCD is controlled by the PCD-8544 controller. Fig. 3 below shows
the connection of the LCD pins to the microcontroller.
[Fig. 3. Pin Connection Diagram of the NOKIA 3310 LCD: the PCD8544 lines SCLK, SDIN, D/~C, ~SCE and RES connect to MCU pins RB7, RB5, RB3, RB1 and RB6, respectively, plus VCC and GND.]
2.3. The Actuator
The actuators used for this project are relays which function by connecting or disconnecting
the socket terminals from supply. The choice of the relay used was based on the following
ratings:
 Voltage rating – 12V (switching voltage)
 Contact rating – 10A (max. current that can be controlled by the relay)
 Coil rating – 100mA (max. coil current).
To switch the relay, it requires a transistor switching circuit to amplify the current from the
CMOS device (PIC) to a value that can energize its coil. The unit that handles the switching
is the ULN2003A IC. Below is a figure showing how it was connected to the relay.
[Fig. 4. ULN2003A IC Connection to the Relay: MCU pins RD0 and RD1 drive two ULN2003A inputs whose outputs UL1 and UL2 sink the 12 V (VREL) relay coils, switching the N/C and N/O contacts of the two sockets.]
[Fig. 5. Darlington pair making up the ULN2003A IC.]
2.4. Serial Communication Interface (SCI)
[Fig. 6. Pin Connection of the MAX232: the MCU's RC6/TX and RC7/RX lines pass through the MAX232 (T2in/T2out, R2in/R2out) to the PC's serial port, with charge-pump capacitors C6-C9.]
C6, C7, C8, C9 – 16V, 10μF (EIA standard)
All connections except those of pins 7, 8, 9 and 10 are standard connections.
The MAX232 contains a voltage doubler, which doubles the 5 V to 10 V, and an inverter, which
inverts 5 V to -5 V and 10 V to -10 V, thus achieving logic levels of -5 V to -10 V and 5 V to
10 V, compatible with RS232 logic levels.
2.5. DC Power Supply Unit (PSU)
All electronic components require a steady dc power supply. Thus a regulated dc power
supply unit was built to this effect. Below is a block diagram of the power supply unit (PSU).
[Fig. 7. The Block Diagram of a Regulated DC PSU: Transformer → Rectifier → Filter → Regulator.]
The PSU is made up of the following sections. Transformer – scales down the supply voltage to an
R.M.S. value close enough to the desired DC value (12 V). Rectifier – converts the scaled-down AC
signal to a varying DC signal. Filter – removes the ripples from the rectified signal to give a
fairly constant DC value. Regulator – regulates the supply to the desired voltage (+5 V).
The circuit diagram of the PSU is shown below.
[Fig. 8. Circuit Diagram of the PSU: transformer T1 feeds bridge rectifier BR; filter capacitors C1 and C2 smooth the 12 V rail (VREL) ahead of the 78L05 regulator, which delivers VCC = 5 V; an LED with series resistor R1 indicates power.]
3. MCU and PC Algorithm Development
This section describes the algorithms used in developing the MCU and PC programs. In
designing an algorithm for any program, the first step is to determine the function of the
program. This function is then broken down into a sequence of simple operations.
3.1. MCU Program Algorithm
1. DECLARE VARIABLES AND CONSTANTS
2. DECLARE INPUT AND OUTPUT PORTS
3. INITIALIZE PROGRAM
4. GET VOLTAGE
5. GET CURRENT1
6. GET CURRENT2
7. LCD DISPLAY
8. PC COMMUNICATE
9. DECIDE
10. DELAY
Algorithm Description:
(1) Declare variables and constants. (2) Configure MCU pins as either inputs or outputs. (3)
Certain MCU parameters are set on MCU start-up:
 set a logic 0 on the actuator pins;
 clear the LCD;
 set the USART baud rate to 2400;
 read the 16-bit over-voltage, under-voltage and over-current values that were stored in EEPROM
before power was turned off.
(4)-(6) The MCU receives the digital equivalent of the analogue signals from the measuring circuit
(for voltage and current measurements) via the ADC. The ADC is set up to sample the analogue
signals at a very high rate; the MCU then takes the sample with the highest value as the working
value. (7) The ADC's digital outputs, representing the supply voltage, the currents in socket 1
and socket 2, and the calculated power, are sent to the LCD via the appropriate pins. (8) On
start-up, after initialization, the MCU starts sending the measured voltage and current data to
the PC, and is also ready to receive data from it. (9) On receiving data from the PC, the MCU
determines the nature of the information based on a given code and then performs the required
operation. (10) The delay is used for control purposes.
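To illustrate steps (4)-(6), here is a minimal sketch of the peak-sampling idea; it is not the authors' MikroBasic firmware, and read_adc() is a hypothetical, simulated stand-in for the PIC's 10-bit ADC conversion:

import math, random

N_SAMPLES = 200   # samples spread across at least one mains cycle
ADC_MAX = 1023    # 10-bit ADC full scale
V_REF = 5.0       # ADC reference voltage

def read_adc(channel):
    """Simulated stand-in for one 10-bit ADC reading of a rectified AC signal."""
    phase = random.random() * 2 * math.pi
    return int(abs(math.sin(phase)) * ADC_MAX)

def measure_peak(channel):
    # sample at a high rate and keep the largest sample as the working value
    peak = 0
    for _ in range(N_SAMPLES):
        peak = max(peak, read_adc(channel))
    # convert the digital peak back to the voltage seen at the ADC pin
    return peak * V_REF / ADC_MAX

print(measure_peak(0))  # approaches V_REF as the sample count grows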
Below is a figure showing the flow chart of the MCU program algorithm.
[Fig. 9. MCU Program Flow Chart: START (power on) → declare I/O ports → get voltage data → get current data → LCD display → output data to PC → if data received from PC, decide event → repeat while power is on; STOP (power off).]
3.2. PC Interface Program Algorithm
1. INITIALIZE PROGRAM
2. READ RECEIVE BUFFER
3. DISPLAY DATA
4. ON_CLICK EVENT SELECT CASE (#)
   CASE (1): SWITCH ON/OFF SOCKET1. CASE (2): SWITCH ON/OFF SOCKET2.
   CASE (3): SET OVER VOLTAGE1. CASE (4): SET OVER CURRENT1.
   CASE (5): SET OVER CURRENT2. CASE (6): SET UNDER VOLTAGE2.
Below is a figure showing the flow chart of the (PC) program algorithm.
[Fig. 10. PC Interface Program Flow Chart: START → initialize program → if data received, display data → on user input, send data to MCU → repeat until exit → STOP.]
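As a hedged sketch of this PC-side logic: the original interface program is written in VB 6.0, whereas this illustration uses Python with the pyserial package, and the one-byte command codes are hypothetical stand-ins for the codes agreed with the MCU firmware (the 2400 baud rate matches the MCU initialization above).

import serial

COMMANDS = {
    "socket1_toggle":    b"\x01",
    "socket2_toggle":    b"\x02",
    "set_over_voltage1": b"\x03",
    "set_over_current1": b"\x04",
    "set_over_current2": b"\x05",
    "set_under_voltage2": b"\x06",
}

with serial.Serial("COM1", baudrate=2400, timeout=1) as port:
    # steps 2-3: read the receive buffer and display the measurements
    line = port.readline()
    print("MCU reports:", line)
    # step 4: an ON_CLICK event sends the matching command code
    port.write(COMMANDS["socket1_toggle"])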
4. Software Development
Fig. 11. Picture Showing the MikroBasic Programming Environment
4.1. Interface Program Development
The Interface program was designed to provide for monitoring of data and initiating real-time
switching of the loads.
Fig. 12. Picture Showing Visual Basic Environment of the Interface Program
5. Conclusion
In concluding this project, let me review the extent to which the aim and objectives have been
realized. I was able to switch two electrical sockets ON/OFF using a PC interface program. This
was achieved via the PC's serial-port communication with a microcontroller. In the end, the
designed system was able to switch the sockets ON/OFF and monitor the currents drawn by the loads
through each of the sockets. Provision was also made for over-voltage, over-current and
under-voltage protection.
Finally, the designed system is a notable advance in monitoring and control system technology and
should be adopted in order to explore its numerous benefits.
6. References
[1] William, J. P. (1986) Control Systems Engineering, 2nd Ed., pp. 1-3.
[2] Wikibooks (2002) Control Systems Engineering, [Online] Available:
http://en.wikipedia.org/wiki.htm [2007, June 21]
[3] Tocci, R. J. and Widmer, N. S. (2003) Digital Systems: Principles and Applications, 8th Ed.,
Prentice-Hall of India, India, pp. 623, 803-805.
[4] Parallel Port Interfacing Tutorial (2002-last update), [Online] Available:
http://www.logix4u.net.htm [2007, July 5]
[5] Peacock, C., Serial Port Interface (2005, 15 June-last update), [Online] Available:
www.beyondlogic.org/serial.htm [2007, August 12]
[6] Evolution of Visual Basic (2007, 28 April-last update), [Online] Available:
http://en.wikipedia.org/wiki.htm [2007, September 18]
[7] Zayed, T., Control Electrical Appliances Using PC (2004, 31 April-last update), [Online]
Available: www.codeproject.com [2007, August 12]
[8] PIC16F87XA Data Sheet (2003-last update), [Online] Available:
http://mikroelektronika.co.yu/english.htm [2007, October 14]
242
Design and Implementation of Computer Interfaced Voice Activated Switch
using Speech Recognition Technology
Azubuike Onuorah Patrick a,*, Nwankwo Nonso Prince b
a Patech Electronics Co. / Research Institute Onitsha, Anambra State, Nigeria.
pazubuike207@yahoo.com
b Department of Computer Engineering, Federal Polytechnic Oko, Anambra State, Nigeria.
princetechfoundation@yahoo.com
Abstract
This paper presents “Computer Intafaced Voice activated switch”using speech recognition
technology. The project Involves Speech activation or deactivation of electrical load using
voice signals or manual method. This is achieved by developing a speech application using
visual basic 6.0 and then linking it to a program that drives the parallel port of the computer.
Two stages were implemented in the design of this system; they are the hardware and
software design stages. All stages were carried out carefully, adhering strictly to the design
specifications. This proved useful as the result obtained after construction and testing was
extremely satisfactory because the system was able to switch the connected loads ON and
OFF using voice and manual control methods. This is a secure and reliable system which can
be used in homes, industries, churches, offices, e.t.c. The system involves the AT89C52
microcontroller, relays and other active and passive electronic components connected
together to form the hardware unit. The AT89C52 microcontroller was used to store all the
machine codes of the system. The computer made available in this project was used to
establish an interface or a communication link with the microcontroller. The communication
link used in this project was the parallel port. It sends the corresponding signal / message to
the microcontroller anytime a voice instruction / mannual command is received, then the
microcontroller executes this instruction by switching ON / OFF the corresponding laod
connected to it through a relay.
Keywords: Computer Interface (CI), Parallel Port (PP), Speech Recognition Technology (SRT),
Visual Basic (VB), Voice Activated Switch (VAS)
1. Introduction
1.1. Statement of the Problem
1. In radioactive processes, manual switching poses great health hazards to the operator due to
the side effects of exposure to the radioactive elements.
2. Manual switching is an uphill task for the disabled and the aged due to movement constraints.
3. Sometimes we fall victim to electric shocks from our devices when we try to switch them ON/OFF,
due to leakage currents from the electric switches.
1.2. Significance of Project
Implementing this system will be of great benefit to our ever-demanding and developing society.
Some of these benefits include:
I. Accidental switching of devices will be reduced to a minimum in our industries and homes.
II. Walking from point to point to switch appliances ON/OFF will be a thing of the past.
III. It is cost effective.
IV. Switching can be done from a distance to safeguard personnel during dangerous industrial
processes/control.
V. Our industries and homes will become friendly to disabled people.
VI. This project is a step forward in perfecting speech recognition technology.
1.3.0. Methodology
Speech activation or deactivation switching is all about automating real-life devices using
speech. It involves developing a speech application using Visual Basic 6.0 and then linking it to
a program that drives the parallel port of the computer. Visual Basic 6.0 has objects that enable
one to develop voice applications, such as Microsoft Direct Text-to-Speech, Direct Speech
Synthesis, voice dictation, and voice command. These are applications developed by Microsoft and
incorporated into the Visual Basic 6.0 components library; they must be invoked in the object
window and included in the form before their properties and settings can be configured for use.
For Visual Basic 6.0 to access the Microsoft voice applications, a speech engine DLL (dynamic link
library) must be deployed in the System32 folder of the computer. The devices to be controlled are
interfaced to the computer through the parallel port.
Because VB 6.0 is an event-driven environment, the events of a voice command must be registered
and listed as menu items in the program. The particular voice command to actuate a particular
device must be registered, so that whenever the command is spoken, the sound enters the computer
through the microphone and the right event is generated. The VB 6.0 code handles the events and
their various cases; the code to actuate a particular device is written under its case.
The device carrying the load has a microcontroller which is interfaced to the computer's parallel
port through the data lines. Also connected to the microcontroller is the base of the transistor,
which is driven through a buffer to strengthen the voltage signal.
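The command-to-action mapping can be sketched as follows; this is an illustration in Python rather than the project's VB 6.0, and on_voice_command() is a hypothetical stand-in for the speech engine's recognition event (the phrases echo the examples given later for the GUI):

COMMAND_TABLE = {
    "bulb 1 come on": ("light1", True),
    "bulb 1 go off":  ("light1", False),
    "fan come on":    ("fan", True),
    "fan go off":     ("fan", False),
}

def actuate(device, state):
    """Hypothetical driver that would write the device's bit to the port."""
    print(f"{device} -> {'ON' if state else 'OFF'}")

def on_voice_command(phrase):
    # each registered phrase raises its own event; unknown phrases are ignored
    if phrase in COMMAND_TABLE:
        actuate(*COMMAND_TABLE[phrase])

on_voice_command("bulb 1 come on")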
1.3.1. The Block Diagram of the Computer Interfaced Voice Activated Switch
The block diagram of the system is shown in figure 1 below:
[Fig. 1. The Block Diagram of the Computer Interfaced Voice Activated Switch: input (speech or manual method) → central processing unit → logic and control unit → buffer → switch section → output.]
2. Concepts of Computer Interfacing
Computer interfacing is the art of connecting computers and peripherals. An interface device is
a hardware component or system of components that allows a human being to interact with a
computer, a telephone system, or other electronic information system.
2.1. The Computer External Bus
The computer bus is a group of conductor lines that allows the memory, Central processing
unit (CPU) and the I/O devices to exchange data. External devices are connected to the
computer through an I/O interface called a port. A port is a hardware that allows data to be
shared, enter or exit a computer, network node or a communication device. There are basically
two types of ports, the parallel and serial port.
2.2. The Parallel Port
The parallel port is the most commonly used port for interfacing home-made projects. This port
allows the input of up to 9 bits or the output of 12 bits at any one time, thus requiring minimal
external circuitry to implement many simpler tasks. The port is composed of 4 control lines,
5 status lines and 8 data lines, and is commonly found on the back of a PC as a D-type 25-pin
female connector.
2.3. Concepts of Libraries
A library is a collection of subprograms used to develop software; it can also be defined as a
collection of system or user tasks that may be executed by other tasks in the system (Anderson, M.
2008). Libraries contain "helper" code and data, which provide services to independent programs.
This allows code and data to be shared and changed in a modular fashion. Some executables are both
standalone programs and libraries, but most libraries are not executables. Executables and
libraries make references, known as links, to each other through the process known as linking,
which is typically done by a linker. The main reason for libraries is to prevent the redesign of
software each time a function is needed in a task, and also to allow resource sharing. There are
two types of libraries: dynamic and static.
In computer science, a static library, or statically-linked library, is a set of routines,
external functions and variables which are resolved in a caller at compile-time and copied into a
target application by a compiler, linker, or binder, producing an object file and a stand-alone
executable. This executable and the process of compiling it are both known as a static build of
the program. Historically, libraries could only be static. Static libraries are either merged with
other static libraries and object files during building/linking to form a single executable, or
they may be loaded at run-time into the address space of the loaded executable at a static memory
offset determined at compile-time/link-time.
2.4. Dynamically Linked Libraries
Dynamic linking means that the subroutines of a library are loaded into an application program at
runtime, rather than being linked in at compile time; they remain separate files on disk. Only a
minimal amount of work is done at compile time by the linker: it only records what library
routines the program needs and the index names or numbers of the routines in the library. The
majority of the work of linking is done at the time the application is loaded (load time) or
during execution (runtime). The necessary linking code, called a loader, is actually part of the
underlying operating system.
3. Design Methodology
The project construction is made up of two parts: the hardware and the software.
3.1. The Hardware Part
[Fig. 2. The Circuit Diagram of the Voice Activated Switch: the DB25 parallel-port data lines D1-D3 drive the microcontroller unit (P1.0-P1.2 in, P2.0-P2.2 out) through a 74244 octal buffer; transistors Q1-Q3 with 100 Ω base resistors R1-R3 switch the 12 V relays supplying Light1-Light3.]
3.2. Transistor Relay Driver
The relay driver circuit provides the high-current, 5 V switching capacity necessary for actuating
the 12 V electromechanical relay in the load circuit. The function of the relay is to provide the
needed open circuit (O.C.) and the live connection for the input of the load. The circuit diagram
is shown in fig. 3 below.
Fig. 3. Circuit diagram of a Transistor Relay Driver
3.3. The AT89C52 Microcontroller
It has an 8-bit microprocessor core, on-chip flash memory, decoders for decoding or locating data
in memory, and internal clock timers and counters for counting and for synchronizing logic
operations in the microcontroller. It has an external clock generator circuit called the crystal
oscillator. The AT89C52 is a powerful microcomputer that provides a highly flexible and
cost-effective solution to many embedded control applications.
[Fig. 4. The AT89C52 Microcontroller Pin-out: 40-pin package with Ports 0-3, RESET, XTAL1/XTAL2, EA, ALE, PSEN, Vcc and Vss.]
The AT89C52 provides the following features:
1. 8 Kbytes of on-chip flash memory
2. 256 bytes of on-chip RAM
3. 32 programmable I/O lines
4. Three 16-bit timer/counters
5. Full duplex serial port
6. On-chip oscillator and clock circuitry.
In addition, the AT89C52 is designed with static logic for operation down to zero frequency and
supports two software-selectable power saving modes. The idle mode stops the CPU while allowing
the RAM, timer/counters, serial port and interrupt system to continue functioning. The power-down
mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until
the next hardware reset.
4. Software Part
The software part entails the development of an application program that will enable
communication with the parallel port, to send output signals and also to provide a graphical
user interface (GUI) for operators.
4.1. Graphical User Interface (GUI)
Fig. 5. Voice Activated Switch Graphical User Interface
The application package for the voice activated switch has a graphical user interface (GUI) as
shown in fig. 5 above. The control of the device(s) connected to this system is done by speaking
the appropriate phrase stored in the database of the computer system (e.g. "bulb 1 come on" or
"fan go off") or by simply clicking the required ON/OFF button associated with the loads. The
software design provides two types of control of the load: Mode 1 control and Mode 2 control.
4.2. Mode 1 control
This type of control provides only a simple ON/OFF control of the load. The load is put ON
or OFF by simply clicking the ON or OFF command button respectively.
4.3. Mode 2 control
Mode 2 allows the control of the load from both a manual switch and the voice activated switch. It
has the enable voice control, voice and manual, and disable voice control command buttons. The
enable voice control command button is used to activate voice control of the load. The disable
manual control button is used to disable manual switching of the load.
4.4. Software Programming
Visual Basic 6.0 is used for the GUI design and the programming of the software. However,
inpout32.dll, a C++ library, was imported to allow Visual Basic to gain direct access to the
parallel port. Each section of the port is accessed by its own address and works almost
independently of the others. The port sections and their addresses are shown in the table below:
Port section    Address (decimal)    Address (hexadecimal)
Data line       888                  378h
Status line     889                  379h
Control line    890                  37Ah
Table 1. Ports and Addresses
To access any section of the port, three things must be known: the port address, the command to
access the port, and the number to send to the port (decimal, binary, or hexadecimal).
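As a sketch of what such an access looks like, here the same inpout32.dll is called from Python via ctypes instead of VB 6.0; it assumes the DLL is available on a Windows machine and that a legacy parallel port sits at the base address from Table 1:

import ctypes

ports = ctypes.WinDLL("inpout32.dll")   # exports Inp32 and Out32

DATA, STATUS, CONTROL = 0x378, 0x379, 0x37A   # addresses from Table 1

ports.Out32(DATA, 0b00000111)   # drive data lines D0-D2 high (loads ON)
status = ports.Inp32(STATUS)    # read the status lines
print(hex(status))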
4.5 Program Code
Fig. 6. The Flow Chart of the Voice Activated Switch
5. Conclusion
After the construction and testing of the system, the following results were achieved:
 switching of loads (bulbs) ON and OFF using the software voice commands;
 switching of loads (bulbs) ON and OFF using the software manual control.
The design and implementation of a computer-based voice activated switch using speech recognition
technology and a manual method has been proven to be a reasonable advancement in speech
recognition and control/access system technology. The work done here is original and has not been
published. The computer interface has expanded the flexibility of the multi-functional
microcontroller. This is a major step forward in digital design and technological advancement in
general.
I therefore conclude that this system switches reliably using manual and voice methods, so it is a
secure and reliable device which can excel in this day and age.
6. References
[1] Anderson, M. (2008): "Buckyballs to Boost Flash Memory", IEEE Spectrum, Vol. 45, No. 6, June,
p. 15.
[2] Cavin, R., Hutchboy, J. A., Zhirnov, V. (2007): "Emerging Research Architectures", Computer,
Vol. 29, No. 5, May, pp. 12-15.
[3] Gray, A. B. (2001): Learning Visual Basic, published by Garret and Garret House.
[4] Graaf, R. F. (1999): "Modern Dictionary of Electronics", 9th edition, Butterworth-Heinemann
Publishers, MA.
[5] Hart, K. A., Johnson, W. A. (2005): Windows System Programming, Third Edition, published by
Morgan Kaufmann Publishers.
[6] Kang, S. J., Kocabas, C., Ozel, T., Shim, M., Pimparkar, N. (2001): "Semi-conductors in
Electronics", IEEE Semi-Conductors, Vol. 15, No. 3, March, pp. 20-25.
[7] T. Phan and T. Soong (1999), "Text-Independent Speaker Identification", M.S. thesis,
University of Texas, Austin, TX, USA.
[8] Walker, P. M. B. (1988): Chambers Science and Technology Dictionary, published by Pearson
Prentice Hall.
[9] www.switches.com (accessed January 2008): "Types of Switches", uploaded by Dr Max Hakim.
[10] www.switchhall.com (accessed September 2008): "Switch types", uploaded by Alfred Hayes.
251
Geo-databases: Expanding Geo-expert Modeling Perspectives
Elzbieta Malinowski
Department of Computer and Information Sciences, University of Costa Rica
San José, Costa Rica
elzbieta.malinowski@ucr.ac.cr
Abstract
Spatial data has traditionally been manipulated by geo experts, e.g., geographers, cartographers,
and surveyors. They usually rely on geographic information systems that use separate repositories
for spatial and non-spatial data. On many occasions, these repositories are implemented without
following the principles established for database design, leading to problems of data
inconsistency. The spatial extensions in current DBMSs open a new possibility for developing
spatial data repositories. However, this development requires knowledge of geo concepts together
with informatics, in short, geoinformatics concepts related to spatial database design and
manipulation. In this article, we exemplify different approaches for implementing a spatial data
repository and show the importance of using traditional modeling techniques extended to spatial
data.
Keywords: Spatial databases, GIS, spatial queries, geoinformatics.
1. Introduction
Traditionally, spatial data was stored in a central repository in a specific format provided by
GIS software. This software was perceived as a standalone technology until the mid-1990s and was
used mainly by experts in the field [13]. This limitation was imposed by the complexity of the
tools that manage spatial data and by the specialized knowledge required to understand the
concepts related to spatial data formats, functions and operations. Further, these systems usually
managed spatial and non-spatial data in separate repositories, delivering much functionality for
spatial data while lacking basic features for data integrity and control, such as the ones
existing in relational databases. Fortunately, the situation has changed with the addition of
spatial features to object-relational DBMSs. These systems allow having a huge volume of
conventional (e.g., string, numeric) and non-conventional (e.g., geometric, image) data sets in
one repository. Further, they provide well-known elements, e.g., declarative integrity
constraints, triggers, and concurrency control, as well as new features related to spatial data
handling, e.g., spatial indexes, spatial functions and operations, and different formats and
models for storing spatial data.
GISs and spatial DBMSs differ in the features mentioned above as well as in the type of users and
their skills in implementing spatial data repositories. On the one hand, geo experts have a strong
background in concepts related to spatial data; however, even though they are usually familiar
with basic database concepts, in many cases they lack knowledge of design principles and other
database features. On the other hand, informatics experts are well prepared to implement
databases; nevertheless, since the spatial extension of current DBMSs is a relatively new
technology, they are not very skilled at applying their knowledge to spatial databases. This issue
affects all phases of spatial database design, leading to poorly-structured systems and query
performance problems. This lack of knowledge about spatial data features among informatics experts
makes the communication process between them and geo experts a difficult task. As a consequence,
even though the GIS and DBMS systems are clearly converging technologies, informatics and geo
knowledge do not merge as they should in order to face upcoming technological and social changes.
In this article, based on the knowledge gained during the execution of a research project
together with geo experts, we refer to issues related to modeling and implementing spatial
repositories. Section 2 briefly defines spatial data concepts. Section 3 refers to common
design principles for modeling applied by geoinformatics experts compared to the
traditionally-used approach by geo experts. Section 4 refers to some additional aspects that
are present while modeling spatial data and Section 5 provides conclusions to this article.
2. Spatial Data Concepts
Spatial data can be modeled using the object-based approach that decomposes space into
identifiable objects describing their shapes, e.g., using the line to describe a river.
SQL/MM:1999 included for the first time the definitions of different spatial data types for
database applications [13]. Currently, due to increasing complexity of spatial applications
(e.g., GPS data, trajectory data), additional spatial data types are defined by ISO/TC211 and
OGC [10] and also included in the newest 2011 version of the SQL/MM standard [12].
When spatial objects are used, it is common practice to refer to topological relationships.
They describe how spatial values relate to each other, e.g., to test whether a river crosses a
state or whether the airport is within city limits. According to the OGC specification [9], there
are six basic topological relationships with some examples shown in Fig.1.
[Figure 1: Examples of different topological relationships: Touches, Overlaps, Crosses, Within.]
The definitions of spatial data types also include specifications of different methods and
functions. Some of them implement the topological relationships mentioned above. Others allow
implementers to retrieve the properties of a spatial component, e.g., its type or length, or to
compose more complex components through aggregation operators.
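As a quick illustration of these predicates outside a DBMS, the following sketch uses the Python shapely package on simple made-up geometries; a spatial DBMS would expose equivalent functions over stored spatial values:

from shapely.geometry import LineString, Point, Polygon

state = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
river = LineString([(-2, 5), (12, 5)])
airport = Point(3, 3)

print(river.crosses(state))    # True: the river crosses the state
print(airport.within(state))   # True: the airport is within the state limits
print(state.touches(river))    # False: interiors intersect, not only boundaries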
3. Modeling Principles
The conventional database design process includes three levels of abstraction: conceptual,
logical, and physical models. Spatial databases are databases extended with a new data type
for representing spatial objects. Therefore, the well-known design phases should also be applied
to this kind of database [1, 6]. However, two different types of experts use these databases: geo
experts who want to use the capabilities of current DBMSs to store spatial data, and informatics
experts who become familiar with complex geo concepts, i.e., geoinformatics experts. Due to the
heterogeneous backgrounds and training of these two kinds of specialists, different approaches to
database modeling and querying exist.
3.1 A Geo-Expert Perspective
Until the mid-1990s spatial data was mainly used by experts in the field (geo expert), e.g.,
geographers, cartographers, surveyors. This data was traditionally stored using hybrid
architecture with the separation of spatial and conventional data, e.g., using the widely spread
format called shape file. Each shape file represents a theme and is called a layer, e.g.,
distribution of districts or rivers. The shape file includes several files, one of which is a
representation of geometry and another one is used for storing conventional data.
A shape file usually contains the necessary information without considering any design
principles. For example, even though Costa Rica has 475 districts, the shape file includes 849
records (known as features) and among them 45 districts are represented by more than one
record. In these repeated records only the geometry is different since each record includes a
part of the geometry representing a district (many districts are formed by islands). The
calculation of the total population of Costa Rica using the population of all districts stored in 849 records gave 7 million for the year 2000, even though the real population was 3.8 million.
The situation gets more complicated when several layers are used and each of them contains
uncontrolled information related to the same objects, e.g., the canton layer also includes the
names of provinces and the clinics layer includes data about provinces, cantons, and districts.
This lack of control in data repetition leads to data inconsistencies because the changes in the
same attribute in one layer are not reflected in other layers. Furthermore, other typical
database controls are also missing, such as unique or null values. Additionally, since there is
no control of geometries among different shape files, spatial data is also represented
incorrectly, e.g., some district limits are outside their corresponding canton limits.
The culture of relying on physical-level design (i.e., using shape files) and skipping conceptual and logical schemas has recently changed due to the convergence of GISs and spatial
DBMSs. However, even though many models for different kinds of spatial applications have
been proposed [4] and also some design steps have been specified [5, 6], these models are not
widely known or used. As a consequence, the most common practice among geo experts is to
represent tables without any relationships between them, i.e., in the same way as the shape
files are created. An example is shown in Fig.2 and it is based on the existing attributes in
shape files. This schema is used for performing an analysis related to a distribution of rivers
and clinics in different parts of a country’s administrative division.
Figure 2: A conceptual representation of layers.
There are several particularities in the schema shown in Fig.2:
 To better understand semantics, the same attribute names are used to represent the same
data, e.g., ProvinceName; however, in reality, the different shape files include several
names, e.g., NProvince, Province, PName making the integration process difficult;
 Even though the values for masculine and feminine populations in 2000 (MasPop2000,
FemPop2000), and different types of dwellings can be obtained by aggregating
corresponding data from the districts, they are also included in the cantons, since shape
files do not manage the concept of hierarchy and data aggregations;
 There is a high repetition of the same data in different shape files;
 There are no attributes representing geometry since the common understanding is that all
objects in the schema should have spatial representation;
 The Gid attribute is used as a primary key and it also represents the key to identify geometries stored in other files. This situation leads to the above-mentioned problem of representing the same district as many times as it contains islands.
As can be seen, the schema in Fig.2 does not follow basic database design rules. In many cases, this is due to the lack of adequate training of geo experts in handling database concepts and the shortage of computer science specialists with knowledge of spatial data features. In addition, geo experts are more familiar with GISs than with spatial DBMSs and use them for analysis and visualization. They are not concerned about the storage structure and they do not need to relate tables, because they use topological relationships, instead of relying on foreign keys, to obtain the required information about related spatial objects.
Even though the approach of using unrelated layers has worked in some cases for several decades, currently there is great interest in incorporating spatial data into business applications where more conventional data is attached. As a consequence, the complexity of spatial systems increases constantly and different kinds of users require data manipulations. Therefore, GIS modeling should be aligned with the general principles of modeling and implementing databases to avoid data inconsistencies and to obtain different types of controls. The latter either do not exist in GISs or, if they do exist, the lack of knowledge among common geo experts prevents their full utilization.
3.2 A Geoinformatics-Expert Perspective
There are several proposals for conceptual modeling of spatial databases. Some of them formally define a set of spatial data types and operations without providing visual help for their representation [7], while others rely on graphical representations extending the ER model or UML notations [1, 6, 10]. The most common practice for representing a spatial component and
topological relationships is to use a pictogram [3, 10]. Fig.3a) shows the previous example
modeled using common database principles and pictograms for representing objects’
spatiality and topological relationships. The schema in Fig.3a) includes five entity types, i.e.,
River, District, Canton, Clinic, and Type, where the first four are spatial.
Figure 3: Schemas for a spatial database: a) conceptual and b) logical.
The relationship types in the schema include restrictions represented by topological relationships if the related entity types are spatial. Fig.3a) presents four relationship types: PassThrough, BelongTo, Include, and IsOf. The first three contain different topological relationships, e.g., the topological relationship in PassThrough indicates that only rivers that intersect districts are allowed. This schema (Fig.3a) can be mapped to the object-relational model using the well-known mapping rules extended with the representation of a spatial component as an additional attribute [8] (Fig.3b).
The physical schema depends on the chosen DBMS. For example, to represent an attribute of a spatial data type, Oracle provides a single data type, mdsys.sdo_geometry, that represents any type of geometry (e.g., line, surface), independently of the spatial data type expressed in the conceptual schema: the same type is used for creating the River table with a geometry of a line and the District table with a geometry of a surface. This geometry type is a complex type instantiated during insert operations. This would allow users to insert a value that is different from the spatial data type specified in the conceptual schema (e.g., a line for a district). Therefore, a trigger must be added to enforce a condition that verifies the geometry type [8]. The same solution of using triggers can be applied to verify the constraints expressed by topological relationships in the conceptual schema [8]. Additionally, to represent the relationship types between spatial entity types, combined geo- and informatics knowledge is required, as can be seen in the example of inserting data into the PassThrough table, where a relate operator ensures the inclusion of rivers and districts that are related (not disjoint):
insert into PassThrough
  select r.Id, d.Code from River r, District d
  where sdo_relate(d.Geometry, r.Geometry, 'mask=anyinteract') = 'TRUE';
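The type constraint enforced by such a trigger can also be expressed in application code. The Python sketch below is a hedged illustration (using the Shapely library; the table names are taken from the example above) of the condition the trigger verifies:

from shapely.geometry import LineString, Polygon

# geometry type fixed at the conceptual level for each spatial table
EXPECTED_TYPE = {"River": "LineString", "District": "Polygon"}

def check_geometry(table, geom):
    # mirror the trigger condition: reject a value whose geometry type
    # differs from the spatial data type of the conceptual schema
    if geom.geom_type != EXPECTED_TYPE[table]:
        raise ValueError(f"{table} expects {EXPECTED_TYPE[table]}, got {geom.geom_type}")

check_geometry("District", Polygon([(0, 0), (1, 0), (1, 1)]))  # accepted
check_geometry("District", LineString([(0, 0), (1, 1)]))       # raises ValueError

In a real deployment this check belongs in the DBMS itself, as in the trigger solution of [8]; the sketch only mirrors the condition at the application level.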
4. Implementation Consequences
Using a correct design helps to avoid the above-mentioned problems and to improve query performance. For example, consider the queries for retrieving the names of districts and the rivers that pass through those districts. There are several possibilities for expressing this query in Oracle; four of them are shown in the first column of Table 1. Three of these queries differ in the where clause, using only topological relationships (geo-expert approach) or a conventional join (geoinformatics-expert approach).
The queries refer to three tables: the River table with 4264 records, the District table with 475 records, and the PassThrough table with 11149 records. They were run on a local server with the aim of comparing execution times and plans. The execution time is specified with a precision of seconds to compare time magnitudes. To avoid a detailed description of all operations, the explanation of the execution plans refers only to the join strategies and the retrieval of geometries chosen by the optimizer, since they are usually the most costly operations. These queries provide the same results; however, if only topological relationships are used (the first and second queries), the optimizer cannot choose among different join strategies and the slowest, nested loop, strategy is applied. Further, a more detailed analysis of the execution plans shows that when topological relationships are used in the where clause, the geometries are retrieved in earlier stages. Both aspects affect the magnitude of the execution time.
The first and second queries in Table 1 compare geometries from both tables, i.e., River and District, acting as a spatial join operator with a predicate different from the disjoint topological relationship. Oracle provides an additional table function, called sdo_join, that explicitly refers to the spatial join operation. Using this table function to express the query (third row in Table 1), the execution time increases to 289 seconds. Notice also that when the geo-expert perspective is used without the PassThrough table, only the more expensive options can be utilized, i.e., the options in the first, second, and third rows, in comparison to the geoinformatics approach with a conventional join strategy that executes in 0.26 seconds.
Therefore, the users must be aware of different possibilities that exist for implementing
queries with spatial data.
Table 1: Comparison of execution times for different queries.

Query 1: select d.Name, r.Name from River r, District d
         where sdo_relate(d.Geometry, r.Geometry, 'mask=anyinteract') = 'TRUE';
Exec. time: 9.10 sec.
Join strategy: A nested loop join with a full scan of the River table and rowid access to the District tuples is chosen. The spatial index over the districts' geometries is used in an earlier stage of the execution.

Query 2: select d.Name, r.Name from River r, District d
         where sdo_geom.sdo_intersection(d.Geometry, r.Geometry, 0.005) is not null;
Exec. time: 56.60 sec.
Join strategy: A nested loop join with full scans of the River and District tables is chosen. The geometries are selected in an earlier stage of the execution.

Query 3: select d.Name, r.Name from River r, District d,
         table(sdo_join('River', 'Geometry', 'District', 'Geometry', 'mask=anyinteract')) c
         where c.rowid1 = r.rowid and c.rowid2 = d.rowid;
Exec. time: 289.00 sec.
Join strategy: Two hash joins are chosen. However, the inner hash join uses a "collection iterator (pickler fetch)", a very costly process of converting a packed array into a byte stream for storage, manipulation, or display.

Query 4: select d.Name, r.Name from River r, District d, PassThrough p
         where r.Id = p.RiverFK and p.DistrictFK = d.Code;
Exec. time: 0.26 sec.
Join strategy: Two hash joins are chosen. No geometries need to be retrieved.
5. Other Considerations
The geoinformatics-expert design perspective was applied to develop spatial databases and spatial data warehouses for a Costa Rican institution using a set of 80 different shape files. Before the development of the spatial database applications, different geo-expert users manipulated subsets of these shape files according to their specialties. These different shape files were located in separate systems and were difficult to share among interested parties. The creation of a normalized database with high data quality (different cleaning processes were applied) opens the possibility of data reuse and expands the perspectives of analysis.
Nevertheless, several issues must be resolved in order to take advantage of spatially-extended DBMSs. One is the lack of a geomatics vision of data reuse among specialists from different fields. For example, tourist agencies require information about beaches and airports; however, since these two shape files were created by separate institutions unrelated to tourism, the spatial objects are represented as points. Therefore, users cannot obtain data about the beach size or the length of the landing strips in airports. Another example is the use of lines for the spatial representation of risk zones, which does not allow users to check whether other objects (schools, clinics, etc.) are within these zones. This situation forces different institutions/organizations to incur the high cost of digitizing the same territory using different scales and geometries for representing spatial extents and creating separate systems. Although a solution for using different spatial types to represent the same object already exists in spatial databases, referred to as multiple representations (e.g., [10]), it is not widely known, particularly among geo experts.
Another issue is the lack of established methodology for the development of spatial databases
and spatial information systems. Although some proposals exist for spatial database design [1,
6, 13], there is still a need to expand existing approaches or propose new ones that facilitate
the development of complex spatial information systems with large spatial databases.
Furthermore, spatial data is affected by changes in time. In general, geo experts lack knowledge about the possibility of developing spatial data warehouses (SDWs). Using multidimensional models for SDW design and implementation allows associating, in a natural way, different changes in the spatial extent of the same object with specific instants of time through the time dimension [8]. Later on, spatial OLAP tools can be applied which, as already demonstrated [2], are more adequate for users unfamiliar with the complex concepts related to spatial data manipulation. Accordingly, new kinds of users can enhance decision-making processes relying on spatial data.
6. Conclusions
Geographic or spatial data forms part of many everyday activities and is used in different types of applications. This shift of the production and usage of geographic data from geo experts, e.g., geographers, cartographers, surveyors, to other types of users requires well-designed and well-implemented spatial data repositories to avoid problems of data inconsistency and poor query performance. Therefore, there is a need for geoinformatics experts who understand the complex features of spatial data representation and manipulation, as well as the DBMS features related to design, data manipulation, and other relevant topics. Thus, people involved in spatial data will be able to envisage the technological change brought by the convergence of GISs and DBMSs.
7. References
[1] Bédard Y. Principles of Spatial Database Analysis and Design. In Longley P.A. et al.
(eds.) GIS: Principles, Techniques, Applications & Management, chap.29, 2005.
[2] Bédard Y. A New Way to Unlock Geospatial Data for Decision-making. In Proc. of Conf.
on Directions on Location Technology and Business Intelligence, 2005.
[3] Bédard Y. & Larivée S. Modeling with Pictogramic Languages. In Shekhar S. et al. (eds.),
Encyclopedia of GIS, Springer, 2008.
[4] ESRI. Data models. http://resources.arcgis.com/content/data-models, 2010.
[5] ESRI. Geodatabase design steps.
http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=Geodatabase_design_s
teps, 2010.
[6] Filho J.L. & Iochpe C. Modelling with a UML Profile. In Shekhar S. et al. (eds.),
Encyclopedia of GIS, Springer, 2008.
[7] Güting R.H. & Schneider M. Realm-based Spatial Data Types: the ROSE Algebra. The VLDB Journal 4(2): 243-286, 1995.
[8] Malinowski E. & Zimányi E. Advanced Data Warehouse Design: From Conventional to
Spatial and Temporal Applications. Springer, 2008.
[9] OGC. Standards and Specifications. http://www.opengeospatial.org/standards, 2010.
[10] Parent C., Spaccapietra S., & Zimányi E. Conceptual Modeling for Traditional and
Spatio-Temporal Applications: The MADS Approach. Springer, 2006.
[11] Shekhar S. & Chawla S. Spatial Databases: A Tour. Prentice Hall, 2003.
[12] SQL/MM ISO/IEC 13249-3:2011. SQL Multimedia and application packages – Part3:
Spatial, 2011.
[13] Yeung A. & Hall G.B. Spatial Database Systems: Design, Implementation, and Project
Management. Springer, 2007.
252
A Non-Voice based Digital Speech Watermarking
Mohammad Ali Nematollahi a,*, S.A.R. Al-Haddad b,*, M. Iqbal Bin Saripan c,*, Shayamala Doraisamy d,*, Laith Emad Hamid e,*
a Department of Computer & Communication Systems Engineering, Faculty of Engineering, Universiti Putra Malaysia, UPM Serdang, 43400 Selangor Darul Ehsan, Malaysia; greencomputinguae@gmail.com
b sar@eng.upm.edu.my
c iqbal@eng.upm.edu.my
d shamala@fsktm.upm.edu.my
e laithemad@gmail.com
Abstract
There are many techniques for speech watermarking, and many challenges and opportunities in terms of robustness, capacity, and imperceptibility are not explored yet; this paper presents non-voiced digital speech watermarking using the excitation of the autoregressive (AR) model. By applying a band-pass filter, the watermark bits are embedded into the band-pass area, which provides robustness against telephony channel attacks. This paper uses elliptic filtering because, among the filters examined, none worked better than the elliptic filter. Synchronization may be applied to detect the location of the watermark bits in the speech signal. The experimental results show that, for embedding a watermark in narrowband speech at a bit rate of 450 bit/s, the watermark algorithm is very robust against noisy channel, amplitude modulation and masking attacks.
Keywords: digital speech watermarking, autoregressive (AR) model, speaker verification/authentication.
1. Introduction
Watermarking is the technique and art of hiding additional data (such as watermark bits, logos, text messages, etc.) in a host signal, which may be an image, video, audio, speech or text, without making the existence of the additional information perceptible. The additional information which is embedded into the host signal should be extractable and must also resist various intentional and unintentional attacks. The conventional digital speech watermarking process is depicted in Figure 1.
Figure 1: Fundamental architecture of digital speech watermarking.
2. Autoregressive model
The watermark algorithm is based on the autoregressive (AR) model, which represents the speech signal S(n) through predictor coefficients C(i), a white Gaussian excitation e(n) and a time-variant gain G(n):

S(n) = sum_{i=1..P} C(i) S(n-i) + G(n) e(n)

where P is the order of the model.
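As an illustration of how such a model can be fitted, the following Python sketch implements the standard autocorrelation (Levinson-Durbin) method for estimating the AR coefficients; it is a generic sketch, not the paper's implementation, and the order P = 10 used later in the paper would be passed as the order argument:

import numpy as np

def lpc(signal, order):
    # biased autocorrelation r(0..order) of the analysis frame
    n = len(signal)
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0                    # prediction polynomial A(z) = 1 + a1 z^-1 + ...
    err = r[0]                    # prediction error energy
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1][:i]                 # order-update of the polynomial
        err *= 1.0 - k * k
    return a, err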
3. Speech watermarking based on phase modulation
In this paper we use the phase spectrum of the non-voice speech signal for watermarking. This technique allows modifying phases easily without MSE distortion, because the energy remains the same as when no watermark is embedded into the speech signal. By exploiting the phase of the speech signal, we can provide more capacity for watermark bits. However, this technique has two main drawbacks: firstly, the phase modulation is not totally inaudible to the human auditory system (HAS), and secondly, the channel is not always an AWGN channel.
4. Proposed Algorithm
Every watermark algorithm has two aspects: first watermark embedding, and then watermark extraction as a proof of ownership.
4.1. Embedding Algorithm
As mentioned in Section 3, phase modulation replaces the gain excitation in the AR model of the host speech signal with the watermark bits.
The main advantage of this technique is that the watermarked speech does not differ from the original speech for non-voice segments and is totally imperceptible. It is also possible to replace the white Gaussian excitation e(n) of the original speech signal with watermark bits. The speech signal is handled in two domains: the temporal domain and the spectral domain. In the temporal domain, watermark bits are only embedded into the non-voice segments and the voice segments remain like the original speech. By using thresholds on the zero-crossing rate (ZCR) and the energy level (MSF), the voice and non-voice parts of the speech signal can be detected:
if (msf > thresh_msf) then voiced else unvoiced
if (zcr < thresh_zc) then voiced else unvoiced
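A runnable reading of this detection step, with the energy and zero-crossing features computed per frame, could look as follows (a hedged numpy sketch; the threshold values are placeholders, not values from the paper, and the two tests are combined conjunctively, which is one plausible reading of the pseudocode above):

import numpy as np

def frame_features(frame):
    # short-time energy (mean square) and zero-crossing rate of one frame
    frame = frame.astype(float)
    msf = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
    return msf, zcr

def is_voiced(frame, thresh_msf=1e-3, thresh_zc=0.25):
    # voiced frames: high energy and a low zero-crossing rate
    msf, zcr = frame_features(frame)
    return msf > thresh_msf and zcr < thresh_zc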
In the spectral domain, by applying band-pass and band-stop filters, we can embed the watermark bits only in the band-pass frequencies of the speech signal, which is useful during transmission through the channel. Figure 4 presents the embedding procedure. First, voice and non-voice segments are detected by voice activity detection (VAD) on the full band of the speech signal. Then a Yule-Walker AR estimator calculates the excitation gain G(n) and the linear prediction coefficients (LPC). After that, the system synthesizes the speech segment from the original speech, which passes through the band-pass filter, in order to calculate the error e(n). The speech signal is decomposed into two frequency sub-bands by applying the band-pass and band-stop filters, shown in Figure 2 and Figure 3, without down-sampling. Voice segments are reconstructed without any modification. On the other hand, watermark bits are inserted by substituting e(n) of the non-voice segments. Finally, the unmodified band-stop sub-band is added to the result to synthesize the watermarked speech signal.
Figure 2. Elliptic Band-pass filter
Figure 3. Elliptic Band-stop filter.
Figure 4. Watermark embedding of the non-voiced speech signal.
4.2. Extraction Algorithm
The whole extraction procedure is depicted in Figure 5. Since the main aim of this paper is digital speech watermarking, the input speech is first processed by an RLS adaptive equalizer. We assume that the received speech signal is synchronized properly; however, in some situations synchronization must be performed explicitly. As mentioned in the embedding section, the watermark bits are embedded into the band-pass frequencies of the speech, which is why the received speech must be passed through the same band-pass filter as in the embedding step. Adaptive filtering is applied to compensate for the all-pole embedding filter and the unknown effect of the transmission channel. In the extraction process, recursive least squares (RLS) equalization is applied; compared to other equalization methods such as LMS, it has advantages such as good tracking behavior, a fast rate of convergence and, due to its recursive nature, no need for matrix inversion (training symbols could be applied instead). The RLS adaptive filter calculates e'(n) = U(n)w(n). Finally, the watermark bits are extracted after synchronization and equalization by applying a simple minimum Euclidean distance metric.
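For reference, a generic RLS update has the following shape (a numpy sketch under assumed parameters; the filter order, forgetting factor and initialization below are placeholders, not values from the paper):

import numpy as np

def rls_equalize(u, d, order=8, lam=0.99, delta=100.0):
    # adapt w so that the filter output w.x(n) tracks the desired signal d(n)
    w = np.zeros(order)
    P = delta * np.eye(order)               # estimate of the inverse correlation matrix
    y = np.zeros(len(d))
    for n in range(order, len(d)):
        x = u[n - order:n][::-1]            # regressor, most recent sample first
        k = P @ x / (lam + x @ P @ x)       # gain vector
        y[n] = w @ x
        e = d[n] - y[n]                     # a-priori estimation error
        w = w + k * e                       # coefficient update
        P = (P - np.outer(k, x @ P)) / lam  # inverse-correlation update
    return w, y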
Figure 5. Watermark extractor scheme.
5. Overall System Evaluation
5.1. Robustness
We measured the robustness of the proposed algorithm using the bit error rate (BER) of the detected watermark bits, which is presented in Table 1.
Table 1. System robustness under various channel attacks at 690 bit/sec.

Transmission channel attack   BER (%)   BSC capacity (bit/s)
Ideal channel                 3.5       540
Band-pass                     3.5       540
Flat fading 3 Hz              3.6       537
AWGN 30 dB                    4.8       499
IRS filter                    4.9       494
Aeronaut channel              5.2       486
All-pass filter               7.7       420
5.2. Capacity
The majority of segments of the speech signal are classified as non-voice, which directly affects the watermark capacity; for example, out of 57 sec of speech, 32 sec are non-voice. Table 1 also presents the binary symmetric channel (BSC) capacity, which is calculated by

C = R * (1 + P*log2(P) + (1 - P)*log2(1 - P))

where R is the rate and P is the BER probability.
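As a quick check of the tabulated values, the capacity formula can be evaluated directly (a short Python sketch):

import math

def bsc_capacity(rate, ber):
    # capacity of a binary symmetric channel at the given embedding rate
    h = -ber * math.log2(ber) - (1 - ber) * math.log2(1 - ber)  # binary entropy H(P)
    return rate * (1 - h)

print(bsc_capacity(690, 0.035))  # ~539 bit/s, matching the 540 of Table 1 up to rounding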
5.3. Inaudibility
The evaluation has been performed with the subjective mean opinion score (MOS) and the objective signal-to-noise ratio (SNR). The results showed that the watermark bits are audible on an ideal transmission channel but inaudible in the presence of AWGN and the different channel attacks. The MOS of the system is about 4.06 and the SNR is 32.13 dB.
Table 2: MOS grading scale

MOS   Description
5     Imperceptible
4     Perceptible, but not annoying
3     Slightly annoying
2     Annoying
1     Very annoying
5.4. Implementation
Using random input speech utterances (six male, six female speakers), we recorded 60 seconds of speech with a sampling frequency of Fs = 8000 Hz; it was then analyzed by LP with order P = 10, and the watermark bits were embedded into the frequency band between 650 and 3300 Hz. The watermark bits were selected from the interval (0, 1). There is also the possibility of trading off robustness against capacity by changing each factor.
5.5. Discussion
In some detector systems the LP coefficients are not available for the extraction process; that is why relying on the embedded LPC for the adaptive filter in the receiver is not a realistic approach. Different speech watermarking approaches make various contributions to inaudibility, capacity, and robustness; they also have different applications in the real world, and their results are reported under different channel attacks, which makes it difficult to compare our method to other methods.
6. Conclusions
Speech watermarking based on phase modulation has advantages over quantization-based speech watermarking, such as that the watermark capacity does not depend on the channel noise distortion ratio; it does, however, depend on the host-signal-to-channel-noise ratio. The results also showed that the non-voice segments of a speech signal can be used for watermarking. Our algorithm does not claim optimality, but based on the experimental results it can insert 690 bit/s with a total BER of about 6.7% under band-pass channel attacks with a bandwidth of around 2.6 kHz and an SNR of 31.11 dB per frame. A large study area remains open through different implementations and the theoretical capacity, which is a new opportunity for researchers to increase performance. For instance, using the noise of the speech signal in the voiced segments could be studied as well.
7. References
[1] S. Verdú and T. S. Han, "A general formula for channel capacity," IEEE Transactions on Information Theory, vol. 40, no. 4, pp. 1147-1157, Jul. 1994.
[2] Y. Wu, "A proof on the minimum and permissible sampling rates for the first-order sampling of bandpass signals," Digital Signal Processing, vol. 17, no. 4, pp. 848-854, Mar. 2007.
[3] J. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital Communication, 3rd ed. Springer, 2004.
[4] S. Haykin, Adaptive Filter Theory, 4th ed. Prentice-Hall, 2002.
[5] ITU-T Recommendation G.191: Software Tools for Speech and Audio Coding Standardization, International Telecommunication Union, Sep. 2005.
[6] M. Nilsson, B. Resch, M.-Y. Kim, and W. B. Kleijn, "A canonical representation of speech," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 4, Honolulu, Hawaii, USA, Apr. 2007, pp. 849-852.
[7] P. Moulin and R. Koetter, "Data-hiding codes," Proceedings of the IEEE, vol. 93, no. 12, pp. 2083-2126, Dec. 2005.
[8] G. Sharma and D. J. Coumou, "Watermark synchronization: Perspectives and a new paradigm," in Proceedings of the IEEE Conference on Information Sciences and Systems, Princeton, NJ, USA, Mar. 2006, pp. 1182-1187.
[9] D. J. Coumou and G. Sharma, "Watermark synchronization for feature-based embedding: Application to speech," in Proceedings of the IEEE International Conference on Multimedia and Expo, Toronto, Canada, Jul. 2006, pp. 849-852.
[10] M. Schlauweg, D. Pröfrock, and E. Müller, "Soft feature-based watermark decoding with insertion/deletion correction," in Information Hiding, ser. Lecture Notes in Computer Science, vol. 4567. Springer, 2007, pp. 237-251.
[11] K. Hofbauer, "Speech Watermarking and Air Traffic Control," Graz, 2009.
[12] M. A. Nematollahi, S. A. R. Al-Haddad, S. Doraisamy, and M. I. B. Saripan, "Digital Audio and Speech Watermarking Based on the Multiple Discrete Wavelets Transform and Singular Value Decomposition," 2012 Sixth Asia Modelling Symposium (AMS), 2012, pp. 109-114. DOI: 10.1109/AMS.2012.54.
332
Autonomous UAV support for rescue forces using Onboard Pattern
Recognition
Chen-Ko Sung a,*, Florian Segor b
a Fraunhofer IOSB, Fraunhoferstr. 1, Karlsruhe, Germany
E-mail address: chen-ko.sung@iosb.fraunhofer.de
b Fraunhofer IOSB, Fraunhoferstr. 1, Karlsruhe, Germany
E-mail address: florian.segor@iosb.fraunhofer.de
Abstract
During search and rescue operations after man-made or natural disasters, the emergency forces need exact information about the situation as soon as possible. At the Fraunhofer IOSB a security and supervision system is being developed which uses a variety of modern sensor systems for this purpose. Besides land robots, maritime vessels, sensor networks and fixed cameras, miniature flight drones are also used to carry a wide range of payloads and sensors. To gain a higher level of autonomy for these UAVs, different onboard process chains of image exploitation for tracking landmarks and of control technologies for UAV navigation were implemented and examined to achieve a redundant and reliable UAV precision landing. First experiments made it possible to validate the process chains and to develop a demonstration system for the tracking of landmarks in order to prevent and minimize any confusion on landing.
Keywords: AMFIS, mobile sensor carrier systems, adaptive, guiding point, information fusion,
landing system, rescue team, automatic landmark tracking
1. Introduction
The civil security and supervision system AMFIS was developed at the Fraunhofer IOSB as a
mobile support system for rescue forces in accidents or disasters. The system is designed as
an open integration hub for a large number of heterogeneous sensors, sensor networks and
sensor carriers to support rescue forces optimally. Beside cameras, sensor knots, ground
robots, and underwater robots, unmanned aerial vehicles (UAVs) [1] are also used. This
concern in most cases vertical take-off and landing (VTOL) systems, which navigate
themselves on the basis of the global positioning system (GPS). To gain a higher level of
autonomy for these systems different onboard process chains of image exploitation for
tracking landmarks and of control technologies for UAV navigation were implemented and
173
examined to achieve a redundant and reliable UAV precision landing.
The benefits of onboard process chains are multiple: the data transmission from the UAV to the ground station is reduced and the level of autonomy of UAV operations is increased. The methods used for the automatic landmark tracking are invariant to rotation and scaling; they are efficient, robust, and adaptive regardless of the flight level of the UAV. In this paper, the selected onboard process chains for automatic landmark tracking, for UAV navigation and for the conversion of landmark positions from image coordinates to world coordinates in video sequences are presented. First experiments made it possible to validate the process chains and to develop a demonstration system for the tracking of landmarks in order to prevent and minimize any confusion on landing.
2. System Overview
The security and surveillance system AMFIS developed as a technology demonstrator at the
Fraunhofer IOSB is designed as a support system for rescue and emergency units. Assuming that stationary as well as highly mobile sensor systems will become more and more relevant in the near future, the system provides a homogeneous and intuitive workspace to simplify the use of heterogeneous systems. To make this possible, different approaches have been realized and tested. Besides the reduction of operating complexity and the fusion of data from different data sources into a comprehensive situation picture, the design of the user interfaces plays a central and important role. In addition, an essential factor is the modularity and hence the adaptability of the system. The sensors and sensor carriers used in the technology demonstrator can be seen as placeholders and can therefore be exchanged or complemented very simply to provide a suitable system for different purposes.
To test the AMFIS system in different situations and application scenarios, it was exemplarily equipped with a rather wide range of diverse sensor systems. Primarily, EO and IR cameras are used as sensors for surveillance purposes. These sensors are complemented by motion detectors, gas or vibration sensors and a large number of secondary sensors, such as GPS, acceleration sensors or attitude sensors. These sensors are either installed at fixed positions or are carried by mobile systems to their destination. For this purpose, different flight platforms, ground robots, as well as surface and underwater vessels are used.
The ground station allows the operation of these different systems by always providing the operator with the same, or an only slightly modified, interface regardless of the type of asset used. What holds for the hardware assets is also considered in the development of the support systems for the operator: new backend systems can complement or substitute the available ones when required. To guarantee an easy, quick and efficient application of the different subsystems, the operator is supported by different backend systems. Work routines which are not necessarily relevant to the mission or which only stress the operator needlessly are simplified as far as possible or completely automated. The aim is to operate the integrated mobile systems as autonomously as possible and to process the data stream in such a way that an overflow can be precluded. Especially in the case of the miniature flight drones this principle can be used efficiently. A great number of functions which keep the UAV alive and deal with the fulfillment of its job can be automated. Collision control, positioning, and reactions to certain events, up to heading to a desired position or flying over a defined area, are handled by the ground station when needed, which reduces the working load on the operator. A parallel operation of different systems thus becomes possible.
Additional intelligent analysis systems support the work with the incoming data. Video analysis systems such as ABUL [2] can be integrated easily and therefore simplify the processing of the data.
3. Application Scenarios
The security and surveillance system AMFIS [3] has been developed as an adaptive system of systems to be capable of dealing with a large number of different demands. Changing surroundings, advances in the field of sensor technology and future demands also had to be considered in creating the system. Hence, the essential application scenario can be formulated very basically: if in situ sensors and sensor systems are not sufficient, or do not even exist, to provide enough relevant information about the current situation, the advantages of small sensor carriers in combination with the most diverse sensors, adapted to the situation, can take effect.
Because UAVs in particular can operate independently of infrastructure like paths or streets and regardless of the state of the ground, their application is examined with a special focus. Sophisticated multi-rotor systems are equipped with a large number of sensors which support automatic flight.
This allows only short-term but extremely mobile aerial reconnaissance, in particular by providing a view into threatened or dangerous areas without endangering human life, or by creating images of areas which are very hard to access. In addition, the bird's-eye view can also be used to obtain a comprehensive overview of complex situations [4]. Beside the function as a sensor carrier for cameras, the aerial systems are also used for the transport of other sensors, for example chemical measuring systems to analyze poisonous materials in the air. Thus, invisible menaces can be determined more exactly, which allows a better protection of people and rescue forces.
As big as the advantages of aerial reconnaissance are, the operation of flying drones is equally complicated and time-consuming. To make it generally feasible, the operating complexity has been strongly reduced by the AMFIS ground control station (GCS) and the autonomy of the drones. This is primarily possible because the drones are equipped with GPS systems. The positioning system GPS allows quick automatic take-off, position regulation and homing, so that in these cases no pilot is required to manually take over control. Though landing using GPS is possible, a lot of space is required to provide a secure area on account of the inaccuracy of the GPS.
To reduce the positioning error and to provide a UAV capable of a precise automatic landing without any manual intervention, a system was designed which allows real-time onboard pattern detection. The detected guiding point is used to improve the positioning of the UAV in world coordinates and to reduce the inaccuracies of the inertial measurement unit (IMU). The hard- and software components used in this landing system are fully integrated into the AMFIS GCS to further reduce the workload on the operators. The following chapters describe the mark-based recognition procedure for positioning.
4. Landmark Detection Using an Adaptive Operation
The investigation concentrated on the visual detection of a man-made landmark with a fish-eye lens. For this purpose we assume that the mark must always be clearly visible to the image sensor during the in-flight detection process, independent of the flight level of the UAV. Numbers and characters are good landmarks because they are systematic and can encode a high information content. For the validation of the process chains and for the development of a demonstration system, the character "H" is used as the mark of the landing site. To test the generality of our landmark detection and pattern recognition algorithm, other characters or patterns will be used in the future.
Many segmentation methods have been developed and implemented in the past in order to reduce the search area for the detection of target objects [5, 6 and 7]. One of these methods is the so-called binarization. An image is first binarized using the method of foreground-background separation. The binarization is performed using one or more threshold values and creates a black-and-white image (blob image) as a result. The white areas are intended to represent target images and the black areas the background. Crucial for a good binarization, according to the respective task, is an appropriate threshold setting. Such an adapted threshold determination is generally not trivial if the gray or color value ranges of the relevant image regions are not known in advance. If an image is binarized with too low a threshold, the foreground gets too many pixels; if the image is binarized with too high a threshold, the target loses its signature or pixels.
In our setup we assume that the landmarks have man-made special forms with selected colors. A color-to-gray conversion algorithm converts the color images to gray-value images, in which the colors of a landmark are assigned to higher gray values. The adaptive and selective multi-target method [8] is used to separate the landmarks from the background. Without reducing the original image resolution, for example by a Gaussian pyramid, the images are segmented after binarization and noise reduction. The search areas for the detection of landmarks are thereby drastically reduced to some blobs or regions of interest (ROIs). The blobs are candidates for the detection of a landmark of a landing site in the image. Figure 1 shows a selected ROI with its vertices and a green bounding box as a detected landmark.
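A minimal sketch of such a foreground-background separation with connected-component labeling is given below (assuming the OpenCV library; Otsu's global threshold stands in here for the adaptive and selective multi-target method of [8], and the file name and area limit are invented):

import cv2

img = cv2.imread("frame.png")                 # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # color-to-gray conversion
gray = cv2.medianBlur(gray, 5)                # noise reduction
# binarization: white areas are landmark candidates, black is background
_, blobs = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(blobs)
# each stats row holds x, y, width, height and area of one blob (ROI candidate)
rois = [stats[i] for i in range(1, n) if stats[i][cv2.CC_STAT_AREA] > 50]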
5. Recognition and Interpretation of Landmark Images
The vertices of the blobs (see Figure 1c) are calculated to correct the pattern distortion caused by the camera pose and the fish-eye lens. The image data that lie within the vertices must be transformed back to a standard position with a standard size before pattern recognition is applied. This step ensures rotation- and scale-invariant pattern recognition. Many pattern recognition methods can be used for the interpretation of the transformed regions in which the blobs are contained [9]. Knowledge-intensive and learning-intensive methodologies do not fit the system requirements because the computational power on the flying platform is low. For the onboard image evaluation a non-compute-intensive process, the so-called zigzag method, was developed and applied. This process analyzes how many binary values of relevant parts of the transformed region are correlated with the expected values. If a back-transformed region has a high correlation, this region is recognized as a landmark and interpreted as the capital character "H". The position and rotation of the landmark in the image are calculated from the coordinates of its vertices.
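The back-transformation to a standard position could be performed roughly as follows (a hedged OpenCV sketch; the vertex ordering and the 64x64 standard size are assumptions, not values from the paper):

import cv2
import numpy as np

def normalize_blob(gray, vertices, size=64):
    # warp the quadrilateral spanned by the four blob vertices to a standard square,
    # making the subsequent pattern recognition rotation- and scale-invariant
    src = np.float32(vertices)  # detected vertices: top-left, top-right, bottom-right, bottom-left
    dst = np.float32([[0, 0], [size - 1, 0], [size - 1, size - 1], [0, size - 1]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(gray, M, (size, size))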
Figure 1: The search areas for the detection of landmarks are drastically reduced to some blobs or regions of interest (ROIs). a) An original image. b) Blobs after binarization and noise reduction. c) A selected ROI with its vertices and green bounding box as a detected landmark.
6. Results
A fish-eye lens must sometimes be used in order to capture a larger area, because the activities in the surroundings of the landing site should sometimes also be observed. In this case the imaged landmarks may be distorted due to the fish-eye lens and the rotational motion (see Figure 2). Nevertheless, the distorted landmark can still be correctly detected and recognized at a low height of about 10 meters; that means all parameters in the process chains are well matched for that case. No rectification of distorted images is performed because there is no spare processing power onboard the UAV. The resolution of the landmark is drastically impaired if the machine ascends to a higher flight level.
Figure 3 shows the X- and Y-positions of the detected and recognized landmark over 1000 images with a green "x". 164 images with an unrecognized landmark and 281 images with no landmark are registered with a red "+". About 16.4% (= 164 / 1000) of the input images are unrecognized; thus the detection and recognition rate is about 83.6%.
The results after the rotation- and scale-invariant pattern recognition in image sequences are shown in Figure 4. The feature detection and pattern recognition in the process chain work properly even when the UAV rotates around the landing site. The center of the recognized pattern is marked with a red circle. Independent of the sensor pose, the position of the landing site is detected correctly, even when it lies on the image border. The image coordinates of the center of the landing site are transformed to world coordinates in order to calculate the UAV pose; the resulting trajectory shows that the UAV approached the landing site and then flew away.
Figure 2: The imaged landmarks may be distorted due to the fish-eye lens and rotational motion. The resolution of the landmark is drastically impaired if the machine flies higher.
Figure 3: The X- and Y-positions of the detected and recognized landmark over 1000 images are registered with a green "x". The images with no landmark or an unrecognized landmark are registered with a red "+".
Figure 4: The pictures above show the flight maneuver over the landing site. The process chain works well with image sequences captured with a fish-eye lens.
7. Conclusions and Future Work
In this paper we presented a system to support rescue forces in disasters or accidents. To provide a better data basis for creating situation awareness, different sensor assets are used to generate information. On account of their special qualities, a main focus lies on the application of flying miniature drones. To make their application possible, important functions not directly relevant to the mission must be automated.
For this purpose, a concept for onboard pattern recognition for autonomous UAV landing has been presented. The cumulative histogram is used to derive the adaptive threshold value for the detection of pattern images in image sequences. The extracted pattern can be recognized using the so-called zigzag method. The results of the investigations motivated us to add more characters or patterns and active components. The goals are to develop a system for precision landing using an active landmark that is recognized by the drone, allowing an additional visual control of the air component, including the necessary procedures for the evaluation of multi-sensor image data.
8. References
[1] Bürkle, A., "Collaborating Miniature Drones for Surveillance and Reconnaissance", Proc. of SPIE Vol. 7480, 74800H, Berlin, Germany, 1-2 September (2009).
[2] Heinze, N., Esswein, M., Krüger, W. and Saur, G., "Image exploitation algorithms for reconnaissance and surveillance with UAV", Proc. of SPIE Vol. 7668, Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications VII (2010).
[3] Leuchter, S., Partmann, T., Berger, L., Blum, E.J. and Schönbein, R., "Karlsruhe generic agile ground station", Beyerer J. (ed.), Future Security, 2nd Security Research Conference, Fraunhofer Defense and Security Alliance, 159-162 (2007).
[4] Segor, F., Bürkle, A., Kollmann, M. and Schönbein, R., "Instantaneous Autonomous Aerial Reconnaissance for Civil Applications - A UAV based approach to support security and rescue forces", The 6th International Conference on Systems ICONS 2011, St. Maarten, The Netherlands Antilles, 23-28 January (2011).
[5] Sung, C.-K., "Extraktion von typischen und komplexen Vorgängen aus einer Bildfolge einer Verkehrsszene" (Extraction of typical and complex events from an image sequence of a traffic scene), Bunke, H., Kübler, O., Stucki, P. (eds.), Mustererkennung 1988, Informatik-Fachberichte 180, 90-96 (1988).
[6] Navon, E., Miller, O. and Averbuch, A., "Color image segmentation based on adaptive local thresholds", Image and Vision Computing, Vol. 23, 69-85 (2005).
[7] Sezgin, M. and Sankur, B., "Survey over image thresholding techniques and quantitative performance evaluation", Journal of Electronic Imaging, Vol. 13, Issue 1, 146-165 (2004).
[8] Sung, C.-K., "Adaptive and Selective Multi-Target-Tracker", Proc. of SPIE Vol. 8137, 81370T-1, San Diego, California, USA, 23 August 2011.
[9] Wood, J., "Invariant pattern recognition: a review", Pattern Recognition, Vol. 29, No. 1, 1-17 (1996).
243
Neural Network Modeling of Braking Force in Variable Road and Slope Conditions
Recai Kus a,*, Ugur Taskiran b, Huseyin Bayrakceken c
a,* Selcuk University, Technical Education Faculty, Automotive Department, Konya, Turkey
recaikus@gmail.com
b Selcuk University, Technical Education Faculty, Computer Education Department, Konya, Turkey
utaskiran@selcuk.edu.tr
c Afyon Kocatepe University, Technology Faculty, Automotive Department, Afyon, Turkey
bceken@aku.edu.tr
Abstract
Most traffic accidents occur when the vehicle cannot stop at the desired point and time. The most important factors affecting braking performance are the wheel and road conditions and the parts which make up the braking system. Braking tests can be performed as road tests or can be carried out on braking test equipment. Because road tests require special test areas and a long time to complete, experiments performed with test equipment are preferred. The duration of tests and studies can be shortened by modeling experimental results with artificial neural networks (ANN). In this study, braking outcomes are examined on different roads, like linear and side-slope roads, and in different slipping conditions of the road; the differences between the braking performances are determined and a mathematical model based on ANN is developed.
Keywords: ANN model, braking forces, braking performance, road surface, road slope,
vehicle.
1. Introduction
The most important phase of vehicle design is the optimization of vehicle features like comfort, safety, performance and economics [1]. Today, vehicles are expected to provide many features and to have many different specific functions, and these expectations increase with passing time.
Braking performance is an indicator of an effectively working braking system. Braking performance is directly affected by factors like vehicle weight, master and wheel cylinders, brake drum or disk structures, the brake hydraulic circuit and brake fluid, tires, the suspension system, braking system mechanical parts, road conditions and the load on the vehicle. There are some theoretical and experimental modeling studies in the literature examining the parameters affecting braking forces and accurate braking analysis [2-5].
Braking tests can be performed in real road conditions as road tests; a faster and more efficient way is to use testing equipment. Braking performance analysis based on data obtained from and measured by braking test equipment is more suitable because road tests require longer periods of experimentation and special testing areas [6-7]. Braking performance reveals the braking system condition; consequently it is an important performance indicator. Braking performance is influenced directly by factors like the braking system parts and the wheel-road conditions. The effect of these factors on braking force can be obtained experimentally or can also be estimated by developing mathematical models [8-9].
In this study, braking outcomes are examined on different roads, like linear and side-slope roads, and in different slipping conditions of the road; the differences between the braking performances are determined and a mathematical model based on ANN is developed.
2. Experimental Procedure
The experiments were performed in the Department of Automotive Laboratories of the Mechanical Education Department, Technical Education Faculty, Afyon Kocatepe University, on a 2004 Ford Fiesta automobile with a weight of 1025 kg. The vehicle does not have ABS braking equipment. The tire size is 175/65 R14 and the tire air pressure is 0.21 MPa.
Because of the weight of the momentum arm, and in order to eliminate the perpendicular load affecting the load sensors and to arrange the calibration values in the computer program, the corresponding offset is recorded as a tare; the tare values are subtracted from the measured values and the net braking force values are determined.
Fig. 1. Experiment setting schematic diagram
A schematic view of the experiment setting can be seen in Fig. 1. Drums spin the wheels of the vehicle by means of electric motors, and when the wheels reach the predetermined speed, the vehicle brakes are applied to stop the drums of the experiment equipment. The momentum arm connected to the electric motors applies force to load cells, and by means of an electronic interface the forces during the braking period are transferred to the computer as experimental measurement data.
During the experiment the vehicle climbs up the testing platform, the wheels are placed on the drums and then safety precautions are taken carefully. Next, the computer program is run. In the vehicle test program, the friction coefficient is chosen from the menu bar (for example 0.8 for dry road conditions). The person in the vehicle starts the engine and releases the hand brake. At this instant, there is no pressure on the clutch, brake or acceleration pedal. The electric motors are started with the start-stop buttons and the drums start to turn; consequently the vehicle wheels also start to turn. When the wheels reach the predetermined speed, the brake is applied by the driver using the brake pedal. The computer records the force values and shows the measured force-time graphics. In Fig. 2 dry road condition test results are presented graphically as seen on the computer screen.
Fig. 2. Dry road condition test results as seen on the computer screen
When the graphics are examined, it can be seen that the braking force of the right wheel is 2900 N maximum while the braking force of the left wheel is 3200 N maximum. Force fluctuations can be detected easily thanks to the sensitivity of the equipment and are due to reasons like the deceleration of the vehicle during braking, fluctuations of the electrical system, etc.
3. Model
3.1 Preprocessing of raw data
Before model development, the experimental data is preprocessed for use in modeling. The raw experimental data is a time-dependent record for the different slope angles and road conditions. An example of raw braking data for a wet road surface and 7 degrees uphill conditions, as a function of time, is given in Fig. 3.
Fig. 3. An example of a raw data graph
As can be seen from Fig. 3, the model should be based on road conditions and slope angles instead of time dependency; the object of the experiment is obviously to see the effect of road conditions and slope angle on the braking force. For these reasons the experimental data is preprocessed.
Table 1. Preprocessed and randomly chosen training data

Slope Angle (Degrees)   Friction Coefficient   Braking Force (N)
-7                      0.80                   4990.908
-5                      0.80                   4383.275
-7                      0.50                   3956.452
-5                      0.50                   3420.74
 0                      0.80                   2859.349
-3                      0.50                   2742.084
 3                      0.80                   2454.149
 0                      0.50                   1936.551
 3                      0.50                   1620.167
 5                      0.50                   1346.701
 7                      0.50                   1005.134
-7                      0.10                   635.7126
-5                      0.10                   526.8997
-3                      0.10                   364.0079
 0                      0.10                   339.41
 5                      0.10                   234.2579
 7                      0.10                   163.5725
For every road condition and slope angle, the maximum value of the time-dependent force data is determined (Table 1). Then the data values between 100% and 75% of the maximum value of the raw data are chosen and their mean is calculated. For seven values of road inclination and three road slip conditions, 21 different braking force values are evaluated. These values form the preprocessed model data. The challenge is now finding a suitable curve fitting model for the preprocessed data set consisting of two input variables and one output variable (Table 2).
Table 2. Preprocessed and randomly chosen test data

Slope Angle (Degrees)   Friction Coefficient   Braking Force (N)
 3                      0.10                   275.1
-3                      0.80                   3848.1
 7                      0.80                   1527.8
 5                      0.80                   2052.1
3.2 Artificial neural networks
To find a suitable model, the curve fitting ability of artificial neural networks (ANN) is used. An ANN can be defined as physical hardware or computer software that learns and stores relationships in a way similar to a real network of nerves. ANNs can easily and smoothly approximate and interpolate multivariate data which might otherwise require huge amounts of memory, and they are used in many scientific areas like nonlinear statistical fitting and prediction.
As modeling software, the MATLAB® ANN curve fitting tools are used to evaluate the weights and bias values of the model. For training, 17 input-output couples are chosen; the remaining 4 input-output pairs are used for testing purposes.
MATLAB® nntraintool is used for training purposes only. When training is completed, the weights, bias values and network structure are tested using the same software package's Simulink® program; the results and model performances are then measured. The ANN model consists of 2 inputs, one output and 20 hidden-layer neurons and is trained using the Levenberg-Marquardt method. The ANN model block diagram is given in Fig. 4.
In the ANN model block diagram the hidden-layer transfer function is the hyperbolic tangent sigmoid, whose output is

y = tansig(x) = 2/(1 + exp(-2x)) - 1      (1)

It is similar to tanh(x), but for the sake of accelerated calculation the MATLAB® implementation computes tansig(x) faster than tanh(x), with negligible numerical differences. The output layer is a purely linear transfer function, whose output is

y = purelin(x) = x      (2)
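Putting (1) and (2) together, the trained 2-20-1 network reduces to a single forward computation, sketched below in Python (W1, b1, W2 would be filled from the weight and bias columns of Table 4 and b2 is the output bias 0.2217; note that MATLAB's fitting tools also apply input/output normalization, which is omitted here):

import numpy as np

def tansig(x):
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0  # hidden-layer transfer function, Eq. (1)

def ann_braking_force(slope_deg, friction, W1, b1, W2, b2):
    # 2 inputs -> 20 hidden tansig neurons -> 1 purelin output, Eq. (2)
    x = np.array([slope_deg, friction])
    h = tansig(W1 @ x + b1)   # W1: 20x2 input weights, b1: 20 layer biases
    return W2 @ h + b2        # W2: 20 layer weights, b2: scalar output bias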
Fig. 4. ANN model block diagram
3.3 Performance criterion
To test the suitability of the developed model, the Mean Absolute Percentage Error (MAPE) is used. The mentioned performance criterion is given below:

MAPE = (1/n) * sum_{i=1..n} | e_i / act_i |      (3)

where e_i is the difference between the model-evaluated data and the actual value, act_i is the actual value for the i-th input-output pair and n is the number of pairs. The trained model is simulated in Simulink® and the 4 test input vectors are applied to the Simulink® model input. The Simulink® simulation results are compared to the actual experimental results, and the MAPE is used as the performance criterion: the smaller the MAPE value, the better the model performs.
After many different training and testing runs, one of the best MAPE results obtained is 0.0219; the related network results are given in Table 3 and Table 4. A MAPE value of 0.0219 is highly acceptable for the presented kind of application, and the presented ANN model represents the experiment results very well.
Table 3. Best MAPE result comparison table

Model    271.3    3808.8   1574.0   1969.4
Actual   275.1    3848.1   1527.8   2052.1
Error      3.8      39.3    -46.2     82.7
MAPE     0.0219
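Criterion (3) can be reproduced directly from the values of Table 3, as in this short Python check:

import numpy as np

model = np.array([271.3, 3808.8, 1574.0, 1969.4])
actual = np.array([275.1, 3848.1, 1527.8, 2052.1])
mape = np.mean(np.abs((actual - model) / actual))  # Eq. (3)
print(round(mape, 4))  # ~0.0236, close to the reported 0.0219 (the tabulated values are rounded)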
4. Conclusions
Braking force depends on many variables, including vehicle weight, master and wheel cylinders, brake drum or disk structures, the brake hydraulic circuit and brake fluid, tires, the suspension system, braking system mechanical parts, road conditions and the load on the vehicle. Two of the important variables depend on the road condition: one is the road slip condition and the other is the road slope. Both parameters are included in the development of the experimental model, which increases the validity of the model.
Although, the model is not complex, it seems sufficient to model experiment. Experimental
187
modeling has advantage of simplifying the experiments and shortens the experiment duration.
Comparison with experimental results shows an error of 2.19%. The model can be improved
by using more hidden layer neurons in the network to acquire better MAPE results. One other
way to improve the model is to reduce number of neurons in hidden layer without sacrificing
MAPE results.
Table 4. Best MAPE result table (MAPE = 0.0219, output bias = 0.2217)

Input Weight 1   Input Weight 2   Layer Weight   Layer Bias
  2.189            -5.778            0.234         -6.345
  4.534            -4.260            0.209         -5.656
 -1.577             6.079           -0.138          4.907
  3.560            -5.155            0.116         -4.233
  3.920             4.666           -0.309         -3.875
 -5.136            -3.486           -0.152          3.063
  2.984            -5.401            0.150         -2.222
  5.185            -3.489           -0.165         -1.681
  4.568            -4.285           -0.138         -0.943
  1.007             6.169            0.383         -0.493
 -0.057            -6.244           -0.756         -0.208
  4.309            -4.523            0.090          0.905
  4.683            -4.115           -0.200          1.587
 -5.383            -3.218            0.235         -2.273
  5.266            -3.501           -0.206          2.839
 -4.503             4.230           -0.277         -3.684
 -2.620            -5.623            0.265         -4.315
  4.949            -4.237           -0.213          4.596
  5.662            -2.370           -0.188          5.705
 -4.263             4.418           -0.120         -6.323
5. Acknowledgments
The authors thank Selcuk University Scientific Research Centre for financial support and
Afyon Kocatepe University for laboratory facilities.
Computer and Information Sciences II & Electrical and
Electronic Engineering
10:30-12:00, December 16, 2012 (Meeting Room 5)
Session Chair:
257: 2D to Stereoscopic 3D Conversion for Photos Taken from Single Camera
Chwee Keng Tan
Temasek Polytechnic
309: Native and Detection-Specific Log Auditing for Small Network Environments
Brittany Wilbert
Sam Houston State University
Lei Chen
Sam Houston State University
336: Thailand Museum Data Exchange via XML Web Service
Pobsit Kamolvej
Kasetsart University
Usa Sammapun
Kasetsart University
Nitirat Iamrahong
Kasetsart University
Guntapon Prommoon
Kasetsart University
La-or Kovavisaruch
NECTEC, Thailand
378: A New Villain: Investigating Steganography in Source Engine Based Video Games
Christopher Hale
Sam Houston State University
Lei Chen
Sam Houston State University
Qingzhong Liu
Sam Houston State University
239: The Design of Gain Controllable Current-mode First-order All-pass Filter for Analog Integrated Circuit
Winai Jaikla
King Mongkut’s Institute of Technology Ladkrabang
Totsaporn Nakyoy
Rajabhat University
240: Electronically Tunable Low-Component-Count Current-Mode Quadrature Oscillator Using CCCFTA
Chaiya Tanaphatsiri
Rajamangala University of Technology Srivijaya
Narong Narongrat
Suan Sunandha Rajabhat University
257
2D to stereoscopic 3D conversion for photos taken from single camera
Chwee Keng Tan
Temasek Polytechnic, 21 Tampines Avenue 1 S(529757) Republic of Singapore
Email: tanck@tp.edu.sg
Abstract
The traditional method of 2D to stereoscopic 3D photo conversion requires creating the depthmap of the photo through tedious, manual selection of regions in the image and painting them in different grayscale levels to represent the depth information of the image. The depthmap is then used as the displacement map to generate another perspective image. In this paper, I present a software tool that can speed up the depthmap creation process using standard image processing modules and an image-shifting algorithm detailed in this paper. My objective is to create a tool which speeds up depthmap creation compared to the tedious process of manual selection/marking using a photo-editing tool such as Photoshop.
Keywords: Computer vision application, 2D to stereoscopic 3D conversion, segmentation, depthmap creation, image shifting algorithm
1. Introduction
In this computer vision application, I attempted to create a software tool that can speed up the depthmap creation process using signal processing modules and an algorithm. My objective is to create a tool which allows the depthmap of a photo to be created in much less time compared to the tedious process of manual selection/marking using Photoshop.
2. The work flow
The work flow is illustrated in Figure 1. In Step 1, the user loads a photo and converts it to grayscale, which is used as the base for the depthmap. In Step 2, the user can apply specific image processing modules which serve the following useful purposes:
 Invert: If the original grayscale of the photo is not a good representation, invert the grayscale to get a better depthmap base (see Figure 2).
 Erode: This is useful for removing unwanted small bright spots.
 Dilate: This may be useful for removing unwanted black dots in white/bright regions.
 ThresMax: This is a truncation operation: any gray value above a threshold M is set to M. Here, I set M to 210. This is useful when we want to limit the brightness level of some areas in the base depthmap to a maximum value.
 ThresMin: This is also a truncation: any grayscale value below a threshold M is set to M. Here I set M to 30. This is useful when we want to limit the darkness level of some areas in the base depthmap to a minimum value.
 Smoothed: This applies a Gaussian smooth using a window size of 9 by 9 pixels. Applying the Smoothed operation to the final depthmap often reduces artifacts in the shifted image [2] (an example is illustrated in Figures 6 and 7). Furthermore, asymmetric smoothing (standard deviations σx = 20 and σy = 70, i.e. less smoothing in the horizontal direction than in the vertical direction) is used here, as it gives rise to less curving of straight objects, as reported in [2]. A sketch of these modules is given after this list.
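A minimal OpenCV sketch of these modules follows, using the paper's parameter values (M = 210, M = 30, 9x9 window, σx = 20, σy = 70); the 3x3 structuring elements for Erode and Dilate are assumptions, as the paper does not state them.

    import cv2
    import numpy as np

    def build_base_depthmap(path):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)       # Step 1

        # Step 2 modules (apply only those that help a given photo):
        inverted = 255 - gray                                           # Invert
        eroded = cv2.erode(inverted, np.ones((3, 3), np.uint8))         # Erode: small bright spots
        dilated = cv2.dilate(eroded, np.ones((3, 3), np.uint8))         # Dilate: small dark dots
        capped = np.minimum(dilated, 210)                               # ThresMax: truncate above M = 210
        floored = np.maximum(capped, 30)                                # ThresMin: truncate below M = 30

        # Smoothed: 9x9 Gaussian; sigmaX < sigmaY gives less horizontal smoothing
        return cv2.GaussianBlur(floored, (9, 9), sigmaX=20, sigmaY=70)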
Figure 1. Stages in the 2D-to-3D creation. Step 1: Convert image to grayscale. Step 2: Apply image processing block(s) as necessary. Step 3: Approximate marking of objects manually using segmentation. Step 4: Generate right-eye image by image shifting and holes-filling algorithm.
Figure 2. User interface. The image on the right is obtained by inverting the grayscale of the
original image. This is used as the base for subsequent stages.
In step 3 the user performs segmentation based on the Watershed algorithm as detailed in [1]. The user initially identifies and marks each segment approximately by clicking "Segment image" (see Figure 3). Next the user adds more markings to improve the segmentation (see Figure 4). Once the user judges the segmentation to be satisfactory, the user can start to reassign a grayscale value or gradient (Ground or Sky, see Figure 5) to a selected region picked by the mouse. The last step in preparing the depthmap is to apply the Smoothed operation to obtain a blurred depthmap. Smoothing of the depth map is known to reduce artifacts in the image shifting process [2].
In the final step, the user generates the right-eye image via shifting and holes-filling, which is described in the next section.
Figure 3. Initial marking (on the left image) and initial segmentation using Watershed
algorithm
Figure 4. Additional markings (on the left image) to improve segmentation.
Figure 5. Final depthmap after applying ground gradient
Figure 6. Shifted right image using the unsmoothed depthmap. Notice the artifacts on the right sides of the rocks.
Figure 7. Shifted right image using the smoothed depthmap, showing fewer artifacts on the right sides of the rocks.
3. Image Shifting
The original, source photo is treated as the left-eye image from which we generate the right-eye image. The generation involves two steps: pixel-by-pixel shifting from the source (left-eye) image to the destination (right-eye) image, and holes-filling, as depicted in Figures 9 and 10.
3.1 Pixel-by-pixel shifting
In reality, the depth-to-disparity relationship is not a linear one [2]. However, it is sufficient to use a linear relationship since, to start with, the depth map used here is arbitrarily created and may not reflect the true relative depths of objects in the scene. The depth-to-disparity relationship shown in Figure 8 is given by a straight line:

disparity = -((maxShift - minShift) / 255.0) * depth + maxShift        (1)

where maxShift and minShift depend on the image's width as follows:
where maxShift and minShift depend on image’s size as follows:
maxShift , minShift
=
4, -4
5, -5
6, -6
if image width <= 2000
if 2000 <image width <=2500
if 2500 <image width <=3000
7, -7
8, -8
if 3000 <image width <=3500
if image width >3500
Figure 8. Depth-to-disparity conversion to generate the right-eye, virtual image (disparity, in pixels, falls linearly from maxShift at depth 0 to minShift at depth 255).
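A minimal sketch of Eq. (1) and the width-dependent shift limits is given below; the rounding of the disparity to whole pixels and the ordering of overlapping shifted pixels are simplifications, not the paper's exact implementation.

    import numpy as np

    def shift_limit(width):
        # maxShift as a function of image width (minShift = -maxShift)
        if width <= 2000:
            return 4
        elif width <= 2500:
            return 5
        elif width <= 3000:
            return 6
        elif width <= 3500:
            return 7
        return 8

    def shift_right_eye(left, depth):
        # left: HxW(x3) left-eye image; depth: HxW depthmap with values in [0, 255]
        h, w = depth.shape
        max_s = shift_limit(w)
        min_s = -max_s
        # Eq. (1), rounded to whole pixels
        disparity = np.rint(-(max_s - min_s) / 255.0 * depth + max_s).astype(int)
        right = np.zeros_like(left)
        filled = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                nx = x + disparity[y, x]
                if 0 <= nx < w:
                    right[y, nx] = left[y, x]
                    filled[y, nx] = True
        return right, filled  # positions with filled == False are the holes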
3.2 Holes-filling
At this point, holes are generated where some parts of the memory that stores the destination image are not filled by any shifted pixel. This can be due to depth discontinuities and/or rounding of the disparity to a whole number.
Generally, holes are regarded as occlusions. For occlusions, we fill each group of holes by repeating the neighbouring pixel of the shifted image with the furthest depth (see Figure 10). In order for the depth representation of the shifted image to be correct, we need to rely on the shifted depth map instead of the original depth map.
Figure 11 shows a comparison of the hole-filling technique using (a) the original depth map and the neighbouring pixel of holes from the original image and (b) the shifted depth map and the neighbouring pixel of holes from the shifted image. Notice that there are more artifacts in (a) than in (b).
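As a sketch of option (b), holes can be filled from the nearest filled neighbour in the same row whose shifted depth is furthest (under Eq. (1), larger depth values are further away); the row-wise neighbour search is an assumption, as the paper does not detail how a group's neighbours are located.

    def fill_holes(right, shifted_depth, filled):
        # right: shifted image; shifted_depth: depthmap shifted with the pixels;
        # filled: boolean mask from shift_right_eye above
        h, w = filled.shape
        for y in range(h):
            for x in range(w):
                if filled[y, x]:
                    continue
                # nearest filled neighbours to the left and right of the hole
                left_n = right_n = None
                for dx in range(1, w):
                    if left_n is None and x - dx >= 0 and filled[y, x - dx]:
                        left_n = x - dx
                    if right_n is None and x + dx < w and filled[y, x + dx]:
                        right_n = x + dx
                    if left_n is not None and right_n is not None:
                        break
                candidates = [c for c in (left_n, right_n) if c is not None]
                if candidates:
                    # pick the neighbour with the furthest shifted depth
                    src = max(candidates, key=lambda c: shifted_depth[y, c])
                    right[y, x] = right[y, src]
        return right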
Figure 9. Pixel shifting illustration (assuming maxShift is 4 and minShift is -4)
Figure 10. Holes filling (gray boxes represent holes)
(a) using original depthmap and neighbouring pixel of holes from original image for holes
filling
(b) using shifted depthmap and neighbouring pixel of holes from shifted image for holes
filling
(c) Shifted images before holes filling (using smoothed depth map)
Figure 11. Comparison of holes filling technique (using smoothed depth map)
4. Results and Conclusion
Generally, I am able to create the depth map in much less time using this software tool than with Photoshop. As a rough comparison, I took about 10-15 minutes to create the depthmap for the "Raffles" image (Figure 12) using my tool, whereas I took about 1-2 hours when I used the selection tools in photo-editing software such as Photoshop.
(a) Original coloured image
(b) Depth map created with the software (c) Depthmap created using Photoshop
Figure 12. Raffles image and the artificial depthmaps
5. References
[1] F. Meyer, "Color image segmentation," Proceedings of the International Conference on Image Processing and Its Applications, pp. 303-306, 1992.
[2] L. Zhang and W. J. Tam, "Stereoscopic Image Generation Based On Depth Images for 3D TV," IEEE Transactions On Broadcasting, Vol. 51, No. 2, pp. 191-199, 2005.
[3] C. Varekamp et al., "Question Interface For 3D Picture Creation On An Autostereoscopic Digital Picture Frame," Conference Proceedings of 3DTV-Conference, 2009.
309
Native and Detection-Specific Log Auditing for Small Network
Environments
Brittany Wilberta, Lei Chenb
a Department of Computer Science, Sam Houston State University, Huntsville, Texas, United States. BMW005@SHSU.EDU
b Department of Computer Science, Sam Houston State University, Huntsville, Texas, United States. LXC008@SHSU.EDU
Abstract
A vital aspect of network monitoring is the necessity for log management. Most large companies and organizations may have well-developed security policies and procedures, and sufficient funds, people, and other resources for continuous log maintenance and auditing. However, for limited-size network environments in small business, it is relatively difficult to collect log data, process it in a manner that is human readable, and be notified of potential attacks. This paper discusses techniques for log collection and auditing within a small network environment. In this research, techniques performed in larger environments to assist in log auditing are examined, and a discussion of how these techniques can be modified and adapted for small businesses is also provided.
Keywords: log auditing; small networks; small home environments; small business environments; native logging; detection-specific logging
1. Introduction
The difficulty of collecting log files within a network environment is an issue which must be discussed regardless of the size of the environment. However, when discussing this within a small network, either a home environment or a small home office of not more than 50 employees, the difficulties are much greater. Factors such as who has access to the network, what hardware and software have access, as well as the cost of maintaining log management must be considered when discussing how to transmit and audit these messages. Also, if a malicious node has obtained access to the network, or there has been an external breach of the network, log data must be accessible as well as human readable to ensure that managers of the environment can inform authorities.
This paper discusses the use of native log files and detection-specific logging as a means of protecting small businesses by first defining the differences between native log files and detection-specific logging. Next, techniques used to provide auditing as well as detection-specific logging solutions for network environments are discussed. A strategic framework is then presented for small networks to use the log auditing techniques discussed for their log collection and auditing needs. Finally, recommendations are provided regarding further avenues to explore when discussing logging data within the small business environment.
2. Terms and Definitions
Before discussing log monitoring within a small business environment, it is necessary to give a definition of the terms used in this paper. Since previous work [1] [2] has clearly defined many of these terms, we summarize those definitions as well as provide additional terms to be used:
Event: A single recordable situation conducted within an environment [1].
Log: Also known as an “event recording” this is a single event data record which describes
when an action has been conducted [2].
Log File: This is a collection of logs which are grouped together and created by an application (for instance, system logs for the Linux environment are typically collected in the /var/log directory).
Log Format: This is the layout which is used to construct the log message. This includes
“fields, separators, delimiters, tags, etc. to distinguish the distinct parts of one LOG entry” [2].
Native Logs: These log files are created automatically by the software program used to create log messages within a software or hardware system.
Aggregation: Combining data received from log messages by locating similar attributes in the data. Aggregated data provides a count of the instances in which an event has occurred. For instance, if the user "johnsmith" failed to log into a system multiple times, the user name can be aggregated to determine the number of times the failures occurred.
Correlation: Allows for association between two or more unique log messages.
Audit trail: This is the record of several log messages which has been collected across a
system.
Incident: A potential malicious event within an environment.
Detection: The process or act of discovering an incident.
Detection-Specific Logging: The use of techniques to parse, aggregate, and correlate log messages to determine if an incident has occurred within an environment.
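To make the aggregation definition above concrete, the following sketch counts failed-login events per user from syslog-style lines; the regular expression is illustrative, not a specification of any particular log format.

    from collections import Counter
    import re

    FAILED = re.compile(r"Failed password for (?:invalid user )?(\w+)")

    def aggregate_failures(lines):
        # Aggregate log messages by user name, counting occurrences per user
        counts = Counter()
        for line in lines:
            m = FAILED.search(line)
            if m:
                counts[m.group(1)] += 1   # e.g. counts["johnsmith"] += 1
        return counts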
3. Small Network Considerations
There are three main concerns that a small network and related products must consider [3]:
Usability: Products and services which are used within a small network environment must be simple to use, set up and manage.
Unique Characteristics: Because many of the technologies used by small network environments today were developed by research labs, they may not consider the unique characteristics of a small network environment: "lack of professional administrators, deep heterogeneity, and expectations of privacy" [3].
Design: Designing a home network environment which addresses the unique characteristics and concerns of the environment involves "a concerted, interdisciplinary effort by networking, HCI, and systems researchers" [3].
However, it is still seen across many of these networks that although technology is used to assist the small network, it may not fit its demands. Largely because of this gap, when a security incident occurs within these networks, it is difficult for these environments to address cause and accountability without significant hurdles. The considerations when addressing these hurdles to provide monitoring within these networks are the following [4]:
Network sizes: Device idiosyncrasies as well as the sizes of the wireless network
environments can cause challenges to the solutions that can be implemented.
Device integration: Devices must be able to be seamlessly integrated into the monitoring
solution.
Information retrieval: The information which is collected from different devices should not
require a detailed knowledge of the device.
Extensibility: The monitoring system has to be extensible to collect information from new
network devices.
4. Defining a Small Network
For the purposes of this paper, we will combine the needs of home networks and small home business networks to define a "small network". Although the needs of a home network and a business network may or may not always converge, there are several aspects of each environment which are compatible between both:
environment which are compatible between both:
Both environments may be highly reliant on third party hardware to provide network
connectivity within the environment.
The number of regular, authenticated users is small. For the purposes of this paper it will be
defined as no greater than 50.
All of these users most likely will not be connected to the network at one time.
Third party hardware devices will be connected in some way to the network (printers and fax
machine(s), notebooks, desktops, MP3 players, etc.).
Log messages need to be collected from these third party devices.
As a result, not all users within the environment may be knowledgeable of basic security practices, and this lack of knowledge could cause security incidents. Therefore log messages need to be collected to create audit trails of activities within the environment. On the other hand, since third-party software and hardware are deployed, it is difficult to create detection-specific logging and audit trails. This is because most software creating log messages uses unique and/or proprietary log formatting techniques that are dependent on the software and hardware the log messages originate from. As an example, log messages from a Cisco firewall will have a completely different set of formatting needs than a Windows Server 2007 log message. For that reason, a discussion of detection-specific logging techniques as well as auditing techniques is required in order to provide options which small networks could use to monitor the environment.
First Audit Trail Analysis and Current Detection-Specific Logging and Auditing
Techniques
An early implementation of a log auditing system was demonstrated by Mounji, Le Charlier, and Zampuniéris in 1995. This system used the RUSSEL rule-based language to construct an early form of network monitoring which can be used for intrusion/anomaly based detection [5]. This framework provided logging controls, architecture requirements, log generation and availability requirements, as well as performance measurements which can be used to construct a log auditing system across a network. As a result, their work established many considerations which must be used today to build a log monitoring system for any size of network environment. These considerations include [5]: the framework depends on the construction of the network on which the log monitoring will be implemented; testable performance variables need to be introduced in order to provide feedback on how the log monitoring behaves under stressors; a standardized programming implementation allows for more reliable log collection and monitoring and for constructing the backend of the log monitoring framework; and potential limitations have to be considered when constructing the framework.
Auditing Techniques
There have been several auditing techniques which provide accurate audit trails within an
environment. This section will discuss a few of these methods.
Log Summarization and Anomaly Detection
A mixture of log summarization techniques and a unique anomaly detector can be used to conduct log auditing within distributed networks. Gunter et al. assume that the long-running services and applications within the distributed systems discussed are managed under separate administrative domains. The technique suggests a troubleshooting infrastructure that uses both a data summarizer to reduce the number of unnecessary logs to analyze, and anomaly detection to inform system administrators of dramatic performance abnormalities while a long-lived service (such as a large data transfer or a long-running job) is operating.
The authors implemented a log summarization extension to the NetLogger Toolkit [6]. This extension was then used to summarize log data within their testing environment; they then calculated the time necessary to summarize their log data against a baseline they set, as well as what was measured after testing completed. A separate test was later performed for file transfer anomaly experiments [6]. Their finding was that the time necessary to complete analysis of the data depends on the environment as well as the application the logging is used for. Whereas static data produced smooth, optimal results, the results from network activity yielded a high number of false positives [6].
Although this technique is focused on distributed environments, the results obtained from the log summarization section of the paper provide information regarding how to minimize the impact of collecting log data. Log summarization needs to ensure that the data being summarized is accurate and does not impact the analysis of the resulting actions. Although smaller networked environments will not necessarily generate large numbers of log messages, it is still likely that they have proportionally smaller storage for log messages. Therefore, log summarization techniques must be used to allow log data to be stored efficiently while providing the most information about the activity which created the messages.
Interoperable and Flexible Logging Principle
Huemer and Tjoa discuss utilizing XML technology to create a flexible log solution which can be used in dispersed computing environments [2]. The authors suggest that current logging solutions do not meet the current usage of log files within environments, and that an alternate solution for storing this data is necessary. They then suggest using XML, which can offer an open system with flexibility in current log management while providing an open standard which can be used across platforms. The authors suggest that XML logging provides the following benefits [2]: consistency: log files can be checked quickly for inconsistencies; automatic evaluation: tools such as XSLT, correlation, aggregation and parsing make it possible to evaluate log files automatically; protection: XML security can be used to protect the integrity and confidentiality of files; reporting: reports can be created quickly to cater to different stakeholders; interoperability: the system is platform independent and allows migration without compatibility issues.
Although it is debatable whether XML data will be efficient in large-scale environments, where the types of software, hardware and applications used may not be able to upgrade to an open logging platform efficiently or effectively [2], this recommendation to move to an open XML format could be ideal for smaller networked environments using a software solution. A small, low-cost or even free software program could be used to convert log messages from the standard operating system format to an XML format and then upload them to the storage location for analysis. Because XML files are not resource expensive, these files can easily be used to reduce the overhead needed for log summarization as well as anomaly detection in network environments.
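As an illustration only, a small conversion from a native log record to XML might look like the sketch below; the element names are assumptions, since Huemer and Tjoa propose the principle [2] rather than this exact schema.

    import xml.etree.ElementTree as ET

    def record_to_xml(host, app, timestamp, message):
        # Wrap one native log record in a simple, platform-independent XML element
        entry = ET.Element("logEntry")
        ET.SubElement(entry, "timestamp").text = timestamp
        ET.SubElement(entry, "host").text = host
        ET.SubElement(entry, "application").text = app
        ET.SubElement(entry, "message").text = message
        return ET.tostring(entry, encoding="unicode")

    print(record_to_xml("gateway", "sshd", "2012-12-16T10:30:00",
                        "Failed password for johnsmith"))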
Evidence-based Audit
AURA0 is an authorization logic which provides 'proof-based' auditing in multiple environments [7]. Vaughan et al. suggest that even though it is universally agreed that auditing log data is important, there has been little research into good auditing procedures. The authors then argue that audit log entries should provide evidence that justifies activities during system execution. The authors provide three reasons why this should be done [7]:
Connecting the contents of log entries provides a principled way of determining the type of information to log.
Proof provides structure which can allow an administrator to find flaws or misconfigurations within the logging environment.
Verifiable evidence helps to reduce the size of the trusted computing base needed to tell the difference between malicious nodes and nodes which appear to exhibit malicious activity that actually stems from other sources, such as a careless user within the environment.
This technique uses a syntax which provides "Type" and "Kind" variables, which are then used to give a logic structure to the log messages to be parsed [7]. As a result, a logic string is constructed based on the type of activity, and then rules and signatures are applied to create the evidence necessary for AURA0 to produce results.
This type of auditing is potentially the final step necessary to allow small environments to alert users and administrators of potentially malicious activities. The root of many issues related to logging is the human readability problem. Administrators and individuals in smaller environments with different types of systems, software and hardware face an uphill battle in learning and understanding log messages. However, an integration of auditing systems which alert when evidence of activity is seen is critical in making sure that the correct log files are looked into. Similar to IDS and anti-virus products, a logging solution should be able to create rules which effectively communicate the activity while eventually allowing the learning curve for understanding log data to decrease. By implementing algorithms applying logic to auditing automation, a networked environment, regardless of its size, can produce viable information that is not difficult to understand.
Explanation-Based Auditing
This technique is an algorithm which has been constructed to provide "explanations" for access log files, compiled particularly within the healthcare industry [8]. Fabbri and LeFevre use the healthcare industry since it is especially impacted by laws and regulations. The authors have proposed techniques for better articulating activity seen within the environment. They first provide example log accesses which an administrator of a network may see, for instance, where nurses and doctors within the medical facility have visited throughout the day. Next, they provide an SQL-based approach to give user-understandable explanations of the data. The results of testing revealed that a mixture of user-generated and automated explanation templates allowed administrators to flexibly access the log messages that were being processed and reviewed [8].
This approach of allowing users to understand the logs being processed could be applied to small networks. While gaining the log data is important, how this data is processed for auditing is just as vital. The authors address these concerns by creating an implementation of parser technology for healthcare-related access logs which have to be reviewed. However, as multiple types of log messages require parsing within a small network, this method may or may not scale sufficiently. Nonetheless, for smaller infrastructures this appears to be a capable method for understanding what is occurring within the environment.
Confidential Auditing
Shen et al. discuss using a cluster-based trusted third party (TTP) architecture for event logging and auditing services, which allows each TTP node to be "blind" to the existence of other log sources while allowing confidential auditing to exist [9]. This technique provides a system model where logs are transported from a Distributed Information System (DIS) to a secure log location. This model allows for defense against nodes which have been compromised and provides confidentiality, integrity checking and authentication of the log data collected [9].
Securely transporting the log messages collected within networks has to be considered along with ways to audit the data [9]. The authors discuss a model of how this may be done using a TTP. A small network must consider confidentiality, integrity, and authentication of the log data transported, and should attempt to provide a stateless system for each node to prevent malicious node data from corrupting the environment. Since in many of these environments the connection of a malicious node could risk significant damage to the network and result in large downtime or loss of data, the network must be able to defend against these nodes as well as quickly alert when they have breached the system.
Detection-Specific Logging Techniques
The reasons that log data should be collected include: monitoring processes and applications; providing accounting and billing to customers; providing accountability to businesses to show that they are in compliance with regulations such as the Payment Card Industry Data Security Standard (PCI-DSS) [10] and Sarbanes-Oxley (SOX) [11]; and tracking malicious activity performed by individuals and how data has been used within each environment [2]. Nevertheless, the amount of log data created for both non-malicious and malicious activity can become overwhelming, especially if this activity is monitored by only one or two system administrators, or by a home user. Even if these users are properly trained, it is very difficult to efficiently process these logs without proper tools.
As defined in the section on terms and definitions, detection-specific logging is the use of techniques to determine that an incident has occurred. Detection-specific logging typically uses techniques including data mining, pattern matching, and other types of statistical analysis to determine if an incident has occurred within an environment, who (or what) initiated the incident, as well as when it happened. Once this has been determined, detection-specific logging allows alerting on this data to occur.
Detection by Mining Patterns
Xu et al. discuss using data mining and statistical learning methods to provide automatic monitoring and detection of abnormal execution from console logs [12]. This particular technique was tested on online activity in environments that collect log data from multiple unique sources. The process first parses logs by examining the statements in the source code to discover the schema used to create log messages for that program. Next, the technique identifies event traces by grouping fields of event streams and then converting these event streams into event traces. Finally, the process represents the event traces by a message count vector (MCV), which allows predictable session indicators to be isolated and categorized [12]. Events such as open, read, write and close are counted into MCV values, which can then be used for anomaly detection. After these steps are taken, the data is processed to produce a pattern which can be mined to determine if activity in the environment is consistent with previous activity [12].
This parser technique converts the log messages from their raw state to a state which can be used for data mining within the environment. This type of technique can be used within both large and small environments, and can also be used to convert data from the software schema that generates log messages to another format, such as the XML formatting example discussed earlier. Since many programs may use a proprietary logging schema, even within the schema used for the operating system the software was created for, a parsing technique such as the one discussed could be vital in allowing a more universal format to be used across platforms.
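A toy version of a message count vector follows, with an illustrative event vocabulary; in the actual technique the vocabulary is derived from the program's logging statements [12].

    from collections import Counter

    VOCAB = ["open", "read", "write", "close", "error"]   # illustrative event types

    def message_count_vector(events):
        # Count each session's parsed event types against a fixed vocabulary,
        # yielding a vector suitable for anomaly detection
        counts = Counter(events)
        return [counts.get(e, 0) for e in VOCAB]

    print(message_count_vector(["open", "read", "read", "close"]))  # [1, 2, 0, 1, 0]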
Execution Anomaly Detection
Fu et al. discuss converting log messages to log keys, which can be used with Finite State Automata (FSA) to provide a performance measurement and to automatically detect anomalies within new log files [13]. This technique is tested on distributed systems with a variety of log message types and a significant number of different attributes, which may be difficult to parse and find anomalies in. Fu et al. suggest that log messages can be categorized by assigning each a corresponding log key [13]. This log key can then be used for conducting analysis, such as grouping log messages based on their type to identify statements that are similar, as well as locating common parts of log messages. Once this is complete, rules can be based on these variables and anomaly detection can be implemented. The results of their evaluation suggest that the time necessary to detect anomalies based on this type of parsing technique is less than when parsing directly from the log message itself [13].
The authors also provide another technique that can be used to convert log data from its raw
input to an output ready for anomaly detection [13]. This technique is similar to the MCV
method in converting the log messages received to a format in order to provide simpler
pattern matching [13].
Although the techniques are tested on larger distributed systems, analysis of this method for smaller environments is important as well. Because small environments may have a significant number of different nodes, which are focused on different tasks and may produce vastly different log messages, techniques such as this one for parsing log messages are important for providing accurate and quick automated assessments of the environment.
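A toy log-key extraction is sketched below; Fu et al. derive keys from the structure of the messages themselves [13], whereas the regular expressions here are illustrative stand-ins that simply strip numbers and paths so that messages from the same logging statement share one key.

    import re

    def log_key(message):
        # Replace the variable parts of a message with placeholders
        key = re.sub(r"\b\d+(\.\d+)*\b", "<NUM>", message)   # numbers, versions
        key = re.sub(r"/[\w./-]+", "<PATH>", key)            # file paths
        return key

    print(log_key("block 6754 written to /data/blk_6754"))
    # -> "block <NUM> written to <PATH>"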
Relational Analysis of Access Logs
This technique uses a community-based anomaly detection system (CADS), which is an unsupervised framework used to detect insider threats based on information logged within the environment [14]. The authors of this work, Chen and Malin, first discuss the different types of access control models for groupings of users within an environment, as well as previous anomaly detection techniques [14]. The CADS framework process uses collaborative information systems (CIS) to construct user groups, then uses community pattern extraction to articulate when there are possible changes within these groups. Then a network for pattern-matching potential anomalies within the logs is created. This framework and its models were tested using healthcare industry data [14].
Although this method does not directly address network devices, the practicality of this detection-based auditing system is important. This type of detection attempts to remove the need for constant monitoring of the log messages, and instead correlates the variables found in order to detect anomalies. This method is important to consider because network logs can contain millions of messages per day, and this type of detection is important for parsing out the information necessary to review, as well as for attempting to isolate abnormal areas within the environment.
Strategy on Log Auditing for Small Networks
Earlier, three considerations were discussed when providing any sort of recommendations or suggestions for a small network: usability, the unique characteristics of the environment, and the design of the product. As a result, when providing a framework which a small network environment should consider, these needs must be met as well as understood by those who may administer the environment.
Review of Who and What is On the Network
Before the log monitoring software is implemented, a complete review of what hardware has been connected to the environment should be performed. This will enable isolation of a threat if it is a new node attempting to access the network. Security training should be completed by everyone who has access to the network. This is important for both small home networks and small businesses to mitigate some of the risk of users inappropriately using the network; however, since users are unpredictable, this will not always work. The administrator(s) of the network should become knowledgeable about how logging works, as well as how to locate log messages within the environment.
Enable Logging
Logging should be enabled on all appliances that are connected to the network. Log files should be stored for the recommended period of time. The ability to transmit log data should be enabled on devices which have that ability. Log data should be backed up to multiple outside sources. If monetary resources are available, log messages should be periodically backed up and stored in a location outside of the home or office.
Researching Log Monitoring Solutions
Log monitoring software should always provide an option to correlate and aggregate log messages. A reliable log monitoring service should minimize the work necessary to collect and view log files. Most low-priced log monitoring software on the market does not provide all of the services necessary for auditing as well as alerting on activity. Depending on budget, a log monitoring solution may not be available. However, most programs allow log messages to be read. Also, most operating systems now have log message viewers which can be used for a preliminary analysis of what has occurred.
Future Considerations and Conclusions
Log and auditing research for small networks such as home networks and small business environments has not been explored as much as other avenues. This is largely a result of the inability to access these nodes unless they are illegally compromised. However, the importance of research into these territories must be addressed: malicious individuals constantly leverage small businesses as well as home networks to conduct Distributed Denial of Service (DDoS) attacks, propagate worms and viruses to multiple networks, and initiate identity theft and credit card fraud. Services such as detection-specific logging and log auditing could provide small networks additional tools, besides anti-virus programs, to prevent themselves from being attacked. These techniques can also be used to alert the administrators of these networks when activity outside of normal behavior criteria has been performed in their environment.
The biggest barrier to this type of research has always been cost. Since it is easier to protect larger businesses against attacks, research has not focused on providing these tools for small business networks or home environments. However, log monitoring in these environments needs to be considered. Besides giving these users the means to protect themselves against attack, or at least to know sooner when they have been compromised, this option allows for better overall protection of other, larger environments, as these users may then be less likely to compromise those networks inadvertently.
In the future, combining several of the techniques discussed in this paper would be the key to providing a low-cost solution for these small environments. As these environments have very few administrators, a logging solution must have the ability to be seamlessly deployed within the environment, provide alerts when an incident has potentially been found, and be able to learn from past activity and be modified depending on criteria the administrator sets. If these criteria can be met, a solution can be integrated into a product such as an anti-virus program and provide a holistic solution for a small network environment.
5. References
[1] A. Chuvakin, E. Fitzgerald, R. Marty, R. Gula, W. Heinbockel, and R. McQuaid, "Common event expression," December 2008, http://cee.mitre.org/.
[2] Huemer, D., and A. M. Tjoa, “A Stepwise Approach Towards an Interoperable and
Flexible Logging Principle for Audit Trails,” 3rd Int. Conf. on New Generations, pp.
114-119, © Apr 2010 IEEE. doi: 10.1109/ITNG.2010.33.
[3] Edwards, W. K., R. E. Grinter, R. Mahajan, and D. Wetherall, “Advancing the state of
home networking,” Commun. ACM, Vol. 54, Issue 6, pp. 62-71, © June 2011 ACM. doi:
10.1145/1953122.1953143.
[4] Ho, C. C., K. N. Ramachandran, K. C. Almeroth, and E. M. Belding-Royer, “A scalable
framework for wireless network monitoring,” Proceedings of the 2nd ACM international
workshop on Wireless mobile applications and services on WLAN hotspots, pp. 93-101,
© 2004 ACM. doi: 10.1145/1024733.1024745.
[5] Mounji, A., B. Le Charlier and D. Zampuniéris, "Distributed audit trail analysis," Symp. on Network and Distributed System Security, pp. 102-112, © Feb 1995 IEEE. doi: 0-8186-7027-4/95.
[6] Gunter, D., B. L. Tierney, A. Brown, M. Swany, J. Bresnahan, and J. M. Schopf, "Log
summarization and anomaly detection for troubleshooting distributed systems," 8th
IEEE/ACM Int. Conf. on Grid Computing, pp. 226-234, © 2007 IEEE.
doi=10.1109/GRID.2007.4354137.
[7] Vaughan, J. A., L. Jia, K. Mazurak, and S. Zdancewic, "Evidence-based audit," IEEE Computer Security Foundations Symposium, pp. 177-191, © Jun 2008 IEEE. doi: 10.1109/CSF.2008.24.
[8] Fabbri, D., and K. LeFevre, “Explanation-based auditing,” Proc. VLDB Endow, Vol 5,
Issue 1, pp. 1-12, Sept 2011.
[9] Shen, Y., T. C. Lam, J. C. Liu, and W. Zhao, “On the Confidential Auditing of
Distributed Computing Systems,” International Conference onDistributed Computing
Systems, pp. 600-607, © 2004 IEEE. doi: 1063-6927/04.
[10] PCI Security Standards Council, "Payment card industry (PCI) data security standard", 2006. Available: https://www.pcisecuritystandards.org/pdfs/pci_audit_procedures_v1-1.pdf
[11] SOX-Online. “Sarbanes-Oxley roadmaps & basic approaches”, 2006. Available:
http://www.sox-online.com/approaches.html
[12] Xu, W., L. Huang, A. Fox, D. Patterson, and M. Jordan, “Online System Problem
Detection by Mining Patterns of Console Logs,” IEEE Int. Conf. on Data Mining, pp.
588-597, © Dec 2009 IEEE. doi: 10.1109/ICDM.2009.19.
[13] Fu, Q. J. Lou, Y. Wang, and J. Li, “Execution Anomaly Detection in Distributed Systems
through Unstructured Log Analysis,” Int. Conf. on Data Mining, pp. 149-158, © Dec
2009 IEEE. doi: 10.1109/ICDM.2009.60.
[14] Chen, Y. and B. Malin, "Detection of anomalous insiders in collaborative environments via relational analysis of access logs," Proceedings of the 1st ACM conference on Data and application security and privacy, pp. 63-74, © 2011 ACM. doi: 10.1145/1943513.1943524.
336
Thailand Museum Data Exchange via XML Web Service
Pobsit Kamolveja, Usa Sammapuna, Nitirat Iamrahonga, Guntapon Prommoona,
La-or Kovavisaruchb
a Faculty of Science, Kasetsart University, Bangkok, Thailand. E-mail: nitirat.i@hotmail.com
b NECTEC, NSTDA, Patumthani, Thailand. E-mail: la-or.kovavisaruch@nectec.or.th
Abstract
This paper continues the work of previous studies on creating a data exchange standard for museum-related information. The previous work surveyed existing national standards being implemented by various governments or local museum chains. It found that a variety of standards and methodologies are being used to communicate electronically between museums, but none exists on a truly national level, much less internationally. From the previous study, a data exchange standard was proposed for implementation in Thailand's museums. This paper presents the result of a pilot project applying the developed standard to three local museums in Thailand, namely the Chaosampraya National Museum, the Royal Barges National Museum and the Kanjanaphisek National Museum. An appropriate approach for hosting the central web portal, as well as example interfaces and XML messages, are discussed in the paper.
Keywords: Web service, XML, Museum Data Exchange, Culture Information
1. Introduction
Thailand's cultural heritage differs across its four regions: Central, North, Northeast and South. Information on the country's arts and culture is stored in museums across the regions. The biggest central hub of cultural information is at the Fine Arts Department. The information stored at the hub, however, is often basic and out of sync with the original source. This is due to the fact that there is no linkage between the museums in Thailand, or to the data hub at the Fine Arts Department. Methods of information storage differ, from hard copy documents and local electronic storage to web-based servers. Moreover, museums that have an electronic database often use their own unique data format and structure. Some museums do display their artifact information on their websites, but often with limited information that is difficult to access [8].
From the previous studies on cultural information at the Chaosampraya National Museum, coordination with a research team in Japan and studies on the Powerhouse Museum in Australia, a data structure was proposed as a standard for data exchange between museums. There is currently no universal data exchange standard for museum information in Thailand, or on an international level. This paper is a continuation of the work from the previous study, implementing the proposed data exchange standard for museums in Thailand through a web portal system that can be accessed by regional museums and the general public.
2. Related works
On the international level, there has been an effort from the tourism private sector to create a universal data exchange standard for the industry. The sector includes transportation businesses such as airlines and car rentals, tourism agencies such as online web portals and independent agents, and lodging providers such as hotels and motels. The collaborative effort was to create a standard for exchanging data across the entire supply chain, enabling, for example, a tourism agency to book hotel rooms and a hotel to arrange car rental transportation for visitors. The central standard is called the Open Travel Alliance (OTA); an example portal that makes use of the standard is www.kayak.com. Users can rent cars and book hotel rooms and flights at the website, while service providers can connect with the web portal to automatically provide their services through Kayak [5].
Figure 1. Depicting the system architecture for exchanging information on a universal
standard [5]
The data exchange is done through a request to the web server with predefined web service parameters over the HTTP protocol. The web server subsequently replies with its own web service message through SOAP and HTTP, to be displayed on the client's website. The OTA standard uses the XML language, which is applicable on multiple platforms and languages, enabling data exchange between ASP.Net, Java or PHP platforms as shown in Figure 1 [5].
3. Conceptual framework
From the studies on the data storage of the Chaosampraya National Museum in Thailand, coordination with research teams in Japan and the Powerhouse Museum in Australia, a data structure for museum data exchange was proposed (Figure 2). The structure consists of two parts: the Common Data Set, which holds the universal data elements found in most museums, and the Payload, which holds exchange information for a request, such as attached media and extended artifact attributes [8].
Figure 2. The standardized database design for data exchange through the web service [8]. The Common Data Set holds elements such as Antique Name, History, Group, Discoverer, Found Date, Found Place, Era, attached files (file type, file) and factors (dimension, weight); the Payload holds the extended exchange information.
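As an illustration, a Common Data Set record following Figure 2 could be serialized as in the sketch below; the element names are transliterated from the figure and are assumptions rather than the normative schema.

    import xml.etree.ElementTree as ET

    def common_data_set(name, era, found_place, dimension, weight):
        # Build one Common Data Set record as XML (element names illustrative)
        record = ET.Element("CommonDataSet")
        ET.SubElement(record, "AntiqueName").text = name
        ET.SubElement(record, "Era").text = era
        ET.SubElement(record, "FoundPlace").text = found_place
        factor = ET.SubElement(record, "Factor")
        ET.SubElement(factor, "Dimension").text = dimension
        ET.SubElement(factor, "Weight").text = weight
        return ET.tostring(record, encoding="unicode")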
The objective of the data exchange standard on cultural and fine arts information between museums is future collaboration between museums. Users or visitors can access museum information easily with the help of the well-established technology of web service communication. Moreover, from the study on data storage types in various museums, the appropriate agency to host a central web portal in Thailand is the Fine Arts Department. This is due to the Department's ownership of most of Thailand's museum artifact information. Other museums should establish a web service client with the central hub at the Fine Arts Department to enable requests by the central hub for information in regional museums. The Department then acts as the focal point for users to mine museum information.
The communication with the central hub will need to use a centralized database exchange standard for the information to be universally interchangeable between other museums and the central hub, for services such as searching for artifacts of a predefined era or categorical type. The central hub sends a message request according to the standard to each museum's database, which returns a message response back to the web portal for the user.
4. System integration and Architecture
The prototype established will enable the Fine Arts Department to act as the web portal provider for users to search for information. The pilot museums for the prototype implementation include the Royal Barges National Museum, holding valuable royal artifacts for royal rituals and decrees; the Kanjanaphisek National Museum, holding historic objects and arts such as pottery, weaponry and agricultural tools; and lastly the Chaosampraya National Museum, holding religious artifacts found in Pranakon Sri Ayutthaya Province. The system architecture was designed to exchange information between museums that have the ability to store and share information through the proposed standard, as depicted in Figure 3.
Figure 3. The system architecture for exchanging information on a standard. Users contact each domain through the domain's website or the central Museum Web Search (web portal, ASP.Net with MS SQL Server), which searches by the provided service and the user's criteria. The portal exchanges request/response messages with the museum websites of the Royal Barges National Museum, the Chaosampraya National Museum and the Kanjanaphisek National Museum (PHP with MySQL), each maintained by museum officers with its own DBMS.
The cultural and arts data exchange standard will be in the XML language for exchange across multiple platforms. The interoperability between platforms will enable the standard to be implemented on a wide scale.
Information exchange between the web portal and the client website through the web service is also applicable. The web portal can send a request message to the client service. An example is shown in Figure 4, where a user requests a search for golden artifacts aged between B.E. 2000 and B.E. 2555 through the web portal to the client websites. The web portal sends the message to the two museums with the available service, as seen in Figure 5.
Figure 4. Depicts the search through the central web portal.
Figure 5. Depicts the message details (MDEX_MuseumContentRQ.xml) sent to the client
server.
When the web service on the client side receives the message, it processes the message and
sends back a message response to the web portal in a predefined format. An example reply to
the previous request is shown in Figure 6. The figure shows the two museums, Chaosampraya
National Museum and Royal Barges National Museum replying with two and one objects
respectively. The message response has details as requested by the standard, seen in Figure 7.
Figure 6. Depicts the displayed result of the message request to the client servers.
Figure 7. Depicts the message details (MDEX_MuseumContentRS.xml) sent to the web portal.
5. Discussion & Conclusion
The system proposed in this paper was found to be applicable to the three pilot museums. The data management of the Fine Arts Department is also improved through consistent updates from the client servers. One specific aspect that is important to the Department is the security of the artifacts: the Department can now track the movement or displacement of artifacts on a real-time basis. From the user perspective, it is found that data entry on the client server can be a redundant process if it does not replace the current manual system. This creates an extra workload for museum staff that can hinder the viability of the linkage. A total solution should be proposed to the client museum, enabling it to transfer all or most of its functions onto a digital database and eliminate manual processes completely.
Another obstacle to the central database is the availability of internet connections at regional museums. Some do not have the funds or infrastructure to support the real-time connectivity or database management required for the system to function properly. Digitization of existing hard-copy information is also a monumental task that needs to be addressed with a determined approach.
Lastly, future work in creating a universal data exchange standard is to conduct more implementations with museums in Thailand, on both a physical level and a policy basis. Museums need to be involved in evolving the exchange standard to better suit the actual existing information types and work processes in Thailand. Seminars, focus groups and more field work are necessary for the standard to develop into a national standard.
6. References
[1] Q. Mei, "A knowledge processing oriented life cycle study from a Digital Museum
system", in Proc. ACM Southeast Regional Conference, 2004, pp.116-121.
[2] E. Hyvönen, E. Mäkelä, M. Salminen, A. Valo, K. Viljanen, S. Saarela, M. Junnila and S. Kettula, "MuseumFinland - Finnish museums on the semantic web", Web Semantics: Science, Services and Agents on the World Wide Web, Volume 3, Issues 2-3, October 2005, Pages 224-241.
[3] P. Stathopoulos, S. Zoi, N. Konstantinou, E. Solidakis, C. Basios, T. Zafeiropoulos, P. Papageorgiou and N. Mitrou, "E-Museum - A Content Management System for Providing Museum Visitors with Personalized Audiovisual Information", Third International Conference of Museology & Annual Conference of AVICOM, Mytilene, June 5-9, 2006.
[4] S. Chan, Powerhouse Museum, Australia, "Tagging and Searching - Serendipity and museum collection databases", Museums and the Web 2007: the international conference for culture and heritage on-line.
[5] P. Kamolvej, K. Tungchitipunya, G. Prommoon and S. Thuansunhirun, “Thailand
tourism Collaborative Commerce (TCC) via XML Web Service”, Workshop
Proceedings of The 17th International Conference on Computers in Education ICCE
2009.
[6] L. Kovavisaruch, P. Kamolvej, G. Prommoon and T. Chalernporn, "Introduction of
RFID Smart Museum Guide at Chao Sam Praya Museum”. Workshop Proc. of The
17th International Conference on Computers in Education ICCE 2009.
[7] La-or Kovavisaruch, Virach Sornlertlamvanich, Thatsanee Chalernporn, Pobsit
Kamolvej and Nitirat Iamrahong, “Evaluating and Collecting Museum Visitor Behavior
via RFID”, 2012 Proceedings of PICMET '12: Technology Management for Emerging
Technologies, Pages 1099 - 1101.
[8] L. Kovavisaruch, V. Sornlertlamvanich, P. Kamolvej, N. Iamrahong and G. Prommoon,
“Interexchange Museum Database via Web Service”, 2012 Service Research and
Innovation Institute Global Conference, Pages 623 – 627.
378
A New Villain: Investigating Steganography in Source Engine Based Video
Games
Christopher Hale a, Lei Chen b, Qingzhong Liu c
a
Department of Computer Science
Sam Houston State University
Huntsville, Texas
chris.hale@shsu.edu
b
Department of Computer Science
Sam Houston State University
Huntsville, Texas
chen@shsu.edu
c
Department of Computer Science
Sam Houston State University
Huntsville, Texas
liu@shsu.edu
Abstract
In an ever-expanding field such as computer and digital forensics, new threats to data privacy
and legality are presented daily. As such, new methods for hiding and securing data continue
to be created. Using steganography to hide data within video game files is one such method.
In response to this new form of data obfuscation, investigators need ways to recover specific
data, as it may be used to perform illegal activities. This paper demonstrates the widespread
impact of this activity and shows how this problem is present in the real world. Our research
also details methods to perform both of these tasks: hiding data in, and recovering data from,
video game files that utilize the Source gaming engine.
Keywords: steganography, Steam, Source, video games, digital forensics, investigation,
Hammer
1. Introduction
With the growing amount of information and responsibility placed on computer systems
comes an increased threat of misuse and abuse of these systems. Safeguards must be
developed in conjunction with technology in order to ensure its safety and keep threats in
check. In its most recent report, Symantec found over 286 million unique malware variants
in circulation. The report also shows a 93% increase in web attacks, a 42% increase
in mobile device vulnerabilities, and 6,253 new software vulnerabilities. These numbers
represent the largest yearly increases in the fifteen years that this study has been conducted [1].
All of these numbers represent the inherent threats that computers and electronic devices
bring to their users.
As the threat of computer crime grows, so does the number of avenues which criminals may
use to conduct illegal and potentially damaging activities. One of the newest and often
overlooked threats comes from a seemingly innocuous source: video games. In the not too
distant past, creating a video game was a relatively small venture. A team of one or two
individuals could create, publish, and release a game on their own. Since its humble
beginnings, the art of video game development has become an enormous commercial success.
With over 72% of all American households playing video games and $4.9 billion in revenue
[2][3], this industry is booming more now than ever before. As video game business
continues to grow and develop, the potential for exploiting these services proportionately
increases. Video game vulnerabilities are often not seen as serious security threats by
individuals or security professionals. This paper outlines several of these threats and how
they can be used to transmit illegal data and conduct potentially illegal activities. It also
demonstrates how investigators can respond to these threats in order to combat this emerging
phenomenon in computer crime.
This paper is organized as follows. In Section II we introduce the Source Engine, one of the
most popular game engines; Steam, a powerful game integration and management tool; and
Hammer, a tool for creating virtual environments in video games. In Section III we
look at various ways of hiding data in video games using the above tools. Section IV
discusses methods for investigators to detect hidden data in game files and environments. We
draw conclusions and lay out future work in Section V.
2. The Source Gaming Engine
This paper primarily focuses on threats presented by the Source gaming engine. This engine
is owned and developed by the Valve Corporation. Due to its extremely large user base and
commercial popularity, it is one of the most popular engines in the world of gaming.
The Valve Corporation: Creators of the Source Engine
Valve was founded in Kirkland, Washington, in 1996 by Gabe Newell and Mike Harrington,
two former Microsoft developers. The company set out to create innovative and
groundbreaking video games. Valve initially worked on several small projects over the next
two years, eventually abandoning these plans and focusing its resources on its first
commercial release: Half-Life. Since its release in 1998,
Half-Life received over fifty Game-of-the-Year Awards as well as being heralded as "one of
the best games ever" [4]. Following the commercial success of Half-Life, Valve released its
next successful game: Counter-Strike. Currently, Counter-Strike 1.6 is the most widely played
online video game in the world with the exception of Massively Multiplayer Online Role
Playing Games [5]. In September 2003, Steam was released as a tool to seamlessly integrate
updates into the Counter-Strike franchise. It gradually saw greater integration into Counter-
Strike and all Valve game releases. Since the release of Half-Life 2 in 2004, Valve has
released a number of titles, each using an improved and altered version of the Source game
engine. Many of these games have achieved immense commercial success, including Left 4
Dead 1 and 2, Portal 1 and 2, and further iterations of the Half-Life franchise [6].
The Source Engine
In order to develop their games, Valve acquired the rights to use and modify the Quake game
engine, published by id Software. The Quake engine was regarded as one of the premier
video game engines of that time, powering the extremely popular and trendsetting First
Person Shooter game Quake. This engine was the first to transfer from a two dimensional
sprite based gaming system to a three dimensional world [5]. The borrowed game engine was
heavily modified in order to better suit Valve's needs, and eventually became known as the
Goldsrc engine. The following years at Valve were focused on a combination of developing
smaller titles and further enhancing the aging Goldsrc engine. After several iterations
and releases, the Source engine was born from the outdated Goldsrc engine [7]. The Source
engine has been used and is still being utilized on all Valve game releases since its inception.
The modular nature of the Source engine lends itself to constant development and
improvement. One of the most notable additions to the Source engine and all of Valve's
published games is the integration of the Steam platform.
Steam
Prior to Steam, the release of an update or patch would result in the disconnection of a large
portion of the users for some time as the game updated. Steam initially set out to better
facilitate patch deployment and management. As Steam began to expand, it also gained
more features and functions. Over time, Steam came to handle more than patch
deployment, including digital distribution, multiplayer, digital rights management,
community features, chat and voice functionality, and anti-cheat detection and resolution
technologies. Eventually, the Steamworks API was also released, allowing developers to
interface with the Steam platform. As Steam gained popularity, other game developers began
to offer their game catalogs as downloads through it.
One of the largest draws of Steam is that it is both platform and machine independent. Since
its inception, Steam has continually grown in both scope and user base. As of the beginning
of 2012, Steam has 1523 games available through the store front [9], as well as 40 million
active user accounts [10]. On January 2, 2012, Steam broke an all-time record by having 5
million concurrent players in game at the same time [11]. While Valve has never revealed
details about its market share, the competing online distribution service Stardock estimated
that Steam held 70% of the digital distribution market in 2009 [12].
growth and development, and Valve has revealed no plans to abandon the popular service.
Hammer
One of the unique characteristics of the Source engine is its cooperation with the developer
community. Many game engines choose to keep their tools and game mechanics from the
general populace, instead allowing only contracted developers the opportunity to work
with this proprietary software. This is not the case with the Valve Corporation. Most of
Valve's tools, including Hammer, are published with free access to anyone who uses
their games.
The Hammer Editor is the official level creation tool used by Valve for all Source based
games. It is free software available to any person who has purchased a Source based game. It
is included as part of the Source Software Development Kit (SDK). Hammer is a replacement
for the outdated Worldcraft tool which was used on Goldsrc games. Created by Ben Morris in
1996, Worldcraft's rights were acquired by Valve when they hired Morris a year later [13].
The Hammer editor allows a developer to create a map through the use of brushes, entities,
and map properties [14]. Brushes are the most primitive of objects in a game level. They are
primarily geometric solids such as blocks, rectangles, cones, and spikes. These brushes are
the primary building blocks to all Source levels. Almost all large shapes and terrains are
created through the manipulation of basic brushes. Small and more detailed objects are
created through the use of models, a separate category of entity within Hammer. Entities are
non-static, sometimes animate objects that are used for interaction as well as non-visible
game data or logic needed to make a map come to life. There are two general types of
entities: point and brush. Point entities exist logically at a point or points within the level.
Examples of these entities include players, non-player characters, or lights. Brush entities are
tied to a brush in order to exist, but modify its existence somehow. Some examples of brush
entities include doors, elevators, ladders, and other moving, interactive objects. Another
example of a brush entity is a trigger, an invisible region that fires an event based on input
from the player, such as walking into an area or completing a task. By combining brushes and entities,
a virtually limitless series of levels can be created.
Hammer also includes tools to compile raw map data into a format that is usable by the
Source engine. By default, uncompiled maps are saved in the proprietary VMF format. This
is a plaintext, human readable file format that stores information about the level [15]. In order
to convert this text into information that the Source engine can use, several compilation steps
are needed. There are four main programs which run to create a playable level: the game
executable, VBSP, VVIS, and VRAD. The game executable parameter allows the user to
specify which game and set of specific tools to use from the available Source based games
[16]. For instance, Half Life 2 based games have different options and functionality than Left
4 Dead based games. Once the game parameter has been set, the map data is passed to VBSP.
This tool converts a raw .vmf file into a compiled Binary Space Partition (BSP) file. This is
the file type actually used by the engine to render the map. VBSP converts primitives such as
brushes into polygons, generates visible sections of the map, creates props, and embeds
entities [17]. Once this is completed, the .bsp file is passed to VVIS. VVIS embeds visibility
data in the map. This is done by splitting the map into visleaves, which are small sections of
the map that load one at a time, rather than all at once. This improves performance and load
times significantly. VVIS also determines which visleaves can see each other for rendering
order [18]. Once complete, the .bsp is passed to VRAD. The VRAD tool embeds lighting data
into the map. Any user defined and dynamic lighting information is inserted at this point. The
pre-compiled light is then propagated through the level through a radiosity algorithm [19].
Once all these tools are complete, the BSP file is ready to be loaded by the game engine and
executed.
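To make the compile pipeline concrete, the following is a minimal, hypothetical fragment of
an uncompiled VMF file; the exact keys and values vary by engine version, so this is a sketch
of the nested key-value structure rather than a complete, compilable map:

    world
    {
        "id" "1"
        "classname" "worldspawn"
        solid
        {
            "id" "2"
            side
            {
                "plane" "(0 0 0) (128 0 0) (128 128 0)"
                "material" "DEV/DEV_MEASUREGENERIC01"
            }
        }
    }

VBSP consumes plaintext of this form and emits the binary BSP lumps described above.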
3. Hiding Data in Source Games
Why Video Games and Steam?
There are several criteria that make video game files excellent candidates for data hiding, the
first of which is their size. When examining common files such as JPEGs, an uncommonly
large file size can be an indicator of foul play, making it impractical to hide large files in these
file types without raising the risk of discovery. The advantage of video game data is that large
file sizes are common. A typical video game installation may contain more than ten gigabytes
of data. With this large palette available, any number of files can be hidden within. Another
factor which makes video game data a viable candidate for data hiding is its commonality.
Video games, especially those with incredibly large sales volumes, are installed on millions
of computers across the world. An investigator finding these files on a suspect machine may
not immediately deem them as suspicious. Furthermore, because of the dynamic nature of
video games, files on each user's machine are expected to deviate from the initial released
version. This eliminates the possibility of an investigator using expected file contents or hash
values to check for changes and therefore flag a file or system as potential evidence. Perhaps
the biggest advantage video games have in data hiding is that they are entirely different on
disk than when they are running. A simple in-game text message cannot be located in the
code, yet is easily obtained from an in-game window or overlay. Since most investigations are
conducted on dead systems, these sorts of messages are almost untraceable. The only way for
an investigator to mitigate this risk would be to load and execute every game level as part of
an investigation for potential evidence. This sort of investigation is impractical and would
never be used by an investigator in the field or in a digital forensic lab. All of these attributes
make video game data an almost insurmountable challenge to investigators or unwanted data
observation.
After establishing that video games provide an ample platform for data hiding, one question
remains: why Steam? Of all the video game publishers in the industry, why target Valve's
platform and Source engine? There are a number of factors that make Steam an excellent
candidate for conducting this sort of activity. First is its widespread use. With over 40
million active users, the presence of Steam on a system should raise no suspicion for an
investigator. An added benefit of this software is that it is cross-platform. Data embedded
in files on a Windows-based system can still be transferred to and used by malicious users
on Mac or Linux platforms as well. The tools and methods for hiding the data
are likewise cross-platform. Another factor favoring the Steam platform is that the
tools for interfacing with and modifying data for the Source engine are also widely available
and free to use. The Hammer world editor, for instance, comes free with any Source based
game. This makes protecting data privacy more accessible for all users, even those with
malicious intent. Because these tools have been available for some time, the file formats and
behaviors are well understood and documented. Manipulating this data is easier, allowing for
further tweaking and exploitation of game files and properties. One of the most unique
properties of the Steam platform is its integrated social connectivity. Steam handles
much more than simply running video games: it connects users, distributes content, and has
an integrated platform for sharing maps, mods, and tweaks. By utilizing this built-in
functionality, malicious users can distribute game levels with hidden data on a grand scale.
This activity would also not draw any unwanted attention from investigators, as content is
commonly shared between users on the Steam network. By using the built-in tools, packaging,
and sharing functionality, Steam is an all in one tool for hiding and transmitting data using
video game files.
Steganography
In the realm of data hiding, there are two main approaches to data obfuscation. The first and
most common approach is encryption. All encryption techniques follow the same basic
strategy: data is sent through an encryption algorithm to generate ciphertext, the
encrypted message is sent or stored for the recipient, and the ciphertext is finally decrypted
with a key. The security of the data depends entirely on the encryption algorithm and how well
the key is kept secure. An unintended recipient is able to see the ciphertext, but they cannot
access the original information without the key. Steganography utilizes a different approach
to data security. Rather than transforming data into an unreadable form, steganography hides
data inside of a benign secondary piece of data. When data is hidden in this way, onlookers
are unaware that there is any data hidden at all. Steganography is thus a form of security
through obscurity. Other than the sender and receiver, nobody suspects that a message has
been transmitted. Hiding data inside of Source game files is therefore a form of
steganography.
Embedding Text with Brushes
Embedding data with brushes is the most straightforward method of obfuscation. To
accomplish this, a malicious user can simply create a new brush and shape it in such a way as
to form the words or letters that compose the hidden message. By adding several brushes, the
message can be expanded from a single letter or word to a larger collection of text. While
brushes are initially set as primitive shapes such as rectangles or cubes, more complicated
solids can be created by utilizing Hammer's built in vertex and face edit tools, thus adding
more tools to the malicious user's arsenal. Hiding text in Source games via brushes is a great
tool for users who wish to obfuscate or share messages without being seen. The advantage of
using this approach as opposed to standard encryption is that an investigator or onlooker will
not be able to detect that there is data hidden in the level. Even if an investigator detects that
data has been hidden in the game files, the data is untraceable on disk as it exists only in the
geometry of the level. The main disadvantage of this approach is that it is tedious. It is also
impractical with large amounts of information. For small messages, however, this approach is
ideal.
Embedding Text with Overlays
Game overlays are messages and dialogs which appear in-game as the player navigates
through the game map. They are typically used to provide a player with important
information, hints, or instructions. A malicious user may manipulate this functionality to
embed messages which have no bearing on the game, but are instead intended as hidden data.
In order to embed and hide in game text overlays, two entities are utilized at a fixed point in
the map: env_instructor_hint and info_target.
Env_instructor_hint is a point entity in the Left 4 Dead version of the Source engine [20].
This entity exists to provide in-game hints for players via a popup on screen. This popup is
primarily text based, although a small image may be inserted. By using this entity and
creating custom text, any message can be embedded in the game. Any number of these
entities can be added to a given level, creating a virtually limitless space for text to be hidden.
After inserting an env_instructor_hint into the map, the entity can be customized through
several variable attributes in Hammer. The most useful of these variables is Caption. Caption
holds the text that is ultimately displayed on screen and is where a malicious individual can
embed hidden messages. This text can be combined with one of many predefined images that
are included in the Onscreen Icon variable. There are many other variables within
env_instructor_hint which define the text type, size, color, pulsation, and other
attributes. With this entity alone, a hidden message can be easily embedded in a map which is
as simple as a cube with the player inside.
Although env_instructor_hint is the mechanism used for hiding messages, it does not exist on
its own. The env_instructor_hint entity does not yet have a physical place in the map to
display. Info_target serves this purpose. Info_target is the physical placeholder to which
env_instructor_hint is bound [21]. Wherever the info_target entity is placed is where the user
will see the text defined in env_instructor_hint displayed on screen. Combining these two
entities can be useful for hiding data in a specific location in a map that the intended user
knows to look for. By utilizing env_instructor_hint and info_target, malicious users can
transform a custom game map into a container filled with malicious text messages.
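As an illustration, a hypothetical pair of entity blocks in the VMF source of a map might look
like the following; the key names follow the Valve Developer Community documentation
[20][21], but the values here are invented for this example:

    entity
    {
        "classname" "info_target"
        "targetname" "hidden_hint_anchor"
        "origin" "64 64 32"
    }
    entity
    {
        "classname" "env_instructor_hint"
        "hint_target" "hidden_hint_anchor"
        "hint_caption" "The covert message goes here"
    }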
Embedding Images with Textures
Child pornography is one of the most grievous cybercrimes facing law enforcement and
digital forensic investigators. These investigators analyze systems for this type of content to
expose the offenders. Malicious users employ encryption, steganography, and other means of
obfuscation to thwart investigators' attempts to uncover the evidence needed to convict these
criminals in a court of law. Hammer can also be used as a tool to hide potentially illegal or
otherwise malicious images.
While embedding text into a Source map file is fairly straightforward, adding images requires
more involved work. Any image applied to a surface or brush in a map is referred to as a
texture. To begin examining how these files are hidden, it is necessary to understand how the
Source engine handles images. Most images are saved as predefined file types that are
familiar to most computer users. Examples of these include JPEG, PNG, and GIF. The Source
engine does not interface with these types of images directly. Instead, it uses a proprietary
format: VTF.
Valve Texture File (VTF) files are stored in a format different than more widely used image
formats in several key areas. The most notable characteristic of the VTF format is that the
total size of the image must be a power of two as measured in pixels. For example, images
must be 2x2, 4x4, 8x8, 16x16, and so on. The dependency on square images can be a
hindrance to users embedding data; however, most images can be made square through
selective cropping or the addition of white space. The reason for the reliance on square
images is the presence of mipmap data in the VTF file. Mipmaps are smaller, lower
resolution versions of the original image also stored in the file. These mipmaps are used by
the Source engine to render the image differently at varying distances in the game. The lower
resolution images are used at the farthest distance, with increasing resolution as the image
comes closer on screen. This improves rendering speed and processing performance of the
game. Each respective mipmap is half of the previous mipmap, creating the reliance on a
power of two size constraint. VTF files also contain other additional information used by the
Source engine, including a bump map scale, a low resolution copy of the VTF for color
rendering, and a reflectivity value used by the VRAD program in determining final rendering
appearance [22]. More complex textures such as environment maps and volumetric textures
include even more data, all of which is stored in the VTF file.
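The halving behavior of the mipmap chain is easy to see in a short sketch (Python is used
here purely for illustration; the function name is our own):

    # Enumerate the mipmap chain the Source engine would store for a
    # square power-of-two texture; each level halves the previous one,
    # which is why VTF dimensions must be powers of two.
    def mipmap_chain(size):
        levels = []
        while size >= 1:
            levels.append((size, size))
            size //= 2
        return levels

    print(mipmap_chain(256))
    # -> [(256, 256), (128, 128), (64, 64), ..., (1, 1)]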
Hammer does not include a tool for creating custom textures that are not already bundled
with the engine. Luckily third party tools exist to convert common image file types to VTF
files. One of the most prevalent of these is VTFEdit. VTFEdit is a free, open source tool
which can create a VTF file from almost any popular image file type. It can also edit flags
and attributes within a VTF file. VTFEdit utilizes the open source VTFLib library. It can not
only convert to and from the VTF file format, but also edit the VMT files discussed in detail
below. VTFEdit can handle files in batches and creates all mipmaps and VTF file headers
utilized by the Source engine [23].
The Source engine does not only use VTF files for rendering textures in-game. Each texture
file is also paired with a corresponding Valve Material Type (VMT) file. These are text-based
files which serve as metadata for the texture they correspond to, allowing the
Source engine to determine how to render the texture properly. Attributes included in VMT
files include texture names, physical surface types, shader parameters, fallbacks, and proxies
[24]. Although these files are usually fairly simple, they can be used to hide either plain text
or encrypted text. While this type of text would be easily viewable on disk to an investigator,
it can still serve as an additional container for malicious data.
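A hypothetical VMT file for a custom texture might look like the following sketch; the shader
and parameter names are standard ones, but the texture path and the hidden note are invented
for this example:

    "LightmappedGeneric"
    {
        "$basetexture" "custom/vacation_photo"
        "$surfaceprop" "concrete"
        // a plaintext or ciphertext note can ride along here
    }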
Once a texture has been generated from VTFEdit, it can then be embedded into the map
within Hammer. This is a simple process done through Hammer's map creation interface. The
texture application can be invoked to navigate through the engine's built in textures.
Depending on where the custom texture is stored, the creator can navigate to the directory
and select the custom texture. Applying the texture is as simple as selecting the appropriate
brush or entity and clicking the Apply Current Texture button. Once applied, many of the
texture's properties can be edited from within Hammer. Once compiled, the texture and any
hidden information it or its corresponding VMT file contains is permanently embedded in the
level.
The file extension used by an image on disk is of little importance to an investigator. The
investigator's primary objective is to identify the file format based on its content rather than
file extension. This is commonly done through the use of file signatures. JPEG images, for
instance, have a common header and footer in the code of every image. This header and
footer do not change due to a change in the file extension. Investigators can utilize this fact
by searching for the file header rather than the file type on a suspect system. In the case of
JPEG, the file signature is 'ÿØÿà', or FF D8 FF E0 in hexadecimal. When converting an image
to a VTF file, the file signature and header previously used by the image is lost. This is true
for not only JPEGs, but all other file types. By eliminating these file signatures, an
investigator's most common and effective method of uncovering images is rendered ineffective.
This makes the recovery of illegal or malicious files hidden within a Source game level a
greater challenge. The reliance on outside tools and the complexity of the process required to
embed custom images makes this method less than ideal for a malicious user. The time
required to perform this type of steganography is great.
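A minimal sketch of the signature-based search that VTF conversion defeats is shown below
(Python, with a hypothetical function name; a real investigation would run an equivalent
search inside a verified forensic suite):

    # Scan a file or raw disk image for JPEG/JFIF headers (FF D8 FF E0).
    # Converting an image to VTF strips this signature, so this common
    # technique finds nothing once the image lives inside a VTF texture.
    def find_jpeg_headers(path):
        data = open(path, "rb").read()
        offsets, start = [], 0
        while True:
            i = data.find(b"\xFF\xD8\xFF\xE0", start)
            if i == -1:
                return offsets
            offsets.append(i)
            start = i + 1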
Distributing Maps
After a map and all of the corresponding assets are created, packaging and distribution can
take place. Valve has a proprietary file format for this process, called Valve Pack (VPK). This
file format is relatively new, replacing the outdated GCF format formerly used in Source
games [25]. VPK files are packages which contain all of the necessary components for
custom maps to be installed and run. This includes the map's BSP files, navigation logic files,
textures, and a few identifying pieces of metadata used by the in-game UI. It is important to
note that VPK data is archived, and therefore typically tightly packed. This format allows
developers to share their data through a single file download which makes download and
installation of game content seamless and easy for the end user. In order to create a VPK file,
Valve has released a free tool as part of the Source engine SDK. This tool, simply called VPK,
uses a directory containing the game files to be packaged as input and outputs the completed
package. It can be run as a command-line tool, giving it the ability to be used in batch
programs to output large numbers of game packages. It can also be used to list and modify the
contents of a VPK file [26]. Game users can either double-click the VPK package to install it
into the game via the operating system or manually add it by placing it in the 'addons'
subdirectory of the game's installation files. For an investigator, this file provides a one stop
shop for potentially malicious game content.
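As a rough illustration of the packaging step (the directory name is hypothetical, and exact
paths depend on the SDK installation), the tool follows a directory-in, package-out pattern:

    vpk.exe my_addon_folder    (produces my_addon_folder.vpk alongside the directory)

Dragging a content directory onto the executable is commonly described as achieving the
same result.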
4. Investigating Source Games
The Forensic Process and Tools
As demonstrated throughout this paper, Steam maps can be used to store and transmit
sensitive and potentially dangerous and illegal data. From an investigative standpoint, these
files hold a potential treasure trove of information and evidence. As an investigator,
extracting and examining this evidence is crucial. There are many tools which can be used to
accomplish this task. One of the most popular and widely used of them is the Forensic
Toolkit (FTK).
FTK is a piece of enterprise level digital forensics software used by many investigators in the
field and in crime labs across the U.S. This software package has been forensically tested and
is verified as sound for use in investigations. In order to assume the role of an investigator,
Steam game files will be examined with FTK. The files used for this process were extracted
from a hard drive and archived, which is typical of what an investigator will do with a suspect
machine, rather than working with a live system.
One of the main problems for investigators of Steam game files is the sheer amount of data to
be processed. Every texture and map file potentially holds incriminating information. Deciding
which of these files hold evidence is the first obstacle to overcome as every Source based
game comes pre-packaged with hundreds of textures and many map files. It is therefore the
obligation of an investigator to determine which of these files hold malicious data and which
are benign.
The first order of business when investigating Source game files is to determine how they are
stored. As stated above, game files may be stored as a single archived VPK file or as a series
of directories in the installation location of the game. To locate VPK files on disk, an
investigator can search for all files with the VPK extension. In the case of a suspect changing
the file extension to cover up data, the investigator can also find these files by the VPK
signature, 0x55aa1234. Once the VPK file has been found, tools can be used to unpack it to
its constituent file structure. The unpacked file structure is much easier to work with,
allowing an investigator to search in the appropriate folders for the files they need. One of the
tools available to unpack these files is GCFScape.
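A sketch of the signature check follows (Python; the helper name is ours, and the byte order
shown assumes the signature is stored little-endian, which should be verified against a
known-good VPK):

    import struct

    # 0x55aa1234 stored little-endian gives the on-disk bytes 34 12 AA 55.
    VPK_MAGIC = struct.pack("<I", 0x55AA1234)

    def is_vpk(path):
        # Flag VPK packages by content rather than by file extension.
        with open(path, "rb") as f:
            return f.read(4) == VPK_MAGIC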
GCFScape is a free utility that is part of the Nem's Tools pack of Source map editing and
enhancement software [27]. It is open source and can be used freely by anyone. This tool
allows users to view, modify, and extract the underlying file structure from VPK files. For an
investigator, it can be used for its extraction functionality. Although it is not a fully verified
forensic tool, its open source nature could provide for verification in the future should an
investigator need to pursue this avenue. After GCFScape has been used to extract the custom
map files from a VPK file, they can then be examined by the investigator. Unfortunately,
FTK cannot natively display the visual contents of any of the proprietary Valve formats, so
hexadecimal searching is the most efficient way to sort through these files.
Investigating Data Hidden With Brushes
It has been shown that data can be hidden in a level by manipulating brushes and geometry to
create words. Unfortunately, this type of hidden data can hardly be uncovered by an
investigator. The level geometry is stored in integer format based on its vertices and their
location in the map. This information is useless on disk to an investigator. The only way to
view and use this data is to load the map into the game and run it. This method of
investigation is often not feasible or practical; however, it may be used as a last resort.
Investigating Data Hidden with Overlays
An unpacked VPK file contains many resources, including textures, maps, models, and
metadata. Each of these files may contain evidence hidden by one or many of the above
methods. The easiest of these to recover is embedded messages hidden via in game pop ups.
These messages are stored in the mapname.bsp file. By examining the hexadecimal contents
of this file, important pieces of information can be uncovered.
Within a BSP file, entities are defined in entity lumps. Each of these lumps contains defining
information about an entity, including its location and properties. This information is stored in
plaintext as it appears in game. By using the identifying fields in the entity lump, an
investigator can recover hidden messages. Messages hidden with the env_instructor_hint
entity as discussed previously can be found by searching for the keyword hint_caption. The
hint_caption field contains the actual text displayed on screen in game as a popup hint. In the
BSP, an example of a recovered hint_caption is:
"hint_caption" "Any message may be hidden in game as text!"
By recovering all instances of this keyword, an investigator can uncover hidden messages
from a Source map hidden using env_instructor_hint.
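A minimal sketch of this keyword search is shown below (Python; the function name is our
own, and a production tool would parse the entity lump properly rather than pattern-match
over raw bytes):

    import re

    def recover_hints(bsp_path):
        # Entity lumps in a compiled BSP store keyvalues as plaintext,
        # so a pattern match over the raw bytes is enough for triage.
        data = open(bsp_path, "rb").read()
        pattern = rb'"hint_caption"\s+"([^"]*)"'
        return [m.decode("ascii", "replace") for m in re.findall(pattern, data)]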
Investigating Data Hidden with Textures
Investigative parsing of map data for hidden images can be accomplished in a similar manner
to searching for text data. By utilizing the file structure and container formats, an investigator
can identify files and then extract them for investigation. Because images included in game
files do not have common file signatures, they will not be properly flagged in most
investigative software. This limitation creates the necessity for an investigator to manually
find these files using the file signatures.
To uncover hidden images in Source game files, the investigator needs to first identify
custom textures embedded in the map. This can be done by first expanding the VPK file with
GCFScape as discussed above. Once expanded, all custom textures can be located in the
materials directory. By utilizing the underlying file structure to identify custom textures, the
investigator can effectively narrow the search for suspicious textures from every
texture in the game to a small fraction of that number. Alternatively, the investigator can
find all VTF texture files by searching for the VTF file signature "VTF\0" (56 54 46 00 in hexadecimal).
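The corresponding content-based check is a one-liner (Python sketch with an invented helper
name):

    def is_vtf(path):
        # Identify VTF textures by signature instead of extension.
        with open(path, "rb") as f:
            return f.read(4) == b"VTF\x00"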
With the suspect texture files identified, they can then be inspected by the investigator.
Again, FTK does not support natively viewing these files. An outside program such as
VTFEdit would need to be used to open this file. Any malicious images can be viewed and
exported from this program. While VTFEdit is not forensically verified, it can still be utilized
by an investigator, as no digital forensic suite can currently parse this data natively.
5. Conclusion and Future Work
The field of digital forensics is a double-edged sword. On one side there is the concept of
privacy, as individuals believe that they have the right to protect their own data however they
see fit. This can include encryption, steganography, and other means of obfuscation. On the
other side, there are investigators and law enforcement attempting to prevent and uncover
individuals using these means to conduct illegal or malicious activity.
In the name of individual privacy, many tools and software packages have been developed to
hide data or otherwise prevent viewing or tampering by anyone other than those intended. Many
file types have known exploitations or loopholes which allow for data to be manipulated in
order to protect privacy. Although this is a pain for investigators, it is a necessary evil. One
vessel for data hiding that has not been developed or researched in depth is video game data.
This paper demonstrates both ends of the spectrum of data privacy and security. It shows how
a concerned citizen may use game files to store information that they want to secure from
prying eyes. While it is true that this capability may be abused in a malicious way, such abuse
is not a given. This paper also demonstrates how an investigator may conduct an
investigation in the face of these new techniques for data hiding in video game files. It further
demonstrates the need for investigative technologies to continue to grow in order to keep up
with the multitude of computer crimes being committed constantly across the nation and the
world.
Computer crime is an ever-growing, ever changing venture. Just as law enforcement has
caught up to the current criminal technology, new technology is introduced. This ongoing
game of cat and mouse necessitates the constant development of tools and methodologies for
fighting computer crime. This concept can be demonstrated with the utilization of Steam and
Source based games to obfuscate data as well as the need for tools to address the issue of
hidden data.
Three main methodologies for hiding data in Source games are demonstrated in this paper;
however, there are potentially countless more methods available. For instance, steganography
in the raw data files of Source maps can be further investigated and implemented, as can
data in transmission during online or LAN games. This paper also focuses on only one game
engine. There are many more game platforms and engines, such as Origin, Stardock, and
Gamefly, which may harbor hidden data and also need further research.
On the investigative front, there is also room for future research and development. As
mentioned above, neither FTK nor any other investigative tools can natively display or
manipulate Source game files. Tools to analyze data in transmission from these games are
also necessary to intercept and analyze potentially illegal data in transfer. The addition of
this functionality to these tools will greatly improve their ability to investigate and process
game files for hidden data. Forensically testing and verifying these programs will create tools
which can be used in court to convict criminals or exonerate innocents.
6. References
[1] M. Fossi and T. Mack, "Symantec Internet Security Threat Report: Trends for 2010,"
Symantec Corp., Mountain View, CA, Tech. Rep. 21182883, Apr. 2011.
[2] Entertainment Software Association, (2011). Essential Facts about the Computer and
Video Game Industry [Online]. Available: http://www.theesa.com/facts/pdfs/ESA_EF_2011.pdf
[3] Entertainment Software Association, (2011). Industry Facts: Economic Data [Online].
Available: http://www.theesa.com/facts/econdata.asp
[4] Valve Corporation, (2010). Welcome to Valve [Online]. Available:
http://www.valvesoftware.com/company/index.html
[5] T. Bayer, (2010). 14 Years of Quake Engine: The Famous Games with id Technology
[Online]. Available: http://www.pcgameshardware.com/aid,687947/14-years-of-Quake-Engine-The-famous-games-with-id-Technology/News/
[6] M. Thomsen, (2009). Ode to Source: A History of Valve's Tireless Game Engine [Online].
Available: http://pc.ign.com/articles/102/1027317p1.html
[7] A. Capriole and J. Phillips, (2008). The History of Valve [Online]. Available:
http://planethalflife.gamespy.com/View.php?view=Articles.Detail&id=121
[8] Warf!y, (2011). About the Steamless CS Project [Online]. Available:
http://v5.steamlessproject.nl/index.php?page=about
[9] Valve Corporation, (2010). Games [Online]. Available:
http://store.steampowered.com/search/#category1=998&advanced=0&sort_order=ASC&page=1
[10] K. Mudgal, (2012). Valve Releases PR; Steam Userbase Doubles in 2011, Big Picture
Mode Coming Soon [Online]. Available:
http://gamingbolt.com/valve-releases-pr-steam-userbase-doubles-in-2011-big-picture-mode-coming-soon
[11] T. Senior, (2012). Steam Hits Five Million Concurrent Players [Online]. Available:
http://www.pcgamer.com/2012/01/03/steam-hits-five-million-concurrent-players/
[12] K. Graft, (2009). Stardock Reveals Impulse, Steam Market Share Estimates [Online].
Available: http://www.gamasutra.com/php-bin/news_index.php?story=26158
[13] Hammer Editor Version History (2010) [Online]. Available:
https://developer.valvesoftware.com/wiki/Hammer_Editor_version_history
[14] Mapping Overview (2010) [Online]. Available:
https://developer.valvesoftware.com/wiki/Introduction_to_Editing
[15] VMF Documentation (2012) [Online]. Available:
https://developer.valvesoftware.com/wiki/VMF_documentation
[16] Hammer Game Configurations (2011) [Online]. Available:
https://developer.valvesoftware.com/wiki/Game_Configurations
[17] VBSP (2011) [Online]. Available: https://developer.valvesoftware.com/wiki/Vbsp
[18] VVIS (2011) [Online]. Available: https://developer.valvesoftware.com/wiki/Vvis
[19] VRAD (2012) [Online]. Available: https://developer.valvesoftware.com/wiki/Vrad
[20] Env_instructor_hint (2011) [Online]. Available:
https://developer.valvesoftware.com/wiki/Env_instructor_hint
[21] Info_target (2012) [Online]. Available:
https://developer.valvesoftware.com/wiki/Info_target
[22] Valve Texture Format (2011) [Online]. Available:
https://developer.valvesoftware.com/wiki/Valve_Texture_Format
[23] VTFEdit (2011) [Online]. Available: https://developer.valvesoftware.com/wiki/VTFEdit
[24] Material (2011) [Online]. Available: https://developer.valvesoftware.com/wiki/Material
[25] VPK File Format (2011) [Online]. Available:
https://developer.valvesoftware.com/wiki/VPK_File_Format
[26] VPK (2011) [Online]. Available: https://developer.valvesoftware.com/wiki/VPK
[27] R. Gregg, (2006). About GCFScape [Online]. Available:
http://nemesis.thewavelength.net/index.php?p=25
239
The Design of Gain Controllable Current-mode First-order All-pass Filter
for Analog Integrated Circuit
Totsaporn Nakyoya,*, Winai Jaiklab
a,
*Department of Electrical Technology, Faculty of Industrial Technology, Suan
Sunandha Rajabhat University, Dusit, Bangkok, 10300, THAILAND
E-mail address: nakyoy@gmail.com
b
Department of Engineering Education, Faculty of Industrial Education, King
Mongkut’s Institute of Technology Ladkrabang, Bangkok, 10520, THAILAND
E-mail address: winai.ja@hotmail.com
Abstract
In this study, a current-mode first order allpass filter using current controlled current follower
transconductance amplifiers (CCCFTAs) is proposed. The features of the circuit are as
follows: the pole frequency, phase response and current gain can be electronically controlled
via the input bias currents; the circuit description is very simple, consisting of 2 CCCFTAs
and 1 grounded capacitor, without any component matching requirements. Consequently, the
proposed circuit is very appropriate for further development into an integrated circuit. The
low input and high output impedances of the proposed configuration enable the circuit to be
cascaded in current-mode without additional current buffers. PSpice simulation results are
presented and agree well with the theoretical anticipation.
Keywords: Current-mode, first order allpass filter, CCCFTA, Integrated circuit
1. Introduction
In recent years, a number of papers have been published dealing with the realization of
current-mode circuits due to their certain advantages compared to voltage-mode circuits [1-3].
They offer to the designer several excellent features such as inherently wide bandwidth,
greater linearity, wider dynamic range, simple circuitry and low power consumption [4].
Analog filters are among the standard research topics in current-mode circuit design. One of
the most popular analog current-mode filters is the first-order allpass filter (APF), or phase
shifter. This filter is a very useful function block in many analog signal processing
applications and is frequently used in active circuits such as phase shifters, oscillators and
high-Q band-pass filters [5-9]. In particular, a first-order all-pass filter with gain
controllability is very useful in many analog circuit designs for avoiding external amplifiers,
for example in quadrature oscillators [10] and multiphase sinusoidal oscillators [11] with
non-interactive control of the oscillation condition and oscillation frequency.
Several realizations of current-mode first-order allpass filter using different active
building blocks have appeared in the literature. These include realizations using current
differencing buffered amplifier (CDBA) [9], current conveyors [12-15], current controlled
current conveyors (CCCIIs) [16], OTAs [17-21], differential voltage current conveyor
(DVCC) [22], current operational amplifier (COA) [23] and current differencing
transconductance amplifier (CDTA) [10-11, 24-27]. The literature review of reported
current-mode APFs shows that their weaknesses are as listed below:
 use of floating capacitor which is not desirable for IC implementation
 lack of electronic adjustability
 requirement of element-matching conditions
 non-availability of the current-output from a high output impedance terminal
 uncontrollability of current gain
 requirement of external resistor
The current controlled current follower transconductance amplifier (CCCFTA) [28] is a
recently reported active component. It was modified from the first generation CFTA [29-31].
It seems to be a versatile component in the realization of a class of analog signal processing
circuits, especially analog frequency filters. It is a true current-mode element whose input and
output signals are currents. In addition, it can also adjust the output current gain.
The aim of this paper is to propose a current-mode gain controllable first-order allpass
filter, emphasizing the use of CCCFTAs. The features of the proposed circuit are as follows:
the current gain and phase shift can be independently controlled by electronic means; the
circuit employs 2 CCCFTAs and 1 grounded capacitor, which is suitable for fabrication in a
monolithic chip. The proposed APF also exhibits high output and low input impedances,
which makes cascading easy in current-mode operation. The performance of the proposed
circuit is illustrated by PSpice simulations, which show good agreement with the calculations.
2. Theory and Principle
2.1 Basic concept of CCCFTA
Since the proposed circuit is based on CCCFTA, a brief review of CCCFTA is given in this
section. The schematic symbol and the ideal behavioural model of the CCCFTA are shown in
Fig. 1(a) and (b), respectively. It has finite input resistance Rf at f port. This parasitic
resistance can be controlled by the bias current IO. The input current if is conveyed to port z. In some
applications, to utilize the current through z terminal, an auxiliary zc (z-copy) terminal is used.
The internal current mirror provides a copy of the current flowing out of the z terminal to the
zc terminal. The voltage vz on z terminal is transferred into current using transconductance gm,
which flows into output terminal x. The gm is tuned by IB. In general, CCCFTA can contain
an arbitrary number of x terminals, providing currents ix of both directions. The
characteristics of the ideal CCCFTA are represented by the following hybrid matrix:
\[
\begin{bmatrix} V_f \\ I_{z,zc} \\ I_x \end{bmatrix} =
\begin{bmatrix} R_f & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & \pm g_m \end{bmatrix}
\begin{bmatrix} I_f \\ V_x \\ V_z \end{bmatrix}
\qquad (1)
\]
For the CMOS implementation of the CCCFTA shown in Fig. 3, $R_f$ and $g_m$ are written as
\[
R_f = \frac{1}{\sqrt{8 k_{Rf} I_O}} \quad \text{and} \quad g_m = \sqrt{k_{gm} I_B}
\qquad (2)
\]
where $k_{Rf} = \mu_n C_{OX}(W/L)_{7,8} = \mu_n C_{OX}(W/L)_{9,10}$ and $k_{gm} = \mu_n C_{OX}(W/L)_{24,25}$. Here $k$ is the
physical transconductance parameter of the MOS transistor, and $I_O$ and $I_B$ are the input bias
currents that control $R_f$ and $g_m$, respectively.
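As a quick numerical illustration of Eq. (2) (a Python sketch; the k values below are
placeholders, not parameters of the 0.25µm process used in Section 3):

    from math import sqrt

    k_Rf = 1e-3   # A/V^2, assumed for illustration only
    k_gm = 1e-3   # A/V^2, assumed for illustration only
    I_O  = 80e-6  # bias current controlling Rf (value from Section 3)
    I_B  = 98e-6  # bias current controlling gm (value from Section 3)

    R_f = 1.0 / sqrt(8 * k_Rf * I_O)   # Eq. (2): Rf falls as IO rises
    g_m = sqrt(k_gm * I_B)             # Eq. (2): gm rises with IB
    print(R_f, g_m)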
Fig.1 CCCFTA (a) Symbol (b) Equivalent circuit
2.2 Proposed current-mode first-order allpass filter
The proposed current-mode APF is illustrated in Fig. 2. It consists of 2 CCCFTAs and 1
grounded capacitor. It can also be seen that low input and high output impedances are
achieved. Considering the circuit in Fig. 2 and using the CCCFTA properties given above,
the current transfer function can be written as
current transfer function can be rewritten as
 sC  g m 2 
I out ( s)
  g m1 R f 2 

I in ( s)
 sC  g m 2 
(3)
Fig.2 Proposed gain controllable current-mode first-order allpass filter
From Eq. (3), the current gain and phase response of the proposed circuit are
\[
G(\omega) = \left| \frac{I_{out}}{I_{in}} \right| = g_{m1} R_{f2}
\quad \text{and} \quad
\phi(\omega) = -2 \tan^{-1}\!\left( \frac{\omega C}{g_{m2}} \right)
\qquad (4)
\]
It is found from Eqs. (2) and (4) that the current gain can be adjusted electronically,
independently of the natural frequency and phase response, by varying IB1 or IO2, while the
natural frequency and phase response can be electronically adjusted by IB2.
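A short numerical sketch of Eqs. (3) and (4) confirms this behavior: the magnitude stays at
the constant gm1*Rf2 while the phase sweeps from 0 toward -180 degrees (Python; C is the
simulation value from Section 3, the remaining component values are assumed for
illustration):

    import cmath, math

    C    = 0.1e-9   # grounded capacitor, value from Section 3
    g_m1 = 1e-3     # assumed for illustration
    g_m2 = 1e-3     # assumed for illustration
    R_f2 = 1e3      # assumed for illustration

    def H(f):
        # Eq. (3): Iout/Iin = -gm1*Rf2*(sC - gm2)/(sC + gm2)
        s = 1j * 2 * math.pi * f
        return -g_m1 * R_f2 * (s * C - g_m2) / (s * C + g_m2)

    for f in (1e3, 1e5, 1e6, 1e7):
        h = H(f)
        # |H| is constant at gm1*Rf2; phase follows -2*atan(w*C/gm2)
        print(f, abs(h), math.degrees(cmath.phase(h)))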
3. Simulation Results
To verify the performance of the proposed filter, the PSpice simulation program was used.
The internal construction of the CCCFTA used in the simulation is shown in Fig. 3. The
PMOS and NMOS transistors were simulated using the parameters of a 0.25µm TSMC
CMOS technology [32]. The transistor aspect ratios are indicated in Table 1. The circuit was
biased with ±1.25V supply voltages, C=0.1nF, IO1=IO2=80µA and IB1=IB2=98µA. The
simulated gain and phase responses of the APF are given in Fig. 4(a); they deviate only
slightly from the ideal responses. The phase response for different IB2 (IB2=50µA, 100µA,
200µA) is shown in Fig. 4(b). This result confirms that the natural frequency can be
electronically controlled by setting IB2, as shown in Eq. (4). The time-domain response of the
proposed filter is shown in Fig. 5(a) where a sine wave of 25μA amplitude and 2.5MHz is
applied as the input to the filter. The output currents for different values of IB1 (IB1=100µA,
150µA, 200µA) are shown in Fig. 5(b). It is seen that the current gain can be
electronically/independently adjusted from the natural frequency and its phase shift as
expressed in Eq. (4).
Tab.1 Dimensions of the MOS transistors

    Transistor    W (µm)    L (µm)
    M1-M6         5         0.5
    M7-M8         16        0.25
    M9-M10        8         0.25
    M11-M15       15        0.5
    M16-M23       5         0.25
    M24-M25       24        0.25
    M26-M31       3         0.25
Fig.3 Internal construction of the CCCFTA
Fig.4 Frequency responses: (a) gain and phase response, (b) phase response for different IB2
Fig.5 Transient responses: (a) output current vs. input current, (b) output current for different IB1
4. Conclusion
This paper has proposed an electronically tunable current-mode first-order allpass filter with
gain controllability. It consists of 2 CCCFTAs and 1 grounded capacitor, so it is easy to
fabricate in IC form for use in battery-powered or portable electronic equipment such as
wireless communication devices. The PSpice simulation results agree well with the
theoretical anticipation.
5. References
[1] N. Herencsar, A. Lahiri, K. Vrba, J. Koton, "An electronically tunable current-mode
quadrature oscillator using PCAs," International Journal of Electronics, vol. 99, pp.
1-13, 2012.
[2] J.-W. Horng, "Current-mode highpass, bandpass and lowpass filters using followers,"
Microelectronics International, vol. 29, pp. 10-14, 2012.
[3] B. Singh, A. K. Singh, R. Senani, "New universal current-mode biquad using only three
ZC-CFTAs," Radioengineering, vol. 21, pp. 273-281, 2012.
[4] C. Toumazou, F. J. Lidgey and D. G. Haigh, Analogue IC Design: The Current-Mode
Approach, Peter Peregrinus, London, 1990.
[5] R. Schaumann, E. Van Valkenburg, Design of Analog Filters, Oxford University Press,
New York, 2001.
[6] D. J. Comer, J. E. McDermid, "Inductorless bandpass characteristics using all-pass
networks," IEEE Transactions on Circuit Theory, vol. 15, no. 4, pp. 501-503, 1968.
[7] B. Metin, N. Herencsar, K. Pal, "Supplementary first-order all-pass filters with two
grounded passive elements using FDCCII," Radioengineering, vol. 20, no. 2, pp.
433-437, 2011.
[8] J.-W. Horng, "DVCCs based high input impedance voltage-mode first-order allpass,
highpass and lowpass filters employing grounded capacitor and resistor,"
Radioengineering, vol. 19, no. 4, pp. 653-656, 2010.
[9] A. Toker, S. Özoğuz, O. Çiçekoğlu, C. Acar, "Current mode allpass filters using current
differencing buffered amplifier and a new high-Q bandpass filter configuration," IEEE
Trans. Circuits and Systems II, vol. 47, pp. 949-954, 2000.
[10] A. Ü. Keskin and D. Biolek, "Current mode quadrature oscillator using current
differencing transconductance amplifiers (CDTA)," IEE Proceedings: Circuits, Devices
and Systems, vol. 153, no. 3, pp. 214-218, 2006.
[11] W. Jaikla, P. Prommee, "Electronically tunable current-mode multiphase sinusoidal
oscillator employing CCCDTA-based allpass filters with only grounded passive
elements," Radioengineering, vol. 20, no. 3, pp. 594-599, 2011.
[12] M. Higashimura, Y. Fukui, "Realization of current-mode allpass networks using a
current conveyor," IEEE Trans. Circuits and Systems, vol. 37, pp. 660-661, 1990.
[13] S. Maheshwari, I. A. Khan, "Novel first-order current-mode allpass sections using
CCIII," Active and Passive Electronic Components, vol. 27, pp. 111-117, 2004.
[14] J. W. Horng, C. L. Hou, C. M. Chang, W. Y. Chung, H. L. Liu and C. T. Lin,
"High-output impedance current-mode first-order allpass networks with four grounded
components and two CCIIs," International Journal of Electronics, vol. 93, pp. 613-621,
2006.
[15] M. Un, F. Kacar, "Third generation current conveyor based current-mode first order
all-pass filter and quadrature oscillator," Istanbul University Journal of Electrical and
Electronics Engineering, vol. 8, no. 1, pp. 529-535, 2008.
[16] P. Singthong, M. Siripruchyanun, W. Jaikla, "Electronically controllable first-order
current-mode allpass filter using CCCIIs and its application," 18th International
Conference Mixed Design of Integrated Circuits and Systems (MIXDES), pp. 314-318,
2011.
[17] T. Tsukutani, M. Ishida, Y. Fukui, S. Tsuiki, "A general class of current-mode
high-order OTA-C filters," International Journal of Electronics, vol. 81, no. 6, pp.
663-669, 1996.
[18] B. M. Al-Hashimi, F. Dudek and Y. Sun, "Current-mode delay equalizer design using
multiple output OTAs," Analog Integrated Circuits and Signal Processing, vol. 24, no.
2, pp. 163-169, 2000.
[19] C. M. Chang, B. M. Al-Hashimi, "Analytical synthesis of current-mode high-order
OTA-C filters," IEEE Trans. on Circuits and Systems I, vol. 50, no. 9, pp. 1188-1192,
2003.
[20] C. Psychalinos, K. Pal, "A novel all-pass current-mode filter realized using a minimum
number of single output OTAs," Frequenz, pp. 30-32, 2010.
[21] B. Metin, K. Pal, S. Minaei, O. Çiçekoğlu, "Trade-offs in the OTA-based analog filter
design," Analog Integrated Circuits and Signal Processing, vol. 60, pp. 205-213, 2000.
[22] S. Minaei, M. A. Ibrahim, "General configuration for realizing current-mode first-order
all-pass filter using DVCC," Int. J. Electron., vol. 92, no. 6, pp. 347-356, 2005.
[23] S. Kilinc and U. Cam, "Current-mode first-order allpass filter employing single current
operational amplifier," Analog Integrated Circuits and Signal Processing, vol. 41, pp.
43-45, 2004.
[24] W. Jaikla, M. Siripruchyanun, J. Bajer and D. Biolek, "A simple current-mode
quadrature oscillator using single CDTA," Radioengineering, vol. 17, pp. 33-40, 2008.
[25] W. Jaikla, M. Siripruchyanun, D. Biolek, V. Biolkova, High-output-impedance
current-mode multiphase sinusoidal oscillator employing current differencing
transconductance amplifier-based allpass filters, Int. J. Electron., vol. 97, no. 7, pp.
811–826, 2010.
[26] W. Tangsrirat, W. Tanjaroen, T. Pukkalanun, Current-mode multiphase sinusoidal
oscillator using CDTA-based allpass sections, Int. J. Electron. Commu. (AEU), vol. 63,
pp. 616-622, 2009.
[27] A. Lahiri, A. Chowdhury, A novel first-order current-mode all-pass filter using CDTA,
Radioengineering, vol. 18, no. 3, pp. 300-305, 2009.
[28] N. Herencsar, J. Koton, K. Vrba, A. Lahiri, O. Cicekoglu, Current-controlled
CFTA-based current-mode SITO universal filter and quadrature oscillator, 2010
International Conference on Applied Electronics (AE), pp. 1-4, 2010.
[29] N. Herencsar, J. Koton, I. Lattenberg, and K. Vrba, Signal-flow graphs for
current-mode universal filter design using current follower transconductance amplifiers
(CFTAs), the International Conference on Applied Electronics – APPEL 2008, Pilsen,
Czech Republic, pp. 69-72, 2008.
[30] D. Biolek, R. Senani, V. Biolkova, Z. Kolka, Active elements for analog signal
processing: classification, review, and new proposals, Radioengineering, vol. 17, no. 4,
p. 15-32, 2008.
[31] N. Herencsar, J. Koton, K. Vrba and J. Misurec, A novel current-mode SIMO type
universal filter using CFTAs, Contemporary Engineering, Sciences, vol.2, no.2,
pp.59-66, 2009.
[32] P. Prommee, K. Angkeaw, M. Somdunyakanok, K. Dejhan. CMOS-based near
zero-offset multiple inputs max–min circuits and its applications. Analog Integr.
Circuits Signal Process, vol. 61, pp. 93–105, 2009.
239
240
Electronically Tunable Low-Component-Count Current-Mode Quadrature Oscillator Using CCCFTA
Chaiya Tanaphatsiri a,*, Narong Narongrat b
a,* Department of Electrical, Faculty of Industrial Education and Technology, Rajamangala University of Technology Srivijaya, Songkhla, 90000, THAILAND
E-mail address: chaiya_32@yahoo.com
b Department of Electronics Technology, Faculty of Industrial Technology, Suan Sunandha Rajabhat University, Dusit, Bangkok, 10300, THAILAND
E-mail address: narongn@yahoo.com
Abstract
This article presents a current-mode quadrature sinusoidal oscillator employing a current controlled current follower transconductance amplifier (CCCFTA) as the active element. The features of the proposed circuit are that the circuit topology is very simple, consisting of a single CCCFTA and two grounded capacitors; the circuit is well suited to on-chip fabrication; and the condition of oscillation (CO) and the frequency of oscillation (FO) can be electronically tuned. Moreover, the high output impedances of the proposed circuit enable easy cascading in current-mode circuits. PSpice simulation results using a CMOS CCCFTA are presented, and the results agree well with the theoretical predictions. The power consumption is approximately 2.17 mW at ±1.25 V power supply voltages.
Keyword: Current-mode, Sinusoidal oscillator, CCCFTA, Integrated circuit
1. Introduction
Quadrature oscillators are important blocks in various communication applications where multiple sinusoids with a 90° phase shift are required, e.g. in quadrature mixers and single-sideband modulators [1]. Recently, current-mode circuits have been receiving considerable attention due to their potential advantages, such as inherently wide bandwidth, higher slew rate, greater linearity, wider dynamic range, simple circuitry and low power consumption [2].
The current controlled current follower transconductance amplifier (CCCFTA) [3] is a recently reported active component, modified from the first-generation CFTA [4-6]. It appears to be a versatile component for realizing a class of analog signal processing circuits, especially analog frequency filters. It is a true current-mode element whose input and output signals are currents, and it can also adjust its output current gain.
Over the past few years, a number of schemes have been developed to realize sinusoidal oscillators [7-16]. Unfortunately, these reported circuits suffer from one or more of the following weaknesses:
 Excessive use of passive elements, especially external resistors [9, 15].
 Use of more than one active element [8-10, 12-14].
 Need for two transconductances (gm) in one active element, which makes the circuits more complicated [7, 16].
 Some outputs are not at high impedance, so cascadability is not directly achieved [7-8, 15].
 Use of floating capacitors, which are not convenient for IC fabrication [9, 11, 15].
The aim of this paper is to introduce a current-mode quadrature sinusoidal oscillator based on a single CCCFTA. The features of the proposed circuit are the following:
 No external resistors are required, and only grounded capacitors are used.
 Only one active element is used.
 The oscillation condition and oscillation frequency are electronically adjustable.
 The current outputs are at high impedance.
2. Theory and Principle
2.1 Basic concept of CCCFTA
Since the proposed circuit is based on the CCCFTA, a brief review of the CCCFTA is given in this section. The schematic symbol and the ideal behavioural model of the CCCFTA are shown in Fig. 1(a) and (b), respectively. It has a finite input resistance Rf at the f port; this parasitic resistance can be controlled by the bias current IO. The current if flows out of the z port. In some applications, to utilize the current through the z terminal, an auxiliary zc (z-copy) terminal is used [5]. The internal current mirror provides a copy of the current flowing out of the z terminal to the zc terminal. The voltage vz on the z terminal is transferred into a current by the transconductance gm, which flows into the output terminal x. The gm is tuned by IB. In general, the CCCFTA can contain an arbitrary number of x terminals, providing currents ix of both directions. The characteristics of the ideal CCCFTA are represented by the following hybrid matrix:
$$\begin{bmatrix} V_f \\ I_{z,zc} \\ I_x \end{bmatrix} = \begin{bmatrix} R_f & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & \pm g_m \end{bmatrix} \begin{bmatrix} I_f \\ V_x \\ V_z \end{bmatrix} \quad (1)$$
For the CMOS implementation of the CCCFTA shown in Fig. 3, Rf and gm are written as
$$R_f = \frac{1}{\sqrt{8 k_{Rf} I_O}} \quad \text{and} \quad g_m = \sqrt{k_{gm} I_B} \quad (2)$$
where kRf = µnCOX(W/L)7,8 = µnCOX(W/L)9,10 and kgm = µnCOX(W/L)24,25; here k denotes the physical transconductance parameter of the MOS transistor. IO and IB are input bias currents that control Rf and gm, respectively.
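As a quick numerical illustration of Eq. (2), the short sketch below evaluates Rf and gm for the bias currents used later in the simulation section (IO = 80 µA, IB = 98 µA). The kRf and kgm values are hypothetical placeholders, not device parameters from the paper.

```python
import math

def cccfta_small_signal(I_O, I_B, k_Rf, k_gm):
    """Evaluate Eq. (2): Rf = 1/sqrt(8*k_Rf*I_O) and gm = sqrt(k_gm*I_B)."""
    R_f = 1.0 / math.sqrt(8.0 * k_Rf * I_O)  # parasitic resistance at the f port
    g_m = math.sqrt(k_gm * I_B)              # transconductance controlled by I_B
    return R_f, g_m

# Bias currents as used later in the simulation; k values are assumed placeholders.
R_f, g_m = cccfta_small_signal(I_O=80e-6, I_B=98e-6, k_Rf=1e-3, k_gm=1e-3)
print(f"Rf = {R_f:.0f} ohm, gm = {g_m * 1e6:.0f} uA/V")
```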
Fig. 1 CCCFTA: (a) symbol; (b) equivalent circuit
2.2 Proposed oscillator
The proposed oscillator is shown in Fig. 2. It consists of a single CCCFTA and two grounded capacitors. The output currents Io1 and Io2 are at high output impedance and have a 90° phase difference. Considering the proposed oscillator and using the CCCFTA properties described in the section above, the characteristic equation is obtained as
$$s^2 C_1 C_2 R_f + s\left(C_2 - C_1 g_m R_f\right) + g_m = 0 \quad (3)$$
From Eq. (3), the CO and FO are written as
$$C_2 = C_1 g_m R_f \quad \text{and} \quad \omega_0 = \sqrt{\frac{g_m}{C_1 C_2 R_f}} \quad (4)$$
It is found from Eq. (4) that if Rf and gm are as given in Eq. (2), the CO and FO become
$$C_2 \sqrt{8 k_{Rf} I_O} = C_1 \sqrt{k_{gm} I_B} \quad \text{and} \quad \omega_0 = \frac{\left(8 k_{Rf} k_{gm} I_B I_O\right)^{1/4}}{\sqrt{C_1 C_2}} \quad (5)$$
It is evident from Eq. (5) that the CO and FO are electronically controlled by IO and IB.
From Fig. 2, the current transfer function between Io1 and Io2 is written as
$$\frac{I_{O2}(s)}{I_{O1}(s)} = \frac{g_m}{s C_2} \quad (6)$$
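To make Eqs. (4) and (6) concrete, the sketch below computes the oscillation frequency for C1 = C2 = 0.1 nF and confirms the 90° phase shift implied by the transfer function gm/(sC2). The Rf and gm values are illustrative assumptions, not the simulated device values.

```python
import math, cmath

C1 = C2 = 0.1e-9            # grounded capacitors, values from the simulation section
R_f, g_m = 1250.0, 313e-6   # illustrative small-signal values (assumed, not simulated)

# FO from Eq. (4): omega0 = sqrt(gm / (C1*C2*Rf))
omega0 = math.sqrt(g_m / (C1 * C2 * R_f))
print(f"f0 = {omega0 / (2 * math.pi) / 1e6:.3f} MHz")

# Eq. (6): IO2/IO1 = gm/(s*C2); at s = j*omega0 the phase is -90 degrees
H = g_m / (1j * omega0 * C2)
print(f"phase(IO2/IO1) = {math.degrees(cmath.phase(H)):.1f} deg")
```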
Fig. 2 Proposed simple current-mode quadrature oscillator
3. Simulation Results
To verify the theoretical analysis, the proposed oscillator in Fig. 2 was simulated using the PSPICE simulation program. The internal construction of the CCCFTA used in the simulation is shown in Fig. 3. The PMOS and NMOS transistors were simulated using the parameters of a 0.25 µm TSMC CMOS technology [17]. The transistor aspect ratios of the PMOS and NMOS transistors are given in Table 1. The circuit was biased with ±1.25 V supply voltages, C1 = C2 = 0.1 nF, IO = 80 µA and IB = 98 µA. This yields a simulated oscillation frequency of 2.363 MHz. Fig. 4(a) shows the simulated quadrature output waveforms. Fig. 4(b) shows the simulated output spectrum, where the total harmonic distortion (THD) for Io1 and Io2 is about 1.615% and 1.143%, respectively. The power consumption is approximately 2.17 mW.
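As an illustration of how THD figures such as those above can be extracted from a time-domain waveform, the sketch below estimates THD from an FFT. The waveform is a synthetic stand-in (a fundamental plus a small second harmonic), not the PSpice output.

```python
import numpy as np

def thd_percent(x, fs, f0, n_harmonics=9):
    """Estimate THD (%) as the ratio of the harmonic RSS to the fundamental,
    using local peaks of a windowed FFT magnitude spectrum."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    def peak(f):  # local spectral maximum near frequency f
        i = int(np.argmin(np.abs(freqs - f)))
        return spec[max(i - 3, 0):i + 4].max()
    fund = peak(f0)
    harm = [peak(k * f0) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harm)) / fund

fs, f0 = 1e9, 2.363e6                 # sample rate and oscillation frequency
t = np.arange(0.0, 20e-6, 1.0 / fs)
# Synthetic stand-in for the simulated output: 50 uA fundamental + 0.5 uA 2nd harmonic.
i_out = 50e-6 * np.sin(2 * np.pi * f0 * t) + 0.5e-6 * np.sin(2 * np.pi * 2 * f0 * t)
print(f"THD = {thd_percent(i_out, fs, f0):.2f} %")   # expected near 1 %
```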
Tab. 1 Dimensions of the MOS transistors
Transistor    W (µm)    L (µm)
M1-M6         5         0.5
M7-M8         16        0.25
M9-M10        8         0.25
M11-M15       15        0.5
M16-M23       5         0.25
M24-M25       24        0.25
M26-M31       3         0.25
Fig. 3 Internal construction of the CCCFTA
Fig. 4 (a) Simulated output current waveforms for Io1 and Io2 (current in µA versus time in µs); (b) simulated output current spectrum for Io1 and Io2 (current in A versus frequency in MHz)
4. Conclusion
A simple current-mode quadrature oscillator based on the CCCFTA has been presented. The features of the proposed circuit are that the oscillation frequency and oscillation condition can be electronically adjusted, and that the oscillator consists of a single CCCFTA and two grounded capacitors, which is convenient for fabrication. The PSpice simulation results agree well with the theoretical predictions.
5. References
[1] I.A. Khan, S. Khawaja, An integrable gm-C quadrature oscillator, International Journal
of Electronics, Vol. 87, 2000; pp. 1353-1357.
[2] C. Toumazou, F.J. Lidgey, D.G. Haigh, Analogue IC design: the current-mode
approach, Peter Peregrinus: London, 1990.
[3] N. Herencsar, J. Koton, K. Vrba, A. Lahiri, O. Cicekoglu, Current-controlled
CFTA-based current-mode SITO universal filter and quadrature oscillator,
International Conference on Applied Electronics (AE), 2010, pp. 1-4.
[4] N. Herencsar, J. Koton, I. Lattenberg, K. Vrba, Signal-flow graphs for current-mode
universal filter design using current follower transconductance amplifiers (CFTAs),
International Conference on Applied Electronics, Pilsen, Czech Republic, 2008, pp.
69-72.
[5] D. Biolek, R. Senani, V. Biolkova, Z. Kolka, Active elements for analog signal
processing: classification, review, and new proposals, Radioengineering, Vol. 17, no. 4,
2008, pp. 15-32.
[6] N. Herencsar, J. Koton, K. Vrba, J. Misurec, A novel current-mode SIMO type universal filter using CFTAs, Contemporary Engineering Sciences, Vol. 2, no. 2, 2009, pp. 59-66.
[7] W. Jaikla, M. Siripruchyanun, A versatile quadrature oscillator and universal biquad
filter using dual-output current controlled current differencing transconductance
amplifier, International Symposium on Communications and Information Technologies,
2006, pp. 1072-1075.
[8] W. Jaikla, M. Siripruchyanun, CCCDTAs-based versatile quadrature oscillator and
universal biquad filter, ECTI conference, 2007, pp. 1065-1068
[9] A.Ü. Keskin, D. Biolek, Current mode quadrature oscillator using current differencing transconductance amplifiers (CDTA), IEE Proceedings-Circuits, Devices and Systems, Vol. 153, 2006, pp. 214-218.
[10] D. Biolek, V. Biolkova, A.Ü. Keskin, Current mode quadrature oscillator using two
CDTAs and two grounded capacitors, 5th WSEAS International Conference on System
Science and Simulation in Engineering, 2006, pp. 368-370.
[11] W. Jaikla, M. Siripruchyanun, J. Bajer, D. Biolek, A simple current-mode quadrature
oscillator using single CDTA, Radioengineering, Vol. 17, 2008, pp. 33-40.
[12] W. Tanjaroen, W. Tangsrirat, Resistorless current-mode quadrature sinusoidal
oscillator using CDTAs, APSIPA Annual Summit and Conference, (2009), 4-7
[13] A Lahiri, New current-mode quadrature oscillators using CDTA, IEICE Electronics
Express, Vol. 6, 2009, pp. 135-140.
[14] J.-W. Horng, Current-mode third-order quadrature oscillator using CDTAs, Active and
Passive Electronic Components, 2009, Article ID 789171
[15] D. Prasad, D.R. Bhaskar, A.K. Singh, Realisation of single-resistance-controlled
sinusoidal oscillator: a new application of the CDTA, WSEAS Transactions on
Electronics, Vol. 5, 2008, pp. 257-259
[16] A. Lahiri, A. Misra, K. Gupta, Novel current-mode quadrature oscillators with
explicit-current-outputs using CCCDTA, 19th International Radioelektronika
Conference, 2009, pp. 47-50
[17] P. Prommee, K. Angkeaw, M. Somdunyakanok, K. Dejhan, CMOS-based near zero-offset multiple inputs max–min circuits and its applications, Analog Integr. Circuits Signal Process, Vol. 61, 2009, pp. 93-105.
Environmental Science
08:45-10:15, December 15, 2012 (Meeting Room 5)
Session Chair:
059: Sulfonated Poly Ether Ether Ketone / TiO2 Nano Composites As Polymer Electrolyte For Microbial Fuel Cell
Sangeetha Dharmalingam, Anna University
Prabhu Narayaswamy Venkatesan, Anna University
073: Mathematical Modeling of Industrial Water Systems
Pavel Gotovtsev, Moscow Power Engineering Institute
Julia Tikhomirova, JSC Power-Engineering Schemes and Technologies
Ekaterina Khizova, BWT Company
232: Design on Low-Noise Tire Groove Using the CFD Simulation Technique
Min-Feng Sung, National Chin-Yi University of Technology
Yean-Der Kuan, National Chin-Yi University of Technology
Shi-Min Lee, Tamkang University
Rong-Juin Shyu, National Chin-Yi University of Technology
301: An Exploratory Study of Green Supply Chain in SMEs Cluster in India
Shubham Gandhi, Delhi Technological University
Abhishek Dwivedi, Delhi Technological University
Aishwarya Mor, Delhi Technological University
059
Sulfonated Poly Ether Ether Ketone/TiO2 Nano composites as Polymer
Electrolyte for Microbial Fuel Cell
Prabhu Narayanaswamy Venkatesan a, Sangeetha Dharmalingam b
a Anna University, Sardar Patel Road, Chennai, India
E-mail address: nvprabhuannauniv@gmail.com
b Anna University, Sardar Patel Road, Chennai, India
E-mail address: sangeetha@annauniv.edu
Abstract
Polymer nanocomposites are promising alternative polymer electrolyte membranes for fuel cells because of their reduced cost and the improved mechanical stability, gas barrier properties and proton conductivity of the membrane. E. coli, an electrogenic bacterium, was used as a biocatalyst in the anodic chamber, with an air cathode coated with Pt/C catalyst. Four different compositions of SPEEK/TiO2 nanocomposite membranes (5%, 10%, 15%, 20%) were fabricated by the solvent evaporation method. The water absorption and ion exchange capacity of the fabricated polymer nanocomposite membranes were studied, and the membranes were also characterized by FT-IR, SEM and XRD. The maximum power density obtained was compared with that of the commercially available Nafion 117® membrane.
Keyword: SPEEK, TiO2, Polymer Nano composites, Microbial Fuel Cell.
1. Introduction
The heavy use of fossil fuels in recent years has forced us to think of alternative energy. Even though other renewable energy sources such as solar, wind and hydrothermal are available, they require more space and are seasonal. Research on power generation from renewable sources is being carried out extensively all over the world to overcome the energy crisis of the near future [1]. Microbial fuel cells (MFC) are one of these renewable alternatives, and research on them aims not only at easing the electricity crisis but also at treating various factory and domestic waste waters and at metal reduction [2]. The MFC is a reliable system that produces limited power or biohydrogen from different kinds of electron donors. Simultaneous bioelectricity generation and waste water treatment is considered one of the most important applications of MFCs. A traditional MFC consists of an anodic and a cathodic chamber. The oxidation of substrate by microorganisms (bacteria) into protons and electrons takes place in the anodic chamber of the MFC. The electrons produced by the microorganisms in the anodic chamber reach the anode electrode in two ways: 1. through the bacteria's extended pili (extended hair-like structures); 2. through the addition of chemical mediators, which are generally toxic in nature. The second approach is employed in the case of non-electrogenic bacteria, which are not capable of transferring the produced electrons to the anode electrode. Several electrogenic and non-electrogenic bacteria have been used in the anode chamber to produce a stable current output in MFCs. The electrons produced reach the anode electrode surface and then reach the cathode electrode through the external circuit that connects the anode and cathode. The protons produced in the anodic chamber permeate towards the cathode electrode through the proton exchange membrane (separator), and in the cathode chamber water is produced by the electron-proton reaction [3]. The use of oxidizing agents such as potassium permanganate or potassium dichromate in the cathodic chamber of a dual-chamber MFC increases the performance of the microbial fuel cell [4].
2. Results and Discussion
2.1 FT-IR:
The FTIR spectra of the different weight percentages of SPEEK-TiO2 nanocomposites are shown in figure 1. The spectra show the vibrations of TiO2 in addition to those of SPEEK, confirming the presence of TiO2 in the SPEEK membrane. The FTIR spectra exhibited an additional band from 400 cm-1 to 647 cm-1 due to the envelope of the phonon bands of Ti-O-Ti. The bands at 1011 cm-1, 1081 cm-1 and 1089 cm-1 are associated with the hydrogen bonding of Ti-O with the sulfonyl groups of SPEEK; thus the Ti-O-Ti inorganic network bonded to the sulfonyl groups by hydrogen bonding was also observed.
2.2 XRD:
The XRD spectra of the composite membranes are shown in figure 2. The figure shows that with an increase in the amount of nanofiller, the amorphous nature of the membrane increases, which is observed from the decrease in the peak intensity of the pattern near a 2θ value of 20°. This observation reveals the amorphous nature of the membrane and the uniform distribution of the inorganic fillers on the membrane surface.
2.3 SEM:
SEM images of the different weight percentages of TiO2 composite membranes are shown in figure 3. The SEM images show the surface morphology of the composite membranes. Rod-shaped TiO2 structures were observed on the surface with uniform dispersion up to 15% TiO2. At 20%, the images suggest that the rod-shaped particles agglomerated inside the membrane.
Table 1: Water uptake and IEC values of composite membranes.
Membrane     Thickness (cm)    Water uptake (%)    IEC (milliequiv/g)
SPEEK        0.013 (±0.002)    14.00               0.870
5 % TiO2     0.012 (±0.003)    13.80               0.603
10 % TiO2    0.013 (±0.001)    11.20               0.586
15 % TiO2    0.013 (±0.002)    9.50                0.498
20 % TiO2    0.014 (±0.001)    7.30                0.412
The water absorption and ion exchange capacity of the composite membranes are tabulated in table 1. The water absorption capacity of the membrane decreases with an increase in the nanofiller ratio. The ion exchange capacity of the membranes is directly related to the water absorption property of the membrane: with decreasing water absorption, the IEC also decreases, as shown in table 1.
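The direct relation between water uptake and IEC stated above can be quantified from the Table 1 data; the short sketch below computes the Pearson correlation as an illustration.

```python
import numpy as np

# Values taken from Table 1 (SPEEK, then 5, 10, 15 and 20 % TiO2)
water_uptake = np.array([14.00, 13.80, 11.20, 9.50, 7.30])    # %
iec          = np.array([0.870, 0.603, 0.586, 0.498, 0.412])  # milliequiv/g

r = np.corrcoef(water_uptake, iec)[0, 1]
print(f"Pearson correlation between water uptake and IEC: r = {r:.3f}")
```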
2.4 MFC Performance:
The SPEEK with 20% titania showed a higher power density than the other composite membranes and the commercially available Nafion 117®. This membrane performed better because of its lower oxygen crossover and more amorphous nature.
3. Acknowledgments:
The authors thank the Department of Science and Technology (DST), India, for their financial support to carry out this work vide letter No. DST/TSG/AF/2010/09, dt. 01-10-2010. The instrumentation facilities provided under FIST-DST and DRS-UGC to the Department of Chemistry, Anna University, are sincerely acknowledged.
4. References
[1] A. T. Heijne, H. V. M. Hamelers, and C. J. N. Buisman, Microbial fuel cell operation with continuous biological ferrous iron oxidation of the catholyte, Environmental Science and Technology, 41, 4130-4134, 2007.
[2] Z. Wang, B. Lim, H. Lu, J. Fan, and C. Choi, Cathodic reduction of Cu2+ and electric power generation using a microbial fuel cell, Bulletin of the Korean Chemical Society, 31, 7, 2025, 2010.
[3] F. Qian and D. E. Morse, Miniaturizing microbial fuel cells, Trends in Biotechnology, 29, 2, 62-69, 2011.
[4] A. Rhoads, H. Beyenal, and Z. Lewandowski, Microbial fuel cell using anaerobic respiration as an anodic reaction and biomineralized manganese as a cathodic reactant, Environmental Science and Technology, 39, 4666-4671, 2005.
073
Mathematical modeling of industrial water systems
Pavel M. Gotovtsev a, Julia Y. Tikhomirova b, Ekaterina I. Khizova c
a Technology of Water and Fuel Department, Moscow Power Engineering Institute (National Research University), 14 Krasnokazarmennaya street, Moscow, Russia, 111250
gotovtsevpm@gmail.com
b JSC Power-Engineering Schemes and Technologies, 94 Bakuninskaya street, Moscow, Russia, 105082
juliett.power@gmail.com
c BWT Company, 3a Kasatkina street, Moscow, Russia
e.khizova@gmail.com
The corresponding author: Pavel M. Gotovtsev
Abstract
The discharge of waste water from industrial plants is one of the main sources of river contamination. These waters include both waste water from industrial processing and rain water from the site. Abnormal rainfall levels and frequency lead to difficulties in operating rain water treatment systems and to discharges of untreated water. It should be noted that rain water from an industrial site may be contaminated with various impurities. This article gives a brief introduction to ways of controlling industrial waste water discharges and provides an analysis of contamination levels at thermal power plant sites as an example.
Keyword: Engineering, Environmental Chemistry, Water pollution control, Industrial waste
water, Rain water, Mathematical modeling.
1. Introduction
Today, industry is one of the most significant sources of water contamination. Various technological processes require water and contaminate it with impurities never observed in natural waters. But technological processes are not the only cause of water contamination: rain and thaw water from an industrial site also contains various substances such as oil products, synthetic organics and so on. To minimize the negative impact on the environment, most industrial sites have waste water treatment units, sometimes even zero-discharge systems with almost complete water recycling. But even in those cases there is a further challenge for existing and future industry: climate change leads to abnormally high or low levels of rain and snowfall in several regions [1]. Abnormally heavy rain leads to a situation in which a significant part of the rain water is discharged into rivers or lakes without any treatment. Winters with heavy snowfall lead to large volumes of thaw water, which in turn leads to the same situation as abnormally heavy rains. An abnormally low level of rain may cause impurity concentrations in rain water that differ greatly from the usual ones. This challenge grows more significant the bigger the water system of the plant or factory.
2. Waste water at thermal power plants
2.1 Overview of waste water problems at thermal power plants
Power generation is one of the biggest water consumers. For example, [2] presents data, based on Electric Power Research Institute studies, on water consumption in the USA: the biggest consumer is the agricultural sector, which uses 82% of water consumption; next is domestic consumption at 7%; power generation and industry take 3% each; and the remaining 7% is used by livestock, mining and commerce. Almost the same situation is found in the Russian Federation. These data, together with the environmental problems caused by power plant discharges, are driving the development of water management and zero-discharge systems at power plants. Let us take a thermal power plant as an example for the following discussion.
The following systems are linked with environmental problems:
 the water purification system that produces ultrapure water for boiler feeding; this system also has two types of waste water: water with high mineralization and water with sludge after coagulation;
 cooling systems with "wet" cooling towers: in these systems the water cooled in the towers is used for cooling the steam turbine condensers; the difficulty in this system is the blowdown, which has a 2 to 5 times higher concentration of impurities than the water source;
 water with oil: there are many oil-containing systems in power plants, and in some cases this oil mixes with water;
 rain water from the power plant territory, which can also contain oil and some specific impurities related to the technological processes, for example coal dust at coal-fired power plants.
All these systems require chemistry control and monitoring to minimize possible discharges of waste water into the nearest water source.
2.2 Waste water chemistry control at thermal power plants
Waste water chemistry control is closely related to the technologies used for utilization of waste water and cooling system blowdown. The following example is based on environmental requirements and economic conditions typical for the Russian Federation, but several of the solutions can be used in other countries. The water chemistry monitoring scheme for a unit with a heat recovery steam generator is shown in fig. 1, which presents a typical water-cycle scheme at power plants. As can be seen, there are two main streams of waste water:
 waste water from the water treatment systems (pretreatment and demineralization), drainage treatment and discharges after equipment chemical cleaning, routed to the neutralizing tank;
 rain and thaw water, including water polluted with oil, routed to the waste water treatment system.
Figure 1. The water chemistry monitoring system for the unit with heat recovery steam generator.
Washing water after pretreatment goes to the sludging system and then returns to the cycle, while the sludge goes to the sludge dump. The drainage treatment system is intended for treating the plant's contaminated condensate. The neutralizing site is a tank where the listed waste waters are mixed and diluted with source water to the required concentrations.
The main part of the scheme is the chemistry control system. It allows detecting water chemistry failures and avoiding unacceptable discharges. The basic points of water chemistry control around the steam cycle are listed in table 1. All measuring equipment operates automatically; the collected data can be stored in a database server, which can exchange information with the digital control system of the power plant.
The main goals of the chemistry control system are the following:
1. to improve the efficiency of the waste water utilization system;
2. to prevent waste water discharges in order to minimize the negative influence on the nearest water source (river, sea, etc.);
3. to minimize corrosion and scale formation in the steam turbine condensers on the cooling water side.
Improving the efficiency of the waste water utilization system is based on continuous control of the water quality and on prediction of this quality. Such mathematical models are unique for each waste water utilization technology and are currently under development. Two examples of the application of such models are shown below.
Table 1. Basic points of water chemistry control
Parameters measured: pH, conductivity, oil in water, turbidity.
Control points: source water; cooling system makeup (after chemicals dosing); cooling system blowdown; after the waste water treatment system; waste water after the neutralizing site; waste water right before discharge.
2.3 Mathematical modeling aspects of waste water discharge prevention
Waste water discharges are one of the environmental problems of modern power plants. The reasons for these discharges vary: failure of the waste water treatment systems, oil leakage during installation, oil leakages from equipment, abnormally high precipitation, and so on. Abnormally high precipitation has become the most frequent cause of discharges because waste water treatment systems are designed on the basis of the precipitation levels of previous years. In these cases the personnel need to know what impurity concentration will occur in the water draining into the nearest surface water source. Consider, for example, the case of abnormally high precipitation. This effect may be observed not only during rains but also during snow thawing in years with heavy snow. In both periods the flow rate of water to the waste water treatment unit is much higher than the unit can treat. If the power plant does not have additional vessels or pools for water collection, this excess water is routed directly into the cooling system without any treatment. Under these conditions it is necessary to know what impurity concentration will occur in the cooling system blowdown. A simple equation for determining this concentration is:
$$D_{Bl}\,\frac{dC_{Bl}(t)}{dt} = k_v\left(D_r C_r(t) + D_{ex} C_{rw}(t) + D_{tr} C_{tr}\right) - k_v\,C_{Bl}(t) - \sum_{i=1}^{n} k_i F_i \quad (1)$$
where: DBl, Dr, Dex, Dtr – flow rates of the cooling system blowdown, the cooling system feed, the excess rain water and the water after treatment, kg/sec;
CBl(t) – concentration of the impurity in the cooling system blowdown, ppm;
Cr(t) – concentration of the impurity in the water supply source, ppm;
Crw(t) – concentration of the impurity in the rain water, ppm;
Ctr – concentration of the impurity after treatment, ppm;
kv – coefficient of evaporation in the cooling tower;
ki – constant of the i-th reaction that leads to a decrease of the impurity concentration;
Fi – heterogeneous area of the i-th reaction.
To simplify the model, Cr(t) may be taken as constant over periods of several hours. Also, the measurement of the impurity concentration in the water after waste water treatment may be made after mixing with the excess rain water; in this case the function Crw(t) will approximately follow the behavior of the impurity concentration at this measurement point (in the case of normal operation of the waste water treatment). A model based on equations like (1) may be used to analyze the behavior of impurities such as oil, particles and some surfactants.
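As an illustration of how Eq. (1) can be integrated in practice, the sketch below uses a simple explicit Euler scheme. All numerical parameter values are hypothetical placeholders, and the reaction terms are lumped into a single constant k_F representing the sum of the k_i*F_i terms.

```python
import numpy as np

def blowdown_concentration(t_end, dt, C0, D_Bl, D_r, D_ex, D_tr,
                           C_r, C_rw, C_tr, k_v, k_F):
    """Explicit Euler integration of Eq. (1) for C_Bl(t).
    C_r and C_rw are callables of time; k_F lumps the sum of k_i*F_i terms."""
    n = int(t_end / dt)
    C = np.empty(n + 1)
    C[0] = C0
    for j in range(n):
        t = j * dt
        inflow = k_v * (D_r * C_r(t) + D_ex * C_rw(t) + D_tr * C_tr)
        C[j + 1] = C[j] + dt * (inflow - k_v * C[j] - k_F) / D_Bl
    return C

# All numbers below are hypothetical placeholders, not plant data.
C = blowdown_concentration(t_end=3600.0, dt=1.0, C0=0.2, D_Bl=50.0,
                           D_r=45.0, D_ex=10.0, D_tr=5.0,
                           C_r=lambda t: 0.05, C_rw=lambda t: 0.8,
                           C_tr=0.1, k_v=1.2, k_F=0.0)
print(f"C_Bl after 1 h: {C[-1]:.2f} ppm")
```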
2.4 Calculation of impurities concentrations in cooling systems with cooling towers by
using artificial neural networks models
Cooling systems with cooling towers are well known from the water chemistry standpoint. One of the main problems of such systems is scale deposition of CaCO3 and CaSO4 [3]. The solubility of these salts at the temperatures typical for cooling systems is also well known. Understanding the scaling processes makes it possible to control the water chemistry so as to minimize scale formation. Because of the possibility of CaSO4 formation, it is of interest to know the SO42- concentration in the cooling water. With information about the correlation between the solubility of CaSO4 and temperature, it is possible to build a system that can predict scale formation. It should be noted that the impurity concentrations in the blowdown of the cooling system are the same as in the cooling water. The main problem, however, is measuring the SO42- concentration automatically, and such a measurement must also be inexpensive. An alternative to measurement is calculation of this concentration from measured parameters such as pH, conductivity, etc. In the standard cooling system shown in fig. 1, not enough chemical parameters are measured automatically for calculations based on the conductivity equation in the manner presented in [4].
One way to calculate this concentration is to use artificial neural networks (ANN). For this calculation the Word ANN was used; this decision was based on our previous experience [5]. The input parameters were the conductivity and pH in the feed water and in the blowdown (see fig. 1), and the temperatures of the hot water from the condenser and of the cold water after the cooling tower. The Word ANN is a perceptron with one hidden layer (more information in [6]). The feature of such an ANN is a hidden layer divided into three parts. Each part has a unique transfer function, and there are 12 neurons in each part of the ANN. To check the ANN results, laboratory measurements were made. The results of the calculations and measurements are shown in fig. 2.
Figure 2. Results of calculations (calc.) and measurements (meas.) of SO42- concentration in
the cooling system.
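A minimal sketch of such a forward pass is given below, assuming tanh, logistic and linear transfer functions for the three 12-neuron parts (the paper does not specify which functions the Word ANN uses) and random untrained weights; it shows the structure only, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def word_ann_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer perceptron whose 36 hidden neurons
    are split into three 12-neuron parts, each with its own transfer function."""
    h = W1 @ x + b1
    h[:12] = np.tanh(h[:12])                    # part 1: tanh (assumed)
    h[12:24] = 1.0 / (1.0 + np.exp(-h[12:24]))  # part 2: logistic sigmoid (assumed)
    # part 3 (h[24:]) is left linear (assumed)
    return W2 @ h + b2                          # scalar output: SO4^2- estimate

# Six inputs: pH and conductivity of feed water and blowdown, hot/cold temperatures.
x = np.array([7.8, 450.0, 8.1, 1200.0, 35.0, 22.0])
W1, b1 = rng.normal(scale=1e-3, size=(36, 6)), rng.normal(size=36)  # untrained weights
W2, b2 = rng.normal(size=(1, 36)), rng.normal(size=1)
print(word_ann_forward(x, W1, b1, W2, b2)[0])
```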
The laboratory measurements were made every hour. The model shows acceptable results. Next, this calculation method needs to be tested on different plants with different cooling systems and different water chemistry.
2.5 Calculation of oil products concentration at the waste water discharge of a thermal power plant
The assessment of oil concentration in rain water can be used as a further example. At present, rain water treatment at power plants is obligatory. However, only partial treatment of rain water is implemented because of its huge volume. The percentage of treated waste water is chosen to provide the required quality of the total discharged water according to the standards and guidelines. Nevertheless, a rainfall volume higher than the calculated one is still possible. Thus it is essential to know the impurity concentrations in waste waters even at the pre-project stage. Here we have two variable parameters:
 the amount of rain water;
 the oil concentration in rain water.
Let us start solving this problem with the oil concentration. The probability of occurrence of oil concentrations before treatment is presented in fig. 3. These data are based on measurements from several power plants in the Moscow region. From figure 3 the most probable concentrations can be determined, along with the most favorable (0.05 mg/dm3) and the most unfavorable one (1 mg/dm3). Using the mathematical model, for every concentration before the waste water treatment facilities it is possible to calculate the corresponding value in the discharged waters. Let us consider the two most probable concentrations: 0.2 mg/dm3 and 0.25 mg/dm3.
Fig. 3. The probability of oil concentrations before the waste water treatment system
Suppose that 90% of the rain water is treated and 10% is discharged, where 100% is the maximum rain water volume according to the standards and guidelines for the design of rain water treatment systems. The correlation between the concentrations in waste waters and the excess rainfall volume (compared with the guidelines) is presented in figure 4, which shows the results for the two selected cases.
Fig. 4. Correlation between impurity concentrations in waste waters and excess rainfall volume (compared with the predicted one)
Fig. 4 shows that at a 15% rainfall excess the concentration in waste waters is higher than the standard in both cases. The same assessment is possible for other water facilities. The obtained results help eliminate the problems of existing systems.
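A minimal mixing-balance sketch of this scenario is given below: at the design rainfall 90% of the flow is treated and 10% bypasses, and any rainfall above the design volume also bypasses untreated. The treated-water concentration of 0.05 mg/dm3 is an assumed placeholder.

```python
def discharged_concentration(c_raw, excess, treated_frac=0.9, c_treated=0.05):
    """Mixing balance for the total discharge: at the design rainfall a fraction
    treated_frac of the flow is treated (down to c_treated mg/dm3), the rest
    bypasses, and any rainfall above the design volume also bypasses untreated."""
    design_flow = 1.0                     # design rainfall volume, normalized
    total_flow = design_flow * (1.0 + excess)
    treated = treated_frac * design_flow
    bypass = total_flow - treated         # the untreated share grows with the excess
    return (treated * c_treated + bypass * c_raw) / total_flow

for c_raw in (0.2, 0.25):                 # the two most probable raw concentrations
    for excess in (0.0, 0.15, 0.30):      # rainfall excess over the design volume
        c = discharged_concentration(c_raw, excess)
        print(f"c_raw = {c_raw} mg/dm3, excess = {excess:.0%}: discharge = {c:.3f} mg/dm3")
```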
3. Conclusion
Modern computer-based chemistry control systems running applications with mathematical models can expand the ability of industrial plant personnel to prevent additional discharges of waste water. Mathematical modeling of the processes at an industrial site can help minimize the negative impact on the environment. Modern mathematical methods implemented in programs for personal or industrial computers can improve waste water control, and further research and development in this field is needed. The first results of the ANN applications show the promise of this method in chemistry control systems for predicting the water chemistry state or calculating impurity concentrations. It must be said, however, that in all the cases presented in this paper the accuracy of the ANN-based models is lower than that of models based on equations like (1). On the other hand, an ANN may be used in cases where such mathematical modeling is difficult for various reasons, such as insufficient data about the chemical process or a lack of chemistry analyzers.
Finally, it is necessary to say that all these models require reliable chemistry analyzers and representative samples for those analyzers.
4. References
[1] Solomon et al., Technical Summary, Section TS.5.3: Regional-Scale Projections, in IPCC AR4 WG1, 2007.
[2] A. Neville, Promoting sustainable water usage in power generation, Power. Business and Technology for the Global Generation Industry, Vol. 156, No. 4, April 2012, pp. 52-57.
[3] T. I. Petrova, D. A. Repin, Factors that have influence on the exploitation of cooling systems, Vestnik MPEI, No. 1, 2009, pp. 106-111 (in Russian).
[4] D. S. Smetanin, V. N. Voronov, Application of technological algorithms and mathematical modeling in cycle chemistry monitoring systems, Proceedings of ESDA 2006, 8th Biennial ASME Conference on Engineering System Design and Analysis, July 4-7, 2006, Torino, Italy.
[5] V. N. Voronov, P. M. Gotovtsev, D. S. Smetanin, Analysis of water chemistry by means of artificial neural networks, Thermal Power Engineering, Moscow: MPEI, No. 7, 2008 (in Russian).
[6] S. Haykin, Neural Networks: A Comprehensive Foundation, New Jersey: Prentice Hall, 1999.
232
Design on low-noise tire groove using the CFD simulation technique
Min-Feng Sung a,*, Yean-Der Kuan b, Shi-Min Lee c, Rong-Juin Shyu d
a,* No. 57, Sec. 2, Zhongshan Rd., Taiping Dist., Taichung 41170, Taiwan.
song221@gmail.com
b No. 57, Sec. 2, Zhongshan Rd., Taiping Dist., Taichung 41170, Taiwan.
ydkuan@ncut.edu.tw
c No. 151, Yingzhuan Rd., Tamsui Dist., New Taipei City 25137, Taiwan.
061503@mail.tku.edu.tw
d No. 2, Pei-Ning Road, Keelung 20224, Taiwan.
rjs@mail.ntou.edu.tw
Abstract
The number of automobiles is increasing, making tire groove design crucial. This study applied the computational fluid dynamics (CFD) technique to simulate tire groove designs and used acoustic software to analyze the sound pressure level (SPL). A simple numerical model can be used to map tire grooves, reducing simulation time. Finally, this study proposes a design that reduces tire groove noise by approximately 8 dB (RMS). These results can serve as a reference for future tire design research.
Keyword: Air-pumping, Computational fluid dynamics, Sound pressure level, Groove.
1. Introduction
Because the automotive industry is thriving and automobile costs are down, cars have
become more common. Consequently, reducing vehicle noise is a major topic of research for
automotive manufacturers. Technological innovation has enabled substantial improvements
regarding several crucial sources of noise, including running engines, intake and exhaust,
cooling fans, transmission system, and wind noise. Tire noise that is generated by contact
between the tire and the ground is a major source of noise [1]. Mak et al. [2] presented results
that are based on the close proximity method specified in draft ISO/CD 11819-2. These result
showed a proportional relationship between noise level and vehicle speed, and that peak noise
levels occurred at high-speed and high-acceleration/deceleration points. Consequently, the
Economic Commission for Europe (ECE) developed a new standard [ECE-R117] in 2009
261
stating that tires must comply with its stipulations within a specific period [3]. These
stipulations include fuel efficiency, wet grip performance, rolling resistance, and noise level.
Therefore, this study examines the relationship between the contact surface of the tire and the
ground. For tire grooves, different designs produce different pressure drops, potentially
generating noise. Kim et al. [4] used a simple model to investigate the tire effects.
Eisenblaetter et al. [5] proposed that the main objective of the rolling noise is air-pumping,
followed by air resonant radiation and pipe resonances. Similar to other tire noise analysis
methods, such as FEM, this simulation method must consider the structural and cavity modes
[6, 7]. Bueno et al. [8] proposed that pavement temperature influences tire and road noise,
and their results demonstrated that temperature causes a reduction in proximity sound levels
that were assessed at a rate of 0.06 dB (A)/°C. These results were used to validate the
proposed tire noise reduction methods.
2. Numerical simulation
2.1 Model Structure
The schematic of the contact surface of the tire is shown in Fig. 1. This simplified model was constructed using rectangles, and the boundaries were set as follows: free outlet (red line) and wall (black and white lines). Fig. 1(a) shows the full-sized tire; Fig. 1(b) shows the simplified tire. The full-sized tire diameter was 6270 mm, and the simplified tire diameter was 780 mm. To validate the difference between the full-sized and simplified models, the rotational speeds of both were set to 100 rad/min. In addition, to simulate the contact surface of the tire, this study set the tire center 1.7 mm lower than the ground, a value referenced from an ABAQUS simulation result. Fig. 2 shows the time and angle for each groove in contact with the ground. Because the angle between the grooves was 6° and the tires were sunk 1.7 mm below the ground, the total angle was 24°. The completion of each process was determined according to the conversion of the car speed (km/h); the speed was 27.8 m/s.
Fig. 1. The schematic of the contact surface of the tire: (a) full-sized model; (b) simplified model.
Fig. 2. Large-scale grooves of the full-sized tires.
Finally, the tire's groove pitch (0.033 m) was divided by the speed (27.8 m/s) to determine the total simulation time scale (0.001 s/groove). Because the total number of grooves was four, the process was completed in 0.004 s, and the total contact length was 33 mm.
2.2 Simulation model definition
2.2.1 Mesh definition
This study used FlowVision, commercial fluid dynamics software developed by Capvidia, for the simulations and analyses. This software uses a grid-generation technique called sub-grid geometry resolution (SGGR), which can simulate complex objects and processes, such as the contact between the tire surface and the ground [Fig. 3(a)]. In this study, the simulation model used a total grid number of 30,000. To enhance solution accuracy, this study used a split function to analyze the moving objects, such as the sectorial plate. These techniques were applied automatically during the simulation, as shown in Fig. 3(b). The grid was automatically remeshed at each step, while the original grid was retained and not removed. This approach reduced the simulation time and enhanced the solution accuracy.
Fig. 3. The grid-generation technique (SGGR) and automatic remeshing for the grooves.
2.2.2 Simulation Theory
Several numerical methods are in common use, most notably the finite element method (FE), the finite difference method (FD), and the finite volume method (FV). In this study, the CFD software used FV. Because FE has a fast solution speed, it has been widely used in various applications, such as automobiles, aviation, shipping, and industry, to solve deformation problems. Equation (1) is the basic formula of the FV method, where Ω is the cell
volume and ΔSi is the area of the i-th cell face.
$$\int_\Omega \nabla\cdot\mathbf{F}\,d\Omega = \sum_{i\,\in\,\mathrm{faces}} \mathbf{F}_i\cdot\mathbf{n}_i\,\Delta S_i \quad (1)$$
However, this study investigated the behavior of the tire's contact surface with the ground. Therefore, the simulated fluid was gas, and heat transfer was not considered. The units and numerical settings are listed in Table I. Equation (2) was used to solve the total pressure, where RA is the universal gas constant with a default value of 8.31. By balancing (2), it can be rewritten as (3), as follows:
$$P_{tot} = P_{abs}\left(\frac{T_{tot} + T_{ref}}{T_{abs}}\right)^{C_p/(R_A/m)} - P_{ref} \quad (2)$$
$$P_{abs} = \frac{\rho R_A T_{abs}}{m} \quad (3)$$
Equations (4) and (5) are the continuity equation and the momentum equation, respectively. Because this study simulated the air flow field, it used the gas energy equation (6), and because a turbulence model was considered, this study used (7) and (8).
$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho V) = 0 \quad (4)$$
$$\frac{\partial(\rho V)}{\partial t} + \nabla\cdot(\rho V \otimes V) = -\nabla P + \nabla\cdot\tau_{eff} + F \quad (5)$$
Here, $\tau_{eff}$ is the effective shear stress tensor.
TABLE I. THE UNITS AND NUMERICAL SETTINGS
Notation   Physical quantity                          Dimension
Cp         specific heat = 1008.69                    m2 s-2 K-1
F          acceleration of external volume force      m s-2
G          gravity acceleration                       m s-2
H          total enthalpy                             m2 s-2
k          turbulent energy                           m2 s-2
L          characteristic length                      m
Q          sum of the energy of different nature      m2 s-2
RA         universal gas constant = 8.31441           J mole-1 K-1
Ttot       total temperature                          °K
Tref       reference temperature = 273                °K
Tabs       absolute temperature                       °K
Tn         temperature value at time layer n          °K
m          molar mass = 0.02884                       kg mole-1
P          relative pressure                          Pa
Pref       reference pressure = 101000                Pa
Phst       hydrostatic pressure                       Pa
Pabs       absolute pressure                          Pa
Ptot       total pressure                             Pa
Prt        turbulent Prandtl number                   ----
Pn         pressure value at time layer n             Pa
Pn+1       pressure value at time layer n+1           Pa
µ          molecular dynamic viscosity                kg m-1 s-1
µt         turbulent dynamic viscosity                kg m-1 s-1
V          relative velocity                          m/s
Vn         velocity value at time layer n             m/s
ρn         density value at time layer n              kg m-3
ρn+1       density value at time layer n+1            kg m-3
ε          dissipation rate of turbulent energy       m2 s-2
β          coefficient of thermal expansion           °K-1
∂T         relative local specific                    kg s-2

$$\frac{\partial(\rho H)}{\partial t} + \nabla\cdot(\rho V H) = \nabla\cdot\left[\left(\frac{\lambda}{C_p} + \frac{\mu_t}{\mathrm{Pr}_t}\right)\nabla H\right] + \frac{\partial P}{\partial t} + \rho V\cdot F + Q \quad (6)$$
$$\frac{\partial(\rho k)}{\partial t} + \nabla\cdot(\rho V k) = \nabla\cdot\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\nabla k\right] + \mu_t\left(G + \frac{\beta}{\mathrm{Pr}_t}\, g\cdot\nabla T\right) - \rho\varepsilon \quad (7)$$
$$\frac{\partial(\rho\varepsilon)}{\partial t} + \nabla\cdot(\rho V \varepsilon) = \nabla\cdot\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\nabla\varepsilon\right] + \frac{\varepsilon}{k}\left[C_1\mu_t\left(G + \frac{\beta}{\mathrm{Pr}_t}\, g\cdot\nabla T\right) - C_2\rho\varepsilon\right] \quad (8)$$
The model parameters and the expression for the generating term G can be written as (9), (10), and (11):
$$G = D_{ij}\,\frac{\partial V_i}{\partial x_j} \quad (9)$$
$$D_{ij} = \mu_t S_{ij} - \frac{2}{3}\left(\mu_t\,\nabla\cdot V + \rho k\right)\delta_{ij} \quad (10)$$
$$S_{ij} = \frac{\partial V_i}{\partial x_j} + \frac{\partial V_j}{\partial x_i} \quad (11)$$
In addition, because of the movement of the plate in the X and Y directions, this study considered the Euler and Navier-Stokes equations and ignored viscosity. Equations (4), (5), (6), and (7) lead to a pressure equation, (12), at each time step. In (12), the tilded quantity is the intermediate field, and it can be rewritten as (13). In addition, because this study examined ideal-gas flow (air), equations (14), (15), and (16) apply, in which Pabs is given by (14) [8]. Finally, this study used the pressure values and the simulation time to define the SPL, which can be written as (17).
$$\tilde{\rho}^{\,n+1} - \rho^{n} + \nabla\cdot\left(\rho^{\,n+1} V^{\,n}\right) = \nabla\cdot\nabla\left(P^{\,n+1} - P^{\,n}\right) \quad (12)$$
$$\tilde{\rho}^{\,n+1} = \rho^{n} + \frac{\partial\rho}{\partial P}\left(P^{n}, T^{n}\right)\left(P^{\,n+1} - P^{\,n}\right) \quad (13)$$
$$P_{abs} = P_{ref} + P = \frac{\rho R_A T_{abs}}{m} \quad (14)$$
$$\frac{\partial\rho}{\partial P} = \frac{m}{R_A T_{abs}} \quad (15)$$
$$\tilde{\rho}^{\,n+1} = \frac{m\,\tilde{P}^{\,n+1}}{R_A T_{abs}^{\,n}} + \frac{m\,P_{ref}}{R_A T_{abs}^{\,n}} \quad (16)$$
$$\mathrm{SPL} = L_p = 10\log_{10}\left(\frac{P_{rms}^2}{P_{ref}^2}\right) = 20\log_{10}\left(\frac{P_{rms}}{P_{ref}}\right)\ \mathrm{dB} \quad (17)$$
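As an illustration of Eq. (17), the sketch below computes the SPL of a pressure time series. The trace is synthetic, and the standard acoustic reference pressure of 20 µPa is assumed for Pref (the paper does not state the reference value used).

```python
import numpy as np

def spl_db(p, p_ref=2e-5):
    """Eq. (17): SPL = 20*log10(P_rms/P_ref) in dB."""
    p_rms = np.sqrt(np.mean(np.square(p)))
    return 20.0 * np.log10(p_rms / p_ref)

# Hypothetical groove pressure trace (Pa) over the 0.004 s four-groove window.
t = np.linspace(0.0, 0.004, 4000)
p = 20000.0 * np.sin(2.0 * np.pi * 1000.0 * t)
print(f"SPL = {spl_db(p):.2f} dB")   # a ~20 kPa trace lands near the 170-176 dB range
```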
3. Simulation results
Fig. 4 shows the simulation results for the original groove and simple groove models. The original model was a full-sized tire, and the simple model used a 1/8-size tire. These results showed that the original model's maximum and minimum pressures were 29428 Pa and -367 Pa, while the simple model's maximum and minimum pressures were 28415 Pa and -810 Pa. The pressure curves were close, but the simulation time for the original model was 72 min versus 18 min for the simple model. Therefore, the development time for tire grooves can be reduced. Because the simple model mapped the original model well, the simplified model was used for the designs intended to reduce pressure values (Fig. 5). The depth of these grooves was 7.8 mm, but their geometric designs were completely different: Fig. 5(a) shows the circular model, and Fig. 5(b) shows the trapezoidal model.
However, these models share a commonality: they are narrow on the inside and wide on the outside (NIWO). Fig. 6 shows the pressure curves of the three designs. These results showed that the trapezoidal model's maximum and minimum pressures were 21651 Pa and -272 Pa, and the circular model's maximum and minimum pressures were 15461 Pa and -1027 Pa. Therefore, the NIWO groove design effectively reduces the internal pressure.
Fig. 4. The pressure results for the original and simple models.
Fig. 5. Circular (a) and trapezoidal (b) designs for tire grooves (units: mm).
Fig. 6. Pressure curves of the circular and trapezoidal designs.
The simulation results show that the circular design successfully reduced the pressure value; the cause is shown in Fig. 7. The groove design of the original model had uniform high-pressure streamlines and minor compression acreage [Figs. 7(a)-7(b)]. For the velocity results, Figs. 7(c)-7(d) show that the original design had a uniform high velocity greater than that of the circular design; therefore, the original design had a greater pressure value. When the grooves left the ground, as shown in Figs. 7(e)-7(f), the original design produced a large vortex, but the circular design produced a small vortex. Therefore, the grooves had large pressure drops.
Figs. 7(a)-(f). Streamline and velocity results for the circular and trapezoidal designs (units: pressure in Pa, velocity in m/s).
Fig. 8. The SPL results for the original and modified designs.
Fig. 8 shows the single-point SPL results for the original and modified designs. The maximum SPLs for the original and circular models were 175.66 and 170.42 dB, respectively. However, the SPL results represent only interior noise values, in dB (RMS). When the noise reaches the ear, it decreases by approximately 20%-30% because of environmental conditions such as air temperature and wind speed. Therefore, this SPL value was used as a reference for the noise created by the contact between the tire surface and the ground.
4. Conclusion
This study designed low-noise tire grooves using CFD simulation. The NIWO groove designs successfully reduced the internal pressure; among them, the circular groove design gave the largest pressure reduction. However, because of the limitations of the machining process, it is difficult to produce circular grooves, whereas a simple trapezoidal design can be easily manufactured. Finally, this study presented a simple model for quickly analyzing tire groove noise. Although this study used a 2D method, future studies can use a 3D model and CGNS output data to determine the various noise positions.
5. Acknowledgment
The authors gratefully acknowledge the financial support provided for this study by the Ministry of Economic Affairs, Taiwan (No. 100-EC-17-A-05-S1-168).
6. References
[1] F. Anfosso-Lédée, Y. Pichaud, "Temperature effect on tyre-road noise," Applied Acoustics, Vol. 68, 2007, pp. 1-16.
[2] K. L. Mak, S. H. Lee, K. Y. Ho, W. T. Hung, "Developing instantaneous tyre/road noise profiles: A note," Transportation Research, Vol. 16, 2011, pp. 257-259.
[3] http://www.unece.org/
[4] S. Kim, W. Jeong, Y. Park, S. Lee, "Prediction method for tire air-pumping noise using a hybrid technique," J. Acoust. Soc. Am., Vol. 119, 2006, pp. 3799-3812.
[5] J. Eisenblaetter, S. J. Walsh, V. V. Krylov, "Air-related mechanisms of noise generation by solid rubber tyres with cavities," Applied Acoustics, Vol. 71, 2010, pp. 854-860.
[6] B. S. Kim, G. J. Kim, T. K. Lee, "The identification of sound generating mechanisms of tyres," Applied Acoustics, Vol. 68, 2007, pp. 114-133.
[7] M. Brinkmeier, U. Nackenhorst, S. Petersen, O. von Estorff, "A finite element approach for the simulation of tire rolling noise," J. of Sound and Vibration, Vol. 309, 2008, pp. 20-39.
[8] M. Bueno, J. Luong, U. Viñuela, F. Terán, S. E. Paje, "Pavement temperature influence on close proximity tire/road noise," Applied Acoustics, Vol. 72, 2011, pp. 829-835.
[9] FlowVision 3.08.01 User's Manual, Capvidia Inc., 2011.
301
An Exploratory study of Green Supply Chain in SMEs Cluster in India
Shubham Gandhi a,*, Abhishek Dwivedi b, Aishwarya Mor c
a 3rd Year Environmental Engineering, Delhi Technological University, Delhi, India
E-mail address: shubhamgandhi14@gmail.com
b 4th Year Electronics and Comm. Engineering, Delhi Technological University, Delhi, India
E-mail address: abhishekdwivedi604@gmail.com
c 3rd Year Civil Engineering, Delhi Technological University, Delhi, India
E-mail address: aishwarya4391@gmail.com
Abstract
Over the past decades India has experienced unprecedented industrial growth, with a focus on the development of small and medium enterprises (SMEs). While this increased industrial activity has brought positive economic benefits to the region, it has also caused negative environmental effects. Pressures may thus potentially arise from regulators, supply chain partners, competitors and the market (consumers and customers). We present our case through rigorous research carried out at the Bawana Industrial Area in the National Capital Region of India. Our study focuses on quantitative details of the green supply chain and logistics management techniques employed there, reflecting on the situation, identifying the barriers and drivers to GSCM (actors), and describing the processes observed in the SMEs. Finally, a Learning-Action-Performance model is presented to capture the factors required to imbibe GSCM in SMEs as actions, and the impact of these actions is shown as performance.
Keyword: SAP-LAP models, Green supply chain management, SMEs, Industrial clusters
1. Introduction
GSCM has been broadly described by Srivastava [1] as: integrating environmental thinking into supply chain management, including product design, material sourcing and selection, manufacturing processes, delivery of the final product to the consumers, as well as end-of-life management of the product after its useful life.
In developing countries like India the issues related to GSCM have become even more critical. Recent studies have shown that a majority of the world's manufacturing will be carried out in Asia in the next couple of decades. As a major manufacturing country, India has many opportunities, but it also faces substantial environmental burdens with this opportunity (Rao [2]). Moreover, developing countries such as India are becoming increasingly industrialized. The appropriate development of GSCM concepts and practices may indeed aid these countries by lessening the environmental burden of both manufacture and disposal of products, while even potentially improving their economic positioning.
2. Main Body
2.1 Research Methodology
The research methodology adopted is a mix of techniques: interviews, focus groups, seminars and expert interviews. A structured questionnaire was framed; its questions used a five-point Likert scale. For the purpose of conducting the research, industries were visited in person. The seminars and meetings brought together entrepreneurs and businessmen from the Bawana industrial area and experts from academia. The aim was to discuss and lay out the prevalent barriers to implementing GSCM in the Bawana Industrial Area and, if possible, to formulate a path toward their solution. The inputs provided by entrepreneurs, workers, academicians and government officials were recorded and analyzed. Thereafter all the inputs were organized in the SAP-LAP framework: the situation to be dealt with, the actor or actors who deal with the situation, and the process or processes that recreate the situation are all analyzed in the SAP framework. A synthesis of SAP leads to LAP, which deals with Learning, Action and Performance (Sushil [3]). The McKinsey 7S model was used to determine how best to apply the devised plan.
2.2 Literature Review
Suppliers' reputation: Many buying firms are demanding that their suppliers implement green supply chain management (GSCM) practices and fulfill additional environmental requirements (Lee [5]). However, suppliers are generally concerned with cost, quality and delivery, while environmental safety has been given a lower priority. In contrast, manufacturers may list environmental safety and improvement as a major priority, since manufacturing firms may need to consider their own environmental goals, social responsibilities and reputation with consumers. No such practices were observed among the SMEs at the Bawana industrial cluster; they generally keep their traditional suppliers of past times, who fulfill hardly any environmental demands.
Information Technology: IT can be very useful for product development programs encompassing design for the environment, recovery and reuse. Efficient information systems are needed for tracking and tracing product returns and linking them with previous sales. IT is required to handle the information flows associated with both the forward and backward flow of materials and other resources in order to manage the green supply chain efficiently (Ravi & Shankar [6]). Also, IT enablement reduces a lot of paper usage, which supports the GSCM philosophy. Hence, lack of IT implementation is an important barrier to achieving efficient GSCM.
Total Quality Management (TQM): TQM is an approach that seeks to improve quality and
performance which will meet or exceed customer expectations. TQM looks at the overall
quality measures used by a company including managing quality design and development,
quality control and maintenance, quality improvement, and quality assurance. Zhu and
Sarkis [7] indicate that quality management lubricates the implementation of GSCM. By
receiving certification to the ISO 14001 environmental management system (EMS)
standard, organizations are able to create structured mechanisms for continuous
improvement in environmental performance. ISO accreditation was observed in only a few
SMEs, and only a few hold ISO 14001; this also points to a lack of top-management
commitment to delivering the best quality.
Information and resource sharing: Inter-organizational knowledge sharing in green supply
chains involves transferring or disseminating green knowledge from green manufacturing
firms to their partners with a view to developing new capabilities for effective action. To
achieve the benefits of inter-organizational knowledge sharing, it is essential for all the
parties involved to be in cooperative relationships. With effective knowledge sharing, the
strategic intent of inter-organizational collaboration for a sustainable competitive advantage
can be achieved by combining the relevant organizational resources and capabilities of all
parties; the value created by collaborative supply chains benefits all of them.
One important insight from Resource Development Theory is that firms lacking the
resources required to attain their goals are likely to develop relationships with others to
acquire those resources. In many instances, an inter-organizational relationship is essential
for managing the internal and external coordination of GSCM to gain the performance
outcomes (Zhu et al. [8]), where partner coordination and resource sharing are beneficial for
environmental and productivity improvements.
Interaction with customers: Cooperation with suppliers and customers has become
extremely critical for organizations to close the supply chain loop (Zhu et al. [9]). Because
customer interaction seems costly and time-consuming, some companies simply do not
bother. But as the pressure to be more profitable grows, supply chains will not be able to
afford the excess inventory, lost sales and missed innovation opportunities caused by
inadequate customer collaboration. The same was observed here.
GSCM practice implementation and employee satisfaction: These may have a similarly
positive relationship, as personnel on the production lines feel safer and are more satisfied
with their operations and working environment when unsafe toxic materials are removed
from the production process.
Inventory planning and management: Optimizing the entire supply chain enables
manufacturers to create an inventory strategy that reduces safety stock requirements,
decreases unnecessary production of goods, lowers inventory targets and reduces inventory
in every echelon of the supply chain. Inventory is produced as needed, without excess, so
inventory lost to spoilage, shrinkage and obsolescence decreases and unnecessary products
are not produced. Materials that would otherwise have to be recycled are simply never
consumed, so profits increase.
Quality HR and HR departments: A company with higher-quality human resources, through
better training or education, will find it easier to implement green supply chain
management. Quality human resources can provide new ideas for companies, learn new
technologies easily, share knowledge with each other and use new technologies to solve
problems (Yu Lin & Hui Ho [10]). However, due to financial constraints, the quality of
human resources is limited; poor quality of human resources and the lack of common HR
departments are therefore important barriers to implementing GSCM in SMEs.
Adaptation to changing technologies: Technology is a kind of knowledge. An organization
with rich experience in the application and adoption of related technologies will have a
higher ability in technological innovation (Grant [11]). Organizations' resistance to
adopting technological advances is, at root, resistance to change.
Training and Development Facilities: Training and education are prime requirements for
achieving successful implementation of GSCM in any organization. Management may
encourage employees to learn about green practices, provide rewards for green employees,
help employees when they face green problems, and provide support for learning green
information (Hsu & Hu [12]).
Crime and infrastructure related problems: In addition to the lakhs of man-hours that labour
loses in transit time during travel, raw materials and finished goods need more
transportation time and hence incur more cost. As far as the maintenance of law and order is
concerned, SMEs in Bawana in general suffer from alarming rates of crime, civil threat and
a critically hazardous environment. These issues are therefore of major concern for the
growth of a newly established industrial cluster.
Situation
Market leaders in various industries have taken a step ahead to green their internal
operations by implementing green supply chains. Jaffe et al. [4] conclude that there is little
evidence to support the proposition that environmental regulations damage competitiveness.
The international recession is likely to influence the usage of green technology, as
entrepreneurs are more likely to focus on profit making than on the environment. One of the
initial perceptions about introducing green products in the market is that they lead to a
higher cost of manufacturing compared to conventional ones. The growing importance of
GSCM has been substantially driven by increasing environmental concerns, such as the
environmental pollution accompanying industrial development, diminishing raw material
resources, overflowing waste sites and increasing levels of pollution. At the same time,
government regulations, changing consumer demands and the development of international
certification standards have concurrently led companies to look at GSCM initiatives with
increasing attention.
Alarming Figure of Dismal Performance (Actor-Processes)

Dimensions | Mean | Mode | Median
Information technology and management | 2.526316 | 2 | 2
Quality management and accreditation | 1.473684 | 1 | 1
Interaction with customers | 2.052632 | 2 | 2
Information and resource sharing among SMEs | 1.684211 | 1 | 1
Inventory management and planning | 1.368421 | 1 | 1
Suppliers reputation | 3.947368 | 4 | 4
HR quality | 2.052632 | 2 | 2
Training and development | 1.631579 | 1 | 1
Employee-labour retention | 1.315789 | 1 | 1
Lack of infrastructure | 1.526316 | 1 | 1
Adaptation to changing technology | 2.263158 | 1 | 2
Relationship with Govt. organizations | 2.157895 | 1 | 2
Risk mitigation techniques | 2.631579 | 2 | 3
Scheduling and assembly techniques | 2.315789 | 3 | 3
Volume and process flexibility | 2.631579 | 1 | 2
Consistency of raw materials | 1.789474 | 1 | 2
Focus on environmental issues | 3.421053 | 3 | 3
Figure 1 shows the LEARNING-ACTION-PERFORMANCE (LAP) model for implementation
of GSCM in SMEs. We see that pressure is potentially arising from regulators, supply chain
partners, competitors and the market (consumers and customers). SMEs have inadequate,
weak environmental strategies and environmental management techniques that are
ineffective in dealing with environmental management issues. Their lack of environmental
management capability can have a negative impact on financial performance and brand
image and may lead to losing competitiveness in global markets.
[Figure 1: The SMEs LAP (Learnings-Actions-Performance) model for GSCM
implementation. Actions include: collaborative physical logistics (shared transport, shared
warehouses, shared infrastructure); reverse logistics (product recycling, packaging recycling,
returnable assets); demand-fluctuation management (joint planning, execution and
monitoring); a joint scorecard and business plan; reduction/avoidance of transport; increased
capacity utilization in transportation; greater use of mass transportation where suitable;
recycling/reusable packaging; reduced use of resources (water and energy); use of alternative
fuels; and use of newer, less polluting vehicles. Performance outcomes include: mitigated
business risk through differentiation from competitors; transformation into industry leaders,
building credibility with stakeholders and attracting investors; reduced operating costs;
motivated, better-performing suppliers who become preferred vendors in the green supply
chain; attraction of consumers in the rapidly growing green marketplace; preserved business
continuity by attracting top job candidates; enhanced employee satisfaction; publicity with
local, regional or even national media; new frontiers of excellence leading to consumer
satisfaction; foreign investment attracted to the manufacturing sector; and compliance with
ISO environmental standards.]
2.5 Impact Factors and Resultant Outcomes
3. Conclusions
Environmental issues have become a major concern for businesses, including SMEs.
Therefore, efficient actions need to be designed to alleviate these issues. However, the
proper design of such actions requires a proper understanding of the steps needed towards
sustainability, as well as of the barriers and obstacles facing greening activities. Much
research and effort is still needed to support the evolution of business activities towards
sustainable development. This paper is an attempt to clarify the path towards that end and
suggests steps to be taken by business organizations to make sustainable development a
reality.
4. Recommendations
Align green initiatives with the strategic objectives of the company. Companies need to
reinforce to suppliers that good environmental performance is integral to their business
operation and that this level of performance will be considered when selecting suppliers. An
increasing number of governments have started to promote voluntary actions by private
corporations to achieve their environmental goals. The popularity of this approach stems
from the fact that voluntary actions often are more acceptable to the private sector than
prescriptive mandates or economic instruments like pollution taxes and emissions trading.
One of the more widely used voluntary actions involves an environmental management
system (EMS). Industrial facilities that adopt EMS systematically develop an environmental
policy, evaluate their internal processes that affect the environment, create objectives and
targets, monitor progress, and undergo management review. In particular, ISO 14001, the
EMS standard designed by the International Organization for Standardization (ISO), has
received growing attention.
5. Acknowledgement
We would like to express our special thanks and gratitude to Prof. P.B. Sharma, Hon'ble
Vice Chancellor, Delhi Technological University, for his guidance and support. We also
thank Shri P.C. Jain, Head, Bawana Chambers of Industries, Shri Ashish Sharda and the
BID-DTU team for their indispensable support in carrying out the research work.
6. References
[1] Srivastava, S.K. (2007), “Green supply-chain management: a state-of-the-art literature
review”, International Journal of Management Reviews, Vol. 9 No. 1, pp. 53-80.
[2] Rao, P. (2002), “Greening the supply chain: a new initiative in South East Asia”,
International Journal of Operations & Production Management, Vol. 21 No. 6, pp.
632-55.
[3] Sushil, (2000),"SAP-LAP models of inquiry", Management Decision, Vol. 38 Iss: 5 pp.
347 – 353.
[4] Jaffe, A.B., Peterson, S., Portney, P. and Stavins, R. (1995), “Environmental regulation
and the competitiveness of US manufacturing: what does the evidence tell us?”, Journal
of Economic Literature, Vol. 33 No. 1, pp. 132-63.
[5] Lee, K. (2009), “Why and how to adopt green management into business
organizations?”, Management Decision, Vol. 47 No. 7, pp. 1101-21.
[6] Ravi, V., & Shankar R. (2005). Analysis of interactions among the barriers of reverse
logistics. International Journal of Technological Forecasting & Social change, 72(8),
1011-1029.
[7] Zhu, Q. and Sarkis, J. (2004), “Relationships between operational practices and
performance among early adopters of green supply chain management practices in
Chinese manufacturing enterprises”, Journal of Operations Management, Vol. 22 No. 3,
pp. 265-289.
[8] Zhu, Q., Geng, Y. and Lai, K.H. (2010), “Circular economy practices among Chinese
manufacturers varying in environmental-oriented supply chain cooperation and the
performance implications”, Journal of Environmental Management, Vol. 91 No. 6,
pp. 1324-1331.
[9] Zhu, Q., Sarkis, J. and Lai, K. (2008), “Confirmation of a measurement model for green
supply chain management practices implementation”, International Journal of
Production Economics, Vol. 111 No. 2, pp. 261-73.
[10] Yu Lin, C. (2007). Adoption of green supply in Taiwan logistic industry. Journal of
management study, 90-98.
[11] Grant, R.M. (1996), “Prospering in dynamically-competitive environments:
organizational capability as knowledge integration”, Organization Science, Vol. 7 No. 4,
pp. 375-387.
[12] Hsu, C.W., & Hu, A.H. (2008). Green Supply Chain Management in the Electronic
Industry. International Journal of Science and Technology, 5(2), 205-216. ISSN:
1735-1472.
Material Science and Engineering
14:45-16:45, December 15, 2012 (Meeting Room 5)
Session Chair:
186: Parameters Affecting Voids Elimination in Manufacturing of Polyester
Impregnated High Voltage Automotive Ignition Coils
Ahmad Nawaz
U.E.T Peshawar
Sahar Noor
U.E.T Peshawar
Bilal Islam
U.M.T Lahore
272: Nanostructured Metal Oxides for Dye Sensitized Solar Cell from Transparent
Conductors to Photoanode Materials
Ghim Wei Ho
National University of Singapore
305: Performance Evaluation and Preventive Measures for Photo-thermal Coupled
Aging of Different Bitumens
Zhengang Feng
Wuhan University of Technology
Jianying Yu
Wuhan University of Technology
Bo Zhou
Wuhan University of Technology
308: Platinum Nanoparticles/Graphene Composite Catalyst as A Novel Composite
Counter Electrode for High Performance Dye-Sensitized Solar Cells
Chen-Chi Ma
National Tsing Hua University
Li-hsueh Chang
National Tsing Hua University
318: Recovery of Valuable Metal from Waste-computer
Busayamas Phettong
Prince of Songkla University
Pitsanu Bunnaul
Prince of Songkla University
Manoon Masniyom
Prince of Songkla University
329: Photocatalytic Degradation of Acid Orange 7 by ZnO and Sm-Doped ZnO
Pongsathorn Sathorn
Kasetsart University
Nattiya Reungtip
Kasetsart University
Apisit Songsasen
Kasetsart University
335: Preparation and Properties of Layered Double Hydroxides/SBS Modified Bitumen
for Waterproofing Membrane
Song Xu
Wuhan University of Technology
Zhengang Feng
Wuhan University of Technology
Jianying Yu
Wuhan University of Technology
Lian Li
Wuhan University of Technology
186
Parameters affecting Voids Elimination in Manufacturing of Polyester
Impregnated High Voltage Automotive Ignition coils
Ahmad Nawaza, Sahar Noora, Bilal Islamb
a
Industrial Engineering Department,
U.E.T Peshawar, Peshawar, Pakistan
Email: nawaz.ngnr@gmail.com
Email: sahar@nwfpuet.edu.pk
b
Industrial Engineering Department,
U.M.T Lahore, Lahore, Pakistan
Email: bilalislam77@gmail.com
Abstract
Bubble formation is a significant and crucial problem in the polyester used for
laminating/impregnating electrical and power equipment. Void formation increases the
danger of corona occurring in the final product sold in the market. This paper studies the
different factors that affect void elimination in electrical impregnation, separately and in
combination, in the potting of ignition coils at a manufacturing unit. It was found that
constantly heating the mold at a constant temperature combined with mixing/shearing gives
very substantial results for room-temperature-curing polymers. These results are especially
useful in the electrical equipment and machinery industry, which uses polyester lamination
as a prime technique for its products.
Keywords: Voids elimination, Electrical impregnation, Potting, Polyester lamination
1. Introduction
Polyesters are basically a category of polymers containing the ester functional group.
Polyester is a basic chemical of enormous significance at the industrial level: most related
industries make products in which polyester is the basic ingredient, such as fabrics, plastic
bottles, film insulation for wires, and certain automobile parts like ignition coils, rectifiers
and CDI units.
Polyester has certain surface-finishing problems after settling in products such as ships and
rotor blades (parts manufactured by RTM (Resin Transfer Molding), VARTM (Vacuum
Assisted Resin Transfer Molding) and liquid injection molding), which affect the
mechanical strength and properties of the product. Our focus, however, is on electrical
impregnation, although we take guidance from the techniques scientists have used to
eliminate bubbles from RTM, VARTM, liquid injection molding and similar processes.
In the electrical lamination case, the problem is the corona effect. Lamination/impregnation
is very important in high-voltage electrical parts because it prevents the high voltage from
igniting with air (in wet conditions), dust or other particles in the air.
The difference between the two cases is that in the first we have a mechanical-strength or
surface-finishing problem, in which the part or product fails due to void propagation inside
the product.
When voids form inside the lamination or impregnation, air or moisture particles become
oppositely charged; as a result, when the ignition switch circuit is completed, the coil sparks
inside the lamination instead of delivering the spark to the engine or boiler, and the coil
fails by burning its windings. For this reason it is essential to laminate, i.e. impregnate,
high-voltage electric parts. Our focus is to find the factors or parameters influencing void
formation and to check the effect of each factor. The problem addressed is the high rate of
void formation in the laminated small-sized electrical parts that companies were making in
Pakistan: the electrical properties of the products were degraded by air entrapped inside the
polyester, which was causing product failure.
Sharma and Agarwal [1] stated in their review that VPI (Vacuum Pressure Impregnation) is
the best process for manufacturing impregnated equipment. We do not investigate its
pressure, temperature and other conditions here, because VPI is unsuitable and very
uneconomical for small or lightweight impregnated equipment, which is our specific case;
our focus is entirely on the factors involved in production by the potting method, the most
suitable method for lot production of ignition coils and most related power equipment.
Olivier [2] stated that volatiles arising from the resin system, i.e. air evolving from inside,
agglomerate with other voids, making them bigger and resulting in larger voids; he also
stated that an increase of temperature increases void formation. Labordus [3], [4]
investigated and found that the main cause of bubble formation is outgassing of the resin.
Some bubble-nucleating materials such as UnifiloTM exhibit better bubble-nucleation
properties. According to his findings, the low viscosity achieved by heat treatment can
improve the degassing process, whereas bubble-nucleation agents solve the problem only up
to about 50%; moreover, the lower the velocity, the longer the cycle time and the slower the
production rate. Afendi, Banks and Kirkwood [5], in a conference paper on marine
composites at Plymouth, studied the effect of viscosity on the resin and hardener solution.
They conducted an experiment with two resins from two different companies, one having a
much higher viscosity than the other, and found that viscosity plays a vital role in degassing
the resin: the higher the viscosity, the longer the degassing takes. They then added a
bubble-nucleation agent (Scotch-Brite) to the resin and hardener solution and reached the
same conclusion: the solution with the higher viscosity takes much longer to degas. Erhard
Haeuser [6] examined ultrasonic waves for continuous degassing and developed a
mechanism for casting resins, but this method also fails in our case because the resin
overflows or spills out of the mold or capsule, so we do not experiment further with this
factor.
Recent research [7] on the Vacuum Assisted Resin Transfer Molding (VARTM) process
studied different factors for preparing high-quality composite parts. A series of experiments
tested the effect of vacuum pressure, inlet pressure and molding temperature on the void
content. Those tests targeted VARTM, whereas we test different factors in the impregnation
of small-sized electrical equipment without resorting to uneconomical vacuum-based
methods.
2. Case Study
After selecting an automobile company for case studies ignition coil was selected as a small
size product in which lamination was to be done by polyester. So that we can get accurate
results according to a certain procedure in potting of ignition coils. Polyester used was cross
linking polyester of Descon which is cured at room temperature.
One-Factor-at-a-Time Approach
To find the factors that have a significant, result-oriented effect on the surface finishing of
the polyester, we apply a one-factor-at-a-time approach: only one factor is altered or varied
while the remaining factors are kept constant. Two types of factors are involved:
controllable and non-controllable. The controllable factors include the temperature given to
the polyester solution, the type of resin, the hardener quantity, the pouring method (after the
solution is prepared), the response to vibration, and the response to shear/mixing. No
uncontrollable factors were identified from the literature review or from experimentation.
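As a minimal illustration of the bookkeeping behind this approach, the sketch below computes the percentage void formation per lot for the temperature series; the lot labels and counts mirror Table 1, while the helper function itself is our illustrative addition.

    # Hypothetical helper for the one-factor-at-a-time series: each lot varies
    # one factor (here the polyester temperature before pouring) while all
    # other factors are held constant. Lot labels and counts mirror Table 1.
    def percent_voids(affected: int, lot_size: int = 1000) -> float:
        """Percentage of coils in a lot rejected for void formation."""
        return 100.0 * affected / lot_size

    temperature_series = {           # lot -> (temperature in C, affected coils)
        "A1": (25, 352), "A2": (30, 333), "A3": (35, 310), "A4": (40, 296),
        "A5": (45, 285), "A6": (50, 278), "A7": (55, 270), "A8": (60, 255),
    }

    for lot, (temp_c, affected) in temperature_series.items():
        print(f"{lot}: {temp_c} C -> {percent_voids(affected):.1f}% voids")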
The effects of the different factors, and the corresponding results, obtained using this approach are summarized below.
i. Temperature before Infusion
The temperature of the polyester solution before pouring into the capsule was raised in 5 °C
intervals and the results were noted; the percentage of coils affected by void formation
decreased. The experiment was conducted in a room whose temperature was maintained at
25 °C in both summer and winter. It was found experimentally that 55 °C is the appropriate
limit: it causes little harm to the surrounding workers (proper masks should still be
provided), provided a proper cooling system is in place. Beyond 55 °C the evaporation of
polyester fumes is so high that it causes health problems for the surrounding workers; in
addition, the volume available to impregnate the ignition coil decreases, causing a loss of
material. The polyester used in all cases is from Descon Chemicals. The relation is shown in
Graph 1 and Table 1.
ii. Resin & Hardener Quantity
By increasing the hardener quantity in the solution we got the following results not at all
satisfactory instead the processing time was increased. From the data it is concluded that
hardener or resin quantity is not a significant factor in the impregnation case. It should be
used according to manufacturer’s recommendations. These experiments were conducted at 55
C at a viscosity of 110 Sec measured from viscosity cup. Because in first experiment it was
found to maintain temperature up to 55 C beyond this temperature evaporation rate is
increased causing problem for the workers. Consult Table2
iii. Response to Vibration
The response of void elimination in the lamination of ignition coils and small transformers
was tested during processing. The motivation was earlier research concluding that applying
vibration would eliminate or minimize bubble formation. A fixture was made and fitted to a
machine that delivered forced vibrations to the object placed on it; the capsule was placed
on this fixture and the polyester solution was poured into it. The results were worse than the
normal situation: vibration had a very negative impact on void elimination, and more air
was entrapped in the lamination. See Table 3.
iv. Mold/Capsule Temperature
Following the finding that heating the polyester reduces voids, continuous heat was applied
to the capsule/mold until the polyester cured and the product was manufactured. To
maintain the mold temperature, a system of LEDs was installed in the windings of the
whole lot of ignition coils placed inside the mold; as current passed through the LEDs, the
electrical energy converted to heat and warmed the polyester. The temperature rises slowly
and becomes constant at one stage. As the temperature of the polyester rises, the entrapped
air gains kinetic energy and some of it escapes the impregnation/lamination; the air that
remains becomes entrapped at the edges and forms bigger bubbles/voids, as also indicated
by Olivier [2]. Mold temperature reduces bubble formation from 25% to 15%, so it is a
significant factor in void elimination. See Table 4, Table 5 and Graph 2.
v. Response to Shearing/Mixing
Here a shear force was applied to the polyester solution with a rod, without heating the
mold. The results were satisfactory to a certain extent: void formation was in the range of
180 coils out of 1000, i.e. 18% of the full lot, so shearing is also a significant factor in void
elimination. Mixing was done for different time intervals on the whole lot, with the first
shearing done as soon as the polyester was poured. See Table 6.
Two-Factors-at-a-Time Approach
If the desired results are not achieved with the one-factor approach, the significant factors
are combined to obtain better results with a two-factors-at-a-time approach. Here the two
significant factors, mold temperature and mixing, are combined.
vi. Response to Shear/Mixing at a Constant Mold Temperature
This was done on the assumption, from the earlier conclusions, that while a constant mold
temperature heats the polyester some air evolves but some becomes entrapped in the upper
layers, forming bigger bubbles; if mixing is applied, the layers are shifted up and down,
providing a path for the air to escape into the environment and giving a void-free
impregnation. When the experiments were done, voids were found in just 25-30 coils out of
a lot of 1000, and the assumption was proved as guessed. See Table 7 and Graph 3.
3. Results & Discussions
Results are being given in the form of graphs and tables based on the experimental results.
Results clearly show the influencing factors that eliminate voids from the capsule.
S.No | Lot No | Temperature | Heating Time | Total Lot Qty | Voids Formed (pieces) | % Void Formation (coils/lot) | Viscosity (viscosity cup)
1 | A1 | 25 °C | 5 min | 1000 | 352 | 35.2 | 510 s
2 | A2 | 30 °C | 5 min | 1000 | 333 | 33.3 | 480 s
3 | A3 | 35 °C | 5 min | 1000 | 310 | 31.0 | 402 s
4 | A4 | 40 °C | 5 min | 1000 | 296 | 29.6 | 321 s
5 | A5 | 45 °C | 5 min | 1000 | 285 | 28.5 | 189 s
6 | A6 | 50 °C | 5 min | 1000 | 278 | 27.8 | 126 s
7 | A7 | 55 °C | 5 min | 1000 | 270 | 27.0 | 110 s
8 | A8 | 60 °C | 5 min | 1000 | 255 | 25.5 | 88 s
Table 1: Void formation vs. temperature of the polyester before pouring
[Graph 1: % void formation in ignition coils per lot vs. before-pouring temperature; void formation falls steadily from 35.2% at 25 °C to 25.5% at 60 °C.]
S.No | Lot No | Hardener : Resin Ratio | Total Lot Qty | Affected Ignition Coils | % Void Formation (coils/lot)
1 | B1 | ¼ hardener : ¾ resin | 1000 | 266 | 26.6
2 | B2 | ½ hardener : ½ resin | 1000 | 279 | 27.9
3 | B3 | ¾ hardener : ¼ resin | 1000 | 259 | 25.9
Table 2: Effect of the resin and hardener mixture on void formation
S.No | Lot No | Vibration Time | Total Lot Qty | Affected Ignition Coils | % Void Formation (coils/lot)
1 | C1 | 2 hrs | 1000 | 423 | 42.3
2 | C2 | 3 hrs | 1000 | 410 | 41.0
3 | C3 | 4 hrs | 1000 | 479 | 47.9
Table 3: Response of void formation to vibration
S.No | Lot No | Mold Temperature | Time
1 | D1 | 27 °C | 1 min
2 | D1 | 33 °C | 5 min
3 | D1 | 38 °C | 10 min
4 | D1 | 42 °C | 20 min
5 | D1 | 42 °C | 30 min
6 | D1 | 42 °C | 40 min
Table 4: Rise of the mold temperature with time; after a certain time it becomes constant
S.No | Lot No | Total Lot Qty | Mixing Time | Affected Ignition Coils | % Void Formation (coils/lot)
1 | E1 | 1000 | 10 min | 193 | 19.3
2 | E2 | 1000 | 20 min | 176 | 17.6
3 | E3 | 1000 | 40 min | 153 | 15.3
Table 6: Effect of applying shear on void formation
S.No | Lot No | Mold Temperature | Mixing Time | Lot Cycle Time | Lot Size | Affected Ignition Coils | % Void Formation (coils/lot)
1 | F1 | 42 °C | 10 min | 8.5 hrs | 1000 | 28 | 2.8
2 | F2 | 42 °C | 15 min | 8.5 hrs | 1000 | 26 | 2.6
3 | F3 | 42 °C | 20 min | 8.5 hrs | 1000 | 24 | 2.4
4 | F4 | 42 °C | 25 min | 8.5 hrs | 1000 | 21 | 2.1
5 | F5 | 42 °C | 30 min | 8.5 hrs | 1000 | 20 | 2.0
Table 7: Effect of mold/capsule temperature and shear applied simultaneously on void formation
[Graph 2: % void formation in ignition coils per lot at a constant mold temperature of 42 °C.]
[Graph 3: % void formation in ignition coils per lot vs. shear/mixing time (10-30 min) at a constant mold temperature of 42 °C; void formation declines from 2.8% to 2.0%.]
4. Acknowledgements
Our full acknowledgements to Prof. Engr. Javaid Iqbal, retired professor of the Electrical
Department, University of Engineering & Technology, Lahore, and his brother Engr. Abid
Iqbal, for giving us the chance to solve a problem in their industry in Lahore, a vendor
company to two-, three- and four-wheeler automobile assembly companies. The ignition
coil was one of their main products and had the problem of void formation.
5. Conclusion
It is concluded from the results that the before-infusion temperature and a constant mold
temperature are the factors that, combined, give satisfactory results in the potting of ignition
coils. Reducing voids from their original level of 25% to 2.5% is quite encouraging, and the
controllable significant factors affecting void elimination have been identified. This will
help in establishing a technique for eliminating air from the high volumes of polyester used
for laminating ignition coils and related equipment.
6. References
[1] Sharma C.P., Vadera K.L., Agarwal R.P. Impregnation of electrical machines with
solventless resins. Pigment & Resin Technology, 1993, Vol. 22(1), pp. 10-11.
[2] Olivier P., Cottu J.P., Ferret B. Effects of cure pressure and voids on some mechanical
properties of carbon/epoxy laminates. Composites, 1995, Vol. 26, pp. 509-515.
[3] Labordus M., Pieters M., Hoebergen A., Soderlund J. The causes of voids in laminates
made with vacuum injection. Proceedings of the 20th International SAMPE Conference,
Paris, 1999.
[4] Labordus M., Soderlund J. Avoiding voids by creating bubbles: degassing of resins for
the vacuum injection process. Center of Lightweight Structures, TUD-TNO, Delft (NL),
2001.
[5] Afendi M.D., Banks W.M., Kirkwood D. ACMC/SAMPE Conference on Marine
Composites, Plymouth, 2003.
[6] Haeuser E. Device & method for continuous degassing of casting resin. US Patent
5591252, 7 Jan 1996.
[7] Kedari V.R., Farah B.I., Hsiao K.-T. Effects of vacuum pressure, inlet pressure, and
mold temperature on the void content and volume fraction of polyester/e-glass fibre
composites manufactured with the VARTM process. Published by Sage on behalf of the
Journal of Composites, Dec 7, 2011.
272
Nanostructured metal oxides for dye sensitized solar cell from transparent
conductors to photoanode materials
Dr. Ghim Wei Ho
Assistant Professor
National University of Singapore
Singapore
Abstract
Two major components of great concern in dye-sensitized solar cells are the expensive and
brittle transparent conductors (TCs) and the indirect, “tortuous” electron-transport pathways
of conventional nanoparticle photoanode materials. Current TCs are industrially
manufactured using common semiconductor processing methods such as sputtering and
chemical vapour deposition (CVD); these processes require extensive vacuum systems and
clean-room setups that are costly to purchase and maintain. Herein, we have devised a novel
low-cost DSSC based on the replacement of indium tin oxide (ITO) with a solution-based
fabrication of Ga:ZnO TC [1, 2]. Transparent conducting Ga:ZnO films were synthesized
directly on glass substrates via a low-temperature aqueous route for application in
dye-sensitized solar cells. Preliminary electrical and optical characterization of the films
showed that 85% transparency across the optical range, with sheet resistances as low as
10 Ω/sq, was achievable, making them comparable to commercial TCOs. Novel non-planar
transparent conducting electrodes consisting of pillars, cross-hatched trenches and pit
structures were fabricated to produce films with increased surface roughness and enhanced
light-scattering capabilities, which are essential for photovoltaic applications. Furthermore,
to realize high-performance DSSCs, detailed understanding and control over the growth,
crystal structure and transport properties of nanostructured materials, and their effect on
device performance, is vital. Compared to the commonly used nanoparticles, 1D nanowires
(including nanotubes) and 3D mesoporous films provide the additional benefit of a
predefined, monodisperse length scale, which enables optimized charge generation and
collection through direct electron-transport pathways. Designed and optimized composite
nanowires/nanotubes and mesoporous matrixes with high specific surface area, porosity and
superior crystallinity for photovoltaic applications will be discussed [3-6].
References
[1] Kevin, M., Lee, G.H. and Ho, G.W. (2012). “Non-planar geometries of solution
processable transparent conducting oxide: From film characterization to architectured
electrodes,” Energy Environ. Sci., 5, 7196.
[2] Kevin, M., Tho, W.H. and Ho, G.W. (2012). “Transferability of solution processed
epitaxial Ga:ZnO films; tailored for gas sensor and transparent conducting oxide
applications,” J. Mater. Chem., 22(32), pp. 16442-16447.
[3] Agarwala, S., Lim, Z.H., Nicholson, E. and Ho, G.W. (2012). “Probing
Morphology-Device Relation of Fe2O3 Nanostructures towards Photovoltaic and Sensing
Applications,” Nanoscale, 4, 194.
[4] Ong, W.L., Zhang, C. and Ho, G.W. (2011). “Ammonia Plasma Modification Towards
A Rapid and Low Temperature Approach for Tuning Electrical Conductivity of ZnO
Nanowires on Flexible Substrate,” Nanoscale, 3, 4206.
[5] Agarwala, S., Thummalakunta, L.N.S.A., Cook, C.A., Peh, C.K.N., Wong, A.S.W., Ke,
L. and Ho, G.W. (2010). “Co-existence of LiI and KI in a filler free quasi solid state
electrolyte for efficient and stable dye sensitized solar cell,” J. Power Sources, 196, 1651.
[6] Agarwala, S., Kevin, M., Wong, A.S.W., Peh, C.K.N., Thavasi, V. and Ho, G.W. (2010).
“Mesophase ordering of TiO2 with high surface area and strong light harvesting for
dye-sensitized solar cell,” ACS Appl. Mater. Interfaces, 2, 1844.
305
Performance evaluation and preventive measures for photo-thermal
coupled aging of different bitumens
Zhengang Feng, Jianying Yu*, Bo Zhou
State Key Laboratory of Silicate Materials for Architectures, Wuhan University of
Technology, Wuhan 430070, P. R. China
*Corresponding author: E-mail: jyyu@whut.edu.cn
Abstract
Three different bitumens were exposed to photo-thermal coupled aging and compared by
physical parameters to select the bitumen with inferior photo-thermal coupled aging
properties. The preventive measures for photo-thermal coupled aging of the selected bitumen
were investigated using various additives (light stabilizers, antioxidants, organic
montmorillonite and layered double hydroxides). Results indicate that the photo-thermal
coupled aging results in a much greater aging degree of bitumen than the single thermal aging.
Different bitumens display distinct susceptibility to the photo-thermal coupled aging. Among
the additives used, antioxidant Irganox1010 shows the best ability to resist the photo-thermal
coupled aging for the selected bitumen. The bitumen with excellent photo-thermal coupled
aging resistance can be prepared with the Irganox1010 at a proper content of 0.4%-0.6%.
Key words: bitumen; photo-thermal coupled aging; physical properties; preventive measures; antioxidant
1. Introduction
Bitumen aging has attracted many researchers’ attention since the aging of bitumen is
considered to be one of the main factors deteriorating pavement performance [1-6]. It is well
known that the bitumen aging consists of two aspects: thermal-oxidative aging that is caused
by heat and oxygen [1, 3, 4, 6], and photo-oxidative aging that is caused by ultraviolet (UV)
irradiation and oxygen [2, 5, 6]. Although both the heat and UV light can degrade bitumen
properties, their effects on bitumen aging are not the same [6]. The aging behavior of bitumen
would be much more complicated if the heat and UV irradiation are combined together. As
observed in our previous study, the heat and UV irradiation played a coupled role in
accelerating the aging process of bitumen [7].
With respect to the prevention of bitumen aging, some methods concerning the thermal- or
photo-oxidative aging have been investigated [8-14]. Various modifiers such as antioxidants
[8-10], UV absorbers [10, 11], organic montmorillonite [12], carbon black [13] and other
phosphorus compounds [14], exhibit an improved resistance of bitumen to the thermal- or
photo-oxidative aging. Nevertheless, measures capable of inhibiting both the thermal- and
photo-oxidative aging of bitumen have rarely been put forward to date.
The bitumens from different origins vary remarkably in properties and chemistry. Therefore,
the aging properties depend largely on types and origins of bitumen [15-17]. For bitumen that
is susceptible to the photo-thermal coupled aging, a modification may be useful for the
prevention of bitumen aging and extension of pavement life. However, the works dealing
with the selection of inferior binders and measures for preventing bitumen from
photo-thermal coupled aging have not been done sufficiently until now.
In view of this, three bitumens were exposed to different photo-thermal coupled aging
conditions. The bitumen with inferior photo-thermal coupled aging properties was selected by
comparing physical parameters of the three binders. Then the preventive measures for
photo-thermal coupled aging of the selected bitumen were investigated using various
additives. Finally, the bitumen with excellent photo-thermal coupled aging resistance was
prepared.
2. Materials and methods
2.1 Materials
Three 80/100 penetration grade bitumens of different origins (denoted as A, B and C) were
used. Physical properties of the bitumens are shown in Table 1.
Table 1 Physical properties of the bitumens
Bitumens | Penetration, 25 ℃, 0.1 mm | Ductility, 10 ℃, cm | Softening point, ℃ | Viscosity, 60 ℃, Pa·s
A | 88 | 115.7 | 44.5 | 144
B | 89 | 132.2 | 46.9 | 163
C | 87 | 102.0 | 46.7 | 163
The additives used for prevention of bitumen aging include following materials: light
stabilizers (Octabenzone, Bumetrizole, Tinuvin770), antioxidants (Irganox1010, Irgafos168),
organic montmorillonite (OMMT, Na+-MMT modified by hexadecyl dimethyl benzyl
ammonium chloride), layered double hydroxides (LDHs, Zn-Al type).
2.2 Preparation of different additive modified bitumens
The additive modified bitumens were prepared by melt blending using a high speed shearing
machine. To begin with, the bitumen was heated until it was fully fluid. Then the
preweighed bitumen and the various additives were blended in an iron container at a fixed
mixing speed of 2000 rpm. The modified bitumens were finally obtained after the mixtures
were blended at 150 ℃ for 30 minutes. The virgin bitumen was also processed by the same
methods to serve as the control sample.
2.3 Aging procedures
The photo-thermal coupled aging was performed using an intelligent numerical control
photo-thermal aging oven (Model LHX-205, China). The binders were poured into an iron
plate (140 mm in diameter) to form a thin film of about 3 mm. Then the samples were placed
into the oven to undergo photo-thermal coupled aging. The light comes from a UV high
pressure mercury lamp. The lamp is 500 W with main wavelength of 365 nm. In order to
evaluate the photo-thermal coupled aging effect of the binders, the UV intensity was adjusted
to 0, 950 and 1200 μW/cm2 and the temperature was set at 60 ℃. All samples were taken out
and tested after aging for 168 hours.
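For reference, the aging matrix just described can be written down compactly; the sketch below merely enumerates the conditions stated in this section (the dictionary layout is our own illustrative choice).

    # Photo-thermal coupled aging conditions used in this study (Section 2.3):
    # a 500 W UV lamp (main wavelength 365 nm) over ~3 mm binder films at 60 C
    # for 168 h, with the UV intensity adjusted to 0, 950 and 1200 uW/cm2
    # (0 uW/cm2 corresponds to single thermal aging).
    AGING_CONDITIONS = [
        {"uv_uw_per_cm2": uv, "temperature_c": 60,
         "duration_h": 168, "film_thickness_mm": 3}
        for uv in (0, 950, 1200)
    ]

    for condition in AGING_CONDITIONS:
        print(condition)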
2.4 Physical properties test
The penetration, ductility, softening point and viscosity of the binders before and after aging
were measured according to the standards ASTM D5-06e1, ASTM D113-07, ASTM
D36/D36M-09 and ASTM D4402-06, respectively.
3. Results and Discussion
3.1 Effect of photo-thermal coupled aging on properties of different bitumens
The photo-thermal coupled aging properties of bitumen are evaluated by the softening point
increment (SPI) and the viscosity increment (VI), which are calculated by Eq. 1 and Eq. 2,
respectively:

SPI = SP_Aged − SP_Unaged    (Eq. 1)

VI = V_Aged − V_Unaged    (Eq. 2)

where SP_Unaged is the softening point before aging and SP_Aged the softening point after
aging, ℃; V_Unaged is the viscosity before aging and V_Aged the viscosity after aging, Pa·s.
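As a minimal illustration of Eq. 1 and Eq. 2, the sketch below computes SPI and VI from before/after measurements. The unaged values are taken from Table 1; the aged values are hypothetical placeholders, since the aged measurements are reported only graphically in Figs. 1 and 2.

    # Minimal sketch of Eq. 1 and Eq. 2: softening point increment (SPI) and
    # viscosity increment (VI). Unaged values are from Table 1; the aged
    # values below are hypothetical placeholders for illustration only.
    def spi(sp_aged_c: float, sp_unaged_c: float) -> float:
        """Softening point increment, in degrees C (Eq. 1)."""
        return sp_aged_c - sp_unaged_c

    def vi(v_aged_pas: float, v_unaged_pas: float) -> float:
        """Viscosity increment at 60 C, in Pa*s (Eq. 2)."""
        return v_aged_pas - v_unaged_pas

    # Bitumen C, unaged (Table 1): softening point 46.7 C, viscosity 163 Pa*s.
    print(spi(sp_aged_c=55.0, sp_unaged_c=46.7))   # hypothetical aged value
    print(vi(v_aged_pas=420.0, v_unaged_pas=163))  # hypothetical aged value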
The effect of photo-thermal coupled aging on SPI and VI of different virgin bitumens is
shown in Fig. 1 and Fig. 2, respectively. Compared with the thermal aging (0 μW/cm2), the
SPI and VI values of the binders are much bigger after the photo-thermal coupled aging (950
μW/cm2). The SPI and VI values increase further for all bitumens when the UV intensity rises
to 1200 μW/cm2. The results indicate that the photo-thermal coupled aging results in a much
greater aging degree than the single thermal aging. The stronger the UV intensity, the more
obvious the photo-thermal coupled aging effect is.
With respect to different bitumens, the photo-thermal coupled aging shows distinct effects on
bitumen properties, as also displayed in Fig. 1 and Fig. 2. The SPI and VI values of the
bitumen A are the smallest, followed by bitumen B, while those of bitumen C are the biggest
after aging under the various photo-thermal coupled conditions. As a result, bitumen A has
the best resistance to photo-thermal coupled aging, whereas bitumen C shows the worst
aging resistance among the three binders. Given this, bitumen C, with its inferior
photo-thermal coupled aging properties, is selected for the investigation of preventive
measures.
Fig. 1 Effect of photo-thermal coupled aging on SPI of different bitumens
Fig. 2 Effect of photo-thermal coupled aging on VI of different bitumens
3.2 Effect of various additives on properties of the selected bitumen
3.2.1 Physical properties
The effect of various additives on physical properties of the bitumen C is shown in Table 2.
With the addition of light stabilizers (Octabenzone, Bumetrizole and Tinuvin770) and
antioxidants (Irganox1010 and Irgafos168), ductility of the bitumen increases while softening
point and viscosity decrease slightly, and penetration changes depending on the types of
additives. The OMMT modified bitumen, as well as LDHs modified bitumen exhibits a much
smaller value in penetration and ductility, but a greater value in softening point and viscosity
than the control sample. The results manifest that the low-temperature properties of the
bitumen can be improved by the light stabilizers and antioxidants, while the high-temperature
properties can be enhanced by the OMMT and LDHs.
Table 2 Physical properties of different additive modified bitumens
Binders | Penetration, 25 ℃, 0.1 mm | Ductility, 10 ℃, cm | Softening point, ℃ | Viscosity, 60 ℃, Pa·s
Control sample | 85 | 97.1 | 47.1 | 168
C + 0.6% Octabenzone | 83 | 101.3 | 46.7 | 165
C + 0.6% Bumetrizole | 87 | 109.5 | 46.5 | 163
C + 0.6% Tinuvin770 | 88 | 107.9 | 46.7 | 167
C + 0.6% Irganox1010 | 86 | 108.1 | 46.7 | 166
C + 0.6% Irgafos168 | 84 | 99.6 | 47.0 | 166
C + 3.0% OMMT | 74 | 82.3 | 49.9 | 216
C + 3.0% LDHs | 73 | 69.8 | 48.7 | 193
3.2.2 Aging properties
The effect of various additives on photo-thermal coupled aging properties of the bitumen C is
displayed in Fig. 3 and Fig. 4. It is clearly observed that the modified bitumens show smaller
values of SPI and VI than the control sample during the aging processes, indicating that the
photo-thermal coupled aging resistance of the bitumen can be improved by the additives.
However, the influence of various additives on the aging extent of bitumen is obviously
diverse. The SPI and VI values of the Irganox1010 modified bitumen are the smallest after
both thermal aging (0 μW/cm2) and photo-thermal coupled aging (1200 μW/cm2), which
implies that the Irganox1010 has excellent ability to resist the photo-thermal coupled aging of
bitumen. Based on the above analysis, the antioxidant Irganox1010 is selected as the modifier
to prepare the binder with excellent photo-thermal coupled aging resistance.
Fig. 3 Effect of various additives on SPI of the bitumen
Fig. 4 Effect of various additives on VI of the bitumen
3.3 Effect of Irganox1010 content on properties of the selected bitumen
3.3.1 Physical properties
Effect of different contents of antioxidant Irganox1010 on physical properties of the bitumen
C is listed in Table 3. The penetration, softening point and viscosity of the bitumen are nearly
unchanged when the content of Irganox1010 is less than 0.6%. However, the penetration
increases whereas the softening point and viscosity decrease obviously if the amount of
Irganox1010 exceeds 0.6%. Besides, the ductility of the bitumen gradually increases with the
increase of Irganox1010. The results indicate that the physical properties of the bitumen are
less influenced by the Irganox1010 at lower amount (less than 0.6%). However, the
low-temperature properties of the bitumen are gradually enhanced, and meanwhile the
high-temperature properties are negatively affected at higher amount of Irganox1010 (more
than 0.6%). Considering the physical properties and cost aspect, the content of antioxidant
Irganox1010 had better be less than 0.6%.
Table 3 Physical properties of modified bitumens with different contents of Irganox1010
Binders | Penetration, 25 ℃, 0.1 mm | Ductility, 10 ℃, cm | Softening point, ℃ | Viscosity, 60 ℃, Pa·s
Control sample | 85 | 97.1 | 47.1 | 168
C + 0.2% Irganox1010 | 86 | 99.8 | 47.0 | 169
C + 0.4% Irganox1010 | 85 | 102.5 | 47.0 | 168
C + 0.6% Irganox1010 | 86 | 108.1 | 46.7 | 166
C + 0.8% Irganox1010 | 88 | 113.2 | 45.6 | 159
C + 1.0% Irganox1010 | 91 | 117.4 | 44.5 | 154
3.3.2 Aging properties
Effects of different contents of Irganox1010 on SPI and VI of the bitumen C are revealed in
Fig. 5 and Fig. 6, respectively. The SPI and VI values of the bitumen gradually decrease with
the increase of antioxidant Irganox1010 after both thermal aging (0 μW/cm2) and
photo-thermal coupled aging (1200 μW/cm2). During the thermal aging, the binder shows a
slight decrease in SPI and VI values when the amount of Irganox1010 is less than 0.4%.
However, the aging parameters decrease sharply when the content of Irganox1010 is between
0.4% and 0.6%. When the amount of Irganox1010 continues to increase, the reduction of SPI
and VI is less notable.
Fig. 5 Effect of different contents of Irganox1010 on SPI of the bitumen
Fig. 6 Effect of different contents of Irganox1010 on VI of the bitumen
With respect to the photo-thermal coupled aging, the SPI and VI values of the bitumen reduce
obviously within the Irganox1010 content of 0.6%. Similar to the thermal aging, the SPI and
VI values decrease very slightly if the amount of Irganox1010 is higher than 0.6%. As a result,
the photo-thermal coupled aging of the bitumen can be effectively prevented by the
antioxidant Irganox1010, and the proper content is 0.4%-0.6%.
4. Conclusions
The aging degree of bitumen after the photo-thermal coupled aging is much greater than that
after the single thermal aging. The stronger the UV intensity, the more obvious the
photo-thermal coupled aging effect is. With respect to different bitumens, the photo-thermal
coupled aging shows distinct effects on their aging properties.
The physical properties of the selected bitumen are differently affected by various additives.
The photo-thermal coupled aging resistance of the bitumen can be improved by the additives
to different extents. Among the additives used, the antioxidant Irganox1010 has the most
excellent ability to resist the photo-thermal coupled aging of bitumen. Considering both
physical and aging properties, the proper Irganox1010 content for preparation of the
photo-thermal coupled aging resistant bitumen is 0.4%-0.6%.
5. Acknowledgements
This work is supported by the National Natural Science Foundation of China (51078300), the
National Key Technology Research and Development Program of the Ministry of Science
and Technology of China (2011BAE28B04) and the Applied Basic Research Project of
Ministry of Transport (2011-319-811-420). The authors gratefully acknowledge their
financial support.
6. References
[1] Lu XH, Isacsson U. Artificial aging of polymer modified bitumens. J Appl Polym Sci,
2000, 76: 1811-1824.
[2] Wu SP, Pang L, Liu G, Zhu JQ. Laboratory study on ultraviolet radiation aging of
bitumen. J Mater Civil Eng, 2010, 22: 767-772.
[3] Xiao FP, Amirkhanian SN, Shen JN. Effects of long term aging on laboratory prepared
rubberized asphalt binders. J Test Eval, 2009, 37: 329-336.
[4] Cortizo MS, Larsen DO, Bianchetto H, Alessandrini JL. Effect of the thermal
degradation of SBS copolymers during the ageing of modified asphalts. Polym Degrad
Stab, 2004, 86: 275-282.
[5] Mouillet V, Farcas F, Besson S. Ageing by UV radiation of an elastomer modified
bitumen. Fuel, 2008, 87: 2408-2419.
[6] Feng ZG, Yu JY, Liang YS. The relationship between colloidal chemistry and ageing
properties of bitumen. Petrol Sci Technol, 2012, 30: 1453-1460.
[7] Wang H, Feng ZG, Zhou B, Yu JY. A study on photo-thermal coupled aging kinetics of
bitumen. J Test Eval, 2012, 40: 724-727.
[8] Ouyang CF, Wang SF, Zhang Y, Zhang YX. Improving the aging resistance of
styrene-butadiene-styrene tri-block copolymer modified asphalt by addition of
antioxidants. Polym Degrad Stab, 2006, 91: 795-804.
[9] Apeagyei AK. Laboratory evaluation of antioxidants for asphalt binders. Constr Build
Mater, 2011, 25: 47-53.
[10] Feng ZG, Yu JY, Zhang HL, Kuang DL. Preparation and properties of ageing resistant
asphalt binder with various anti-ageing additives. Appl Mech Mater, 2011, 71-78:
1062-1067.
[11] Feng ZG, Xu S, Sun YB, Yu JY. Performance evaluation of SBS modified asphalt with
different anti-aging additives. J Test Eval, 2012, 40: 728-733.
[12] Yu JY, Feng PC, Zhang HL, Wu SP. Effect of organo-montmorillonite on aging
properties of asphalt. Constr Build Mater, 2009, 23: 2636-2640.
[13] Yamaguchi K, Sasaki I, Nishizaki I, Meiarashi S, Moriyoshi A. Effects of film thickness,
wavelength, and carbon black on photodegradation of asphalt. J Jpn Petrol Inst, 2005, 48:
150-155.
[14] Filippis PD, Giavarini C, Scarsella M. Improving the ageing resistance of straight-run
bitumens by addition of phosphorus compounds. Fuel, 1995, 74: 836-841.
[15] Yang P, Liu Z, Yan F, Liao KJ. A study on aging kinetics of Anshan paving asphalt.
Petrol Sci Technol, 2002, 20: 951-960.
[16] Yan F. Study on aging kinetics of Panjin paving asphalt. Petrol Sci Technol, 2005, 23:
273-283.
[17] Yan F. Study on aging kinetics of Saudi Arabian paving asphalt. Petrol Sci Technol, 2006,
24: 779-788.
308
Platinum nanoparticles/graphene composite catalyst as a novel composite
counter electrode for high performance dye-sensitized solar cells
Chen-Chi M. Maa,*, Li-Hsueh Changb
a
Department of Chemical Engineering, National Tsing Hua University,
Hsinchu, 30013, Taiwan, R.O.C.
ccma@che.nthu.edu.tw
Abstract
We herein describe our use of a water–ethylene method to prepare a composite material
consisting of platinum nanoparticles and graphene. Results obtained using XPS and XRD
show that the degree of reduction of graphene was increased by the incorporation of Pt, and
in addition, the increased concentration of defects was confirmed by the D/G ratio of the
Raman spectra obtained. In comparison with Pt films, results obtained using CV and EIS
showed that the electrocatalytic ability of the composite material was greater, and afforded a
higher charge transfer rate, an improved exchange current density, and a decreased internal
resistance. SEM images showed that the morphology of PtNP/GR counter electrodes is
characterized by a smooth surface, resulting in a lower resistance to diffusion and thereby
improving the total redox reaction rate at the counter electrode. PtNP/GR
electrodes have a number of advantages over other electrodes that consist solely of graphene
or Pt films, including a high rate of charge transfer, a low internal resistance, and a low
resistance to diffusion. In our study, we showed that DSSCs that incorporate platinum-grafted
graphene had a conversion efficiency of 6.35%, which is 20% higher than that of devices
with platinized FTO.
Keyword: graphene, nanoparticles, platinum
318
Recovery of Valuable Metal from Waste-Computer
Busayamas Phettonga,*, Pitsanu Bunnaulb, Manoon Masniyomc
a,*
Department of Mining and Material Engineering, Faculty of Engineering
Prince of Songkla University, Hat Yai, Songkhla, Thailand 90112
angle_engineer@hotmail.com
b
Department of Mining and Material Engineering, Faculty of Engineering
Prince of Songkla University, Hat Yai, Songkhla, Thailand 90112
pitsanu.b@psu.ac.th
c
Department of Mining and Material Engineering, Faculty of Engineering
Prince of Songkla University, Hat Yai, Songkhla, Thailand 90112
manoon.m@psu.ac.th
Abstract
The objective of this research was to study and develop a process to recover valuable metals
from waste computers. Computer waste materials, particularly printed circuit boards (PCBs),
were collected, shredded into small pieces and stored in plastic bags. A sample of the
shredded material was ground in a laboratory ball mill and sieved. PCB material finer than
2 millimeters was fed to a purpose-designed plastic-metal separator to collect the metallic
product. The separator was a twin rectangular plastic box mounted on a shaking table. The
jerking action caused heavier materials such as metallic pieces to move down to the bottom
of the box and out through an opening, while lighter materials such as plastic moved up to
the top and overflowed to a tailing collector. The metallic products from the separator
comprised powdered metal and wire-type metal. The recoveries of gold, nickel and tin were
100%, 89.15% and 78.02% respectively; copper recovery in the metal product was only
22.61%, as most of the copper was distributed in the fine powders and in the printed layers
of the plastic products.
Keywords: Metals recovery process, Plastic-metal separator, Electronic waste, Waste computer
1. Introduction
Waste computers have become an alternative source of valuable metals. Each computer set
contains up to 49% metals (see Figure 1 and Table 1). Apart from the recovery of valuable
metals, treating waste computers also has the benefit of minimizing environmental
problems.
Metal separation techniques have been studied and proposed by several researchers. The
sink-and-float technique was applied to separate plastic parts from metallic parts (Kue et al.
[2]; Cui and Forssberg [3]). Other physical separations applied were electrostatic separation
and gravity concentration. Leaching was also applied to recover copper, gold and silver
from waste computers [3-4].
Mohabuth et al. [5] applied gravity concentration to recover metallic materials from waste
computers: vibration of the separation chamber facilitated the movement of the heavier
metal down to the bottom, leaving the lighter plastic parts in the upper part of the vibrating
chamber, while the vibrating metal particles also moved horizontally to the next chamber.
Figure 1 Materials in a computer set [1]
Table 1 Metal and non metallic Materials contained in a set of computer [5]
Materials | Percent (%)
Glass | 24.8
Plastic | 23.0
Precious metal | 0.02
Iron | 20.47
Lead | 6.3
Aluminum | 14.17
Copper | 6.93
Others | 4.3
Total | 100
2. Experimental
2.1 Materials
Sixty printed circuit boards (PCBs) from used computers were collected. Removable parts
such as plastic capacitors and other electronic components were cut off the boards. The
remaining boards were shredded into small pieces (1×1 cm) and heated at 170°C before
grinding in a ball mill. The ground products were sieved with a 2-millimeter sieve; those
larger than 2 mm were reground.
2.2 Separator design and construction
Gravity concentration was adopted for this study. According to the work of Mohabuth et al.
[5], vibration is an effective separation mechanism, and a similar separation box was adapted
here. A twin box with different elevations was designed and constructed to allow heavy
materials to flow from the first box to the second. Overflow openings located at the upper
part of each box allowed the lighter plastic materials to flow out of the boxes. The separator
was made of acrylic. Instead of applying vibration, lower-frequency shaking was applied;
shaking in the form of jerking was achieved by mounting the separator onto a shaking table
(see Figures 2 and 3).
Figure 2 Drawings of the twin-box separator
2.3 Separation test
The separation test was a batch test; continuous separation tests could not be performed
because there was not a sufficient amount of feed material. 500 grams of material was used
in each test. The shaking and jerking action of the shaking table caused the heavier metallic
materials and finer materials to move down to the bottom of the first box, pushing the lighter
plastic to the top. The lighter plastic moved out via the overflow opening and was collected
as the T2 product.
Once the segregation between heavier metal and lighter plastic in the box was clearly
visible, the plastic plate was lifted to allow the metal and fine materials at the bottom to
move out to the second box under the shaking action and the inclination of the shaking table.
The plate was closed again when all the metal and fine materials had moved out.
The same separation occurred in the second box. The material left in the second box was
collected as metal product M1.
The overflow material from the second box was collected as M2, which was mainly
copper-printed plastics. The product left in the first box was labeled as the T1 product (see
Figure 3).
Figure 3 Products from the separation
2.4 Metal composition analysis
Metal composition in T1 and M2 products were analyzed using an X-Ray fluorescence (XRF).
However, the product M1 contained both fine plastic materials and pieces of metal wire.
Proper sampling preparation for XRF analyses could not be done. Thus, hand sorting of metal
wires (golden and silver color) from plastic powder was done under microscope. The plastic
powder was analyzed using XRF. The golden and silver-colored wires were analyzed using a
scanning electron microscope which equipped with an energy dispersive X-Ray spectrometer
(SEM-EDX). Weight fraction and chemical composition of each fraction were used to
calculate metal compositions in M1-products (See Table 2).
Table 2 Chemical analysis of M1-products
Material | Weight % | Assay %Cu | %Au | %Ni | %Sn | Distribution %Cu | %Au | %Ni | %Sn
Silver wire | 43.20 | 7.25 | 0 | 2.59 | 81.90 | 48.42 | 0 | 20.09 | 96.95
Golden wire | 29.60 | 3.24 | 81.98 | 14.77 | 0 | 14.83 | 100 | 78.49 | 0
Fines | 27.20 | 8.74 | 0 | 0.29 | 4.09 | 36.75 | 0 | 1.42 | 3.05
Total (M1) | 100 | 6.47 | 24.27 | 5.57 | 36.49 | 100 | 100 | 100 | 100
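The distribution columns in Tables 2 and 3 follow directly from the weight fractions and assays. A minimal sketch of this metallurgical balance, using the tin column of Table 2, is given below; the code is illustrative only and is not part of the original analysis.

```python
# Metallurgical balance sketch: metal units = weight % x assay % / 100;
# the distribution is each fraction's share of the total metal units.
# The values are the Sn column of Table 2.
fractions = {                 # fraction: (weight %, assay %Sn)
    "silver wire": (43.20, 81.90),
    "golden wire": (29.60, 0.0),
    "fines":       (27.20, 4.09),
}

units = {name: w * a / 100 for name, (w, a) in fractions.items()}
total = sum(units.values())                 # ~36.49, the %Sn assay of M1
for name, u in units.items():
    print(f"{name}: {100 * u / total:.2f} % of the tin")  # silver wire ~96.95 %
```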
The T2 product could be considered a clean plastic product: it contained no metal other than
bromine, which is added to plastics as a fire retardant. The metallurgical balances of the
separation are shown in Table 3.
Table 3 Metallurgical balances of plastic-metal separation
Material | Weight % | Assay %Cu | %Au | %Ni | %Sn | Distribution %Cu | %Au | %Ni | %Sn
M1 | 25 | 6.47 | 24.27 | 5.57 | 36.49 | 22.61 | 100 | 89.15 | 78.02
M2 | 3.4 | 29.59 | 0 | 0.28 | 4.13 | 14.06 | 0 | 0.61 | 1.20
T1 | 50 | 9.06 | 0 | 0.32 | 4.86 | 63.33 | 0 | 10.24 | 20.78
T2 | 21.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Total | 100 | 7.15 | 6.07 | 1.56 | 11.69 | 100 | 100 | 100 | 100
3. Results and discussion
It is obvious from Table 2 that the silver-colored wire contains mostly tin (81.90% Sn) with
7.25% Cu and 2.59% Ni, while the golden wire contains 81.98% Au and 14.77% Ni. In the
M1 material, all of the gold is in the golden wires, whereas around 97% of the tin is in the
silver-colored wires. Further cleaning of the M1 product might therefore yield a cleaner
metal product.
According to the metallurgical balance in Table 3, M1 can be considered a metal product
with a 25% yield. The recoveries of gold, nickel and tin are 100%, 89.15% and 78.02%,
respectively. However, copper recovery is only 22.61%. Most of the copper reports to the T1
product, in the form of thin printed layers on the plastic material. To recover copper
separately, the powder part of the M1 product should first be separated. The combined T1
and M2 products, together with the powder part of M1, may then be treated separately to
recover copper by chemical leaching.
4. Conclusion
4.1 Separator design
A plastic-metal separator was designed and constructed for the recovery of valuable metals
from waste computers. Shaking and jerking action was achieved by mounting the twin-box
separator onto a shaking table. The separator produced four products: M1 (metal product),
M2 (mixed plastic-metal product), T1 (mixed plastic-metal product) and T2 (plastic product).
4.2 Metal recovery
The separator recovered gold, nickel and tin at up to 100%, 89.15% and 78.02%,
respectively. Recovering the copper requires further treatment of the M1 product to separate
the fines from the major metals. These fines, combined with the M2 and T1 products, might
then be treated chemically to recover the copper. Around 21.6% of the plastic material was
recovered as the T2 product of the separation.
5. Acknowledgement
The authors would like to acknowledge the Department of Mining and Materials Engineering,
Faculty of Engineering, and the Graduate School of Prince of Songkla University for their
financial support. Sincere thanks go to friends, especially Atchara Sangchan, for their great
assistance in performing the experiments and for their moral support.
6. References
[1] H.Y. Kang and J.M. Schoenung, Electronic waste recycling: A review of U.S. infrastructure and technology options, Resources, Conservation and Recycling, Vol. 45, 2005, pp. 368-400.
[2] K. Huang, J. Guo and Z. Xu, Recycling of waste printed circuit boards: A review of current technologies and treatment status in China, Journal of Hazardous Materials, Vol. 164, 2009, pp. 399-408.
[3] J. Cui and E. Forssberg, Mechanical recycling of waste electric and electronic equipment: a review, Journal of Hazardous Materials, Vol. 164, 2009, pp. 399-408.
[4] J. Cui and L. Zhang, Metallurgical recovery of metal from electronic waste: A review, Journal of Hazardous Materials, Vol. 158, 2009, pp. 228-256.
[5] N. Mohabuth, P. Hall and N. Miles, Investigating the use of vertical vibration to recover metal from electrical and electronic waste, Minerals Engineering, Vol. 20, 2007, pp. 926-932.
329
Photocatalytic Degradation of Acid Orange 7 by ZnO and Sm-Doped ZnO
Pongsathorn Sathorna,*, Nattiya Reungtipa, Apisit Songsasena
Department of Chemistry, Faculty of Science, Kasetsart University,
Chatuchak, Bangkok 10903, THAILAND
E-mail address: deknon_dpst@hotmail.com
Abstract
ZnO and Sm-doped ZnO photocatalysts containing Sm at 1 and 5 %mol were prepared by a
precipitation method, using zinc acetate dihydrate as the zinc precursor and samarium(III)
nitrate hexahydrate as the samarium precursor. The efficiency of the prepared photocatalysts
was examined for the UV-light-induced degradation of acid orange 7 (AO7). The calcination
temperatures in this work ranged from 400 to 800 °C, with a holding time of 2 h. The
prepared photocatalysts were characterized by TGA, XRD, UV-Vis DRS and SEM. TGA and
XRD patterns showed that Sm-doped ZnO calcined at temperatures above 500 °C had the
hexagonal wurtzite structure. The crystallite sizes of the catalysts, calculated from Scherrer's
equation, were in the range of 9-53 nm, depending on the Sm loading and the calcination
temperature. From UV-Vis DRS, the ZnO and Sm-doped ZnO photocatalysts exhibited an
absorption band in the visible region with band gaps of 2.94 to 3.04 eV (calculated from
Eg = 1239.8/λ). The effects of various parameters, such as the initial concentration of
Sm-doped ZnO, the calcination temperature and the light source, on the photocatalytic
degradation of acid orange 7 (AO7) were investigated to find the desired conditions. The
1 %mol Sm-doped ZnO photocatalyst calcined at 800 °C had the highest degradation activity
under UV light irradiation, degrading 92.01% of 20 ppm acid orange 7.
Keywords: Photocatalysis, Sm-doped ZnO nanocrystalline, Photodegradation.
1. Introduction
The environmental impact of synthetic dyes has been a concern over the last few decades.
Industries such as textiles, leather, paper, plastics, pharmaceuticals and food produce a great
deal of dye-contaminated wastewater annually worldwide. Among all synthetic dyes, azo
dyes constitute the largest and most important class of dyes for industrial applications. The
presence of dyes not only colors the effluent strongly even at very low concentrations, it also
causes ecological and environmental problems because of their toxic, mutagenic and
carcinogenic characteristics. Therefore, the degradation of wastewater contaminated with azo
dyes has aroused worldwide interest.[1]
AO7 is a monoazo dye widely used in the dyeing of synthetic fibers, wool and cotton and in
the paper industry. Azo dyes contain a nitrogen-nitrogen double bond (–N=N–) bonded to an
aromatic group, so they can produce harmful health effects, and a proper method for
removing this dye from wastewater is essential.[2]
At present, many kinds of semiconductors have been studied as photocatalysts, including
TiO2, ZnO, CdS, WO3 and so on. TiO2 is the most widely used effective photocatalyst
because of its high efficiency, photochemical stability, non-toxic nature and low cost. By
contrast, ZnO, a semiconductor with a band gap similar to that of TiO2, has not been as
thoroughly investigated. However, the greatest advantage of ZnO is that it absorbs a larger
fraction of the solar spectrum and more light quanta than TiO2. Some studies have
highlighted the performance of ZnO in degrading organic compounds.[3]
In addition to TiO2, other binary metal oxides have been studied to determine their
photocatalytic oxidation properties. ZnO has often been considered a valid alternative to
TiO2 because of its good optoelectronic, catalytic and photochemical properties along with
its low cost. ZnO has a band gap of 3.0 eV, lower than that of anatase. Due to the position of
the valence band of ZnO, the photogenerated holes have strong enough oxidizing power to
decompose most organic compounds. ZnO has been tested for the decomposition of aqueous
solutions of several dyes and many other environmental pollutants, and in many cases it has
been reported to be more efficient than TiO2.[4]
Many studies have shown that doping with different transition metals can effectively inhibit
the recombination of electron/hole pairs. This research studies the effect of the Sm
concentration in Sm-doped ZnO photocatalysts on the chemical, physical and photocatalytic
properties of ZnO. The ZnO and Sm-doped ZnO photocatalysts were prepared via a
precipitation method and characterized by TGA, XRD, UV-Vis DRS and SEM. The prepared
ZnO and Sm-doped ZnO photocatalysts with various percentages of Sm were used to study
the photodegradation of acid orange 7 under UV light irradiation.
2. Methodology
The ZnO catalyst was prepared by a precipitation method. 10.00 g of zinc acetate dihydrate
was placed in a beaker and dissolved in distilled water. A sodium bicarbonate solution
prepared in distilled water was then added to the zinc acetate solution with continuous
stirring until the pH of the solution reached 7.0. The precipitate was then filtered to remove
the excess sodium bicarbonate and other impurities. After that, the prepared catalysts were
dried at 120 °C for 5 h and calcined in a crucible at various temperatures from 400 °C to
800 °C for 2 hours.
The same procedure was followed for the precipitation of Sm-doped ZnO, using
samarium(III) nitrate hexahydrate (1 and 5 %mol; 0.2022 and 1.011 g, respectively). All of
the catalysts were characterized by thermal gravimetric analysis (TGA), X-ray powder
diffraction (XRD), UV-Vis reflection techniques and scanning electron microscopy (SEM).
The photocatalytic activity of ZnO and Sm-doped ZnO under UV light irradiation was
studied by adding the catalyst to an acid orange 7 solution under stirring and irradiating it
with a UV lamp. Before the photocatalytic degradation, the suspension was stirred in the
dark for 30 minutes to establish an acid orange 7 adsorption/desorption equilibrium. Samples
were collected every hour, and the concentration of acid orange 7 after illumination was
determined with a UV-Vis spectrophotometer.
3. Results, Discussion and Conclusion
The TGA curve in Figure 1 showed a major weight-loss step from 50 to 180 °C, related to
the decomposition of the ZnO precursors. No further weight loss was observed above
500 °C, indicating that decomposition does not occur above this temperature and that the
stable residues are ZnO nanoparticles, as confirmed by XRD analysis.
Figure 2, Figure 3 and Figure 4 show the XRD patterns of the undoped ZnO, 1%Sm-doped
ZnO and 5%Sm-doped ZnO photocatalysts calcined at various temperatures from 400 °C to
800 °C. The crystal structure and orientation of the nanoparticles were investigated by the
X-ray diffraction (XRD) method. The sharp and intense peaks in Figures 2-4 indicate that the
samples were highly crystalline and that the ZnO nanoparticles had a polycrystalline
structure. The XRD peaks for the (100), (002) and (101) planes indicate the formation of a
phase-pure wurtzite structure of ZnO, with a preferred growth orientation along the (101)
direction.
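For reference, a minimal sketch of the Scherrer estimate cited in the abstract is given below; the peak width and position are hypothetical values chosen for illustration, not data from this work.

```python
import math

# Scherrer's equation: D = K * lambda / (beta * cos(theta)).
K = 0.9                    # shape factor (common assumption)
wavelength_nm = 0.15406    # Cu K-alpha X-ray wavelength
beta_deg = 0.45            # FWHM of the (101) peak in degrees, hypothetical
two_theta_deg = 36.25      # typical (101) position for wurtzite ZnO

beta_rad = math.radians(beta_deg)
theta_rad = math.radians(two_theta_deg / 2)
D = K * wavelength_nm / (beta_rad * math.cos(theta_rad))
print(f"crystallite size ~ {D:.1f} nm")   # ~18.6 nm, within the reported 9-53 nm range
```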
Figure 1. TGA curve of the undoped ZnO.
Figure 2. X-ray diffractogram of undoped ZnO calcined at various temperatures
(a) 400 °C, (b) 500 °C, (c) 600 °C, (d) 700 °C, (e) 800 °C
Figure 3. X-ray diffractogram of 1%Sm-doped ZnO calcined at various temperatures
(a) 400 °C, (b) 500 °C, (c) 600 °C, (d) 700 °C, (e) 800 °C
Figure 4. X-ray diffractogram of 5%Sm-doped ZnO calcined at various temperatures
(a) 400 °C, (b) 500 °C, (c) 600 °C, (d) 700 °C, (e) 800 °C
Figure 5, Figure 6 and Figure 7 show the UV-Vis DRS of the undoped ZnO, 1%Sm-doped
ZnO and 5%Sm-doped ZnO photocatalysts calcined at various temperatures from 400 to
800 °C, which demonstrate an absorption edge at 408 to 422 nm. This absorption
corresponds to band gap energies of 3.04-2.94 eV, reduced from that of pure ZnO (~3.4 eV).
Moreover, doping lanthanide ions with a 4f electron configuration into the ZnO lattice can
not only significantly suppress the recombination of electron-hole pairs but also extend the
wavelength response toward the visible region.[5]
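Applying the band-gap relation quoted in the abstract, Eg = 1239.8/λ, to the reported absorption edges reproduces the quoted range, as the short sketch below shows.

```python
# Band gap from absorption edge, using the relation quoted in the abstract.
for edge_nm in (408, 422):
    print(f"{edge_nm} nm -> Eg = {1239.8 / edge_nm:.2f} eV")
# 408 nm -> 3.04 eV and 422 nm -> 2.94 eV, the range reported above
```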
Figure 5. UV-Vis DRS of undoped ZnO calcined at various temperatures
(a) 400 °C, (b) 500 °C, (c) 600 °C, (d) 700 °C, (e) 800 °C
Figure 6. UV-Vis DRS of 1%Sm-doped ZnO calcined at various temperatures
(a) 400 °C, (b) 500 °C, (c) 600 °C, (d) 700 °C, (e) 800 °C
Figure 7. UV-Vis DRS of 5%Sm-doped ZnO calcined at various temperatures
(a) 400 °C, (b) 500 °C, (c) 600 °C, (d) 700 °C, (e) 800 °C
Figure 8 shows the SEM images of 1%Sm-doped ZnO at different calcination temperatures,
which suggest that the morphology of the Sm-doped ZnO catalyst calcined at 800 °C
consists mainly of rough grains close to the hexagonal lattice structure.
Figure 8. SEM images of 1 %mol Sm-doped ZnO (a) before calcination and (b) calcined at 800 °C
The photocatalytic degradation of acid orange 7 by the prepared catalysts is shown in
Table 2. The 1%Sm-doped ZnO calcined at 800 °C had the highest efficiency, catalyzing the
decomposition of 92.01% of the acid orange 7 in 5 hours under UV light irradiation, whereas
the other catalysts provided lower catalytic efficiency. The high photocatalytic activity of the
1%Sm-doped ZnO calcined at 800 °C may be attributed to the specific particle size, surface
area and surface basicity of the catalyst, which were appropriate for photocatalytic
degradation.
Table 1. Particle size and band gap energy of different photocatalysts.
Calcination temperature (°C) | Undoped ZnO: particle size (nm) | band gap (eV) | 1%Sm-doped ZnO: particle size (nm) | band gap (eV) | 5%Sm-doped ZnO: particle size (nm) | band gap (eV)
400 | 20.89 | 3.04 | 14.18 | 3.02 | 9.20 | 3.02
500 | 37.98 | 3.02 | 16.03 | 3.01 | 12.02 | 3.00
600 | 37.97 | 3.02 | 19.53 | 2.99 | 15.36 | 2.99
700 | 46.44 | 2.97 | 23.18 | 2.97 | 17.51 | 2.97
800 | 52.28 | 2.95 | 29.51 | 2.95 | 24.99 | 2.94
Table 2. %degradation of acid orange 7 after the irradiation of UV light for 5 hours with
different photocatalysts.
Calcination temperature (°C) | Undoped ZnO %degradation | 1% Sm-doped ZnO %degradation | 5% Sm-doped ZnO %degradation
400 | 53.30 | 70.35 | 52.88
500 | 54.14 | 74.75 | 63.90
600 | 60.34 | 78.54 | 59.57
700 | 69.28 | 77.37 | 69.41
800 | 47.41 | 92.01 | 82.21
In conclusion, undoped ZnO, 1%Sm-doped ZnO and 5%Sm-doped ZnO photocatalysts were
prepared by a precipitation method. The particle sizes of the catalysts are in the range of
9-53 nm with the hexagonal wurtzite structure. The visible absorption efficiency of
Sm-doped ZnO is improved because samarium doping causes band-gap narrowing and
decreases recombination. In the photodegradation of acid orange 7, the 1%Sm-doped ZnO
calcined at 800 °C exhibited the highest photocatalytic activity under UV light irradiation,
degrading 92.01% of 20 ppm acid orange 7.
4. Acknowledgement
The authors thank the Center of Excellence for Innovation in Chemistry (PERCH-CIC),
Commission on Higher Education, Ministry of Education for financial support, and the
Graduate School of Kasetsart University and the Department of Chemistry, Faculty of
Science, Kasetsart University for all research facilities.
5. References
[1] Wu, J.; Zhang, H.; Qiu, J., Journal of Hazardous Materials, (2012) 215-216, 138.
[2] Aber, S.; Daneshvar, N.; Soroureddin, S. M.; Chabok, A.; Asadpour-Zeynali, K., Study of acid orange 7 removal from aqueous solutions by powdered activated carbon and modeling of experimental results by artificial neural network, Desalination, (2007) Vol. 211, pp 87.
[3] Daneshvar, N.; Rasoulifard, M. H.; Khataee, A. R.; Hosseinzadeh, F., Journal of Hazardous Materials, (2007) 143, 95.
[4] Di Paola, A.; García-López, E.; Marcì, G.; Palmisano, L., Journal of Hazardous Materials, (2012) 211-212, 3.
[5] Xiao, Q.; Si, Z.; Zhang, J.; Xiao, C.; Tan, X., Journal of Hazardous Materials, (2008) 150, 62.
335
Preparation and properties of layered double hydroxides/SBS modified
bitumen for waterproofing membrane
Song Xua, Zhengang Fenga, Jianying Yua,*, Lian Lib
a State Key Laboratory of Silicate Materials for Architectures, Wuhan University of
Technology, Wuhan 430070, P. R. China
b School of Marxism, Wuhan University of Technology, Wuhan 430070, P. R. China
*Corresponding author. E-mail: jyyu@whut.edu.cn
Abstract
Layered double hydroxides (LDHs)/SBS modified bitumens for waterproofing membranes
were prepared by melt blending using various contents of SBS and LDHs. The effects of the
LDHs on the physical and thermal-oxidative aging properties of SBS modified bitumen were
investigated. The results show that the softening point and low temperature flexibility of
LDHs/SBS modified bitumen improve simultaneously with the rise of SBS content, while
they are little affected by changes in LDHs content. The thermal-oxidative aging resistance
of SBS modified bitumen is gradually improved with increasing LDHs content. In addition,
the aging rate of SBS modified bitumen with LDHs is evidently lower than that without
LDHs over time, which indicates that LDHs effectively improve the ability of SBS modified
bitumen to resist thermal-oxidative aging.
Keywords: SBS modified bitumen; layered double hydroxides; physical properties;
thermal-oxidative aging
1. Introduction
Bitumen waterproofing material is an important class of building materials [1, 2]. As one of
the major categories of bitumen waterproofing membrane, polymer modified bitumen has
been gaining share quickly in the waterproofing material market, and the most common
polymer modified bitumen used for waterproofing membranes is SBS modified bitumen
[3, 4].
SBS modified bitumen waterproofing membrane is a national development focus among
bitumen roll varieties and currently occupies the leading position in the waterproofing roll
market. For example, SBS modified bitumen waterproofing membrane occupies more than
85% of the waterproofing roll market in Germany, France, Finland, Norway and other
countries. In America and Japan its share is more than 30% and tends to rise yearly. The
production and application of SBS modified bitumen waterproofing membrane in China
dates from the 1980s, and it has shown excellent physical-chemical properties and good
workability [5].
However, many problems have accompanied the application and popularization of SBS
modified bitumen waterproofing membrane. Like plain bitumen, SBS modified bitumen is
prone to becoming fragile and stiff when exposed to heat, oxygen, ultraviolet light and
complex weather conditions during its service life, leading to obvious deterioration of
flexibility at low temperature [6, 7]. As a result, the SBS modified bitumen membrane fails
to accommodate the cold shrinkage of the roof base and cracks easily, which leads to
destruction of the waterproofing layer and leakage of the roof [8-10]. Consequently,
improving the anti-aging properties of SBS modified bitumen is of great significance for
enhancing the durability of building roofs.
With respect to improving the aging resistance of SBS modified bitumen, various methods
have been proposed during the last few decades [11-14]. Some of them have proven valid to
some extent for enhancing the performance of SBS modified bitumen. However, these
methods mostly concern pavement binders, with little attention to the SBS modified bitumen
used for waterproofing membranes. Given the special characteristics of waterproofing
membranes, a new method should be put forward to improve the aging resistance of SBS
modified bitumen.
Layered double hydroxides (LDHs) are a new class of functional materials possessing
positively charged hydroxide basal layers, electrically balanced by anions intercalated in the
interlayer space [15]. LDHs have received considerable attention in the past few decades
because their lamellar structure with high aspect ratio endows them with many excellent
properties [16, 17]. The addition of LDHs may provide polymer materials with good thermal
stability, reduced oxygen infiltration, high UV reflection and so on [15-18]. However, the
application of LDHs in SBS modified bitumen waterproofing membranes has rarely been
reported.
In the present work, LDHs/SBS modified bitumens used for waterproofing membrane were
prepared by melt blending with various contents of SBS and LDHs. The effects of LDHs on
physical and thermal-oxidative aging properties of the SBS modified bitumens were
investigated by means of softening point and low temperature flexibility tests.
2. Experimental
2.1 Materials
Bitumen: TY-90 bitumen was supplied by Tianyuan Petrochemical Co., Ltd., China; its
physical properties are presented in Table 1. SBS: a star-shaped thermoplastic elastomer,
brand 4303, was supplied by Yueyang Petrochemical Co., Ltd., China. LDHs: zinc-aluminum
layered double hydroxides were provided by Jiangyin Rui Law Chemical Co., Ltd., China.
Table 1 Physical properties of bitumen
Physical properties | Values
Penetration (25℃, 0.1 mm) | 93
Softening point (℃) | 53.6
Ductility (10℃, cm) | 15.6
Viscosity (60℃, Pa·s) | 1220
2.2 Preparation of LDHs/SBS modified bitumen
The LDHs/SBS modified bitumen was prepared using a high shear mixer and a bench-type
drilling machine. Bitumen was first heated until it became a melted fluid at around 150 ℃ in
an iron container. Then the LDHs were mixed with bitumen by the high shear mixer at 4000
rpm for 1 hour to obtain uniformly dispersed LDHs/bitumen mixtures. Finally, the weighed
amount of SBS was added into the mixtures and stirred by the bench-type drilling machine at
180 ℃ for 3 hours to prepare the LDHs/SBS modified bitumen. The SBS modified bitumen
without LDHs was also processed under the same conditions in order to compare with the
LDHs/SBS modified bitumen.
2.3 Thermal-oxidative aging procedure
The melted modified bitumen, (50 ± 0.5) g, was placed in a Φ140 mm iron pan; the thickness
of the sample was about 3.0 mm. The iron pan was put in a 401AB aging oven to simulate
thermal-oxidative aging, with the oven temperature kept at 70 ℃. The aging rate was
determined by measuring the softening point and low temperature flexibility over a period
from 0 to 28 days at intervals of 7 days.
2.4 Physical properties test and aging evaluation
The physical properties, including softening point and low temperature flexibility, were tested
according to ASTM D36 and EN 1109, respectively.
To characterize the degree of aging of the LDHs/SBS modified bitumens after
thermal-oxidative aging, the softening point increment (ΔS) and the low temperature
flexibility decrement (ΔF) were used, calculated as follows:
ΔS = softening point of aged sample − softening point of unaged sample
ΔF = low temperature flexibility of aged sample − low temperature flexibility of unaged
sample
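For clarity, a minimal sketch of these two indices is given below; the input values are illustrative pairings consistent with Table 2 and Fig. 3, not measurements reported in this study.

```python
# Aging indices as defined above (all temperatures in degrees Celsius).
# The sample values are illustrative assumptions, not measured data.
def delta_s(softening_aged, softening_unaged):
    """Softening point increment after thermal-oxidative aging."""
    return softening_aged - softening_unaged

def delta_f(flex_aged, flex_unaged):
    """Low temperature flexibility decrement (flexibility is a negative value)."""
    return flex_aged - flex_unaged

print(f"dS = {delta_s(129.7, 114.1):.1f}")  # 15.6, the increment quoted for 0% LDHs
print(f"dF = {delta_f(-21.0, -27.0):.1f}")  # 6.0, hypothetical
```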
3. Results and discussion
3.1 Effect of SBS content on physical properties of LDHs/SBS modified bitumen
The effects of SBS content on the softening point and low temperature flexibility of SBS
modified bitumen with 3 wt.% LDHs are shown in Fig. 1 and Fig. 2, respectively. The
softening point and low temperature flexibility both improve apparently with the rise of SBS
content, which indicates that increasing the SBS content can significantly improve the high
and low temperature properties of LDHs/SBS modified bitumen at the same time.
Considering the specified requirements for SBS modified bitumen waterproofing membrane
(softening point > 115℃, low temperature flexibility < -25℃) and the cost of SBS, the SBS
content is best set at 14 wt.%.
Fig. 1 Effect of SBS content on softening point of LDHs/SBS modified bitumen
Fig. 2 Effect of SBS content on low temperature flexibility of LDHs/SBS modified bitumen
3.2 Effect of LDHs content on physical properties of LDHs/SBS modified bitumen
Table 2 presents the effect of various concentrations of LDHs on the physical properties of
SBS modified bitumen. With the increment of LDHs content, the softening point has a slight
increase while the low temperature flexibility has almost no change, which indicates that
318
LDHs can ameliorate the heat resistance of SBS modified bitumen to a certain degree. The
sizes of LDHs particles are relatively small and can be dispersed homogeneously in the
bitumen matrix without obvious chemical reaction during the mixing process. Moreover, as a
result of special lamellar structure of LDHs, the movement of bitumen molecular chain is
limited at high temperature, in other words, the property of fluidity resistance of modified
bitumen at high temperature is improved.
Table 2 Physical properties of SBS modified bitumen with different LDHs contents
LDHs content (by mass) /% | 0 | 1 | 2 | 3 | 4 | 5
Softening point /℃ | 114.1 | 114.8 | 115.3 | 115.5 | 115.7 | 116.0
Low temperature flexibility /℃ | -27 | -27 | -27 | -27 | -27 | -27
3.3 Effect of LDHs content on thermal-oxidative aging performance of LDHs/SBS
modified bitumen
Fig. 3 and Fig. 4 illustrate the effect of LDHs content on ΔS and ΔF of LDHs/SBS modified
bitumen after 14 days of thermal-oxidative aging, respectively. The aging of SBS modified
bitumen without LDHs is clearly much more severe than with LDHs, and the degree of
aging declines sharply as the LDHs content increases, which indicates that the
thermal-oxidative aging resistance of SBS modified bitumen is distinctly improved by the
LDHs. Furthermore, ΔS decreases rapidly (from 15.6℃ to 5.9℃) as the LDHs content
increases from 0 to 3%, but flattens gradually as the LDHs content rises from 3% to 5%. ΔF
shows a similar variation to ΔS; nevertheless, ΔF shows almost no change once the LDHs
content exceeds 3%.
Fig. 3 Effect of LDHs content on ΔS of LDHs/SBS modified bitumen after aging
Fig. 4 Effect of LDHs content on ΔF of LDHs/SBS modified bitumen after aging
3.4 Effect of aging time on thermal-oxidative aging performance of LDHs/SBS modified
bitumen
As the thermal-oxidative aging time is prolonged, the aging of both the LDHs (3 wt.%)/SBS
modified bitumen and the SBS modified bitumen becomes more and more severe, as
illustrated in Fig. 5 and Fig. 6. The ΔS and ΔF of the two modified bitumens both increase to
various degrees with time, but the ΔS and ΔF of the LDHs/SBS modified bitumen are
consistently lower than those of the SBS modified bitumen at the same aging time.
Moreover, the rates of increase of ΔS and ΔF for LDHs/SBS modified bitumen are smaller
than those for SBS modified bitumen. The LDHs, which consist of multilaminate metal
cation layers with high aspect ratio, are distributed through the bitumen matrix, and the
randomly dispersed particles can efficiently hinder the permeation of oxygen by means of
their geometrical constraints. The oxidative aging of the bitumen is thereby remarkably
reduced and the chemical reactions of the generic fractions in the bitumen are restrained. On
the other hand, the metal cation platelets may also obstruct the loss of volatile components
from the bitumen at high temperature. These factors eventually lead to a decrease in aging
degree and rate, as well as an enhancement in the anti-aging capability of the bitumen.
Fig. 5 Effect of aging time on ΔS of LDHs/SBS modified bitumen
Fig. 6 Effect of aging time on ΔF of LDHs/SBS modified bitumen
4. Conclusions
LDHs/SBS modified bitumens used for waterproofing membrane were prepared by melt
blending using different contents of SBS and LDHs. The effects of LDHs on physical and
thermal-oxidative aging properties of SBS modified bitumen were investigated. The
following main conclusions are drawn:
(1) The softening point and low temperature flexibility of the modified bitumen are
improved simultaneously with the rise of SBS content. However, with increasing LDHs
content the softening point increases slightly while the low temperature flexibility shows
almost no change.
(2) The thermal-oxidative aging resistance of SBS modified bitumen is gradually improved
by LDHs up to a content of 3 wt.%; above 3 wt.%, the further improvement is unremarkable.
(3) With the extension of thermal-oxidative aging time, the aging of all samples becomes
more and more severe. The aging rate of SBS modified bitumen with LDHs is evidently
lower than that without LDHs, which indicates that LDHs can effectively improve the
thermal-oxidative aging resistance of SBS modified bitumen.
5. Acknowledgements
This work is supported by the National Key Technology Research and Development Program
of the Ministry of Science and Technology of China (2011BAE28B04). The authors
gratefully acknowledge their financial support.
6. References
[1] Fang CQ, Zhou SS, Zhang MR, Zhao SJ. Modification of waterproofing asphalt by PVC
packaging waste. Journal of Vinyl and Additive Technology, 2009, 15: 229-233.
[2] Marques JA, Lopes JG, Correia JR. Durability of the adhesion between bituminous
coatings and self-protection mineral granules of waterproofing membranes. Construction
and Building Materials, 2011, 25: 138-144.
[3] Lopes JG, Correia JR, Machado MXB. Dimensional stability of waterproofing bituminous
sheets used in low slope roofs. Construction and Building Materials, 2011, 25: 3229-3235.
[4] Rodriguez, Dutt O, Paroli RM, Mailvaganam NP. Effect of heat-ageing on the thermal and
mechanical properties of APP- and SBS-modified bituminous roofing membranes.
Materials and Structures, 1993, 26: 355-361.
[5] Liu J. Features and applications of SBS modified asphalt waterproof coiled material.
Synthetic Materials Aging and Application, 2010, 39: 29-32. (in Chinese)
[6] Lu XH, Isacsson U. Chemical and rheological evaluation of ageing properties of SBS
polymer modified bitumens. Fuel, 1998, 77: 961-972.
[7] Cortizo MS, Larsen DO, Bianchetto H, Alessandrini JL. Effect of the thermal degradation
of SBS copolymers during the ageing of modified asphalts. Polymer Degradation and
Stability, 2004, 86: 275-282.
[8] Oba K, Hean S, Björk F. Study on seam performance of polymer-modified bituminous
roofing membranes using T-peel test and microscopy. Materials and Structures, 1996, 29:
105-115.
[9] Navarro FJ, Partal P, Martínez-Boza FJ, Gallegos C. Novel recycled polyethylene/ground
tire rubber/bitumen blends for use in roofing applications: Thermo-mechanical properties.
Polymer Testing, 2010, 29: 588-595.
[10] Baskaran A, Katsman R, Sexton M, Lei W. Investigation of thermally-induced loads in
modified bituminous roofing membranes. Construction and Building Materials, 2003, 17:
153-164.
[11] Ouyang CF, Wang SF, Zhang Y, Zhang YX. Preparation and properties of
styrene-butadiene-styrene copolymer/kaolinite clay compound and asphalt modified with
the compound. Polymer Degradation and Stability, 2005, 87, 309-317.
[12] Ouyang CF, Wang SF, Zhang Y, Zhang YX. Improving the aging resistance of
styrene-butadiene-styrene tri-block copolymer modified asphalt by addition of
antioxidants. Polymer Degradation and Stability, 2006, 91, 795-804.
[13] Yu J Y, Wang L, Zeng X, Wu SP, Li B. Effect of montmorillonite on properties of
styrene-butadiene-styrene copolymer modified bitumen. Polymer Engineering Science,
2007, 47, 1289-1295.
[14] Zhang F, Yu JY, Wu SP. Effect of ageing on rheological properties of storage-stable
SBS/sulfur-modified asphalts. Journal of Hazardous Materials, 2010, 182, 507-517.
[15] Chai H, Li YJ, Evans DG, Li DQ. Synthesis and UV Absorption Properties of
2-Naphthylamine-1,5-disulfonic Acid Intercalated Zn-Al Layered Double Hydroxides.
Industrial and Engineering Chemistry Research, 2008, 47: 2855-2860.
[16] Valente JS, Tzompantzi F, Prince J. Highly efficient photocatalytic elimination of phenol
and chlorinated phenols by CeO2/MgAl layered double hydroxides. Applied Catalysis B:
Environmental, 2011, 102: 276-285.
[17] Tseng CH, Hsueh HB, Chen CY. Effect of reactive layered double hydroxides on the
thermal and mechanical properties of LDHs/epoxy nanocomposites. Composites Science
and Technology, 2007, 67: 2350-2362.
[18] Venugopal BR, Shivakumara C, Rajamathi M. A composite of layered double hydroxides
obtained through random costacking of layers from Mg-Al and Co-Al LDHs by
delamination-restacking: Thermal decomposition and reconstruction behavior. Solid State
Sciences, 2007, 9: 287-294.
Mechanical Engineering
14:45-16:45, December 16, 2012 (Meeting Room 5)
Session Chair:
283: Multi-dimensional Condition-based Maintenance for Gearboxes: A Review of
Methods and Prognostics
Trent Konstantinu
Curtin University
Muhammad Ilyas Mazhar
Curtin University
Ian Howard
Curtin University
314: Effects of Gas Velocity and Pressure in the Serpentine 3-dimensional PEMFC
Model
Woo Joo Yang
Chonnam National University
Hong Yang Wang
Chonnam National University
Young Bae Kim
Chonnam National University
372: Application of Numerical Optimization Techniques to Shaft Design
Abdurahman M Hassen
University of Tripoli
Neffati M Werfalli
University of Tripoli
Abdulaziz Y Hassan
University of Tripoli
392: Development of a Single-Wheel Test Rig for Traction Dynamics and Control of
Electric Vehicles
Apirath Kraithaisri
King Mongkut’s University of Technology North Bangkok
Suwat Kuntanapreeda
King Mongkut’s University of Technology North Bangkok
Saiprasit Koetniyom
King Mongkut’s University of Technology North Bangkok
411: Material Removal Rate Prediction for Blind Pocket Milling of SS304 Using
Abrasive Water Jet Machining Process
V K Gupta Thammana
PDPM IIITDM Jabalpur
312: Design and Experimental Implementation of Time Delay Control for Air Supply in
a PEM Fuel Cell
Ya-Xiong Wang
Chonnam National University, Korea
Dong-Ji Xuan
Wenzhou University, China
Young-Bae Kim
Chonnam National University, Korea
283
Multi-dimensional condition-based maintenance for gearboxes:
A review of methods and prognostics
Trent Konstantinua, Muhammad Ilyas Mazharb, Ian Howardc
a Graduate Mechanical Engineer, Curtin University, Department of Mechanical Engineering,
GPO Box U1987, Perth WA 6845. Email address: tkostas@mac.com
b Lecturer, Curtin University, Department of Mechanical Engineering,
GPO Box U1987, Perth WA 6845. Email address: I.Mazhar@curtin.edu.au
c Associate Professor, Curtin University, Department of Mechanical Engineering,
GPO Box U1987, Perth WA 6845. Email address: I.Howard@curtin.edu.au
Abstract
Condition-based maintenance (CBM) is increasingly becoming a preferred maintenance
management strategy due to its ability to detect faults at early stages of their development.
The CBM approach is also capable of predicting potential system failures by analysing the
operating history of critical indicators of equipment degradation. However, it is not easy to
apply in practice. There are several uncertainties and difficulties associated with the
implementation of such a strategy, especially in applications that involve several modes of
degradation. This paper reviews the existing methods and approaches in the area of
condition-based maintenance, with particular emphasis on multi-dimensional techniques of
data analysis for CBM. The term multi-dimensional describes scenarios where more than
one parameter is monitored, recorded and subsequently analysed to manage the health of an
asset. The research reveals that highly sophisticated techniques such as vibration, acoustic
emission, tribology and ultrasound analysis are applied to assess and/or predict the health of
an asset. These techniques have been documented as extremely effective in machine health
diagnosis and prognosis, but there are situations where it becomes almost impossible to
devise an integrated strategy due to technical and economic limitations. These scenarios are
also discussed in the paper.
Integrating these individual techniques into an effective combined machine diagnostic system
requires unequivocal prognostic capabilities. This study provides a comprehensive
knowledge base for researchers and practitioners working in the field of maintenance
engineering.
Keywords: Condition-based maintenance, multi-dimensional methods, gear defect diagnosis
1. Introduction
Figure 5: Typical cash-flow diagram illustrating the cost of lost production [5].
The ability for organisations to run safe and efficient operations that minimise costs through
more targeted maintenance events and fewer failures has never been as important given the
current global market conditions [1]. Companies’ profitability models have dramatically
changed over the past decade with increased focus on shorter delivery times, lower prices
and higher customer satisfaction levels.
In order for companies to survive with lower profit margins and a greater demand by
consumers for better products, costs must be strategically analysed, with any unnecessary
spending reined in. To achieve this, organisations are
moving from labour intensive procedures to more efficient technology intensive mechanisms
Managing the maintenance requirements of machinery, no matter how technologically
advanced the operations are, is still incredibly important wherever a loss in production is
considered a drastic occurrence. Put quite simply, the "fix it when it breaks" mentality of the
previous century is no longer relevant, as time is money. Labour costs are much higher and the
production chain is much larger due to the ease of trade between states, countries and
continents. It is therefore becoming a requirement from organisations that KPIs like
machinery availability scores (or levels) be closely monitored because whenever a piece of
machinery is down due to unscheduled maintenance it not only costs the company in lost
profits but the maintenance can involve more costly labour intensive requirements [3, 4].
Figure 1 as adapted from [5] shows the increasing cost of maintenance with multiple
breakdowns and the associated loss in production revenue.
In manufacturing, aviation, mining operations, production and other autonomous industries
there has been a shift towards implementing and using CBM to recommend maintenance
actions and thereby improve the reliability of machinery. CBM is based on a variety of
software and hardware components that together perform three key steps: data acquisition
(collection of information from health systems, such as vibration measurements); data
processing (analysis of the information to identify the fault); and maintenance decisions
(using the system data to provide an effective maintenance plan). Figure 6 outlines the
process flow of CBM as summarised by Jardine [6].
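As an illustration of these three steps, a minimal sketch of a CBM loop is shown below; the synthetic signal, the RMS indicator and the alarm threshold are assumptions for demonstration, not part of Jardine's framework [6].

```python
import math

# Toy CBM loop: acquire -> process -> decide. All values are illustrative.
def acquire(n=1024):
    """Data acquisition: stand-in for an accelerometer read-out."""
    return [math.sin(2 * math.pi * 50 * k / n) for k in range(n)]

def process(signal):
    """Data processing: reduce the record to a condition indicator (RMS)."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def decide(rms, baseline=0.71, factor=1.5):
    """Maintenance decision: flag when the indicator drifts above baseline."""
    return "schedule maintenance" if rms > factor * baseline else "healthy"

print(decide(process(acquire())))   # -> "healthy" for the clean test signal
```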
Whilst the process of CBM seems very straightforward, the amount of investment required to
implement an effective program is quite large. The acquisition of data by using sensors and
other system health parameters is dependent on the application to which the monitoring
devices are going to be applied, the type of data that is required and what processing methods
will be used [6]. An effective CBM plan should take into account the common failure
modes of equipment and develop a maintenance plan that is specific to that piece of
machinery; whether that be for a gearbox, compressor, turbine or pump. In a production
line there are various components that have individual applications and to effectively
maintain all these mechanisms requires a comprehensive management program that utilises a
combination of non-destructive testing (NDT) tools. These include vibration analysis,
tribology, acoustic emission and thermography to name a few. Each of these approaches to
CBM is capable of detecting and diagnosing certain failures from collected data [7].
Figure 6: Steps involved in a CBM program [6].
2. Condition-based maintenance methods
The study of vibration based condition monitoring (CM) methods is particularly well
documented and used in many maintenance operations due to the ability to detect a wide
variety of faults. Vibration analysis is conducted usually using a vibration transducer,
commonly an accelerometer, to record the vibration signals of a piece of mechanical
326
equipment like a gearbox or a pump in order to ascertain the equipment’s real condition
[8-10]. Generally a baseline of the equipment in a good operating state or past recorded
data is compared to subsequent measurements in order to spot a faulty component that
exhibits a different vibration waveform. Vibration monitoring and other techniques are
prone to attenuation of the measured signal mainly due to the difficulty of mounting the
sensor to the gearbox [9]. Overall, vibrational CBM is the most useful instrument on the
market [10].
The use of acoustic emission (AE) monitoring is becoming more popular due to its ability to
detect crack propagation and the growth of subsurface cracks; it has, however, been used in
CM applications for over three decades [8]. AE is essentially the result of high frequency
elastic wave generation caused by a rapid release of strain energy within or on the surface of
a material [11]. The main advantage of AE monitoring is that the transducer is very sensitive
and can detect microscopic changes in mechanical integrity, which makes it much better at
fault detection than vibration monitoring methods in certain situations. Like vibration
measurements, the measuring capability of AE sensors is weakened by their location, as
attenuation grows with the sensor’s distance from the component and with any interfering
interfaces [11]. Ultrasound monitoring belongs primarily to the noise analysis family of CM
techniques, like vibration and AE monitoring. Unlike vibration monitoring, it is suitable for
studying the frequency content of components above the 30 kHz range and primarily finds
its use in areas where high frequencies are present [7].
With further studies into CBM it is evident that simply monitoring components from a
vibration standpoint is not always sufficient and that complementing the vibration
measurements with other indicators like oil analysis, AE and thermography is important. In
fact the combined use of vibration analysis and oil analysis (in its broad form) have been used
successfully for many years particularly in aviation. In rotating machinery like a helicopter
gearbox it is common for chip detectors, oil filters and spectrometric oil analysis (SOA) to be
used as they provide in some instances online capabilities. Chip detectors or magnetic plugs
collect ferromagnetic particles in the oil of a component and in the case of a chip detector,
can warn the operator (in a helicopter’s case, the pilot) of the presence of metal chips [9].
Other methods like SOA are used in a completely offline process. It is a technique that
examines the metal elements present in an oil sample to determine abnormal wear. Based
on its analysis it can determine which part is wearing within the component and the rate at
which it is wearing [8]. This form of post-event analysis is very common and maintenance
organisations will regularly send away oil samples to be tested at labs to ensure that
machinery is not suffering from an impending fault. Similarly most machines will have
some sort of temperature probe that can monitor the oil temperature or other aspects which
can be integrated into the CBM system.
3. Multi-dimensional condition-based maintenance decision support
Merging the core CBM methods into an integrated system that can provide a maintenance
team a plethora of information is only useful if the system can accurately distinguish the
information and provide concise recommendations. This would be one of the main goals of
a functioning multi-dimensional CBM system. John Mitchell, a specialist in maintenance
engineering and asset management, has stated that greater integration of mechanical
condition data is vital and that this is the direction that CBM systems must move towards in
the future if they are truly to become a fully comprehensive management system that can
provide operation decision support and offer value to organisations. This means that there
must be stronger links between each of the proven condition assessment techniques [12].
Integrating the sensors and using the data to provide more accurate and reliable maintenance
decisions can save a considerable amount of money in extra maintenance. The Palo Verde
Nuclear Generating Station in Wintersburg, Arizona was able to save $3.7 million (USD) in
one year after introducing a vibration and oil analysis system. It also determined that for
every dollar invested in CM techniques it would save around $6.50 in maintenance costs
[13]. In deciding whether an organisation should implement a multi-dimensional CM
program, thought must be given to what equipment is being used and the costs incurred by
the organisation if the equipment fails. Reviews of the multi-dimensional CBM techniques
that have been studied are given below.
3.1. Vibration and Acoustic Emission analysis
Loutas et al. [14] and Tan et al. [11] used accelerometer and AE sensors on a test-rig gearbox
in order to test their ability to accurately detect faults that had been manually introduced
(seeded faults). Loutas et al. determined that the AE technique proved superior to the
vibration equivalent during the early stages of the experimental testing, especially with early
crack propagation. It was able to give substantial warnings and discrepancies in the
monitored parameters during these early to middle stages of the test. The AE testing
parameters also signalled a linear behaviour in the data. This meant that whenever a sharp
change in the gradient was noticed, this could be linked with a change in the crack
propagation rate. Temperature measurements of the oil bath within the gearbox concluded
that oil temperature has a clear effect on the vibration and AE signals [14]. Understanding
the use of AE in a CM environment requires a precise understanding of its strengths and
limitations under all conditions. Eftekharnejad and Mba studied its effectiveness in
identifying seeded faults in helical gears. The results showed that the seeded defect in the
gears was most evident in the AE measurements due to its sensitivity over vibration signals.
AE bursts allowed the fault to be identified to the exact tooth where the defect had been
induced. Interaction with fluid within the defect cavity had the ability to provide its own
source of AE activity causing AE RMS (root mean squared) levels to be lower than in other
tests. Further experiments showed that AE RMS values increased with increases in cavity
volume, supporting the phenomenon that entrapped lubrication within the pit or spall can
contribute to the level of AE measured [15]. The experimental study by [16] was performed
using a specifically designed test rig consisting of a worm gearbox with small and large
induced defects. In the first test at 150 rpm and at two different load conditions, the AE
sensor showed transient increases in energy and RMS that reflected the large defect, whereas
there was no variation in the vibration levels measured by the accelerometer. Neither sensor
detected the small defect at this speed. At an increased speed of 300 rpm (same two
load conditions) the corresponding increase in impact energy at the gear mesh caused the AE
sensor to detect both small and large defects, however the vibration levels at the gear mesh
were again not altered – meaning that it had not detected either of the faults. Low speed
fault detection was found to be difficult with both sensors.
Switching focus from gears specifically to bearings, [17] investigated the diagnostic ability of
vibration and AE analysis techniques to detect faults in the outer race of a radially loaded test
bearing with two seeded defects. The testing showed that AE transients were present at the
associated defect frequency on the outer race (BPFO) for a flaw that had material protruding
at a level greater than the average surface roughness. Both AE and vibration (RMS and
maximum amplitude) measurements increased as the defect size increased, however the
magnitude of the changes were not proportional between them. The AE signal was more
sensitive and was able to identify the defect source whilst the frequency spectrum of the
vibration readings could not in many cases without further signal processing. The AE
transients were discovered to increase with speed and load and AE burst duration was found
to directly correlate with the seeded defect length along the race. This was found in
comparison to the vibration signature which required demodulation and band-pass filtering to
enhance its diagnostic capabilities. Echoing thoughts of other authors [14], both [18, 19]
have described that vibration measurements alone are not ideal in detecting early or low
speed faults and that AE is much more effective in distinguishing faults in these cases. In
[19] the results clearly indicated that AE was better than vibration acceleration in discovering
faults. The data processing step is also drastically important and will affect the overall
response of the sensor. In [18], the authors suggested that kurtosis is the most effective
processing method in the time domain and that in the frequency domain the high-frequency
resonance technique is well established. The concern with the latter method is its inability
to consistently detect advanced damage. Whilst using other methods like the wavelet
transform to extract weak signals has been recommended, it seems more appropriate to
incorporate both vibration and AE methods as each has complementary attributes. It was
noted in [18] that precautions must be taken when recording sound measurements to ensure
that any external (unrelated) noise is omitted. A gearbox that had been altered to simulate
misalignment (twisted case) and an inner race fault (severe misalignment) was presented in
[20]. The AE sensor was able to unequivocally identify the defect frequencies and the
relevant sidebands of BPFI (ball pass frequency of inner race), although the enhanced DWT
(discrete wavelet transform) method was able to deliver higher peak levels allowing for
evaluation of the machine condition to be accomplished a lot easier and with greater
confidence due to a clean signal. The accelerometer was not able to offer the same
conclusions due to the higher level of noise that it measured in the low frequency range,
resulting in sidebands that were not easily identified with or without the enhanced technique
[20]. In the paper by [21], the application of spectral kurtosis using AE and vibration data
from a defective bearing was tested. They were able to show that it effectively de-noised
both the vibration and the AE signals. As seen with other trials [16], the lack of sensitivity
of the accelerometers in measuring vibration causes it to be unable to detect developing faults
and this was again noted in testing which showed that AE levels increased at approximately
one hour before vibration readings began to change in relation to the onset of the accelerated
defect. The high amount of noise (low signal to noise ratio) that is recorded by the vibration
instruments is detrimental to the selection of an optimum filter frequency. The results
showed that despite using techniques like the least-square FIR filter and the Hilbert transform
which clearly showed the defect frequency, the challenge still lies with selecting an effective
band-pass filter. It must be sensitive enough in order for it to alter its parameters as changes
in speed and load are experienced, if not it becomes very hard (especially with automated
monitoring systems), to correctly select filter frequencies with such high amounts of noise in
the signal. Partially successful de-noising was accomplished using Spectral Kurtosis (SK)
with the vibration signals; with the first test showing that the defect was spotted earlier after
the SK-based filter had been applied but not any earlier with the second test. The calculated
optimum frequency bands for SK analysis of the AE signal were outside the measurement
range of the sensor. Due to the narrow focus of AE compared to the wider frequency range
of an accelerometer issues like this can arise [21].
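To make the time-domain kurtosis indicator discussed in this section concrete, the sketch below computes it for a synthetic healthy signal and for the same signal with periodic impacts added; both signals are illustrative assumptions, not data from the cited studies.

```python
import random

# Time-domain kurtosis: near 3 for a Gaussian-like healthy signal, and
# much higher when periodic impacts from a defect are present.
def kurtosis(x):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return sum((v - mean) ** 4 for v in x) / (n * var ** 2)

random.seed(0)
healthy = [random.gauss(0.0, 1.0) for _ in range(4096)]
faulty = list(healthy)
for i in range(0, len(faulty), 256):   # add an impact every 256 samples
    faulty[i] += 12.0

print(f"healthy kurtosis ~ {kurtosis(healthy):.1f}")   # ~3
print(f"faulty kurtosis  ~ {kurtosis(faulty):.1f}")    # >> 3
```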
Yoshioka and Shimizu were able to show that a compound sensor capable of recording both
vibration and AE signals to monitor rolling contact bearing operations is in fact quite
effective in detecting faults, even under grease lubrication. Tests showed that AE event rate
can be used to detect the initiation and propagation of the fatigue crack. They also noted
that when a combination of factors is occurring (like fretting corrosion) the ability to
separate the AE signals is difficult, and this can undermine the signal processor’s ability to
interpret the fault. There were instances where only one of the techniques was able to
indicate an issue. The root cause of this varies and is mainly dependent on the processing
technique that was used. Overall the vibration RMS and the demodulated dispersion of the
sample values around the mean were able to completely detect the appearance of damage
whereas the average of the demodulated AE signal indicated a rate of change of 90% for the
appearance of damage. These results indicate that the combination of vibration and AE
analysis is well served in providing a more advanced CM system that has superior damage
forecast ability [22]. Processing the sensor data effectively as shown in [22] can lead to
good fault detections. The paper by [23] has shown how inner and outer race defects of
rolling element bearings can be detected through sensor signature analysis from an
accelerometer and AE sensor signals. With this processing technique, both sensors were
comparable in their detection abilities, but the AE sensor was not as sensitive to inner race
defects because the signal must travel through more interfaces than for other defects,
highlighting the importance of sensor placement [23]. In [24], tests showed that AE due to
its higher sensitivity was capable of detecting many stages of the fatigue development,
including crack initiation, propagation, growth, connection, closure and fracture of the
coating materials. Variations in the AE waveforms (sharp peaks, etc.) were attributed to
different stages in the damage process. The magnitude of the AE bursts was also linked to
the quantity and rate of crack initiation. In stark contrast, the vibration signals were only
able to detect the final stage of fatigue where deep pitting had occurred.
Whilst vibration and AE CM techniques are common, new techniques like stator current
monitoring have been investigated in [25]. They found that vibration, AE, stator current and
shock pulse methods (SPM) were all able to identify the induced circular hole in the outer
race of the ball bearing from an induction motor. However, AE was shown to be most
effective in identifying the fault in a comparative study for both minimum and maximum
defect size in the outer race. In fact, AE was nearly three times better than SPM in
identifying the smallest defect size whereas it was only marginally better with the maximum
defect size. The authors in [19] also agreed that AE was the best at detecting defects at
lower speeds, with SPM the poorest. Both AE and SPM were the most effective overall and
showed the greatest level increase on the normalised value [25]. Due to differences in each
sensor's frequency range, using a combination of vibration and AE sensors
has proven to be beneficial in completely capturing all information. It was shown in [26]
that surface vibration, airborne sound and AE all indicated the surface scratching of spherical
journal bearings at different frequency ranges. Vibration and airborne sound responses were
around 4250 Hz to 8000 Hz whereas AE was in the range of 10 kHz to 200 kHz. Increasing
speed consequently raised the RMS and peak values of each of the three frequency
spectra. AE was still the most sensitive to the surface scratching fault. Failures in a
two-stage helical gearbox were tested by Baydar and Ball. They found that while both
vibration and AE signal analysis were successful in detecting gear failures, individual
accuracy varied depending on the fault type. A particular advantage of AE arises when there
is a tooth crack or another situation where the gear meshing force is reduced, as the less
pronounced impact between teeth makes vibration signals significantly weaker compared to
AE signals. Overall, the AE signals were found to be the best for the early detection of
faults [27], which emphasises why AE should be used in a multi-dimensional system.
3.2. Vibration, AE and Ultrasound analysis
As discussed earlier in papers [14, 16-19], monitoring low-speed faults with traditional
vibration methods is difficult due to low impact rates, and even AE, which has been shown
to be more sensitive to changes in a component, is still linked to the speed and load being applied.
Ultrasound diagnostic techniques are being used more often now, especially since they have
been shown by research institutions like NASA to locate incipient failure of bearings well
before traditional vibration methods [28]. The ultrasound technique can detect and define
sonic signatures emitted by machinery with sound waves above frequency levels of 20 kHz.
Whilst AE can also detect high frequency signals (in the range of 100 kHz to 1 MHz),
ultrasonic measurements are particularly focussed on the frequency range from 20 kHz to
100 kHz, which allows technicians to identify and precisely locate faults like bearing
deterioration. Due to the ultrasound technique's high signal-to-noise ratio, it is able to clearly
identify the exact location of the energy source irrespective of any external transients like
noise [29]. Using data processing methods like the FFT, the ultrasonic signal can be
processed to remove the ultrasonic components, leaving behind the low-frequency
components, which can be converted into signals that can be analysed [28].
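As a sketch of this demodulation step (assuming a record x sampled at fs; the cutoff frequency and the stand-in data are illustrative assumptions, and this is not the exact processing chain of [28]), the envelope strips the ultrasonic carrier and a low-pass filter keeps the low-frequency content for spectral analysis:

    # Minimal sketch of envelope-based demodulation of an ultrasonic record:
    # take the envelope to remove the high-frequency carrier, low-pass the
    # result, and inspect its spectrum. `x`/`fs` are assumed inputs.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def demodulate(x, fs, audio_cutoff=5_000.0):
        env = np.abs(hilbert(x))               # remove the ultrasonic carrier
        b, a = butter(4, audio_cutoff / (fs / 2), btype="low")
        low = filtfilt(b, a, env)              # keep low-frequency content
        spectrum = np.abs(np.fft.rfft(low - low.mean()))
        freqs = np.fft.rfftfreq(low.size, d=1 / fs)
        return freqs, spectrum                 # scan for bearing frequencies

    fs = 250_000.0                             # assumed ultrasonic sampling rate
    x = np.random.randn(int(fs))               # stand-in for a measured signal
    freqs, spec = demodulate(x, fs)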
Figure 3: Percentage differences of kurtosis value using accelerometer and ultrasound
measurement techniques [30].
Tan, Kim and Kosse compared the CM approaches of low-speed bearings using traditional
vibration measurements taken with an accelerometer and also using an AE sensor and
ultrasound techniques. Experimental tests from 30 rpm to 1200 rpm were conducted on a
simulated defect on the outer race of a bearing. The RMS and kurtosis data values were
deemed to be the better condition indicators as they demonstrated stable and consistent
percentage changes as the
operating speed changed. The ultrasound measurements are shown to be the best CM
technique for low speed bearings using the kurtosis data. Figure 3 shows a plot of the
relative percentage differences of kurtosis values as tested by [30] for a healthy and defective
bearing. It is clear that the ultrasound technique is able to detect a bearing fault at
lower shaft speeds than a traditional accelerometer, which is more effective at higher shaft
speeds. Combining the two measuring instruments makes clear sense, especially if low
speed operations are frequent or diagnosing faults early is paramount to maintenance policy.
A similar experiment was carried out by [29] except this time the fault was simulated on the
inner race of the bearing. It showed the ultrasound technique was quite superior to vibration
acceleration in detecting defects in the bearings at low speed. The RMS of the ultrasound
readings was the best diagnostic parameter at all speeds, but for shaft speeds less than 300
rpm the values for crest factor and kurtosis were shown to be very responsive, echoing the
percentage changes seen in Figure 3.
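For illustration, the two indicators and the relative percentage differences plotted in Figure 3 can be computed as in the following minimal Python sketch (the records below are random stand-ins, not measured data):

    # Minimal sketch of the condition indicators discussed above: RMS and
    # kurtosis for a healthy and a defective record, reported as a relative
    # percentage difference. Inputs are assumed 1-D numpy arrays.
    import numpy as np
    from scipy.stats import kurtosis

    def indicators(x):
        rms = np.sqrt(np.mean(x**2))
        kur = kurtosis(x, fisher=False)        # Pearson kurtosis (3 = Gaussian)
        return rms, kur

    def pct_difference(healthy, defective):
        (r0, k0), (r1, k1) = indicators(healthy), indicators(defective)
        return 100 * (r1 - r0) / r0, 100 * (k1 - k0) / k0

    healthy = np.random.randn(10_000)          # stand-ins for measured records
    defective = healthy + (np.random.rand(10_000) < 0.001) * 8.0  # sparse impacts
    d_rms, d_kur = pct_difference(healthy, defective)
    print("RMS change: %.1f%%, kurtosis change: %.1f%%" % (d_rms, d_kur))

Because kurtosis responds to the impulsiveness of sparse defect impacts rather than overall signal level, it is the indicator that separates healthy from defective records most clearly at low speed.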
3.3. Electrostatic analysis with vibration and acoustic emission
Electrostatic (ES) sensing is capable of detecting tribologically originated electrostatic charge
sources, which include shearing, surface cracking and oxidation to name a few [31]. It is
therefore being commonly used alongside AE in monitoring wear of components like
bearings. Both techniques have complementary advantages as each one is sensitive to
different phases of wear development. A paper by [31] studied ES and AE techniques in
monitoring delamination wear from a dry steel sliding contact. Results showed that both
signals correlated with the friction levels and wear rates, and they were both able to
differentiate three wear regimes: running-in, delamination and oxidation. The
AE RMS values were determined to be most sensitive to the running-in and oxidation regions
of the testing. The ES sensor was susceptible to varying levels of background noise that
affected its ability at stages of the tests to identify damage features that the AE signals were
able to identify. However, each technique was more sensitive than the other at various
points indicating that the combination of the two was important in providing an accurate
indication of the wear. In the paper by [32] a multivariate technique to extract features from
the signals was used to study the behaviour of the features using a clustering method at
different running stages before it was compared to baseline data from a bearing in good
condition. It could detect wear conditions quite well due to the sensitivity of the clustering
method in isolating changes between good and bad (condition) data signals. The analytical
approach still requires further investigation in order to prove that crack detection correlates
with physical evidence and to be able to use the data to determine fault location.
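The clustering idea can be sketched as follows (a minimal illustration in the spirit of [32], not their implementation; the Gaussian mixture model, the feature dimensions and the threshold are assumptions):

    # Minimal sketch of mixture-model clustering for wear detection: fit a
    # Gaussian mixture to baseline (good-condition) feature vectors, then flag
    # records whose likelihood under that model is low. Feature extraction
    # itself is assumed to have been done upstream.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    baseline = np.random.randn(500, 3)          # stand-in: good-bearing features
    gmm = GaussianMixture(n_components=2, random_state=0).fit(baseline)

    # Alarm threshold: 1st-percentile log-likelihood of the baseline data.
    threshold = np.percentile(gmm.score_samples(baseline), 1)

    new_features = np.random.randn(50, 3) + 2.0 # stand-in: worn-bearing features
    suspect = gmm.score_samples(new_features) < threshold
    print("flagged %d of %d records" % (suspect.sum(), suspect.size))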
3.4. Vibration and tribology analysis
A popular combination of CM techniques includes vibration and lubricant analysis. When
integrated together they are quite successful in meeting the monitoring objectives of an
operator. The paper by Peng and Kessissoglou studied this integrated approach of fault
diagnosis with a worm gearbox. An obvious increase in vibration amplitude from the baseline
amplitude was associated with wear characteristics. Wear debris was extracted and
examined using a standard and confocal laser scanning microscope. It was able to provide
useful information on the wear rate and wear location and the vibration analysis offered
dependable information on the condition of bearings in the system. Using the two
techniques together would allow CBM specialists to more accurately detect faults and the
cause of faults in gearboxes because results can be compared [33]. Further study into
combining these two methods was done by NASA’s Glenn Research Centre in Ohio [34]
where data was collected from fatigue tests of a spur gearbox in order to analyse the ability
for the system to detect pitting of the spur gears. Dempsey and Afjeh applied fuzzy logic
analysis techniques to the recorded gear failure data (both vibration and oil debris), hence
integrating the information in the process. They were able to show that, after defining the
fuzzy rules and associated membership functions that set the range and limit of pitting
damage (allowing the system to discriminate between wear stages), the system had improved
damage detection abilities. When it comes to pitting damage, oil debris analysis was shown
to be more reliable than vibration monitoring in detecting pitting fatigue failure. Vibration
measurements were more sensitive to environmental conditions (location, sampling rates, etc.),
which makes diagnosis harder [34]. The experiments showed that the vibration
algorithms could not identify damage progression unlike the oil debris analysis which could
indicate it based on an increase of material mass. Choosing correct threshold limits for the
vibration algorithms was difficult and because the limits vary depending on the testing
situations, poorly designed processes could give off false alarms. The membership function
association requires multiple sets of damage data in order to find the perfect threshold limits.
Overall the combined monitoring approach has meaningfully enhanced the diagnostic
capability for gear damage in the NASA fatigue rig and has the ability to be applied to other
situations [34]. A similar conclusion was noted by Mathew and Stecki [35]. In some
instances where sliding wear was dominant, the vibration analysis was less sensitive to fault
detection. However the consensus was that even though both detection methods assess
different wear signs, both methods should be applied in providing an all-encompassing
monitoring process.
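A fuzzy fusion of the two indicators can be sketched as follows (a minimal illustration in the spirit of [34], not NASA's calibrated system; all membership breakpoints are assumed values):

    # Minimal sketch of fuzzy fusion: map an oil-debris mass and a vibration
    # index to "damage" memberships and combine them with simple rules.
    import numpy as np

    def ramp(x, lo, hi):
        # 0 below lo, 1 above hi, linear in between (one-sided trapezoid).
        return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

    def fuzzy_damage(debris_mg, vib_index):
        debris_high = ramp(debris_mg, 20.0, 60.0)     # assumed breakpoints
        debris_mod = ramp(debris_mg, 5.0, 20.0)
        vib_high = ramp(vib_index, 1.5, 3.0)
        # Rule: damaged if debris is high, OR debris is moderate AND
        # vibration is high (max-min inference).
        return max(debris_high, min(debris_mod, vib_high))  # 0 healthy, 1 damaged

    print(fuzzy_damage(debris_mg=30.0, vib_index=2.4))

The appeal of this design is that a noisy vibration index alone cannot raise an alarm; it only reinforces evidence already present in the debris channel, which matches the observation above that oil debris is the more reliable pitting indicator.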
The paper by [36] performed tests on a worm gearbox which showed that integration of the
two sensors into the monitoring of a gearbox would result in a more reliable assessment of its
condition. Both sensors were able to diagnose problems from five individual tests but both
had individual advantages in each case that allowed for a broader picture of the condition of
the gearbox. Vibration analysis was quite effective in determining the condition of the
bearings, whereas the wear debris analysis was effective in determining faults due to lack of
lubrication and whenever wear rate increased [36].
Issues can arise where only one of the
sensors is able to detect a fault and the other sensor is not able to “back it up” with conclusive
results and this could lead to false recommendations. With worm gearboxes, impacts caused
by faults do not occur periodically. Identification of fault frequency peaks is therefore
harder and requires the vibration signal to be taken over longer periods of time. Using
vibration and wear debris analysis can overcome this inherent problem [37]. Ebersbach,
Peng and Kessissoglou conducted studies with a spur gearbox that was exposed to defect
conditions like constant overload and cyclic load. The overload conditions were detected by
both techniques. The vibration analysis identified an increasing deteriorating gear mesh
fault along with increasing sideband peaks and the wear debris analysis indicated fatigue
particles caused by pitting. The cyclic load testing was not as successful, as neither
technique was able to detect the cause of failure. However, other indications of wear from
both techniques led to the conclusion that a fault was occurring. Each data
acquisition technique was able to support the other, and this was effective in diagnosing
faults with more accuracy. Whilst vibration was good for fault detection and wear debris
analysis was effective in identifying wear modes, the inability to correctly identify a cyclic
load failure was of concern [38].
The vibration and wear debris analysis methods were verified through the use of a test stand
(consisting of a motor and reducer) in a paper by [39]. It echoed other papers in that wear
particle analysis was best suited to monitoring friction. Inappropriate viscosity of the oil
caused issues in certain parts of the test rig, and vibration analysis alone was not
straightforward in its conclusions without further demodulation of the signals [39]. Advanced CM
setups have been tested as in [40] and include vibration, debris analysis and ES charge
sensing for monitoring tapered rolling bearing wear. Particle counting techniques and wear
debris imaging were shown to be the best at identifying early signs of distress. Wear in the
setup was indicated by a decline in wear site charge, acceleration, temperature, oil-line charge
and particle counts. The particle count was also the first to indicate an impending wear fault.
This was later confirmed by increases in vibration, temperature and wear site charge along
with findings of wear debris. The process was very comprehensive because of the high
amount of tribology sensing. These sensing techniques, however, would not be beneficial
if they were used on their own. Integrating these methods to monitor marine diesel engines
was studied in [41]. This methodology provided a comprehensive CBM of the diesel engine
in diagnosing faults. The two methods do in fact offer complementary strengths in that
vibration analysis is able to respond quickly to changes and conclusively indicate faults in the
engine, whereas wear debris analysis can differentiate the normal and faulty states much more easily.
Combining the two methods showed that four faulty patterns could be successfully diagnosed
at a detection rate of 85.6%, which was 16% higher than using vibration analysis alone. The
ability for power machinery like a marine diesel engine to be monitored more effectively
using this multi-dimensional approach is a step towards lower maintenance costs and
increased safety on the water.
3.5. Vibration, Acoustic emission and Tribology analysis
The condition of spur gears was investigated through a combination of CM techniques
including vibration, acoustic emission and spectrometric oil analysis (SOA). The
experimental study completed by Tan, Irving and Mba [42] aimed to show how AE could be
successfully implemented along with vibration and SOA techniques when trying to identify
the pitting of gears. AE levels measured from the gear pinion was found to be linearly
correlated to the pitting rates of the gearbox at all torque conditions. Whereas vibration
analysis required 20-40% pitting of the gear before it could be detected, AE could detect
pitting as early as 8% of the pitted area. Vibration monitoring was only noted to have a
better sensitivity to pitting rates at lower torque levels, however at higher torque levels AE at
the bearing case showed superior sensitivity. It was not until the torque level was high
enough that the SOA technique became better at detecting pit growth than the vibration
technique. Neither the vibration nor the SOA technique performed very well at the lowest
torque levels [42]. On-line monitoring is where the future of machine maintenance is
heading. It offers the most accurate and instantaneous form of CM
for organisations. The paper by Loutas, Roulias, Pauly and Kostopoulos [43] focuses on the
combined use of vibration, AE and oil debris monitoring (ODM) from the lubricant oil in an
online state for rotating machinery. AE monitoring did not offer any significant diagnostic
advantages when it came to monitoring normal gear wear compared to the vast advantage that
AE monitoring gave for cracked tooth detection in a previous study [14] of similar
proportions. However certain parameters from the vibration and AE recordings were found
to be excellent in differentiating gear damage monotonically and therefore capable of
diagnosing gear damage. The ODM sensor was used to determine iron mass and iron mass
rate, and was only able to provide a few useful diagnostic values which were used as part of
the overall data matrix. Data processing techniques like the DWT were excellent in
providing diagnostically useful data compared to time- and frequency-domain techniques,
which were not as suitable [43].
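The DWT feature extraction can be sketched as follows (a minimal illustration of the general idea, not the processing of [43]; the wavelet choice and decomposition depth are assumptions, and PyWavelets is assumed available):

    # Minimal sketch of DWT feature extraction: decompose each record and keep
    # the energy per scale as a diagnostic feature vector.
    import numpy as np
    import pywt

    def dwt_energies(x, wavelet="db4", level=5):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        return np.array([np.sum(c**2) for c in coeffs])   # energy per sub-band

    x = np.random.randn(8192)                  # stand-in for a vibration/AE record
    features = dwt_energies(x)
    features /= features.sum()                 # normalised energy distribution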
4. Challenges and difficulties in implementing a multi-dimensional CBM policy
There are inherent challenges with implementing a multi-dimensional CBM policy. A truly
effective CBM program requires online monitoring in order to accurately determine all sorts
of impending defects and to take full advantage of what the sensors can offer. This requires
sensors that are attached to components in the best possible location at all times in order to
capture signals (bearing, gear mesh) with minimal external influences. Issues arise here
because it is not always feasible to do this (physically and theoretically). In most
environments where CBM policies would be implemented there is a multitude of factors that
can affect the process.
Whilst all CBM methods are useful in some capacity, integrating
them together so that they give a uniform prognostic appraisal of machine condition and not
conflicting evaluations poses issues because if the latter occurs then an operator will not
know which sensor is providing the correct information. The author's experience is in the
aviation sector, particularly with helicopter maintenance operations and reliability. It is an
industry that benefits greatly from NDT techniques; however, it is not possible to apply all
the multi-dimensional CBM methods to a helicopter for in-flight monitoring due to the
highly sensitive nature of the sensors and the complexity of the set-up. Helicopter
components experience much higher dynamic loads than other machines, and many
environmental factors and operating conditions put the airframe under more stress, which
accelerates wear and component failure. Health and Usage Monitoring (HUM) systems
provide effective vibration monitoring and oil analysis capabilities but the system can be
improved on. Incorporating more advanced methods like those examined in this paper
(especially those that have proven to be able to monitor components experiencing varying
loads) will go a long way to lowering the high maintenance costs and improving the safety of
helicopter operators. Issues like the complexity of CBM software on the market need to be
considered, as such software generally requires well-trained specialists to operate it effectively.
Thought must be given to the operating requirements and the training involved with
introducing an intricate system that incorporates multiple sensors and various data processing
methods especially in online and fully automated systems that are used in helicopters, as
effective diagnostic and prognostic abilities are paramount.
5. Conclusions
There is considerable benefit from using a multi-dimensional technique approach to
monitoring rotating equipment in an organisation. The complementary aspects of all
methods mean that when one method is not as effective, the other generally fills the void.
All multi-dimensional data acquisition techniques have generally come about by trying to
improve on the vibration monitoring process. Moving completely away from vibration
monitoring and using other dedicated sensors has not been effective in most cases because the
best processes used by industry involve using a combined process in order to capture all
developing faults. Studies showed that with a multi-dimensional maintenance approach,
vibration analysis was able to identify the type of fault and that wear debris analysis can
indicate the wearing machine element [37]. AE and ultrasound techniques are more
sensitive to small changes in machine condition and are quite effective at identifying faults at
low speeds. The higher sampling frequency of AE (50 – 1000 kHz) allows external noises
to be eliminated, a common drawback with vibration analysis [24]. Whether an
organisation uses a highly technological approach to CM (online or portable) or a simple
one depends on the economic constraints of the cost to implement the system and the cost
incurred by the equipment being inoperable, as well as any safety concerns that may arise
from the failure of a component. Therefore, it is sensible for a
multi-dimensional system to be used as it can dramatically reduce maintenance costs due to
improved fault detecting abilities.
6. References
[1] Heng, A., et al., Rotating machinery prognostics: State of the art, challenges and
opportunities. Mechanical Systems and Signal Processing, 2009. 23(3): p. 724-739.
[2] Alsyouf, I., The role of maintenance in improving companies' productivity and
profitability. International Journal of Production Economics, 2007. 105: p. 70-78.
[3] Rao, B.K.N., Handbook of Condition Monitoring. 1st ed. 1996, Oxford, UK: Elsevier
Advanced Technology.
[4] Smith, R. and R.K. Mobley, Chapter 1 - Understanding Maintenance and Reliability, in
Rules of Thumb for Maintenance and Reliability Engineers. 2008,
Butterworth-Heinemann: Burlington. p. 3-25.
[5] Davies, A., Handbook of Condition Monitoring: Techniques and Methodology. 1998,
London: Chapman & Hall.
[6] Jardine, A.K.S., D. Lin, and D. Banjevic, A review on machinery diagnostics and
prognostics implementing condition-based maintenance. Mechanical Systems and
Signal Processing, 2006. 20(7): p. 1483-1510.
[7] Smith, R. and R.K. Mobley, Chapter 4 - Predictive Maintenance Program, in Rules of
Thumb for Maintenance and Reliability Engineers. 2008, Butterworth-Heinemann:
Burlington. p. 47-56.
[8] Tandon, N. and A. Parey, Condition Monitoring of Rotary Machines, in Condition
Monitoring and Control for Intelligent Manufacturing, L. Wang and R.X. Gao, Editors.
2006, Springer London. p. 109-136.
[9] Danai, K., Fault Diagnosis of Helicopter Gearboxes, in Vibration Monitoring, Testing,
and Instrumentation, C.W. de Silva, Editor. 2007, CRC Press: Boca Raton, FL. p. 1-26.
[10] Mobley, R.K., Predictive Maintenance Techniques, in An Introduction to Predictive
Maintenance. 2002, Butterworth-Heinemann: Burlington. p. 99-113.
[11] Tan, C.K. and D. Mba, Identification of the acoustic emission source during a
comparative study on diagnosis of a spur gearbox. Tribology International, 2005. 38(5):
p. 469-480.
[12] Mitchell, J.S., From Vibration Measurements to Condition Based Maintenance: Seventy
Years of Continuous Progress. Sound and Vibration, 2007. 41(1): p. 62-78.
[13] Van Rensselar, J., Vibration analysis: The other half of the equation. Tribology &
Lubrication Technology, 2011. 67(8): p. 38.
[14] Loutas, T.H., et al., Condition monitoring of a single-stage gearbox with artificially
induced gear cracks utilizing on-line vibration and acoustic emission measurements.
Applied Acoustics, 2009. 70(9): p. 1148-1159.
[15] Eftekharnejad, B. and D. Mba, Seeded fault detection on helical gears with acoustic
emission. Applied Acoustics, 2009. 70(4): p. 547-555.
[16] Elforjani, M., et al., Condition monitoring of worm gears. Applied Acoustics, 2012.
73(8): p. 859-863.
[17] Al-Ghamd, A.M. and D. Mba, A comparative experimental study on the use of acoustic
emission and vibration analysis for bearing defect identification and estimation of defect
size. Mechanical Systems and Signal Processing, 2006. 20(7): p. 1537-1571.
[18] Tandon, N. and A. Choudhury, A review of vibration and acoustic measurement methods
for the detection of defects in rolling element bearings. Tribology International, 1999.
32(8): p. 469-480.
[19] Tandon, N. and B.C. Nakra, Comparison of vibration and acoustic measurement
techniques for the condition monitoring of rolling element bearings. Tribology
International, 1992. 25(3): p. 205-212.
[20] Gu, D., et al., Detection of faults in gearboxes using acoustic emission signal. Journal of
Mechanical Science and Technology, 2011. 25(5): p. 1279-1286.
[21] Eftekharnejad, B., et al., The application of spectral kurtosis on Acoustic Emission and
vibrations from a defective bearing. Mechanical Systems and Signal Processing, 2011.
25(1): p. 266-284.
[22] Yoshioka, T. and S. Shimizu, Monitoring of Ball Bearing Operation under Grease
Lubrication Using a New Compound Diagnostic System Detecting Vibration and
Acoustic Emission. Tribology & Lubrication Technology, 2010. 66(4): p. 32-38.
[23] Shiroishi, J., et al., Bearing condition diagnostics via vibration and acoustic emission
measurements. Mechanical Systems and Signal Processing, 1997. 11(5): p. 693-705.
[24] Zhi-qiang, Z., et al., Investigation of rolling contact fatigue damage process of the
coating by acoustics emission and vibration signals. Tribology International, 2012.
47(0): p. 25-31.
[25] Tandon, N., G.S. Yadava, and K.M. Ramakrishna, A comparison of some condition
monitoring techniques for the detection of defect in induction motor ball bearings.
Mechanical Systems and Signal Processing, 2007. 21(1): p. 244-256.
[26] Raharjo, P., et al., A Comparative Study of the Monitoring of a Self Aligning Spherical
Journal using Surface Vibration, Airborne Sound and Acoustic Emission. Journal of
Physics: Conference Series, 2012. 364(1): p. 012035.
[27] Baydar, N. and A. Ball, A comparative study of acoustic and vibration signals in
detection of gear failures using Wigner-Ville Distribution. Mechanical Systems and
Signal Processing, 2001. 15(6): p. 1091-1107.
[28] Enercheck Systems. Using Ultrasound with Vibration Analysis To Monitor Bearings.
2012 [cited 2012 29 July]; Available from: http://www.enerchecksystems.com/articl21.html.
[29] Kim, Y.-H., et al., Condition Monitoring of Low Speed Bearings: A Comparative Study
of the Ultrasound Technique Versus Vibration Measurements, J. Mathew, et al., Editors.
2006, Springer London. p. 182-191.
[30] Tan, A., Y.-H. Kim, and V. Kosse, Condition monitoring of low-speed bearings - a
review. Australian Journal of Mechanical Engineering, 2008. 6(1): p. 61-68.
[31] Sun, J., et al., Wear monitoring of bearing steel using electrostatic and acoustic emission
techniques. Wear, 2005. 259(7–12): p. 1482-1489.
[32] Chen, S.L., et al., Wear detection of rolling element bearings using multiple-sensing
technologies and mixture-model-based clustering method. Proceedings of the Institution
of Mechanical Engineers, 2008. 222(O2): p. 207-218.
[33] Peng, Z. and N. Kessissoglou, An integrated approach to fault diagnosis of machinery
using wear debris and vibration analysis. Wear, 2003. 255(7–12): p. 1221-1232.
[34] Dempsey, P.J. and A.A. Afjeh, Integrating Oil Debris and Vibration Gear Damage
Detection Technologies Using Fuzzy Logic, in International 58th Annual Forum and
Technology Display2002, National Aeronautics and Space Administration: Glenn
Research Centre: Montreal, Canada.
[35] Mathew, J. and J.S. Stecki, Comparison of Vibration and Direct Reading Ferrographic
Techniques in Application to High-Speed Gears Operating Under Steady and Varying
Load Conditions. Lubrication Engineering, 1987. 43(8): p. 646-653.
[36] Peng, Z., N.J. Kessissoglou, and M. Cox, A study of the effect of contaminant particles
in lubricants using wear debris and vibration condition monitoring techniques. Wear,
2005. 258(11–12): p. 1651-1662.
[37] Vähäoja, P., S. Lahdelma, and J. Leinonen, On the Condition Monitoring of Worm
Gears, J. Mathew, et al., Editors. 2006, Springer London. p. 332-343.
[38] Ebersbach, S., Z. Peng, and N.J. Kessissoglou, The investigation of the condition and
faults of a spur gearbox using vibration and wear debris analysis techniques. Wear,
2006. 260(1–2): p. 16-24.
[39] Gonçalves, A.C., R.C. Cunha, and D.F. Lago, Vibration and wear particles analysis in a
test stand. Industrial Lubrication and Tribology, 2007. 59(5): p. 209.
[40] Craig, M., et al., Advanced condition monitoring of tapered roller bearings, Part 1.
Tribology International, 2009. 42(11–12): p. 1846-1856.
[41] Li, Z., et al., A New Intelligent Fusion Method of Multi-Dimensional Sensors and Its
Application to Tribo-System Fault Diagnosis of Marine Diesel Engines. Tribology
Letters, 2012. 47(1): p. 1-15.
[42] Tan, C.K., P. Irving, and D. Mba, A comparative experimental study on the diagnostic
and prognostic capabilities of acoustics emission, vibration and spectrometric oil
analysis for spur gears. Mechanical Systems and Signal Processing, 2007. 21(1): p.
208-233.
[43] Loutas, T.H., et al., The combined use of vibration, acoustic emission and oil debris
on-line monitoring towards a more effective condition monitoring of rotating machinery.
Mechanical Systems and Signal Processing, 2011. 25(4): p. 1339-1352.
314
Effects of Gas Velocity and Pressure in the Serpentine 3-dimensional
PEMFC Model
Woo-Joo Yang a, Hong-Yang Wang a and Young-Bae Kim a,*
a Dept. of M.E., Chonnam National Univ., 300 Yongbong, Gwangju, Republic of Korea
E-mail address: ybkim@jnu.ac.kr
Abstract
The construction of a reliable numerical model and the clarification of its operational
conditions are necessary for maximizing fuel cell operation. The geometrical shape of the
fuel cells should also be considered in the prediction of performance because this shape
affects the reaction speed and distribution of species. Specifically, the land ratio of the gas
channel and rib is an important parameter affecting PEMFC performance because current
density distribution is influenced by this geometrical characteristic. Three main variables
determine the current density distribution, namely, species concentration, pressure, and
diffusion velocity distributions. These distributions should be considered simultaneously in
assessing fuel cell performance with a given PEMFC cell-operating voltage. In this paper,
three different land ratio models are considered to obtain better PEMFC performance.
Keyword: polymer electrolyte membrane fuel cell (PEMFC), 3-dimensional model,
computational fluid dynamics (CFD)
1. Introduction
The efficiency of the fuel cell depends on the kinetics of the electrochemical process and
performance of the components. Several experiments have been conducted by researchers
with varying design dimensions of the cell components and operating parameters. However,
experimental investigations are costly, and numerical models have been developed to
understand and optimize the kinetics of the process. To obtain higher power density, current
density in the cell should be maximized, and the cell must be operated at an optimal operating
cell voltage range. The current density is influenced by many factors, such as cathode and
anode fuel and oxygen concentrations, water contents, inlet humidity, and cell geometries.
Many researchers have studied cell geometry variations, such as height and width of the
channel and rib, as well as channel flow pattern using numerical simulations. Wang et al.
studied PEMFC performance by comparing the inter-digitated and parallel channel
arrangements [1]. They performed three-dimensional (3D) analysis using an in-house
program, through which they found that inlet humidity is the major factor that influences
341
PEMFC performance. Cordiner et al. simulated the 3D PEMFC model with Fluent [2]. They
used straight parallel-type channel PEMFC with the addition of a cooler channel. Their
findings are not new but they verified their simulation results with those of the experiment.
Kumar et al. tried to find the optimal channel and rib geometry using a 3D numerical
method [3]. They obtained the optimal channel width, rib width, and channel height based on
fuel consumption. Manso et al. used 3D numerical simulation to obtain the optimal channel
aspect ratio, i.e., considering the channel height and width ratio [4]. Ahmed et al. used
Star-CD and performed numerical analysis by changing the channel shapes into rectangular,
trapezoidal, and parallelogram [5]. In considering the previous research results, the
geometrical effect on the PEMFC performance can be identified by focusing on the channel
and rib-width ratio and channel height variation for straight channeled or serpentine
channeled PEMFC. For serpentine-channeled PEMFC, the serpentine flow pattern is the
major issue affecting the increase of PEMFC performance.
In the present paper, we consider a 3D PEMFC unit cell with serpentine-type channel. Three
land ratios of channel and rib were considered for the performance comparison. Pressure drop
between inlet and outlet is also an important factor in designing the fuel cell because large
pressure drop requires large extra compressor or pump power to deliver sufficient air or
hydrogen, which ultimately degrades the total fuel cell efficiency. As the pressure drop and
diffusion velocity depend strongly on the land ratio variation, their effects are discussed.
2. Model Equations
The governing equations for the numerical simulation are conservation of electron and proton
charges, momentum transport, and species transport. The electron and proton charge
conservation laws are applied using the following:
$\nabla \cdot (\sigma_s \nabla \phi_s) = S_i$,  (1)
$\nabla \cdot (\sigma_m \nabla \phi_m) = S_i$,  (2)
where $\sigma_s$ and $\sigma_m$ represent the conductivity of the solid electrode and
membrane, respectively. The volumetric current density, $S_i$, is determined from the
Butler-Volmer equation for the anode and cathode reactions, respectively given as [6]:
$S_{an} = j_{o,a} \left( \frac{c_{H_2}}{c_{H_2,ref}} \right) \left[ \exp\left( \frac{\alpha_a F}{RT}(\phi_s - \phi_m) \right) - \exp\left( -\frac{\alpha_c F}{RT}(\phi_s - \phi_m) \right) \right]$,  (3)
$S_{ca} = j_{o,c} \left( \frac{c_{O_2}}{c_{O_2,ref}} \right) \left[ \exp\left( \frac{\alpha_a F}{RT}(\phi_s - \phi_m) \right) - \exp\left( -\frac{\alpha_c F}{RT}(\phi_s - \phi_m) \right) \right]$,  (4)
where $j_{o,c}$ and $j_{o,a}$ are the cathodic and anodic current exchange densities,
respectively. In Equation (2), the membrane conductivity can be defined as [7]:
$\sigma_m = \left( 0.514 \frac{M_{m,dry}}{\rho_{m,dry}} c_w - 0.326 \right) \exp\left[ 1268 \left( \frac{1}{T_0} - \frac{1}{T} \right) \right]$,  (5)
where $T_0 = 353\,\mathrm{K}$.
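For illustration only, Eqs. (3)-(4) can be evaluated as in the following minimal Python sketch (the exchange current densities, transfer coefficients and overpotentials below are placeholders, not the values used in this paper):

    # Minimal sketch evaluating the Butler-Volmer source terms (3)-(4) at a
    # given activation overpotential eta = phi_s - phi_m.
    import numpy as np

    F, R = 96485.0, 8.314                      # Faraday and gas constants (SI)

    def butler_volmer(j0, c_ratio, alpha_a, alpha_c, eta, T=353.0):
        return j0 * c_ratio * (np.exp(alpha_a * F * eta / (R * T))
                               - np.exp(-alpha_c * F * eta / (R * T)))

    # Placeholder parameter values, for illustration only:
    S_an = butler_volmer(j0=1.0e4, c_ratio=1.0, alpha_a=0.5, alpha_c=0.5, eta=0.01)
    S_ca = butler_volmer(j0=1.0e-1, c_ratio=1.0, alpha_a=0.5, alpha_c=0.5, eta=-0.3)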
The following momentum transport equation is used:
$\nabla \cdot (\rho u u) = -\nabla p + \nabla \cdot (\mu \nabla u) + S_p$,  (6)
where $S_p$ is the sink source term for porous media in the x, y, and z directions. As the
pressure drops in porous media, Darcy's law is taken into account in the model. The source
term is defined as:
$S_p = -\frac{\mu}{\kappa} u$,  (7)
where $\kappa$ is the permeability.
At the anode and cathode GDLs, no electrochemical reactions occur, only species transfer
and diffusion. Therefore, the Maxwell-Stefan equation is used for the species transfer and
diffusion, defined as:
$\nabla \cdot (\rho \omega_i u) = \nabla \cdot \left[ \rho \omega_i \sum_{j=1}^{N} D_{ij} \left\{ \frac{M}{M_j} \left( \nabla \omega_j + \omega_j \frac{\nabla M}{M} \right) + (x_j - \omega_j) \frac{\nabla p}{p} \right\} \right] + S_s$,  (8)
where  i represents the mass fraction of species i. The source term, Ss, is implemented
based on electrochemical kinetics, i.e., consumption of the reactants to produce a current. It
becomes S s   jca M O2 / 4F and  janM H 2 / 2F in the cathodic and the anodic GDLs,
respectively, except for water vapor. Water vapor is S s   jcaM H2O / 2F  Svl and
S s  jca (1  2 )M H2O / 2F  Svl for the anode and cathode, respectively, and Sv l represents
the condensation and evaporation of the water. Its expression is represented by [8]:
M H 2O
 M H 2O

S vl   
( x H 2O p  psat ) f ( S )  E
S ( x H 2O p  psat )(1  f ( S ))
RT
RT

,
(9)
where  is the condensation constant, E is the evaporation constant, and f(S) is the switch
function, which is dependent on the relative humidity inside the GDL and catalyst layer. Here,
f(S) is expressed as:
$f(S) = \begin{cases} 0, & x_{H_2O} p / p_{sat} \le 1 \\ 1, & x_{H_2O} p / p_{sat} > 1 \end{cases}$  (10)
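As an illustration of how Eqs. (9)-(10) act together, the following minimal Python sketch evaluates the phase-change source (the condensation and evaporation constants are placeholders, not the model's calibrated values):

    # Minimal sketch of the switch function (10) and the condensation/
    # evaporation source (9); kappa and E are placeholder constants.
    M_H2O, R = 0.018, 8.314                    # kg/mol, J/(mol K)

    def S_vl(x_h2o, p, p_sat, s, T, kappa=100.0, E=100.0):
        f = 1.0 if x_h2o * p / p_sat > 1.0 else 0.0      # switch, Eq. (10)
        cond = kappa * M_H2O / (R * T) * (x_h2o * p - p_sat) * f
        evap = E * M_H2O / (R * T) * s * (x_h2o * p - p_sat) * (1.0 - f)
        return cond + evap                                # Eq. (9)

    # Supersaturated state -> the condensation branch is active:
    print(S_vl(x_h2o=0.4, p=1.1e5, p_sat=0.4e5, s=0.1, T=353.0))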
Fig. 1 Polarization curves of 3 different channel/rib configurations
Fig. 2 Schematic illustration of channel/rib velocity in the 3D PEMFC model
3. Results and Discussions
A series of simulations were conducted from high to low cell-operating voltages. The
geometrical and physical parameters for the simulation are shown in Table 1. The polarization
curves obtained for the three land ratio variation cases are shown in Fig. 1. The result clearly
shows that the land ratio and inlet relative humidity affect PEMFC performance. To prove the
validity of the hypothesis of this study, the experimental result reported by Ferng [9] is also
presented in the figure. The comparison of our simulation results with the experimental
findings clearly shows that the present model represents the polarization characteristics of
PEMFC well. As shown in Fig. 1, Case #3, which has the narrowest rib, shows the best
performance, whereas Case #1, which has the widest rib, shows the worst
performance. The figure also reveals that lower relative humidity can increase fuel cell
performance for the same land ratio case. Given that current density distribution is dependent
on the oxygen or hydrogen mole fraction, a thorough pressure and velocity distribution study
is necessary for the design and control of PEMFC to increase fuel cell performance.
3.1 Velocity Distribution Effect
Fig. 2 shows the velocity field distribution for Case #1 on y-z planes.
Table 1. Geometry details (in millimeters)
Model     Total volume (W x H x L)   Rib    Channel   Ratio (rib : channel)
Case #1   10 x 4.5 x 20              2      2         1 : 1
Case #2   8 x 4.5 x 20               1      2         0.5 : 1
Case #3   7 x 4.5 x 20               0.5    2         0.25 : 1
Fig. 3 Comparison of anode velocity in GDL
Fig. 4 Comparison of cathode velocity in GDL
Fig. 3 and Fig. 4 show the anode and cathode velocity distributions at the three positions
x = 5, 10, and 15 mm, respectively. The figures reveal that the velocity distribution does not
change much along the x-axis, but shows large variation along the y-axis, i.e., across the
channel. This is because there is a cross-over effect along the channel and rib, and this
phenomenon can lead to a change in fuel cell performance. The velocity is faster near the
rib than near the channel. By comparing Cases #1-#3, it can be found that the narrower rib
width gives the better fuel cell performance. Fig. 5 and Fig. 6 show the velocity
distribution at the x = 10 mm position.
Fig. 5 Comparison of anode velocity in GDL
Fig. 6 Comparison of cathode velocity in GDL
Fig. 7 Gas velocity through the GDL (cases #1-#3)
Fig. 8 Schematic illustration of anode and cathode pressure in the 3D PEMFC model
The maximum velocity can be found at the boundary
between channel and rib, while it shows minimum values with little variation under the rib.
This figure reveals that even if the rib width ratio varies, the velocity distribution under the
channel is minimal. Fig. 7 shows the detailed velocity distribution vectors for Cases #1-#3
at the x = 10 mm position. For the wider rib case (#1), the cross-over velocity vector plays a
dominant role in the diffusion of the species, while for the narrower rib case, little species
cross-over occurs. This phenomenon can be expressed through the relative depth variation
between the fuel cell models of Cases #1-#3. As the depth/width ratio under the channel is
largest for the model of Case #3, it is difficult for the species to cross to the next channel
area; therefore, the dominant velocity component is in the vertical direction. We can also
find that a larger velocity is obtained in the cathode than in the anode area; therefore, the
cathode electrochemical reaction occurs more actively than the anode one.
Fig. 9 Comparison of anode pressure in GDL
Fig. 10 Comparison of cathode pressure in GDL
3.2 Pressure Distribution Effect
The analysis is performed with 1.1 atm inlet pressure for both cathode and anode. Fig. 8
shows the 3-dimensional pressure distribution at x=5, 10, 15 mm, respectively. Also Fig. 9
shows the anode pressure distribution at x = 10 mm for the three cases. The figure reveals
that the hydrogen pressure drop occurs near the electrode area. A larger pressure drop occurs
at higher current density than at lower current density, because a larger current density
requires more hydrogen. In the figure, the largest pressure variation and pressure drop are
found for the wider electrode case rather than the narrower one. This indicates that the
narrower electrode fuel cell shows little pressure fluctuation; therefore, better current density
can be obtained. Fig. 10 shows the pressure distribution at the cathode for the three cases.
The pressure distribution shows a similar tendency to the anode case.
4. Conclusion
In this paper, we performed a numerical analysis of current density distribution with varying
land ratios of channel and rib. To study the effects of the land ratio variation on PEMFC
performance, three ratio cases, specifically, 1:1, 0.5:1, and 0.25:1, are considered. Velocity
distribution and pressure distribution have been selected as the main variables that distinguish
fuel cell performance among the three land ratio cases. By comparing the land ratios, the
following conclusions are drawn:
1) A narrower rib gives the best performance, whereas a wider rib produces poor
performance.
2) A wider rib generates higher horizontal velocity distribution both in the anode and cathode,
which results in lower fuel cell performance.
3) For the effect of pressure distribution, the narrower rib cell shows lower pressure drop and
fluctuation, resulting in higher cell performance and efficiency.
5. References
[1] Wang XD, Duan YY, Yan WM, Weng FB, Effect of humidity on the cell performance
of PEM fuel cells with parallel and interdigitated flow field designs, Journal of Power
Sources, 2008, 176:247-258.
[2] Cordiner S, Lanzani SP, Mulone V, 3D effects of water-saturation distribution on
polymeric electrolyte fuel cell (PEFC) performance, International Journal of Hydrogen
Energy 2011, 36:10366-10375.
[3] Kumar A, Reddy RG, Effect of channel designs and shape in the flow-field distributor
on the performance of polymer electrolyte membrane fuel cells, Journal of Power
Sources 2003, 113:11-18.
[4] Manso AP, Marzo FF, Mujika MG, Barranco LA, Numerical analysis on the influence
of the channel cross section aspect ratio on the performance of a PEM fuel cell with
serpentine flow design, International Journal of Hydrogen Energy 2011, 36:6795-6808.
[5] Ahmed DH, Sung JJ, Effects of channel geometrical configuration and shoulder width
on PEMFC performance at high current density, Journal of Power Sources 2006,
162:327-339.
[6] Gurau V, Liu H, Kakac S, Two-dimensional model for proton exchange membrane fuel
cells, AICHE Journal 1998, 44:2410-2422.
[7] Springer TE, Zawodzinski TA, Gottesfeld S, Polymer electrolyte fuel cell model, Journal
of the Electrochemical Society 1991, 138:2334-2342.
[8] Natarajan D, Nguyen TV, A two-dimensional, two-phase, multicomponent, transient
model for the cathode of a proton exchange membrane fuel cell, Journal of the
Electrochemical Society 2001, 148:A1324-A1335.
[9] Ferng YM, Su A, A three-dimensional full-cell CFD model used to investigate the
effects of different flow channel designs on PEMFC performance, International
Journal of Hydrogen Energy 2007, 32:4466-4477.
[10] YANG WJ, KANG SJ, Kim YB, Numerical investigation on the performance of proton
exchange membrane fuel cells with channel position variation, International Journal of
Energy Research 2012, 36:1051-1064.
[11] YANG WJ, WANG HY, Kim YB, Effects of the humidity and the land ratio of channel
and rib in the serpentine three-dimensional PEMFC model, International Journal of
Energy Research 2012, DOI: 10.1002/er.2935
372
Application of Numerical Optimization Techniques to shaft design
Abdurahman M. Hassen a,*; Neffati M. Werfalli b; Abdulaziz Y. Hassan c
a,b,c Mechanical & Industrial Engineering Dept., Faculty of Engineering
University of Tripoli, P. O. Box 9022, Tripoli, Libya
abeda@tripoliuniv.edu.ly
Abstract
The augmented Lagrangian method provides a strategy to handle equality and inequality
constraints by introducing the augmented Lagrangian function; the aim is then to find a
minimum of this function. In this study, the augmented Lagrangian function was introduced
as an unconstrained function, and the steepest-descent method was used as the search
sub-problem for the optimization. This method is the most basic one for computing a search
direction. It simply uses the gradient of the function at the current iteration, which points
towards increasing values of the function. Defining the negative gradient as the search
direction results in a safe descent direction. The shaft weight is considered as the objective
function. The von Mises static failure criterion and the modified Goodman fatigue criterion
are considered as constraints. The design variable is the shaft diameter. The finite element
method (Galerkin weighted residual) is implemented for the stress analysis. The obtained
results were satisfactory and were achieved in a few iterations; the shaft weight is reduced
without constraint violation.
Keywords: Structural optimization, Augmented Lagrangian method, shaft design.
1. Introduction
In practice, engineers are always looking for a simple, creative method to design structures.
Although the steps of shaft design are known to engineers, the result is often excessive
weight due to the conservatism of these design methods. Structural synthesis has played a
major role in minimizing the weight of structures since Schmit's report [1]; that work opened
a wide door for researchers in this field, and tens of methods have been adopted for this
purpose, some complicated, others straightforward. In this study, the augmented Lagrangian
method has been used for solving the constrained objective function by converting it to an
unconstrained function using the sequential unconstrained minimization technique (SUMT),
with the steepest-descent method and a one-dimensional search for obtaining the optimum.
In engineering, shafts are widely used as rotating members, usually of circular cross
section, to transmit power or motion. Some stresses play a more effective role in failure
than others for a specific loading situation and may cause different types of failure, such as
static, fatigue, creep, or dynamic failure. All of these constraints and design limits of shafts
also play effective roles in shaft design optimization. The shaft design optimization problem
developed here considers static and fatigue failure as constraints, which dominate various
particular situations, and the shaft weight as the objective function.
2. Design optimization problem statement
The geometry of the shaft has to be designed to withstand failure due to static or steady
loading and fatigue loading. A static loading can produce axial tension or compression, a
shear load, a bending load, a torsion load, or any combination of these.
The stress at an element located on the surface of a solid round shaft of diameter d subjected
to bending moment M, axial load F, and torsion load T [2] is
$\sigma_x = \frac{32M}{\pi d^3} \pm \frac{4F}{\pi d^2}$, $\quad \tau_{xy} = \frac{16T}{\pi d^3}$,  (1)
where the axial component of the normal stress $\sigma_x$ may be additive or subtractive. By
using the von Mises failure criterion [2], the von Mises stress can be expressed as
$\sigma' = \frac{4}{\pi d^3} \left[ (8M + Fd)^2 + 48T^2 \right]^{1/2}$.  (2)
When an allowable value of $\sigma'$ is given in terms of the yield strength $S_y$ of a material
and a design factor $n_d$, the von Mises theory of ductile failure gives the allowable stress as
$\sigma'_{all} = \frac{S_y}{n_d}$.  (3)
The behavior of the shaft is entirely different when it is subjected to varying loading, so the
shaft must be examined under different loadings and the stresses limited to certain values to
resist such conditions. The modified Goodman fatigue failure criterion is used and is given as [2]:
$\frac{S_a}{S_e} + \frac{S_m}{S_{ut}} = 1$  (4)
Equation (4) displays the relationship among stress, strength, and the design factor of safety.
When $n\sigma_a$ is substituted for $S_a$ and $n\sigma_m$ for $S_m$, the failure equation is given as
nf a
Se

nf m
Sut
1
(5)
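For illustration only, Eqs. (1)-(5) can be evaluated numerically as in the following minimal Python sketch (the loads and strength values below are placeholders, not those of the examples in Section 3):

    # Minimal sketch of Eqs. (1)-(5): von Mises stress for a solid round shaft
    # and a modified-Goodman fatigue check; inputs in SI units.
    import numpy as np

    def von_mises(d, M, F, T):
        # Eq. (2): sigma' = 4/(pi d^3) * sqrt((8M + F d)^2 + 48 T^2)
        return 4.0 / (np.pi * d**3) * np.sqrt((8 * M + F * d) ** 2 + 48 * T**2)

    def goodman_lhs(sig_a, sig_m, Se, Sut, n=1.0):
        # Eq. (5): n*sigma_a/Se + n*sigma_m/Sut <= 1 for a safe design
        return n * sig_a / Se + n * sig_m / Sut

    d = 0.015                                   # 15 mm shaft diameter
    sig = von_mises(d, M=20.0, F=202.0, T=1.0)  # illustrative loads
    print("von Mises stress: %.1f MPa" % (sig / 1e6))
    print("Goodman LHS: %.2f" % goodman_lhs(60e6, 10e6, Se=200e6, Sut=658e6))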
The concept of optimization is the process of obtaining the best possible result under the
circumstances. The result is measured in terms of an objective which is minimized or
maximized. The circumstances are defined by a set of equality and/or inequality constraints.
Various methods can be used to achieve the design goal. Numerical optimization techniques
offer a logical approach to design automation. The general statement of the nonlinear
constrained optimization problem can be written mathematically as follows [3], [4], [5]:
Minimize the objective function:
$F(X)$  (6)
subject to:
$g_j(X) \le 0$, $j = 1, m$  (inequality constraints)  (7)
$h_k(X) = 0$, $k = 1, l$  (equality constraints)  (8)
$X_l \le X \le X_u$  (side constraints)  (9)
where the design variables are $X = [X_1\ X_2\ X_3\ \ldots\ X_n]^T$.
The vector X is referred to as the vector of design variables. Equation (9) defines bounds on
the design variables X and so is referred to as a side constraint.
The general approach for minimizing the constrained function is to minimize the objective
function as an unconstrained function but to provide some penalty to limit constraint
violations. This penalty is increased as the minimization progresses. This requires the
solution of several unconstrained minimization problems to obtain the optimum constrained
design; this is referred to as the sequential unconstrained minimization technique (SUMT)
[4], [5], [6].
The classical approach to using SUMT is to create a pseudo-objective function of the form
$\Phi(X, r_p) = F(X) + r_p P(X)$  (10)
where F(X) is the original objective function and P(X) is an imposed penalty function, the
form of which depends on the SUMT being employed. The scalar $r_p$ is a multiplier which
determines the magnitude of the penalty and is held constant for a complete unconstrained
minimization. One of the powerful methods for solving the constrained function is the
Augmented Lagrange Multiplier (ALM) method, which can be presented for the equality-
and inequality-constrained problem as:
$A(X, \lambda, r_p) = F(X) + \sum_{j=1}^{m} \left[ \lambda_j \psi_j + r_p \psi_j^2 \right] + \sum_{k=1}^{l} \left[ \lambda_{k+m} h_k + r_p [h_k(X)]^2 \right]$  (11)
where
$\psi_j = \max\left[ g_j(X),\; \frac{-\lambda_j}{2 r_p} \right]$  (12)
The update formulas for the Lagrangian multipliers are
$\lambda_j^{p+1} = \lambda_j^p + 2 r_p \max\left[ g_j(X),\; \frac{-\lambda_j^p}{2 r_p} \right]$, $j = 1, m$  (13)
$\lambda_{k+m}^{p+1} = \lambda_{k+m}^p + 2 r_p h_k(X^p)$, $k = 1, l$  (14)
The objective function of the shaft design optimization in this analysis is to minimize the
weight of the shaft under specific loading conditions that may cause failure. The design is
developed in the domain of static failure analysis and then fatigue failure analysis, so the
constraints are the effective (von Mises) stress as the static failure constraint function and
the modified Goodman fatigue failure criterion as the dynamic constraint function, which
dominate various particular situations.
The objective (weight) and constraint functions are given by
$F(x) = \rho V$  (15)
subject to
$g_1(x) = \frac{n \sigma_{eff}}{S_y} - 1 \le 0$  (16)
$g_2(x) = \frac{n \sigma_{fa}}{S_e} + \frac{n \sigma_{fm}}{S_{ut}} - 1 \le 0$  (17)
Substituting these equations into Eq. (11) gives
$A(X, \lambda, r_p) = F(X) + \lambda_1 \psi_1 + r_p \psi_1^2 + \lambda_2 \psi_2 + r_p \psi_2^2$  (18)
where $r_p$ is a scalar parameter and $\lambda$ is the Lagrangian multiplier, which convert the
constrained optimization to an unconstrained one. For unconstrained minimization, the
optimum value occurs when the gradient of the augmented function vanishes,
$\nabla A(X, \lambda, r_p) = 0$; for a new design point, a new X vector is picked by:
$X^1 = X^0 - \alpha \nabla F(X^0)$  (19)
Picking several values for the scalar parameter α and evaluating F(X) = F(α) for each
resulting X vector, some value α* will provide a minimum of F(X) along this search
direction. The gradient of F(X) is used to limit the search to a specific direction, rather than
randomly searching the entire space. The golden section method is used as the
one-dimensional search for obtaining the solution.
A critical part of the overall optimization process is determining when to stop the search for
the optimum. The termination criteria chosen can have a major effect on the efficiency and
reliability of the optimization process. The termination criteria used here are a check on the
progress of the optimization (the absolute or relative change in the objective function) and
the Kuhn-Tucker necessary conditions for optimality [6].
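To make the ALM/SUMT loop concrete, the following minimal Python sketch (an illustration of Eqs. (11)-(13) with steepest descent and a golden-section line search, not the Fortran implementation of this paper) solves a small stand-in problem, minimizing x0^2 + x1^2 subject to g(x) = 1 - x0 - x1 <= 0:

    # Minimal ALM/SUMT sketch: steepest descent on the augmented function with
    # a golden-section line search; problem and constants are illustrative.
    import numpy as np
    from scipy.optimize import golden

    def f(x):  return x[0]**2 + x[1]**2
    def g(x):  return 1.0 - x[0] - x[1]

    def A(x, lam, rp):                          # Eq. (11), one inequality
        psi = max(g(x), -lam / (2 * rp))        # Eq. (12)
        return f(x) + lam * psi + rp * psi**2

    def grad(fun, x, h=1e-6):                   # forward-difference gradient
        gr = np.zeros_like(x)
        for i in range(x.size):
            xp = x.copy(); xp[i] += h
            gr[i] = (fun(xp) - fun(x)) / h
        return gr

    x, lam, rp = np.array([2.0, 2.0]), 0.0, 1.0
    for _ in range(20):                          # SUMT outer loop
        for _ in range(50):                      # steepest-descent inner loop
            d = -grad(lambda z: A(z, lam, rp), x)
            if np.linalg.norm(d) < 1e-6:
                break
            alpha = golden(lambda a: A(x + a * d, lam, rp), brack=(0.0, 1e-3))
            x = x + alpha * d
        lam = lam + 2 * rp * max(g(x), -lam / (2 * rp))   # Eq. (13)
        rp *= 1.5
    print(x)                                     # tends towards (0.5, 0.5)

The same loop structure applies to the shaft problem, with f(x) replaced by the weight of Eq. (15) and g(x) by the constraints of Eqs. (16)-(17) evaluated from the finite element stress analysis.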
3. Numerical Example
The mathematical model of the shaft design optimization problem built in the previous
sections is used here with three arbitrary examples of shaft design to find the optimum
weight. In each case, the gradient of the augmented function is used as the search direction
and the golden section method is used for the one-dimensional search.
3.1 Example 1: Pedal Shaft
This example, fig.(1), Pure's human-powered water still [7], has the following input data:
modulus of elasticity of 207 GPa, yield strength of 350 MPa, ultimate tensile strength of
658 MPa, density of 7650 kg/m3, flywheel mass of 1.76 kg, uniform shaft diameter of 15 mm,
xy-plane force of 202 N, xz-plane force of 202 N and shaft torque of 1.0 Nm. This geometry
and these boundary conditions are used to redesign the shaft and then optimize its dimensions.
Figure (1) pedal shaft.
The shaft is divided into eight elements. The diameters obtained from the analysis are taken
as the initial design of the optimization process, inserting this design and the boundary
conditions into the optimization algorithm. The iteration history of the method shows that
the design started for the outside diameter with a shaft weight of 0.363 kg, which was then
reduced to 0.229 kg, saving about 36.9% of the shaft weight in a few iterations; for the
inside diameter, the weight was reduced from 0.229 kg to 0.21462 kg, an extra saving of
about 6.28%. The shaft then needs to be fitted to its place, fig.(2), and its rigidity checked,
fig.(3). The optimization process showed a valuable reduction of shaft weight, as shown in
table (1).
Figure (2) optimized shaft dimensions after fitting.
Figure (3) shaft rigidity check after fitting
Table (1) comparison of initial and optimum shaft weight.
Example 1 (Pedal Shaft)
Cases        weight (kg)
             Our work (Fortran)   Cosmos    Nastran
Initial      0.363                0.363     0.363
Optimum      0.229                0.233     0.1973
Reduction    36.9%                35.8%     45.6%
Variation*   -                    1.1%      -8.7%
Iteration    5                    -         2
*variation of the results of our work (Fortran) and the other two commercial programs.
3.2 Example 2: Rail’s Car Shaft
Figure (4) the section view of loading on the shaft.
The design is an industrial car shaft running on rails [8]. A load of 10 kN is applied on each
wheel and an average torque of 100 Nm is applied on the shaft; the free body diagram of the
shaft is depicted in fig.(4).
The shaft has the following input data: modulus of elasticity of 207 GPa, yield strength of
350 MPa, ultimate tensile strength of 658 MPa, material density of 7650 kg/m3.
The shaft has the following dimensions: D = 58.0 mm, D1 = 1.2D and D2 = 0.8D.
The shaft is divided into ten elements. The iteration history of the method shows that the
design started for the outside diameter with a shaft weight of 17.366 kg, which was then
reduced to 10.967 kg, saving about 36.85% of the shaft weight in a few iterations; for the
inside diameter, the weight was reduced from 10.967 kg to 9.241 kg, a further saving of
about 15.74% of the shaft weight. The shaft was fitted, fig.(5), and its rigidity checked,
fig.(6). The optimization process showed a valuable reduction of shaft weight, as tabulated
in table (2).
Figure (5) optimized shaft dimensions after fitting.
Figure (6) shaft rigidity check after fitting
Table (2) comparison of initial and optimum shaft weight.
Example 2 (Rail’s car shaft)
Cases        weight (kg)
             Our work (Fortran)   Cosmos    Nastran
Initial      17.366               17.366    17.366
Optimum      10.967               11.814    10.513
Reduction    36.85%               31.97%    39.46%
Variation*   -                    4.88%     -7.49%
Iteration    9                    -         2
*variation of the results of our work (Fortran) and the other two commercial programs.
3.3 Example 3: Sprocket Shaft
The problem is the design of a sprocket shaft with minimal bending deflection. The
actual design input data are listed in fig.(7).
Figure (7) the section view of the shaft.
The shaft material is 1045 steel, cold drawn, subjected to the following loads: T1 = 1000 Nm,
T2 = 1000 Nm, F1 = 7400 N, F2 = 10800 N.
The shaft is divided into six elements. The iteration history of the method shows that the
design started for the outside diameter with a shaft weight of 8.31904 kg, which was reduced
to 5.3661 kg, saving about 35.5% of the shaft weight in a few iterations; for the inside
diameter, the weight was reduced from 5.3661 kg to 5.1349 kg, an extra saving of about
4.5%. The shaft then needs to be fitted, as shown in fig.(8), and its rigidity checked, fig.(9).
The optimization process showed a valuable reduction of shaft weight, as shown in table (3).
Figure (8) optimized shaft dimensions after fitting.
Figure (9) shaft rigidity check after fitting
Table (3) comparison of initial and optimum shaft weight.

Example 3 (Pinion Shaft)
Cases (weight, kg)    Our work (Fortran)    Cosmos    Nastran
Initial               8.31904               8.31904   8.31904
Optimum               5.3661                5.3269    4.875
Reduction             35.5%                 35.97%    41.399%
Variation*            -                     0.45%     5.899%
Iteration             5                     -         2

*Variation of the results of our work (Fortran) from the other two commercial programs.
4. Conclusions
The obtained results show that the shaft weights in the three examples are reduced
without constraint violations. Clearly, the initial designs are conservative, and the use of
optimization techniques can improve the design and save material. The Augmented Lagrangian
method combined with the Golden Section method, a sequential unconstrained minimization
technique, is efficient, reliable and gives comparable results.
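As an illustration of the sequential unconstrained minimization loop named above, the following Python sketch penalizes a single-diameter solid-shaft weight with one bending-stress constraint via an augmented Lagrangian and minimizes each subproblem by golden-section search. All numerical data here (length, bending moment, bounds) are illustrative placeholders, not the paper's eight-element model.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0  # golden-ratio factor, ~0.618
    c, d = b - phi * (b - a), a + phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)

# Illustrative data (not the paper's): steel density, shaft length,
# allowable stress and a bending moment. Design variable: diameter D (m).
rho, L_sh, sigma_allow, M_bend = 7650.0, 0.3, 658e6, 30.0

def weight(D):
    return rho * L_sh * math.pi * D**2 / 4.0

def stress_violation(D):
    sigma = 32.0 * M_bend / (math.pi * D**3)  # bending stress of a solid shaft
    return max(0.0, sigma - sigma_allow)

def augmented_lagrangian(D, lam, r):
    g = stress_violation(D)
    return weight(D) + lam * g + 0.5 * r * g**2

# Sequential unconstrained minimization: update multiplier and penalty each pass.
lam, r = 0.0, 1e-6
for _ in range(10):
    D_opt = golden_section(lambda D: augmented_lagrangian(D, lam, r), 0.005, 0.05)
    lam += r * stress_violation(D_opt)
    r *= 10.0
print(f"optimum diameter ~ {D_opt*1000:.2f} mm, weight ~ {weight(D_opt):.3f} kg")
```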
5. References
[1] L. Schmit and H. Miura, "Approximation concepts for efficient structural synthesis", NASA CR-2552, 1976.
[2] Joseph E. Shigley, Charles R. Mischke and Richard G. Budynas, Mechanical Engineering Design, Seventh Edition, McGraw-Hill, 2003.
[3] G. Beveridge and R. Schechter, Optimization: Theory and Practice, McGraw-Hill Kogakusha, Ltd.
[4] Hojjat Adeli, Advances in Design Optimization, First Edition, Chapman & Hall, 1994.
[5] Manohar P. Kamat, Structural Optimization: Status and Promise, American Institute of Aeronautics and Astronautics, Inc., 1993.
[6] Garret N. Vanderplaats, Numerical Optimization Techniques for Engineering Design, McGraw-Hill, 1984.
[7] Kyle Becker, Shaft Design and Flywheel Dimensioning, 2006.
[8] Semih Uğurluel, Design of an Industrial Railway Car Shaft, University of Gaziantep, Department of Mechanical Engineering, 2005.
392
Development of a Single-Wheel Test Rig for Traction Dynamics
and Control of Electric Vehicles
Apirath Kraithaisria, Suwat Kuntanapreedab, Saiprasit Koetniyomc
King Mongkut’s University of Technology North Bangkok,
Bangsue, Bangkok, Thailand
a a-kraithaisri@hotmail.com, b suwat@kmutnb.ac.th, c saps@kmutnb.ac.th
Abstract
Traction is the vehicular propulsive force produced by the friction between rolling wheels
and road surfaces. Since the characteristic of the friction is highly nonlinear, traction
dynamics and control are complex. Additionally, the traction control plays an important role
in vehicle motion because it can directly enhance drive efficiency, safety, and stability. In this
paper, a single-wheel test rig is designed and developed to mimic longitudinal dynamics of
electric vehicles. The main components of the test rig include a drum set, a wheel set and a
computer-based measurement/control unit. The role of the drum set is to simulate the wheel
moving on the road. The test rig can be utilized, for instance, to study dynamic responses of
the traction, to investigate some slip characteristics of road/tire, and to test traction control
algorithms. According to conducted experiments, it was found that the developed test rig is
an effective experimental platform for studying vehicle traction dynamics and control.
Keyword: Vehicle traction, Traction control, Electric vehicle, Experimental.
1. Introduction
Electric vehicles (EVs) and hybrid electric vehicles (HEVs) have become very attractive for
replacing conventional internal combustion engine vehicles because of environmental and
energy issues. Understanding of traction dynamics and control is essential for improving
vehicle performance. In particular, traction control plays an important role in the vehicle
motion control because it can directly enhance drive efficiency, safety, and stability. Traction
is the vehicular force produced by the friction between rolling wheels and road surfaces. The
characteristic of the friction is very nonlinear, which makes traction dynamics and control
complicated. An objective of the traction control is to operate vehicles such that a desired
condition is obtained. Since main parts of the propulsion system of EVs are electric motors,
which can produce very quick and precise torques compared to internal combustion engines,
the traction control of EVs has attracted attention of many researchers (e.g., see [1-4]). In [1],
the longitudinal traction control of EVs using a fuzzy controller and a sliding-mode controller
was presented. Experimental tests with a test rig proved the controllers have a good response
without any knowledge of the road characteristics. In [2], the traction control based on the
maximum transmission torque estimation was proposed. The estimation is carried out by an
open-loop disturbance observer. Experimental results using an experimental EV illustrated the
effectiveness and practicality of the proposed control design. The work was extended in [3]
by replacing the open-loop observer with a PI-observer. By doing this, the robustness of the
control system is enhanced. In [4], the traction control of EVs using a sliding-mode observer
to estimate the maximum friction was proposed. The controller uses this estimated maximum
friction to find the suited maximum torque for the wheels.
Test rigs are preferable tools in engineering research and development because of their time
and cost effectiveness. Vehicle-related studies usually require some kind of test rig, due
not only to time and cost issues but also to safety reasons. In [5], a dynamic test rig was
developed for tire testing. The test rig consists of a large drum that is powered by an electric
DC motor and is capable of a maximum test velocity of 160 km/h. The test rig guides the tire
to be tested on the drum, while characteristic values such as slip angle, camber angle, wheel
load and tire inflation pressure can be adjusted individually. In [6], experimental data
obtained from a tire test rig was used to develop a mathematical tire model. The model was
implemented in a multi-body software for motorcycle dynamics simulations. The
experimental test shows a good correlation between experiments and simulations. In [7], a
quarter car test rig consisting of an active shock absorber was developed. The test rig
simulates the vertical dynamics of a single corner of a car. It was utilized to design a robust
linear controller to attenuate the body acceleration in such a way that both the comfort and
handling characteristics are improved. In [8], a powertrain test rig was designed to simulate a
vehicle mass of 1500 kg via a flywheel system. The test rig includes all the components of
the vehicle powertrain. It was used to study impulsive responses in automatic transmission
systems with multiple clearances under transient excitations.
In this paper, a test rig for traction dynamics and control studies of small-size electric
vehicles was designed and developed. The developed test rig was used for studying dynamic
traction responses, investigating the road/tire slip phenomenon, and experimenting with traction
control algorithms.
2. Single-Wheel Test Rig
The developed test rig consists of a drum set, a wheel set and a measurement/control unit. A
general view of the test rig is shown in Fig. 1.
Fig. 1 General view of test rig
2.1 Drum Set
The role of the drum set is to simulate the wheel moving on the road. The drum set consists
of a large drum and its support (see Fig. 2). The drum was made of a 0.008-m steel sheet
wrapped around two parallel 1-m-diameter disks spaced 0.28 m apart. The centers of the disks
were joined together by a steel shaft of 0.01-m diameter and 0.5-m length. After all parts were
welded together, the drum was rounded on a lathe, and then it was balanced. The finished
diameter and width of the drum are approximately 1.0 and 0.3 meters, respectively. This
dimension of the drum is suited for any tire with a rim diameter between 4" and 6", which is a
common rim diameter for small-size electric vehicles. The shaft was also installed with a
brake disk and an ABS sensor ring for rotational speed measurement. A corresponding ABS
speed sensor was installed on the frame of the support. Last of all, the drum was placed on
the support through four bearings, which allow it to rotate freely. The rotational inertia of the
drum was found to be approximately 26.76 kg-m2. Note the only force driving the drum is the
traction from the wheel.
Fig. 2 Drum set
2.2 Wheel Set
The wheel set consists of a wheel, a brushed DC motor, and loading masses (see Fig. 3). All
components were mounted on a rigid support plate. The plate is attached to the frame of the test
rig through a rotational joint, which allows the plate to swing up and down. The wheel was
directly driven by the motor through a rigid shaft. The shaft was also installed with a brake
disk and another ABS sensor ring. A corresponding ABS speed sensor was mounted on the
plate to measure the rotational speed of the wheel. The capacity of the motor is 500 watts;
however it can be changed if needed. The motor was driven by a PWM motor drive which is
commanded through an analog voltage signal. The current drawn by the motor was sensed by
a hall-effect current sensor. Note that the value of the drawn current can be used to determine
the torque driving the wheel. According to a calibration process, the nominal relation
between the current ( i ) and the torque ( T ) was found as
T  0.0687  i .
The loading masses were supported by a set of spring-damper representing a suspension
mechanism. The masses kept the mounting plate down such that the tire always touches the
drum. The number of masses determines the normal force at the tire/drum contact. The
relation between the number of masses (n) and the normal force (N) is given as
N = 2.3 + 0.4 · n,
where n = 1, 2, 3, ...
Fig. 3 Wheel set
2.3 Measurement/Control Unit
The measurement/control unit consists of a signal-conditioning circuit and a PC. The
computer was installed with a 12-bit analog/digital interface board. The circuit, which is a
microcontroller-based circuit, is for the ABS speed sensors: it converts their frequency signals
to analog signals. Note that the current sensor signal is already analog. The
measurement/control unit can be used as a data acquisition logger and a controller. All
signals from the sensors were fed to the computer through the interface board. In control
experiments, the control law was implemented digitally in the computer with the sampling
period of 0.5 sec. The control signal from the computer was converted to an analog signal by
the interface board, and transmitted to the motor drive, closing the control loop.
3. Experimental Results
To illustrate the feasibility and effectiveness of the developed test rig, three experiments were
conducted: Steady-state characterization, Dynamic responses, and Closed-loop control.
3.1 Steady-state characterization
In this first experiment, various voltage commands were issued to the motor through the
motor drive to rotate the wheel. Three different numbers of masses (n = 4, 6 and 8) were set
for the experiment. After the speeds of the wheel and the drum reached steady state, the speed
signals were recorded. The slip ratio can be directly computed from these signals. The results
are shown in Fig. 4. It was observed that the slip ratio first reached the value of 0.25 at a
wheel speed of about 110 rad/sec. Note that the adhesive coefficient usually reaches its
maximum value at a slip ratio of around 0.2-0.3.
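The slip-ratio computation from the two ABS speed signals can be sketched as follows; the driving-slip definition and the tire radius are assumptions, since the paper states only the 4"-6" rim range and the 1-m drum diameter.

```python
def slip_ratio(omega_wheel, omega_drum, r_wheel=0.07, r_drum=0.5):
    """Longitudinal slip during driving, lambda = (v_w - v_d) / v_w, where
    v_w = r_wheel * omega_wheel is the wheel circumferential speed and
    v_d = r_drum * omega_drum is the drum surface (road) speed. The slip
    definition and r_wheel are assumptions; r_drum follows the 1-m drum."""
    v_w = r_wheel * omega_wheel
    v_d = r_drum * omega_drum
    return (v_w - v_d) / v_w if v_w > 0.0 else 0.0
```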
Fig. 4 Slip ratio versus wheel speed
3.2 Dynamic responses
In this experiment, the wheel was first operated at a steady state condition with the constant
voltage command of 1.70 V. Then, at time = 50 sec., a command step change (dU) was given
to the motor drive. The speed of the wheel was recorded in real-time. Without loss of
generality, the number of loading masses was fixed at eight. As shown in Fig. 5, slip
occurred in the case dU = 0.20 V, starting at time = 130 sec and resulting in the
oscillation of the speed. In addition, for the cases dU = 0.05 and 0.10 V where there is no slip,
the dynamic time-constant of the wheel is approximately 50 sec.
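The quoted time constant can be extracted programmatically from a recorded step response; a minimal sketch assuming sampled arrays and the standard 63.2% rise criterion for a first-order system.

```python
import numpy as np

def first_order_time_constant(t, speed):
    """Estimate the time constant of a step response as the time to reach
    63.2% of the total speed change, the standard first-order criterion."""
    rise = speed - speed[0]
    target = 0.632 * rise[-1]
    idx = np.argmax(rise >= target)  # first sample at/above 63.2% of the change
    return t[idx] - t[0]
```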
Fig. 5 Step responses
3.3 Closed-loop control
In this experiment, a conventional PI controller was utilized to control the wheel to operate at
the desired wheel speed of 50 rad/sec. The parameters of the controller were tuned
experimentally. The values of the proportional gain (kp) = 0.08 and the integral gain (ki) =
0.001 were chosen. As shown in Fig. 6, the controller reduces the dynamic time-constant to
approximately 10 sec. Moreover, when there are some disturbances acting on the drum (e.g.,
at time = 50 and 100 sec), the controller is able to bring the wheel speed back to the desired
speed effectively.
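A minimal sketch of the discrete PI law with the reported gains is given below; the sampling period follows Section 2.3, while the output clamp is a hypothetical drive-voltage range.

```python
class PIController:
    """Discrete PI speed controller with the reported gains kp = 0.08 and
    ki = 0.001; the 0.5 s sampling period is from Section 2.3, and the
    output clamp is an assumed drive-voltage range."""
    def __init__(self, kp=0.08, ki=0.001, ts=0.5, u_min=0.0, u_max=5.0):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts            # rectangular integration
        u = self.kp * error + self.ki * self.integral
        return min(max(u, self.u_min), self.u_max)  # clamp to drive limits

# Hypothetical usage: drive the wheel toward the 50 rad/sec set-point.
pi = PIController()
command_voltage = pi.update(50.0, 42.0)
```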
Fig. 6 Control responses
4. Conclusions
A single-wheel test rig has been designed and developed to study traction dynamics and
control of small-size electric vehicles. The test rig consists of a drum set, a wheel set and a
measurement/control unit. The drum simulated the wheel moving on the road. The wheel was
directly driven by a DC motor. Experimental results illustrated that the developed test rig is
an effective tool for traction studies of electric vehicles.
5. References
[1] V.D. Colli, G. Tomassi, and M. Scarano, "'Single-wheel' longitudinal traction control for electric vehicles", IEEE Trans. Power Electron., vol. 21, no. 3, pp. 799-808, 2006.
[2] D. Yin, S. Oh, and Y. Hori, "A novel traction control for EV based on maximum transmissible torque estimation", IEEE Trans. Ind. Electron., vol. 56, no. 6, pp. 2086-2094, 2009.
[3] J.-S. Hu, D. Yin, and Y. Hori, "Fault-tolerant traction control of electric vehicles", Control Engineering Practice, vol. 19, pp. 204-243, 2011.
[4] G.A. Magallan, C.H.D. Angelo, and G.O. Garcia, "Maximization of the traction forces in a 2WD electric vehicle", IEEE Trans. Veh. Technol., vol. 60, no. 2, pp. 369-380, 2011.
[5] P. Holdmann, P. Köhn, and J. Holtschulze, "Dynamic tyre properties under combined slip situations in test and simulation", Proceedings of the 7th European Automotive Congress, Barcelona, Spain, 20 June - 2 July 1999.
[6] R. Lot, "A motorcycle tire model for dynamic simulations: theoretical and experimental aspects", Meccanica, vol. 39, pp. 207-220, 2004.
[7] C. Lauwerys, J. Swevers, and P. Sas, "Robust linear control of an active suspension on a quarter car test-rig", Control Engineering Practice, vol. 13, pp. 557-586, 2005.
[8] A.R. Crowther, R. Singh, N. Zhang, and C. Chapman, "Impulsive response of an automatic transmission system with multiple clearances: formulation, simulation and experiment", Journal of Sound and Vibration, vol. 306, pp. 444-466, 2007.
411
Material removal rate prediction for blind pocket milling of SS304 using
Abrasive Water Jet Machining Process
T V K Guptaa,*, J Ramkumarb, N S Vyasb , Puneet Tandona
a PDPM Indian Institute of Information Technology Design & Manufacturing, Jabalpur, India; tvkg@iiitdmj.ac.in; ptandon@iiitdmj.ac.in
b Indian Institute of Technology Kanpur, India; jrkumar@iitk.ac.in; vyas@iitk.ac.in
Abstract
A study on the material removal rate (MRR) for pocket milling through Abrasive Water Jet
Machining (AWJM) technique on stainless steel (SS304) material is discussed in the present
work. Investigations are carried out to study the influence of process parameters on the
material removal rate under different experimental conditions. Based on the machining
conditions, it is observed that individual process parameters have minimum influence on the
material removal rate in comparison to that with a combination. On the basis of experimental
observations, it is found that traverse speed (the interaction time between the jet and the work
surface) has major effect on MRR of all the parameters considered. Further to the above
investigations, the MRR has been calculated for standoff distance, abrasive flow rate,
abrasive sizes with varying traverse speeds. Using dimensional analysis technique, a
predictive model for material removal rate has been developed as a function of process
parameters. It has been found that the predicted model is in agreement with the experimental
results, with deviations ranging from 0.2% to 10%.
Keyword: Ts, AFR, SOD, MRR, Mesh
1. Introduction
The AWJM process is an unconventional machining process in which high-pressure water is
converted to very high kinetic energy through a small-diameter orifice. When mixed with
abrasives, this water is accelerated with the generation of high kinetic energy, enhancing its
capability to cut materials. The water jet alone is capable of cutting soft materials like wood,
rubber, plastics, fabrics, etc.; with abrasives, the process has gained importance in cutting
almost any material, including concrete, steel and other metals, and hard materials like
titanium, Inconel, etc. The AWJM process is a collective influence of process parameters such as hydraulic,
pneumatic, abrasive, cutting and mixing parameters. The material removal mechanism in an
abrasive water jet cutting process is essentially a mechanical erosion process [3]. Two kinds
of mechanisms are present in this process: a cutting wear mode, which occurs at low angles of
impact, and a deformation wear mode, which occurs at large angles of particle impact [2].
Modeling of the process through the cutting wear and deformation wear modes has been
carried out by Bitter [3, 4]. Fundamentally, the modeling of the AWJ process is based on the
surface generation technique, in which a new surface is generated after the erosion of each
single particle impact and a different particle then starts eroding the surface, a continuous
process during cutting [5]. Hashish [6] reported that the material removal rate improves by a
factor of three on reducing the jet impingement angle onto the target material as compared to
a conventional 90° impact angle. Ives and Ruff [7] found that extensive grit embedment occurs
at a high impact angle of 90°, while it is much less severe at low impact angles.
Similar tendencies were also observed by Hashish [6] in AWJ machining. In the last decades,
considerable effort has been put in by researchers to understand the process and improve its
cutting performance, such as depth of cut and surface finish, for a variety of process
parameters. With advancements in technological areas like intensifiers, system reliability and
cost reduction, water jet machining has become an established technology in contour cutting
applications. Its potential in turning, milling and drilling has also been demonstrated [8].
Ample literature is available on AWJM for part separation (cutting-through) operations on
various materials ranging from soft to hard, from ceramics to polymers, as specified above.
Milling with abrasive water jets was first attempted by Hashish [9] as a preliminary
investigation to study the process. Laurinat [10] developed a model for material removal rate
based on the kerf profiles generated during milling of hardened steels, which was also
validated for austenitic and ferritic steels. The results concluded that MRR can be higher for
higher pressures and nozzle diameters, with an abrasive flow rate of 30-35% of the water mass
flow rate. Ojmertz [11] initiated milling on alloy steels and aluminum for varying machining
parameters, where the milled surface is characterized by a variation of depth, partly due to
the stochastic nature of the process. Zeng and Kim [12] applied a theory of intergranular
cracking due to impact-induced stress waves to model the process for milling of polycrystalline
ceramics, and used this crack-network model to evaluate the material removal. A simple milling
technique consisting of longitudinal and transverse feeds was used by Paul [13] to study the
material removal of low-carbon steels through regression analysis. Wang [14] analyzed the
cutting performance in multipass machining of alumina ceramics using AWJ, reporting the
superiority of multipass cutting over single-pass cutting operations by analyzing the kerf
quality and depth for varying traverse speeds and traverse directions. Very recently, Fowler
[15, 16, 17] worked on controlled-depth milling of a titanium alloy, reporting that grit
embedment is present in up to 40% of the area of the milled surface, that it can be minimized
with an increase in traverse speed, and that it depends on the complex interactions of the
process parameters; the authors further investigated the characteristics of the milled surface
for varying grit sizes and traverse speeds, and observed significant changes in the material
removal rates with traverse speed, emphasizing the mechanism of material removal based on the
surface properties of the milled specimen.
Based on the above literature, it is observed that research in the area of milling by AWJM is
still at a nascent stage. The present work aims to develop a predictive model for MRR based on
the process parameters for pocket milling applications. The process parameters considered in
the present work are traverse speed (Ts), abrasive flow rate (AFR), standoff distance (SOD)
and abrasive size, with stainless steel (SS304) selected as the target material.
2. Experimental study on the MRR of SS304 for blind pocket milling
2.1 Experimental Setup
Blind pocket milling has major applications in the automobile, aerospace and defense sectors
for reducing the weight of a component without losing its strength, in particular for
machining hard materials with a reduced heat-affected zone. Figure 1 shows a schematic of the
experimental setup. The workpiece is held in a fixture fabricated for this purpose to
accommodate a specimen of size 30 mm x 30 mm x 10 mm. The chemical composition of the
workpiece material, SS304, is given in Table 1, and its physical and mechanical properties are
given in Table 2.
Figure 1: Experimental setup and fixture plate
Table 1: Chemical composition of SS304

Element     Fe    C     Cr       Ni    Mn   Si   P      S
% by wt.    66    0.08  17.5-20  8-11  2    1    0.045  0.03
Table 2: Physical and mechanical properties of the target material

Hardness (HB), max                123
Thermal conductivity (W/m-K)      16.2
Modulus of elasticity (GPa)       200
Melting point (°C)                1455
Density (kg/m3)                   7861
Poisson ratio                     0.29
Tensile strength, yield (MPa)     215
Shear modulus (GPa)               86
The tool path is generated for a 20 mm x 20 mm pocket through the raster contour method only.
Figure 2 shows the tool path method and the step-over, which is half of the diameter of the jet.
Experiments are conducted on an Abrasive Water Jet machine of OMAX USA, Model No.
2626 having an intensifier pumping system with a maximum operating pressure of 345MPa.
The technical specifications of the machine are given in Table 3.
Figure 2: Pocket milling using AWJM
Table 3: Technical Parameters of OMAX 2626

Maximum traverse speed (mm/min)    4500
Jet impingement angle (degrees)    90
Orifice diameter (mm)              0.33
Mixing tube diameter (mm)          0.762
Mixing tube length (mm)            101.6
Maximum pressure (MPa)             345
Working area (mm2)                 762x600
The parameters considered for experimentation in the present study, with their levels and
values, are specified in Table 4. The selection of these process parameters is based on the
limitations of the machine and on practical applications. Since the machine is not equipped
with a tilt-jet mechanism, all experiments are conducted at an impingement angle of 90° and a
pressure of 35 ksi (240 MPa) only. Apart from the impingement angle, parameters like the
orifice diameter and the mixing tube diameter and length are not changed during experimentation.
Table 4: Design of Experiments

Process Variables              Level 1    Level 2    Level 3
Traverse speed (mm/min)        3000       3500       4000
Abrasive flow rate (kg/min)    0.27       0.38       0.49
Standoff distance (mm)         3          4          5
Abrasive size (mesh)           80         120        160
A full factorial experimental design at 3 levels of each parameter has been made using
MINITAB software, resulting in a total of 81 experiments; a sketch of the enumeration is given
below. Performance parameters such as pocket depth, material removal rate, dimensional
accuracy and surface roughness are measured after experimentation. The work presented in this
paper pays attention to the MRR only, which is measured by a weight-balance method.
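As an aside, the 3^4 = 81-run full factorial design can be enumerated in a few lines of Python; this sketch reproduces Table 4's levels with itertools rather than the MINITAB tool the authors used, and the key names are illustrative.

```python
from itertools import product

levels = {
    "traverse_speed_mm_min": [3000, 3500, 4000],
    "abrasive_flow_kg_min": [0.27, 0.38, 0.49],
    "standoff_mm": [3, 4, 5],
    "mesh": [80, 120, 160],
}
# Every combination of the four factors at three levels each.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
assert len(runs) == 81  # 3 levels raised to 4 factors
```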
2.2 Effect of process parameters on MRR
Fig. 3 shows the experimental plots of the effect of the process parameters (standoff
distance, abrasive flow rate and abrasive mesh size) with varying traverse speeds. The results
thus obtained through experimentation agree well with the results obtained and reported by
earlier researchers.
Figure 3. Effect of process parameters on the MRR (experimental data): (a) SOD; (b) AFR; (c) Mesh
The plots show that an increase in traverse speed decreases the MRR for most of the process
parameters: as the traverse rate increases, the jet interaction time with the target material
decreases, resulting in a decrease in the MRR. A detailed discussion of the influence of the
individual process parameters on the MRR is beyond the scope of this paper.
3. Prediction for MRR
3.1 Dimensional Analysis using Buckingham pi-theorem
In AWJM, a large number of variables affect the material removal process, as evidenced by the
models developed by researchers. Though the literature reveals a variety of dominating
parameters in this process, we have considered a limited set of parameters, as tabulated in
Table 4. In the present paper, we have considered the Vickers hardness of the specimen as the
material input property. The material removal rate can be expressed as:
\dot{V} = f(Q, h, T_s, d, P, H_w),    (1)
where the average diameter of the abrasive particle is considered according to the specification chart
provided by the supplier. The set of parameters in Eq. (1) can be expressed in terms of three
fundamental dimensions, length L, mass M and time T. According to the Buckingham π-theorem we
select Q, P and H_w as the repeating variables. The number of dimensionless products is equal
to the total number of variables minus the number of repeating variables; in our case,
4 (= 7 - 3) dimensionless groups (\pi_1, \pi_2, \pi_3 and \pi_4) are obtained, which can be
expressed as
1 
VL2
Ts
(2)
2 
h
d
(3)
372
3 
Pd 2
QTs
4 
H wd 2
QTs
(4)
(5)
According to dimensional analysis, the functional relation between the dimensionless groups in
Eqs. (2)-(5) is expressed as
\pi_1 = f(\pi_2, \pi_3, \pi_4)    (6)
\dot{V} L^2 / T_s = f( h/d, \; P d^2/(Q T_s), \; d^2 H_w/(Q T_s) )    (7)
The above expression can be reduced to the final form for the material removal rate:
\dot{V} = (T_s / L^2) \, (h/d)^{X_1} \, (P d^2/(Q T_s))^{X_2} \, (d^2 H_w/(Q T_s))^{X_3}    (8)
3.2 Assessment of the model
To proceed further, the constants in Eq. (8) need to be determined from the experimental data
obtained in the above section. Solving the above equations for X1, X2 and X3 with these data
through a least-squares regression fit, the values obtained are X1 = -0.5301, X2 = 2.185 and
X3 = -1.0497; the final expression for the material removal rate is
\dot{V} = (1.5003 \times 10^{-14}) \, \frac{Q^{1.1353} P^{2.185} d^{4.8007}}{h^{0.5301} T_s^{0.1353} H_w^{1.0497}},    (9)
where the parameters and their units are given in the nomenclature. Equation (9) is valid for
blind pocket milling of SS304 material under the specified machining conditions only. The
plots in Figs. 4-6 show the comparisons between the assessed model and the experimental values
of MRR. Differences in the values ranging from 0.1-0.5 mm3/s have been observed; however, the
trend of the model prediction with the parameters is consistent with findings reported in the
past. The variations are observed because the process is influenced by a wider variety of
parameters than the selected ones considered in the model development. In terms of percentage
deviations, the MRR ranges from 0.2%-10.0% for varying SODs; for different AFRs, deviations of
0.6%-9.0% are observed; and for the abrasive sizes, the deviations range from 0.4-5.0%.
3.3 Nomenclature
X1, X2, X3: constants
P: pressure (MPa)
d: average particle diameter(microns)
Q: Abrasive flow rate (kg/min)
h: standoff distance(mm)
Ts: Traverse speed (mm/min)
Hw: Vickers Hardness (kgf/mm2)
V : Volumetric MRR (mm3/min)
MRR: Material Removal Rate
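Equation (9) is straightforward to evaluate in code. The sketch below implements the fitted power law exactly as printed; the mean particle diameter assumed for a mesh-120 abrasive and the Vickers hardness figure for SS304 in the usage line are illustrative assumptions, not values from the paper.

```python
def predicted_mrr(Q, P, d, h, Ts, Hw):
    """Volumetric MRR from the fitted power-law model of Eq. (9), as printed.
    Units follow the paper's nomenclature: Q in kg/min, P in MPa, d in
    microns, h in mm, Ts in mm/min, Hw in kgf/mm^2."""
    return (1.5003e-14 * Q**1.1353 * P**2.185 * d**4.8007
            / (h**0.5301 * Ts**0.1353 * Hw**1.0497))

# Example at mid-level settings from Table 4; d ~ 125 microns for mesh 120
# and Hw ~ 210 kgf/mm^2 for SS304 are assumed figures for illustration.
print(predicted_mrr(Q=0.38, P=240.0, d=125.0, h=4.0, Ts=3500.0, Hw=210.0))
```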
Figure 4. Comparisons between model predictions and experimental data for MRR at different SODs (3, 4 and 5 mm)
Figure 5. Comparisons between model predictions and experimental data for MRR at different AFRs (0.27, 0.38 and 0.49 kg/min)
Figure 6. Comparisons between model predictions and experimental data for MRR at various abrasive sizes (mesh 80, 120 and 160)
3.4 Conclusions
A full factorial experimental investigation of the MRR for blind pocket machining of SS304
using AWJM has been carried out and reported. It has been shown that the predicted model
exhibits a deviation of 0.2-10% from the experimental results obtained within the tested
conditions. This model has been assessed both qualitatively and quantitatively for the
prediction of MRR, and the objective of this paper is to monitor and control the MRR for a
given set of parameters. Work on the formulation of parametric optimization for blind pocket
milling using this process is in progress.
4. References
[1] I. Finnie, "Erosion of surfaces by solid particles", Wear, Vol. 3, 87-103, 1960.
[2] R. K. Swanson, M. Kilman, S. Cerwin, W. Tarver, "Study of particle velocities in water driven abrasive jet cutting", pp. 163-171, 1997.
[3] J. G. A. Bitter, "A study of wear phenomenon Part I", Wear, Vol. 6, 5-21, 1963.
[4] J. G. A. Bitter, "A study of wear phenomenon Part II", Wear, Vol. 6, 169-190, 1963.
[5] M. Eltobgy, E.-G. Ng, M. A. Elbestawi, "Modelling of abrasive waterjet machining: a new approach", CIRP Annals - Manufacturing Technology, Vol. 54, Issue 1, 285-288, 2005.
[6] M. Hashish, "Turning, milling and drilling with abrasive water jets", Proceedings of the 9th International Symposium on Jet Cutting Technology, Paper C2, Japan, 4-6 October 1988.
[7] A. W. Ruff, L. K. Ives, "Measurement of solid particle velocity in erosion wear", Wear, 35, 195-199, 1975.
[8] M. Hashish, "The effect of beam angle in abrasive water jet machining", J. Eng. Ind., 115, pp. 51-56, 1993.
[9] M. Hashish, "Milling with abrasive water jets: a preliminary investigation", in: M. Hood, D. Dornfeld (Eds.), Proceedings of the Fourth US Water Jet Conference, ASME, New York, pp. 1-10, 1987.
[10] A. Laurinat, H. Louis, G. Meier-Wiechert, "A model for milling with abrasive water jets", Proceedings of the 7th American Water Jet Conference, Paper 8, Seattle, WA, pp. 119-132, 1993.
[11] K. M. C. Ojmertz, "Abrasive waterjet milling: an experimental investigation", Proceedings of the 7th American Water Jet Conference, Paper 58, Seattle, WA, pp. 777-792, 1993.
[12] J. Zeng, Thomas J. Kim, "An erosion model for abrasive waterjet milling of polycrystalline ceramics", Wear, 199, 275-292, 1996.
[13] S. Paul, A. M. Hoogstrate, C. A. van Luttervelt, H. J. J. Kals, "An experimental investigation of rectangular pocket milling with abrasive water jet", Journal of Materials Processing Technology, 73, pp. 179-188, 1998.
[14] J. Wang, D. M. Guo, "The cutting performance in multipass abrasive waterjet machining of industrial ceramics", Journal of Materials Processing Technology, 133, pp. 371-377, 2003.
[15] G. Fowler, P. H. Shipway, I. R. Pashby, "A technical note on grit embedment following abrasive water jet milling of titanium alloy", Journal of Materials Processing Technology, 159, 356-368, 2005.
[16] G. Fowler, P. H. Shipway, I. R. Pashby, "Abrasive water-jet controlled depth milling of Ti6Al4V alloy - an investigation of the role of jet-workpiece traverse speed and abrasive grit size on the characteristics of the milled surface", Journal of Materials Processing Technology, 161, 407-414, 2005.
[17] G. Fowler, P. H. Shipway, I. R. Pashby, "The effect of particle hardness and shape when abrasive jet milling of titanium alloy Ti6Al4V", Wear, Vol. 266, Issues 7-8, 613-620, 2009.
312
Design and Experimental Implementation of Time Delay Control for Air
Supply in a PEM Fuel Cell
Ya-Xiong Wanga, Dong-Ji Xuanb, Young-Bae Kima,*
a Department of Mechanical Engineering, Chonnam National University, Gwangju, 500-757, Republic of Korea
b College of Mechanical and Electronic Engineering, Wenzhou University, Wenzhou, 325035, China
E-mail address: yaxiongwang@hotmail.com, xuandongji@163.com, ybkim@chonnam.ac.kr
Abstract
This paper presents a new control strategy for regulating the air supply of a proton exchange
membrane (PEM) fuel cell using time delay control (TDC). The PEM fuel cell is an eco-friendly
and highly efficient energy production device that is considered a potential alternative power
source for vehicular applications. The oxygen stoichiometry is a key parameter for the safe
and optimal operation of the PEM fuel cell. In order to prevent oxygen starvation that might happen in the
PEM fuel cell, TDC has been developed. A low-order linear fuel cell model is obtained by
the model identification taking into account the disturbances and uncertainties of the system
to apply TDC to the PEM fuel cell. Also, to prove the control performance improvement of
the present method, feedforward control (FFC) and proportional-integral control (PIC) are
applied and compared. The proposed controller is implemented and validated on a LabVIEW-based
experimental rig built around a 1.2 kW PEM fuel cell (Nexa Power Module).
Keyword: air flow control, oxygen excess ratio, model identification, time delay control
1. Introduction
Fuel cells are electrochemical devices that convert chemical energy of the fuel directly into
DC electricity. Since the reaction product is water, fuel cells are good candidates for clean
energy generation. The proton exchange membrane (PEM) fuel cell is one type of the fuel
cells, which obtained much of the attention in the automotive field due to its low operating
temperature, quick start-up capability, low sensitivity to orientation, high power density, low
corrosion, zero emission, and low noise and vibration [1, 2]. A quick power delivery to meet
sudden high-power requirements should be provided by the PEM fuel cell, as vehicle
operation includes quick start, stop, acceleration and deceleration. If the stoichiometric
oxygen excess ratio is not adequate within in a short interval, oxygen starvation might occur,
which implies a fast stack degradation and lower power generation, causes immediate and
376
permanent damage and reduces the life of fuel cell. Therefore, the regulation of the air flow
control for the PEM fuel cell is a crucial issue.
In regulating air flow to prevent oxygen starvation, many control approaches including
feedforward control [3], conventional PID control [4], linear quadratic Gaussian (LQG)
control [3], fuzzy control [5], neural network control [6], H∞ and adaptive control [7], sliding
mode control [8], or model predictive control (MPC) [1, 9] have been developed. However,
most of the control studies are just simulation results; experimental implementations have
rarely been reported. Pukrushpan et al. [3] proposed LQG control to prevent oxygen starvation;
however, the control requires an observer and a long-winded linearization process. Arce et al.
[10] used real-time MPC to regulate the oxygen excess ratio; however, the control-oriented
model was so complicated that it needed a heavy computational load.
Kim [2] presented a TDC model in MATLAB/Simulink to meet the control requirements. The control
logic of the presented TDC is simpler and more compact compared with other control-oriented
models [2]. This paper focuses on the design and experimental implementation of TDC for
regulating the oxygen excess ratio of a PEM fuel cell system.
2. PEM Fuel Cell System
2.1 PEM Fuel Cell System Description
The applied experimental equipment is a 1.2 kW Ballard PEM fuel cell (NexaTM Power Module,
Ballard, 2003; see Fig. 1), which is composed of 46 cells, each with a 110 cm2 membrane
electrode assembly (MEA). The ancillaries of the PEM fuel cell system are the air compressor,
humidifier, cooling fan and hydrogen control valve. To measure the data of the PEM fuel cell,
a National Instruments DAQ PCX-6229 card is used. Also, in order to apply a
variable power demand, an electric load from BP Solution Co. is utilized.
Fig. 1. 1.2kW Ballard PEM fuel cell system
2.2 Oxygen Excess Ratio Estimation
The focus of the air supply system in a fuel cell system is to avoid oxygen starvation and to
respond to the immediate power requirement. As the hydrogen and oxygen flows are
correlated, the oxygen excess ratio is considered as the main variable for control objectives
[1]. The oxygen excess ratio is defined as [1, 3]
\lambda_{O2} = W_{O2,ca,in} / W_{O2,reacted}    (1)
The oxygen flow consumed in the reaction, W_{O2,reacted}, depends on the stack current, and it
can be calculated from electrochemistry principles as [3]
W_{O2,reacted} = M_{O2} \cdot n I_{st} / (4F).    (2)
The oxygen mass flow WO2,ca,in available in the dry air Wa,ca,in at the cathode inlet is obtained
from
W_{O2,ca,in} = x_{O2,ca,in} W_{a,ca,in}.    (3)
The oxygen mass fraction xO2,ca,in can be obtained by,
x_{O2,ca,in} = \frac{y_{O2,ca,in} M_{O2}}{y_{O2,ca,in} M_{O2} + (1 - y_{O2,ca,in}) M_{N2}},    (4)
where MO2 and MN2 are the molar masses of oxygen and nitrogen, respectively. The oxygen
mole fraction yO2,ca,in is assumed to be that of atmospheric air. The mass flow rate of the
dry air at the cathode inlet is,
W_{a,ca,in} = \frac{1}{1 + \omega_{ca,in}} W_{ca,in},    (5)
where \omega_{ca,in} is the humidity ratio, which is calculated from
\omega_{ca,in} = \frac{M_v}{M_{a,ca,in}} \cdot \frac{P_{v,ca,in}}{P_{a,ca,in}},    (6)
where Mv is the molar mass of vapor and Ma,ca,in is the molar mass of air at cathode, shown as
M_{a,ca,in} = y_{O2,ca,in} M_{O2} + (1 - y_{O2,ca,in}) M_{N2},    (7)
in which, Pv,ca,in and Pa,ca,in describe the vapor and dry air partial pressures, respectively. The
relation between Pv,ca,in and Pa,ca,in is obtained as
P_{v,ca,in} = \phi_{ca,in} P_{sat}(T_{ca,in}),    (8)
P_{a,ca,in} = P_{ca,in} - P_{v,ca,in} = P_{ca,in} - \phi_{ca,in} P_{sat}(T_{ca,in}),    (9)
where ca,in denoting the relative humidity of air at the cathode inlet and it is assumed as
ca ,in  1 . Tca,in is the inlet flow temperature of the cathode representing the saturation pressure,
which depends on the temperature as [11],
log10 ( Psat )  (1.69  1010 )T 4  (3.85  107 )T 3  (3.39  104 )T 2  0.143T  20.29 .
l
(10)
From del Real et al. [12], it is known that the pressure P_{ca,in} depends not only on the air
flow rate W_{ca,in} but also on the stack current I_{st}; this has been modeled by the
following linear regression [1]
P_{ca,in} = 1.003 + (2.1 \times 10^{-4}) W_{ca,in} + (475.7 \times 10^{-6}) I_{st}.    (11)
The stack current of the fuel cell is shared between the load current I_{net} and the
auxiliary current I_{aux} as
I_{st} = I_{net} + I_{aux}.    (12)
The auxiliary current I_{aux} consumed by the ancillaries is a function of the compressor
input voltage V_{cm}, expressed as
I_{aux} = \gamma_0 + \gamma_1 V_{cm} + \gamma_2 V_{cm}^2.    (13)
The air flow rate Wca,in is measured with an on-board sensor. Its characteristic curve is defined
with a third-degree polynomial:
W_{ca,in} = \alpha_1 V_{sensor} + \alpha_2 V_{sensor}^2 + \alpha_3 V_{sensor}^3.    (14)
Combining Eqs. (1)-(14), the estimation equation of the oxygen excess ratio is obtained in the
following form [1]
\lambda_{O2} = \frac{W_{ca,in}}{I_{st}} \cdot f(p_{ca,in}, T_{ca,in}, \phi_{ca,in}, n, F, y_{O2,ca,in}, M_{O2}, M_{N2}, M_v).    (15)
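The estimator of Eqs. (1)-(10) chains into a short routine. The Python sketch below mirrors that chain; the saturation-pressure coefficient signs follow the common fuel-cell literature form (the printed operators were garbled), and the unit choices are illustrative, not the authors' code.

```python
# Minimal sketch of the lambda_O2 estimator, Eqs. (1)-(9); pressures are
# assumed in kPa and flows in kg/s for illustration only.
M_O2, M_N2, M_V = 32e-3, 28e-3, 18.02e-3   # molar masses (kg/mol)
F, N_CELLS = 96485.0, 46                   # Faraday constant; Nexa cell count
Y_O2 = 0.21                                # O2 mole fraction of atmospheric air

def p_sat(T):
    """Saturation pressure, Eq. (10); signs assumed from the usual
    fuel-cell literature form, constant as printed in the paper."""
    return 10.0 ** (-1.69e-10*T**4 + 3.85e-7*T**3 - 3.39e-4*T**2
                    + 0.143*T - 20.29)

def oxygen_excess_ratio(W_ca_in, I_st, T_ca_in, P_ca_in, phi=1.0):
    P_v = phi * p_sat(T_ca_in)                      # Eq. (8)
    P_a = P_ca_in - P_v                             # Eq. (9)
    M_a = Y_O2 * M_O2 + (1.0 - Y_O2) * M_N2         # Eq. (7)
    omega = (M_V / M_a) * (P_v / P_a)               # Eq. (6)
    W_a = W_ca_in / (1.0 + omega)                   # Eq. (5)
    x_O2 = Y_O2 * M_O2 / M_a                        # Eq. (4)
    W_O2_in = x_O2 * W_a                            # Eq. (3)
    W_O2_react = M_O2 * N_CELLS * I_st / (4.0 * F)  # Eq. (2)
    return W_O2_in / W_O2_react                     # Eq. (1)
```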
3. Air Flow Controller
3.1 Main Control Loop
The PEM fuel cell system has many input variables, as seen from Eq. (15), and shows complex
nonlinear behavior. To control the air flow effectively, a correct plant dynamic model should
be identified, both theoretically and empirically. In order to track the set-point trajectory
accurately and avoid a lengthy modeling process, the present paper develops a double action of
feedforward control and TDC compensation. The manipulated variable is the compressor voltage
Vcm, and the control objective is regulating the oxygen excess ratio
λO2. The overall control scheme is shown in Fig. 2.
Fig. 2. Overall control scheme
The feedforward control conveys the state variation of the PEM fuel cell dynamics at a given
control instant to the TDC controller. Also, the output variable of the feedforward control is
a control action for the fuel cell's steady-state operation, which is obtained by experimental
tests. Furthermore, the TDC creates a negative feedback loop with the controlled variable of
the PEM fuel cell. The TDC adjusts the feedforward control deviation caused by the limited
number of feedforward lookup entries and by quick external load changes. Therefore, TDC
improves the transient response of the PEM fuel cell system when the net load suddenly changes
or other uncertain disturbances occur.
3.2 Time Delay Control Algorithm
Considering the following time invariant non-linear dynamical system by assuming all state
variables and their derivatives are observable [13].
x(t )  f ( x, t )  h( x, t )  B( x, t )u(t )  d (t ) ,
(16)
where x(t), u(t), f(x,t), h(x,t), and d(t) represent state vector, control vector, known dynamics,
unknown dynamics, and disturbance, respectively. The purpose of the TDC is to design a
controller to achieve the desired performance under external disturbances and system
uncertainties. The required performance can be defined with the following time invariant
reference model,
\dot{x}_m(t) = A_m x_m(t) + B_m r(t),    (17)
where x_m(t) and r(t) are the state vector and the reference input vector of the reference
model. If the error vector is defined as e(t) = x_m(t) - x(t), then the desired error dynamics
can be obtained as:
\dot{e}(t) = A_e e(t),    (18)
where A_e represents the error system matrix. If all eigenvalues of A_e are located in the
left half plane, then the errors converge to zero as time passes and the error system is
asymptotically stable. Therefore, the following equation can be obtained:
B(x,t) u(t) = -f(x,t) - h(x,t) - d(t) + \dot{x}_m(t) - A_e e(t),    (19)
Assuming that B(x,t) is not square, the left pseudo-inverse of B is used,
B^+ = (B^T B)^{-1} B^T. Hence, the approximate solution for u(t) can be obtained as:
u(t) = B^+(x,t) \{ -f(x,t) - h(x,t) - d(t) + \dot{x}_m(t) - A_e e(t) \}.    (20)
If the unknown function h(x,t) + d(t) is continuous, and the time delay L is sufficiently
small, then the estimate of the unknown function can be expressed as:
\hat{h}(x(t),t) + \hat{d}(t) \cong h(x(t-L), t-L) + d(t-L) = \dot{x}(t-L) - f(x(t-L), t-L) - B(x(t-L), t-L) u(t-L).    (21)
In most cases, as the matrix B(x,t) is unknown or uncertain, an estimate of B(x,t) is used.
Then, combining Eq. (20) with Eq. (21), the TDC law can be described as
u(t) = \hat{B}^+(x,t) \{ -f(x,t) - \dot{x}(t-L) + f(x(t-L), t-L) + \hat{B}(t-L) u(t-L) + \dot{x}_m(t) - A_e e(t) \}.    (22)
The manipulated variable of the system from Eq. (22) can be expressed simply by making the
time delay L equal to the sampling time Ts or an integer multiple of it.
3.3 PEM Fuel Cell Model for Control
The lookup table for the feedforward control output variable uff is directly obtained from the
experimental tests. For easy implementation and to reduce the computational burden, first-order
models of the PEM fuel cell system are identified at the mid-range and high-load power
operations. The open-loop test for model identification at mid-range load is shown in Fig. 3.
Also, the identification with the first-order model is compared with a third-order model in
Fig. 4, which shows good agreement. The state-space models at the two load levels are:
\dot{x}(t) = a_1 x + b_1 u,    (23)
\dot{x}(t) = a_2 x + b_2 u.    (24)
Fig. 3. Open-loop test for the PEM fuel cell at mid-range power operation
Fig. 4. Model identification result for the PEM fuel cell at mid-range power operation (first-order and third-order identification versus experimental data)
3.4 Time Delay Control Design
The state variable of the reference model for the first-order PEM fuel cell can be expressed
as a constant:
x_m(t) = \lambda_{O2,ref}.    (25)
Based on the tracking requirement, the derivative of the state variable can be simply obtained
as:
\dot{x}_m(t) = 0.    (26)
Then, the tracking error and error dynamics are expressed as
e(t) = \lambda_{O2,ref} - \lambda_{O2}(t),    (27)
\dot{e}(t) = A_e e(t) = K [\lambda_{O2,ref} - \lambda_{O2}(t)].    (28)
If K has a large negative value, then the error dynamics of Eq. (28) is always convergent. In
the real experiment, the data sampling time always impacts the TDC controller design. After
trial and error, the suitable time delay L of the PEM fuel cell system is selected as
L = T_s = 2.5 ms.    (29)
Comparing Eq. (23) with Eq. (16), f(x,t) = a_1 x, B(x,t) = b_1 and \hat{B}^+ = 1/\hat{b}_1 are
obtained. Substituting the above equations and Eq. (28) into Eq. (22), and considering the
different operation points, the input of the TDC can be written as
u(t) = \frac{1}{\hat{b}_i} \{ -a_i \lambda_{O2}(t) - \hat{\dot{\lambda}}_{O2}(t-L) + a_i \lambda_{O2}(t-L) + \hat{b}_i u(t-L) - K [\lambda_{O2,ref} - \lambda_{O2}(t)] \},    (30)
where a_i \in \{a_1, a_2\}, b_i \in \{b_1, b_2\}, \lambda_{O2}(t) is the oxygen excess ratio
at instant t, and \hat{\dot{\lambda}}_{O2}(t-L) is evaluated by a numerical approach, given as
\hat{\dot{\lambda}}_{O2}(t-L) = Q [\lambda_{O2}(t-L) - \lambda_{O2}(t-2L)],    (31)
in which Q is a constant.
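For illustration, the discrete update of Eqs. (30)-(31) can be written in a few lines of Python. This is a minimal sketch: the identified coefficients a, b and the gains K (large and negative, per the text) and Q are placeholders, since the paper does not list their numerical values.

```python
class TimeDelayController:
    """Minimal sketch of the TDC law in Eqs. (30)-(31) for the first-order
    plant x' = a*x + b*u; a, b, K, Q are placeholders, not the paper's."""
    def __init__(self, a, b, K, Q, L=0.0025):
        self.a, self.b, self.K, self.Q, self.L = a, b, K, Q, L
        self.prev_u = 0.0
        self.lam_hist = [0.0, 0.0]  # lambda_O2 at t-L and t-2L

    def update(self, lam, lam_ref):
        lam_L, lam_2L = self.lam_hist
        lam_dot_L = self.Q * (lam_L - lam_2L)              # Eq. (31)
        u = (1.0 / self.b) * (-self.a * lam - lam_dot_L
                              + self.a * lam_L + self.b * self.prev_u
                              - self.K * (lam_ref - lam))  # Eq. (30); K < 0
        self.lam_hist = [lam, lam_L]
        self.prev_u = u
        return u
```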
4. Experimental Results
The experiments with the three controllers (FF, PIC, and TDC) were implemented on the Nexa
power module. Fig. 5 illustrates the real-time control results. The feedforward controller
alone shows deviations in the control response of λO2 tracking the set-point, and the PIC has
a very slow control response when the load increases. Also, the mean square errors of each
controller are calculated and presented in Table 1. The results indicate that the TDC shows
the best control performance in regulating the fuel cell air supply system to prevent air
starvation from a practical viewpoint.
Fig. 5. Real-time control results. (a) Load profile; (b) Compressor control signal; (c)
Feedforward result; (d) TDC result; (e) PIC result; (f) Control response comparisons in 1s.
Table 1 Mean square errors comparison with different controllers

Performance index      TDC      FF       PIC
MSE (steady state)     0.2168   0.4026   0.4602
MSE (transition)       0.8971   0.9217   0.9810
5. Conclusions
The paper proposes a new control approach based on a low-order PEM fuel cell model. In order
to regulate the air flow supply, a TDC controller is designed. In the real-time experimental
implementation, the TDC results, compared with the single feedforward control and the PIC,
illustrate better control performance, with higher control accuracy, faster settling time, and
smaller fluctuations during external load variation.
6. Acknowledgment
This work was supported by the Second BK21, Department of Mechanical Engineering, CNU,
Korea, and also by the National Natural Science Foundation of China (No. 61203042).
7. References
[1] J. K. Gruber, M. Doll and C. Bordons, "Design and experimental validation of a MPC for the air feed of a fuel cell", Control Engineering Practice, Vol. 17, 2009, pp. 874-885.
[2] Y. B. Kim, "Improving dynamic performance of proton-exchange membrane fuel cell system using time delay control", Journal of Power Sources, Vol. 195, 2010, pp. 6329-6341.
[3] J. T. Pukrushpan, A. G. Stefanopoulou and H. Peng, "Control of fuel cell breathing", IEEE Control Syst. Mag., Vol. 24, 2004, pp. 14-25.
[4] J. W. Ahn and S. Y. Choe, "Coolant controls of a PEM fuel cell system", Journal of Power Sources, Vol. 179, 2008, pp. 252-264.
[5] J. O. Schumacher, M. Denne, M. Zedda and M. Stueber, "Control of miniature proton exchange membrane fuel cells based on fuzzy logic", Journal of Power Sources, Vol. 129, 2004, pp. 143-151.
[6] M. J. Khan and M. T. Iqbal, "Dynamic modeling and simulation of a fuel cell generator", Fuel Cells, Vol. 5, 2005, pp. 97-104.
[7] F. C. Wang, H. T. Chen, Y. P. Yang and J. Y. Yen, "Multivariable robust control of a proton exchange membrane fuel cell system", Journal of Power Sources, Vol. 177, 2008, pp. 393-403.
[8] W. Garcia-Gabin, F. Dorado and C. Bordons, "Real-time implementation of a sliding mode controller for air supply on a PEM fuel cell", Journal of Process Control, Vol. 20, 2009, pp. 325-336.
[9] C. Panos, K. I. Kouramas, M. C. Georgiadis and E. N. Pistikopoulos, "Modelling and explicit model predictive control for PEM fuel cell systems", Journal of Power Sources, Vol. 67, 2012, pp. 15-25.
[10] A. Arce, A. J. del Real, C. Bordons and D. Ramirez, "Real-time implementation of a constrained MPC for efficient airflow control in a PEM fuel cell", IEEE Trans. Ind. Electron., Vol. 57, 2010, pp. 1892-1905.
[11] C. A. Ramos-Paja, Fuel Cell Modeling and Control for Fuel Consumption Optimization, Ph.D. Dissertation, Rovira i Virgili University, Spain, 2009.
[12] A. J. del Real, A. Arce and C. Bordons, "Development and experimental validation of a PEM fuel cell dynamic model", Journal of Power Sources, Vol. 173, 2007, pp. 310-324.
[13] T. C. Hsia and L. S. Gao, "Robot manipulator control using decentralized linear time-invariant time-delayed joint controllers", Proc. IEEE Int. Conf. Robotics and Automation, Vol. 3, 1990, pp. 2070-2075.
Program - Poster Sessions
Biomedical Engineering
14:30-16:15, December 16, 2012 (Meeting Room 3)
258: The Effect of Coronary Tortuosity on Coronary Pressure: A Patient-Specific Study
Xinzhou Xie
Fudan University
Yuanyuan Wang
Fudan University
Hu Zhou
Fudan University
316: Modification of Poly (3-Hydroxybutyrate-Co-4-Hydroxybutyrate)/Chitosan Blend
Film by Chemical Crosslinking
Amirul A. A
Universiti Sains Malaysia
Rennukka M.
Universiti Sains Malaysia
258
The Effect of Coronary Tortuosity on Coronary Pressure: A
Patient-Specific Study
Xinzhou Xiea, Yuanyuan Wang, Hu Zhoub
a Department of Electronic Engineering, Fudan University, No. 220 Handan Rd., Shanghai, China; E-mail address: 10110720031@fudan.edu.cn
b Department of Electronic Engineering, Fudan University, No. 220 Handan Rd., Shanghai, China; E-mail address: yywang@fudan.edu.cn
Abstract
Coronary tortuosity is commonly observed in clinical screenings and causes a reduction in the
coronary pressure. However, whether this reduction leads to a significant decrease in the
coronary perfusion pressure is still unknown. A computational fluid dynamics (CFD) study
was conducted to evaluate the effect of tortuosity on the coronary pressure. A 3D model of a
tortuous left anterior descending coronary artery (LAD) was reconstructed based on
multislice computerized tomography (CT) images. The rest and the exercise conditions are
modeled by specifying proper boundary conditions to investigate the influence of exercise on
the tortuous coronary pressure. Results indicate that the pressure reduction in the tortuous
coronary artery is negligible during the rest condition. However, exercise causes a notable
decrease of the tortuous coronary pressure. This may cause an insufficient perfusion pressure
in the tortuous coronary artery and may lead to myocardial ischemia during exercise conditions.
Keyword: Coronary tortuosity; Perfusion pressure; CFD; Exercise.
1. Introduction
Tortuous coronary arteries are commonly observed in clinical screenings. It has been reported
that coronary tortuosity may be related to aging, atherosclerosis, hypertension and diabetes
mellitus [1-3]. However, little attention has been paid to the impact of coronary tortuosity on
the coronary perfusion pressure. Zegers et al. hypothesized that coronary tortuosity may lead
to ischemia by describing three cases of patients with coronary tortuosity [4]. Ventricular
function is hindered by tortuous coronary arteries [5]. Patients with coronary tortuosity
often suffer exercise-induced chest pain that typically disappears at rest [4]. All
these findings imply that the coronary tortuosity may affect the coronary blood supply.
However, no clinical data can directly support it. Whether coronary tortuosity will lead to a
significant decrease in the coronary perfusion pressure is still unknown.
The computational fluid dynamic (CFD) offers a new way to solve this problem. The CFD is
a branch of fluid mechanics which uses numerical methods and algorithms to solve and
analyze problems about fluid flows. An accurate reconstruction of the blood flow in
physiological and pathological conditions can be obtained by using reconstructed geometries
and proper boundary conditions. Recently, Li et al. performed a 2D CFD simulation to
investigate the impact of tortuosity on coronary pressure [6]. In their study, they simplified
the simulation to a 2D CFD study and idealized geometry models were used, thus the result
should be validated by more accurate simulation model. The coronary blood supply will
increase markedly during exercise and this was also ignored in their work.
In this paper, the blood flow in a patient-specific 3D left anterior descending coronary artery
(LAD) model was reconstructed using the CFD approach. The rest and the exercise
conditions are modeled by specifying the proper boundary conditions, in order to investigate
the influence of tortuosity on coronary pressure in different physiological conditions.
2. Method
2.1 Vascular Geometry Reconstruction
A 3D model of the LAD of a patient with coronary tortuosity was reconstructed based on
multislice computerized tomography (CT) images acquired at mid-diastole, 69% into the
duration of the cardiac cycle, using SOMATOM Definition (Siemens Medical Systems,
Forchheim, Germany) CT scanner. The in-plane image resolution was approximately 0.445
mm/pixel and the slice interval was 0.4 mm. The parallel plane scans were translated into 3D
images using Mimics (Mimics, Materialise, Leuven, Belgium). The LAD was then
semi-automatically segmented from the stack of cross-sectional images. Then the 3D LAD
model was reconstructed using Mimics. The reconstructed LAD geometry is depicted in
Figure 1 (A). The 3D geometry was further discretized for CFD simulations using the
software package ICEM CFD (ANSYS, Inc.). A mesh-independence study was performed, and the
computed pressure drop for the final chosen mesh differed by 2% from that for the doubled mesh
resolution. With this criterion, the number of discretized elements reached 328000. The inlet
face mesh is shown in Figure 1 (B).
Figure 1. (A) The reconstructed 3D LAD geometry model. (B) The face mesh of the inlet plane.
2.2 Boundary Conditions
The inlet conditions for the simulation were based on the measurements carried out by
Perktold et al. in a replica of the LAD branch, using the laser Doppler velocimeter (LDV) [7].
Figure 2 displays the corresponding flow waveform. A small negative flow can be seen at the
onset of the systole. The Womersley velocity profiles based on the specific velocity
waveform were applied at the inlet. A zero relative pressure condition was used as the outlet
boundary condition. Exercise increases the coronary blood flow, with peak values during
maximal exercise typically three to five times the resting level [8]. To investigate the
impact of coronary tortuosity on the coronary pressure during exercise conditions, we simply
increased the inlet velocity threefold to model the moderate exercise condition and fivefold
to model the heavy exercise condition. The no-slip condition
was applied on the vessel wall.
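As a minimal sketch (not the authors' solver setup) of how a Womersley velocity profile can be evaluated for a single harmonic of the inlet waveform, the snippet below uses the classical analytical solution for oscillatory pipe flow. The radius matches the inlet diameter given in Section 3, and the fluid properties match Section 2.3; the cycle period and pressure-gradient amplitude are illustrative placeholders.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

RHO, MU = 1070.0, 4e-3        # blood density [kg/m^3] and viscosity [Pa s]

def womersley_profile(r, t, R=1.06e-3, omega=2 * np.pi / 0.8, dpdz=1e3):
    """Axial velocity u(r, t) for one oscillatory harmonic of the axial
    pressure gradient dp/dz = dpdz * exp(i*omega*t) (Womersley flow)."""
    nu = MU / RHO
    alpha = R * np.sqrt(omega / nu)       # Womersley number
    arg = 1j ** 1.5 * alpha
    shape = 1.0 - jv(0, arg * r / R) / jv(0, arg)
    return np.real(1j * dpdz / (RHO * omega) * shape * np.exp(1j * omega * t))

# e.g. the velocity at mid-radius at t = 0.42 s (placeholder values):
u = womersley_profile(r=0.53e-3, t=0.42)
```

In practice the measured waveform of Perktold et al. would be Fourier-decomposed and the harmonics superposed, each evaluated with this shape function.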
Figure 2. The inlet velocity profile for the rest condition.
2.3 Computational Method
The numerical simulations were performed using the CFD software ANSYS FLUENT
(ANSYS, Inc.), a finite-volume-based package for fluid mechanics computations. The blood
was modeled as an incompressible Newtonian fluid with dynamic viscosity μ = 4 cP and
density ρ = 1070 kg/m³.
It has been reported that coronary artery deformation in the radial direction is small and
that the effect of temporal radius variation on hemodynamic parameters is negligible [9].
The temporal variation of the cross-sectional shape and radius was therefore ignored in the
present model. The dynamic vessel motion of coronary arteries significantly affects the
blood flow pattern but has limited influence on the pressure drop distribution, so it was not
included in this simulation.
The computational simulation was conducted to explore the impact of exercise on coronary
pressure in tortuous coronary arteries. The mean driving pressure (MDP) and the
time-averaged resistance (TAR) of the tortuous section were investigated for all simulated
conditions. Extravascular compression during systole markedly affects the coronary flow,
and most of the coronary flow occurs during diastole; therefore the mean diastolic driving
pressure (MDDP) for the rest and exercise conditions was also investigated.
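A minimal sketch (not the authors' post-processing code) of how these quantities can be computed from the simulation output, assuming sampled waveforms of the inlet-outlet pressure drop dp [Pa] and the flow rate q [m^3/s] over one cardiac cycle, and taking TAR as the time average of dp/q:

```python
import numpy as np

def mdp_tar(t, dp, q):
    """Mean driving pressure and time-averaged resistance over a cycle."""
    T = t[-1] - t[0]
    mdp = np.trapz(dp, t) / T        # cycle-averaged pressure drop [Pa]
    tar = np.trapz(dp / q, t) / T    # time-averaged resistance [Pa s/m^3]
    return mdp, tar

def mddp(t, dp, diastole):
    """Mean diastolic driving pressure: the same average restricted to
    the diastolic samples (diastole is a boolean mask over t)."""
    td, dpd = t[diastole], dp[diastole]
    return np.trapz(dpd, td) / (td[-1] - td[0])
```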
3. Results
The parameters of the reconstructed LAD geometry as shown in Figure 1 (A) are as follows.
The inlet diameter Din = 0.212 cm; the outlet diameter Dout = 0.178 cm; the curve length of
the vessel L = 5.146 cm. There are five large bends along the path line, with bend angles of
76.3°, 70.1°, 84.5°, 64.9° and 67.8°, respectively.
Figure 3. The pressure drop distribution at t = 0.42 s for the three simulated conditions. (A)
Rest condition. (B) Moderate exercise condition. (C) Heavy exercise condition.
Figure 3 shows the pressure drop distribution at t = 0.42 s for the rest and exercise conditions.
At this time, the coronary blood flow reaches its maximum rate. The results clearly show
that exercise significantly increases the driving pressure of tortuous coronary arteries. At
rest, the peak driving pressure is only 336.7 Pa (2.53 mm Hg), while during heavy exercise
it increases to 2596.7 Pa (19.48 mm Hg).
Figure 4. The MDP and the MDDP for the three simulated conditions.
The TARs for the three simulated conditions are 36120 kPa·s/m³ for the rest condition,
46429 kPa·s/m³ for the moderate exercise condition, and 55715 kPa·s/m³ for the heavy
exercise condition. The MDP and the MDDP in the different simulated conditions are shown
in Figure 4. Both increase with exercise intensity: for the moderate exercise condition they
increase almost 4-fold relative to the rest condition, while the TAR rises to 1.29 times the
rest value; for the heavy exercise condition the MDP and MDDP increase nearly 8-fold and
the TAR rises to 1.54 times the rest value.
4. Discussion
Tortuosity affects the coronary blood flow and causes a reduction in coronary pressure.
Whether this reduction is significant and will lead to myocardial ischemia is still unknown.
To determine the impact of tortuosity on coronary pressure, we performed a quantitative
analysis of the coronary pressure in a patient-specific tortuous LAD model under rest and
exercise conditions.
From the results above, the coronary perfusion pressure of a tortuous coronary artery is
reduced significantly during exercise and may cause myocardial ischemia, whereas at rest
the driving pressure of the tortuous section is small and its influence on the coronary
perfusion pressure is negligible.
The resistance of this tortuous LAD section increases by only 1.6 times during heavy
exercise as compared with the rest level. However, this increase may be large enough to
make the regulation of the blood flow ineffective. As in all vascular beds, it is the small
arteries and arterioles in the microcirculation that are the primary sites of vascular resistance.
During heavy exercise, the increase of aortic pressure only slightly exceeds the increase of
effective back pressure, and the effective coronary perfusion pressure increases by no more
than 20–30% [8], while the coronary blood flow increases 3- to 5-fold through a decrease in
coronary vascular resistance; thus the coronary vascular resistance falls to 20–30% of its
basal resting value [8]. In tortuous coronary arteries, exercise increases the resistance of the
tortuous sections. On the one hand, the resistance of the small arteries is reduced to 20–30%
of basal resting values; on the other hand, the resistance of the tortuous section increases to
129–154% of the rest level. A much larger share of the total coronary perfusion pressure is
then used to drive the blood through the tortuous section. This renders the regulation of
blood flow ineffective, and the demand for myocardial blood supply may not be met during
exercise.
We have to clarify that this study has some limitations. During exercise, the heart rate
increases and the coronary blood flow pattern also changes; here we only increased the inlet
velocity to model exercise conditions. In addition, some small branches of the LAD were
removed from the reconstructed model due to the low resolution of the CT images.
5. Conclusion
In this paper, a CFD study was conducted to evaluate the effect of tortuosity on coronary
pressure. A 3D model of a tortuous LAD was reconstructed from CT images, and the rest
and exercise conditions were modeled by specifying proper boundary conditions to
investigate the influence of exercise on the pressure in the tortuous coronary artery. The
results demonstrate that the influence of coronary tortuosity on the coronary perfusion
pressure is negligible at rest, whereas during exercise the perfusion pressure of the tortuous
coronary artery is reduced significantly and may cause myocardial ischemia.
6. Acknowledgement
This work was supported by the National Natural Science Foundation of China (Grant No.
61271071 and No. 11228411), the National Key Technology R&D Program of China (No.
2012BAI13B02) and Specialized Research Fund for the Doctoral Program of Higher
Education of China (No.20110071110017).
7. References
[1] H. C. Han, Twisted blood vessels: symptoms, etiology and biomechanical mechanisms,
Journal of vascular research, vol. 49, pp. 185-197, 2012.
[2] Y. Li, C. Shen, Y. Ji, Y. Feng, G. Ma, and N. Liu, Clinical implication of coronary
tortuosity in patients with coronary artery disease, PloS one, vol. 6, p. e24232, 2011.
[3] O. Turgut, I. Tandogan, K. Yalta, M. B. Yilmaz, and R. Dizman, Geodesic pattern of
coronary arteries as a predictor for cardiovascular risk: clinical perspectives,
International journal of cardiology, vol. 142, pp. e38-e39, 2010.
[4] E. Zegers, B. T. J. Meursing and A. J. M. O. Ophuis, Coronary tortuosity: a long and
winding road, Netherlands Heart Journal, vol. 15, pp. 191-195, 2007.
[5] O. Turgut, A. Yilmaz, K. Yalta, B. M. Yilmaz, A. Ozyol, O. Kendirlioglu, F. Karadas,
and I. Tandogan, Tortuosity of coronary arteries: an indicator for impaired left
ventricular relaxation?, The International Journal of Cardiovascular Imaging, vol. 23,
pp. 671-677, 2007.
[6] Y. Li, Z. Shi, Y. Cai, Y. Feng, G. Ma, C. Shen, Z. Li, and N. Liu, Impact of coronary
tortuosity on coronary pressure: numerical simulation study, PloS one, vol. 7, p. e42558,
2012.
[7] K. Perktold, M. Hofer, G. Rappitsch, M. Loew, B. Kuban, and M. Friedman, Validated
computation of physiologic flow in a realistic coronary artery branch, Journal of
biomechanics, vol. 31, pp. 217-228, 1997.
[8] D. J. Duncker and R. J. Bache, Regulation of coronary blood flow during exercise,
Physiological reviews, vol. 88, pp. 1009-1086, 2008.
[9] R. Torii, J. Keegan, N. B. Wood, A. W. Dowsey, A. D. Hughes, G. Z. Yang, D. N.
Firmin, S. A. M. G. Thom, and X. Y. Xu, MR image-based geometric and hemodynamic
investigation of the right coronary artery with dynamic vessel motion, Annals of
biomedical engineering, vol. 38, pp. 2606-2620, 2010.
316
Modification of Poly(3-Hydroxybutyrate-Co-4-Hydroxybutyrate)/Chitosan
Blend Film by Chemical Crosslinking
Amirul A. Aa,b* and Rennukka M.a
a School of Biological Sciences, Universiti Sains Malaysia, 11800 Penang, Malaysia
b Malaysian Institute of Pharmaceuticals and Nutraceuticals, MOSTI, Malaysia
* Corresponding author: e-mail: amirul@usm.my
Abstract
The paradigm shift from synthetic to natural polymers has opened a window for the
utilization and development of abundant polymers into hybrid materials composed of two or
more polymers with modified properties. In this study, poly(3-hydroxybutyrate-co-4-
hydroxybutyrate) [P(3HB-co-4HB)]/chitosan blend films chemically crosslinked with
glutaraldehyde (GA) were fabricated and characterized. Film solubility and the water
absorption capacity of the crosslinked films decreased with increasing GA concentration.
The formation of covalent bonding between the components was confirmed by the formation
of an imine group due to the interaction between the amine group of chitosan and the
aldehyde group of GA. Thermal properties were determined by differential scanning
calorimetry (DSC) and thermogravimetric analysis (TGA). Ionic exchange capacity (IEC)
measurements revealed the extent of crosslinking in the blend films by GA, and the findings
suggested that the GA concentration had a pronounced effect on the percentage of
crosslinking. Assessment of the antimicrobial performance clearly showed that the
crosslinked blend films reduced the cell populations of both Escherichia coli and
Staphylococcus aureus.
Key words: antimicrobial; glutaraldehyde; polyhydroxyalkanoate; polysaccharide;
structural characteristic; thermal properties
Chemical Engineering
14:30-16:15, December 15, 2012 (Meeting Room 3)
276: Aromatic-turmerone’s anti-inflammatory and Neuroprotective Effects in
Microglial Cells are Mediated by PKA and HO-1 Signaling
Sun Young Park
Pusan National University
Mei Ling Jin
Pusan National University
Young Hun Kim
Pusan National University
Sang-Joon Lee
Pusan National University
282: The Anti-inflammatory Effect of Nardostachys Chinensis on LPS or LTA
stimulated BV2 Microglial Cells
Ah jeong Park
Pusan National University
Sun Young Park
Pusan National University
Meiling Jin
Pusan National University
Hye Won Eom
Pusan National University
Young Hun Kim
Pusan National University
Sang Joon Lee
Pusan National University
302: Synthesis and Characterization of ZnS Nanopowder Prepared by
Microwave-assisted Heating
Wei Huang
National Taiwan Normal University
Min-Hung Lee
National Taiwan Normal University
San Chan
Tungnan University
Yueh-Chien Lee
Tungnan University
Ming-Kwen Tsai
Tungnan University
Sheng-Yao Hu
Tungfang Design University
Jyh-Wei Lee
Ming Chi University of Technology
417: Bridging Gap between Transient and Steady State Gas Permeation
Kean Wang
The Petroleum Institute
276
Aromatic-turmerone’s anti-inflammatory and neuroprotective effects in
microglial cells are mediated by PKA and HO-1 signaling
Sun Young Parka, Mei Ling Jinb, Young Hun Kima and Sang-Joon Leeb*
a Bio-IT Fusion Technology Research Institute
b Department of Microbiology, Pusan National University, Busan, 609-735 Korea
E-mail: sangjoon@pusan.ac.kr; sundeng99@pusan.ac.kr
Abstract
Despite data supporting an immune-modulating effect of ar-turmerone in vitro, the underlying
signaling pathways are largely unidentified. Here, we investigated ar-turmerone's
anti-neuroinflammatory properties in amyloid β- and LPS-stimulated BV-2 microglial cells.
The increased production of pro-inflammatory cytokines and chemokines, PGE2, NO and
ROS, and the increased MMP-9 enzymatic activity in amyloid β- and LPS-stimulated
microglial cells were inhibited by ar-turmerone. Subsequent mechanistic studies revealed
that ar-turmerone inhibited amyloid β- and LPS-induced activation of JNK, p38 MAPK and
NF-κB, and decreased the phosphorylation of amyloid β- and LPS-induced STAT-1. We
next demonstrate that ar-turmerone-induced HO-1 and Nrf-2 activation suppressed the
activation of neuroinflammatory molecules in amyloid β- and LPS-induced microglial cells,
and that down-regulation of HO-1 signals was sufficient to induce the expression of iNOS
and COX-2 and the production of ROS in microglial cells. Interestingly, we found that
ar-turmerone induced phosphorylation of CREB by raising the cAMP level in microglial
cells. Furthermore, HO-1 activation via PKA-mediated CREB phosphorylation attenuated
the expression of neuroinflammatory molecules in amyloid β- and LPS-induced microglial
cells. Through this study, we have shown that HO-1 and its upstream effector PKA play a
pivotal role in the anti-neuroinflammatory and neuroprotective response of ar-turmerone in
amyloid β- and LPS-stimulated microglia.
Key words: ar-turmerone, neuroinflammation, MMP-9, ROS, HO-1, PKA
1. Introduction
The neuroinflammatory responses in the central nervous system (CNS) are well-known
features of various neurodegenerative diseases, including Alzheimer's disease (AD),
Parkinson's disease (PD), HIV-associated dementia (HAD), stroke, and multiple sclerosis
(MS), and are mediated by the activation of microglia, the resident immune cells of the CNS
[1-3]. Microglial cells can be activated in 2 ways: first, by responding to neuronal death or
neuronal damage caused by neuroinflammatory responses; second, by responding to
environmental toxins, such as bacterial and viral pathogens [4]. Activation of microglia has
been proposed to play roles in host defenses and tissue repair in the CNS. However,
deregulated or chronic activation of these cells can induce excessive neuroinflammatory
molecules, leading to inflammation-mediated neuronal cell death and brain injury [5,6].
Therefore, regulation of microglial activation has been considered an important therapeutic
approach for neurodegenerative disease. As such, the development of anti-inflammatory
compounds could provide potential therapeutic agents for neurodegenerative diseases [7].
Microglia express pattern recognition receptors (PRRs) that can bind pathogen-associated
molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs), such as
lipopolysaccharide (LPS) and amyloid β. Toll-like receptors (TLRs), a major class of PRRs,
play a crucial role in host defenses by regulating the innate immune response. Activation of
TLRs in microglia causes the production of inflammatory molecules. Recently, several
studies have indicated that LPS is involved in the pathogenesis of CNS infectious diseases
and can induce neuronal cell death. In a study using cultures of microglia prepared from
TLR-2/4-deficient mice, it was established that TLR-2/4 plays a pivotal role in the recognition
of LPS and is involved in LPS-induced microglia activation. LPS signaling through TLR-2/4
induces several intracellular pathways to induce broad changes in gene expression, ultimately
inducing the release of various pro-inflammatory molecules and neurotoxic factors. Signal
transduction via TLR-2/4 is mediated by different adaptor proteins, including MyD88, which
promote downstream signaling via MAPK and NF-κB to induce pro-inflammatory molecules
[8-12]. Accumulation of amyloid β produced by processing of amyloid precursor protein is
correlated with the pathogenic cascade that ultimately leads to AD. It is well supported in the
literature that fibrillar forms of amyloid β serve as an inflammatory stimulus for neuronal
cells, and several reports have identified the underlying signaling mechanisms. Deposition of
amyloid in the brain and cerebral vessels causes activation of microglial cells, which produce
neurotoxic inflammatory molecules and contribute to neurodegeneration in AD [13,14].
These pro-inflammatory molecules produced by microglia can cause neuronal cell damage.
Several papers have demonstrated that TLR-2/4 plays a critical role in fibrillar amyloid
β-induced microglial activation. Microglia are activated by fibrillar β-amyloid and LPS and
release various pro-inflammatory and neurotoxic factors, such as cytokines, chemokines,
prostaglandin E2 (PGE2), nitric oxide (NO), reactive oxygen species (ROS), and matrix
metalloproteinases (MMPs). The overproduction of these neurotoxic pro-inflammatory
molecules plays a dominant role in accelerating the degenerative process in the provoked
CNS [15-18].
Aromatic (ar)-turmerone is a turmeric oil isolated from Curcuma longa, a plant that has been
used for centuries in Southeast Asia as both a remedy and a food [31]. Curcumin,
demethoxycurcumin, bisdemethoxycurcumin, ar-turmerone, and β-turmerone are the major
bioactive compounds found in C. longa. In modern pharmacological studies, C. longa
constituents, particularly curcumin, have been shown to have anti-neuroinflammatory,
neuroprotective, chemopreventive, immunomodulatory, and potentially chemotherapeutic
properties [32-38]. Recently, we reported the anti-melanogenic effects of ar-turmerone [39].
However, the effects of ar-turmerone from Curcuma longa on neuroinflammation have not
been investigated. Elucidating the signaling pathways mediating the activation of microglia
may identify inhibitory compounds that could have beneficial effects in neurodegenerative
diseases.
2. Results
2.1. Ar-turmerone suppresses amyloid β and LPS induction of various neuroinflammatory
molecules in microglia
(A) Cells were treated with different concentrations of ar-turmerone (5, 10 and 20 μM) for
1 h and then with amyloid β (5 μM) or lipoteichoic acid under serum-free conditions. After
16 h of stimulation, matrix metalloproteinase-9 (MMP-9) enzymatic activity was analyzed
by gelatin zymography (Zym). Protein expression of MMP-9, inducible nitric oxide
synthase (iNOS), cyclooxygenase-2 (COX-2), and tubulin was detected by western blot
using specific antibodies. (B) Nitrite content was measured using the Griess reaction. The
concentrations of prostaglandin E2 (PGE2), tumor necrosis factor-alpha (TNF-α),
interleukin (IL)-6, and monocyte chemotactic protein 1 (MCP-1) in the culture media were
measured using commercial enzyme-linked immunosorbent assay (ELISA) kits. (C) Cell
viability was assessed by MTT assay, and the results are presented as the percentage of
surviving cells compared to control cells. Each bar represents the mean (SE) from 3
independent experiments per group. *P < 0.05 vs. the β-amyloid- or LPS-treated group.
2.2. Effect of ar-turmerone on amyloid β-induced reactive oxygen species production in
BV-2 microglial cells.
Cells were treated with different concentrations of ar-turmerone (5, 10 and 20 μM) for 1 h
and then with amyloid β. Cells were then incubated with DCFH2-DA for an additional 1 h.
The intracellular levels of reactive oxygen species (ROS) were determined by confocal
microscopy (A) and flow cytometry (B). The values represent mean fluorescence intensity
± S.E. from 3 independent experiments per group. *P < 0.05 vs. the β-amyloid- or
LPS-treated group.
2.3. Inhibitory effects of ar-turmerone on amyloid β- and LPS-induced activation of
nuclear factor kappa-light-chain-enhancer of activated B cells.
(A) BV-2 microglial cells were treated with ar-turmerone followed by amyloid β (5 μM)
or LPS treatment for 0.5 h. Nuclear translocation of nuclear factor
kappa-light-chain-enhancer of activated B cells (NF-κB) was confirmed by western
blotting. (B) Nuclear translocation of NF-κB was assessed by confocal microscopy. (C)
Cells were co-transfected with the κB-luc reporter and the control Renilla luciferase
plasmid, pRL-CMV. (D) Cells were incubated with ar-turmerone for 0.5 h and then with
the stimulus for 4 h. DNA was immunoprecipitated with an anti-NF-κB antibody and was
purified.
2.4. Ar-turmerone inhibits β-amyloid- and LPS-induced activation of signal transducers
and activators of transcription (STAT)-1 and STAT-3.
(A) BV-2 microglial cells were treated with ar-turmerone followed by LPS (1 µg/mL) for
2 h. Phosphorylation of STAT-1 and STAT-3 was confirmed by western blotting. (B)
Nuclear translocation of STAT-1 was assessed by confocal microscopy. (C) Cells were
incubated with ar-turmerone for 1 h and then with the stimulus for 4 h. DNA was
immunoprecipitated with an anti-pSTAT-1 antibody and was purified.
2.5. Ar-turmerone inhibition of amyloid β- and LPS-induced phosphorylation of p38,
c-Jun N-terminal kinase, and mitogen-activated protein kinase in BV-2 microglial cells.
(A) BV-2 microglial cells were treated with the indicated concentrations of ar-turmerone
for 1 h and then stimulated with amyloid β or LPS for 2 h. An equal amount of cell extract
was analyzed by western blotting. (B) The cells were treated with LPS for 16 h in the
presence of ar-turmerone (20 μM). Subsequently, the intracellular levels of ROS were
determined by flow cytometry. Each bar represents the mean (SE) from 3 independent
experiments per group. *P < 0.05 and **P < 0.01 vs. the LPS-treated group.
2.6. Effects of HO-1 on ar-turmerone-mediated anti-neuroinflammatory effects in
LPS-stimulated microglial cells.
(A) Cells were cultured with increasing concentrations of ar-turmerone for 8 h or with
20 μM ar-turmerone for the indicated times. HO-1 expression was determined by western
blot. (B) Cells were incubated with 20 μM ar-turmerone for the indicated lengths of time
or with the indicated concentrations of ar-turmerone for 1 h. Nuclear localization of Nrf-2
was determined by western blot. (C) Cells were transfected with the ARE-luciferase
construct and then treated with the indicated concentrations of ar-turmerone. (D) The cells
were co-incubated with ar-turmerone (20 μM) or an HO-1 activator for 12 h. The quantity
of bilirubin produced in the culture media was measured spectrophotometrically and
calculated using the molar extinction coefficient of bilirubin dissolved in benzene. (E)
Cells were transfected with si-control and si-Nrf-2 or si-HO-1. Forty-eight hours after
transfection, the cells were treated with 20 μM ar-turmerone for 1 h and then stimulated
with LPS (1 µg/mL) for 16 h. (F) The si-Nrf-2 or si-HO-1 transfected cells were treated
with 20 μM ar-turmerone for 1 h, then stimulated with LPS (1 µg/mL) for 16 h. The iNOS
promoter activity and NO and ROS levels were determined.
2.7. Role of protein kinase A-cAMP response element binding protein in
ar-turmerone-mediated attenuation of LPS-induced microglial cells.
(A) Cells were cultured with increasing concentrations of ar-turmerone for 0.5 h or with
20 μM ar-turmerone for the indicated times. p-CREB and CREB expression were
determined by western blot. (B) Cells pre-treated with ar-turmerone were stimulated with
increasing concentrations, and the intracellular cAMP concentration was measured. H-89
is a selective PKA inhibitor. (C) Cells were pre-treated with H-89 and then treated with
ar-turmerone for 0.5 h. HO-1 expression was determined by western blot. (D) Cells were
pre-treated with H-89 and then stimulated with LPS (1 µg/mL) for 16 h. The iNOS
promoter activity and NO and ROS levels were determined.
Fig. 8. Ar-turmerone's neuroprotective effect through inhibition of amyloid β- and
LPS-activated microglial cells. BV-2 microglial cells were stimulated with amyloid β and/or
ar-turmerone for 16 h; the media were then transferred to HT-22 hippocampal cells. HT-22
hippocampal cell viability was assessed by TUNEL assay (A) and MTT assay (B) after a
24-h incubation period. The HT-22 hippocampal cells were stimulated with amyloid β
and/or ar-turmerone for 16 h and then assessed by TUNEL assay (C) and MTT assay (D)
after a 24-h incubation period.
3. References
[1] Good PF, Werner P, Hsu A, Olanow CW, Perl DP. Evidence of neuronal oxidative
damage in Alzheimer's disease. Am J Pathol. 1996;149:21-28.
[2] Hald A, Lotharius J. Oxidative stress and inflammation in Parkinson's disease: is there
a causal link? Exp Neurol. 2005;193:279-290.
[3] Pratico D, Trojanowski JQ. Inflammatory hypotheses: novel mechanisms of Alzheimer's
neurodegeneration and new therapeutic targets? Neurobiol Aging. 2000;21:441-5;
discussion 451-3.
[4] Banati RB, Gehrmann J, Schubert P, Kreutzberg GW. Cytotoxicity of microglia. Glia.
1993;7:111-118.
[5] Boje KM, Arora PK. Microglial-produced nitric oxide and reactive nitrogen oxides
mediate neuronal cell death. Brain Res. 1992;587:250-256.
[6] Chao CC, Hu S, Molitor TW, Shaskan EG, Peterson PK. Activated microglia mediate
neuronal cell injury via a nitric oxide mechanism. J Immunol. 1992;149:2736-2741.
[7] Yan Q, Zhang J, Liu H, Babu-Khan S, Vassar R, Biere AL, Citron M, Landreth G.
Anti-inflammatory drug therapy alters beta-amyloid processing and deposition in an
animal model of Alzheimer's disease. J Neurosci. 2003;23:7504-7509.
[8] Hobbs AJ, Higgs A, Moncada S. Inhibition of nitric oxide synthase as a potential
therapeutic target. Annu Rev Pharmacol Toxicol. 1999;39:191-220.
[9] Kim YS, Joh TH. Microglia, major player in the brain inflammation: their roles in the
pathogenesis of Parkinson's disease. Exp Mol Med. 2006;38:333-347.
[10] Olson JK, Miller SD. Microglia initiate central nervous system innate and adaptive
immune responses through multiple TLRs. J Immunol. 2004;173:3916-3924.
[11] Dheen ST, Kaur C, Ling EA. Microglial activation and its implications in the brain
diseases. Curr Med Chem. 2007;14:1189-1197.
[12] Kielian T. Toll-like receptors in central nervous system glial inflammation and
homeostasis. J Neurosci Res. 2006;83:711-730.
[13] Sung S, Yang H, Uryu K, Lee EB, Zhao L, Shineman D, Trojanowski JQ, Lee VM,
Pratico D. Modulation of nuclear factor-kappa B activity by indomethacin influences A
beta levels but not A beta precursor protein metabolism in a model of Alzheimer's disease.
Am J Pathol. 2004;165:2197-2206.
[14] Ebert S, Gerber J, Bader S, Muhlhauser F, Brechtel K, Mitchell TJ, Nau R.
Dose-dependent activation of microglial cells by Toll-like receptor agonists alone and in
combination. J Neuroimmunol. 2005;159:87-96.
[15] Qin L, Li G, Qian X, Liu Y, Wu X, Liu B, Hong J, Block ML. Interactive role of the
toll-like receptor 4 and reactive oxygen species in LPS-induced microglia activation. Glia.
2005;52:78-84.
[16] Yeo SJ, Yoon JG, Yi AK. Myeloid differentiation factor 88-dependent
post-transcriptional regulation of cyclooxygenase-2 expression by CpG DNA: tumor
necrosis factor-alpha receptor-associated factor 6, a diverging point in the Toll-like
receptor 9 signaling. J Biol Chem. 2003;278:40590-40600.
[17] Candelario-Jalil E, Yang Y, Rosenberg GA. Diverse roles of matrix metalloproteinases
and tissue inhibitors of metalloproteinases in neuroinflammation and cerebral ischemia.
Neuroscience. 2009;158:983-994.
[18] Laflamme N, Echchannaoui H, Landmann R, Rivest S. Cooperation between toll-like
receptor 2 and 4 in the brain of mice challenged with cell wall components derived from
gram-negative and gram-positive bacteria. Eur J Immunol. 2003;33:1127-1138.
282
The anti-inflammatory effect of Nardostachys chinensis on LPS or LTA
stimulated BV2 microglial cells
Ah jeong Parkb, Sun Young Parka, Meiling Jinb, Hye Won Eomb, Young Hun Kima and
Sang-Joon Leeb*
a Bio-IT Fusion Technology Research Institute
b Department of Microbiology, Pusan National University, Busan, 609-735 Korea
E-mail: sangjoon@pusan.ac.kr; sundeng99@pusan.ac.kr
Abstract
Excessive activation of microglial cells causes a chronic inflammatory environment in the
CNS and influences neuro-inflammatory diseases including Alzheimer's disease and
Parkinson's disease. Induction of HO-1 through the Nrf-2/ARE pathway is known for its
anti-inflammatory effect, which leads to neuro-protection. The dried root or rhizome of
Nardostachys chinensis has been used as an anti-malarial, anti-nociceptive, and neurotrophic
agent in several Asian countries. In this study, the anti-inflammatory effect of the
Nardostachys chinensis ethyl acetate extract (EN) is described in connection with the
up-regulation of heme oxygenase-1 (HO-1) in LPS- or LTA-stimulated BV2 microglial cells.
EN suppressed the production of pro-inflammatory cytokines by activated BV2 cells. This
effect appeared to depend on HO-1 induction, since the cytokine levels were reversed by
treatment with the HO-1 inhibitor SnPP. EN increased HO-1 induction at the transcriptional
and translational levels through the Nrf-2/ARE signaling pathway. EN also suppressed the
activation of MAPK, NF-κB and STAT1/3, which intensify the pro-inflammatory responses.
These results show that EN has an anti-inflammatory effect through HO-1 expression in BV2
microglial cells stimulated with either gram-negative or gram-positive cell wall components.
This further leads to a neuro-protective effect in HT22 hippocampal neuronal cells, which are
damaged by the pro-inflammatory factors released from stimulator-activated BV2 cells.
Key words: Nardostachys chinensis, lipopolysaccharide, lipoteichoic acid, inflammation,
heme oxygenase (HO)-1
1. Introduction
Microglial cells are the professional macrophages of the central nervous system. They play a
crucial role in immunological defense against virulent factors in the brain (1). Under
pathological conditions, however, microglial cells are continuously over-activated and
become neurotoxic, releasing numerous cytotoxic agents such as nitric oxide, superoxide,
free radicals, and pro-inflammatory mediators, which results in chronic inflammation (2).
Upon stimulation of microglial cells, intracellular cascades such as nuclear factor-kappa B
(NF-κB), mitogen-activated protein kinases (MAPKs) and the Janus kinase (JAK)-signal
transducer and activator of transcription (STAT) pathway become activated. This increases
pro-inflammatory molecules including TNF-α, IL-1β, IL-6 and COX-2 (3). Current research
emphasizes the neuro-inflammation triggered by microglial over-activation, which exerts a
potent effect on the pathogenesis of several neurodegenerative diseases (NDs) such as
Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis and Huntington's
disease (4). In this regard, the transcription factor NF-E2-related factor-2 (Nrf-2)/antioxidant
response element (ARE) signaling pathway has been suggested as a central modulator in
neuroprotection by inducing HO-1, which has been intensively studied in the brain for its
potential neuroprotective and anti-inflammatory effects (5). This enzyme catalyzes the
rate-limiting step in heme degradation, generating ferrous iron, biliverdin (subsequently
converted into ferritin and bilirubin), and carbon monoxide, which are known to have
anti-inflammatory and anti-oxidant properties (6). The dried root and rhizome of
Nardostachys chinensis Batal or its relative Nardostachys jatamansi DC, named
''Gansongxiang'', have been used in oriental medicine as sedatives, analgesics and treatments
for hyperlipidemia (7) in several Asian countries. Recently, it has been reported that N.
jatamansi has anti-stress and anti-oxidant activity implicated in various NDs, mainly in vivo
(8). Meanwhile, the anti-oxidant and anti-inflammatory properties of N. chinensis related to
NDs are not well established compared to those of N. jatamansi. In this study, we suggest the
anti-inflammatory effect of N. chinensis related to neuro-protection, and its partial molecular
mechanisms, in microglial cells activated with gram-negative or gram-positive bacterial cell
wall components.
2. Results
Fig.1. Schematic diagram of the extraction process of Nardostachys chinensis
Fig.2. The inhibitory effect of EN on s.LPS- or LTA-induced NO production in BV2 cells
NO production was measured in the collected culture supernatant by the Griess method
(A, B). Cell viability of BV2 microglial cells under various doses of EN and stimulator
treatment was detected by MTT assay (C, D).
Fig.3. Suppression of s.LPS- or LTA-mediated production of pro-inflammatory cytokines
and mediators by EN in BV2 cells
The cells were incubated with increasing doses of EN for 1 h in the presence of SnPP
(40 μM), followed by s.LPS or LTA treatment for 24 h. The secreted amounts of the
pro-inflammatory cytokines TNF-α (A, D), IL-1β (B, E) and IL-6 (C, F) were measured in
the cell culture supernatant by ELISA. The pro-inflammatory mediators iNOS and COX-2,
and HO-1, were detected in the same cultures by western blot assay (H, I).
Fig.4. Induction of HO-1 and the transcription factor Nrf2 by EN in BV2 cells
(A) The cells were treated with EN for 6 h. Relative HO-1 mRNA expression (2^-ΔCt)
was measured by real-time qPCR and calculated by subtracting the Ct value of GAPDH
from the Ct value of HO-1. (B) Total cellular extracts were harvested and examined by
western blot assay. (C) The cells were transfected with the ARE-luciferase construct and
treated with EN.
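As a small illustration of the 2^-ΔCt calculation used for Fig. 4(A) (the standard relative-quantification formula, not the authors' code; the Ct values below are hypothetical):

```python
def relative_expression(ct_target, ct_reference):
    """Relative mRNA level by the 2^-dCt method, with
    dCt = Ct(target) - Ct(reference gene)."""
    return 2.0 ** -(ct_target - ct_reference)

# e.g. hypothetical Ct values: HO-1 at 26 cycles, GAPDH at 18 cycles
print(relative_expression(26.0, 18.0))   # 2^-8 ~ 0.0039
```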
Fig.5. The inhibitory effect of EN on s.LPS- or LTA-mediated MAPK phosphorylation in
BV2 cells
The cells were incubated with s.LPS (A) or LTA (C) for the indicated times, and the
optimal induction times of ERK, p-ERK, JNK, p-JNK, P38 and p-P38 were determined.
The cells were then treated with EN for 1 h and with s.LPS for 0.5 h (B) or LTA for 1 h
(D). The cell extracts were collected and subjected to western blot assay.
Fig.6. Effect of EN on the reduction of s.LPS- or LTA-mediated NF-κB nuclear
translocation and transcriptional activity in BV2 cells
The cells were treated with s.LPS (A) or LTA (C) for the indicated lengths of time, or
pre-treated with various doses of EN followed by s.LPS (B) or LTA (D) stimulation for
0.5 h and 2 h, respectively. The promoter activity of NF-κB was measured by
dual-luciferase assay: cells were co-transfected with the κB-luc reporter and the control
Renilla luciferase plasmid pRL-CMV for 24 h, then treated with increasing doses of EN
for 1 h and exposed to s.LPS (E) or LTA (F) for 24 h.
Fig.7. Effect of EN on s.LPS- or LTA-induced STAT1/3 activation in BV2 microglial cells
The cells were stimulated for the indicated times with s.LPS (A) or LTA (C) to identify
the maximum activation times of STAT1, phosphorylated STAT1, STAT3 and
phosphorylated STAT3. The cells were then incubated with various doses of EN and
treated with s.LPS (B) or LTA (D) for 4 h. The nuclear extracts of these cells were
prepared and examined by western blot assay.
Fig.8. Suppressive effect of EN on neuronal HT22 cell death caused by s.LPS- and
LTA-stimulated BV2 microglial cell activation
BV2 cells were incubated with the indicated doses of EN for 1 h and treated with s.LPS or
LTA. After 24 h the cell media were harvested and centrifuged for 5 min at 2000 rpm, and
the supernatants were collected. HT22 cells, seeded in advance, were exposed to the
prepared media of BV2 cells stimulated with s.LPS (A) or LTA (B). The cell viability of
HT22 was measured by MTT assay. Each bar represents the mean ± SE from 3 independent
experiments in each group.
3. References
[1] Schlachetzki, J.C.; Hull, M., Microglial activation in Alzheimer's disease, Curr.
Alzheimer Res., 2009, 6, 6, 554-563.
[2] Moss, D.W.; Bates, T.E., Activation of murine microglial cell lines by lipopolysaccharide
and interferon-gamma causes NO-mediated decreases in mitochondrial and cellular
function, Eur. J. Neurosci., 2001, 13, 3, 529-538.
[3] Karin, M.; Lin, A., NF-kappaB at the crossroads of life and death, Nat. Immunol., 2002,
3, 3, 221-227.
[4] Esiri, M.M., The interplay between inflammation and neurodegeneration in CNS disease,
J. Neuroimmunol., 2007, 184, 1-2, 4-16.
[5] Vijayan, V.; Mueller, S.; Baumgart-Vogt, E.; Immenschuh, S., Heme oxygenase-1 as a
therapeutic target in inflammatory disorders of the gastrointestinal tract, World J.
Gastroenterol., 2010, 16, 25, 3112-3119.
[6] Tenhunen, R.; Marver, H.S.; Schmid, R., The enzymatic conversion of heme to bilirubin
by microsomal heme oxygenase, Proc. Natl. Acad. Sci. U.S.A., 1968, 61, 2, 748-755.
[7] Hwang, J.S.; Lee, S.A.; Hong, S.S.; Han, X.H.; Lee, C.; Lee, D.; Lee, C.K.; Hong, J.T.;
Kim, Y.; Lee, M.K.; Hwang, B.Y., Inhibitory constituents of Nardostachys chinensis on
nitric oxide production in RAW 264.7 macrophages, Bioorg. Med. Chem. Lett., 2012, 22,
1, 706-708.
[8] Subashini, R.; Yogeeta, S.; Gnanapragasam, A.; Devaki, T., Protective effect of
Nardostachys jatamansi on oxidative injury and cellular abnormalities during
doxorubicin-induced cardiac damage in rats, J. Pharm. Pharmacol., 2006, 58, 2, 257-262.
302
Synthesis and characterization of ZnS nanopowder prepared by
microwave-assisted heating
Wei Huanga, Min-Hung Leea, San Chanb, Yueh-Chien Leeb,*, Ming-Kwen Tsaib,
Sheng-Yao Huc, Jyh-Wei Leed
a Institute of Electro-Optical Science and Technology, National Taiwan Normal University,
Taipei, Taiwan
b Department of Electronic Engineering, Tungnan University, New Taipei City, Taiwan
c Department of Electrical Engineering, Tungfang Design University, Kaohsiung, Taiwan
d Department of Materials Engineering, Ming Chi University of Technology, New Taipei
City, Taiwan
*Corresponding author: Yueh-Chien Lee
E-mail address: jacklee@mail.tnu.edu.tw
Abstract
In this study, we present zinc sulfide (ZnS) nanopowder (NP) synthesized via
microwave-assisted heating of mixed thioacetamide (CH3CSNH2, TAA) and zinc sulfate
(ZnSO4) precursors in deionized (DI) water. The variations in the crystallinity and
morphology of ZnS synthesized with different molar ratios of TAA to ZnSO4 are
investigated by X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy,
and scanning electron microscopy (SEM). With increasing molar ratio of TAA to ZnSO4,
the XRD results indicate that the synthesized ZnS NP tends toward better crystallization,
while the FTIR spectra show an increase in the amount of hydroxyl-group and C=O
vibration modes. SEM images confirm that the higher TAA concentration is responsible for
increasing the diameter of the spherical particles, which consist of agglomerated chains.
Keywords: Zinc sulfide; Microwave-assisted synthesis; X-ray diffraction; Scanning electron
microscopy
1. Introduction
Recently, nanocrystalline II-VI semiconductor materials have been studied extensively
because they exhibit unique electrical and optical properties compared to the bulk material
[1-3]. Among the II-VI compounds, ZnS has attracted considerable attention due to its
applications in flat panel displays, electroluminescence devices, photonic crystal devices,
sensors, lasers and photocatalysis [1-3]. To date, several methods for synthesizing ZnS
nanostructures have been reported, such as the sol-gel technique [4], the solvothermal
method [5], thermal evaporation [6], and microwave irradiation [7-8].
Among the different synthesis methods, microwave-assisted synthesis has advantages over
other approaches such as low cost, rapid heating, thermal uniformity and energy efficiency
[7-8]. In microwave-assisted synthesis, the precursor solution is irradiated with a microwave
source, and the efficient energy transfer through either resonance or relaxation results in a
rather rapid heating process. Furthermore, microwave heating produces homogeneous
heating of the precursor solution in a rather short time, which helps to achieve a uniform
particle size distribution. Consequently, the microwave hydrothermal process is kinetically
more efficient than the conventional hydrothermal process for preparing various
nanostructures [7-8].
In this work, we synthesized ZnS NP by controlled microwave-assisted heating of TAA and
ZnSO4 solutions with DI water as the solvent. The structural properties and morphology of
the ZnS synthesized with different molar ratios of TAA to ZnSO4 are characterized by XRD,
FTIR, and SEM measurements.
2. Experimental
The ZnS NP were prepared by a microwave-assisted heating process. The precursor solution
was prepared by stirring a mixture with different molar ratios (1:1, 2:1, 3:1, 4:1, and 5:1) of
CH3CSNH2 to ZnSO4 in DI water at room temperature for 30 minutes to obtain a
well-dissolved solution. The solution was then placed in a microwave oven and heated from
room temperature up to 95 °C with magnetic stirring for one hour. The reactions occurring
during microwave irradiation, which lead to the formation of ZnS nanoparticles, can be
described as [9]:

CH3CSNH2 + H2O → CH3CONH2 + H2S    (1)
S2− + Zn2+ → ZnS    (2)

Equation (1) describes the irreversible reaction releasing H2S homogeneously in the
solution. Sulfide (S2−) anions from the H2S then react with Zn2+ cations to yield ZnS [9].
Afterwards, the semi-clear solution was cooled to room temperature, and the ZnS NP were
rinsed with an ethanol solution and dried at 95 °C in the microwave oven for 5 minutes.
The synthesized ZnS NP were characterized by XRD, FTIR, and SEM measurements. An
XRD spectrometer (Shimadzu XRD-6000) with the Cu Kα line of 1.5405 Å was used to
study the crystal phases in the synthesized samples. The variations in the vibration modes of
the ZnS NP synthesized as a function of molar ratio were measured by FTIR spectroscopy
(PerkinElmer, Spectrum One). The SEM images were taken on a JEOL JSM-7001F to
observe the morphology.
3. Results and discussion
Figure 1 shows the XRD patterns of the synthesized ZnS samples. All samples exhibit
three significant diffraction peaks, which correspond to the (111), (220), and (311) lattice
planes according to JCPDS card No. 05-0566. The results indicate that all synthesized
samples have the zinc blende structure [7,10]. Additionally, three weak peaks at about 33.1°,
69.5°, and 76.8°, assigned to the (200), (400), and (331) lattice planes, can be observed in the
samples synthesized with higher molar ratios. Furthermore, it is noted that the full width at
half maximum (FWHM) of the pronounced peaks narrows with increasing precursor molar
ratio, which implies an increase in the particle size. The average particle size of the
synthesized ZnS NP can be estimated by the Scherrer relation [8]:

D = 0.9λ / (B cos θ)    (3)

where D, λ, θ, and B are, respectively, the crystal size, the X-ray wavelength, the Bragg
diffraction angle, and the FWHM in radians. The average particle sizes of the ZnS NP
synthesized with controlled molar ratios of 1:1, 2:1, 3:1, 4:1, and 5:1 are estimated to be
about 5.79, 6.02, 6.62, 7.94 and 8.34 nm, respectively.
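As a worked illustration of Eq. (3) (the FWHM value below is hypothetical, chosen only to reproduce a size in the reported range; the wavelength is the Cu Kα line quoted in Section 2):

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength=1.5405e-10, K=0.9):
    """Crystal size D = K * lambda / (B * cos(theta)), B in radians."""
    theta = np.radians(two_theta_deg / 2.0)   # Bragg angle from 2-theta
    B = np.radians(fwhm_deg)                  # FWHM converted to radians
    return K * wavelength / (B * np.cos(theta))

# e.g. the zinc-blende (111) reflection near 2-theta = 28.5 deg with an
# assumed FWHM of 1.4 deg gives D of about 5.8 nm, close to the 1:1 sample.
print(scherrer_size(28.5, 1.4) * 1e9)   # ~5.85 (nm)
```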
Figure 1 XRD patterns of ZnS NP synthesized with different molar ratios of precursor: (a) 1:1,
(b) 2:1, (c) 3:1, (d) 4:1, and (e) 5:1.
Figure 2 displays the FTIR spectra, which were measured to investigate the variations of the
vibration modes in the ZnS NP synthesized with different molar ratios. As shown in Fig.
2(a), the FTIR spectrum of the ZnS NP synthesized with a 1:1 molar ratio exhibits several
typical vibration peaks [10-11]. The characteristic Zn-S vibration peaks are observed at 620
and 1120 cm-1, and their intensities are enhanced as a function of precursor molar ratio. The
bands at 1500−1650 cm-1 are attributed to C=O stretching modes arising from the absorption
of atmospheric CO2 on the surface of the nanoparticles. A broad absorption peak in the
range of 3000−3600 cm-1 is assigned to the OH stretching mode of the hydroxyl group,
which indicates the presence of water absorbed on the surface of the nanocrystals. However,
with increasing precursor molar ratio, the OH and C=O stretching modes become much
stronger than the Zn-S vibration, resulting in a much broader absorption range in the FTIR
spectrum. This phenomenon could be due to the increased amount of hydroxyl group and
C=O produced from the TAA decomposition.
Figure 2 FTIR spectra of ZnS NP synthesized with different molar ratios of precursor: (a) 1:1,
(b) 2:1, (c) 3:1, (d) 4:1, and (e) 5:1.
The morphology of the ZnS NP synthesized with different molar ratios of precursor is
revealed by the SEM images presented in Fig. 3; the magnification of the SEM images is
fixed at 20,000× for comparison. As shown in Fig. 3(a)−(e), a clear trend of increasing
average ZnS NP size with increasing molar ratio can be observed, which is consistent with
the estimation from the XRD results. However, the particle size estimated from the SEM
images is much larger than the crystal size calculated by the Scherrer equation. This
discrepancy may be understood by noting that the SEM images give the size of ZnS NP that
result from the agglomeration of many nanoparticles. Therefore, the size of the ZnS NP
shown in the SEM images is larger than the average particle size calculated from the XRD
spectra. It is also observed that needle-like structures cover the surface of the spherical
particles and become more evident with increasing precursor molar ratio. M. K. Mekki
Berrada et al. have attributed such needle-like structures to agglomerated chains of about
twenty crystallites [9]. These chains then agglomerate and form spherical porous particles of
1−5 μm, and the diameter of the spherical particles increases with increasing TAA
concentration, which is in agreement with the presented SEM images.
Figure 3 SEM images of ZnS NP synthesized with different molar ratios of precursor: (a) 1:1,
(b) 2:1, (c) 3:1, (d) 4:1, and (e) 5:1.
4. Conclusion
In summary, we have successfully synthesized ZnS NP by a microwave-assisted process
using TAA and ZnSO4 solution with DI water as the solvent. The XRD results show an
increase in crystallinity and particle size as a function of the molar ratio of TAA to ZnSO4.
However, the absorption peaks of the hydroxyl group and C=O vibration modes broaden the
FTIR spectrum with increasing precursor molar ratio, which is related to the greater extent of
TAA decomposition. SEM images show that the diameter of the ZnS spherical particles
consisting of agglomerated chains becomes larger with increasing precursor molar ratio,
which is consistent with the XRD results.
5. Acknowledgement
The author Y.C. Lee would like to acknowledge the support of the National Science Council
under Project No. NSC 99-2112-M-236-001-MY3.
6. References
[1] M. Lin, T. Sudhiranjan, C. Boothroyd, K. P. Loh, Influence of Au catalyst on the growth
of ZnS nanowires, Chemical Physics Letters, Vol. 400, 2004, pp. 175-178.
[2] H. Zhang, L. Qi, Low-temperature, template-free synthesis of wurtzite ZnS
nanostructures with hierarchical architectures, Nanotechnology, Vol. 17, 2006, pp.
3984-3988.
[3] X. Fang, L. Wu, L. Hu, ZnS nanostructure arrays: a developing material star, Advanced
Materials, Vol. 23, 2011, pp. 585-598.
[4] N. I. Kovtyukhova, E. V. Buzaneva, C. C. Waraksa, and T. E. Mallouk, Ultrathin
Nanoparticle ZnS and ZnS:Mn Films: Surface Sol-Gel Synthesis, Morphology,
Photophysical Properties, Materials Science and Engineering: B, Vol. 69-70, 2000, pp.
411-417.
[5] Q. Zhao, L. Hou, R. Huang, Synthesis of ZnS nanorods by a surfactant-assisted soft
chemistry method, Inorganic Chemistry Communications, Vol. 6, 2011, pp. 971-973.
[6] Y. Wang, L. Zhang, C. Liang, G. Wang, X. Peng, Catalytic growth and
photoluminescence properties of semiconductor single-crystal ZnS nanowires, Chemical
Physics Letters, Vol. 357, 2002, pp. 314-318.
[7] Y. Zhao, J. M. Hong, J. J. Zhu, Microwave-assisted self-assembled ZnS nanoballs,
Journal of Crystal Growth, Vol. 270, 2004, pp. 438-445.
[8] Y. C. Lee, C. S. Yang, H. J. Huang, S. Y. Hu, J. W. Lee, C. F. Cheng, C. C. Huang, M.
K. Tsai, H. C. Kuang, Structural and optical properties of ZnO nanopowder prepared by
microwave-assisted synthesis, Journal of Luminescence, Vol. 130, 2010, pp. 1756-1759.
[9] M. K. Mekki Berrada, F. Gruy, M. Cournil, Synthesis of zinc sulfide multi-scale
agglomerates by homogeneous precipitation–parametric study and mechanism, Journal
of Crystal Growth, Vol. 311, 2009, pp. 2459-2465.
[10] M. Kuppayee, G.K. Vanathi Nachiyar, V. Ramasamy, Synthesis and characterization of
Cu2+ doped ZnS nanoparticles using TOPO and SHMP as capping agents, Applied
Surface Science, Vol. 257, 2011, pp. 6779-6786.
[11] S. Ummartyotin, N. Bunnak, J. Juntaro, M. Sain, H. Manuspiya, Synthesis and
luminescence properties of ZnS and metal (Mn, Cu)-doped-ZnS ceramic powder, Solid
State Sciences, Vol. 14, 2012, pp. 299-304.
417
Bridging the gap between gas permeation properties at the transient and
steady states
Kean Wang
Department of Chemical Engineering
The Petroleum Institute
Abu Dhabi
United Arab Emirates
E-mail: kwang@pi.ac.ae
Abstract
Two phenomena are frequently observed in gas permeation processes in microporous
media: (1) the pressure dependence of both the transient-state permeation property (time lag)
and the steady-state permeation property (flux or permeability) deviates from the classical
Darken relation, and the degree of deviation is not necessarily the same for the transient- and
steady-state properties; and (2) the apparent diffusion coefficients derived from the transient
state differ greatly (by up to orders of magnitude) from those derived from the steady state.
These diffusion anomalies vary from one system to another and have been found to depend
on the microstructure of the porous media.
The fundamental reason for these anomalies is attributed to the structural heterogeneity of the
microporous network. Blind pores (dead-end pores) play a significant role in the pressure
dependence of the time lag (the transient-state property), while pore irregularities and pore
connectivity play a more dominant role in the steady-state permeation fluxes. It is found that,
by considering these factors, the gap between the transient- and steady-state permeation
properties can be significantly narrowed.
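A minimal sketch of the classical Daynes-Barrer analysis that underlies this comparison (standard textbook relations, not taken from the cited papers): the transient time lag θ of a membrane of thickness L gives an apparent diffusivity D = L²/(6θ), while the steady-state flux J_ss under a pressure difference Δp gives the permeability P = J_ss·L/Δp. Comparing the transient D with a steady-state diffusivity inferred from P exposes the gap discussed above.

```python
def diffusivity_from_time_lag(L, theta):
    """Apparent diffusivity from the permeation time lag: D = L^2 / (6*theta)."""
    return L * L / (6.0 * theta)

def permeability_from_flux(J_ss, L, dp):
    """Steady-state permeability from flux, membrane thickness and pressure drop."""
    return J_ss * L / dp

# e.g. a 30-um membrane with a 120 s time lag (hypothetical values):
D_transient = diffusivity_from_time_lag(30e-6, 120.0)   # ~1.25e-12 m^2/s
```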
K. Wang, Leslie Loo, K. Haraya, "CO2 permeation in carbon molecular sieve membrane with
different degree of carbonization", Ind. Eng. Chem. Res., 46, 1402, 2007.
K. Wang, H. Suda, and K. Haraya, "Characterization of CO2 Permeation in a CMSM Derived
from Polyimide", Separation & Purification Tech., 31, 61, 2003.
Civil Engineering
14:30-16:15, December 16, 2012 (Meeting Room 3)
292: Analysis on Service Quality of Internet Traffic Information
Feng Li
Korea Institute of Construction Technology
Weoneui Kang
Korea Institute of Construction Technology
Bumjin Park
Korea Institute of Construction Technology
Hyokyoung Eo
Korea Institute of Construction Technology
319: Performance Evaluation of Penetration Reinforcing Agent for Power Plants
Facilities
Ki Beom Kim
Korea Institute of Construction Technology
Jong Suk Lee
Korea Institute of Construction Technology
Myong Suk Cho
Hydro & Nuclear Power CO.LTD Central Research Institute
Do Gyeum Kim
Korea Institute of Construction Technology
330: A Study on the Improvement for Measuring Accuracy about the High Speed
Weigh-In-Motion
Hyo Kyoung Eo
Korea Institute of Construction Technology
Bum Jin Park
Korea Institute of Construction Technology
Weon Eui Kang
Korea Institute of Construction Technology
Feng Li
Korea Institute of Construction Technology
331: A Study on the Method for Measuring the Field Density
Weoneui Kang
Korea Institute of Construction Technology
Bumjin Park
Korea Institute of Construction Technology
Feng Li
Korea Institute of Construction Technology
Hyokyoung Eo
Korea Institute of Construction Technology
292
The Analysis of Service Quality of Internet Traffic Information
Li Fenga,*, Kang, Weoneuib, Park, Bumjinc, Eo, Hyokyoungd
a,* Korea Institute of Construction Technology, 283, Goyangdae-ro, Ilsanseo-gu, Goyang,
Republic of Korea. E-mail: yustbong@kict.re.kr
b Korea Institute of Construction Technology, 283, Goyangdae-ro, Ilsanseo-gu, Goyang,
Republic of Korea. E-mail: yikang@kict.re.kr
c Korea Institute of Construction Technology, 283, Goyangdae-ro, Ilsanseo-gu, Goyang,
Republic of Korea. E-mail: park_bumjin@kict.re.kr
d Korea Institute of Construction Technology, 283, Goyangdae-ro, Ilsanseo-gu, Goyang,
Republic of Korea. E-mail: hke8408@kict.re.kr
Abstract
Recently, many kinds of media have come to provide traffic information, such as VMS,
radio, navigation systems and smartphone applications. A survey shows that most users
prefer the internet traffic information service. Therefore, it is necessary to evaluate the
quality of the internet traffic information service for users. The purpose of this study is to
analyze the quality of the internet traffic information service. In this study, the quality of the
internet traffic information service is quantified by the SERVQUAL (SERvice QUALity)
technique in terms of users' expectations and satisfaction. Based on the results of this study,
a methodology is suggested for evaluating the quality of internet traffic information services.
Keywords: internet traffic information; service quality; SERVQUAL
1. Introduction
In recent years, the frequency of utilization of traffic information that provides route and
traffic status information has increased. In particular, investigation results showed that users
prefer internet portal sites to any other traffic information service media due to their great
convenience [1]. However, since traffic information services through web portals collect and
provide information on their own in the private sector, a verification system regarding
reliability is still lacking compared to services provided by the public sector. Therefore, it is
necessary to investigate user satisfaction levels with the traffic information services provided
by portal sites.
In this study, a questionnaire survey was carried out to identify the utilization status and
satisfaction levels of users of the traffic information services provided by portal sites, and a
methodology to assess the quality of traffic information services through internet portal sites
by applying the SERVQUAL technique was proposed.
2. Literature Review
2.1 Research Trends
Relevant previous studies were reviewed, focusing on the major content and methodology of
this study. Regarding the research content, prior studies measuring service quality by
applying transportation service concepts and the SERVQUAL technique were reviewed.
Kim Jonghak et al. (2010) identified the service quality regarded differently by passengers
when selecting transportation means, and Park Jaegeun (2009) analyzed the causes of a rapid
drop in user preference for traffic information services via Seoul Traffic Broadcasting due to
the generalization of traffic information and the emergence of a variety of media. In addition,
Jang Seokyong et al. (2009) researched service improvements that can increase the service
utilization rate by applying a structural equation model and traffic information service
quality gap models for the revitalization of urban traffic information systems [2,3,4].
Studies assessing service quality using the SERVQUAL technique are as follows. Kim
Sungkuk (2006) conducted research on the service quality improvement of container ports in
Korea using the SERVQUAL technique and IPA, and Hong Mina (2007) measured the
service quality of Asian restaurants in the United States using the SERVQUAL technique in
connection with service quality issues in the foodservice industry [5,6].
In this regard, this study is differentiated from previous studies in that the service quality of
traffic information through portal sites is evaluated as the gap between the perceptions and
expectations of users.
2.2 Method
The SERVQUAL technique applied in this study, developed by PZB (Parasuraman, Zeithaml
and Berry), is a method to assess service quality. It is an analytical technique to measure the
direction and degree of agreement between the expectations and perceptions (experiences) of
customers regarding services, where expectation signifies a prediction about the phenomenon,
whereas perception means recognition of its target in reality. The concept is based on the
expectation-disconfirmation model proposed by Oliver (1980) to conceptualize service
quality and customer satisfaction [7,8].
In the SERVQUAL technique, the greater the satisfaction with the services provided to
customers, the higher the service quality. As a rule, reliability, assurance, tangibles, empathy
and responsiveness are defined as the 5 factors affecting the quality level. The conceptual
diagram of SERVQUAL, which measures the gap between the perceptions and expectations
of customers, is shown in the figure below [9].
Figure 1. Conceptual diagram of SERVQUAL
The case where expectation and perception levels are the same is defined as satisfaction; the
case where expectation is less than perception, as over-servicing; and the case where
expectation is greater than perception, as under-servicing.
And the following equation (the standard SERVQUAL gap score, averaged over the survey
items) is presented as an illustration:

SQ = (1/n) Σ (PSi − ESi),  i = 1, ..., n    (1)

where
SQ = gap of service quality
PS = value of perceived service
ES = value of expected service
n = number of survey items (greater than 1).
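As a small illustration of Eq. (1), using the per-item expectation and perception scores reported in Section 3.2.1 below (real-time traffic, predictive traffic, traffic camera, journalistic incident):

```python
import numpy as np

ES = np.array([5.2, 5.3, 4.5, 4.4])   # expected service scores (7-point scale)
PS = np.array([4.1, 4.5, 4.0, 3.7])   # perceived service scores

gaps = PS - ES          # per-item gap: negative means under-servicing
SQ = gaps.mean()        # overall service-quality gap, Eq. (1)
print(gaps, SQ)         # [-1.1 -0.8 -0.5 -0.7] -0.775
```

Section 3.2.1 reports the magnitudes of these gaps (0.5 to 1.1); the negative sign here simply indicates under-servicing.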
3. Analysis and Results
3.1 Survey Analysis
In this study, a questionnaire survey on utilization status and satisfaction level was conducted
to identify users' recognition of the accuracy of traffic information provided through portal
sites.
Of the 143 questionnaires collected through the survey, 126 (88%) were valid for analysis. Basic analysis showed that 93 respondents (73.8%) were male and 33 (26.2%) were female; by age, 40 (31.7%) were in their 20s, 58 (46.0%) in their 30s, 15 (11.9%) in their 40s, and 13 (10.3%) were 50 or older.
The survey on the frequency of use of traffic information services through portal sites showed that 55 respondents (45.5%) used the services 0 to 1 time per week, 48 (39.7%) 2 to 3 times, 15 (12.4%) 4 to 6 times, and 3 (2.5%) 7 or more times per week.
In addition, a multiple-response survey found that the traffic information services primarily used were predictive traffic information (85 people), real-time traffic information (65 people), public transport information (64 people), and other information (5 people).
3.2 Results and discussion
The quality of internet portal traffic information services was evaluated by means of the SERVQUAL technique, which assesses service quality through the gaps between users' expectations and perceptions. Traffic information services and public transport information services were used as the main item groups, and the service quality of each was evaluated by the gap between expectations and perceptions.
3.2.1 Traffic information service quality assessment
The service quality perceived by users of traffic information services through internet portal sites was evaluated using the SERVQUAL technique. Each service item was assessed on a 7-point scale, and the gap between users' expectations (the expected satisfaction level) and perceptions (the actually perceived satisfaction level) was measured. The results showed that the order of priority of expected satisfaction with the traffic information services was predictive traffic (5.3), real-time traffic (5.2), traffic camera (4.5), and journalistic incident (4.4), while that of perceived satisfaction was predictive traffic (4.5), real-time traffic (4.1), traffic camera (4.0), and journalistic incident (3.7). However, the order of priority of the gap between expected and perceived satisfaction was traffic camera (0.5), journalistic incident (0.7), real-time traffic (0.8), and predictive traffic (1.1), a different ordering from the satisfaction levels themselves.
Figure 2. Tolerance zone of traffic information (expected/perceived satisfaction: real-time traffic 5.2/4.1, predictive traffic 5.3/4.5, traffic camera 4.5/4.0, journalistic incident 4.4/3.7)
These results show that predictive traffic information ranked highest in the satisfaction level expected by users (5.3), while traffic camera had the smallest difference between expectations and perceptions (0.5), which can be interpreted to mean that it best meets user expectations.
3.2.2 Public transport information service quality assessment
Among the traffic information services offered through internet portal sites, user recognition of the services frequently used with public transport such as buses was evaluated. As with the traffic information services, each item was rated on a 7-point scale and the gap between expectations and perceptions was measured.
From these results, the order of priority of expected satisfaction with the public transport information services was found to be predictive public transport (5.3), bus arrival information (5.3), bus/subway route (5.2), bus position information (5.1), and public transport fare (4.6), while that of perceived satisfaction was bus position information (5.0), predictive public transport (4.6), bus arrival information (4.5), bus/subway route (4.3), and public transport fare (4.2). However, the order of priority of the gap between expected and perceived satisfaction was bus/subway route (0.2), public transport fare (0.4), bus position information (0.7), predictive public transport (0.8), and bus arrival information (0.8), a different ordering from the satisfaction levels themselves.
Figure 3. Tolerance zone of public transport information (expected/perceived satisfaction: predictive public transport 5.3/4.6, bus arrival information 5.3/4.5, bus position information 5.1/5.0, bus/subway route 5.2/4.3, public transport fare 4.6/4.2)
These results show that predictive public transport and bus arrival information ranked highest in the satisfaction level expected by users (5.3), while bus/subway route had the smallest difference between expectations and perceptions (0.2), which can be interpreted to mean that it best meets user expectations.
4. Conclusion
In this study, the utilization status and satisfaction levels of users were evaluated through a questionnaire survey targeting portal sites among private-sector traffic information service media, and based on the results, a methodology to assess the quality of traffic information services through portal sites using the SERVQUAL technique was proposed.
The research results showed that, in the traffic information service quality assessment, expectations of predictive traffic information were highest (5.3), but the gap between expectations and perceptions was lowest for traffic camera (0.5). In the public transport information service quality assessment, expectations of predictive public transport and bus arrival information were highest (5.3), but the gap between expectations and perceptions was lowest for bus/subway route (0.2). This indicates that the gap between expectations and perceptions should be considered, in addition to the absolute values of expected and perceived satisfaction, in the quality assessment of traffic information services through internet portal sites.
An emphasis should be placed on the accuracy as well as the diversity of traffic information content on internet portal sites. To retain customers, the accuracy of traffic information needs to be verified before a variety of services is provided.
In the future, improvement plans should be formulated based on the evaluation results, and further research is needed on the public sector's role in improving traffic information service quality, in addition to evaluating the quality of traffic information services.
5. Acknowledgement
This study was carried out under the sponsorship of the Korea Institute of Construction Technology as part of the Internal Basic Project.
6. References
[1] Park Bumjin, Eo Hyokyoung, Improvement of traffic information contents of portal site focused on user's satisfaction, The Korea Contents Association, 2012, Vol. 12, No. 9, pp. 500-511.
[2] Kim Jonghak, Kim Ikki, The evaluation of transportation service quality by SERVQUAL method, The Korea Spatial Planning Review, 2010, Vol. 64, pp. 77-96.
[3] Park Jaegeun, A study on the ways to improve the traffic information services of TBS, Ajou University, 2009.
[4] Jang Seokyong, Jung Hunyoung, Ko Sangseon, UTIS vitalization countermeasures using traffic information use satisfaction rate model, Journal of Korean Society of Civil Engineers, 2009, Vol. 29, No. 2D, pp. 199-207.
[5] Kim Sungkuk, A study on the post occupancy evaluation of liner's on container terminal using SERVQUAL-IPA model, Journal of Industrial Economics and Business, 2006, Vol. 19, No. 5, pp. 1955-1976.
[6] Hong Mina, Measuring service quality of Asian theme casual restaurants using the SERVQUAL model, Journal of Foodservice Management, 2007, Vol. 10, No. 3, pp. 245-273.
[7] Parasuraman A., Berry Leonard, Zeithaml Valarie A., SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality, Journal of Retailing, 1988, Vol. 64, No. 1, pp. 12-40.
[8] Oliver Richard L., A cognitive model of the antecedents and consequences of satisfaction decisions, Journal of Marketing, 1980, Vol. 17, No. 4, pp. 460-485.
[9] van Iwaarden J., van der Wiele T., Ball L., Millen R., Applying SERVQUAL to websites: An exploratory study, International Journal of Quality & Reliability Management, 2003, Vol. 20, No. 8, pp. 919-935.
319
Performance Evaluation of Penetration Reinforcing Agent for Power Plant Facilities
Ki-Beom Kima, Jong-Suk Leea, Myong-Suk Chob, Do-Gyeum Kima*
a Korea Institute of Construction Technology, Structural Engineering Research Division, 1190, Simin-daero, Ilsanseo-Gu, Goyang-Si, Gyeonggi-Do, 411-712, Republic of Korea
E-mail address: kibeom@kict.re.kr
b Korea Hydro & Nuclear Power Co., Ltd., Central Research Institute, 1312-gil 70, Yuseongdae-ro, Yuseong-gu, Daejeon, Republic of Korea
E-mail address: concrete@khnp.co.kr
Abstract
When establishing a plan to address aging deterioration of concrete structures, the key consideration is that concrete, unlike steel, is a porous material with many gel pores, making it vulnerable to salt damage, neutralization and other key factors that penetrate the surface and degrade the durability of the structure. This study therefore aims to create a new application system, based on new surface penetration supplement materials and suited to the characteristics of nuclear and other power plants, to protect concrete structures from deterioration factors and restore the performance of the structure.
Keyword: Plant structures, Concrete durability, Aging deterioration, Surface penetration
supplements
1. Introduction
Most power plant structures, including nuclear reactors, are constructed mainly of concrete and are exposed to harsh environmental conditions, as they are built near the sea. Establishing an effective management method to prevent and counter aging degradation is very important from the perspective of life-cycle management, given that complete replacement of problem components is not possible for such structures. Studies are currently being conducted both domestically and internationally to develop surface barriers that protect the concrete surface and the structure itself, and a few products have been developed and are in use. These products, however, still have many problems that need to be addressed, such as limited penetration, short lifespan, and inability to improve the structure. In the domestic market, all of the products in use are unproven foreign products that do not live up to quality standards when actually applied in the field. Thus, considering the importance and the growing need to extend the life span and durability of concrete power generation facilities as well as major industrial structures, research and development in this field must be conducted at a national level. Imported inorganic mineral-based surface coating methods are widely used in domestic practice; however, in the absence of national-level research, no standard to test these products could be established, and various untested products flooded the domestic market, as if it were a product expo for foreign developers. In Korea, the surface coating standard KS F 4930 [Penetrating water repellency of liquid type for concrete surface application] has at least been enacted, a big step forward.
2. Experiment
2.1 Deterioration Factors
Power plant structures such as nuclear and thermal power plants located near the sea may suffer chloride attack. Chloride damage refers to the condition in which the chloride content exceeds 0.03% of the concrete weight (0.7kg/m3). If 90% of the chloride ingress can be prevented or suppressed, steel corrosion during the life span of the structure can be prevented. For structures already in operation, if 70% or more of the chloride content within the structure can be fixated, the structure can be protected from the risk of steel corrosion.
Thus, the chloride attack prevention goal for the penetration reinforcing agent was set at suppressing 90% or more of the chloride content, and the recovery goal at fixating at least 70% of the chloride content within an operating structure.
Concrete neutralization refers to steel corrosion deterioration due to lowered alkalinity, which occurs when airborne carbon dioxide penetrates the concrete and transforms calcium hydroxide into calcium carbonate.
Thus, the neutralization prevention goal for the penetration reinforcing agent was set at 70%, enough to suppress steel corrosion, and the recovery goal for operating structures was likewise set at 70%.
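For reference, the 0.03% threshold is consistent with the quoted absolute value: 0.03% of a typical concrete unit weight of roughly 2,300 kg/m3 works out to about 0.7 kg of chloride per m3 of concrete.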
2.2 Experiment Method
Test specimens were manufactured in accordance with the following table. Ready-mixed concrete (RMC) was used to create the specimens, and concrete strengths of 24MPa and 35MPa were used for the assessment. The aggregate was smaller than 15mm, the target slump 10cm, and the target air content 4.5±1.5%.
Table 1 Mixing ratios of concrete

Strength of design  W/C(%)  S/a(%)  Unit content (kg/m3)
                                    Water  Cement  Sand  Gravel
fc=24MPa            48      46      178    370     771   891
fc=35MPa            42      46      169    401     754   891
2.3 Deterioration Experiment
To test the chloride attack recovery performance, chloride contents of 0.01, 0.03, 0.1 and 0.2% were mixed into a mortar with a w/c of 54%, and the depth to which the chloride was fixated by the penetration reinforcing agent was assessed. The chloride prevention ability was assessed using chloride immersion tests. The penetration reinforcing agent was applied to the bottom of 100×100×100mm specimens, followed by a coating of epoxy on each side to force penetration in a single direction. The specimens were then immersed in a 3.6% NaCl solution for 28 and 90 days, and the chloride ratio at each depth was measured. Measurements were conducted by collecting 20g samples at 10mm intervals from the concrete surface and extracting the chloride in accordance with the Japan Concrete Institute standards. The chloride was measured with the AG-100, an ion-electrode instrument made by Japanese company K.
Fig. 1 depicts the results of immersing the test specimens coated with the synthetic materials in 3.6% NaCl solution for 91 days. Regardless of the strength of the base specimen, the coated specimens showed much lower chloride penetration, and AcTe-1, AcTe-2 and AcUcf showed the greatest reduction in penetration depth. As in the previous tests, AcTe-1 and AcUcf prevent chloride penetration by coating the surface of the concrete, while AcTe-2 fills gaps and strengthens the bond of the concrete structure, lowering the permeability coefficient. The chloride attack recovery assessment showed fixation effects of around 70~72%, which is attributed to the chemical bonding that occurs during the sol-gel process of the silicate.
Table 2 Chloride penetration of the synthetic materials

                             No       Coating (synthetic materials)
                             Coating  AcTe-1  AcTe-2  TeaTe  GrTe   AcUcf   TeaUcf
Type(I)  fc=24MPa
  Chloride content (%)       0.053    0.0032  0.0013  0.026  0.037  0.0098  0.036
  Prevention ability ratio   1        0.94    0.98    0.51   0.3    0.82    0.32
Type(II) fc=35MPa
  Chloride content (%)       0.041    0.006   0.004   0.017  0.025  0.0068  0.022
  Prevention ability ratio   1        0.85    0.9     0.59   0.39   0.83    0.46
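For reference, the prevention ability ratios in Table 2 are consistent with one minus the ratio of the coated to the uncoated chloride content; a minimal sketch of that relation, checked against the Type(I) row of Table 2:

```python
# Sketch: prevention ability ratio = 1 - coated content / uncoated content,
# checked against the Type(I) fc=24MPa row of Table 2.
uncoated = 0.053  # chloride content (%) with no coating
coated = {"AcTe-1": 0.0032, "AcTe-2": 0.0013, "TeaTe": 0.026,
          "GrTe": 0.037, "AcUcf": 0.0098, "TeaUcf": 0.036}

for name, content in coated.items():
    print(f"{name}: prevention ability ratio = {1 - content / uncoated:.2f}")
# Prints 0.94, 0.98, 0.51, 0.30, 0.82, 0.32, matching Table 2.
```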
Figure 1 Chloride content at concrete depth (91 days)
To test the neutralization recovery performance, cylindrical test specimens were neutralized for 90 days and the recovery performance was then tested after coating with the penetration reinforcing agent. The neutralization suppression assessment was conducted by coating Ф100×100mm specimens with the surface penetration agent and coating the sides with epoxy so that carbon dioxide penetrates in two directions. The specimens were exposed in an accelerated neutralization chamber for 7, 28 and 90 days under conditions of 10% carbon dioxide, 30±3 ℃ and 60±5% humidity. The neutralization recovery performance was assessed at around 25.8%, which is attributed to the chemical bonding that occurs during the sol-gel process of the silicate.
Figure 2 Neutralization Experiment
Figure 3 Neutralization Result
Fig. 2 depicts the color change revealed by phenolphthalein solution after exposure in the accelerated neutralization chamber under conditions of 10% carbon dioxide, 30±3 ℃ and 60±5% humidity for 7, 28 and 90 days. Regardless of the strength of the base specimen, the coated specimens showed much slower neutralization, and AcTe-1, AcTe-2 and AcUcf showed the greatest reduction in neutralization speed. As in the chloride tests, AcTe-1 and AcUcf slow the neutralization process by coating the surface of the concrete, while AcTe-2 fills gaps and strengthens the bond of the concrete structure, decreasing the neutralization speed.
3. Conclusions
The assessment of the physical and chemical properties of the developed penetration reinforcing agent showed that, when applied to cement, it reacts with calcium hydroxide, filling the pores and strengthening the structural integrity of the concrete.
The deterioration suppression assessment showed that, when coated with the penetration reinforcing agent, the specimens were protected against 97% of chloride deterioration and 96% of neutralization compared with an uncoated surface, proving that the penetration reinforcing agent can efficiently suppress concrete deterioration.
The deterioration recovery assessment showed that the penetration reinforcing agent improved chloride attack damage recovery by 72% and neutralization recovery by 25.8%, among other improvements. This is attributed to the sol-gel chemical bonding process of the silicate, which recovers the damaged concrete.
4. Acknowledgement
This work was supported by the Nuclear Research & Development program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2011T100200161).
5. References
[1] ACI Committee 515, "A Guide to the Use of Waterproofing, Dampproofing, Protective, and Decorative Barrier System for Concrete", ACI Manual of Concrete Practice, 1995.
[2] ACI Committee 546, "Concrete Repair Guide", ACI Manual of Concrete Practice, 1997.
[3] J.G. Cabrera et al., "Performance Properties of Concrete Repair Materials", Construction and Building Materials, Vol. 11, No. 5-6, pp. 283-290, 1997.
[4] Kim Sang Young et al., "A Kinetic Study on the Hydrolysis and Condensation of TEOS in the Basic Condition by Sol-Gel Method", J. of KICE, Vol. 32, No. 4, pp. 557-565, 2001.
[5] Buenfeld, N. R. and Zhang, J. Z., "Chloride diffusion through surface-treated mortar specimens", Cement and Concrete Research, Vol. 28, No. 5, 1998.
[6] Lianfang, L. et al., "In situ leaching investigation of pH and nitrite concentration in concrete pore solution", Cement and Concrete Research, Vol. 29, 1999.
[7] Andrade, C. and Alonso, C., "Preliminary testing of … as a curative corrosion inhibitor for steel reinforcement in concrete", Cement and Concrete Research, Vol. 22, 1992.
[8] Nakayama, N., "Inhibitory effects of nitrilotris(methylenephosphonic acid) on cathodic reaction of steel in saturated Ca(OH)2 solutions", Corrosion Science, Vol. 42, 2000.
330
A study on the improvement of measuring accuracy of the high-speed Weigh-In-Motion
Hyo-kyoung Eoa,*, Bum-jin Parkb, Weon-eui Kangc, Li Fengd
Korea Institute of Construction Technology,
2311 Deawha-Dong Ilsanseo-Gu, Goyang-Si, Gyeonggi-Do, Korea
a E-mail address: hke8408@kict.re.kr
b E-mail address: park_bumjin@kict.re.kr
c E-mail address: yikang@kict.re.kr
d E-mail address: yustbong@kict.re.kr
Abstract
As equipment to measure the weight of a vehicle in motion, Weigh-In-Motion (WIM) was introduced for enforcement against illegally overloaded vehicles. WIM systems are classified into high-speed WIM (HSWIM) and low-speed WIM (LSWIM) according to the permissible vehicle speed during weight measurement. LSWIM, with its high accuracy, is used for enforcement against illegally overloaded vehicles, whereas HSWIM is used for prescreening because its accuracy is lower than that of LSWIM. The accuracy of HSWIM installed to operate checkpoints effectively is directly connected to the violation detection rate at checkpoints; therefore, the accuracy of HSWIM measurements should be improved as much as possible. This study devised means to improve HSWIM measuring accuracy using LSWIM measurements: a solution was found through regression analysis of LSWIM and HSWIM measurements, application of the regression results, and a change in the calculation method for gross weight. First, to correct the gross weight measured by HSWIM, LSWIM measurements were set as the dependent variable and HSWIM measurements were included in the regression model. The calculation method for gross weight was then applied to the calibration of HSWIM. After the first calibration of HSWIM, RMSE and the number of samples with an error rate above 10% were reduced by 20.7% and 0.3%, respectively. Furthermore, after the second calibration, they were reduced by 28.7% and 12.8%, respectively.
Keyword: Weigh-In-Motion, HSWIM, Regression Analysis, RMSE, Tolerance
1. Introduction
South Korea achieved a high level of economic growth starting in the late 1960s, and road freight has continued to grow with the economy. However, the steady increase in freight traffic led to more overloaded vehicle operation, which resulted in shortened durability of road and bridge structures, a consequent rise in maintenance and repair costs, increased accident risk, and environmental problems such as noise. Against this background, a crackdown on overloaded vehicle operation using WIM systems has been implemented to improve the efficiency of enforcement work, reduce direct and indirect social costs, and protect the lives and property of citizens from major traffic accidents. In recent years, there have been many efforts to further increase the efficiency of this crackdown, including primary screening of suspected overloaded vehicles using HSWIM systems. In this regard, this study proposes measures to enhance the measurement accuracy of HSWIM.
2. The Characteristics of Weigh In Motion System
As equipment that measures the axle loads of vehicles in operation and reports the gross weight by summing them, the WIM system has been widely used for cracking down on illegally overloaded vehicles. In particular, it has long been used in Korea to control the entry of overloaded vehicles passing the highway offices. As discussed elsewhere [1, 2], the WIM system is classified into HSWIM and LSWIM depending on the driving speed of the instrumented vehicles, and the WIM system currently used in Korean highway offices belongs to the LSWIM class. The types of WIM are distinguished by the driving speed of the instrumented vehicles, but the more fundamental difference lies in the kinds of WIM sensor. According to the papers by Kwon and Oh [3, 4], WIM sensors can be broadly divided into four types. Since each type has different features, a WIM sensor suitable for the purpose and location of the installation must be selected.
Figure 1 Typical types of WIM sensor: bending plate, piezo electric, piezo ceramic and piezo quartz
As shown in the figure above, the various types of WIM sensor have different characteristics. The bending plate type is used as an LSWIM sensor, and the three piezo types are mainly used as HSWIM sensors. The bending plate sensor has the disadvantage that the driving speed of vehicles must be low, but it offers relatively higher measurement accuracy than the piezo type sensors. Therefore, the LSWIM system has been widely used as actual enforcement equipment, while the HSWIM system is used as pre-screening equipment for identifying illegally overloaded vehicles. Since higher measurement accuracy of HSWIM installed for pre-screening translates into a higher detection ratio of illegally overloaded vehicles, the accuracy of the HSWIM system is also an important issue.
3. WIM Data Collection and Analysis
To compare measured values between LSWIM and HSWIM in operation on the road, measurements from LSWIM (bending plate type) and HSWIM (piezo quartz type) for the same vehicles were collected. The collection point was an overload enforcement checkpoint, with the HSWIM system located downstream of the checkpoint. Instrumented vehicles were guided through the checkpoint, and all LSWIM and HSWIM measurements for the same vehicles were collected.
The collected data were first classified by vehicle type, following the vehicle classification guide [5] used for the traffic census. Of the 12 vehicle classes, passenger cars, buses and small vans were excluded. The vehicle classes used in this study are shown in the table below.
Table 1 Vehicle type classification standards for the LSWIM/HSWIM collected data (vehicle images omitted)

Type   Axles  Units
4      2      1
5      3      1
6      4      1
7      5      1
8&9    4      2
10&11  5      2
12     6      2
As the table shows, the number of axles and constituent units differs by vehicle type, so the weight distributed to each axle is not expected to be the same across vehicle types, and vehicles in high-speed operation are expected to differ significantly from vehicles in low-speed operation. Accordingly, this study performed an analysis and estimated regression equations taking the highly accurate (95%) LSWIM measurements as dependent variables and the axle loads of the same vehicles measured by the HSWIM equipment as independent variables.
Figure 2 HSWIM measurements plotted against LSWIM measurement values, by vehicle type: (a) 4, (b) 5, (c) 6, (d) 7, (e) 8&9, (f) 10&11, (g) 12
In the above figure, where the fitted slope coincides with the reference line (y = x), the LSWIM and HSWIM measurement distributions are similar. However, the slopes by vehicle type and axle show no consistent pattern, which indicates that an independent correction equation should be applied for each vehicle type and for each axle when measuring the axle loads of vehicles in operation.
4. HSWIM Measurement Value Calibration
4.1 Calibration method
Based on the previous results, an HSWIM measurement calibration method is proposed. Its core is to utilize the database of LSWIM and HSWIM measurements collected in the field. Calibration is divided into primary and secondary stages: only primary calibration was applied to axle loads, while both primary and secondary calibration were carried out for gross weight.
Axle load calibration – A regression equation that estimates the LSWIM measurement of the same vehicle from the HSWIM axle load measurements recorded in the database is estimated. Primary calibration of an axle load is achieved by feeding the HSWIM field measurement into the regression equation estimated from past data.
Gross weight calibration – Primary calibration of the gross weight is achieved by summing the primarily calibrated HSWIM axle loads. For secondary calibration, the database of LSWIM and HSWIM measurements is used again: a regression equation that estimates the LSWIM gross weight from the calibrated HSWIM axle loads, rather than merely summing them, is estimated, and applying the calibrated HSWIM axle loads to this equation yields the secondary calibration of the gross weight. To evaluate the results, the RMSE (root mean square error) of both the HSWIM calibration values and the non-calibrated HSWIM measurements was computed against the LSWIM measurements. In addition, the number of samples with an error rate above 10% was compared after calculation.
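A minimal sketch of the two-stage calibration described above, assuming paired LSWIM/HSWIM records are available as arrays (the variable names, sample values and the use of ordinary least squares via numpy are assumptions; the paper does not specify the fitting tool):

```python
import numpy as np

# Hypothetical paired measurements for one vehicle type (kg):
# rows = vehicles, columns = axles.
hswim_axles = np.array([[5100., 9800.], [4900., 10400.], [5300., 9600.]])
lswim_axles = np.array([[5000., 10000.], [5050., 10100.], [5200., 9900.]])
lswim_gross = lswim_axles.sum(axis=1)

def fit_line(x, y):
    """Ordinary least squares fit y = a*x + b; returns (a, b)."""
    a, b = np.polyfit(x, y, 1)
    return a, b

# Primary calibration: per-axle regression HSWIM -> LSWIM.
axle_models = [fit_line(hswim_axles[:, j], lswim_axles[:, j])
               for j in range(hswim_axles.shape[1])]
calibrated_axles = np.column_stack(
    [a * hswim_axles[:, j] + b for j, (a, b) in enumerate(axle_models)])
primary_gross = calibrated_axles.sum(axis=1)

# Secondary calibration: regress LSWIM gross weight on the primary result
# instead of merely summing the calibrated axle loads.
a2, b2 = fit_line(primary_gross, lswim_gross)
secondary_gross = a2 * primary_gross + b2
print(secondary_gross)
```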
4.2 Root mean square error (RMSE)
RMSE is a method to evaluate the accuracy of approximate models and is one of the most intuitive and meaningful measures: the closer the RMSE is to zero, the more similar the comparison group is to the reference population. In this study, the reference is the LSWIM measurements, and the non-calibrated HSWIM measurements and the HSWIM calibration values form the comparison groups.
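For reference, a minimal sketch of the RMSE computation against the LSWIM reference (the array names and sample values are assumptions):

```python
import numpy as np

def rmse(reference, estimate):
    """Root mean square error of an estimate against reference values."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return np.sqrt(np.mean((estimate - reference) ** 2))

# Hypothetical gross weights (tonnes) for a handful of vehicles.
lswim = [20.1, 35.4, 27.8]
hswim_raw = [22.0, 33.1, 29.5]
hswim_calibrated = [20.6, 34.8, 28.3]
print(rmse(lswim, hswim_raw), rmse(lswim, hswim_calibrated))
```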
Table 2 RMSE of calibrated gross weight and gross weight measured by HSWIM

Vehicle Type  Non Calibration  Primary Calibration  Secondary Calibration  Reduction rate (%)
4             2.01             2.02                 2.11                   5.02
5             1.88             1.74                 1.70                   -9.35
6             3.47             3.58                 3.40                   -1.80
7             3.69             1.66                 1.72                   -53.34
8&9           2.48             2.25                 1.64                   -34.09
10&11         3.61             2.54                 2.32                   -35.61
12            6.28             3.32                 1.82                   -71.10
Table 3 RMSE of calibrated axle load and axle load measured by HSWIM

Vehicle Type  Non Calibration  Primary Calibration  Reduction rate (%)
4             1.07             1.04                 -2.95
5             0.83             0.70                 -16.42
6             1.03             0.98                 -4.64
7             1.61             0.72                 -55.29
8&9           1.18             0.85                 -28.04
10&11         0.89             0.62                 -29.97
12            1.22             0.75                 -38.62
From the RMSE comparison, the primary calibration of HSWIM axle loads and the secondary calibration of HSWIM gross weight lowered the RMSE by approximately 25 to 28% compared with the non-calibrated HSWIM measurements. The gross weight RMSE of type-4 vehicles increased by 5% after calibration, but the RMSE of all other vehicle types dropped significantly.
Figure 3 RMSE by calibration condition of HSWIM measurement values, by vehicle type (left: gross weight, with no/primary/secondary calibration; right: axle load, with no/primary calibration)
4.3 Number of samples over a 10% error rate
The RMSE of the HSWIM calibration values is a suitable measure for evaluating the similarity between two groups. However, since the HSWIM system serves as actual enforcement equipment, the number of samples outside the acceptable error rate is also very important. In Korea, enforcement equipment is typically allowed an error rate of up to 10%, so this study focused on the number of samples with an error rate above 10%.
Table 4 Number of samples over a 10% error rate: calibrated gross weight vs. gross weight measured by HSWIM

Vehicle Type  Non Calibration  Primary Calibration  Secondary Calibration  Reduction rate (%)
4             2                2                    3                      50.00
5             19               34                   33                     73.68
6             20               24                   25                     25.00
7             138              19                   21                     -84.78
8&9           16               31                   17                     6.25
10&11         62               32                   25                     -59.68
12            10               4                    0                      -100.00
Table 5 Number of samples over a 10% error rate: calibrated axle load vs. axle load measured by HSWIM

Vehicle Type  Non Calibration  Primary Calibration  Reduction rate (%)
4             6                4                    -33.33
5             133              125                  -6.02
6             118              105                  -11.02
7             1601             567                  -64.58
8&9           180              155                  -13.89
10&11         537              238                  -55.68
12            75               54                   -28.00
In the case of axle load calibration, the number of samples with an error rate above 10% decreased for all vehicle types. For gross weight calibration, however, the number of samples decreased significantly for types 7, 10&11 and 12 but increased for all other types. The difference between the axle load and gross weight calibration results is attributed to the impact of dynamic loads caused by high-speed driving [6, 7].
Figure 4 Number of samples over a 10% error rate by calibration condition of HSWIM measurements, by vehicle type (left: gross weight, with no/primary/secondary calibration; right: axle load, with no/primary calibration)
5. Conclusion
Heavy duty vehicles are classified into various types according to their axles and constituent units. The axle load distribution of vehicles in high-speed operation differs greatly from that of stationary or low-speed vehicles, which indicates that individual calibration methods should be prepared and applied to HSWIM measurements by axle and vehicle type. A simple regression equation was estimated using the LSWIM and HSWIM data for the same vehicles recorded in the database, and applying this result to the calibration of HSWIM measurements yielded a reduction in RMSE and in the number of samples with an error rate above 10%. Because the HSWIM system calculates axle load and gross weight only from the instantaneous load of moving vehicles, and the instantaneous dynamic load varies with the number of axles and constituent units as well as the location and form of the cargo, individual calibration methods should be formulated by axle and vehicle type, and further studies should build on these results.
6. References
[1] American Society for Testing and Materials, Standard Specification for Highway Weigh-In-Motion (WIM) Systems with User Requirements and Test Methods, ASTM E1318-02, 2002.
[2] COST323, European Specification on WIM of Road Vehicles, EUCO-COST/323, 1999.
[3] Soonmin Kwon, Youngchan Suh, Development and Application of the High Speed Weigh-in-Motion for Overweight Enforcement, Korean Society of Road Engineers, Vol. 11, No. 4, 2009, pp. 69-78.
[4] Jusam Oh, Development of Truck Axle Load Estimation Model Using Weigh-In-Motion Data, Korea Society of Civil Engineers, Vol. 31, No. 4D, 2011, pp. 511-518.
[5] Guide for Vehicle Classification, MLTM, 2008.
[6] B. Jacob, E.J. O'Brien and Jehaes, Weigh-In-Motion of Road Vehicles: Final Report of the COST323 Action 1993-1998, LCPC, boulevard Lefebvre, Paris, 1998.
[7] Jorgen Christensen, Klaus Peter Glaeser, Terry Shelton, Barry Moore, Loes Aarts, Innovation in Truck Technologies, OECD/ITF, 2010.
331
A study on the method for measuring the field density
Kang, Weoneui a,*, Park, Bumjin b, Li Feng c, Eo, Hyokyoung d
a, b, c, d Korea Institute of Construction Technology, 283, Goyangdae-ro, Ilsanseo-gu, Goyang, Republic of Korea
a E-mail address: yikang@kict.re.kr
b E-mail address: park_bumjin@kict.re.kr
c E-mail address: yustbong@kict.re.kr
d E-mail address: hke8408@kict.re.kr
Abstract
Recently, when analyzing traffic flow, the field density is normally used to accurately evaluate the traffic density and LOS (Level of Service) of a specific section. However, from the viewpoint of the operator responsible for determining the LOS of a road, it is difficult to measure the field density by point detection, so a practical method based on point detection has to be developed. The purpose of this study is to suggest a method for estimating the field density using real-time traffic information from image detectors. The methodology estimates the field density by applying traffic flow theory: the field density is estimated from the number of vehicles within a unit length observed by the image detectors or CCTVs used in ITS. The methodology is applied to the Bongan tunnel on national highway No. 6. The suggested methodology should help operators determine the level of service of a road and make transportation policy decisions.
Keyword: traffic density; field density; traffic flow; Platoon’s movement
1. Introduction
We understand it’s necessary to use the field density in a bid to accurately evaluate the traffic
density and LOS (Level of Service) at the specific section. However, from the viewpoint of
the operator responsible for determining the LOS of the traffic condition, it’s difficult for the
operator to measure the field density using the point detection. So, the practical method such
as the measure using the point detection has to be developed[1]. Existing image detector or
CCTV is intended to collect the traffic information using the image at the certain spot and has
been developed to obtain the traffic and speed at certain spot instead of measuring the real
density. However it’s more important to continuously monitor the certain spot instead of
monitoring the section from the distance.
This study is aimed at obtaining the field density using real-time traffic information through
image detector and thus in this study, traffic information collected through image detector or
CCTV was analyzed to count the number of vehicles within the unit distance so as to measure
the field density, which was verified by applying to the same site.
2. Overview of the Methodology
2.1 Research Trends
Previous studies were reviewed, focusing on key issues and methodologies. Koshi et al. (1983) collected microscopic and macroscopic traffic data using detectors, aerial photos and video devices and analyzed congestion in traffic flow [2].
Fred L. Hall (1986) studied measures utilizing the occupancy ratio to narrow the gap between theory and observation and to ease data gathering, by analyzing the relationship between density and occupancy [3].
Frank M. Croft et al. (1985) verified that acceleration noise could contribute to LOS judgment through analysis of speed, traffic volume and density, the macroscopic indicators, and acceleration noise, a microscopic indicator in traffic flow theory [4].
Joonho Ko et al. (2006) suggested that the traffic quality experienced by drivers is highly variable; individual vehicles' speed and acceleration were evaluated and their relationships with density, speed and acceleration noise were compared and analyzed [5].
As a result of this literature review, traffic density was found to be the most critical factor in analyzing traffic flow.
2.2 Method
Uninterrupted flow refers to the movement of vehicles on a road without stopping. Lighthill et al. (1955) compared it with fluid flow and presented the shockwave theory, and other researchers applied the concept of viscosity, the resistance to fluid flow, to uninterrupted flow, defining it as the element that interrupts the flow and introducing a new continuity equation [6]. Uninterrupted flow can be pictured as follows: if vehicle A is assumed to travel a highway in a state of uninterrupted flow and an observer waits at point B, the observer will surely observe vehicle A; likewise, if vehicle A is taken to be a vehicle group, the observer will surely observe the group.
This assumption can be expressed as in Figure 1: an observer moving along the dotted line will observe the vehicles within some time.
Figure 1. The Platoon's Movement and Observer
Expressed in a time-space diagram, Fig. 1 becomes Fig. 2. The graph at the top left indicates the distance required to measure the real density at a certain time, and the remaining panels indicate the spatial location of the vehicle group moving over time, at Δt, Δt+t and Δt+2t.
A diagonal line in the figure indicates the trace of a vehicle. On the assumption that the traffic state is constant, observing the vehicles within a certain distance for time t at a specific point will eventually capture all the vehicles in the figure. The arrow indicated by the dotted line within the dotted square represents the number of vehicles, and the same holds for multiple dotted squares. In conclusion, 7 vehicle traces in the figure can be observed at point Δt in time t.
Figure 2. The Platoon‘s Movement in the Time-Space Diagrams
Thus, the space-time function of the vehicles in uninterrupted flow can be defined as the only value remaining unchanged over time and space. That is, the number of vehicles represented by the density can be indicated as follows:

N(t, x) = \oint_c (k\,dx - q\,dt)  (1)

Where
t = specific time
x = specific spot
c = line integral path around the specific zone in the time-space diagram (clockwise)
N(t, x) = number of vehicles observed at time t within distance x
k = traffic density, q = traffic flow

Since the vehicle count is conserved, the number of vehicles counted over a distance at an instant equals the number counted at a point over the corresponding time, so equation (1) can be represented as follows:

k = \frac{N}{L}  (2)

where L is the unit distance over which the N vehicles are counted.
When considering the real density by traffic situation, the appropriate distance for measuring the real density also changes with v/c. That is, as v/c increases, the mean number of vehicles and its dispersion within the unit distance also increase, resulting in an increase in the optimal distance for measuring the density.
Assuming the real-traffic density measuring length in the top-left graph of Fig. 2 is 500m, this can be explained as follows. Measurement of the real density during uncongested hours can be made over the 500m proposed in this study, which means the density can be converted from the number of vehicles within a 20m unit distance measured 25 times continuously. For a more accurate conversion to real density, it is then necessary to identify the optimal time interval (2t) depending on vehicle speed.
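A minimal sketch of this count-to-density conversion, assuming counts are taken in 20m units across 2 lanes as in the field test described later (the function and variable names and sample counts are assumptions):

```python
def field_density(counts, unit_length_m=20.0, lanes=2):
    """Convert repeated vehicle counts over a short unit distance into a
    field density in pc/km/lane: average the counts, scale the 20 m unit
    up to 1 km, then divide by the number of lanes the count covers."""
    avg = sum(counts) / len(counts)
    per_km_all_lanes = avg * (1000.0 / unit_length_m)
    return per_km_all_lanes / lanes

# 25 hypothetical samples of pc/20m/2lane, averaging 0.64:
counts = [1.0, 1.2, 0.0, 1.0, 0.0] * 5
print(field_density(counts))  # 16.0 pc/km/lane
```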
The time interval refers to the time taken by a vehicle to travel from the start to the end of the measurement distance, and it can be calculated as follows. First, the maximum speed of the vehicles at FFS (Free Flow Speed) is determined; then the critical speed at LOS E, where the uncongested state changes to the congested state; and finally the speed of the vehicles that defines congestion. Lastly, the time taken by the vehicle group to pass through the measurement distance is determined, which is the optimal time interval.
In this study, the maximum speed at the FFS state was set at the 100km/h speed limit, the critical speed at LOS E at 60km/h, and the speed marking the transition from the uncongested to the congested state at 40km/h (LOS F).
Table 1 summarizes the time taken by each vehicle group to pass through 500m, the optimal distance in the uncongested state, and 1,000m in the congested state; samples of the number of vehicles within a 20m distance are therefore taken for 30 seconds in the uncongested state and 90 seconds in the congested state.
Table 1. Time Interval for Obtaining the Image Capture
(platoon speed in m/s / driving time in sec)

Interval      Length for measuring density  100km/h  90km/h  80km/h  70km/h  60km/h  50km/h  40km/h
Uncongestion  500m                          28/18    25/20   22/23   19/26   17/30   -       -
Congestion    1,000m                        -        -       -       -       -       14/72   11/90
Based on the time intervals suggested above, the methodology of calculating the density from the number of vehicles within the unit distance, using CCTV, will now be verified.
3. Test and Results
To apply the above methodology to a real road, the Bongan tunnel on national highway No. 6 was selected as the study site. The Bongan tunnel is a 4-lane tunnel with a total length of 1,000m and good vertical and horizontal alignment, and 4 CCTVs are installed inside for monitoring. The figure below is a perspective view; the CCTVs are numbered 1, 2, 3 and 4.
Figure 3. Perspective view of Bongan tunnel
The figure indicates the zone monitored by each CCTV; each CCTV keeps monitoring the same zone except during particular events. The interior of the tunnel is monitored, and the 20m distance can be marked on a freeze-frame of the CCTV image, as indicated by the arrow in the figure.
In this study, Sunday, Nov 15, 2009 was selected as the analysis day so that both the congested and uncongested states could be observed, and the video recorded by CCTV #1~#4 was collected. The time windows representing the congested and uncongested states on the 15th were selected after reviewing the video and the traffic information from the detectors: the uncongested video was taken during 10:00:00~10:05:00 and the congested video during 18:00:00~18:05:00, 5 minutes each.
Frames of the vehicles passing each CCTV were captured: during the uncongested state, 25 20m-unit samples were taken at the 500m point over 30 seconds, and during the congested state, 50 20m-unit samples were taken at the 1,000m point.
During the uncongested state, as indicated in Table 2, the real density at CCTV1~CCTV4 was 15.30, 15.00, 16.00 and 14.90 (pc/km/lane), respectively, a similar density at all four cameras. It is not certain that these values represent the density of the entire tunnel, since the whole alignment was not monitored, but it is noteworthy that the densities measured from the 25 samples, which can be collected at any v/c in the uncongested state, were similar. Thus, unless a sudden change in traffic occurs, the real density representing the entire alignment in the uncongested state can be calculated from 25 samples taken within 5 minutes, as above.
Table 2 Uncongestion: No. of pc in the 20m unit at each CCTV (pc/20m/2lane)

No. of   Capture    CCTV1  CCTV2  CCTV3  CCTV4
Samples  Time
1        10:00:00   1.0    0      0      0
2        10:00:01   1.2    1.0    0      0
3        10:00:02   0      1.0    2.0    1.0
4        10:00:03   1.0    1.2    0      0
5        10:00:04   0      0      1.5    2.0
…        …          …      …      …      …
21       10:00:25   1.0    0      0      1.2
22       10:00:26   1.2    1.0    1.0    0
23       10:00:27   0      0.7    0      0
24       10:00:28   0      0.5    1.0    1.0
25       10:00:29   0      0      1.2    0
Average             0.61   0.60   0.64   0.60
Variance            0.43   0.35   0.53   0.58
Real Traffic Density (pc/km/lane)  15.30  15.00  16.00  14.90
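For example, CCTV1's average of 0.61 pc/20m/2lane scales to 0.61 × (1000/20) ÷ 2 ≈ 15.3 pc/km/lane, which matches the real traffic density row of Table 2 up to rounding.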
In the congested case, as shown in Table 3, the real densities at CCTV1~CCTV4 were 72.00, 74.55, 73.35 and 72.85 (pc/km/lane), respectively, again similar values. Thus, when the state approaches forced flow, the real density can be calculated from even fewer samples.
Table 3 Congestion: No. of pc in the 20m unit at each CCTV (pc/20m/2lane)

No. of   Capture    CCTV1  CCTV2  CCTV3  CCTV4
Samples  Time
1        18:00:00   3.4    2.7    2.4    4.0
2        18:00:01   4.2    2.0    2.7    2.7
3        18:00:02   4.2    3.4    2.2    2.2
4        18:00:03   4.0    4.2    3.4    3.4
5        18:00:05   3.0    4.2    3.9    3.4
…        …          …      …      …      …
46       18:01:16   2.0    1.2    2.9    2.2
47       18:01:18   3.2    3.4    3.2    3.1
48       18:01:20   1.2    2.7    4.4    3.2
49       18:01:22   3.4    3.2    1.2    3.9
50       18:01:29   2.7    3.4    3.4    3.5
Average             2.88   2.97   2.93   2.91
Variance            0.70   0.98   0.69   0.65
Real Traffic Density (pc/km/lane)  72.00  74.15  73.35  72.85
Then it’s necessary to say about the need of following assumption to calculate the real
density which will represent Bongan tunnel. That is, the assumption that the traffic flow
within the tunnel would not be suddenly changed while CCTV is collecting the video (5
minutes) Because it may be able to secure the sufficient number of vehicles within 20m
451
distance for 30 seconds and 90 seconds as indicated in Table 1 and Table 2 when such
assumption exists.
4. Conclusion
In this study, a methodology for measuring the real density was developed by analyzing the traffic information and data obtained from image detectors or CCTV with traffic flow theory, counting the number of vehicles within a unit distance; the methodology was then applied to the Bongan tunnel on national highway No. 6.
As a result, the real density in the uncongested state from CCTV1~CCTV4 was 15.30, 15.00, 16.00 and 14.90 (pc/km/lane), respectively, and the real density in the congested state was 72.00, 74.55, 73.35 and 72.85 (pc/km/lane), respectively, indicating similar levels. Thus, when the traffic does not change significantly, counting the vehicles within the 20m distance for 30 seconds and 90 seconds, respectively, suffices to measure the real density in the uncongested and congested states.
In the future, a study explaining the transition process from LOS E to F in terms of the optimal distance for calculating traffic density in uninterrupted flow will be needed.
5. Acknowledgement
This study was carried out under the sponsorship of the Korea Institute of Construction Technology as part of the Internal Basic Project.
6. References
[1] Park Bumjin, Some issues on traffic density of continuous traffic flows, Ph.D. Dissertation, Department of Urban Planning and Engineering, Yonsei University, Seoul, 2010.
[2] Koshi M., M. Iwasaki and I. Ohkura, Some findings and an overview on vehicular flow characteristics, Proceedings of the Eighth International Symposium on Transportation and Traffic Theory, Toronto, 1983.
[3] Fred L. Hall, "The Relationship Between Occupancy and Density", Transportation Forum 3-3, 1986, pp. 46-51.
[4] Frank M. Croft, Jr. and J. Edwin Clark, "Quantitative Measure of Levels of Service", Transportation Research Record 1005, 1985, pp. 11-20.
[5] Joonho Ko, Randall Guensler and Michael Hunter, "Variability in Traffic Flow Quality Experienced by Drivers: Evidence from Instrumented Vehicles", Transportation Research Board No. 1998, 2006, pp. 1-9.
[6] Lighthill M. J. and Whitham G. B., On kinematic waves II: A theory of traffic flow on long crowded roads, Proceedings of the Royal Society, A229, 1955, pp. 317-345.
Computer and Information Sciences
10:15-12:00, December 15, 2012 (Meeting Room 3)
231: A Chinese Text Watermarking Algorithm Resistive to Print-Scan Process
Xing Huang, Beijing University of Posts and Telecommunications
Lingwei Song, Beijing University of Posts and Telecommunications
Jianyi Liu, Beijing University of Posts and Telecommunications
Ru Zhang, Beijing University of Posts and Telecommunications
277: Traversal Method for Connected Domain Based on Recursion and the Usage in Image Treatment
Lanxiang Zhu, Jilin University
Yaowu Shi, Jilin University
Lifei Deng, Jilin University
Hongwei Shi, Jilin University
397: Quick Calculation for Multi-Resolution Bag-of-Colors Features
Yu Ma, Fudan University
Yuanyuan Wang, Fudan University
409: Speaker-independent Isolated Word Recognition Based on Enhanced Cross-words Reference Templates for Embedded Systems
Chih-Hung Chou, National Cheng Kung University
Guan-Hong He, National Cheng Kung University
414: Intelligent Framework for Heterogeneous Wireless Networks
Yu-Chang Chen, Shu-Te University
231
A Chinese Text Watermarking Algorithm Resistive to Print-Scan Process
Huang Xinga, Song Lingweia, Liu Jianyia,b, Zhang Rua,b
a National Engineering Laboratory for Disaster Backup and Recovery, Beijing University of Posts and Telecommunications, Beijing, China
b Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
Abstract
Based on an analysis of existing text digital watermarking techniques, a digital watermarking algorithm for binary images of Chinese text is proposed, which is highly resistant to the print-scan process. The algorithm is based on an approximate invariant of the print-scan process. A new concept, the 'virtual average dots number', is proposed to achieve a capacity of one watermark bit per Chinese character. Meanwhile, a simplified strategy for flipping pixels in binary images is used in the algorithm. By matching the text's characters with the watermark, the algorithm reinforces its resistance to the print-scan process. Experimental results show that the algorithm has a higher capacity than similar algorithms, and that its resistance to the print-scan process is less affected by the characteristics of the text.
Key words: digital watermarking; print-scan; Chinese text
1. Introduction
A text watermarking algorithm resistive to the print-scan process means that after the watermarked text is printed, the embedded information can still be extracted from its scanned image. Extracting embedded information from the scanned image is more difficult because the text changes considerably through the print-scan process. Many text watermarking algorithms have been proposed, and they can be divided into three categories. The first category embeds information based on the structure of the text document to be watermarked; commonly, these algorithms hide information by adjusting line or word spacing, or by changing the color or size of words in a text [1-2]. These schemes can resist only slight print-scan attack; that is, the watermark can be extracted only after a high-quality print-scan process. They also have relatively low capacity and are more visible. The second category of algorithms embeds information based on natural language
processing, including text zero-watermark algorithms [3] and semantics-based text watermarking algorithms [4-5]. The print-scan process has no effect on these schemes, but they are relatively new, and most have not been proven mature and effective in practice. Algorithms of the third category are based on traditional image watermarking. However, text images are mostly binary images, so when applied to text watermarking, image watermarking algorithms based on the transform domain are no longer effective, especially after the print-scan process. Thus, many watermarking algorithms based on the spatial domain were proposed to counter print-scan attack. A typical algorithm of this kind was proposed by Qi in [6], based on an approximate invariant through the print-scan process. That algorithm embeds data by adjusting the ratio of the black dot count of a single character to the average black dot count of all characters in the text. It resists print-scan attack effectively, but its feasibility depends on the characteristics of the text to be watermarked, and its performance is poor on commonly used text. In this paper, we propose a new text watermarking algorithm for binary text images based on Qi's research; the new algorithm is less constrained by the text and has a higher capacity.
2. Effect of print-scan process
2.1 Effect of print-scan process on character image
The print-scan process has a complex effect on text images. For an ink-jet printer, after ink droplets touch the paper they spread along the fibers, so the black dots on the paper differ from the theoretical ones in diameter and roundness, and the width of printed lines is usually wider than its theoretical value [7]. The scanning process is in essence a digital-analog conversion, which inevitably introduces errors and noise. Fig. 1 shows the changes in a Chinese character before and after the print-scan process.
Fig. 1. A Chinese character before and after print-scan process
From the above analysis, we can assume that if a text watermarking scheme needs a pixel-level restoration of the original image, such as algorithms based on word spacing or pixel-block characteristics, then it will be difficult for the scheme to resist print-scan attack. However, if the watermark information resides in a statistical characteristic that is less affected by the print-scan process, then that characteristic can be used to design an algorithm resistive to the print-scan process.
2.2 An approximate invariant in the print-scan process
Literature [6] considers the transformation of an image through the print-scan process to be approximately a convolution, and proved that the ratio of the black dot count of a single character to the average black dot count of the characters in the text is an approximate invariant (for a binary image, each black pixel is a black dot). Based on this 'invariant', literature [8] proposed an algorithm with high capacity. In practice, however, the 'invariant' varies within a certain range, and this variation has a decisive impact on the extraction results of the above algorithms. Neither [6] nor [8] analyzed the effect of the approximate invariant's variation sufficiently.
The following experiment was designed to study the invariant's actual variation range. First, one hundred Chinese characters in Song font at a size of 12pt were chosen to compose a Chinese text, which was then transformed by a virtual printer into its binary image form. Character segmentation was applied to this binary text image to obtain the black pixel number of every Chinese character, $A_1, A_2, \ldots, A_{100}$, and their average value $\bar{X}$. Applying the same character segmentation to the scanned image of the printed text gives the black pixel number $A_i'$ of every character and the corresponding average value $\bar{X}'$. Next, for every character we obtain the ratio $A_i/\bar{X}$ of its black pixel count to the average before print-scan, and the ratio $A_i'/\bar{X}'$ after print-scan. Then, for every character, the absolute value of the difference of the two ratios, $\left| A_i/\bar{X} - A_i'/\bar{X}' \right|$, is calculated, and all the absolute values are listed in ascending order in Fig. 2.
Fig. 2. Variation range of the invariant
As shown in Fig. 2, the ratio of a single character's black pixel count to the average black
pixel count per character varies before and after the print-scan process, and the variation is
at most around 0.08. This means that any algorithm based on this invariant must ensure that
extraction remains reliable under an error of this size in the invariant. In Section 4 of this
paper, we further analyze this error's effect on watermark extraction.
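To make this statistic concrete, the following minimal sketch (our own names, not the paper's
code) computes the per-character variation $|A_i/\bar{X} - A_i'/\bar{X}'|$ from black pixel
counts assumed to be already obtained by character segmentation:

import numpy as np

def ratio_variation(counts_before, counts_after):
    # counts_before: black pixel count of each character before print-scan
    # counts_after:  black pixel count of the same characters after print-scan
    A = np.asarray(counts_before, dtype=float)
    A2 = np.asarray(counts_after, dtype=float)
    diff = np.abs(A / A.mean() - A2 / A2.mean())
    return np.sort(diff)   # ascending order, as plotted in Fig. 2

The maximum of the returned array corresponds to the worst-case variation of the 'invariant'
(around 0.08 in the experiment above).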
2.3 Pixel flipping strategy
At present, pixel flipping strategies are mostly based on Wu's theory proposed in [9]. That
theory, however, was designed for general binary images, which means Wu's strategy can be
simplified when applied to binary Chinese text images.
Wu assigned a flippability score to each 3×3 pattern appearing in a binary image. A lower
score means that flipping the center pixel of a 3×3 pattern is more noticeable. Some typical
3×3 patterns and their scores are listed in Fig. 3.
Fig. 3. Scores of some typical 3×3 patterns
A Chinese character image is composed of straight lines and curves of a certain width.
Compared with printed pictures and photos, there are fewer kinds of 3×3 patterns in a Chinese
text image, and their frequencies are very uneven. So we can use only part of the flippable
pixel patterns in a Chinese text image and still achieve both good invisibility and a
sufficient number of flippable pixels. All flippable pixel patterns used in our proposed
algorithm are shown in Fig. 4.
Fig. 4. Flippable pixel patterns used in the proposed algorithm
As pointed out in [9], once a pixel has been flipped, its neighboring pixels are no longer
considered flippable. When the patterns in Fig. 4 are used as flippable pixel patterns, most
edges of a Chinese character image are covered by flippable pixels and their neighbors, so
using this small number of patterns has nearly the same effect as using all flippable pixel
patterns.
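To illustrate pattern-based flippability testing, the sketch below scans a binary image for
pixels whose 3×3 neighborhood matches an allowed pattern. The pattern set is only a
placeholder, since the patterns of Fig. 4 are not reproduced here, and the function names are
ours:

import numpy as np

# Placeholder pattern: the centre pixel of a vertical stroke edge.
# In the real algorithm this set would hold the patterns of Fig. 4.
FLIPPABLE_PATTERNS = {
    (0, 0, 1,
     0, 0, 1,
     0, 0, 1),
}

def flippable_pixels(img):
    # img: 0/1 numpy array, 1 = black pixel
    h, w = img.shape
    found = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = tuple(int(v) for v in img[y-1:y+2, x-1:x+2].flat)
            if patch in FLIPPABLE_PATTERNS:
                found.append((y, x))
    return found

As noted above, once a pixel is actually flipped, its neighbors must be removed from the
flippable list.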
3. Proposed algorithm
3.1 The algorithm for watermarking embedding
1) Do character segmentation on the binary Chinese text image to get all the characters in
the text.
2) For every character in the text image, count the positions and numbers of all flippable
pixels, and store this information. For every character, the flippable pixel count contains
two values, one for black flippable pixels and the other for white flippable pixels.
3) Choose the first N characters that have the most flippable pixels, where N is the length
of the watermark in bits. C denotes the collection of these N characters, whose black pixel
counts are $A_1, A_2, \ldots, A_N$. Then calculate the average value of these N numbers,
denoted by $\bar{X}$.
Choose the average black pixel count of all characters in C as the initial value of the
virtual average black pixel number $\tilde{X}$. The quantization step is denoted by k.
First, analyze the embeddability of each character in C. For a character i, its black pixel
count after flipping is denoted by $A_i'$. If character i is to be embedded with 0,
$A_i'/\tilde{X}$ should be an even multiple of k; if character i is to be embedded with 1,
$A_i'/\tilde{X}$ should be an odd multiple of k. In both cases $A_i'$ should be the integer
closest to $A_i$ that satisfies the condition.
According to their embeddability, the characters in C can be divided into two groups:
bidirectional-embeddable characters, which can be embedded with either 1 or 0, and
unidirectional-embeddable characters, which can only be embedded with 1 or only with 0.
If the number of 0s or 1s in the watermark cannot meet the need of the
unidirectional-embeddable characters, the virtual average black pixel number $\tilde{X}$ is
decreased gradually until a value is found at which every unidirectional-embeddable character
can find a matching bit in the watermark. Then match the bidirectional-embeddable characters
with the 1s and 0s remaining in the watermark. Finally, calculate the number of black pixels
each character must gain or lose.
4) According to the positions of all flippable pixels obtained in step 2 and the number of
flips each character needs as determined in step 3, do the pixel flipping, and obtain the
text image embedded with the watermark.
3.2 Adjustment factor and side information
Calculate the average black pixel count of the characters in C after they have been modified,
denoted by $\bar{X}'$. Then calculate the adjustment factor h as $h = \tilde{X}/\bar{X}'$,
where $\tilde{X}$ is the virtual average black pixel number obtained in step 3. Finally,
store the quantization step k, the adjustment factor h, each modified character's position in
the text, and the index of its matching bit in the watermark.
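The quantization rule of step 3 can be sketched as follows. This is illustrative code with
hypothetical names: it finds the black pixel count closest to $A_i$ whose ratio to the virtual
average $\tilde{X}$ is an even (bit 0) or odd (bit 1) multiple of k:

import math

def target_count(A_i, X_virtual, k, bit):
    r = A_i / (X_virtual * k)            # current multiple of k
    lo, hi = math.floor(r), math.ceil(r)
    wanted = bit % 2                     # required parity: 0 -> even, 1 -> odd
    candidates = [m for m in (lo - 1, lo, hi, hi + 1) if m % 2 == wanted]
    m = min(candidates, key=lambda m: abs(m - r))
    return round(m * k * X_virtual)      # black pixels the character should end with

# flips needed for character i: target_count(...) - A_i,
# feasible only if the character has that many flippable pixels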
3.3 The algorithm for watermarking extraction
1) Do character segmentation on the scanned text image that carries the watermark.
2) According to the watermarked characters' positions stored during the embedding process,
locate each watermarked character in the scanned text image.
3) Count each watermarked character's total black pixel number $A_i'$ and compute the average
value $\bar{X}'$. Then calculate the bit embedded in each character according to the stored
quantization step k and adjustment factor h: for each character, calculate
$$\beta' = \frac{A_i'}{\bar{X}' \, h \, k} \qquad (1)$$
If the integer closest to $\beta'$ is even, the character is considered to have been embedded
with 0, otherwise with 1.
4) Arrange the extracted bits by their stored index in the watermark to obtain the extracted
watermark.
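A minimal sketch of the extraction rule of Eq. (1), assuming the black pixel counts of the
watermarked characters in the scanned image have been collected in stored order (names are
ours):

def extract_bits(counts, h, k):
    X_bar = sum(counts) / len(counts)    # average black pixel count
    bits = []
    for A in counts:
        beta = A / (X_bar * h * k)       # Eq. (1)
        bits.append(0 if round(beta) % 2 == 0 else 1)
    return bits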
3.4 Virtual average black pixel number
The average black pixel count of the watermarked characters before the print-scan process is
$\bar{X}$; after the print-scan process this average becomes $\bar{X}'$. We denote the virtual
average black pixel number used in embedding by $\tilde{X}$. A is the total black pixel count
of a character after embedding and before print-scan; after the print-scan process, the black
pixel count of this character becomes $A'$. Thus, the invariant can be described as:
$$\frac{A}{\bar{X}} = \frac{A'}{\bar{X}'} = m \qquad (2)$$
After a character has been embedded with one bit of data, its total black pixel count
satisfies $A/\tilde{X} = k\beta$, where k is the quantization step. That means that if we can
recover the value of $\beta$, we can extract the correct watermark. With the adjustment factor
$h = \tilde{X}/\bar{X}$, we get:
$$\beta = \frac{A}{\tilde{X} k} = \frac{A}{\bar{X} h k} = \frac{A'}{\bar{X}' h k} = \frac{m}{h k} \qquad (3)$$
Because m is an invariant of the print-scan process and both h and k are constants determined
during embedding, $\beta$ does not change through the print-scan process. Thus, we can extract
the correct watermark after the print-scan process.
In practice, however, the extraction result is affected by the invariant's variation. The next
section analyzes this effect.
4. Experimental results and evaluation
4.1 Analysis of invisibility
For a more typical experimental result, the test text was composed of 32 frequently used
Chinese characters and printed at 600 DPI (dots per inch) on an EPSON Stylus Photo R210
printer. Fig. 5 shows the text image before embedding.
Fig. 5. Text image before embedding
A 32-bit watermark, 0xaaaaaaaa, was embedded into this text with a quantization step of 0.2.
The watermark could be embedded when the adjustment factor reached 0.72. After embedding, the
text was printed and then scanned at 600 PPI (pixels per inch); the scanned image is shown in
Fig. 6.
Fig. 6. Text image after embedding and print-scan
Comparing the two images above, one can hardly tell that the text has been modified. Thus, we
consider the chosen pixel flipping strategy to have a good visual effect.
4.2 Comparison with similar algorithms
The watermarked text was printed thirty times, all at 600 DPI. The printed texts were scanned
at 1200 PPI, 600 PPI, and 300 PPI, ten copies each, and watermark extraction was then
performed. Among the ten experiments with 1200 PPI scanning, one had a 1-bit error and the
other nine extracted the watermark perfectly, the same result as the experiments with 600 PPI
scanning. Among the ten experiments with 300 PPI scanning, one had 4-bit errors and the others
were all correct. The experiments demonstrate that the proposed algorithm resists the
print-scan process well, and even when the scanning resolution is greatly reduced, the
extraction success rate does not decline significantly.
To embed the same watermark with the method proposed in [6], using the text of the above
experiments as the embedding part, another 5 characters would be needed as the adjusting part.
This shows that the proposed algorithm has a higher capacity than the algorithm of [6].
With different adjustment factors and quantization steps, the variation of the invariant
causes different extraction results. More specifically, the smaller the product of the
adjustment factor and the quantization step, the lower the extraction success rate. To
demonstrate this effect, we ran a series of experiments with different adjustment factors,
which were determined by the characteristics of the texts to be watermarked. Each experiment
embedded 32 bits into the 32 characters composing the text. Fig. 7 shows the experimental
results, where h is the adjustment factor, k is the quantization step, and 'error' is the
average number of error bits over ten extractions for each text.
Fig. 7. Error rate decreases as h·k increases

h·k      | error  | font | font size
0.336123 | 7/32   | Song | 14pt
0.615375 | 1/32   | Song | 12pt
0.672329 | 0.5/32 | Kai  | 12pt
0.719766 | 0.1/32 | Song | 14pt
0.748173 | 0.1/32 | Song | 12pt
0.952381 | 0.2/32 | Kai  | 10.5pt
The method proposed in [6] does not use an adjustment factor, but its quantization step plays
the same role as the product of quantization step and adjustment factor in the proposed
algorithm. Reference [8] uses a quantization function rather than a fixed quantization step,
but the multiple quantization steps of its quantization function also affect the error rate
according to the law shown in Fig. 7. If the method of [6] were used to embed the same
watermark into the text of Fig. 7, the maximal quantization step would be 0.05, which gives
weak resistance to the print-scan process. Using the method proposed in this paper, we reach a
maximal equivalent quantization step of 0.144 (quantization step 0.2 with adjustment factor
0.72), which gives a much stronger resistance.
Under normal circumstances, a text always contains some characters with only a few flippable
pixels. If the existence of these characters is ignored, as in the scheme proposed in [6],
they constrain the value of the quantization step and thus the watermark's resistance to the
print-scan process. The proposed algorithm handles the characters with fewer flippable pixels
first and is therefore less constrained. So, in general, the proposed algorithm has better
resistance to the print-scan process.
5. Conclusion
This paper proposed a novel Chinese text watermarking algorithm that can resist the print-scan
process. The algorithm matches the characteristics of the text with the watermark, thereby
avoiding the bottleneck caused by characters with fewer flippable pixels. By using a virtual
average black pixel number, the algorithm can embed 1 bit per character and thus has a high
capacity. Experimental results demonstrate that the proposed algorithm resists the print-scan
process well for texts with different fonts and font sizes.
However, the proposed algorithm requires a certain print and scan quality. If the printer's
resolution is much lower than the resolution of the text image, or the printed text is defaced
or deformed, the extraction result will be affected. Besides, the characteristics of the
printing paper and ink also have a certain impact on the extraction result. Further research
toward a more robust watermarking algorithm is therefore needed.
6. Acknowledgements
This work is supported by the National Natural Science Foundation of China (61003284), the
Beijing Natural Science Foundation (4122053), and major scientific and technological projects
of the Press and Publication Administration (GXTC-CZ-1015004/09, GXTC-CZ-1015004/15-1).
7. References
[1] Brassil J, Low S, Maxemchuk N F, et al. Electronic marking and identification techniques
to discourage document copying [J]. IEEE Journal on Selected Areas in Communications, 1995,
13(8): 1495-1504.
[2] Brassil J, Low S, Maxemchuk N F. Copyright protection for the electronic distribution of
text documents [J]. Proceedings of the IEEE, 1999, 87(7): 1181-1196.
[3] Gupta Gaurav, Pieprzyk Josef, Wang HuaXiong. An attack-localizing watermarking scheme for
natural language documents [C]. Proceedings of the 2006 ACM Symposium on Information, Computer
and Communications Security, ASIACCS 2006. Taipei, China, 2006: 157-165.
[4] Atallah Mikhail J, Raskin Victor, Crogan Michael, et al. Natural language watermarking:
design, analysis, and a proof-of-concept implementation [C]. The Fourth International
Information Hiding Workshop. Pittsburgh, 2001: 185-199.
[5] Atallah Mikhail J, Mcdonough Craig J, Raskin Victor. Natural language for information
assurance and security: an overview and implementation [C]. New Security Paradigm Workshop.
New York: ACM Press, 2000: 51-65.
[6] Qi Wenfa, Li Xiaolong, Yang Bin, Cheng Daofang. Document watermarking schema for
information tracking [J]. Journal on Communications, 2008, 29(10): 183-190. (in Chinese)
[7] Li Luhai, Zhang Shufen, Yang Jinzhong, Zou Jing. Present status and future prospects of
inkjet printers and materials technology [J]. Image Technology, 2003(1): 5-10.
[8] Guo Chengqing, Xu Guoai, Niu Xinxin, Li Yang. High-capacity text watermarking resistive to
print-scan process [J]. Journal of Applied Sciences, 2011, 29(2): 140-146.
[9] Wu Min, Liu Bede. Data hiding in binary image for authentication and annotation [J]. IEEE
Transactions on Multimedia, 2007, 9(3): 475-486.
[10] Li Xiao, Gao Baojian, Wang Cuifang. Binary document authentication technology based on
Chinese character structure hiding [J]. Computer Engineering and Applications, 201
277
Traversal Method for Connected Domain Based on Recursion and Its Usage in Image Treatment
Lanxiang Zhu, Yaowu Shi, Lifei Deng
Jilin University, College of Communication Engineering
Abstract
Studying traversal methods for connected domains, this paper extends the concepts of connected
domain and traversal, and then proposes a traversal method based on recursion. The method
makes efficient use of the computer's stack control functions: the system stack is used to
save the traversal chain. This method can label a connected domain in a single scan of the
image, and can record the points of the connected domain and its edge at the same time. Four
improvements of the method are given so that the method can adapt to the image; it can execute
a wide-range, directional search with an adaptive connectivity condition during traversal. The
paper also proposes some applications of the method, such as gradient-based edge search,
background recognition, gray image segmentation, and color image segmentation.
Keywords: connected domain; recursion; image treatment; edge labelling
1. Introduction
Connected domain traversal is indispensable in most image processing applications and it is
the premise of shape analysis and recognition. The efficiency of this operation has a
significant impact on the efficiency of the whole system.
Existing studies on connected domains are mainly concerned with labelling. Existing labelling
methods include methods based on point scanning, such as sequential scanning, the recursion
method, and the region growing method, and methods based on line segment scanning, such as
run-length labelling. All of these methods can label connected domains accurately; they differ
in computational cost, representation of the results, and memory cost. Among them, methods
based on point scanning are easy to understand and apply, so they are widely used. But
sequential scanning and region growing require multiple scans of the image. Xu Zhengguang
proposed a recursion-transfer method, but it still needs one scan of the image to mark pairs
and then traverses connected areas recursively. The basis of this method is still point
scanning, so it does not make efficient use of recursion.
The concept of connected domain is not limited to binary images; we can also analyze connected
domains in gray and color images. We can then perform image segmentation or obtain continuous
edges by connected domain analysis of the gradient image, and improve the efficiency of the
watershed method with the connected domain labelling operation.
This paper extends the concept of connected domain and proposes a traversal method based on
recursion. It gives the common form of the recursive method, and further gives a method to
label the domain edge during traversal. It analyzes the traversal method for binary images,
grayscale images, color images, and gradient images. It proposes a wide-traversal method that
can obtain the connected domain even when the domain fractures in certain ways. Finally, the
paper analyzes the algorithm for shape analysis, giving the formulas for the central point,
the center of gravity, and the second-order moment of the domain, as well as analysis methods
for several shapes such as lines, circles, and rectangles.
1.1 Extended concept of the connected domain
The traditional concept of a connected domain is a set of points that are adjacent and have
the same pixel value in a binary image. Take the digital image as a two-dimensional matrix
u(i, j). We can define a connected domain as the collection:
$$T = \{(i, j) \mid u(i, j) = u_t,\; R(i, j) \cap T \neq \emptyset\} \qquad (1)$$
where $u_t$ represents the target pixel value of the connected domain and R(i, j) represents
the neighborhood of point (i, j). Common definitions of R(i, j) include the 4-neighborhood and
the 8-neighborhood.
Traditional labelling methods for connected domains mostly deal with binary images; gray or
color images require a segmentation operation first. They request a memory block from the
system, give different numbers to different connected domains during the first scan of the
image, and merge connected domains with equivalent numbers during the second scan.
For gray images, the first step is segmentation. Most methods set a threshold on the gray
value and segment the pixels by the following operation:
$$u(i, j) = \begin{cases} 0, & g(i, j) < g_T \\ 1, & g(i, j) \geq g_T \end{cases} \qquad (2)$$
where u represents the segmentation result, g(i, j) is the gray value of the pixel, and $g_T$
is the segmentation threshold. Combining (1) and (2), we get the definition of connected
domains for gray images:
$$T = \{(i, j) \mid g(i, j) < g_T,\; R(i, j) \cap T \neq \emptyset\} \qquad (3)$$
and
$$T = \{(i, j) \mid g(i, j) \geq g_T,\; R(i, j) \cap T \neq \emptyset\} \qquad (4)$$
Equation (3) defines a black connected domain; Equation (4) defines a white connected domain.
Extending the formula further, we define a conditional function C(i, j) for a single point,
which gives 1 for points to be labelled and 0 otherwise. We can then give a common definition
of the connected domain as follows:
$$T = \{(i, j) \mid C(i, j) = 1,\; R(i, j) \cap T \neq \emptyset\} \qquad (5)$$
The general forms of C(i, j) include the following (sketched in code after this list):
1. In binary images, judge by the pixel value (0 or 1).
2. In gray images, judge by a threshold.
3. In gray images, judge by a gray range centered on the gray value of the seed point.
4. In color images, judge by a color-distance region centered on a given color or the seed
color.
5. In gradient images, judge by a gradient threshold: search below it for flat areas, or
above it for edge information.
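The following minimal sketch illustrates the five forms of C(i, j) listed above; the function
names are ours and the images are assumed indexed as img[i][j]:

def c_binary(img, i, j, target=1):
    # 1. binary image: judge by the pixel value
    return img[i][j] == target

def c_threshold(img, i, j, g_t):
    # 2. gray image: judge by a global threshold
    return img[i][j] < g_t

def c_seed_range(img, i, j, seed_gray, delta):
    # 3. gray image: gray range centered on the seed point's value
    return abs(img[i][j] - seed_gray) <= delta

def c_color_ball(img, i, j, seed_color, radius):
    # 4. color image: color-distance ball centered on a seed color
    return sum((a - b) ** 2 for a, b in zip(img[i][j], seed_color)) <= radius ** 2

def c_flat(grad, i, j, g_t):
    # 5. gradient image: low gradient marks flat (background) areas;
    #    inverting the comparison searches for edges instead
    return grad[i][j] < g_t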
2. Connected domain traversal based on recursion
Traditional connected domain analysis methods are mark-oriented; that is, connected domains
are numbered and marked during the scanning process. In practical work, the more common need
is a loop-through that traverses all the points of a connected domain and records or
calculates information, such as the mean gray value of the connected domain or its center of
gravity.
Traditional component labelling methods, such as sequential scanning, require at least two
scans to complete the marking. If a calculation must then be performed on a connected domain,
a third scan is needed, which significantly slows down the operation.
Recursion is a common programming technique. It uses the system stack and the function call
mechanism; the classic example is the calculation of factorials. A recursive method makes full
use of the system stack: because the push and pop operations occur when a function is called
and returns, the generated assembly instructions are very concise. If a programmer constructed
a stack with a linked-list structure instead, the generated code would be far more complex
than using the system stack.
The essence of recursion is self-invocation, so attention must be paid to the termination
condition. If there is no termination condition, or the condition is hard to reach, the
program falls into infinite recursion and quickly crashes. At the same time, the recursion
depth must be controlled, because each call consumes stack space. In connected domain
traversal, if the connected domain is too large (such as the background), the recursion depth
may become excessive and crash the program. We can record the current coordinates when the
recursion depth becomes too large, end the current recursion, and then start another recursion
from the recorded point.
The traversal method proceeds as follows: scan the image sequentially and enter the recursive
function when a point satisfies the traversal condition C(i, j) = 1. The function first marks
the point as scanned and then examines the points around it in a fixed order according to
R(i, j). If a neighboring point satisfies the traversal condition, we make a recursive call
with it as the parameter (one push of the stack); otherwise we continue with the next
neighbor, and return (one pop of the stack) when all neighbors are finished. Therefore,
starting from the initial entry point, although the recursion has a fixed direction (the same
examination order at each point), it returns whenever the condition is not met, so the
scanning process can complete the traversal of a complicated shape.
In the recursive calls on the image, the current point's coordinates are the parameters of
the recursive function. During computation, we must check whether the coordinates lie within
the valid range; for example, accessing coordinates (-1, -1) can lead to a memory access
error.
In the recursion, the important operations are: determining whether the point satisfies the
traversal condition, determining whether the point is outside the image area, and determining
whether the recursion depth exceeds the set value. The connectivity criterion for the
connected domain can be the 4-neighborhood or the 8-neighborhood; this has little effect on
the implementation of the method, differing by only a few instructions.
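A minimal sketch of the recursive traversal described above, with boundary checking and edge
recording (see Section 3.3); the names and the simple depth handling are ours:

import sys
sys.setrecursionlimit(100000)   # the recursion depth must be controlled, as noted above

def traverse(shape, cond, start, neighbours=((0, 1), (0, -1), (1, 0), (-1, 0))):
    # cond(i, j) plays the role of C(i, j); `neighbours` encodes R(i, j)
    # (4-neighborhood here; add the diagonals for the 8-neighborhood)
    h, w = shape
    visited, domain, edge = set(), [], set()

    def visit(i, j):
        visited.add((i, j))                  # mark first, then recurse
        domain.append((i, j))
        for di, dj in neighbours:
            ni, nj = i + di, j + dj
            if not (0 <= ni < h and 0 <= nj < w):   # outside the image area
                continue
            if (ni, nj) in visited:
                continue
            if cond(ni, nj):
                visit(ni, nj)                # recursive call: one stack push
            else:
                edge.add((ni, nj))           # a failing neighbour lies on the domain edge

    if cond(*start):
        visit(*start)
    return domain, edge

# e.g. for a binary image: traverse(img.shape, lambda i, j: img[i][j] == 1, seed)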
Fig 1 shows the flow of the traversal process with recursion, and Fig 2 shows the flow of the
recursion function.
Fig 1 Flow of Traversal with Recursion
Fig 2 Flow of Recursion Function
3. Several improvements to recursive traversal
3.1 Wide Search
In practice, connected domains often fracture at some locations, especially in edge connected
domain traversal. At these points we need to ignore the breaking point during the search so
that we can obtain a more complete connected domain.
In operation, we can modify the definition of R(i, j) in Equation (3): take all points within
a given range of point (i, j) as its neighborhood, instead of the 4- or 8-neighborhood
connectivity rules.
3.2 Changed traversal conditions
We can flexibly change the traversal condition during the traversal. Common changes include
changes of the basis point, threshold, scope, and so on. For example, when a gray-scale range
is used as the traversal condition, we can relax the gray-level criterion as the connected
domain grows, to accommodate pixels better and avoid fracture in the middle of the connected
domain. When using the gradient to search for edge information, we can relax the gradient
threshold as the edge grows, to avoid fracture of the edge.
3.3 Recording the connected domain edge
During the recursion, if the current point does not satisfy the traversal condition, the
function returns (one pop of the stack). Before returning, we can record the coordinates of
the point: it lies on the edge of the current connected domain. When the recursion is
completed, we can use the collection of edge points to analyze the shape features of the
connected domain. The edge obtained this way is closed and a single line, which benefits the
subsequent shape analysis of the connected domain. For example, the geometric center,
rectangularity, circularity, length, and other information can easily be calculated from the
edge line.
3.4 Handling of the search direction
During the search, a single fixed search order may result in a one-sided scanning sequence,
and the sequence of recorded points can become confusing, causing trouble for subsequent
sorting and processing. To solve this problem, before starting the search we can judge which
search direction is best for the current point set: compute the bounding rectangle of the
current set, derive the direction of the set from this rectangle, and take the conforming
direction or the perpendicular direction of the current point set, as needed, as the first
direction to enter the recursion.
4. Applications of the recursive connected domain traversal method
Connected domain traversal has a very wide range of applications in image processing. The
recursive traversal method for connected domains is flexible and can be used in a wide range
of applications. The following are some examples; rather than giving full worked applications,
we only explain the principle of each method and the points requiring attention.
4.1 Segmentation of gradient image
Because the concept of the gradient image is intuitive and easy to understand, and edges can
be labelled accurately with it, it is one of the most commonly used concepts in image edge
detection and segmentation. In a gradient image, locations of high gradient are generally edge
locations; by traversing the high-gradient part of a gradient image with a given threshold,
we can obtain continuous edges.
4.2 Background region identification
As mentioned, the connected domain traversal of a gradient image can be applied to recognizing
the background area. In some image processing problems there is an obvious background area
containing few objects. The background generally varies gradually and basically covers the
entire image as one large connected domain, and it needs to be removed during image
processing. The background area is characteristically flat, with low gradient, so we can
traverse the low-gradient points with a given threshold to label the background area. After
the labelling, little data remains to be processed and the computation of subsequent
operations is sharply reduced.
4.3 Connected area segmentation based on gray-scale range
In region-based image segmentation, points with approximately equal gray values can be
labelled as one connected area. Here we can take the gray value of a seed point as the center
and traverse within a gray range around it as the condition; in this way we obtain a
segmentation based on the gray-level range and, at the same time, the labelling result of the
connected domains.
4.4 Segmentation of color images
Similar to gray image segmentation, once a color distance measure is defined, we can take the
color value of a reference point as the center and use a bounded color distance as the
traversal condition to segment a color image. The core of this approach is the definition of
the color distance measure.
5. Conclusions
This paper extends the concept of connected domain and proposes a common definition of it. It
proposes a traversal method for connected domains based on recursion, explains how the method
works, and gives some important points for executing it. The recursive traversal method needs
only a single scan of the image, so it requires little computation. It makes full use of the
system stack, so its execution speed is high, and the edge of the connected domain can be
recorded during traversal. Because it uses the system stack to execute the algorithm, more
caution is needed to avoid program crashes.
This paper also proposes some improvements of the traversal operation: wide search, a flexible
condition function, directional search, and recording of the domain edge. These improvements
can improve the capability or speed of the traversal process.
This paper also proposed some applications of connected domain traversal, such as segmentation
of gradient images, background labelling, segmentation by gray range, and segmentation of
color images. These applications extend the use of the traversal method for connected domains.
6. References
[1] Costantino Grana, Daniele Borghesani, Rita Cucchiara. Connected Component Labeling
Techniques on Modern Architectures. [817-824]
[2] WANG Jing. Segment labeling algorithm of connecting area in binary images and its
realization. Infrared and Laser Engineering, Vol 39, No 4, 2010-8. [761-765]
[3] Zuo Min, Zeng Guang-Ping, Tu Xu-Yan. A Connected Domain Labeling Algorithm Based On
Equivalence Pair In Binary Image. Computer Simulation, Vol 28, No 1, 2011-1. [14-17]
[4] XU Zhengguang, BAO Donglai, ZHANG Lixin. Pixel Labeled Algorithm Based on Recursive
Method of Connecting Area in Binary Images. Computer Engineering, Vol 23, No 24, 2006-12.
[186-189]
[5] Song Bin. Novel fast pixel labeling method for binary image. Electronic Measurement
Technology, Vol 32, No 9, 2009-9. [67-73]
397
Quick Calculation for Multi-Resolution Bag-of-Colors Features
Yu MA, Yuanyuan WANG
Dept. of Electronic Engineering, Fudan University, 220 Handan Road, Shanghai, CHINA
E-mail address: mayu@fudan.edu.cn
Abstract
Color distribution is one of the most important features in many applications. Recently,
bag-of-colors (BOC) became a powerful feature to represent color distribution. In this paper,
a quick calculation algorithm for multi-resolution bag-of-colors features is proposed. Using
the mapping between different groups of codebook colors, the BOC features for different
resolutions can be obtained quickly. It makes up for the disadvantage of traditional BOC
calculation in both computation flexibility and storage. Experimental results show that it has
a better performance than the traditional BOC calculation, especially in reducing the
computation and increasing the speed.
Keywords: bag-of-colors; multi-resolution; quick calculation
1. Introduction
As one of the most important and widely used features in large-scale natural image retrieval,
the color distribution of an image is powerful in distinguishing different types of images.
There are several ways to represent color distribution, of which the simplest is the color
histogram: all the colors appearing in an image are counted and the occurrence frequency of
each color is obtained. The color histogram, which can reflect the detailed color distribution
of an image, is usually constructed on a three-dimensional basis. In retrieval, the color
histograms of the images are compared, and the similarities between histograms are used to
evaluate the similarities of the images [1].
Although the color histogram is the most direct way to reflect color distribution, it has some
disadvantages. First, the histogram bins are distributed in a high-dimensional space and the
number of bins grows exponentially with the number of dimensions, which both requires large
storage and limits the accessing and computing speed; it takes much more time to build a
three-dimensional index than a one-dimensional one. Second, the color histogram is very
sensitive to both the intensity and the color contrast of the whole image; for example, a
slight illumination difference may change the histogram greatly. These problems come from the
uniform separation of the color space when calculating color histograms: all the possible
colors are counted even if most of them hardly appear in the image, resulting in a lot of
redundant storage and computing. In all, histograms are fixed-size structures, so a good
balance between expressiveness and efficiency cannot be easily achieved [2].
In fact, the number of important colors in most natural images is not that large. The dominant
colors of an image may reflect its content better than the total color distribution. The color
signature is one kind of such representation [3]; it uses several representative colors as the
kernel instead of a uniform separation of the color space. Among all color signature methods,
the recent bag-of-colors (BOC) method is the most effective [4]. It uses a training process to
find a group of colors that best represent the color distribution of a large number of natural
images. This group of colors is set as the 'representative colors', or 'codebook colors'. For
an image, each color is related to its corresponding codebook color and contributes to its
statistical value, just like putting all the colors into different bags. As a result, all the
codebook colors have their respective occurrence frequencies in an image, which together
constitute the BOC feature of the image. The BOC feature is equivalent to a one-dimensional
color histogram; it occupies far less storage than a high-dimensional color histogram and also
requires less computation in image retrieval [5].
The BOC feature provides a better option for image retrieval; however, the computation for
obtaining the BOC feature of each image is more complex than for the high-dimensional
histogram. Besides, the codebook colors have no order, so it is hard to compute the similarity
between two BOC features with different codebook colors. If different numbers of codebook
colors are required in different applications, the BOC features of all the images have to be
calculated again, which also means that all the images have to be stored for the calculation.
In contrast, for histogram features, a coarser histogram can be built by combining bins of a
finer histogram, so the histogram is the only entry that has to be stored and the image itself
can be discarded. In short, the BOC feature has an advantage in describing the contents of
images, but its computation is less flexible.
In order to make up for the disadvantage of the BOC feature in computation flexibility, we
propose a quick calculation algorithm to acquire BOC features with different numbers of
codebook colors. By providing a fine-to-coarse transformation for obtaining BOC features, the
method makes the computation of BOC features more flexible and reduces the computation and
storage cost.
2. Calculation for Multi-Resolution BOC Features
2.1 Traditional BOC Features
To obtain BOC features for images, the representative colors, also known as codebook colors,
are necessary. In contrast to the regular partition of the color space used in calculating
color histograms, an irregular partition is used in calculating the BOC codebook colors. The
codebook colors, which form the basis of BOC signatures, are learned from a large number of
natural images. It is like the principle of bag-of-words (BOW) applied to colors [6].
Obtaining the codebook colors consists of several steps: regularization of the training
images, color space transformation, generation of sample colors, and generation of codebook
colors. First, 10,000 natural images are selected randomly, each of which is resized to
256×256 pixels. Second, the color space of the images is converted into the CIE-Lab space,
where the distance between two colors is consistent with the perceptual difference of the eyes
for any two colors; previous investigations have shown that the CIE-Lab space is the best for
evaluating color differences. Third, each image is divided into 256 blocks of the same size,
16×16. The color that appears most often in a block is set as the dominant color of the block;
as a result, each image has 256 dominant colors. The dominant colors of the 10,000 images are
combined to build the training samples, 2,560,000 in total. Finally, k-means clustering is
used to group these training samples into kc clusters. The parameter kc is the number of
required codebook colors; as the most important parameter of the BOC method, kc determines the
resolution of the color space separation.
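The training procedure can be sketched as follows, assuming the images are already resized to
256×256 and converted to CIE-Lab; the function name is ours and scikit-learn's KMeans stands
in for the clustering step:

import numpy as np
from sklearn.cluster import KMeans

def train_codebook(images_lab, kc, block=16):
    samples = []
    for img in images_lab:                       # img: 256 x 256 x 3 Lab array
        for y in range(0, img.shape[0], block):
            for x in range(0, img.shape[1], block):
                patch = img[y:y+block, x:x+block].reshape(-1, 3)
                # dominant color = the most frequent color in the block
                colors, counts = np.unique(patch, axis=0, return_counts=True)
                samples.append(colors[counts.argmax()])
    km = KMeans(n_clusters=kc, n_init=4).fit(np.array(samples))
    return km.cluster_centers_                   # the kc codebook colors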
With the codebook colors, the BOC feature of any image can be calculated: for each color of
the image, the most similar codebook color is found and its statistical value is incremented.
Finally, each image has a histogram-formed vector of dimension kc, which represents the color
distribution of the image. Once kc is selected and the codebook color training is finished,
the color distribution of an image is fixed. However, different tasks, e.g. image retrieval
and object tracking, may have different requirements on the number of colors. This means that
BOC features with different color resolutions, which we name multi-resolution BOC features
(MRBOC), may be required at the same time. When the value of kc changes, all the BOC features
have to be calculated again by clustering the colors of the images to the new codebook colors
and counting them once more. This duplication of effort not only costs huge computation and
storage but also makes the previous BOC data worthless.
2.2 Multi-Resolution BOC Features
In order to calculate BOC features with different color resolutions, i.e. different numbers of
codebook colors, we propose a quick algorithm that uses a simple indexing technique to obtain
the BOC features for a small number of codebook colors from the existing BOC features for a
larger one. The only data necessary for the computation are the BOC features for the largest
number of codebook colors, rather than the original images. For example, if BOC features with
different resolutions (kc equal to 400, 300, 200, 100, 50, and 20, respectively) are required
for a group of images, the BOC features with the largest kc (400) have to be calculated first.
Besides, the respective codebook colors for these resolutions have to be trained, as in the
usual BOC calculation. After that, the images themselves are no longer necessary. The core
idea of the quick calculation is to exploit the relationships between the different groups of
codebook colors. The details are described as follows.
(1) The sample color database is used to train codebook colors for all the required kcs. They
are denoted as {Cx(i) | i = 1, 2, …, kc}, where x labels the set when kc takes different
values.
(2) Without loss of generality, let k1 be the largest among all the kcs, which corresponds to
the maximum codebook colors {Ck1(i) | i = 1, 2, …, k1}. For each image in the database, the
BOC feature F({Ck1(i)}) for codebook colors {Ck1(i)} is computed, whose dimension is k1.
(3) For any value k2 smaller than k1, list the corresponding codebook colors {Ck2(j) | j = 1,
2, …, k2}. Find the most similar color in {Ck2(j)} for each color in {Ck1(i)} according to
their Euclidean distance in CIE-Lab space, and build the mapping between the two color sets,
denoted M: {Ck1(i) | i = 1, 2, …, k1} → {Ck2(j) | j = 1, 2, …, k2}.
(4) Calculate the BOC feature F({Ck2(j)}) for codebook colors {Ck2(j)} according to Formula
(1), which sums each coarse bin over the fine bins mapped to it:
$$F(C_{k2}(j)) = \sum_{i\,:\,M(C_{k1}(i)) = C_{k2}(j)} F(C_{k1}(i)) \qquad (1)$$
In a nutshell, the BOC feature in a high resolution is mapped, rearranged, and combined to
build the BOC features in lower resolutions according to the neighboring relationships between
different groups of codebook colors. In this way, all the BOC features for kcs smaller than k1
can be obtained easily.
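Steps (3) and (4) amount to a nearest-neighbor mapping followed by an accumulation, as in this
minimal sketch (our own names; the codebooks are arrays of Lab triples):

import numpy as np

def qboc(F_fine, cb_fine, cb_coarse):
    # step (3): nearest coarse codebook color for every fine codebook color
    d = np.linalg.norm(cb_fine[:, None, :] - cb_coarse[None, :, :], axis=2)
    mapping = d.argmin(axis=1)            # M: fine index -> coarse index
    # step (4): accumulate the fine bins into their coarse bins, Formula (1)
    F_coarse = np.zeros(len(cb_coarse))
    np.add.at(F_coarse, mapping, F_fine)
    return F_coarse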
3. Experiments and Results
3.1 BOC Codebook Colors
Using the 2,560,000 sample colors from 10,000 images for training, the codebook colors for
different resolutions are obtained. For example, when kc equals 32 and 16, the codebook colors
are shown in Figure 1(a) and (b), respectively. The position in the three-dimensional CIE-Lab
space denotes the values of L, a, and b for the different colors, while the colors of the
balls represent their true colors in perception. The figures are better viewed in color.
(a) 32 codebook colors
(b) 16 codebook colors
Figure 1. Illustrations for codebook colors of BOC
3.2 The Mapping Between Multi-Resolution BOC Codebook Colors
We use the previous kcs as an example to show the mapping. For each color in Figure 1(a), its
most similar color in Figure 1(b) is searched. The mapping of these neighboring relationships
is shown in Figure 2, where the large balls denote the codebook colors of Figure 1(b) and the
small balls denote the codebook colors of Figure 1(a). The segments connecting the balls
denote the mapping from the small balls to the large ones, i.e. from the 32 codebook colors to
the 16. Note that the actual colors lie in three-dimensional space but are shown in a certain
projection plane; as a result, there are some illusory overlaps from this perspective, and the
actual neighboring relationships are not exactly as they appear in the figure.
Figure 2. The mapping between multi-resolution BOC codebook colors
3.3 Evaluation of the Computation for Multi-Resolution BOC Features
The most important advantage of the quick calculation algorithm for multi-resolution BOC is
the reduction of computation, which can be shown by the time costs. In the tests, 1,500
images, each with 16,384 pixels, are used to compare the speed of the traditional BOC method
and the quick calculation. The tests are conducted in the same hardware and software
environment: an Intel Xeon(R) CPU X5675, 32 GB of memory, and the 64-bit Windows 7 operating
system. At the beginning, the BOC features for kc = 512 are calculated by the traditional BOC
method and stored as the original data for the multi-resolution BOC features. The respective
time costs of the two methods for calculating multi-resolution BOC features with different kcs
are listed in Table 1, where the quick calculation algorithm is denoted QBOC. Without loss of
generality, kc is set to 256, 128, 64, 32, and 16 just for convenience of observation; in
fact, it can be any natural number smaller than the initial one. As shown in the results, QBOC
is significantly faster when calculating multi-resolution BOC features, because QBOC involves
only mappings and accumulations, while in traditional BOC all the colors of all the images
have to be re-clustered whenever the number of codebook colors changes.
Table 1. Comparison of the computation cost for 1500 images (each with 16384 pixels)

kc   | BOC time (s) | QBOC time (s)
256  | 69.99        | < 0.1
128  | 65.89        | < 0.1
64   | 54.36        | < 0.1
32   | 45.64        | < 0.1
16   | 38.11        | < 0.1
Another advantage of QBOC is that it requires less storage. In traditional BOC, all the images
have to be stored if multi-resolution features are required. In the proposed method, by
contrast, only the one-dimensional BOC features for the largest kc are necessary, so the
storage is significantly reduced.
4. Discussions
Based on the mapping of codebook colors, the quick calculation algorithm for multi-resolution
BOC features outperforms the traditional method in both speed and storage. It makes up for the
disadvantages of BOC features in computation flexibility and therefore improves the
practicability of BOC features for representing color distributions.
One aspect that has to be considered is that the quick calculation algorithm may give a result
different from the traditional method. This can happen when a color has different nearest
neighbors under direct and indirect search. In our experiments, however, this situation occurs
in only a very small percentage of cases. Besides, when it happens, the ambiguous color
usually has similar distances to two or more codebook colors, so it has little influence on
either the feature mapping or the similarity evaluation.
5. Conclusions
In this paper, we propose a quick calculation algorithm to acquire multi-resolution BOC
features. Experimental results show that the method makes the computation of BOC features more
flexible. It reduces both the computation and the storage for BOC features, so better and
further applications can be expected.
6. Acknowledgments
This work is supported by the Natural Science Foundation of China (No. 61202264) and the
Program of Shanghai Subject Chief Scientist (No. 10XD1400600).
7. References
[1] T. Lu, C. Chang. Color image retrieval technique based on color features and image bitmap.
Information Processing and Management, 43(2), 2007, pp. 461-472.
[2] Y. Ma, X. Gu, Y. Wang. Histogram similarity measure using variable bin size distance.
Computer Vision and Image Understanding, 114(8), 2010, pp. 981-989.
[3] Y. Rubner, C. Tomasi. Perceptual metrics for image database navigation. Stanford
University, Stanford, CA, 1999.
[4] C. Wengert, M. Douze, H. Jégou. Bag-of-colors for improved image search. Proceedings of
the 19th ACM International Conference on Multimedia, 2011, pp. 1437-1440.
[5] H. Jégou, M. Douze, C. Schmid. Improving bag-of-features for large scale image search.
International Journal of Computer Vision, 87(3), 2010, pp. 316-336.
[6] J. Sivic, A. Zisserman. Video Google: A text retrieval approach to object matching in
videos. Proceedings of the 9th IEEE International Conference on Computer Vision, 2003,
pp. 1470-1477.
409
Speaker-Independent Isolated Word Recognition Based on Enhanced
Cross-Words Reference Templates for Embedded Systems
Chih-Hung Chou (a), Guan-Hong He (a), Bo-Wei Chen (a), Po-Chuan Lin (b), Shi-Huang Chen
(c)(*), Jhing-Fa Wang (a), and Ta-Wen Kuan (a)
(a) Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan
(b) Electronics Engineering and Computer Science, Tung-Fang Design University, Kaohsiung,
Taiwan
(c) Department of Computer Science and Information Engineering, Shu-Te University, Kaohsiung,
Taiwan
e-mail: *shchen@stu.edu.tw
Abstract
In this study, novel enhanced cross-words reference templates (ECWRTs) are proposed and
applied to speaker-independent isolated word recognition. An ECWRT is a reference template
generated from a set of templates; the main procedures of ECWRT generation are DTW-matching
and average operations. Because the templates vary in length, the average operations cannot be
performed directly; to solve this problem, dynamic time warping (DTW) is used. After
DTW-matching, the matched frames of the templates are averaged to form the ECWRT. Experimental
results show that the proposed system, with linear prediction cepstral coefficients (LPCCs) on
a 30-word vocabulary, achieves an average accuracy rate of 98.4%. This recognition rate is
higher than the 97.8% obtained with CWRTs and the 91.4% obtained with conventional reference
templates. The experimental results demonstrate the effectiveness of the proposed idea.
Keywords: reference templates, dynamic time warping, isolated word recognition.
1. Introduction
Dynamic time warping (DTW) and hidden Markov models (HMMs) are both popular techniques for
speech recognition. When the vocabulary set is small, the recognition rates of DTW and HMMs
are comparable; however, the computational cost of DTW is lower than that of HMMs. For an
embedded system, which is a resource-limited hardware platform, DTW is more suitable than
HMMs [1-5].
The recognition rate of a DTW-based speech recognition system is significantly affected by the
reference templates, especially for speaker-independent isolated word recognition. To find a
set of reliable reference templates, Levinson et al. [6] proposed clustering techniques to
select the reference templates. Abdulla et al. [7] developed a technique, called cross-words
reference templates, to generate the reference templates. However, the clustering techniques
involve a large amount of computation and memory usage. Moreover, more than one reference
template is selected for the same vocabulary word, which is inefficient in the recognition
phase due to the large computation caused by the number of reference templates.
The cross-words reference templates (CWRTs) are a set of reliable reference templates
generated by DTW-alignment and average operations. These simple procedures are computationally
less expensive than clustering techniques. Besides, one CWRT is generated per vocabulary word,
which is efficient for recognition. Nevertheless, the alignment procedure is inappropriate
because of the (27°-45°-63°) local path constraints of DTW. In addition, the memory usage of
the aligned templates is considerably high, because the average operations are performed only
after all the templates have been aligned.
To increase the recognition rate, a set of novel enhanced cross-words reference templates
(ECWRTs) is proposed in this study. During ECWRT generation, DTW-matching with (0°-45°-90°)
local path constraints and average operations are applied to each template iteratively. The
iterative operations consume less memory than the technique of CWRT preparation.
The rest of this work is organized as follows. Section 2 introduces the
speaker-independent isolated word recognition system. Section 3 describes details of the
proposed system. Next, Section 4 summarizes the performance of the proposed system and
the analysis results. Conclusions are finally drawn in Section 5, along with recommendations
for future research.
2. System overview
The flowchart of the speaker-independent isolated word recognition system used in this work is
shown in Figure 8. Before feature extraction, a preprocessing stage including voice activity
detection (VAD), automatic gain control (AGC), and framing is used to normalize the input
speech for the following procedures. The feature extraction consists of two parts: linear
predictive coefficients (LPCs) and linear prediction cepstral coefficients (LPCCs). For each
frame, the LPCs are extracted to calculate the LPCCs. DTW is used to evaluate the distances
between the sequence of feature vectors and the reference templates. The shortest distance
means the highest similarity between the input speech and a vocabulary word; thus, the
vocabulary word with the highest similarity is the recognition result.
Figure 8. Overview of the proposed system (input speech → preprocessing → feature extraction:
LPCs, LPCCs → DTW against the reference templates → recognition result)
3. Proposed System
3.1. Preprocessing
For the following procedures, the input speech signal is normalized by voice activity
detection (VAD), automatic gain control (AGC), framing, and zero-mean normalization. The VAD
extracts voiced segments from the input speech. The gain of each voiced segment is adjusted by
AGC to compensate for volume differences. When VAD and AGC are completed, the normalized
speech signal is divided into several frames for feature extraction.
3.2. Linear predictive coefficients
The linear predictive coefficients (LPCs) are the coefficients of a linear combination of
previous samples used to predict the current sample. With the prediction error, the LPC
equation is shown in (1):
$$s[n] = \sum_{i=1}^{P} a_i \, s[n-i] + e[n] \qquad (1)$$
where $a_i$ is the i-th linear predictive coefficient, P is the order of the LPCs, s[n] is the
current sample, s[n-i] is the i-th previous sample, and e[n] is the prediction error. The LPCs
$[a_1, a_2, \ldots, a_P]$ have infinitely many solutions, depending on the prediction error
e[n]. To find the optimal solution, the mean square error (MSE) of the prediction must be
minimized; when the MSE of e[n] reaches its minimum, the optimal values of the LPCs are
obtained.
3.3. Linear prediction cepstral coefficients
The linear prediction cepstral coefficients (LPCCs) are the LPCs represented in the cepstrum
domain [8]. Instead of taking the inverse Fourier transform of the logarithm of the power
spectrum, the LPCCs are obtained from the LPCs through the recursive operations given in (2):
$$c[n] = \begin{cases} \ln \sigma^2, & n = 0 \\ a[n] + \dfrac{1}{n} \displaystyle\sum_{i=1}^{n-1} i \, c[i] \, a[n-i], & 0 < n \leq P \end{cases} \qquad (2)$$
where c[n] is the LPCC, a[n] is the LPC, P is the order of the LPCs, and $\sigma^2$ is the
gain of the LPC model.
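A minimal sketch of the LPC solution by the autocorrelation (normal equations) method,
followed by the recursion of Eq. (2); this is illustrative code, not the authors'
implementation:

import numpy as np

def lpc(frame, P):
    # solve the normal equations that minimize the MSE of the prediction error in Eq. (1)
    n = len(frame)
    r = np.array([np.dot(frame[:n - i], frame[i:]) for i in range(P + 1)])
    R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])
    a = np.linalg.solve(R, r[1:P + 1])        # a[0] holds a_1 of Eq. (1)
    sigma2 = r[0] - np.dot(a, r[1:P + 1])     # gain of the LPC model
    return a, sigma2

def lpcc(a, sigma2, P):
    # LPCs -> LPCCs via the recursion of Eq. (2)
    c = np.zeros(P + 1)
    c[0] = np.log(sigma2)
    for m in range(1, P + 1):
        c[m] = a[m - 1] + sum(i * c[i] * a[m - 1 - i] for i in range(1, m)) / m
    return c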
3.4. Dynamic time warping
Dynamic time warping is an algorithm based on dynamic programming. DTW is widely used in
speech recognition to estimate the similarity between a sequence of feature vectors of the
input speech and a reference template. The recurrence relation of DTW with (0°-45°-90°) local
path constraints is shown in (3):
$$D(i, j) = d(i, j) + \min \begin{cases} D(i-1, j) \\ D(i-1, j-1) \\ D(i, j-1) \end{cases} \qquad (3)$$
where d(i, j) is the Euclidean distance between the i-th frame of the feature vectors and the
j-th frame of the template, and D(i, j) is the accumulated distance up to the i-th and j-th
frames.
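Eq. (3) translates directly into a dynamic-programming table, as in this sketch (our own
names; X and Y are sequences of feature vectors):

import numpy as np

def dtw_distance(X, Y):
    m, n = len(X), len(Y)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = np.linalg.norm(X[i - 1] - Y[j - 1])   # Euclidean frame distance
            D[i, j] = d + min(D[i - 1, j], D[i - 1, j - 1], D[i, j - 1])
    return D[m, n]             # accumulated distance of Eq. (3)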
3.5. Reference templates
In DTW-based recognition, the reference templates significantly affect the recognition rate.
This paper proposes a set of reliable reference templates, the 'enhanced cross-words reference
templates', based on the concept of cross-words reference templates (CWRTs) [7].
To point out the difference between the original and enhanced CWRTs, the CWRT is described
first. A CWRT is generated from a set of templates through DTW-alignment and average
operations. In order to align the other templates with the initial reference template, frames
of the speech signal are extended by replication or merged with adjacent frames by averaging.
Because some adjacent frames vary greatly, these average operations introduce accumulated
errors that decrease the recognition rate.
To increase the recognition rate, the enhanced cross-words reference templates (ECWRTs) are
proposed in this paper. The CWRT preparation is inappropriate due to its recurrence relation
with (27°-45°-63°) local path constraints: this type of DTW matches only some frames between
the initial reference template and the speech signal rather than all frames. Therefore, the
recurrence relation with (0°-45°-90°) local path constraints of DTW, shown in (3), is adopted
instead. To prepare an ECWRT, three processes need to be executed; the flowchart is shown in
Figure 9. First, an arbitrary template is chosen as the initial reference template. Second, a
template is selected from the remaining templates. Third, DTW-matching and average operations
are applied to the initial and selected templates to update the initial one. The second and
third processes are repeated until all the templates have been selected. Finally, the latest
updated template is the ECWRT.
Figure 9. Flowchart of the ECWRT generation (templates → choose initial reference template →
select a template → DTW matching → average → update initial template; repeat until all
templates are selected → ECWRT)
In the ECWRT preparation procedure, the optimal path is obtained through DTW-matching between
the reference and the selected template. Each frame of the reference template is matched with
one or more frames of the selected template, as shown in Figure 10.
Figure 10. DTW-matching between the reference and selected templates
In the single-match case, the frame of the reference template is averaged with its matched
frame. For multiple matched frames, the frame of the reference template is averaged with the
matched frame that has the minimum distance to it. The physical meaning is to average the
nearest frames; therefore, the ECWRTs are more reliable than CWRTs.
The pseudo code for generating an ECWRT is as follows:

Begin
  T = choose an arbitrary template with L frames
  for i = 2 to (number of templates)
    w = (i-1)/i
    t = choose one from the rest of the templates
    DTW between T and t
    for j = 1 to L
      MFs = the matched frames of T[j]
      K = arg min_k { dist(T[j], MFs[k]) }
      T[j] = w × T[j] + (1-w) × MFs[K]
    end for
  end for
End
In the above description, the notation w is the weighting factor, which can be adjusted for
training or retraining. To obtain an enhanced cross-words reference template (ECWRT),
several steps need to be performed. An arbitrary template of length L is chosen as the
initial reference template T. The selected template t is matched with T using DTW. Each frame
of the reference template T[j] is averaged with the matched frame MFs[K] that has the minimum
distance from T[j]. The reference template T is updated iteratively until all the templates have
been selected. Finally, the ECWRT is derived.
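A minimal Python sketch of this ECWRT generation is given below, assuming templates are NumPy arrays of frame feature vectors; the helper names are illustrative, and the backtracking follows the (0°-45°-90°) constraints of (3).

```python
import numpy as np

def dtw_path(T, t):
    """Optimal (0-45-90 degree) DTW warping path between templates T and t."""
    n, m = len(T), len(t)
    d = np.linalg.norm(T[:, None, :] - t[None, :, :], axis=2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j], D[i - 1, j - 1], D[i, j - 1])
    # backtrack from (n, m) to (1, 1) to recover the matched frame pairs
    path, i, j = [], n, m
    while True:
        path.append((i - 1, j - 1))
        if i == 1 and j == 1:
            break
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def generate_ecwrt(templates):
    """ECWRT generation following the pseudocode above (templates: list of (L, d) arrays)."""
    T = templates[0].astype(float).copy()    # arbitrary initial reference template
    for i, t in enumerate(templates[1:], start=2):
        w = (i - 1) / i                      # weighting factor
        path = dtw_path(T, t)
        for j in range(len(T)):
            mfs = [t[q] for (p, q) in path if p == j]   # frames of t matched to T[j]
            k = int(np.argmin([np.linalg.norm(T[j] - f) for f in mfs]))
            T[j] = w * T[j] + (1 - w) * mfs[k]          # average with the nearest match
    return T
```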
4. Experimental Results
4.1. Environment settings and resource usage
The proposed system is implemented on the LPC1768 produced by NXP. The clock speed of the
microprocessor can reach 100 MHz. The ROM and RAM are 512 KB and 32 KB, respectively. In
this environment, the ROM and RAM usage are 95.5 KB and 19.6 KB.
4.2. Training database
The training database is a 30-word Mandarin vocabulary set. All the isolated words of the
vocabulary set are commands for home-appliance control (e.g., turn on, turn off), as shown in
Table I. The training database is collected from 1080 utterances of 12 speakers (six males and
six females). Each speaker utters each word of the 30-word Mandarin vocabulary set three times.
TABLE I VOCABULARY SET
No. Command               No. Command
1.  da kai dian shih      16. shiuan tai shang
2.  kuan bi dian shih     17. shiuan tai shia
3.  da kai lu ying ji     18. yin liang shang
4.  kuan bi lu ying ji    19. yin liang shia
5.  shu wei jie shou      20. tsai dan
6.  HDMI shu ru           21. she ding
7.  ying yin shu ru yi    22. duei bi
8.  ying yin shu ru er    23. se wen
9.  chian jin             24. lian jie wang lu
10. kuai su chian jin     25. face book
11. hou tuei              26. G mail
12. kuai su hou tuei      27. shing shih li
13. bo fang               28. tian chi
14. jan ting              29. li ti sheng
15. ting jr               30. shian tzai shih jian
4.3. Recognition rates
The linear prediction cepstral coefficients (LPCCs) and mel-frequency cepstral coefficients
(MFCCs) are the two kinds of features extracted for the following evaluations. In addition, the
proposed enhanced cross-words reference templates (ECWRTs), the cross-words reference
templates (CWRTs) [7], and conventional reference templates are the three types of reference
template under test. The conventional reference template is the template selected from a set of
templates that yields the highest recognition rate. The results of an inside test, in which each
vocabulary word was spoken five times by each of the 12 speakers (60 utterances per word),
are shown in Table II. An outside test with the same 12 speakers is given in Table III.
TABLE II AVERAGE RECOGNITION RATES OF THE INSIDE TEST
Feature   Reference templates
          ECWRTs   CWRTs [7]   Conventional
LPCCs     98.4%    97.8%       91.4%
MFCCs     99.8%    99.2%       92.8%
TABLE III AVERAGE RECOGNITION RATES OF THE OUTSIDE TEST
Feature   Reference templates
          ECWRTs   CWRTs [7]   Conventional
LPCCs     97.7%    96.7%       87.6%
MFCCs     99.2%    98.5%       88.7%
Tables II and III show that the recognition rates of the system with the proposed ECWRTs are
consistently higher than those with the baseline reference templates. The average accuracy
rates of the LPCCs and MFCCs are comparable; however, the recognition time of the MFCCs
is five times that of the LPCCs. For real-time applications, the LPCCs are therefore more
suitable for embedded systems.
4.4. Comparison of memory usage
The ECWRT generation technique iteratively updates the initial reference template, so only
two templates need to be stored in memory for each iteration. The CWRT preparation aligns
all the templates and averages them with the initial reference template, and these templates
must be stored in advance for the average operations. Assuming that the memory usage of one
template is η, the ECWRT and CWRT generation techniques consume 2η and nη,
respectively (where n is the number of aligned templates plus the initial one).
5. Conclusions
This paper proposes the enhanced cross-words reference templates (ECWRTs), a set of
reliable reference templates. To generate an ECWRT from a set of templates, an initial
reference template and one of the remaining templates are selected. Subsequently, DTW
matching and average operations are applied to the initial and the selected template to update
the initial one, iteratively. Compared with the CWRT preparation technique, ECWRT
generation uses less memory. Furthermore, the experimental results show that the ECWRTs
are more reliable than the CWRTs, and that the combination of LPCCs and ECWRTs is
suitable for embedded systems. The average accuracy rate of the proposed system reaches
98.4%, which is higher than the 97.8% obtained using the CWRTs and the 91.4% obtained
using conventional reference templates. These experimental results indicate the effectiveness
of the proposed idea.
6. References
[1] Liu, "Research and implementation of the speech recognition technology based on DSP," in Proc. 2nd Int. Conf. Artificial Intelligence, Management Science and Electronic Commerce, Zhengzhou, China, Aug. 8-10, 2011, pp. 4188-4191.
[2] Qu and L. Li, "Realization of embedded speech recognition module based on STM32," in Proc. 11th IEEE Int. Symposium on Communications and Information Technologies, Hangzhou, China, Oct. 12-14, 2011, pp. 73-77.
[3] Phadke, R. Limaye, S. Verma, and K. Subramanian, "On design and implementation of an embedded automatic speech recognition system," in Proc. 17th Int. Conf. VLSI Design, Mumbai, India, Jan. 5-9, 2004, pp. 127-132.
[4] Zhang, "Research of improved DTW algorithm in embedded speech recognition system," in Proc. Int. Conf. Intelligent Control and Information Processing, Dalian, China, Aug. 12-15, 2010, pp. 73-75.
[5] Wan and L. Liu, "Research and improvement on embedded system application of DTW-based speech recognition," in Proc. 2nd Int. Conf. Anti-counterfeiting, Security and Identification, Guiyang, China, Aug. 20-23, 2008, pp. 401-404.
[6] E. Levinson, L. R. Rabiner, A. E. Rosenberg, and J. G. Wilpon, "Interactive clustering techniques for selecting speaker-independent reference templates for isolated word recognition," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 27, no. 2, pp. 134-141, Apr. 1979.
[7] H. Abdulla, D. Chow, and G. Sin, "Cross-words reference template for DTW-based speech recognition," in Proc. IEEE Region 10 Conf. Convergent Technologies for the Asia-Pacific, Bangalore, India, Oct. 15-17, 2003, pp. 1576-1579.
[8] Rabiner and B. Juang, Fundamentals of Speech Recognition. Upper Saddle River, NJ: Prentice-Hall, 1993.
414
Intelligent framework for heterogeneous wireless networks
Yu-Chang Chen1,a, Ya-Bo Hu2,b, Chen-Yo Fang3,c, and Kuang-Yu Yen4,d
Dept. of Computer Science and Information Engineering, Shu-Te University, Kaohsiung
County, Taiwan 824, ROC.
sclass@stu.edu.tw
Abstract
Various wireless network protocols and standards have been proposed and used to achieve
ubiquitous networking environments, which has also caused many problems to arise,
especially in heterogeneous wireless networks. The most important of these is QoS (Quality
of Service), which has been an issue since the early stages of the Internet. At that time, the
concept of QoS in network transmission was to provide as many services as possible, but this
was restricted by various factors such as limited bandwidth, transmission distance and
network loading. In recent years, with the improvement of software and hardware techniques,
various networking standards organizations have proposed new-generation protocols and
standards in which the implementation and design of QoS are rather different. This paper
focuses on LTE (Long Term Evolution) and WiMAX (Worldwide Interoperability for
Microwave Access) wireless networks and proposes an intelligent framework with QoS
guarantees to integrate the different QoS schemes in heterogeneous wireless networks.
Furthermore, ANFIS (Adaptive Network-based Fuzzy Inference System) has been chosen as
the QoS mapping system. The methods proposed in this paper were implemented in the NS3
simulation system. Finally, simulation results on throughput, delay, jitter and packet loss rate
demonstrate the performance of vertical handoff implemented by the proposed intelligent
framework.
Keyword: QoS, LTE, WiMAX, intelligent framework, ANFIS.
1. Introduction
As networks are developed with a multiservice offering, QoS requirements increase. Various
wireless network protocols and standards have been proposed and used to provide a
ubiquitous networking environment, and heterogeneous wireless networks composed of
different types of networks involve many problems to be solved [1, 2].
Research studies have proposed solutions for the integration of QoS across different network
standards. The Layer 2 packet formats in heterogeneous wireless networks were translated to ensure
QoS during handover in our previous studies [3, 4]. Layer 2 packet format translation is a
fundamental research achievement for QoS guarantees in heterogeneous networks. However,
protocols and network types increase over time, causing problems for packet format
translation. The main problem with packet format translation is that it is a linear mapping,
and every mapping needs to be fully defined: the more types of packet formats there are, the
more complicated the mapping becomes. Moreover, this method is limited to Layer 2,
whereas network transmission protocols also use Layer 3 and Layer 4. Therefore, how
Cross-Layer QoS guarantees can be achieved is another main topic of this paper.
2. Research Method
In this study, ANFIS (Adaptive Network-based Fuzzy Inference System) has been chosen as a
QoS mapping system. Training data have been collected and processed so that the ANFIS
system could generate the expected output of an intelligent QoS mapping. LTE and WiMAX
were used as the interfaces of the heterogeneous wireless networks, and ANFIS was used to
map the QoS between these two interfaces. Moreover, a CL-MIH (Cross-Layer MIH) has
been chosen to store QoS data for seamless handover.
2.1. Research Framework
Our intelligent framework for heterogeneous wireless networks with QoS guarantees makes
use of the ANFIS system to classify the QoS, together with the Cross-Layer MIH, as shown in
Fig. 1. As can be seen from the figure, the intelligent framework is built inside a
heterogeneous wireless network environment composed of LTE and WiMAX, and ANFIS is
built in Layer 2 of the heterogeneous wireless networks so that the QoS between different
layers can be classified. The Cross-Layer MIH is built in Layer 2 and extended to Layer 3
and Layer 4 so that messages can be passed across layers.
Fig. 1. Intelligent framework for heterogeneous wireless networks with QoS guarantees
2.2. ANFIS System
The ANFIS architecture, as shown in Fig. 2, is made up of five layers. There are five input
variables, i.e., Connection Type (real-time or non-real-time), Packet Type (fixed or variable
packet length), Priority, Packet Loss and Packet Delay. The output is the QoS class of LTE
and WiMAX.
2.3. Definition of Cross-Layer MIH Information
Cross-Layer MIH information records the QoS class and parameters of LTE/WiMAX for each
connection. The mapping between the QoS parameters of LTE/WiMAX and the MIH
information is given in Table 4.
Table 4. Mapping between QoS and MIH Information
MIH Information     LTE QoS Parameter   WiMAX QoS Parameter
MIH_QoS_Class       QCI                 SFT
MIH_Traffic_Rate    MBR, AMBR, GBR      MSTR, MRTR
MIH_Delay           -                   Maximum Latency, Maximum Jitter
MIH_Priority        ARP                 Traffic Priority
Fig. 2. Applying ANFIS for QoS mapping in heterogeneous wireless networks
MIH_QoS_Class records the QoS class of LTE and WiMAX. There are 9 QCI classes in LTE
and 5 SFT (Service Flow Type) classes in WiMAX.
MIH_Traffic_Rate is the traffic rate requirement of the LTE and WiMAX connection. For
LTE, there are GBR (Guaranteed Bit Rate), MBR (Maximum Bit Rate) and AMBR
(Aggregate MBR) and for WiMAX, there are MSTR (Maximum Sustained Traffic Rate) and
MRTR (Minimum Reserved Traffic Rate) [5, 6].
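To make the Table 4 mapping concrete, the sketch below models one CL-MIH record in Python; the class and field names are illustrative assumptions made here, not identifiers from the paper or from the MIH standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CLMIHInfo:
    """Cross-Layer MIH record for one connection, mirroring Table 4 (names are illustrative)."""
    qos_class: int                          # MIH_QoS_Class: LTE QCI (1-9) or WiMAX SFT (1-5)
    traffic_rate_bps: int                   # MIH_Traffic_Rate: GBR/MBR/AMBR (LTE) or MSTR/MRTR (WiMAX)
    max_latency_ms: Optional[float] = None  # MIH_Delay: WiMAX Maximum Latency (not used by LTE)
    max_jitter_ms: Optional[float] = None   # MIH_Delay: WiMAX Maximum Jitter
    priority: int = 0                       # MIH_Priority: LTE ARP or WiMAX Traffic Priority

# Example record for a real-time LTE voice connection (values are illustrative only)
conn = CLMIHInfo(qos_class=1, traffic_rate_bps=64_000, max_latency_ms=100.0, priority=2)
```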
3. Performance Verification and Simulation Results
In this section, our research method has been applied to a simulation environment for
performance verification, and the results have been analyzed. In the simulation
environment, as shown in Fig. 3, the user receives a handover request from LTE to WiMAX.
During the handover process, the LTE base station transmits the Cross-Layer MIH message
to the Cross-Layer MIH Server and initiates the handover procedure to establish a WiMAX
connection in advance, so that seamless handover can be achieved [7, 8].
Fig. 3. Simulation environment
NS3 (Network Simulator 3) has been adopted for the simulation, and the resulting Throughput,
Packet Loss, Packet Delay and Jitter have been analyzed to verify the feasibility of our
method.
The overall performance evaluation has been done by connecting services in all classes at the
same time and evaluating the throughput, as shown in Fig. 4.
The results obtained with and without our intelligent framework have been compared. The
handover time without our intelligent framework was about 10 seconds (using
break-before-make), whereas the handover time obtained by our intelligent framework with a
make-before-break configuration was within 1 second.
The throughput of our method decreases temporarily during handover and then returns to
the original value. In regular situations, handover comes with disconnections, bringing the
throughput down to 0.
4. Conclusions
ITU-T Y.1541 includes 6 recommended network QoS classes. The performance analysis
shows that our method can meet these QoS requirements. The differences in performance
with and without our method are summarized in Table 5.
Fig. 4. Throughput
Table 5. Performance comparison of different methods (influence during vertical handover)

With our method
- Real-time service: 1. Time Delay increases slightly. 2. Packet Throughput decreases slightly. 3. Packet Loss increases slightly.
- Non-real-time service: 1. No influence on Time Delay. 2. No influence on Packet Throughput. 3. No influence on Packet Loss.

Without our method
- Real-time service: 1. Connections are temporarily disconnected. 2. Packet Throughput drops to zero. 3. Packets transmitted during handover are all lost.
- Non-real-time service: 1. Connections are temporarily disconnected. 2. Packet Throughput drops to zero. 3. Packets transmitted during handover are all lost.
From Table 5 we can see that in general situations (without our method) the performance
during handover deteriorates greatly, with disconnections and the absence of QoS guarantees,
which is inconvenient for users. On the other hand, our method can effectively improve the
performance during handover and ensure the QoS requirements after handover. Even though
real-time services are slightly affected in some parameters, non-real-time services are almost
free from any influence, hence ensuring a seamless handover.
5. Acknowledgments
The authors thank the editor and the anonymous referees for their invaluable comments made
on this paper. This paper is fully supported by the National Science Council and Estinet,
Taiwan, Republic of China, under grant numbers NSC 100-2632-E-366-001-MY3 and
NSC 99-2220-E-366-003.
6. References
[1] K. Piamrat, A. Ksentini, J. Bonnin, C. Vihob, Radio resource management in emerging heterogeneous wireless networks, Computer Communications, Vol. 34, Issue 9, pp. 1066-1076, 2011.
[2] S. Paul, J. Pana, R. Jaina, Architectures for the future networks and the next generation Internet: A survey, Computer Communications, Vol. 34, Issue 1, pp. 2-42, 2011.
[3] Y.C. Chen, J.H. Hsia, Y.J. Liao, Advanced Seamless Vertical Handoff Architecture for WiMAX and WiFi Heterogeneous Networks with QoS Guarantees, Computer Communications, Vol. 32, Issue 2, pp. 281-293, 2009.
[4] Y.C. Chen, S.C. Tseng, Y.B. Hu, Seamless Layer 2 Framework for Heterogeneous Networks, in: Fourth International Conference on Genetic and Evolutionary Computing, pp. 598-601, 2010.
[5] J. Márquez-Barja, C. T. Calafate, J.C. Cano, P. Manzoni, An overview of vertical handover techniques: Algorithms, protocols and tools, Computer Communications, Vol. 34, Issue 8, pp. 985-997, 2011.
[6] M. Kassar, B. Kervella, G. Pujolle, An overview of vertical handover decision strategies in heterogeneous wireless networks, Computer Communications, Vol. 31, Issue 8, pp. 2607-2620, 2008.
[7] R. Stankiewicz, A. Jajszczyk, A survey of QoE assurance in converged networks, Computer Networks, Vol. 55, Issue 7, pp. 1459-1473, 2011.
[8] R. Stankiewicz, P. Chołda, A. Jajszczyk, QoX: what is it really?, IEEE Communications Magazine, Vol. 49, Issue 4, pp. 148-158, 2011.
Electrical and Electronic Engineering I
10:15-12:00, December 15, 2012 (Meeting Room 3)
054: Prototype of a Microcontrolled Device for Measuring Vibrating Wire Sensors
Guilherme Natsutaro Descrovi Nabeyama    State University of Western Parana
Rolf Massao Satake Gugisch               State University of Western Parana
João Carlos Christmann Zank              State University of Western Parana
Carlos Henrique Zanelato Pantaleão       State University of Western Parana

260: Numerical Study of Thin-GaN LED with Holes on n-electrode
Farn-Shiun Hwu                           Taoyuan Innovation Institute of Technology
054
Prototype of a Microcontrolled Device for Measuring Vibrating Wire
Sensors
Guilherme Natsutaro Descrovi Nabeyama a, Rolf Massao Satake Gugisch b, João Carlos Christmann Zank c, Carlos Henrique Zanelato Pantaleão d
a Control and Automation Laboratory, LCA, State University of Western Parana, UNIOESTE, Av. Presidente Tancredo Neves 6731, Jardim ITAIPU, Parque Tecnológico Itaipu, PTI, ZIP 85867-900, Foz do Iguaçu, Paraná
nabeyama@msn.com
b Control and Automation Laboratory, LCA, State University of Western Parana, UNIOESTE, Av. Presidente Tancredo Neves 6731, Jardim ITAIPU, Parque Tecnológico Itaipu, PTI, ZIP 85867-900, Foz do Iguaçu, Paraná
rolfsatake@gmail.com
c joaozank@yahoo.com.br
d eng_pantaleao@yahoo.com.br
Abstract
Knowledge of the state and seasonal behaviour of a hydroelectric dam is of great importance,
because it is necessary to know whether it is operating within its limits. Currently, the
vibrating wire sensors used for monitoring the ITAIPU hydroelectric dam have their
measurements taken periodically and manually. This paper proposes a microcontrolled device
for measuring the vibrating wire sensors installed at ITAIPU. These sensors respond with a
frequency, which is measured by a PIC18F4550 microcontroller after excitation by a
PIC16F628A. There are two circuits: one performs the excitation and measurement, the other
handles the treatment of the output signal, which consists of analog filters, digital filters and
statistical data treatment. A prototype was developed and tested with a model 4450 sensor,
manufactured by GEOKON®, mounted on a test bench. The value obtained by the prototype
comes from a sample of frequency measurements, from which the mean and standard
deviation are taken; the results indicate that the data are close to the actual value of the sensor.
With these results it is possible to develop a remote measuring system for the sensors.
Keyword: vibrating wire sensors, microcontrollers, acquisition system, measurement system, hydroelectric dam
1. Introduction
The monitoring of hydroelectric dams is of great importance because it is necessary to know
their status and whether their behavior is within the operational limits.
Within the project linked to the Centre for Advanced Studies on Safety of Dams (CEASB),
this work studies the vibrating wire sensors currently installed in the ITAIPU dam, of which
there are three types: extensometers, piezometers and level sensors, all manufactured by
GEOKON® and used for monitoring the dam.
The measurement of these sensors is done manually today, collecting data periodically. This
task takes a long time, because there are many sensors installed along the dam.
Developing a microcontrolled system facilitates and expedites the data acquisition from these
sensors so that, in the future, the collected data can be sent automatically and remotely.
2. Development
2.1 Working Principle
The sensor works as follows: the deflection caused by the extension or compression of the
sensor changes the tension on the vibrating wire installed inside, which changes its resonance
frequency; an electromagnet installed near the wire makes it possible to excite the wire and
capture the signal [1]. This frequency obeys the equation below:

f = (1 / 2L) √(T / μ)    (1)

Where:
f – Resonance Frequency
L – Length of Wire
T – Tension on the Wire
μ – Mass Linear Density
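For illustration, a small Python sketch of Eq. (1) follows; the numeric values are hypothetical and are not taken from the paper.

```python
from math import sqrt

def wire_resonance_frequency(length_m, tension_n, mu_kg_per_m):
    """Fundamental resonance frequency of a stretched wire, Eq. (1)."""
    return sqrt(tension_n / mu_kg_per_m) / (2.0 * length_m)

# Hypothetical values, for illustration only (not taken from the paper):
print(wire_resonance_frequency(0.15, 60.0, 4.0e-4))   # roughly 1291 Hz
```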
Figure 1 shows in detail the internal construction of a piezometer, model 4500, manufactured
by GEOKON®. In this sensor the pressure-sensitive diaphragm causes the variation of strain
on the wire, thus changing the frequency; the diaphragm is welded to a capsule that is
hermetically sealed. Some models have an air intake on the sensor that is intended to
compensate for barometric pressure.
Figure 1 – Details of the Piezometer Model 4500, GEOKON®.
2.2 Prototype Vibrating Wire Sensor
Based on the study of the sensors, and mainly on their physical aspect, an instrument was
assembled in order to better understand the operation of the sensors. Because it is not possible
to open or modify an industrial sensor, due to its high cost and the risk of damaging it, the
basic idea was to build a one-string electric guitar that should be "played" by exciting the
string at its resonance frequency.
First, the frame was assembled and an ordinary guitar string was mounted on it; a guitar tuner
was used to determine the resonance frequency from the string's response to mechanical
stimulation.
The second challenge was how to excite the string. The GEOKON® sensor possesses an
internal coil that acts as an electromagnet: it generates an electromagnetic field at the
resonance frequency of the wire, making it oscillate. The idea was to reproduce the same
behavior with the prototype. A 2500-turn coil fed by a signal generator was then used for the
excitation, but it did not produce the expected result.
In another attempt, loudspeakers driven at the target frequency were used to make the string
vibrate. With this procedure the string vibrated slightly, but very little, and it was not possible
to capture its vibration using the speaker.
The solution was computer buzzers: with the signal generator applying to them a square wave
of 5 volts peak at the wire's resonant frequency, the best result was obtained. It was also
observed that the position of the buzzer influenced both excitation and capture.
With mechanical stimulation of the wire, the buzzers also proved more efficient than the other
methods tested for capturing the signal.
Figure 2 shows the sensor developed, which was assembled from readily available
commercial materials: a wooden support for the components, a string stretched between two
screws, and computer buzzers to excite and sense the vibration of the string.
The string that showed the best results, both for excitation and for capture, was a 0.010-gauge
nickel-steel guitar string.
Figure 2 – Prototype Sensor Built for Testing.
The distance between the buzzers must be large enough that their magnetic fields do not
influence each other. In the configuration used, the buzzer in the middle was the exciter and
the one on the right edge was used for capture.
The wire responded well to frequencies from 300 Hz to 1000 Hz. No study was performed on
the strain of the wire or the effect of temperature on it; only the frequency response was
measured and recorded.
The signal is captured in the range of 20 mV to 50 mV; with the aid of a digital oscilloscope
it was possible to validate the measurements made with the guitar tuner.
2.3 Excitation and Measuring System
The measuring prototype for the vibrating wire consists of two parts: the first module
performs the excitation of the string and the other is the frequency meter module.
The excitation system of the sensor was built with a PIC16F628A manufactured by
MICROCHIP®; the objective of this microcontroller is to generate a square wave signal with
variable frequency on a digital output. This wave is then amplified with a photocoupler and a
transistor, and sent to the internal coil of the sensor, which performs the excitation of the
vibrating wire.
Figure 3 – Test Bench for the Sensor Model 4450, GEOKON®.
Performing tests on the sensor, it was observed that its response is a sinusoidal wave with an
amplitude of 20 mV peak. A problem in the development was that there is only one internal
coil in the sensor, used both for excitation and for measurement of the frequency. To separate
the signals it is not possible to use diodes, transistors or logic gates, since the signal amplitude
is too low to polarize any semiconductor; it is necessary to amplify the signal before it can be
filtered and captured.
The response signal comes out of the same node where the excitation signal is applied. So, at
this point, an amplifier is connected in parallel to increase the voltage of the sensor's response
signal.
After amplification, the signal passes through a band-pass filter built with an operational
amplifier, the TL084, comprising a low-pass filter in series with a high-pass filter, both of
second order, and finally a gain stage. The filter topology used was VCVS at unity gain.
Figure 4 shows the circuit schematic.
Figure 4 – Schematic of the Band-Pass Filter.
After the filters, the analog signal passes through a Schmitt trigger, which is connected to an
AND logic gate; the other input of this AND gate is connected to the microcontroller that
carries out the excitation, and the output of the AND gate is connected to the frequency meter.
This is done so that the signal sent to the meter is not the excitation signal itself: the
PIC16F628A enables the gate only when it is not exciting, and disables it when the excitation
begins, so that only the response signal of the vibrating string passes through the gate.
The second part of the system is a module for measuring the frequency and temperature,
fitted with a PIC18F4550, manufactured by MICROCHIP®, using its CCP
(Capture/Compare/PWM) peripheral configured in capture mode and its analog-to-digital
converter (ADC) module to calculate the temperature. Figure 5 shows the prototype mounted
on a breadboard.
Figure 5 – System Prototype.
The microcontroller captures the time of the rising edge and the time of the falling edge, thus
obtaining the period of the square wave, from which the frequency is estimated. The
microcontroller was programmed to make these measurements within an interrupt, so the
measurement is performed at any instant at which a state change happens at the input of the
CCP peripheral.
To verify whether the system was measuring the frequency properly, a square-wave signal
generator was used. The microcontroller was stable and gave satisfactory results at
frequencies from 10 Hz to 100 kHz.
One problem encountered was the variation of the data received by the PIC18F4550, because
the Schmitt trigger output sometimes oscillated due to interference in the system. As a result,
the CCP module captured noise, in other words, frequencies outside the operating range. To
circumvent this problem, a digital filter was added to the program to store only the
frequencies within the desired range.
Another problem was the variation between frequency measurements. It was observed that
the measured frequency varied because the program measures the wave periods
independently, i.e., it detects a rising edge and carries out its measurement from it, and these
signals often still contain noise, thus causing an error in the measurement.
The solution was to collect more samples and then calculate the mean and standard deviation.
With five samples the result is satisfactory: some samples yielded a standard deviation of
zero, and a standard deviation of up to 2 Hz is considered acceptable.
With the digital filter and the statistical analysis, the frequency shown by the microcontroller
was very close to, and in most cases the same as, the frequency excited by the signal
generator on the sensor; this was the method used to determine the resonant frequency of the
sensor.
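The digital filter and the statistical acceptance rule described above can be sketched in Python as follows; the function name and sample values are illustrative (the 1000-3500 Hz band and the 2 Hz criterion are the ones stated in the text).

```python
import statistics

def estimate_frequency(samples_hz, f_min=1000.0, f_max=3500.0, max_std_hz=2.0):
    """Digital filter plus statistics, as described above (band limits from the text)."""
    valid = [f for f in samples_hz if f_min <= f <= f_max]   # discard captured noise
    if len(valid) < 5:
        return None                                          # not enough clean samples
    window = valid[:5]
    if statistics.pstdev(window) > max_std_hz:
        return None                                          # too noisy: re-measure
    return statistics.mean(window)

# Five good samples around the 1575 Hz resonance found in Section 3, plus one noise spike:
print(estimate_frequency([1575.2, 1574.8, 1575.0, 1575.1, 9999.0, 1574.9]))
```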
The temperature measurement is carried out through the ADC port. As noted above, the
sensors have an internal thermistor. To make this measurement, the thermistor was placed in
series with a 10 kΩ resistor and fed with power from the microcontroller. The ADC module
measures the voltage across the known resistor; from the measured value, by Kirchhoff's first
law, it is possible to obtain the resistance of the thermistor and finally calculate the
temperature using the Steinhart-Hart equation. The parameters a, b and c may be found in the
instruction manual of each sensor.

1/T = a + b·ln(R) + c·ln³(R)    (2)

where a, b and c are the parameters of the thermistor, and R is its resistance.
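A Python sketch of this temperature computation follows, assuming the voltage is measured across the known 10 kΩ series resistor; the Steinhart-Hart coefficients shown are typical values for a generic 10 kΩ NTC thermistor, used here only as placeholders for the a, b, c values from the sensor's manual.

```python
from math import log

def thermistor_temperature_k(v_r, v_supply, r_series, a, b, c):
    """Temperature from the divider reading and the Steinhart-Hart equation, Eq. (2).

    v_r is the voltage measured across the known series resistor, so the
    thermistor resistance follows from the voltage-divider relation.
    """
    r_th = r_series * (v_supply - v_r) / v_r
    return 1.0 / (a + b * log(r_th) + c * log(r_th) ** 3)

# Typical coefficients for a generic 10 kOhm NTC thermistor, placeholders only:
t_k = thermistor_temperature_k(2.4, 5.0, 10_000.0, 1.129e-3, 2.341e-4, 8.775e-8)
print(t_k - 273.15)   # about 23 degrees Celsius
```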
Two firmwares were developed to store and display the data: one shows only the final result
on a 16x2 alphanumeric LCD display, as seen in Figure 5, and another sends the data to the
computer via the USB port.
3. Results
The sensor was fixed at a certain point and maintained at a constant temperature inside a
room. The excitation was performed with the signal generator to determine the sensor's
resonance frequency, which reached the value of 1575 Hz.
Both firmwares designed for checking the frequency were then used to compare the results.
The data acquisition was done through the USB port via the 232Analyzer software. Figure 6
shows the received data.
Figure 6 – Interface of the 232Analyzer.
In the second firmware the data are shown on the LCD display, one value at a time after the
touch of a button, given the limited number of display characters. Figure 7 shows the data
analyzed.
Figure 7 – Data Shown on the LCD Display.
4. Conclusion
The whole system shows satisfactory results, with acceptable response time and accuracy.
Carrying out the measurements is easy, simple and intuitive.
The system is limited to vibrating wire sensors that operate at frequencies from 1000 Hz to
3500 Hz, because of the operating range of the excitation system. However, it is possible to
update it to work at other frequencies, it being necessary to change the parameters of the
band-pass filter and also the firmware of the microcontroller that performs the excitation.
This work contributed to making the measurement of a vibrating wire sensor faster, and more
data can be stored on a computer. Since the microcontroller system can perform the data
acquisition, it is possible to develop a system for data transmission, for example, over a CAN
network or Ethernet.
5. Acknowledgements
To PTI C&T, the Center for Advanced Studies on Safety of Dams (CEASB), the State
University of Western Paraná, and everybody connected directly or indirectly to this research.
6. References
[1] R. Pallás-Areny and J. G. Webster, "Sensors and Signal Conditioning", John Wiley &
Sons, Inc., 1991.
[2] Silveira, J.F.A., Instrumentação e Segurança de Barragens de Terra e Enrocamento,
Oficina de Texto, São Paulo, 2006.
[3] Silva, T.I., Carvalho A.A., Leister D.D., Silva A.C.R. (2001), “Desenvolvimento de um
Sensor a Corda Vibrante para Aplicações em Engenharia de Reabilitação” – II
Congresso Latino-americano de Engenharia Biomédica
260
Numerical Study of Thin-GaN LED with Holes on n-electrode
Farn-Shiun Hwu
Department of Mechanical Engineering, Taoyuan Innovation Institute of Technology, Jhongli
City, 32091, Taiwan, R.O.C.
E-mail address: hfs@tiit.edu.tw
Abstract
A novel design of an n-electrode with holes is proposed for application in Thin-GaN
light-emitting diodes (LEDs). The influence of the n-electrode with holes on the thermal and
electrical characteristics of a Thin-GaN LED chip is investigated by a 3-D numerical
simulation. For n-electrodes with and without holes, the variations in the current density and
temperature distributions in the active layer are very small. The percentages of light output
from these holes are 29.8% and 38.5% for cases with 5 μm holes and 10 μm holes,
respectively, when the side length of the n-electrode (L) is 200 μm. Furthermore, the
percentage increases with the size of the n-electrode. Thus, the light output can be increased
2.45 times using the n-electrode-with-holes design. The wall-plug efficiency (WPE) is also
improved from 2.3% to 5.7%. The most appropriate n-electrode and hole sizes are determined
by WPE analysis.
Keywords: Thin-GaN LED, Numerical Simulation, WPE, Temperature, Current Density.
1. Introduction
Light-emitting diodes have many advantages over conventional light sources, such as their
narrow spectrum, long lifetime, and good mechanical stability [1]. High-brightness LEDs
have been proven to have excellent abilities to function in back and general lighting
applications [2]. There are two major methods that can be used to increase the light output of
an LED chip: enlarging the area of a single LED chip or manufacturing a monolithic LED
chip array [3, 4]. When the area of an LED chip is enlarged, it results in an increase in the
input current and the heat dissipation. Therefore, several methods have been developed to
improve the current distribution and heat transfer in LED chips by means of altering the
design of the electrode pattern and the geometry of the structure [5-8]. The Thin-GaN LED
has also demonstrated great potential for high power illumination usage [7, 9-11]. The better
current spreading and superior heat dissipation of the Thin-GaN LED allow it to operate
under higher power conditions. A better design for the electrode pattern can increase the
light-emitting ratio and decrease the thermal burden in the Thin-GaN LED [1]. The current
distribution in an LED chip is usually estimated from the light output [7, 9] and is affected by
the material properties of the chip, as well as the size and pattern of the electrode [7, 9-11].
The current crowding effect can be improved by the insertion of a current blocking layer
(CBL) into the epitaxial structure of the Thin-GaN LED [7]. Many different LED chip
structures have been designed and fabricated in the search for superior LED performance [1,
3, 7, 12]. However, the mass fabrication procedure for LEDs is expensive and
time-consuming.
Several numerical models have been proposed to explain the current spreading phenomenon
and to find possible ways to obtain better device geometry for LEDs [1, 4, 8, 13-15]. Tu et al.
improved the design of the n-electrode patterns to obtain more uniform current spreading [1].
Hwang et al. [16] developed a three-dimensional numerical simulation model based on the
concept of series and parallel connections with which to investigate current spreading in an
LED. The three-dimensional numerical models use the continuity equation for electronic
transport to simulate the electric potential and current in lateral-injection LED chips. This has
proven to be much more economical than other methods [8, 13]. Using this approach, the
simulation of complex LED structures becomes possible. The influence of the thermal effect
on the performance of the LED has been considered in some studies [7, 9, 10, 12, 16, 17]. In
our previous study, a numerical model which considered the thermal effect was proposed for
simulating a vertical LED chip [18]. The simulation results for different areas of the
n-electrode and the influence of CBL are discussed. The results are found to agree with the
experimental results obtained in the study of Kim et al. [7].
The main problem with Thin-GaN LEDs is the shadowing effect of the n-electrode when
current crowding under the n-electrode is serious. Recently, a novel n-electrode with holes
has been manufactured with powder metallurgy technology to solve this problem. In the
present study, we modify the 3-D numerical model developed in our previous work [18] to
simulate the influence of the n-electrode with holes. This is used to investigate the thermal
and electrical characteristics of Thin-GaN LED chips operating under high power conditions.
The effects of the hole size are considered. In addition, the wall-plug efficiency (WPE) is also
discussed in our numerical model.
2. Numerical Simulation
A Thin-GaN LED chip is analyzed in the present study. A schematic representation of a
cross-section in the lateral direction is shown in Fig. 1. The full chip dimensions are 600 ×
600 μm². The thicknesses of the n-type GaN layer, active layer and p-type GaN layer are 2.5,
0.07 and 0.07 μm, respectively. A p-type electrode is formed by the application of a 0.93 μm
thick Ag/Ni/Au metalized layer to the entire wafer (600 × 600 μm²). The n-type electrode is
formed by the application of a 0.7 μm thick Ti/Au metalized layer with various square
dimensions and different widths (L = 100, 200, 300, 400, 500 and 600 μm).
Figure 1 A cross-sectional schematic representation of a Thin-GaN LED chip in the lateral
direction, where L represents the size of the n-electrode.
The continuity equation for electronic transport in an LED chip is

∇·(σ∇V) = 0 ,    (1)
where σ is the conductivity and V is the electrical potential. The resistivities of the n- and
p-type cladding layers are ρn = 1×10⁻² Ω·cm and ρp = 14 Ω·cm, respectively; the other
resistivities, for the metallic electrodes, are ρAg = 1.59×10⁻⁶ Ω·cm, ρNi = 6.9×10⁻⁶ Ω·cm,
ρAu = 2×10⁻⁶ Ω·cm, and ρTi = 4.2×10⁻⁵ Ω·cm. The specific p-contact resistance and the
specific n-contact resistance are set to be 2.8×10⁻³ Ω·cm² and 3.6×10⁻⁴ Ω·cm², respectively. Following
upon our previous work [18], the current through the active layer is assumed to move in the
direction perpendicular to the layer. The equation for calculating the electrical potential in the
LED is solved using the finite-element method (FEM). The equivalent conductivity for each
element in the active layer is calculated by

σ = (le · Je) / Vj ,    (2)
where le is the elemental thickness of the mesh; Vj is the voltage drop across the active layer;
and Je is the elemental current density. The current behavior through the active layer of the
LED chip is dominated by the following Shockley equation:

Je = J0 [ exp( e·Vj / (n·k·T) ) − 1 ] ,    (3)
where J0 is the saturation current density; e is the elementary charge (1.6×10⁻¹⁹ C); n is the
ideality factor; k is the Boltzmann constant (1.38×10⁻²³ J/K); and T is the absolute
temperature. It is well known that J0 and n are dependent on the material quality and/or
device structure. The parameters used for the geometric dimensions and for the Shockley
equation are the same as those used in previous studies [7]: i.e., the saturation current and n
are set to be 4.72×10⁻²² A and 2.5, respectively. The p-pad is set to have a uniform input
current and the n-pad is set as the ground. With the exception of the electrodes, the rest of the
boundaries in the LED chip are all assumed to be insulated. The relationship between J0 and
T developed by Millman and Grabel [19] is also adopted in the present study as

J0(T) = J0|₃₀₀K · 2^((T−300)/10) .    (4)

The equation of conduction heat transfer with the heat source for the steady state is

∇·(kc∇T) + q = 0 ,    (5)
where kc is the thermal conductivity. Based on the law of energy conservation, the input
electrical power can be divided into two major parts: one is the light output power and the
other is heat generation. Given the same assumptions for the active layer as in Eq. (2), we
propose the following heat generation term q in the active layer [18]:

q = Je · [ Vj − (ħω/e) · ηint · ηext · exp( −(T − 300)/1600 ) ] / le ,    (6)
where ħ is the reduced Planck constant; ω is the angular frequency; ηint is the internal
quantum efficiency at room temperature; ηext is the light extraction efficiency at room
temperature. The heat generated by light absorption inside the LED chip is assumed to be
included in this q term. In the present study, the value of the external quantum efficiency
(ηint×ηext ) is 25%, as obtained in the literature [18]. The heat generation term per unit volume
q due to Joule heating in the other layers of the LED chip is
.
q  J e  V
(
7
)
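To illustrate Eqs. (3) and (4), the short Python sketch below evaluates the temperature-scaled saturation current and the resulting current density for a given junction voltage; the junction voltage used in the example is an arbitrary illustrative value, not one from the paper.

```python
import math

E_CHARGE = 1.6e-19    # elementary charge (C)
K_BOLTZ = 1.38e-23    # Boltzmann constant (J/K)
J0_300K = 4.72e-22    # saturation current at 300 K, as set in the paper
N_IDEAL = 2.5         # ideality factor, as set in the paper

def j0(T):
    """Temperature scaling of the saturation current, Eq. (4)."""
    return J0_300K * 2.0 ** ((T - 300.0) / 10.0)

def je(vj, T):
    """Current density through the active layer, Eq. (3)."""
    return j0(T) * (math.exp(E_CHARGE * vj / (N_IDEAL * K_BOLTZ * T)) - 1.0)

# Ratio of the current at 310 K to that at 300 K for a fixed, illustrative Vj:
print(je(2.8, 310.0) / je(2.8, 300.0))
```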
The thermal conductivities of the materials are summarized in Table 1 [18]. The thermal boundary conditions of
the top, lateral and bottom surfaces of the LED chip are shown by
n̂ · (k∇T) = he (Tinf − T) ,    (8)
where n̂ is the unit normal vector of the interface; Tinf is the air temperature; and he the equivalent heat transfer
coefficient. A copper slug with a board is usually used to dissipate the heat generated from the LED chip. The
size is set to be about 7.2 cm² in the present study. Natural convection conditions affect the surfaces with he = 5
W/m²·K and Tinf = 300 K. The convection from the bottom surface of the slug is replaced by a convection
condition on the chip with the equivalent heat transfer coefficient he; in this case, he is about 40000 W/m²·K.
Table 1 Thermal conductivities used in the simulation.
Material   kc (W/m·K)
Ni         55.2
Ag         374
Au         290
Ti         17.1
GaN        130
A self-developed correlation coupling the thermal and electrical equations is added to the
FEM software (COMSOL Multiphysics) to obtain the temperature, the electrical potential,
and the current density in the LED chip. In the present study, the simulation is performed in a
three-dimensional, steady-state condition; the mesh for each layer is made up of triangular
prism elements. The mesh numbers for n-electrodes with holes are much higher than for
those without holes due to the structure of the holes. After convergence testing, the mesh
numbers are found to be 2656, 10681 and 18406 for the cases without holes, with 5 μm holes
and with 10 μm holes, respectively, as shown in Fig. 2. The central region of the n-electrode
still reserves a circular area 100 μm in diameter for wire bonding. Utilizing a similar testing
procedure, the relative tolerance is set to 1×10⁻³ for the variables of temperature and
voltage.
Figure 2 Triangular prism elements of each layer in the LED chip (n-electrode without holes, with 10 μm holes, and with 5 μm holes).
3. Results and Discussion
Fig. 3 displays the current densities in the active layer for an input current of 100 mA for
the cases where L = 200 μm: without holes, with 5 μm holes and with 10 μm holes. The
simulated current density distributions are almost the same. Current crowding beneath the
n-electrode is pronounced in every case.
Figure 3 Current densities in the active layer with an input current of 100 mA for the case of L = 200 μm (without holes, with 10 μm holes, and with 5 μm holes).
Fig. 4 shows the temperature distribution in the active layer for the cases of L = 200 μm,
without holes, with 5 μm holes and with 10 μm holes, under the same operating conditions.
There is little difference (about 0.1 K) in the maximum temperature in the active layer
between the three cases, and the temperature distributions also show the same pattern.
According to the analyses of the current distribution and temperature distribution, the
influence of the holes in the n-electrode seems very small.
Figure 4 Temperature distribution in the active layer for the cases with L = 200 μm (without holes: Max 333.34 K, Min 309.624 K; with 10 μm holes: Max 333.444 K, Min 309.623 K; with 5 μm holes: Max 333.469 K, Min 309.622 K).
To address the shadowing effect of the n-electrode, the current density distributions in the
active layer, including the shadow of the n-electrode, are shown in Fig. 5 for the different
cases. For the case with L = 200 μm, the area beneath the n-electrode is only 1/9 of the total
area of the active layer, but the average current density beneath the n-electrode is 3.78 times
that of the rest of the area (see Fig. 3). The percentages of the light output power from the
holes can be integrated by the present numerical simulation, and the comparison between the
two hole sizes for different n-electrode sizes is shown in Fig. 6. The results show that the
n-electrode with 10 μm holes is better, offering a light output power about 2.45 times that of
n-electrodes without holes.
Figure 5 Current density distributions in the active layer including the shadow of the n-electrode (with 10 μm holes, with 5 μm holes, and without holes).
Figure 6 Percentages of the light output from the holes as a function of the n-electrode size L, for 5 μm and 10 μm holes.
The wall-plug efficiency of the LED chip is defined as the output light power divided by the
total input electrical power. Here, the output light power is taken as the electrical power
generated in the residual region of the active layer not covered by the n-electrode, minus the
total heat generated in the same region. The WPEs for different sizes of n-electrodes with
5 μm and 10 μm holes are also examined, and the results are compared with the cases of
n-electrodes without holes, as shown in Fig. 7. The best size for the n-electrode is L = 200
μm; in the case with 10 μm holes, the WPE is increased from 2.33% to 5.72% by the
n-electrode-with-holes design.
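Written as a formula, this definition of the wall-plug efficiency reads (our notation, not the paper's):

WPE = P_light / P_input = (P_residual − Q_residual) / P_input ,

where P_residual is the electrical power delivered to the region of the active layer not covered by the n-electrode and Q_residual is the heat generated in that region.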
Figure 7 WPE results for different sizes of n-electrode for the cases with and without holes.
4. Conclusions
In the present study, a design of an n-electrode with holes is proposed for application in
Thin-GaN LEDs. The effect of the n-electrode-with-holes design on the thermal and electrical
characteristics of a Thin-GaN LED chip is investigated by a new numerical model. The
variations in the current density and temperature distributions in the active layer are very
small for n-electrodes with and without holes. The percentage of light output power from the
holes ranges from 3.5% to 37.5% and from 3.1% to 49% for the cases with 5 μm and 10 μm
holes, respectively, for n-electrodes with different side lengths (L). The light output power
can thus be increased 2.45 times by the n-electrode-with-holes design. The wall-plug
efficiency is also improved from 2.3% to 5.7% for the case of an L = 200 μm n-electrode with
10 μm holes. WPE analysis is used to determine the better n-electrode and hole sizes.
5. Acknowledgement
The authors gratefully acknowledge the support of the National Science Council of
Taiwan, R.O.C. for this work, through Grant No. NSC 101-2221-E-253-006.
6. References
[1] S.H. Tu, J.C. Chen, F.S. Hwu, G.J. Sheu, F.L. Lin, S.Y. Kuo, J.Y. Chang, C.C. Lee, Characteristics of current distribution by designed electrode patterns for high power ThinGaN LED, Solid-State Elec., 2010, Vol. 54, pp. 1438-1443.
[2] Y.C. Hsu, Y.K. Lin, M.H. Chen, C.C. Tsai, J.H. Kuang, S.B. Huang, H.L. Hu, Y.I Su, W.H. Cheng, Failure mechanisms associated with lens shape of high-power LED modules in aging test, IEEE Trans. on Elec. Devices, 2008, Vol. 55, pp. 689-691.
[3] A. Chakraborty, L. Shen, H. Masui, S.P. DenBaars, U.K. Mishra, Interdigitated multipixel arrays for the fabrication of high-power light-emitting diodes with very low series resistances, Appl. Phys. Lett., 2006, Vol. 88, pp. 181120-3.
[4] J.P. Ao, Monolithic integration of GaN-based LEDs, J. of Phys.: Conference Series, 2011, Vol. 276, pp. 012001-4.
[5] J.I. Shim, Design and characterization issues in GaN-based light emitting diodes, Proc. of SPIE, 2008, Vol. 7135, pp. 71350C.
[6] P. Wang, W. Wei, B. Cao, Z. Gan, S. Liu, Simulation of current spreading for GaN-based light-emitting diodes, Opt. & Laser Tech., 2010, Vol. 42, pp. 737-740.
[7] H. Kim, K.K. Kim, K.K. Choi, H. Kim, J.O. Song, J. Cho, K.H. Baik, C. Sone, Y. Park, T.Y. Seong, Design of high-efficiency GaN-based light-emitting diodes with vertical injection geometry, Appl. Phys. Lett., 2007, Vol. 91, pp. 023510-1.
[8] G.J. Sheu, F.S. Hwu, J.C. Chen, J.K. Sheu, W.C. Lai, Effect of the electrode pattern on current spreading and driving voltage in a GaN/Sapphire LED chip, J. Electrochem. Soc., 2008, Vol. 155, pp. H836-H840.
[9] J.T. Chu, C.C. Kao, H.W. Huang, W.D. Liang, C.F. Chu, T.C. Lu, H.C. Kuo, S.C. Wang, Effect of different n-electrode patterns on optical characteristics of large-area p-side down InGaN light-emitting diodes fabricated by laser lift-off, Jpn. J. Appl. Phys., 2005, Vol. 44, pp. 7910-7912.
[10] S.J. Wang, K.M. Uang, S.L. Chen, Y.C. Yang, S.C. Chang, T.M. Chen, C.H. Chen, B.W. Liou, Use of patterned laser liftoff process and electroplating nickel layer for the fabrication of vertical-structured GaN-based light-emitting diodes, Appl. Phys. Lett., 2005, Vol. 87, pp. 011111-3.
[11] D.W. Kim, H.Y. Lee, G.Y. Yeom, Y.J. Sung, A study of transparent contact to vertical GaN-based light-emitting diodes, J. Appl. Phys., 2005, Vol. 98, pp. 053102-4.
[12] H. Kim, J. Cho, J.W. Lee, S. Yoon, H. Kim, C. Sone, Y. Park, T.Y. Seong, Consideration of the actual current-spreading length of GaN-based light-emitting diodes for high-efficiency design, IEEE J. Quantum Electron., 2007, Vol. 43, pp. 625-632.
[13] J.C. Chen, G.J. Sheu, F.S. Hwu, H.I. Chen, J.K. Sheu, T.X. Lee, C.C. Sun, Electrical-optical analysis of a GaN sapphire LED chip by considering the resistivity of the current-spreading layer, Opt. Rev., 2009, Vol. 16, pp. 213-215.
[14] M.V. Bogdanov, K.A. Bulashevich, I.Y. Evstratov, A.I. Zhmakin, S.Y. Karpov, Coupled modeling of current spreading, thermal effects and light extraction in III-nitride light-emitting diodes, Semicond. Sci. Technol., 2008, Vol. 23, pp. 125023.
[15] M.V. Bogdanov, K.A. Bulashevich, I.Y. Evstratov, S.Y. Karpov, Current spreading, heat transfer, and light extraction in multipixel LED array, Phys. Stat. Solidi (c), 2008, Vol. 5, pp. 2070-2072.
[16] S. Hwang, J. Shim, A method for current spreading analysis and electrode pattern design in light-emitting diodes, IEEE Trans. Electron Devices, 2008, Vol. 55, pp. 1123-1128.
[17] H. Kim, S.J. Park, H. Hwang, N.M. Park, Lateral current transport path, a model for GaN-based light-emitting diodes: Applications to practical device designs, Appl. Phys. Lett., 2002, Vol. 81, pp. 1326-1328.
[18] F.S. Hwu, J.C. Chen, S.H. Tu, G.J. Sheu, H.I. Chen, J.K. Sheu, A numerical study of thermal and electrical effects in a vertical LED chip, J. Electrochem. Soc., 2010, Vol. 157, pp. H31-H37.
[19] J. Millman, A. Grabel, Microelectronics, 2nd ed., McGraw-Hill Book Company, New York, U.S.A., 1987.
Environmental Science
10:15-12:00, December 15, 2012 (Meeting Room 3)
248: Removal of Nickel Ions from Industrial Wastewater Using Sericite
Choong Jeon       Gangneung-Wonju National University
Taik-Nam Kwon     Gangneung-Wonju National University
248
Removal of Nickel Ions from Industrial Wastewater Using Sericite
Choong Jeon a, Taik-Nam Kwon b
a BioChemical Engineering, Gangneung-Wonju National University, Email: metaljeon@gwnu.ac.kr
b BioChemical Engineering, Gangneung-Wonju National University
Abstract
The applicability of sericite to the removal and recovery of Ni(II) from actual industrial
wastewater containing Na(I) was studied. The removal efficiency for Ni(II) was not affected
by the Na(I), which generates ionic strength. The highest removal efficiency of sericite for
Ni(II) reached about 93% at pH 7.5, and silanol (SiO2) and aluminol (Al2O3) groups are
likely to play an important role in the adsorption process. Sericite had a high Ni(II) adsorption
capacity of about 44 mg/g at pH 7.5. The whole adsorption process was completed within
120 min, and the removal efficiency of sericite was higher than that of Amberlite IR 120 plus
resin. The effect of temperature on the removal efficiency of Ni(II) was negligible. In
addition, Ni(II) previously adsorbed onto sericite could be easily and completely eluted by
HNO3. Therefore, it can be concluded that an adsorption process using sericite can be applied
to the recovery of Ni(II) in an actual wastewater treatment system; furthermore, the technique
could replace conventional treatment processes such as solvent extraction and ion exchange
resins.
Electrical and Electronic Engineering II
14:30-16:15, December 15, 2012 (Meeting Room 3)
268: Coordination of Voltage Regulation Control between STATCOM and OLTC
San-Yi Lee        Taipei Chengshih University of Science and Technology
Jer-Ming Chang    Taipei Chengshih University of Science and Technology

299: Maximising Incident Diffuse and Direct Solar Irradiance on Photovoltaic Panels Through the Use of Fixed Tilt and Limited Azimuth Tracking
Yun Fun Ngo       National University of Singapore
Fu Song           National University of Singapore
Benjamin Kho      National University of Singapore

343: A Modified Correlation Coefficient of Circularly Polarized Waves for the UMCW Radar Frequency Band
Deock-Ho Ha       Pukyong National University
268
Coordination of voltage regulation control between STATCOM and OLTC
San-Yi Lee
Taipei Chengshih University of Science and Technology,
No. 2, Xueyuan Rd., Peitou, Taipei City, TAIWAN
sylee@tpcu.edu.tw
Abstract
In addition to its power oscillation damping function, the STATCOM also has the ability of
quick voltage regulation for power quality improvement. Hence, when a STATCOM is
installed with an OLTC, a coordinated voltage control scheme, in which the STATCOM
responds to transient voltage variations and the OLTC to long-term voltage variations, is
required to reserve the operating margin of the STATCOM for emergency conditions and to
reduce the operating losses. In this paper, an independent control scheme for the voltage
regulation of the STATCOM is proposed and compared with a basic interconnected control
scheme. It is shown that the proposed control method has the advantages of low
control-system cost and a low number of OLTC tap changes, while the basic interconnected
control method has the advantages of a better voltage profile and low operating losses.
1. Introduction
Due to the advancement of power electronics, the STATCOM (Static Synchronous Compensator)
has gradually become one of the major options for the improvement of power system
transient stability and dynamic voltage regulation, because of its ability to control the output
of reactive power at high speed through the switching of power electronic elements.
On the other hand, in order to provide customers with a stable and near-rated voltage, power
utilities use the OLTC (On-Load Tap-Changer) to adjust the secondary voltage by changing
the transformer turns ratio, which in practice is achieved by automatically moving the tap
position based on the detected bus voltage. Because tap movement is a mechanical action, the
response time of the OLTC is much longer than that of the STATCOM. Fig. 1 shows the basic
structures and control mechanisms of the STATCOM and the OLTC.
Fig. 1 Basic structures and control mechanisms of STATCOM (VSC or CSC) and OLTC
In order to reduce the operating cost of the STATCOM and to keep capacity immediately
available for transient voltage events, it is preferred that the STATCOM respond only to
transient voltage changes. That is, long-term voltage regulation is the duty of the OLTC, with
the benefit of reducing the steady-state output of the STATCOM. Therefore, when both the
STATCOM and the OLTC are used to control the same bus voltage, a coordination scheme
between these two devices is needed to achieve good performance in terms of voltage
regulation, tap change frequency, and operating cost.
In many circumstances the STATCOM is installed long after the OLTC, so the coordination
strategies are of two types: independent control and interconnected control. The independent
control type does not need to change the control system of the already existing OLTC,
whereas the interconnected control type needs to modify the OLTC control system.
Fujii [1] describes the independent regulator of an in-service 80 MVA STATCOM in Japan,
which adopts a droop voltage control curve with dual slopes. In [2], even though the
STATCOM and OLTC share the same control system, it is in essence an independent control
scheme, wherein the detected bus voltage is sent to the STATCOM directly but passed
through a LPF (Low Pass Filter) before being sent to the OLTC. For the interconnected
control type, both Khederzadeh [3] and Paserba [4] propose using the output of the STATCOM
to bias the voltage error seen by the OLTC, forcing the tap changer to move and thereby
release the output capacity of the STATCOM. In [5], an artificial neural network (ANN)-based
controller with load P, load Q, tap position, and STATCOM output as its inputs is proposed as
the tap controller of the OLTC.
In the interconnected control type, the controller of the OLTC receives the amplitude of the
STATCOM output continuously, so it can react and reduce the output of the STATCOM
quickly; however, it requires more complex control hardware, or even changes to the OLTC
control hardware. Hence, for a newly installed STATCOM, the independent control method is
a worthwhile option. In this paper, a novel independent control method is proposed which
uses the filtered bus voltage as the reference voltage of the droop control curve, reducing the
STATCOM output in the steady state. The performance of the proposed control method and
its comparison with the basic interconnected control method are validated and analyzed
through simulations in MATLAB/SIMULINK.
2. The Voltage Regulation Principles of STATCOM
Fig. 2(a) shows the effect of the droop-curve slope on the STATCOM output. Clearly, the
output of the STATCOM is inversely proportional to the slope of the droop curve. Fujii [1]
uses this principle to design a dual-slope droop curve for the coordination of STATCOM and
OLTC. Fig. 2(b) shows the effect of the droop-curve reference voltage on the STATCOM
output, which is the principle used by the proposed control scheme. As can be seen from
Fig. 2(b), when Vref, the reference voltage of the droop control curve, is adjusted to
equal the Thevenin equivalent voltage (VS), the output of the STATCOM becomes zero. Hence,
when Vref is the low-pass-filtered bus voltage, the output capacity of the STATCOM is
released in the steady state.
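The droop law itself can be stated in a few lines. The following Python sketch
(illustrative only, per-unit quantities; it is not part of the paper's Simulink model)
reproduces both effects in Fig. 2:

def statcom_current(v_bus, v_ref, slope):
    """Reactive current demanded by the droop curve: I = (Vref - Vbus) / slope.
    Positive values are taken as capacitive (raising the bus voltage)."""
    return (v_ref - v_bus) / slope

# Fig. 2(a): for the same voltage error, a steeper droop slope gives less output.
print(statcom_current(0.98, 1.00, 0.03))  # 3% droop -> ~0.67 pu output
print(statcom_current(0.98, 1.00, 0.06))  # 6% droop -> ~0.33 pu output

# Fig. 2(b): moving Vref onto the prevailing bus voltage zeroes the output.
print(statcom_current(0.98, 0.98, 0.03))  # Vref = Vbus -> 0.0 pu output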
(a) Effect of slope
(b) Effect of reference voltage
Fig. 2 The effects of droop control curve on STATCOM output
3. Coordination Methods
3.1 Proposed Independent Control Scheme
Fig. 3 shows the scheme of the proposed independent controller, which uses the STATCOM
Vref to control its output as illustrated in Fig. 2(b). The main feature of this controller
is that the bus voltage is filtered with an LPF before being fed to the STATCOM Vref.
Because the transient portion of the bus voltage does not pass through the filter, a
difference appears between Vref and Vin, and the STATCOM provides reactive current to
compensate the rapid voltage variation. In the steady state, the bus voltage passes through
the LPF, so there is no difference between Vref and Vin and the STATCOM outputs no
compensation current.
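A minimal discrete-time sketch of this behaviour follows, assuming a first-order LPF (the
paper does not state the filter order; all parameter values here are illustrative):

import math

def statcom_output(v_bus_series, f_cutoff_hz=0.005, dt=0.1, slope=0.03):
    """Filtered-Vref droop controller of Fig. 3; returns per-unit outputs."""
    alpha = dt / (dt + 1.0 / (2.0 * math.pi * f_cutoff_hz))  # first-order LPF gain
    v_ref = v_bus_series[0]
    outputs = []
    for v_in in v_bus_series:
        v_ref += alpha * (v_in - v_ref)         # Vref tracks the slow component
        outputs.append((v_ref - v_in) / slope)  # droop law on the fast residue
    return outputs

# A sustained 3% step drop in bus voltage: the output jumps immediately,
# then decays as the filtered Vref settles on the new steady-state voltage.
q = statcom_output([1.0] * 50 + [0.97] * 1000)
print(q[50], q[-1])   # near full output just after the step, small at the end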
Fig. 3 Proposed independent control scheme
Fig. 4 Interconnected control scheme [3]
3.2 Basic Interconnected Control Scheme
Fig. 4 shows the scheme of the basic interconnected controller. The OLTC Vin is the
summation of the bus voltage and the STATCOM output. Hence, if the output of the STATCOM is
capacitive, the OLTC Vin is lower than the actual bus voltage, the OLTC controller moves
the tap to raise the bus voltage, and the STATCOM then reduces its output accordingly. On
the contrary, if the output of the STATCOM is inductive, the OLTC Vin is higher than the
actual bus voltage, the OLTC controller moves the tap to lower the bus voltage, and the
STATCOM increases its output accordingly. In this way, the OLTC is forced to release the
reactive power already supplied by the STATCOM.
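The mechanism can be sketched as follows (per-unit quantities; the paper does not detail
the block parameters beyond the unity gain and the 0.1 dead-zone limits used in Section
4.3, and the sign convention here is chosen so a capacitive output lowers the regulator
input, as described above):

def oltc_input(v_bus, q_statcom, gain=1.0, dead_zone=0.1):
    """Voltage seen by the OLTC regulator: bus voltage biased by STATCOM output."""
    bias = gain * q_statcom
    if abs(bias) <= dead_zone:   # dead zone block: small outputs are ignored,
        bias = 0.0               # preventing OLTC/STATCOM hunting
    return v_bus - bias          # capacitive q > 0 makes Vin look low -> tap up

print(oltc_input(1.00, 0.40))   # 0.60 -> regulator sees a low voltage, taps up
print(oltc_input(1.00, 0.05))   # 1.00 -> inside the dead zone, no forced tap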
4. Simulation Results
To verify the validity of the proposed independent controller and compare it with the
interconnected controller, MATLAB/Simulink is used to carry out the simulations.
4.1 Without STATCOM
Fig. 5 shows the sample system without the STATCOM; the system parameters are as follows:
System short-circuit capacity: 5000 MVA
Transformer: 161/22.8 kV, 60 MVA, Z = 17% with X/R = 25
OLTC: voltage step = 1.25%, tap position range = 8, dead band = 1.25%, delay time = 5 s
Primary-side fault interval: 10 s to 10 s + 10 cycles, 50 MW, 500 MVar
Fixed load = 30 MW, 15 MVar; fixed capacitor = 10 MVar
Switching-in interval of the variable load: 30 s to 150 s
Capacity of the variable load: 20 MW / 10 MVar
Fig. 5 Sample system without STATCOM
(a) OLTC tap position
(b) Bus voltage
(c) Detail of the bus voltage around the primary-side fault
Fig. 6 Results for the system without STATCOM
Fig. 6 shows the simulation results of the sample system without the STATCOM. As can be
seen in Fig. 6(a), the switching on and off of the variable load results in 6 tap changes
of the OLTC. However, due to the slow reaction of the OLTC, both switching on and switching
off introduce a 4% voltage deviation. This 4% deviation persists for 5 seconds because of
the 5-second OLTC delay time. The OLTC then continues to move its tap every 5 seconds until
the bus voltage is within the dead band of the OLTC regulator. For the fault occurring at
10 s, even though the bus voltage drops to 0.91 pu, no tap movement occurs because of the
fault's short duration.
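The regulator behaviour just described amounts to a dead band plus a delay timer. The
sketch below is an assumed implementation (the timer handling and tap sign convention are
not detailed in the paper):

def oltc_step(v_bus, tap, timer, v_set=1.0, dead_band=0.0125,
              delay=5.0, dt=0.1, tap_range=8):
    """One simulation step of the OLTC regulator; returns the new (tap, timer)."""
    error = v_bus - v_set
    if abs(error) <= dead_band:
        return tap, 0.0                 # inside the dead band: the timer resets
    timer += dt
    if timer >= delay:                  # the error has persisted for 5 seconds
        tap += 1 if error < 0 else -1   # tap up to lift a low voltage (assumed)
        tap = max(-tap_range, min(tap_range, tap))
        timer = 0.0                     # at most one tap per delay interval
    return tap, timer

# A sustained 4% sag: the first tap comes after 5 s, then one every 5 s until
# the voltage re-enters the dead band (each step lifts the bus ~1.25% here).
tap, timer, v = 0, 0.0, 0.96
for _ in range(200):                    # 20 s of simulated time at dt = 0.1 s
    tap, timer = oltc_step(v, tap, timer)
    v = 0.96 + 0.0125 * tap
print(tap, v)                           # -> 3 taps, voltage back inside the band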
Fig. 7 Sample system for basic interconnected control scheme
(a) OLTC tap position
(b) Bus voltage
(c) Detail of the bus voltage around the primary-side fault
(d) STATCOM output
Fig. 8 Results for the system without coordination
4.2 Without coordination between STATCOM and OLTC
Fig. 7 shows the sample system with the basic interconnected controller. The STATCOM
capacity is 15 MVA and the droop slope is 3%. To simulate the scenario without coordination
between the STATCOM and the OLTC, the gain on the STATCOM output is set to zero. Fig. 8
shows the simulation results. The switching on and off of the variable load results in only
1 tap change because of the STATCOM compensation. However, the STATCOM maintains a 40%
output throughout the on-duration of the variable load, which induces large operating
losses and leaves insufficient capacity margin for transient events. The STATCOM outputs
almost 100% of its rating to compensate the fault occurring at 10 s, improving the bus
voltage from 0.91 pu to 0.95 pu.
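As a back-of-envelope check (not given in the paper), the sustained 40% output is
consistent with the 3% droop slope:

\[ \Delta V = \text{slope} \times Q_{\text{out}} = 0.03 \times 0.40 = 0.012\ \text{pu} \approx 1.2\% \]

that is, the residual voltage error held by the STATCOM sits just inside the OLTC's 1.25%
dead band, which explains why the tap does not move again while the STATCOM keeps carrying
the deviation.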
4.3 Interconnected Control Scheme
As shown in Fig. 7, the gain on the STATCOM output is set to unity to simulate the
interconnected control scheme. The limits of the dead zone block are set to 0.1 to prevent
oscillatory operation between the OLTC and the STATCOM around the desired operating point.
Fig. 9 shows the simulation results for the sample system with the interconnected control
scheme. The switching on and off of the variable load results in 6 tap changes because the
OLTC is forced to release the reactive power already supplied by the STATCOM. With each tap
change, the STATCOM decreases its output. The compensation result for the primary-side
fault is the same as in the scenario without coordination, shown in Fig. 8(c).
(a) OLTC tap position
(b) Bus voltage
(c) STATCOM output
Fig. 9 Results for the system with basic interconnected control scheme
4.4 Proposed Independent Control Scheme
Fig. 10 shows the sample system with the proposed independent controller, wherein the
cutoff frequency of the LPF is 0.005 Hz. The simulation results in Fig. 11 show that the
switching on and off of the variable load results in 4 tap changes of the OLTC. Because the
input signal to Vref is filtered, the STATCOM gradually reduces its output. The deviation
of the bus voltage therefore grows gradually until it exceeds the OLTC regulator dead band,
which activates the tap changer. The simulation results show that the proposed independent
control scheme achieves better voltage control and less STATCOM output than the case
without coordination. The proposed scheme requires more tap changes than the case without
coordination, but fewer than the interconnected control scheme. Because the transient fault
does not affect the Vref signal into the STATCOM, the proposed method has the same
compensation effect as the interconnected control scheme for the fault occurring at 10 s.
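For reference, assuming a first-order filter (the filter order is not stated in the paper),
the chosen cutoff corresponds to a time constant of

\[ \tau = \frac{1}{2\pi f_c} = \frac{1}{2\pi \times 0.005\ \text{Hz}} \approx 32\ \text{s} \]

so Vref, and with it the STATCOM output, relaxes over tens of seconds: slow enough to leave
the 10-cycle fault fully compensated, yet fast enough to hand the 120-second load step over
to the OLTC.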
Fig. 10 Sample system with the proposed independent control scheme
(a) OLTC tap position
(b) Bus voltage
(c) Detail of the bus voltage around the primary-side fault
(d) STATCOM output
Fig. 11 Results for the system with the proposed independent control scheme
5. Acknowledgments
This work was supported by the National Science Council, Taiwan, ROC, under grant NSC
100-3113-P-194-001.
6. Conclusions
In this paper, an independent control scheme for the voltage regulation of a STATCOM is
proposed and compared with the basic interconnected control scheme. The following
conclusions are obtained:
1. Without coordination between the STATCOM and the OLTC, voltage regulation is poor, and
the STATCOM incurs large operating losses and cannot provide sufficient reactive power
compensation for transient demands. However, the scenario without coordination has the
fewest tap changes.
2. The basic interconnected control scheme achieves the best voltage regulation and the
least STATCOM output in the steady state, but it cannot reduce the number of tap changes.
3. The proposed independent control scheme gives intermediate voltage regulation, STATCOM
losses, and number of tap changes compared with the no-coordination and basic
interconnected schemes, and it avoids the hardware needed to integrate the control systems
of the STATCOM and the OLTC.
7. References
[1] T. Fujii, H. Chisyaki, H. Teramoto, T. Sato, Y. Matsushita, Y. Shinki, S. Funahashi,
and N. Morishima, Coordinated voltage control and continuous operation of the 80MVA
STATCOM under commercial operation, Proceedings of the Power Conversion Conference, PCC
'07, Nagoya, Japan, April 2-5, 2007, pp. 969-974.
[2] M. S. El Moursi, B. Bak-Jensen, and M. H. Abdel-Rahman, Coordinated voltage control
scheme for SEIG-based wind park utilizing substation STATCOM and ULTC transformer, IEEE
Transactions on Sustainable Energy, Vol. 2, No. 3, 2011, pp. 246-255.
[3] M. Khederzadeh, Coordination control of STATCOM and ULTC of power transformers,
Proceedings of the 42nd International Universities Power Engineering Conference, UPEC 2007,
Brighton, United Kingdom, September 4-6, 2007, pp. 613-618.
[4] J. J. Paserba, D. J. Leonard, N. W. Miller, S. T. Naumann, M. G. Lauby, and F. P.
Sener, Coordination of a distribution level continuously controlled compensation device
with existing substation equipment for long term VAr management, IEEE Transactions on
Power Delivery, Vol. 9, No. 2, 1994, pp. 1034-1040.
[5] G. W. Kim and K. Y. Lee, Coordination control of ULTC transformer and STATCOM based on
an artificial neural network, IEEE Transactions on Power Systems, Vol. 20, No. 2, 2005,
pp. 580-586.
299
Maximising Incident Diffuse and Direct Solar Irradiance on Photovoltaic
Panels Through the Use of Fixed Tilt and Limited Azimuth Tracking
Ngo Yun Fun a, Chiam Fu Song b, Benjamin Kho c
a National University of Singapore, Faculty of Engineering, Engineering Science Programme,
E-mail address: ngoyunfun@gmail.com
b National University of Singapore, Faculty of Engineering, Engineering Science Programme,
E-mail address: u0905386@nus.edu.sg
c National University of Singapore, Faculty of Engineering, Engineering Science Programme,
E-mail address: Benjamin.kho@nus.edu.sg
Abstract
Photovoltaic systems have recently become popular for generating electricity on-site at
residential and commercial buildings. This paper investigates the effects of azimuth
tracking and tilt of the photovoltaic panels installed on a 4-person residential house
designed by the National University of Singapore for Solar Decathlon 2013. The photovoltaic
system is first designed and sized based on EnergyPlus energy-consumption simulations of
the house, using weather data files for Datong, China, and for Singapore. The effect of
azimuth tracking is then investigated to maximise the incident irradiance on the
photovoltaic panels. Optimum azimuth angles of the panels are needed to receive an optimum
blend of diffuse and direct solar irradiance for maximum total incident irradiation. Fixed
tilt angles of the panels towards the Sun at solar noon are also investigated for their
effects on the performance of the photovoltaic system. These effects are investigated with
EnergyPlus simulations.
Keyword: Photovoltaic; tracking; diffuse irradiance; direct irradiance
1. Introduction
The National University of Singapore (NUS) solar house for Solar Decathlon China 2013 is a
4-person residential house designed to incorporate green technologies. The house is
designed to be energy self-sufficient, i.e., to generate as much electrical energy as it
consumes. The fundamental consideration in the design of a photovoltaic (PV) system for a
residential house is the energy consumption of the house. Azimuth tracking and tilt
specific to the design of the house are then investigated to determine their effects on the
performance of the PV system.
2. Photovoltaic System
2.1 Meeting the energy demand of the building
According to energy-consumption simulations done in the EnergyPlus simulation software, the
building will require approximately 214 kWh during the competition period in August 2013
[1]. The simulation results are based on 2005 weather data for Datong, China. The PV array
is built to cater to the calculated consumption while providing a reasonable margin of
safety to account for fluctuations in weather, electricity usage, and other factors.
36 panels of 250 W each are estimated to meet the building's simulated energy requirements.
This number is determined by calculation, using an average of 4 sun hours per day on the
horizontal plane in Datong during August. The array is estimated to produce at least 260
kWh during the competition period in August in Datong. The energy demand of the building
and the energy generated by the PV system are simulated using weather data from 22nd to
30th August 2005 in Datong, China (Fig. 1); the energy generated is sufficient to meet the
energy demand in the simulation [1]. For the energy simulation based on Singapore weather
data provided by the International Weather for Energy Calculations (IWEC), the PV array
produces approximately 11900 kWh of energy annually.
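A quick consistency check of these sizing figures (the 0.8 derating factor for inverter and
balance-of-system losses is an assumption, not a number from the paper):

n_panels, p_panel_kw = 36, 0.250
sun_hours_per_day = 4        # horizontal-plane average for Datong in August
days = 9                     # competition window, 22nd to 30th August

peak_kw = n_panels * p_panel_kw                 # 9.0 kW array
ideal_kwh = peak_kw * sun_hours_per_day * days  # 324 kWh before losses
print(peak_kw, ideal_kwh, round(ideal_kwh * 0.8))  # ~259 kWh, consistent with
                                                   # the "at least 260 kWh" estimate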
The simulated energy generation here is based on a PV system with panels fixed on the
horizontal plane; no solar tracking is taken into account. To implement the tracking
system, the diffuse and direct solar irradiance at different hours of the day must be
investigated. During dawn and dusk the irradiance is mainly diffuse; only later in the day
does direct irradiance dominate over diffuse irradiance (Annex A). The following sections
present the performance of the PV panels with a limited azimuth tracking system and with a
fixed tilt.
Based on the specifications of the solar panels, the annual power generation can be
calculated. The calculated annual power generation is shown in Table 1 below. The
calculations are based on EnergyPlus simulations with weather data files from the China
Meteorological Bureau and the International Weather for Energy Calculations (IWEC). The
overall PV system efficiency is assumed to be 12%, with inverter and system efficiencies
taken into account.
2.2 PV system specifications
The PV array consists of 36 REC250PE panels producing a total peak power output of 9 kW.
The direct current (DC) generated by part of the array will be converted to alternating
current (AC) using inverters and fed into the power grid, while a small section of the
array will supply power to a Low-Voltage Direct Current (LVDC) micro-grid within the
building.
The PV array is split into 2 sections. The north section has an 18-panel array with a
tracking system, while the south section has (12+6)-panel arrays without a tracking system
but with a fixed 10° tilt towards the south. Figure 2 shows the plan view of the array's
sections.
2.3 Incident solar radiation
The incident total solar radiation can be calculated as follows [4]:

\[ I_t = I_b + I_d \]

where \(I_b\) is the incident direct solar radiation and \(I_d\) is the incident diffuse
solar radiation, both per unit area of the PV panel. \(I_b\) can be calculated as follows
[4]:

\[ I_b = I_{DN} \cos\theta \]

where \(I_{DN}\) is the direct radiation from the Sun normal to the rays and \(\theta\) is
the incident angle between the normal of the PV panel and the incoming direct radiation.
The cosine of the incident angle, \(\cos\theta\), is a function of the Sun's altitude
\(\beta\), the solar azimuth angle \(\phi\), the panel tilt angle \(\Sigma\), and the panel
azimuth angle \(\psi\). \(I_d\) can be calculated as follows [4]:

\[ I_d = C\,I_{DN}\,\frac{1+\cos\Sigma}{2} \]

where \(C\) is the sky diffuse factor, a close approximation of which is [4]:

\[ C = 0.095 + 0.04\,\sin\!\left[\frac{360}{365}\,(n-100)\right] \]

where \(n\) is the day number of the year.
From the equations, horizontal placement of the PV panels is ideal for maximising incident
diffuse irradiation, whereas for maximising incident direct irradiation the normal of the
panel plane should point along the direction of the Sun's light. EnergyPlus assisted in
determining the optimal angles for the PV panels to receive the maximum amount of incident
solar radiation.
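These relations can be sketched in a few lines of Python. The explicit cos θ expansion
below is the standard identity for the angles listed above, and the clear-sky relations
follow the ASHRAE-style model the cited equations appear to use; treat it as an
illustrative implementation rather than the authors' exact code:

import math

def incident_total(i_dn, sun_alt, sun_az, tilt, panel_az, day_of_year):
    """Total incident radiation on a tilted panel (W/m^2, angles in degrees)."""
    b, s = math.radians(sun_alt), math.radians(tilt)
    gamma = math.radians(sun_az - panel_az)   # sun-panel azimuth difference
    cos_theta = (math.cos(b) * math.cos(gamma) * math.sin(s)
                 + math.sin(b) * math.cos(s))
    i_direct = i_dn * max(cos_theta, 0.0)     # no direct gain from behind the panel
    c = 0.095 + 0.04 * math.sin(math.radians(360.0 * (day_of_year - 100) / 365.0))
    i_diffuse = c * i_dn * (1.0 + math.cos(s)) / 2.0  # isotropic sky on a tilt
    return i_direct + i_diffuse

# A horizontal panel collects all of the sky diffuse radiation; tilting toward
# the Sun trades a little diffuse for more direct, as the section argues.
print(incident_total(800, 60, 180, 0, 180, 235))   # horizontal
print(incident_total(800, 60, 180, 30, 180, 235))  # 30 deg tilt facing the Sun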
2.2.1 North 18-Panel Array with Tracking System
The north section of the roof holds the 18-panel array with a one-axis tracking system to
maximise energy harvesting. An important consideration for this section of the PV array is
the tight placement of the PV panels on the roof due to the architectural design. Ecotect
simulations showed “inter-panel” shading during the early and late hours of the day
(Fig. 3). A 20° tilt limit is implemented to prevent the panels from