Advanced Manufacturing and Automation VIII

This volume is a compilation of selected papers from the 8th International Workshop of Advanced Manufacturing and Automation (IWAMA 2018), held in Changzhou, China, on September 25–26, 2018. Most of the topics focus on novel techniques for manufacturing and automation in Industry 4.0 and the smart factory. These contributions are vital for maintaining and improving economic development and quality of life. The proceedings will assist academic researchers and industrial engineers in implementing the concepts and theories of Industry 4.0 in industrial practice, in order to respond effectively to the challenges posed by the 4th industrial revolution and the smart factory.


Lecture Notes in Electrical Engineering 484

Kesheng Wang · Yi Wang · Jan Ola Strandhagen · Tao Yu
Editors

Advanced Manufacturing and Automation VIII

Lecture Notes in Electrical Engineering Volume 484

Board of Series Editors
Leopoldo Angrisani, Napoli, Italy
Marco Arteaga, Coyoacán, México
Bijaya Ketan Panigrahi, New Delhi, India
Samarjit Chakraborty, München, Germany
Jiming Chen, Hangzhou, P.R. China
Shanben Chen, Shanghai, China
Tan Kay Chen, Singapore, Singapore
Ruediger Dillmann, Karlsruhe, Germany
Haibin Duan, Beijing, China
Gianluigi Ferrari, Parma, Italy
Manuel Ferre, Madrid, Spain
Sandra Hirche, München, Germany
Faryar Jabbari, Irvine, USA
Limin Jia, Beijing, China
Janusz Kacprzyk, Warsaw, Poland
Alaa Khamis, New Cairo City, Egypt
Torsten Kroeger, Stanford, USA
Qilian Liang, Arlington, USA
Tan Cher Ming, Singapore, Singapore
Wolfgang Minker, Ulm, Germany
Pradeep Misra, Dayton, USA
Sebastian Möller, Berlin, Germany
Subhas Mukhopadhyay, Palmerston North, New Zealand
Cun-Zheng Ning, Tempe, USA
Toyoaki Nishida, Kyoto, Japan
Federica Pascucci, Roma, Italy
Yong Qin, Beijing, China
Gan Woon Seng, Singapore, Singapore
Germano Veiga, Porto, Portugal
Haitao Wu, Beijing, China
Junjie James Zhang, Charlotte, USA

Lecture Notes in Electrical Engineering (LNEE) is a book series which reports the latest research and developments in Electrical Engineering, namely:

• Communication, Networks, and Information Theory
• Computer Engineering
• Signal, Image, Speech and Information Processing
• Circuits and Systems
• Bioengineering
• Engineering

The audience for the books in LNEE consists of advanced level students, researchers, and industry professionals working at the forefront of their fields. Much like Springer’s other Lecture Notes series, LNEE will be distributed through Springer’s print and electronic publishing channels.

More information about this series at

Kesheng Wang · Yi Wang · Jan Ola Strandhagen · Tao Yu
Editors


Advanced Manufacturing and Automation VIII


Editors

Kesheng Wang
Department of Mechanical and Industrial Engineering
Norwegian University of Science and Technology
Trondheim, Sør-Trøndelag Fylke, Norway

Yi Wang
School of Business
Plymouth University
Plymouth, UK

Jan Ola Strandhagen
Department of Mechanical and Industrial Engineering
Norwegian University of Science and Technology
Trondheim, Sør-Trøndelag Fylke, Norway

Tao Yu
Shanghai Second Polytechnic University
Shanghai, China

ISSN 1876-1100  ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-981-13-2374-4  ISBN 978-981-13-2375-1 (eBook)
Library of Congress Control Number: 2015413778

© Springer Nature Singapore Pte Ltd. 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore


IWAMA—the International Workshop of Advanced Manufacturing and Automation—aims at providing a common platform for academics, researchers, practicing professionals, and experts from industry to interact, discuss trends and advances, and share ideas and perspectives in the areas of manufacturing and automation.

IWAMA began at Shanghai University in 2010. In 2012 and 2013, it was held at the Norwegian University of Science and Technology; in 2014 at Shanghai University again; in 2015 at Shanghai Polytechnic University; in 2016 at the University of Manchester; and in 2017 at Changshu Institute of Technology. The sponsors organizing the IWAMA series have expanded to many universities throughout the world, including University of Plymouth, Changzhou University, Norwegian University of Science and Technology, SINTEF, University of Manchester, Changshu Institute of Technology, Shanghai University, Shanghai Polytechnic University, Xiamen University of Science and Technology, Tongji University, University of Malaga, University of Firenze, University of Stavanger, The Arctic University of Norway, Shandong Agricultural University, China University of Mining and Technology, Indian National Institute of Technology, Donghua University, Shanghai Jiao Tong University, Dalian University, St. Petersburg Polytechnic University, Hong Kong Polytechnic University, and China Instrument and Control Society. As IWAMA becomes an annual event, we expect more sponsors from universities and industry to participate in the international workshop as co-organizers.

Manufacturing and automation have assumed paramount importance and are vital to the economy of a nation and the quality of life. The field of manufacturing and automation is advancing at a rapid pace, and new technologies are emerging. The challenges faced by today's engineers force them to keep on top of emerging trends through continuous research and development.
IWAMA 2018 took place at Changzhou University, China, on September 25–26, 2018, organized by Changzhou University, the University of Plymouth, the Norwegian University of Science and Technology, and SINTEF. The program was designed to improve manufacturing and automation technologies for the next generation through



discussion of the most recent advances and future perspectives, and to engage the worldwide community in a collective effort to solve problems in manufacturing and automation. Manufacturing research includes a focus on the transformation of present factories toward reusable, flexible, modular, intelligent, digital, virtual, affordable, easy-to-adapt, easy-to-operate, easy-to-maintain, and highly reliable "smart factories." IWAMA 2018 therefore mainly covered four topics in manufacturing engineering:

1. Industry 4.0
2. Manufacturing systems and technologies
3. Production management
4. Design and optimization.

All papers submitted to the workshop were subjected to strict peer review by at least two expert referees. Finally, 89 papers were selected for inclusion in the proceedings after a revision process. We hope that the proceedings will not only give readers a broad overview of the latest advances and a summary of the event, but also provide researchers with a valuable reference in this field. In particular, we worked together with the Changzhou Science and Technology Bureau to organize an industry session, in which more than 150 companies worldwide joined to discuss how AI and robotics support Industry 4.0 and China Manufacturing 2025.

On behalf of the organizing committee and the international scientific committee of IWAMA 2018, I would like to take this opportunity to express my appreciation for all the kind support, from the contributors of high-quality keynotes and papers, and from all the participants. My thanks are extended to all the workshop organizers and paper reviewers, to Changzhou University and the University of Plymouth for their financial support, and to all co-sponsors for their generous contributions. Thanks are also given to Jin Yuan, Quan Yu, Wanping Wu, Lin Liu, Lin Zou, Guohong Dai, and Ziqiang Zhou for their hard editorial work on the proceedings and arrangement of the workshop.

Yi Wang Chair of IWAMA 2018


Organized and Sponsored by
Changzhou University (CCZU), China
University of Plymouth (PLYU), UK
Norwegian University of Science and Technology (NTNU), Norway
Shanghai Second Polytechnic University (SSPU), China
Foundation for Industrial and Technical Research (SINTEF), Norway

Co-organized by
Changshu Institute of Technology (CSLG), China
Tongji University (TU), China
Shandong Agriculture University (SDAU), China
University of Stavanger (UiS), Norway

Honorary Chairs
Minglun Fang
Kesheng Wang

General Chairs
Yi Wang
Jan Ola Strandhagen
Tao Yu




Local Organizing Committee
Guohong Dai (Chair)
Lin Liu
Lin Zou
Ziqiang Zhou
Wanping Wu
Li Yang

International Program Committee
Jan Ola Strandhagen, Norway
Kesheng Wang, Norway
Odd Myklebust, Norway
Per Schjølberg, Norway
Knut Sørby, Norway
Erlend Alfnes, Norway
Heidi Dreyer, Norway
Torgeir Welo, Norway
Kristian Martinsen, Norway
Hirpa L. Gelgele, Norway
Wei D. Solvang, Norway
Yi Wang, UK
Chris Parker, UK
Jorge M. Fajardo, Spain
Torsten Kjellberg, Sweden
Fumihiko Kimura, Japan
Gustav J. Olling, USA
Michael Wozny, USA
Byoung K. Choi, Korea
Wladimir Bodrow, Germany
Guy Doumeingts, France
Van Houten, the Netherlands
Peter Bernus, Australia
Janis Grundspenkis, Latvia
George L. Kovacs, Hungary
Rinaldo Rinaldi, Italy
Gaetano Aiello, Italy
Romeo Bandinelli, Italy
Yafei He, China
Jawei Bai, China
Jinhui Yang, China
Dawei Tu, China
Minglun Fang, China
Binheng Lu, China
Xiaoqien Tang, China
Ming Chen, China
Xinguo Ming, China
Keith C. Chan, China
Meiping Wu, China
Lanzhoung Guo, China
Xiaojing Wang, China
Jin Yuan, China
Yongyi He, China
Chaodong Li, China
Cuilian Zhao, China
Chuanhong Zhou, China
Jianqing Cao, China
Yayu Huang, China
Shirong Ge, China
Guijuan Lin, China
Shanming Luo, China
Dong Yang, China
Zumin Wang, China
Guohong Dai, China
Sarbjit Singh, India
Vishal S. Sharma, India
Hongjun Ni, China
Ziqian Zhou, China
Jianqien Chao, China
Xifang Zhu, China



Organizing Committee
Guohong Dai (Chair)
Xifang Zhu (Chair)
Yue Zhang
Xuedong Liu
Lin Liu
Lin Zou

Secretariat
Wangping Wu
Jin Yuan
Ziqiang Zhou

Ziqiang Zhou
Odd Myklebust
Quan Yu
Jin Yuan
Wanping Wu
Jianqing Cao




Contents

Industry 4.0

Industry 4.0 Closed Loop Tolerance Engineering Maturity Evaluation (Kristian Martinsen)
A DEMO of Smart Manufacturing for Mass Customization in a Lab (Jinghui Yang and Timmie Abrahamsson)
A Fault Diagnosis Method Based on Mathematical Morphology for Bearing Under Multiple Load Conditions (Yang Ge, Lanzhong Guo, and Yan Dou)
An Industry 4.0 Technologies-Driven Warehouse Resource Management System (Haishu Ma)
Collaboration with High-Payload Industrial Robots: Simulation for Safety (Beibei Shu and Gabor Sziebig)
Depth Image Restoration Using Non-negative Matrix Factorization Recovery (Suolan Liu and Hongyuan Wang)
Design and Manufacture of Elevator Model Control System Based on PLC and HMI (Yan Dou, Lanzhong Guo, Yang Ge, and Yuchao Wang)
Design of Integrated Information Platform for Smart Ship (Guiqin Li, Jinfeng Shi, Qiuyu Zhu, Jian Lan, and Peter Mitrouchev)
Development of the Prediction Software for Mechanical Properties of Automotive Plastic Materials at High and Low Temperatures (Guiqin Li, Peng Pan, and Peter Mitrouchev)
Gear Fault Diagnosis Method Based on Feature Fusion and SVM (Dashuai Zhu, Lizheng Pan, Shigang She, Xianchuan Shi, and Suolin Duan)
HDPS-BPSO Based Predictive Maintenance Scheduling for Backlash Error Compensation in a Machining Center (Zhe Li, Yi Wang, Kesheng Wang, and Jingyue Li)
Influence of the Length-Diameter Ratio and the Depth of Liquid Pool in a Bowl on Separation Performance of a Decanter Centrifuge (Huixin Yuan, Yuheng Zhang, Shuangcheng Fu, and Yusheng Jiang)
LSTM Based Prediction and Time-Temperature Varying Rate Fusion for Hydropower Plant Anomaly Detection: A Case Study (Jin Yuan, Yi Wang, and Kesheng Wang)
Wind Turbine System Modelling Using Bond Graph Method (Abdulbasit Mohammed and Hirpa G. Lemu)
On Opportunities and Limitations of Additive Manufacturing Technology for Industry 4.0 Era (Hirpa G. Lemu)
Operator 4.0 – Emerging Job Categories in Manufacturing (Harald Rødseth, Ragnhild Eleftheriadis, Eirin Lodgaard, and Jon Martin Fordal)
Reliability Analysis of Centrifugal Pump Based on Small Sample Data (Hongfei Zhu, Junfeng Pei, Siyu Wang, Jianjie Di, and Xianru Huang)
Research on Horizontal Vibration of Traction Elevator (Lanzhong Guo and Xiaomei Jiang)
Research on Real-Time Monitoring Technology of Equipment Based on Augmented Reality (Lilan Liu, Chen Jiang, Zenggui Gao, and Yi Wang)
Research on the Relationship Between Sound and Speed of a DC Motor (Xiliang Zhang, Sujuan Wang, Zhenyu Chen, Zhiwei Shen, Yuxin Zhong, and Jingguan Yang)
Review and Analysis of Processing Principles and Applications of Self-healing Composite Materials (Yohannes Regassa, Belete Sirabizuh, and Hirpa G. Lemu)
Scattered Parts for Robot Bin-Picking Based on the Universal V-REP Platform (Lin Zhang and Xu Zhang)
Brain Network Analysis Based on Resting State Functional Magnetic Resonance Image (Xin Pan, Zhongyi Jiang, Suhong Wang, and Ling Zou)
Development of Bicycle Smart Factory and Exploration of Intelligent Manufacturing Talents Cultivation (Yu'an He)
The Journey Towards World Class Maintenance with Profit Loss Indicator (Harald Rødseth, Jon Martin Fordal, and Per Schjølberg)
Initiating Industrie 4.0 by Implementing Sensor Management – Improving Operational Availability (Jon Martin Fordal, Harald Rødseth, and Per Schjølberg)

Manufacturing System and Technologies

A Prediction Method for the Ship Rust Removal Effect of Pre-mixed Abrasive Jet (Qing Guo, Shuzhen Yang, Minghui Fang, and Tao Yu)
A Review of Dynamic Control of the Rigid-Flexible Macro-Micro Manipulators (Xuan Gao, Zhenyu Hong, and Dongsheng Zhang)
Analysis of Speech Enhancement Algorithm in Industrial Noise Environment (Lilan Liu, Gan Sun, Zenggui Gao, and Yi Wang)
Application of Machine Learning Methods for Prediction of Parts Quality in Thermoplastics Injection Molding (Olga Ogorodnyk, Ole Vidar Lyngstad, Mats Larsen, Kesheng Wang, and Kristian Martinsen)
Application of Machine Learning Methods to Improve Dimensional Accuracy in Additive Manufacturing (Ivanna Baturynska, Oleksandr Semeniuta, and Kesheng Wang)
Design and Implementation of PCB Detection and Classification System Based on Machine Vision (Zhiwei Shen, Sujuan Wang, Jianfang Dou, and Zimei Tu)
Diagnosis of Out-of-Control Signals in Multivariate Manufacturing Processes with Random Forests (Zheng Jian, Beixin Xia, Chen Wang, and Zhaoyang Li)
Effect of Processing Parameters on the Relative Density of AlSi10Mg Processed by Laser Powder Bed Fusion (Even Wilberg Hovig, Håkon Dehli Holm, and Knut Sørby)
Experimental Study on Measuring the Internal Porosity of Plant Canopy by Laser Distance Measuring (Huan Li, Xinghua Liu, Xuemei Liu, and Yang Li)
Laser Stripe Matching Based on Multi-layer Refraction Model in Underwater Laser Scanning System (Jinbo Li, Xu Zhang, Can Zhang, Pingping He, and Dawei Tu)
Numerical Simulation of Internal Flow Field in Needle Valve Body Extrusion Grinding Process (Yong Zeng, Shuzhen Yang, Minghui Fang, and Tao Yu)
One Dimensional Camera of Line Structured Light Probe Calibration (Qian Zhan and Xu Zhang)
Optimization of Sample Size for Two-Point Diameter Verification in Coordinate Measurements (Petr Chelishchev and Knut Sørby)
Paper Currency Sorting Equipment Based on Rotary Structure (Lizheng Pan, Dashuai Zhu, Shigang She, Jing Ding, and Zeming Yin)
Precision Analysis of the Underwater Laser Scanning System to Measure Benthic Organisms (Pingping He, Xu Zhang, Jinbo Li, Liangliang Xie, and Dawei Tu)
Recognition Algorithm Based on Convolution Neural Network for the Mechanical Parts (Duan Suolin, Yin Congcong, and Liu Maomao)
The Research of Three-Dimensional Morphology Recovery of Image Sequence Based on Focusing Method (Qian Zhan)
Research on Motion Planning of Seven Degree of Freedom Manipulator Based on DDPG (Li-lan Liu, En-lai Chen, Zeng-gui Gao, and Yi Wang)
Research on Straightness Error Evaluation Method Based on Search Algorithm of Beetle (Chen Wang, Cong Ren, Baorui Li, Yi Wang, and Kesheng Wang)

Production Management

Analysis of Machine Failure Sorting Based on Directed Graph and DEMATEL (Min Ji)
Applying Decision Tree in Food Industry – A Case Study (James Mugridge and Yi Wang)
Applying Decision Tree in National Health Service (Freddy Youd and Yi Wang)
Cognitive Maintenance for High-End Equipment and Manufacturing (Yi Wang, Kesheng Wang, and Guohong Dai)
Decision-Making and Supplier Trust (Abbie Buchan and Yi Wang)
Groups Decision Making Under Uncertain Conditions in Relation—A Volkswagen Case Study (Arran Roddy and Yi Wang)
Health Detection System for Skyscrapers (Lili Kou, Xiaojun Jiang, and Qin Qin)
Integrated Production Plan Scheduling for Steel Making-Continuous Casting-Hot Strip Based on SCMA (Lilan Liu, Pengfei Sun, Zenggui Gao, and Yi Wang)
Knowledge Sharing in Product Development Teams (Eirin Lodgaard and Kjersti Øverbø Schulte)
Multi-site Production Planning in a Fresh Fish Production Environment (Quan Yu and Jan Ola Strandhagen)
Product Design in Food Industry - A McDonald's Case (Polly Dugmore and Yi Wang)
Research and Practice of Bilingual Teaching in Fundamental of Control Engineering (Wangping Wu and Tongshu Hua)
Research on Assembly Line Planning and Simulation Technology of Vacuum Circuit Breaker (Wenhua Zhu and Xuqian Zhang)
Shop Floor Teams and Motivating Factors for Continuous Improvement (Eirin Lodgaard and Linda Perez Johannessen)
Structural Modelling and Automation of Technological Processes Within Net-Centric Industrial Workshop Based on Network Methods of Planning (Vsevolod Kotlyarov, Igor Chernorutsky, Pavel Drobintsev, Nikita Voinov, and Alexey Tolstoles)
Student Learning Information Collection and Analysis System Based on Mobile Platform (Chuanhong Zhou, Chong Zhang, and Chao Dai)
Task Modulated Cortical Response During Emotion Regulation: A TMS Evoked Potential Study (Wenjie Li, Yingjie Li, and Dan Cao)
The Research on the Framework of Machine Fault Diagnosis in Intelligent Manufacturing (Min Ji)
Utilization of MES System, Enablers and Disablers (Inger Gamme and Ragnhild J. Eleftheriadis)

Design and Optimization

Application of CNN Deep Learning in Product Design Evaluation (Baorui Li, Yi Wang, Kesheng Wang, and Jinghui Yang)
An FFT-Based Technique for Underwater Image Stitching (Dawei Li, Xu Zhang, and Dawei Tu)
An Experimental Study on Dynamic Parameters Identification of a 3-DOF Flight Simulator Platform (Zhen-Yu Hong, Xuan Gao, Jia-Ren Liu, Dong-Sheng Zhang, and Zhi-Xu Zhang)
Analysis of Soil Disturbance Process and Effect by Novel Subsoiler Based on Discrete Element Method (Jinguang Li, Ziru Niu, Xuemei Liu, and Jin Yuan)
Design and Analysis of Drive System of Distributing Machine (Guiqin Li, Xuehong Li, and Peter Mitrouchev)
Design and Experimental Study of the Spinach Continuous Harvester (Yeben Song, Liangliang Zou, Xuemei Liu, and Jin Yuan)
Design and Test of End-Effectors of Control System for White Asparagus Selective Harvesting Robot (Baogang Dou, Yang Li, Xuemei Liu, and Jin Yuan)
Design of Greenhouse Environmental Monitoring System Based on Arduino and ZigBee (Lijuan Shi, Qing Li, and Shengqiang Qian)
Design of Marine Elevator Car Frame (Xiaomei Jiang, Lanzhong Guo, and Shuguang Niu)
Dynamic Balance Analysis of Crankshaft Based on Three-Dimensional Model (Chuanhong Zhou, Xiaoyu Jiang, and Xiaotong Wang)
Effect of Stress Triaxiality on Plastic Deformation and Damage Evolution for Austenite Steel (Ying Wang, Jian Peng, and Kaishang Li)
Intelligent Fertilization Strategy Based on Integration of Soil Moisture and Forecast Rainfall (Lin Liu, Yang Li, Ming Hao, Xuemei Liu, Kun Yang, and Jin Yuan)
Investigation into Velocity Choice for Determining Aerodynamic Resistance in Brush Seals (Yuchi Kang, Meihong Liu, Xiangping Hu, and Jinbin Liu)
Key Structure Innovation and Optimization Design of Bucket Elevator (Fang Ma, Feng Xiong, and Guiqin Li)
Multibody System Modelling and Simulation: Case Study on Excavator Manipulator (Yohannes Regassa and Hirpa G. Lemu)
Numerical Simulation of the Flow Field Characteristic in a Two-Dimensional Brush Seal Model (Jinbin Liu, Meihong Liu, Yuchi Kang, and Yongfa Tan)
Research on Design Conflict Based on Complex Network (Guiqin Li, Maoheng Zhou, and Peter Mitrouchev)
Simulation and Analysis for Overlapping Probability of ADS-B 1090ES Signal (Dong Zhu, Chengtao Feng, Kaibin Chu, and Zhengwei Zhu)
Stress Analysis of Pre-stressed Steel Wire Winding Ultrahigh Pressure Vessels Based on Birth and Death Element Method (Yi Lu and Jie Zhu)
Structural Design and Kinematic Analysis of a Weakly Coupled 3T Parallel Mechanism (Wei Zhu and Qian Guo)
Structure Design and Analysis of Coal Drying Equipment (Xinqi Yu and Zhaoyang Wang)
Study on a New Type Cosine Rotator Pump (Min Zou, Yongqiang Qiao, and Liangcai Wu)
Study on Soil Disturbance Behavior of Globoid Subsoiling Shovel Based on Discrete Element Method (Zhenbo Xin, Ziru Niu, Xuemei Liu, and Jin Yuan)
Study on the Torque of Sleeve Permanent Magnetic Couplings (Jian Wu, Xinyong Li, and Lanzhong Guo)
The Influence of T Groove Layout on the Performance Characteristic of Cylinder Gas Seal (Xueliang Wang, Meihong Liu, Xiangping Hu, and Junfeng Sun)

About the Editors

Yi Wang obtained his PhD from the Manufacturing Engineering Center, Cardiff University, in 2008. He is a Lecturer at the Business School, University of Plymouth, UK. Previously, he worked in the Department of Computer Science, University of Southampton, and at the Business School, Nottingham Trent University. He holds various visiting lectureships at several universities worldwide. His research interests include supply chain management, logistics, operations management, culture management, information systems, game theory, data analysis, semantics and ontology analysis, and neuromarketing. He has published 75 technical peer-reviewed papers in international journals and conferences. He co-authored two books, Operations Management for Business and Data Mining for Zero-defect Manufacturing, and is the author of a new book, Intelligent Fashion Supply Chain.

Kesheng Wang holds a PhD in Production Engineering from the Norwegian University of Science and Technology (NTNU), Norway. Since 1993, he has been a Professor in the Department of Mechanical and Industrial Engineering, NTNU, where he is currently Director of the Knowledge Discovery Laboratory (KDL). He is also an active researcher and serves as a technical adviser at SINTEF. He was elected a Member of the Norwegian Academy of Technological Sciences in 2006. He has published 21 books and over 270 technical peer-reviewed papers in international journals and conferences. His current areas of interest are intelligent manufacturing systems, applied computational intelligence, data mining and knowledge discovery, swarm intelligence, condition-based monitoring and structured light systems for 3D measurements and RFID, predictive maintenance, and Industry 4.0.

Jan Ola Strandhagen is Research Director of the research center SFI Norman at SINTEF. He is also a Professor in the Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU).
He holds a PhD in Production Engineering from NTNU (1994). His research interests have focused on production management and control, logistics, manufacturing




economics and strategies. He has managed and performed R&D projects in close collaboration with a wide variety of Norwegian companies and has participated as a researcher and a project manager in several European projects.

Tao Yu is the President of Shanghai Second Polytechnic University (SSPU), China, and a Professor at Shanghai University (SHU). He received his PhD from SHU in 1997. He is a Member of the Group of Shanghai Manufacturing Information and a Committee Member of the International Federation for Information Processing (IFIP TC5). He is also an Executive Vice President of the Shanghai Science Volunteer Association and an Executive Director of the Shanghai Science and Art Institute. He has managed and performed about 20 national, Shanghai, and enterprise-commissioned projects. He has published hundreds of academic papers, of which about thirty were indexed by SCI and EI. His research interests are mechatronics, computer-integrated manufacturing systems (CIMS), and grid manufacturing.

Industry 4.0

Industry 4.0 Closed Loop Tolerance Engineering Maturity Evaluation

Kristian Martinsen
Department for Manufacturing and Civil Engineering, Teknologivn 22, 2815 Gjøvik, Norway
[email protected]

Abstract. Closed Loop Tolerance Engineering (CLTE) is introduced as a model of information flow – feed forward and feedback – between functional requirements, tolerance selection, process capabilities and product performance. "Industry 4.0" and "cyber-physical manufacturing systems" open new potentials for information and data exchange along variation management activities when developing, producing and manufacturing products. This paper describes a method for evaluation of the maturity level of the CLTE data and information exchange. The method is based on and validated through empirical findings from field studies in a number of manufacturing companies.

Keywords: Tolerancing · Quality assurance · Closed loop tolerance engineering · Digital manufacturing system

1 Introduction

1.1 Tolerances and Tolerance Engineering

Tolerances are defined in order to limit component and product geometry and to ensure interchangeability, quality and function according to customer demands. The selected tolerances will usually also impact manufacturing and inspection processes, and thus manufacturing costs. In spite of the increasing ability to assess process capabilities and other data, and the increasing number of design software tools, tolerances are still often determined with insufficient insight. This may lead to inappropriate tolerances. Typical errors are tolerances set too tight "to be on the safe side" regarding assembly and product function, and poor tolerance distribution. Geometry features produced with an over-qualified manufacturing process are potentially more expensive than necessary. On the other hand, under-qualified processes lead to problems meeting the quality requirements without sorting or other measures. The literature reports many examples of this: Zhang (1997) [1] states that "many parts and products are certainly over-toleranced or haphazardly toleranced, with predictable consequences". Singh [2] points to the negative effects of inappropriate tolerances: increased cost and lacking product quality. Ali et al. [3] and Krogstie and Martinsen [4] point to the costs and efforts involved in changing tolerances at a later stage.

Adding to this is a seeming lack of attention to tolerance engineering. As Watts [5] states, "all industry is suffering, often unknowingly, of the lack of adequate academic attention on tolerances". He claims tolerancing has "gradually been removed from the curriculum at universities and has been replaced by other product development courses". Oddly enough, popular management paradigms that originate from quality and variation control, such as TQM, Six Sigma and Lean, pay little attention to tolerancing; the focus is mainly on management [4]. "Not only has tolerances low explicit attention within industry, academia and product development literature; managers are lacking tools to address tolerancing activities" [4]. Tolerancing has been "kept in a high degree of technical focus", with focus on norms and standards [6, 7]. There are many different product development methodologies and approaches where tolerances and variation management are addressed, such as Robust Design [8] and Design for Manufacturing (or DfX) [9, 10]. A comprehensive listing of models and management control of product development shows, however, that tolerancing is not addressed in many other approaches [11–14].

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 3–11, 2019.

1.2 Closed Loop Tolerance Engineering

Krogstie and Martinsen have developed a conceptual model of Closed Loop Tolerance Engineering (CLTE) [15]. CLTE (Fig. 1) is a model for "systematic and continuous reuse and understanding of product-related knowledge, with the aim of designing robust products and processes with the appropriate limits of specifications". CLTE sees tolerancing not as limited to the traditional activities of tolerance specification, allocation, modelling/optimization and synthesis, but also as an organizational process, with information flow and the ability to collect, use and reuse data. Preventing problems from occurring, attention to and understanding of tolerances in the whole value chain, and fact-based tolerance engineering are some of the expected benefits. Good tolerance engineering practice includes a collective ability to detect critical situations in the product development phase [16]. A critical situation is a point of decision-making between a desirable and a negative consequence in the future. CLTE is distinguished from other approaches by representing the "skilled knowledge-based collaboration with a specific focus on the importance of defining appropriate tolerances". CLTE has been applied for analyzing tolerance engineering practices in different companies, including a high-precision aerospace company [17].

Fig. 1. The CLTE-model [15]

Industry 4.0 Closed Loop Tolerance Engineering


The CLTE model has a "feed forward" and a "feed-back" information flow dimension. It contains four interconnected activities: 1 - defining functional requirements, 2 - defining tolerances, 3 - considering production capabilities and 4 - confirming functional performance. Furthermore, it contains six pairs of closed-loop relations (1a/b etc., see Fig. 1). The closed-loop relations link activities together, passing information forward in the project flow as well as back to the predecessors in the feed-back dimension. The ability to prepare and utilise information and data from both the feed-forward and the feed-back dimension is the key element. The need for cross-functional teams for product and process development is a well-known concept [18], and the proposed CLTE is cross-functional.
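As a rough illustration of this structure, the activities and paired relation loops can be represented as a small data structure. This is a sketch only: the pairing of loops 1–6 to activities below is invented for illustration and is not taken from Fig. 1.

```python
# Minimal sketch (illustrative, not from the paper) of CLTE activities and
# their paired feed-forward (a) / feed-back (b) relation loops.

ACTIVITIES = {
    1: "Defining functional requirements",
    2: "Defining tolerances",
    3: "Considering production capabilities",
    4: "Confirming functional performance",
}

# Each loop pair links two activities: 'a' passes information forward,
# 'b' returns information to the predecessor. Pairing is an assumption.
LOOPS = {
    "1": (1, 2), "2": (2, 3), "3": (3, 4),
    "4": (1, 3), "5": (2, 4), "6": (1, 4),
}

def flows(activity: int):
    """Return (feed_forward, feed_back) loop labels touching an activity."""
    fwd = [k + "a" for k, (src, _) in LOOPS.items() if src == activity]
    back = [k + "b" for k, (_, dst) in LOOPS.items() if dst == activity]
    return fwd, back

print(flows(2))  # loops where "Defining tolerances" sends and receives
```

A representation like this makes it easy to ask, for any activity, which loops should be feeding it information and which loops it should be feeding back into.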

2 Industry 4.0 CLTE

Industry 4.0 is a strategy for implementing the so-called 4th industrial revolution, and a central concept is cyber-physical manufacturing systems [19], where the physical and the virtual processes provide simultaneous data access and processing. Machine learning/artificial intelligence, sensor-based monitoring and control, multi-agent/holonic systems, (wireless) sensor networks, (big) data mining, virtual/augmented reality etc. are some of the technologies mentioned. Better connectivity, productivity, efficiency, information flow, robustness and flexibility are some of the expectations of Industry 4.0. There is a growing number of scientific articles on tolerance engineering in Industry 4.0. One example is Giannetti [21], who suggests a framework for improving process robustness with quantification of uncertainties in Industry 4.0. She proposes to use big data analysis to find "Likelihood Ratios" for process capabilities used to set robust tolerance limits. Another is Söderberg et al. [22], discussing a digital twin with "geometry representation of the assembly, kinematic relations, FEA functionality, Monte Carlo simulation, material properties and link to inspection data base". One might also argue that the vast number of articles on Computer Aided Tolerancing (including CIRP's own conference track) really are part of the essence of Industry 4.0, although the term "Industry 4.0" is newly "invented" [23–28] (Fig. 2).

The Acatech study Industrie 4.0 Maturity Index [20] defines six stages from "Digitalisation" to "Industrie 4.0". Stages one and two more or less represent current industry status. Stages three to six represent steps from seeing, to understanding, to predicting what will happen and finally to autonomous response. Collecting and displaying data (out of the "silos" and useful across the company), up-to-date models at all times, simulations, optimisation, and ultimately autonomy (response without human assistance) are key competencies. The four last (Industrie 4.0) stages of the Acatech model could mean the following for CLTE:

1. Visibility – what is happening: instant and constant data collection and visualisation along the CLTE model. A "digital shadow" (or twin) across data silos with semantic linking of useful data for tolerancing (see also [22]).



Fig. 2. Acatech Industrie 4.0 maturity index [20]

2. Transparency – why is it happening: the ability to analyse and present data in a useful way for potential users along the CLTE.
3. Prediction – what will happen: the ability to use the data for simulations and optimisations at all levels.
4. Adaptability – autonomous response: self-optimising CLTE without human intervention.

The ultimate goal for CLTE would be an instant and autonomous flow of information and data across the CLTE chain, "translating" information to adapt to the specific use and suggesting decisions for the user. At this level, the product designer would, for example, automatically get relevant process capabilities and a suggested optimised tolerance distribution and process path as an automated relation in the CLTE model.
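The prediction stage relies on simulation. A minimal sketch of the kind of Monte Carlo tolerance stack-up simulation mentioned in the digital-twin context above could look as follows; all nominal dimensions and tolerances are invented for illustration.

```python
import random

# Hedged sketch: Monte Carlo simulation of a linear tolerance stack-up.
# Three parts assemble end-to-end; each length varies normally around its
# nominal with sigma = tolerance / 3. All numbers are invented examples.
NOMINALS = [20.0, 35.0, 15.0]      # mm
TOLERANCES = [0.1, 0.15, 0.1]      # mm, interpreted as +/- 3 sigma

def simulate_stack(n=100_000, seed=42):
    """Return (mean, sigma) of the simulated total stack length."""
    random.seed(seed)
    totals = [sum(random.gauss(nom, tol / 3)
                  for nom, tol in zip(NOMINALS, TOLERANCES))
              for _ in range(n)]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / (n - 1)
    return mean, var ** 0.5

mean, sigma = simulate_stack()
print(f"stack mean = {mean:.3f} mm, sigma = {sigma:.4f} mm")
```

The simulated sigma of the stack can then be compared against the functional requirement on the assembly, which is the kind of loop a digital twin would close automatically.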

3 CLTE Maturity Assessment Model

Based on CLTE, the author here proposes a maturity assessment model for CLTE. The maturity model was developed as a combination of a literature study, discussions with industry partners, and case studies in selected Norwegian companies, mainly in the SFI Manufacturing research centre. The CLTE maturity assessment can be used to evaluate and plan improvements in a company regarding its management of tolerancing and variation. The model consists of two parts: the first part assesses how well the company performs in the 12 relation loops; the second assesses how information flows in the relation loops and how data is stored, assessed and used. Both can be done as a self-assessment by the company and by an external expert. It is recommended to do both, followed by a reflection workshop with discussions on actions for improvement.




3.1 Performance Assessment

The performance assessment is done by grading the company performance according to questions regarding the CLTE relation loops (1a–6a, 1b–6b). Grades range from (1) not applied, (2) poor, (3) medium, (4) good to (5) excellent. The list underneath is a simplified summary of the questions given.

1a How well are functional requirements transformed into tolerancing – by whom, in which form and with which tools?
1b How well is the decision basis for selected tolerances stored and fed back to aid the functional requirements description in following projects?
2a How well do the tolerances fit the manufacturing capabilities? How well are tolerance stack-up [29], critical tolerances and reference surfaces working?
2b How well are existing process capabilities used in tolerancing? How well are quality and productivity data on current products used in tolerancing of new products?
3a Are process capabilities and parameters and their effect on product performance and inspection well known?
3b Are sources of variation in product performance well understood? Is knowledge gained in product performance tests looped back to manufacturing? Can variation in product performance be traced back to variation in the manufacturing processes?
4a How is functional requirements information used in manufacturing? Are critical parameters known and manufacturing and inspection processes sufficiently attended to?
4b How well are process capabilities fed back to (and do they influence) functional requirements?
5a How well are the relations between (critical) tolerances and the product performance understood? How do defined tolerances decide the product performance assessment?
5b How well is the influence of critical tolerances and their variation on the product performance understood?
6a How satisfactory is the product performance according to the functional requirements? How do functional requirements influence the product performance assessment?
6b To what extent is existing product performance fed back to aid the definition of functional requirements in following projects?

The results can be shown as spider diagrams comparing the company assessment, the expert assessment and the wanted future scenario/goal.
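A sketch of the bookkeeping behind such an assessment could look like the following; the grades are invented for illustration (they are not the case-study results), and the ranking simply orders loops by their gap to a target grade.

```python
# Hedged sketch: grades 1-5 for each of the 12 CLTE relation loops,
# compared against a target profile. All numbers are invented examples.
LOOPS = [f"{i}{s}" for i in range(1, 7) for s in ("a", "b")]

company = {"1a": 4, "1b": 2, "2a": 4, "2b": 3, "3a": 3, "3b": 2,
           "4a": 3, "4b": 2, "5a": 4, "5b": 3, "6a": 4, "6b": 2}
goal = {loop: 4 for loop in LOOPS}

def improvement_priorities(current, target):
    """Loops sorted by gap to the target grade, biggest gap first."""
    gaps = {loop: target[loop] - current[loop] for loop in LOOPS}
    return sorted((loop for loop in LOOPS if gaps[loop] > 0),
                  key=lambda l: -gaps[l])

print(improvement_priorities(company, goal))
```

With the grades in this form, the spider diagrams mentioned above are a direct plot of the three profiles (company, expert, goal) over the 12 loop labels.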

3.2 Information and Data Exchange Assessment

The second part of the maturity assessment is the information and data exchange assessment. For each relation loop (1a to 6a, and 1b to 6b) the company must agree on which stage they are at (and wish to be):

Stage 1: No organised information exchange.
Stage 2: Information exchange based on experts' subjective opinions. Cross-functional teams using semi-quantitative tools such as FMEA.
Stage 3: Information exchange containing real data on an ad-hoc basis. Time-consuming data processing and analysis using highly qualified personnel. Unknown or weak data quality with little to no meta data. Data mainly for internal use. Some use of computer-aided decision support.
Stage 4: Systemized (but manual) regular data and information exchange, analysis and computer-aided decision support. Data management with a broader use in mind. Meta data and cross-linked data, but still manual translation of information to adapt to the use. Generally good data quality and ability to assess and grade data quality.
Stage 5: Instant and autonomous data exchange. Automatic translation of data and information to adapt to the specific user. Automated data processing, simulation and optimisation and suggested decisions. Automated assessment, filtering and signal processing for maximum data quality.
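The stage assignment can be sketched as a checklist evaluation. The capability flags and thresholds below paraphrase the five stage descriptions and are illustrative assumptions, not a formal scoring rule from the model.

```python
# Hedged sketch: assigning a CLTE relation loop to an information-exchange
# stage from a checklist of capabilities. Flag names are invented; the
# first (most mature) matching stage wins.

def assess_stage(loop: dict) -> int:
    if loop.get("autonomous_exchange") and loop.get("auto_translation"):
        return 5  # instant, autonomous, user-adapted exchange
    if loop.get("regular_exchange") and loop.get("decision_support"):
        return 4  # systemized but manual exchange with decision support
    if loop.get("real_data"):
        return 3  # real data, but ad hoc and labour-intensive
    if loop.get("expert_opinions"):
        return 2  # subjective expert opinions, e.g. FMEA sessions
    return 1      # no organised information exchange

# Example: a loop exchanging real data ad hoc, with no systematic support.
print(assess_stage({"real_data": True}))
```

In practice the workshop discussion, not a script, decides the stage; the point of the sketch is only that the stage criteria form an ordered ladder.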

3.3 Examples from Industry Case Study

The charts underneath show the results from one industry case study. The company is a global leader in a specific niche as a Tier 1 automotive supplier. It owns its own product patents and develops, manufactures and assembles a complete range of products within its niche. The assessment was made in two workshops separated by an expert mapping and analysis. The expert assessment is based on semi-structured interviews, observations and analyses of a few selected products (Figs. 3 and 4).










Fig. 3. Relation performance assessment

Fig. 4. Information flow assessment

The charts show a typical picture, where the feed-forward loops (1a to 6a) are more advanced than the feed-back loops (1b to 6b). Similar results can be found in other companies. There are some deviations between the expert and the company self-evaluation. This is not untypical; in some cases the companies are "harder" on themselves in the self-assessment than the expert is. The relation performance goals are in this case somewhat ambitious given the current grades, but they are long-term goals where the company believes it has to be.

4 Discussions

4.1 Discussion on the CLTE Maturity Assessment Model

The maturity assessment model is a semi-quantitative model, useful as a tool to map a company and to point at possible improvements. It is not an exact numerical model, and any comparisons between different companies should be made with care. The case companies all measure and store large quantities of data: product tests and numerical models in product development, productivity and capability/variability data in manufacturing, product geometry in inspection processes, and measurements of actual product performance. The data material is, however, usually stored for a specific use, and transferring the data and extracting information for use by other departments is currently difficult. For example, all data from statistical process control are stored, but translating these charts into process capabilities and making them easily usable for product designers and tolerance definition is still not straightforward. This is one of the obstacles the Industry 4.0 paradigm should solve. Stage 5 in the information and data exchange assessment is currently not reachable for most companies. A key to this will be a seamless interconnection of Manufacturing Execution Systems (MES), Product Lifecycle Management (PLM) and Computer Aided Engineering, including Computer Aided Tolerance Engineering software.
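The SPC-to-capability translation mentioned above is, in its simplest form, a small calculation. A hedged sketch, with invented measurements and specification limits, of turning stored SPC data into the capability indices Cp and Cpk that a product designer could query when defining tolerances:

```python
import statistics

# Hedged sketch: translating stored SPC measurements into process
# capability indices. Measurement values and limits are invented examples.
measurements = [10.02, 9.98, 10.01, 9.99, 10.03, 10.00, 9.97, 10.02,
                10.01, 9.99, 10.00, 10.02]
LSL, USL = 9.90, 10.10   # lower/upper specification limits (illustrative)

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)           # sample standard deviation
cp = (USL - LSL) / (6 * sigma)                   # potential capability
cpk = min(USL - mean, mean - LSL) / (3 * sigma)  # capability incl. centring

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

The hard part in practice is not this formula but the data plumbing: getting the measurements, their meta data and their context out of the SPC "silo" and in front of the designer.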

4.2 Shortcomings of the CLTE Model – Future Extensions

The CLTE model currently focuses on the process within one company. Future models must include tolerancing and variation management in the supplier and customer relations of the supply and distribution chain. Furthermore, future CLTE models should include information and data exchange with the product use phase and end-of-life, and possibly remanufacturing of products. One of the current trends is the manufacturer's liability for products after end-of-life (EOL), as well as the extension of the product to a product-service system. Products containing sensors open for new business models, but the data collected could also be used for future CLTE activities, such as functional requirement definitions. Tolerance engineering and variation management will also be vital for circular manufacturing, with increasing remanufacturing of products and components rather than re-melting or disposal at EOL.

5 Conclusions and Further Work

Industry 4.0 will most likely open new opportunities for information flow, data assessment and exchange for variation management and tolerance engineering. This paper has suggested a maturity model that can be used to lift tolerancing on the agenda and point at improvement potentials for companies. It is a proposal for a management tool. Further work should collect longitudinal results from industries using the tool, with measured improvements.

Acknowledgements. The author wishes to thank the discussion partners and the case study companies. The work reported in this paper was based on activities within the centre for research based innovation, SFI Manufacturing in Norway, and is partially funded by the Research Council of Norway under contract number 237900.

References

1. Zhang, C., Wang, H.P.B.: Tolerancing for design and manufacturing. In: Handbook of Design, Manufacturing and Automation, pp. 155–169. Wiley Inc., London (2007)
2. Singh, N.: Integrated product and process design: a multi-objective modeling framework. Robot. Comput. Integr. Manuf. 18(2), 157–168 (2002)
3. Ali, S., Durupt, A., Adragna, P.: Reverse engineering for manufacturing approach: based on the combination of 3D and knowledge information. In: Abramovici, M., Stark, R. (eds.) Smart Product Engineering, pp. 137–146. Springer, Berlin Heidelberg (2013)
4. Krogstie, L., Martinsen, K.: Beyond lean and six sigma; cross-collaborative improvement of tolerances and process variations - a case study. Proced. CIRP 7, 610–615 (2013)
5. Watts, D.: The "GD&T knowledge gap" in industry. In: ASME Conference Proceedings, vol. 2007(48051), pp. 597–604 (2007)
6. Srinivasan, V.: Standardizing the specification, verification, and exchange of product geometry: research, status and trends. Comput. Aided Des. 40(7), 738–749 (2008)
7. Srinivasan, V.: Reflections on the role of science in the evolution of dimensioning and tolerancing standards. In: 12th CIRP Conference on Computer Aided Tolerancing. Elsevier Ltd., Huddersfield (2012)
8. Zhang, J., et al.: A robust design approach to determination of tolerances of mechanical products. CIRP Ann. Manuf. Technol. 59(1), 195–198 (2010)
9. Holt, R., Barnes, C.: Towards an integrated approach to "Design for X": an agenda for decision-based DFX research. Res. Eng. Design 21(2), 123–136 (2010)
10. Zhang, C., Wang, H.P., Li, J.K.: Simultaneous optimization of design and manufacturing tolerances with process (machine) selection. CIRP Ann. 41(1), 569–572 (1992)
11. Brown, S.L., Eisenhardt, K.M.: Product development: past research, present findings, and future directions. Acad. Manage. Rev. 20(2), 343–378 (1995)
12. Horváth, I.: A treatise on order in engineering design research. Res. Eng. Design 15(3), 155–181 (2004)
13. Pahl, G., Beitz, W., Wallace, K.: Engineering Design, XXX, 544 s. Springer, London (1996)
14. Richtnér, A., Åhlström, P.: Top management control and knowledge creation in new product development. Int. J. Op. Prod. Manage. 30(10), 1006–1031 (2010)
15. Krogstie, L., Martinsen, K.: Closed loop tolerance engineering - a relational model connecting activities of product development. Proced. CIRP 3, 519–524 (2012)
16. Badke-Schaub, P., Frankenberger, E.: Management kritischer Situationen - Produktentwicklung erfolgreich gestalten. Springer, Heidelberg (2004)
17. Krogstie, L., Martinsen, K., Andersen, B.: Approaching the devil in the details; a survey for improving tolerance engineering practice. Proced. CIRP 17, 230–235 (2014)
18. Andreasen, M.M., Hein, L.: Integrated Product Development, 205 s. IFS (Publications), Kempston (1987)
19. Monostori, L., Kadar, B., Bauernhansl, T., Kondoh, S., Kumara, S., Reinhart, G., Sauer, O., Schuh, G., Sihn, W., Ueda, K.: Cyber-physical systems in manufacturing. CIRP Ann. Manuf. Technol. 65(2), 621–641 (2016)
20. Schuh, G., et al.: Industrie 4.0 Maturity Index, Acatech study. ISSN 2192-6174 (2017)
21. Giannetti, C.: A framework for improving process robustness with quantification of uncertainties in Industry 4.0. In: Proceedings - 2017 IEEE (2017)
22. Söderberg, R., Wärmefjord, K., Carlson, J.S., Lindkvist, L.: Toward a digital twin for real-time geometry assurance in individualized production. CIRP Ann. 66, 137–140 (2017)
23. Salomons, O.W., et al.: A computer aided tolerancing tool I: tolerance specification. Comput. Ind. 31(2), 161–174 (1996)
24. Ramesh, R., Jerald, J.: Concurrent tolerance allocation for quality with form control using genetic algorithm. Int. J. Manuf. Res. 4, 439–457 (2009)
25. Shin, S., Kongsuwon, P., Cho, B.R.: Development of the parametric tolerance modeling and optimization schemes and cost-effective solutions. Eur. J. Oper. Res. 207(3), 1728–1741 (2010)
26. Laperrière, L., ElMaraghy, H.A.: Tolerance analysis and synthesis using Jacobian transforms. CIRP Ann. Manuf. Technol. 49(1), 359–362 (2000)
27. Kusiak, A., Feng, C.-X.: Deterministic tolerance synthesis: a comparative study. Comput. Aided Des. 27(10), 759–768 (1995)
28. Skander, A., Roucoules, L., Klein Meyer, J.: Design and manufacturing interface modelling for manufacturing processes selection and knowledge synthesis in design. Int. J. Adv. Manuf. Technol. 37(5), 443–454 (2008)
29. Bjørke, Ø.: Computer-Aided Tolerancing, xiii, 216 s. ASME Press, New York (1989)

A DEMO of Smart Manufacturing for Mass Customization in a Lab

Jinghui Yang1 and Timmie Abrahamsson2

1 College of Engineering, Shanghai Polytechnic University, Jinhai Road 2360, Pudong, Shanghai, China
[email protected]
2 Industrial Management and Engineering, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden

Abstract. Customer demand is becoming more and more personalized, with additional requirements such as shorter production time and lower price. To meet these demands, production needs to become extremely complex, with difficulties in planning and operation. This can be solved by using smart manufacturing for mass customization. Mass customization involves using smart production to produce customized personal products or services at a high rate and a low cost. Internet of Things, Big Data and Cyber-Physical Systems are three important concepts of Industry 4.0. At SSPU, Industry 4.0 and smart manufacturing are currently being demonstrated in a laboratory where the user can customize their own toy house. The house consists of different panels (modules) that can be chosen from a variety of designs and colors. The windows on the panels may also be customized by design and color. The production line consists of three assembly robots and one smaller robot for transporting the product to storage.

Keywords: Industry 4.0 · Mass customization · Intelligent manufacturing systems · Smart factory

1 Introduction

As we enter a new era, the fourth industrial revolution called Industry 4.0, manufacturing systems are seeing a big change as they move from standard mass production to mass customization. Mass customization (MC) means the ability to provide customized products or services through flexible processes in high volumes and at a reasonably low cost [6]. This results from increased customer expectations and saturation of markets. Manufacturing systems nowadays need to be smart, flexible and adaptable, and to react quickly to changes in markets and demands [3, 5]. They also need to support individualized products that are made from specific customer needs. This leads to higher complexity in production, and in order to handle this complexity and the increased variety, new Smart Manufacturing (SM) systems are needed [1, 2]. Industry 4.0 is being materialized in China, where many famous enterprises have already attempted to implement some of its concepts. All these attempts will be beneficial to the manufacturing industry of China. At Shanghai Polytechnic University (SSPU), a smart factory laboratory has been set up as a model of Industry 4.0. In the previous two years, a keychain with an RFID tag was produced, and in 2018, a toy house that can be customized according to the customer's specifications. The aim of this paper is to present Industry 4.0 and important related concepts, and to show how Industry 4.0 is currently being demonstrated at SSPU.

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 12–18, 2019.

2 Basic Concepts

2.1 Smart Manufacturing

Even some of the most successful manufacturing businesses are unsure when it comes to defining "smart manufacturing". To clear things up, we will look at definitions set by two leading organizations. According to the National Institute of Standards and Technology (NIST), Smart Manufacturing systems are "fully-integrated, collaborative manufacturing systems that respond in real time to meet changing demands and conditions in the factory, in the supply network, and in customer needs" [7]. The Smart Manufacturing Leadership Coalition (SMLC) defines it as follows: "Smart Manufacturing is the ability to solve existing and future problems via an open infrastructure that allows solutions to be implemented at the speed of business while creating advantaged value" [8]. In other words, it is the implementation of the internet into industry, where machines are connected and communicate with each other and with the internet. The purpose is to increase the efficiency of processes and decrease human involvement, in order to meet today's complex demands for personalized products at short delivery times [9].

2.2 Mass Customization

Mass customization is a marketing and manufacturing strategy that combines the flexibility and personalization of custom-made products or services with the low costs of mass production. It allows customers to customize the product in certain ways while keeping the costs close to those of a mass-produced product. Many of the products made by MC consist of modular components. The customer can choose from a variety of different modules and combine them in different ways to achieve a semi-custom final product. It is notable that true MC products are individually made (Silveira et al. 2001). MC systems may reach customers in the mass market economy but treat them individually as in the pre-industrial economies. The MC strategy can become a competitive strategy as long as the company is able to respond quickly to the expectations and different requirements of its customers. In other words, the company needs to combine the MC strategy with another strategy known as Quick Response. This combination is possible if the company has a flexible and adaptive manufacturing system and the possibility of rapid design and implementation of new products and their manufacturing processes [4].

It is important to understand the difference between customization and variety. A customized product is manufactured specifically for the needs of a particular customer. Variety provides different choices for the customer, but not the ability to specify the product. Variety is not customization, and this fact is often overlooked in the business of MC [10]. Silveira et al. (2001) reported six success factors that MC systems depend on:

1. Customer demand for variety and customization must exist.
2. Market conditions must be appropriate.
3. Value chain should be ready.
4. Technology must be available.
5. Products should be customizable.
6. Knowledge must be shared.

These factors have direct practical implications. First, they support the idea that MC is not every company's best strategy, as it must conform to specific markets and customers. Second, the factors point to the complexity involved in MC systems. MC systems involve major aspects of operations, including configuration of products, the whole value chain, process and information technology, and the development of a knowledge-based organization structure [6].

2.3 Cyber-Physical Systems

Cyber-Physical Systems (CPS) are an important part of Industry 4.0. CPS are physical and engineered systems whose operations are monitored, coordinated, controlled and integrated by a computing and communication core. As the name suggests, such systems combine the cyber world of computing and communications with the physical world. CPS integrate the dynamics of physical processes with those of software and networking, providing modeling, design and analysis techniques for the integrated whole. CPS bring the discrete and powerful logic of computing to the monitoring and control of physical and dynamic systems. The difference between CPS and usual embedded systems is that CPS combine computation, physical processes, digital visualization and network functions. A CPS can affect its physical environment through actuators, and it can communicate and collaborate with its surroundings. An example of a CPS in today's industry is a modern industrial robot. It can affect its surroundings by moving multiple items with precision, and it communicates with the rest of the production system. A large number of sensors and the communication with the surroundings create a high grade of automation for production systems consisting of CPS.
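The sense-compute-actuate core described above can be reduced to a toy sketch. Everything here is invented for illustration (a thermostat-style loop standing in for the physical process); a real CPS would talk to fieldbus devices and networked controllers.

```python
# Hedged sketch: the sense-compute-actuate loop at the core of a CPS,
# reduced to a toy on/off temperature controller. All classes are invented.

class Plant:
    """Stands in for the physical process (e.g. a heated chamber)."""
    def __init__(self, value=18.0):
        self.value = value          # current temperature, degrees C
    def sense(self):
        return self.value           # sensor reading
    def actuate(self, heat_on):
        # crude physics: heater adds 0.5 C per cycle, otherwise cool 0.1 C
        self.value += 0.5 if heat_on else -0.1

def control_step(plant, setpoint=20.0):
    """One cycle of the computing core: read sensor, decide, drive actuator."""
    heat_on = plant.sense() < setpoint
    plant.actuate(heat_on)
    return heat_on

plant = Plant()
for _ in range(10):
    control_step(plant)
print(round(plant.value, 1))  # settles near the 20.0 C setpoint
```

The point of the sketch is the closed loop itself: the computing core continuously observes and influences the physical state, which is what distinguishes a CPS from an open-loop embedded program.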

3 Methodology

Smart manufacturing is currently being demonstrated in a laboratory at SSPU, Shanghai. Last year, the smart production in the lab included the production of a personal keychain with an RFID tag and a personal engraving. This year, to demonstrate Industry 4.0, the customer can order customized house panels that can later be put together, creating a customized small toy house. The customer may choose from 10 different panels, and for each panel, the customer can choose different colors and layouts for the doors and windows. The panels are made from plastic parts held together by magnetic rods and metal balls. The different parts of the panels, and the panels themselves, can be seen as modules customized by the customer and, when put together, creating a customized toy house [11] (Fig. 1).

Fig. 1. Architecture of the DEMO for smart manufacturing for mass customization


3.1 Design Process

The first step in the smart manufacturing at SSPU is the order process. The customer visits the website created by SSPU and starts the design of their toy house; see Appendix 1 for pictures. The design process consists of the following three steps:

Step 1: The customer chooses a panel design.
Step 2: The customer customizes the specific panel by choosing different layouts for the windows and different colors for the magnetic rods.
Step 3: The customer saves the current panel design, chooses another panel and repeats the steps.

When the customer is done with the design process and places the order, the order is sent to the Enterprise Resource Planning (ERP) system and the Manufacturing Execution System (MES). The ERP distributes the order information to the MES, which in turn tells the control system to control each unit in the production line so that the product is assembled automatically. Each production unit can communicate with the product, knowing where the product is and what to do with it, all by itself. After running the Manufacturing Resource Planning (MRP) system, the production planner can create a purchase order, planning order and production order, which are all sent to the production plant. The operator transfers the production order information from the ERP to the MES and uses a programmable logic controller (PLC) to control the assembly line and produce the product based on the order information. At the same time, all production information and Key Performance Indicators (KPIs) are displayed in real time on screens in the lab.
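The order flow from web shop through ERP and MES down to the line can be sketched as two translation steps. This is an illustrative sketch only: the field names, panel designs and step names are invented, not taken from the SSPU system.

```python
# Hedged sketch of the order flow: a customer order is exploded by the ERP
# into production orders, which the MES turns into per-unit line steps.
# All field names and values are invented examples.

def erp_to_mes(order):
    """ERP side: explode the customer order into one record per panel."""
    return [{"order_id": order["order_id"], "panel": p["design"],
             "windows": p["windows"], "rod_color": p["rod_color"]}
            for p in order["panels"]]

def mes_to_plc(production_orders):
    """MES side: turn each production order into assembly-line steps."""
    steps = []
    for po in production_orders:
        steps.append(("place_beads", po["panel"]))
        steps.append(("attach_rods", po["rod_color"]))
        steps.append(("mount_windows", po["windows"]))
    return steps

order = {"order_id": 17, "panels": [
    {"design": "A", "windows": "round", "rod_color": "red"},
    {"design": "C", "windows": "square", "rod_color": "blue"}]}
print(mes_to_plc(erp_to_mes(order)))
```

In the real system these translations are carried by the ERP/MES integration and the PLC program; the sketch only shows that each layer narrows the order down to instructions the layer below can execute.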




3.2 Product Assembly

The house panels are put together by the machines, as seen in Fig. 2. Metal beads are placed in the required positions on a plastic plate. Then the magnetic rods are picked according to the customer's request and attached to the metal beads on the plastic plate.

Fig. 2. Mass customization products

Throughout the manufacturing process, the ongoing operations can be viewed on a screen showing the machines in virtual reality (VR).

3.3 Delivering the Product

Figure 2 shows the final step of the manufacturing at SSPU. The next step would be to deliver the product to the customer. The product first needs to be stored until transportation is available. The product can be brought to storage by the robot, and with an RFID tag, the location of the product in storage is known for when it is time to transport it. Although not covered in this report, the transportation method for the product would depend on where the user is located and when he or she wants the product delivered.



4 Virtualization of Production Line

In this section, we create three-dimensional objects of the production line equipment, such as robots, AGV cars etc. Based on the movement and logic control of the production line, we design a virtual platform in which we can place our 3D objects to construct different production lines with different functions. Thus, we can carry out experiments and simulations for production line upgrades and transitions. It can be a great tool for research, teaching and practical adjustment in the fields of production management and industrial engineering (Fig. 3).

Fig. 3. Virtualization of production line

5 Conclusions

The conclusion of this paper is that mass customization (MC) is an important manufacturing strategy as we enter Industry 4.0, and that a successful implementation of MC, supplying each customer with individualized products or services, requires the right combination of human and technological factors. The main concepts of Industry 4.0 (IoT, Cyber-Physical Systems, and Big Data) are all important parts allowing correct decisions regarding machine operations and smart manufacturing, and allowing material to flow faster. However, it must not be forgotten that the final decisions are always made by humans. Industry 4.0 is the future whether we want it or not; therefore, both academics and practitioners should start applying Industry 4.0 with all its concepts to make future manufacturing intelligent. There are some challenges to MC, as it requires manufacturers to have a flexible and adaptive manufacturing system and a rapid design process. At SSPU, Industry 4.0 and smart manufacturing for mass customization are currently being demonstrated in a laboratory where the user can design their own toy house. The house consists of panels (modules) that can be chosen from a variety of designs. The window design on the panels may also be customized, as well as the colors of the different parts. This demonstration gives a general picture of Industry 4.0 and can be used as a guideline for implementing it.



In this report, we have focused on Industry 4.0, its important concepts, and the production of products in this new era. Future research on this topic could include how Industry 4.0 and mass customization can be used to provide services and not only products. Another interesting research topic could be how an implementation of an MC system should be conducted in a company not currently using MC in its manufacturing processes. For smart manufacturing at SSPU, future research could address the next step, the delivery of the product to the customer.

This project is supported by the discipline construction of mechanical engineering of Shanghai Polytechnic University (XXKZD1603).

References

1. Anderl, I.R.: Industrie 4.0 - advanced engineering of smart products and smart production. In: Proceedings of 19th International Seminar on High Technology (2014)
2. Duray, R., Ward, P.T., Milligan, G.W., Berry, W.L.: Approaches to mass customization: configurations and empirical validation. J. Oper. Manage. 18(6), 605–625 (2000)
3. Gilmore, J., Pine, J.: The four faces of mass customization. Harvard Bus. Rev. 75(1), 91–101 (1997)
4. Kang, H.S., Lee, J.Y., Choi, S., Kim, H., Park, J.H., Son, J.Y., Kim, B.H., Noh, S.D.: Smart manufacturing: past research, present findings, and future directions. Int. J. Precis. Eng. Manuf. Green Tech. 3(1), 111–128 (2016)
5. Lindqvist, H., Nilsson, A.: The production staff of the future within the smart factory. Skövde High School, Institute of Engineering Science (2015)
6. Silveira, G.D., Borenstein, D., Fogliatto, F.: Mass customization: literature review and research directions. Int. J. Prod. Econ. 72, 1–13 (2001)
7. Sugimori, Y., Kusunoki, K., Cho, F., Uchikawa, S.: Toyota production system and Kanban system: materialization of just-in-time and respect-for-human system. Int. J. Prod. Res. 15(6), 553–564 (1977)
8. Want, R.: An introduction to RFID technology. Perv. Comput. 5(1), 25–33 (2006)
9. Witchalls, C.: The internet of things business index. Technical report, The Economist and ARM (2013)
10. Zawadzki, P., Żywicki, K.: Smart product design and production control for effective mass customization in the Industry 4.0 concept. Manag. Prod. Eng. Rev. 7(3), 105–112 (2016)
11. Zhong, R.Y., Dai, Q.Y., Qu, T.: RFID-enabled real-time manufacturing execution system for mass-customization production. Robot. Comput. Integr. Manuf. 29(2), 283–292 (2013)

A Fault Diagnosis Method Based on Mathematical Morphology for Bearing Under Multiple Load Conditions

Yang Ge, Lanzhong Guo, and Yan Dou

School of Mechanical Engineering, Changshu Institute of Technology, Changshu 215500, Jiangsu, People's Republic of China
[email protected], [email protected], [email protected]

Abstract. A mathematical morphology based status feature extraction method for rolling bearings is proposed in this paper. A status recognition method for bearings based on this feature is given under multiple load conditions. The experimental results verify that the proposed method performs better than a support vector machine based on time-frequency domain features, and predicts bearing status precisely with a mean accuracy of 99.53%. The comparison results show the proposed method can eliminate the influence of load conditions and distinguish the actual status of the bearing accurately. Above all, the calculation of the proposed method is very simple.

Keywords: Feature extraction · Mathematical morphology · Status recognition · Rolling bearing · Support vector machine

1 Introduction

Bearings are widely used as vital components in rotary machines. The occurrence of bearing faults results in significant breakdown time, elevated repair cost, and even major safety accidents. Bearings are prone to various faults such as inner race faults, outer race faults, cage faults, and rolling element faults [1]. Vibration signal analysis is often used to recognize the operating status of mechanical products in recent research. Generally, the process of fault recognition consists of data acquisition, feature extraction, and status recognition, among which feature extraction is essential to accurately predict the status of the bearing [2]. In recent years, various fault feature extraction methods for mechanical vibration signals have been proposed and developed, such as time domain features [3, 4], frequency domain features [5], entropy features [6, 7], and wavelet packet energy features [8]. Besides these signal features, various status recognition methods have been proposed, such as support vector machine (SVM) [5, 9, 10], artificial neural network (ANN) [10–12], Bayesian classification [3, 13], genetic algorithm [14], deep learning [15, 16], and k-nearest neighbor (KNN) [17, 18]. Each kind of feature may contain multiple parameters, and each parameter has a different sensitivity to machine status. In general, multiple feature parameters are adopted to diagnose machine status at the same time. Because of the correlation of multiple feature parameters, using

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 19–26, 2019.


Y. Ge et al.

too many feature parameters will increase the calculation time of status classification and reduce classification accuracy. To solve this problem, many researchers employ data dimension reduction methods such as principal component analysis (PCA) [19, 20], singular value decomposition (SVD) [21], and independent component analysis (ICA) [22]. In recent years, image recognition and speech recognition technologies have achieved leap-forward development, to which deep learning algorithms have made a great contribution. Deep learning has been a hot research topic in the field of machine learning because of its better learning performance and feature extraction capability for unsupervised massive raw data [16]. Thus, it is well suited to processing and classifying mechanical vibration signals.

In the studies mentioned above, multiple signal features were used for machine fault recognition, such as time domain features, frequency domain features, entropy features, wavelet packet energy features, and so on. In this case, machine fault recognition accuracy is not ideal, because the signal features may interfere with each other. The proposed methods do not consider whether they adapt to fault identification under multiple load conditions. Besides, extracting too many features may increase the computational cost. In this paper, we attempt to use a mathematical fractal feature, only one feature, to recognize bearing faults under multiple load conditions; we then use time-frequency domain features and SVM to recognize the bearing faults at the same time, and finally make comparisons and analyses between the different features and recognition methods.

2 Fractal Dimension Based Signal Feature Extraction

2.1 Fractal Dimension of Mathematical Morphology (MMFD)

Fractal dimensions can quantitatively describe certain characteristics of natural morphology and are widely used in the image processing field. The fractal box dimension is the most widely used among the various fractal dimensions. Many researchers have employed it to process vibration signals, providing an effective method for fault identification of mechanical products. However, the fractal box dimension has the disadvantage of inaccurate calculation. In this paper, we adopt the MMFD, which can effectively solve the problem of inaccurate calculation in the box dimension [23]. Compared with traditional methods, the MMFD uses one-dimensional morphological coverage instead of grid division, which makes the calculation results more stable and accurate, and it has achieved good application effects in mechanical signal processing [24, 25]. Mathematical morphology includes two basic operations: erosion and expansion. Assume that f(n) is a one-dimensional original signal and g(m) is a structuring element signal, both discrete time signals, where n = 1, 2, …, N, m = 1, 2, …, M, and N ≥ M. The erosion and expansion operations of f(n) by g(m) are defined as (1) and (2), respectively.



Erosion: (f ⊖ g)(n) = min_m { f(n + m) − g(m) }   (1)

Expansion: (f ⊕ g)(n) = max_m { f(n − m) + g(m) }   (2)

where m = 1, 2, …, M; ⊖ denotes the erosion operator and ⊕ denotes the expansion operator. The structuring element at scale k is defined as (3):

g_k = g ⊕ g ⊕ … ⊕ g   (k times)   (3)

The morphological coverage A_g(k) at scale k can be defined as (4):

A_g(k) = Σ_{n=1}^{N} [ (f ⊕ g_k)(n) − (f ⊖ g_k)(n) ]   (4)

The morphological coverage A_g(k) and the scale k satisfy formula (5):

ln( A_g(k) / k² ) = D ln( 1/k ) + c   (5)

where D is the MMFD and c is a constant. Letting x = ln( A_g(k) / k² ) and y = ln( 1/k ), formula (5) becomes formula (6):

x = D y + c   (6)

Then, we can use the least squares method to calculate D. For more information about morphological covering, please refer to the literature [24].
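As a sketch of this procedure, the MMFD of a discrete signal can be computed as below. A flat, zero-valued structuring element is assumed (the choice used later in the paper), for which the erosion and expansion of Eqs. (1) and (2) reduce to sliding minimum and maximum; NumPy is used for illustration.

```python
import numpy as np

def mmfd(signal, L=3, k_max=10):
    """Morphological-covering fractal dimension of a 1-D signal,
    using a flat (all-zero) structuring element of length L so that
    erosion/dilation become sliding min/max."""
    f = np.asarray(signal, dtype=float)
    xs, ys = [], []
    for k in range(1, k_max + 1):
        M = k * (L - 1) + 1                 # length of g_k (flat element at scale k)
        win = np.lib.stride_tricks.sliding_window_view(f, M)
        coverage = np.sum(win.max(axis=1) - win.min(axis=1))  # A_g(k), Eq. (4)
        xs.append(np.log(coverage / k**2))  # x = ln(A_g(k)/k^2)
        ys.append(np.log(1.0 / k))          # y = ln(1/k)
    D, _ = np.polyfit(ys, xs, 1)            # least-squares fit of Eq. (6)
    return D
```

On smooth signals the estimate is close to 1, while broadband noise drives it towards 2, which is what makes a single number usable as a status feature.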

2.2 MMFD Features Extraction of Bearing Vibration Signals

In this paper, the bearing vibration acceleration data are taken from the Case Western Reserve University (CWRU) Bearing Data Center [26]. We choose the normal baseline data and the 12k drive end bearing fault data at 1797 rpm as the research object. The data cover four statuses: rolling ball fault, inner race fault, outer race fault, and normal. The fault diameters of the ball fault, inner race fault, and outer race fault are all 0.007″. The original time domain waveforms of bearing vibration acceleration under the 4 statuses are shown in Fig. 1. We can see that the waveform is clearly distinctive under different statuses. A continuous 1000 samples are cut as one group of data, and each status has 120 groups of data. Setting the largest scale k_max = 10 and the structure element length L = 5, and choosing the flat structure element [0, 0, 0, 0, 0], the MMFD of 30 groups of data selected randomly from each status can be calculated as shown in Fig. 2.



Fig. 1. Original time domain waveform of bearing vibration acceleration

Fig. 2. MMFD of 4 statuses at structures length L = 5

From Fig. 2, we can see that the MMFDs of the inner race fault, outer race fault, and normal statuses overlap seriously. This is due to an improper structure length L. In order to find an appropriate structure length, we calculate the MMFD of each status at structure lengths in [1, 15], using the same data as in Fig. 2. The MMFD range of each status is marked, and the results are shown in Fig. 3. As shown in Fig. 3, the MMFD of each bearing status is sensitive to the change of structure element length L. When L = 3, the MMFD of each status is most easily distinguished. At this point, the MMFD can be used as a feature to distinguish the different statuses. Therefore, in the following studies, the structure length is set to 3.

Fig. 3. MMFD of each status at different structure length


Fig. 4. Time domain waveform of ball fault under different load conditions

2.3 MMFD of Bearing Under Multiple Load Conditions

A bearing may need to work at different speeds due to load conditions or actual work requirements. Taking the normal baseline data and the 12k drive end bearing fault data



from the CWRU Bearing Data Center as an example, the data are obtained from experiments under four different loads, 0, 1, 2, and 3 HP, so the bearing works at four different speeds: 1797 rpm, 1772 rpm, 1750 rpm, and 1730 rpm. Figure 4 gives the original time domain waveforms of the ball fault under the different load conditions. From Fig. 4, the vibration amplitude under different load conditions is significantly diverse, even though all are in the ball fault status. Therefore, it is difficult to recognize the status of the bearing under different load conditions from time domain features alone. Figure 3 shows the MMFD of the four statuses, but only under one load condition. Next, we calculate the MMFD for the different statuses and the different load conditions. The MMFD of 30 groups of data selected randomly from each status and each load condition can be calculated as shown in Fig. 5. As shown in Fig. 5, the MMFD of the same status under different load conditions is very similar, and the MMFDs of different statuses have no overlap within a certain range. This indicates that the MMFD can eliminate the influence of the load condition and accurately reflect the actual status of the bearing, and can thus be used as a feature for bearing status recognition.

Fig. 5. MMFD of each status under different load conditions

Fig. 6. Recognition results

3 Experimental Verification

3.1 Experimental Background

The experimental data, taken from the CWRU Bearing Data Center, include the normal baseline data and the 12k drive end bearing fault data. The bearing works at four different speeds (1797 rpm, 1772 rpm, 1750 rpm, and 1730 rpm) and in four different statuses (ball fault, inner race fault, outer race fault, and normal). 30 groups of data at 1797 rpm are selected randomly as standard data from each status, and then 30 groups of data are selected randomly as test data from each status and each load condition, respectively. Note that the test data selected from 1797 rpm are different from the standard data. The above process is repeated 20 times, and the recognition accuracy of each extraction result is calculated.




3.2 Status Recognition Method

First, calculate the MMFD of the standard data to ascertain the range of each status. Second, calculate the MMFD of each test data. Third, use formula (7) to judge which status the test data belongs to:

d_i = | y − (x_i,max + x_i,min)/2 | / (x_i,max − x_i,min)   (7)

where d_i denotes the distance of the MMFD between the test data and the standard data; i = 1, 2, 3, 4 denotes the four statuses; y denotes the MMFD of the test data; x_i,max denotes the maximal MMFD of the standard data at status i; x_i,min denotes the minimum MMFD of the standard data at status i; and |·| denotes the absolute value. Assuming d_k = min{d_i}, where k = 1, 2, 3, 4, the test data belongs to status k.
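This decision rule amounts to a nearest-range classifier; a minimal sketch follows, where the status names and MMFD ranges are placeholders for illustration, not values from the experiment.

```python
def classify(y, ranges):
    """Assign a test MMFD y to the status k minimizing d_i of Eq. (7).
    `ranges` maps each status to its standard-data (x_min, x_max)."""
    def d(x_min, x_max):
        return abs(y - (x_max + x_min) / 2) / (x_max - x_min)
    return min(ranges, key=lambda s: d(*ranges[s]))

# Placeholder ranges for illustration only
ranges = {"normal": (1.00, 1.10), "ball": (1.30, 1.50)}
print(classify(1.05, ranges))  # prints "normal"
```

Dividing by the range width normalizes the distance, so a wide status interval is not unfairly favored over a narrow one.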

3.3 Recognition Result

According to Sect. 3.1, the recognition accuracy results of the 20 random samplings are shown in Fig. 6, and the average accuracy over the 20 runs is 99.53%. To facilitate comparative analysis, we also adopt time-frequency domain features and SVM to recognize the bearing faults. 16 time domain features and 13 frequency domain features are extracted, such as variance, root-mean-square, kurtosis, peak-peak value, average frequency, frequency center, mean square root frequency, and so on, 29 features in total. The calculation method of each feature is detailed in the literature [11] and [27]. Through PCA [20], the cumulative contribution rate of the first 6 features is 98.3%, so the first 6 features are selected as the final features of the bearing vibration signal to shrink the calculation. The 6 time-frequency domain features whose dimensions are reduced by PCA are extracted from each standard and test data set. Then the features are input into the LIBSVM [28] toolbox to calculate the recognition accuracy with the respective labels. The recognition results are shown in Fig. 6. From Fig. 6, we can see that the recognition accuracy of the method presented in this paper is generally stable, with no great fluctuation, and its average recognition accuracy of 99.53% is significantly better than that of the time-frequency domain features and SVM method, whose average recognition accuracy is 95.29%. This shows that the MMFD can eliminate the influence of the load conditions and differentiate the actual status of the bearing accurately. In addition, the MMFD requires calculating only one feature, and the recognition method is also simple, while the time-frequency domain features require calculating many features, and SVM needs multiple iterations. Comparing the calculation of the two methods, the method proposed in this paper is relatively simple.
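The PCA step described above (keeping the fewest components whose cumulative contribution rate reaches a target, 98.3% in the paper) can be sketched with plain NumPy; this is a generic illustration of the idea, not the exact toolchain used in the experiments.

```python
import numpy as np

def pca_reduce(X, target_ratio=0.983):
    """Project the feature matrix X (samples x features) onto the fewest
    principal components whose cumulative contribution rate reaches
    target_ratio; returns the reduced features and the component count."""
    Xc = X - X.mean(axis=0)                           # center each feature
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = s**2 / np.sum(s**2)                       # contribution rate per PC
    k = int(np.searchsorted(np.cumsum(ratio), target_ratio)) + 1
    return Xc @ Vt[:k].T, k
```

For a 29-feature matrix with strong inter-feature correlation, a small k (such as the 6 components reported above) typically already meets the target ratio.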

4 Conclusion

In this paper, the MMFD is employed for feature extraction of bearing vibration signals, and a status recognition method is given for bearings under multiple load conditions. In order to verify the effectiveness of the method, we use time-frequency domain features and



SVM to recognize the bearing faults at the same time. Experimental results show that the MMFD can eliminate the influence of the load conditions and differentiate the actual status of the bearing accurately with less calculation. In this study, the MMFD alone is used to recognize the bearing status, achieving a very good recognition effect. Next, we will try to combine the MMFD with other features for recognizing the status of mechanical products.

References

1. Jardine, A.K.S., Lin, D.M., Banjevic, D.: A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mech. Syst. Signal Process. 20, 1483–1510 (2006)
2. Ma, J.X., Xu, F.Y., Huang, K., et al.: GNAR-GARCH model and its application in feature extraction for rolling bearing fault diagnosis. Mech. Syst. Signal Process. 93, 175–203 (2017)
3. Wang, C.C., Kang, Y., Liao, C.C.: Gear fault diagnosis in time domains via Bayesian networks. Trans. Can. Soc. Mech. Eng. 37(3), 665–672 (2013)
4. Shao, R.P., Hu, W.T., Li, J.: Multi-fault feature extraction and diagnosis of gear transmission system using time-frequency analysis and wavelet threshold de-noising based on EMD. Shock Vib. 20(4), 763–780 (2013)
5. Bansal, S., Sahoo, S., Tiwari, R., et al.: Multiclass fault diagnosis in gears using support vector machine algorithms based on frequency domain data. Measurement 46(9), 3469–3481 (2013)
6. Cheng, G., Chen, X.H., Li, H.Y., et al.: Study on planetary gear fault diagnosis based on entropy feature fusion of ensemble empirical mode decomposition. Measurement 91, 140–154 (2016)
7. Boskoski, P., Juricic, D.: Fault detection of mechanical drives under variable operating conditions based on wavelet packet Renyi entropy signatures. Mech. Syst. Signal Process. 31(15), 369–381 (2012)
8. Jena, D.P., Sahoo, S., Panigrahi, S.N.: Gear fault diagnosis using active noise cancellation and adaptive wavelet transform. Measurement 47, 356–372 (2014)
9. Yang, D.L., Liu, Y.L., Li, S.B., et al.: Gear fault diagnosis based on support vector machine optimized by artificial bee colony algorithm. Mech. Mach. Theory 90, 219–229 (2015)
10. Jaber, A.A., Bicker, R.: Fault diagnosis of industrial robot gears based on discrete wavelet transform and artificial neural network. Insight Non-Destruct. Test. Cond. Monit. 58(4), 179–186 (2016)
11. Kane, P.V., Andhare, A.B.: Application of psychoacoustics for gear fault diagnosis using artificial neural network. J. Low Freq. Noise Vib. Active Control 35(3), 207–220 (2016)
12. Kanai, R.A., Desavale, R.G., Chavan, S.P.: Experimental-based fault diagnosis of rolling bearings using artificial neural network. J. Tribol. Trans. ASME 138(3), 031103 (2016)
13. Liu, Z., Liu, Y., Shan, H., et al.: A fault diagnosis methodology for gear pump based on EEMD and Bayesian network. PLoS One 10(5), e0125703 (2015)
14. Cerrada, M., Zurita, G., Cabrera, D., et al.: Fault diagnosis in spur gears based on genetic algorithm and random forest. Mech. Syst. Signal Process. 70–71, 87–103 (2016)
15. He, M., He, D.: Deep learning based approach for bearing fault diagnosis. IEEE Trans. Ind. Appl. 53(3), 3057–3065 (2017)
16. Mao, W.T., He, J.L., Li, Y., et al.: Bearing fault diagnosis with auto-encoder extreme learning machine: a comparative study. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 231(8), 1560–1578 (2017)
17. Dong, S.J., Xu, X.Y., Chen, R.X.: Application of fuzzy C-means method and classification model of optimized K-nearest neighbor for fault diagnosis of bearing. J. Braz. Soc. Mech. Sci. Eng. 38(8), 2255–2263 (2016)
18. Tian, J., Morillo, C., Azarian, M.H., et al.: Motor bearing fault detection using spectral kurtosis-based feature extraction coupled with K-nearest neighbor distance analysis. IEEE Trans. Industr. Electron. 63(3), 1793–1803 (2016)
19. Dong, S.J., Sun, D.H., Tang, B.P., et al.: Bearing degradation state recognition based on kernel PCA and wavelet kernel SVM. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 229(15), 2827–2834 (2015)
20. Dong, S.J., Luo, T.H.: Bearing degradation process prediction based on the PCA and optimized LS-SVM model. Measurement 46(9), 3143–3152 (2013)
21. Yu, K., Lin, T.R., Tan, J.W.: A bearing fault diagnosis technique based on singular values of EEMD spatial condition matrix and Gath-Geva clustering. Appl. Acoust. 121, 33–45 (2017)
22. Zvokelj, M., Zupan, S., Prebil, I.: EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis. J. Sound Vib. 370, 394–423 (2016)
23. Maragos, P.: Measuring the fractal dimension of signals: morphological covers and iterative optimization. IEEE Trans. Signal Process. 41(1), 108–121 (1993)
24. Li, B., Zhang, P.L., Wang, Z.J., et al.: Morphological covering based generalized dimension for gear fault diagnosis. Nonlinear Dyn. 67(4), 2561–2571 (2012)
25. Khakipour, M.H., Safavi, A.A., Setoodeh, P.: Bearing fault diagnosis with morphological gradient wavelet. J. Frankl. Inst. Eng. Appl. Math. 354(6), 2465–2476 (2017)
26.
27. Islam, M.M., Kim, J., Khan, S.A., et al.: Reliable bearing fault diagnosis using Bayesian inference-based multi-class support vector machines. J. Acoust. Soc. Am. 141(2), EL89–EL95 (2017)
28. *cjlin/libsvm/

An Industry 4.0 Technologies-Driven Warehouse Resource Management System

Haishu Ma

Department of Mechanical Engineering, Henan University of Engineering, Zhengzhou, China
[email protected]

Abstract. The last several years have seen the evolution of the warehouse from stockpiling of inventory to high-velocity operations, allowing facilities to handle as many goods as they typically deal with, but at lower costs. Industry 4.0–driven technologies can help pave the way for the evolving warehouse, enabling automated systems to adapt to their environment and tackle tasks more efficiently. This paper presents a framework of a warehouse resource management system based on Industry 4.0–driven technologies such as RFID, low-cost sensors, artificial intelligence, autonomous vehicles, Internet of Things (IoT), and high-performance computing to enable a more flexible, adaptive, and productive warehouse.

Keywords: Industry 4.0 · Warehouse resource management system

1 Introduction

High-velocity operations are becoming the norm in warehouses, allowing facilities to handle as many products as they usually deal with, but at lower costs. This change comes as automated warehouses are supplemented with an added layer of intelligence, driven by Industry 4.0. Recent years have seen the rise of connected technologies throughout the manufacturing and distribution value chain [1]. This combination of physical and digital systems, also known as Industry 4.0, has paved the way for increasingly connected experiences that impact everything from product design and planning to supply chain and production. The last several years have seen the evolution of the warehouse from stockpiling of inventory to high-velocity operations, pushing more products through the same physical assets while bringing down overall costs [2]. Warehouses are an essential component of the supply chain infrastructure and are increasingly considered strategic facilities that provide competitive advantages. As the demand for shorter lead times, better quality control, greater order customization, reduced labor costs, and higher production output increases, adaptable advanced technologies are needed to achieve these goals. Technologies such as RFID, low-cost sensors, artificial intelligence, computer vision, autonomous vehicles, augmented reality (AR), wearables, Internet of Things (IoT), analytics, and high-performance computing, all inherent in Industry 4.0, are being employed to create a more adaptable facility. Meanwhile, they are also enabling new types of smart automation that can help transform warehouse operations.

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 27–33, 2019.


H. Ma

In this paper, an Industry 4.0 technologies-driven warehouse resource management system is proposed to facilitate warehouse operations and improve the efficiency and productivity of the warehouse. The framework consists of three tiers. The first tier is data collection; in this tier, RFID devices are deployed to capture the warehouse resource data. In the second tier, the locations of warehouse resources are computed based on the data collected in the first tier. The third tier is data management, which contains two modules, namely a route optimization module and a motion detection module. The route optimization module formulates the shortest route for order picking operations, thus reducing total travel time. The motion detection module is responsible for distinguishing the pallets currently being loaded onto the truck from pallets that are detected by the RFID reader during loading operations but are not of interest.

2 Framework of Resource Management System

The aim of this framework is to enhance warehouse operations by means of tracking, positioning, and optimizing resource utilization. In doing so, the objectives of maximizing the productivity of the warehouse and minimizing the operations cost are achieved. Figure 1 illustrates the proposed framework, which consists of three tiers. The first tier is data collection. The locations of warehouse resources are computed at the second tier. The third tier is data management, which contains two modules, namely a route optimization module and a motion detection module. The technical details of the three tiers are elaborated in the next few sections.

[Fig. 1 shows the three tiers. Tier 1: data collection module (RFID based data collection of order No., SKU dimension, and machine ID from warehouse operations such as receiving). Tier 2: warehouse resource locating (data processing, RSSI based and phase based, yielding estimated SKU and forklift positions). Tier 3: data management (centralized database feeding a route optimization module, which uses a linear programming model to formulate the shortest route for order picking, and a motion detection module based on feature extraction).]

Fig. 1. Architecture of resource management system




2.1 Data Collection

In this tier, RFID devices are deployed to capture the warehouse resource data, which involve the types and quantities of SKUs, the date and time when an SKU is read by the RFID reader antennas, the received signal strength (RSS), which indicates the strength of the signal that the reader antenna receives from the tag, etc. All the received data are stored in the centralized database. Passive RFID tags are attached to items like pallets and forklifts to record their identities and exchange data with the RFID reader. The readers are attached to fixed facilities in the warehouse, such as the main entrance, the storage racks, and the dock door, to transmit and receive the radio signals. The reader antennas can recognize and read hundreds of tags within their reading range. Once the reader has received the signal returned by the RFID tags, the received data are decoded into useful information and stored in the centralized database. Two types of data are captured by the RFID readers to visualize the actual state of the warehouse operations: static and dynamic warehouse resource data. The static warehouse resource data refer to the SKU number, type of SKU, physical dimension of SKU, quantity, etc. The dynamic warehouse resource data refer to data that vary over time, such as the locations of forklifts, the inventory levels in each rack, and the items moving on a conveyor belt [3]. When the forklifts pass the antenna or the SKUs are within the read range of the reader, the information stored in the tag is captured.

2.2 Warehouse Resources Locating

In the second tier, the locations of warehouse resources are computed based on the data collected in the first tier. The warehouse resources locating module is the core of the proposed framework. The function of this module is to compute the locations of warehouse resources such as pallets, SKUs, and forklifts. The accuracy of the positions of these resources affects the effectiveness and efficiency of warehouse operations. Fine-grained RFID positioning approaches can mainly be divided into two categories: received signal strength indicator (RSSI) based methods and phase based methods [4]. In this research, we address the positioning of warehouse resources using both RSSI based and phase based methods.

RSSI Based Method

Received Signal Strength Indication (RSSI) measures the distance from a tag to a reader antenna using the distance-to-signal-strength relationship [5]. Fingerprinting is the most widely adopted RSSI based positioning method, because it can mitigate the fluctuation of RSS signals due to multipath reflection and variation of tag orientation, especially when the tag is moving. Since the reference tags are subject to the same effects in the environment, RSS readings from similar environments behave similarly. The fingerprint database is constructed by gathering RSS readings from known locations. To estimate the location of a target tag from a given set of RSS readings, the fingerprint database is explored using machine learning methods to predict the target location.
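A minimal sketch of the fingerprint-matching idea, using inverse-distance-weighted k-nearest neighbours over RSS vectors (a common choice for this step; the path-loss model and survey grid in the test are assumptions, not the paper's site survey):

```python
import numpy as np

def knn_locate(rss, fingerprints, locations, k=3):
    """Estimate a tag position from its RSS vector by inverse-distance
    weighting of the k closest fingerprints in signal space.
    fingerprints: (n, antennas) RSS database; locations: (n, 2) positions."""
    dist = np.linalg.norm(fingerprints - rss, axis=1)  # signal-space distance
    idx = np.argsort(dist)[:k]                         # k best-matching fingerprints
    w = 1.0 / (dist[idx] + 1e-9)                       # closer matches weigh more
    return (locations[idx] * w[:, None]).sum(axis=0) / w.sum()
```

Matching happens entirely in signal space, which is why the method tolerates multipath: the reference fingerprints were distorted by the same environment as the query.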


H. Ma

The implementation of fingerprinting based methods involves two phases. In the offline training phase, a site survey is carried out to collect the fingerprints from reference points whose locations are known in advance. In the online phase, the RSS values received at the current moment are matched against the fingerprint database to determine the tag's position [6]. It is acknowledged that the fingerprinting approach achieves high localization accuracy but can be severely degraded for heterogeneous devices. Thus, if the testing tag differs from the reference tag, the fingerprint database built in the offline phase can deviate from the truth during the online phase. Moreover, for mobile tags, the fast-changing environment with multipath reflections of RF signals [7, 8] makes most localization methods fail to achieve high precision. In order to deal with the challenges above, we propose a new RSSI based tracking method. For RSS measurement, the problem of device heterogeneity is alleviated by data normalization. Regarding the problem of multipath reflection, the fingerprinting method is employed in our design to give a rough estimate, which is then refined using a Kalman filter [9].

Phase Based Method

Compared to RSSI, phase measurements have the potential to locate an object with mm-level accuracy. But there are inherent challenges for the phase value. First, the phase value is ambiguous and repeats every 2π radians. Therefore, the phase value cannot be used to calculate the distance directly. Second, different antennas and tags introduce extra phase shifts into the measured total phase. The device diversity is a constant parameter, but calibration of all RFID tags and readers is impractical and time-consuming. To deal with the above challenges, we propose a new phase based locating method using COTS RFID devices.
Phase ambiguity can be resolved from the phase difference between two antennas when their spacing is less than half a wavelength. This follows from a simple triangle inequality, as illustrated in Fig. 2: under the triangle constraint, the distance difference is less than half a wavelength and the phase difference is less than 2π. Because a hyperbola is the locus of points whose distance difference to two foci is constant, a hyperbola can be constructed once the distance difference is known [10]. Here, the two antennas' locations are the foci, and the target tag lies on the hyperbola curve. When multiple hyperbolas are built, their common intersection is, in theory, the object's position. One way to build many hyperbola curves is to deploy an antenna array, as adopted in [11]. However, an antenna array is not cost effective, and another disadvantage is that device diversity cannot be eliminated. Our design instead simulates the antenna array through antenna motion along a straight line at constant velocity. This idea is attributable to synthetic aperture radar, first used in radar systems for terrain imaging. Different from a physical antenna array, this virtual antenna array is composed of the same antenna, and therefore there is no antenna diversity.
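The phase-to-distance-difference step can be sketched as follows, assuming a 920 MHz carrier (a typical UHF RFID frequency, not stated in the text) and a simplified one-way phase model. With the antennas spaced under half a wavelength, the signed phase difference maps to a distance difference of magnitude at most λ/2, which then defines one hyperbola with the antennas as foci.

```python
import math

C = 299_792_458.0          # speed of light, m/s
FREQ = 920e6               # assumed UHF RFID carrier frequency, Hz
WAVELEN = C / FREQ         # wavelength, about 0.326 m

def distance_difference(phase1, phase2):
    """Distance difference |tag-antenna1| - |tag-antenna2| recovered from
    the measured phase difference. Unambiguous only when the antenna
    spacing is below half a wavelength (triangle inequality), so the
    wrapped phase difference stays within one period. A one-way phase
    model is used here for simplicity; real backscatter links accumulate
    phase over the round trip."""
    dphi = math.remainder(phase1 - phase2, 2 * math.pi)  # signed, (-pi, pi]
    return dphi * WAVELEN / (2 * math.pi)
```

Repeating this for many antenna positions along the motion path yields many hyperbolas whose intersection estimates the tag location.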

An Industry 4.0 Technologies-Driven Warehouse Resource





Local search is favored when c1 > c2, while global exploration would do better when c1 < c2 [9].

3 BPSO

PSO was originally developed for continuous-valued spaces; however, some practical problems require discrete solutions, which must be defined in discrete spaces. Kennedy and Eberhart proposed binary particle swarm optimization (BPSO) to solve this issue in 1997 [10]. In their model, each particle presents the solution through binary values. In BPSO, the personal best and global best solutions are still updated as in the continuous version. The difference lies in the treatment of velocity during optimization, which is interpreted as the probability of a bit changing to one state or the other. Therefore, velocity is mapped into [0, 1] through a logistic transformation S, usually the sigmoid function of Eq. (3):

S(v_ij(t)) = 1 / (1 + e^(−v_ij(t)))    (3)


where v_ij(t) is the jth component of the velocity vector v_i(t). The new position of the particle is then updated through Eq. (4):

x_ij(t + 1) = 1 if rand_ij < S(v_ij(t)), otherwise x_ij(t + 1) = 0    (4)


where rand_ij is a random number drawn from a uniform distribution on [0, 1], and x_ij(t + 1) represents the jth component of the vector x_i(t + 1). However, a rise of velocity in the positive direction drives the probability of the bit taking the value 1 toward one, while a rise in the negative direction drives it toward zero. When the optimization process has nearly reached the optimum solution, the probability of changing the position of the particle should be near zero, while at this


Z. Li et al.

point, using the sigmoid function, the position will change by taking the value 1 or 0 with probability 0.5, which prevents the algorithm from converging well. To avoid this situation, the hyperbolic tangent (tanh) function, shown in Eq. (5), is leveraged as the transformation function:

S(v_ij(t)) = tanh(a·v_ij(t)) = (e^(a·v_ij(t)) − e^(−a·v_ij(t))) / (e^(a·v_ij(t)) + e^(−a·v_ij(t)))    (5)


where a is the weighting coefficient of the transformation.
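Equations (3)-(5) can be sketched directly; the velocity-update step and swarm bookkeeping are omitted here, and the behaviour of the tanh transfer for negative velocities (probability forced to zero) is the one the text describes.

```python
import math
import random

def sigmoid(v):
    """Logistic transformation of Eq. (3)."""
    return 1.0 / (1.0 + math.exp(-v))

def tanh_transfer(v, a=1.0):
    """Tanh transformation of Eq. (5); a is the weighting coefficient."""
    return math.tanh(a * v)

def update_bit(v, transfer, rng=random):
    """Position update of Eq. (4): the bit becomes 1 with probability
    transfer(v), otherwise 0. With the tanh transfer a negative velocity
    gives a non-positive value, so the bit is always set to 0."""
    return 1 if rng.random() < transfer(v) else 0
```

Note that sigmoid(0) = 0.5, so a near-zero velocity flips the bit half the time, whereas tanh(0) = 0 freezes the bit, which is exactly the convergence argument made above.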

4 HDPS-BPSO Based Maintenance Scheduling

As introduced above, the backlash error that will occur in the equipment at all positions and in all directions can be predicted by the proposed HDPS. During scheduling, our target is to minimize the total cost raised by backlash error, covering the maintenance cost, machining accuracy and defective percentage over the latest 25 weeks. The main cost function in this case study includes the degradation cost CG, the maintenance cost CM, and the inspection cost CI. The assumptions and definitions of the mathematical model are given here:

Assumption 1: In practical industrial applications, the relationship between production and maintenance is usually considered a conflict in management decisions. Here, we assume the maintenance scheduling compromises with the production scheduling, meaning the workload of the equipment will not change with maintenance decisions.
Assumption 2: The degradation in a specific direction and position completely follows the mapping provided by HDPS.
Assumption 3: Once maintenance has been performed, the degradations in all directions and positions are assumed to return to their initial values (Week 1). Subsequent degradations keep following HDPS according to the distance from the last maintenance.
Assumption 4: If maintenance has been scheduled, it is assumed to be performed at the beginning of that week.
Assumption 5: Holidays are excluded from the mathematical model.

CG: Degradation cost.
CM: Maintenance cost.
CI: Inspection cost.
W: Number of weeks to be scheduled.
A: Number of axes inspected.
P: Number of axial positions inspected.
Pr: Production profit in unit time.
DP: Maximum permissible degradation.
DN: Criterion of normal product.
M: Cost of maintenance performance.
Load_i: Working load in week i.

HDPS-BPSO Based Predictive Maintenance Scheduling

H: Maximum working hours per week.
h: Time of a single maintenance performance.
D_ijk: Degradation in week i along axis j at position k predicted from HDPS.
D′_ijk: Degradation in week i along axis j at position k after maintenance scheduling.
a: Weighting factor for degradation cost.
b: Weighting factor for maintenance cost.
d_i: Distance (in weeks) from the last maintenance in week i.
x_i: Decision variable.

The decision variable x_i during optimization is defined as:

x_i = 1 if maintenance is performed in week i, otherwise x_i = 0


The degradation cost here is caused directly by the geometrical error from backlash. It can be estimated as follows:

C_G = Σ_{i∈W} Σ_{j∈A} Σ_{k∈P} Load_i · H · φ(D′_ijk)

D′_ijk = D_{d_i,jk}, where d_i = 1 if x_i = 1, otherwise d_i = d_{i−1} + 1

where φ(·) denotes the production cost caused by degradation. It can be calculated according to Eq. (8):

φ(D′_ijk) = 0, if D′_ijk ≤ D_N
φ(D′_ijk) = ((D′_ijk − D_N) / (D_P − D_N)) · Pr, if D_N < D′_ijk ≤ D_P    (8)
φ(D′_ijk) = Pr, if D′_ijk > D_P

Here, we consider that when the degradation is between the normal criterion and the maximum permissible degradation, the manufacturing profit decreases linearly with degradation. The maintenance cost is evaluated according to the number of maintenance performances:

C_M = M · Σ_{i∈W} x_i

Then, the total cost C_tot can be obtained as:

C_tot = a · C_G + b · C_M + C_I

with the constraint

∀ x_i ∈ W : x_i · h + Load_i · H ≤ H
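Under my reading of the (partly garbled) cost equations above, a maintenance plan can be scored as follows. The degradation map D and all parameter values are toy stand-ins for the HDPS output, not the paper's data.

```python
# Toy sketch of the total-cost evaluation for a maintenance plan, following
# my reading of the cost model above. D[d][j][k] maps weeks-since-
# maintenance d (1-based) x axis j x position k to a degradation value.
PR, DP, DN = 2000.0, 16.0, 12.5     # profit/h, max and normal degradation (um)
M_COST, C_I = 15000.0, 0.0          # cost per maintenance, inspection cost
H = 45.0                            # max working hours per week

def phi(d):
    """Hourly production cost caused by degradation d, Eq. (8): zero below
    D_N, linear in between, full profit loss above D_P."""
    if d <= DN:
        return 0.0
    if d <= DP:
        return (d - DN) / (DP - DN) * PR
    return PR

def total_cost(x, load, D, alpha=1.0, beta=1.0):
    """C_tot = alpha*C_G + beta*C_M + C_I for decision vector x
    (x[i] = 1 means maintenance at the start of week i)."""
    cg, d_i = 0.0, 0
    for i, xi in enumerate(x):
        d_i = 1 if xi else d_i + 1          # weeks since last maintenance
        cg += sum(load[i] * H * phi(v)      # sum over axes j and positions k
                  for axis in D[d_i - 1] for v in axis)
    cm = M_COST * sum(x)
    return alpha * cg + beta * cm + C_I
```

A BPSO particle is then simply the binary vector x, and `total_cost` plays the role of the fitness function being minimized.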




Because the equipment is inspected continuously in this model, the value of CI is fixed. Some costs, such as incidental damage caused by maintenance and the loss of reputation from producing imperfect products, are difficult to quantify directly; a and b can therefore be leveraged to weight the effects of degradation and maintenance, respectively. The parameters of HDPS-BPSO are set according to the case study as follows: population size 100, maximum iteration 500, weighting coefficients a and b both set to 1, W is 25 weeks, A is 2 axes, P is 25 positions, Pr is 2,000 Norwegian Krone (NOK)/h, M is 15,000 NOK, DP is 16 μm, DN is 12.5 μm, H is 45 h, and h is 2 h. During the test, we leveraged the hyperbolic tangent function as the logistic transformation for optimization. The numerical result of HDPS-BPSO is as follows. Convergence starts around the 200th iteration. According to the numerical result, the best predictive maintenance solution in this case is to perform maintenance in week 9 and week 18, for which the total cost, including the loss from degradation and the maintenance cost, is 33,303 NOK. Under the previous preventive maintenance strategy, maintenance is supposed to be performed every 6 weeks, i.e. in weeks 7, 13 and 19, and the corresponding total cost is 47,881 NOK. Therefore, through predictive maintenance, the maintenance cost of a single machining center can be reduced by 14,578 NOK in this case.

5 Conclusion and Future Work

In this paper, a novel maintenance implementation strategy, HDPS-BPSO, is proposed to illustrate the implementation of predictive maintenance in a practical application. A maintenance model for backlash error compensation in machining centers is also established. With the help of BPSO, we can find an optimized maintenance strategy for the machining center to achieve zero-defect production and exploit the remaining useful life as long as possible. The numerical result shows the benefit of implementing predictive maintenance compared with preventive maintenance. In this research, we assume that maintenance scheduling compromises with production scheduling, which removes the issue of joint production from this research, since maintenance scheduling for a single machine will always compromise with its planned work. This assumption separates machining centers from each other in maintenance scheduling to fit this case. Future work may focus on the application of predictive maintenance in group maintenance scheduling.

Acknowledgement. This work is supported by the CIRCIT (Circular Economy Integration in the Nordic Industry for Enhanced Sustainability and Competitiveness) project, which is financed by the Nordic Green Growth Research and Innovation Programme.



References

1. Li, Z., Wang, Y., Wang, K.: A data-driven method based on deep belief networks for backlash error prediction in machining centers. J. Intell. Manuf. (2017). https://doi.org/10.1007/s10845-017-1380-9
2. Rini, D.P., Shamsuddin, S.M., Yuhaniz, S.S.: Particle swarm optimization: technique, system and challenges. Int. J. Comput. Appl. 14(1), 19–26 (2011)
3. Gong, Y.-J., et al.: Optimizing RFID network planning by using a particle swarm optimization algorithm with redundant reader elimination. IEEE Trans. Ind. Appl. 8(4), 900–912 (2012)
4. Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science. MHS 1995, pp. 39–43. IEEE (1995)
5. Yu, Q.: New approaches for automated intelligent quality inspection system integration of 3D vision inspection, computational intelligence, data mining and RFID technology (2015)
6. Eberhart, R.C., Shi, Y.: Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the 2000 Congress on Evolutionary Computation, pp. 84–88. IEEE (2000)
7. Liu, H., Abraham, A., Zhang, W.: A fuzzy adaptive turbulent particle swarm optimisation. Int. J. Innov. Comput. Appl. 1(1), 39–47 (2007)
8. Shi, Y., Eberhart, R.C.: Fuzzy adaptive particle swarm optimization. In: Proceedings of the 2001 Congress on Evolutionary Computation, pp. 101–106. IEEE (2001)
9. Clerc, M., Kennedy, J.: The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6(1), 58–73 (2002)
10. Kennedy, J., Eberhart, R.C.: A discrete binary version of the particle swarm algorithm. In: 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation (1997)

Influence of the Length-Diameter Ratio and the Depth of Liquid Pool in a Bowl on Separation Performance of a Decanter Centrifuge

Huixin Yuan, Yuheng Zhang, Shuangcheng Fu, and Yusheng Jiang

Institute of Separation Engineering, Changzhou University, Changzhou 213016, China
[email protected]

Abstract. Although decanter centrifuges have been widely used for solid-liquid separation, little attention has been paid to the two-phase flow field and the solid concentration distribution inside them. Due to the high-speed rotation and closed structure, there is still no effective method to describe the complex internal flow. Based on reasonable simplifications and hypotheses, the computational fluid dynamics (CFD) software Fluent was used to calculate the 3D steady flow field. The CFD results, including the solids recovery and the solid content of the cake discharge, were in good agreement with the industrial experiments. The centrifugal hydraulic pressure and the particles' Stokes settling velocity were also in line with the theoretical calculations. These indicate that the SRF method combined with the RSM turbulence model and the Mixture multiphase model is an effective technique to investigate the effects of the length-diameter ratio and the liquid pool depth of the drum in a decanter centrifuge. Results show that an increase in the beach length can significantly improve the solid content but may lead to a slight decline in solids recovery, while an increase in the cylindrical length and liquid pool depth is advantageous to liquid clarification with little effect on the solid content of the cake discharge.

Keywords: Decanter centrifuge · Numerical simulation · Length-diameter ratio · Liquid pool depth · Separation performance

1 Introduction

Decanter centrifuges have been widely used for concentration, clarification, dehydration and particle size grading in the petroleum, chemical, metallurgical, pharmaceutical, food, light-industry and environmental-protection sectors, among others. Decanter centrifuges are a kind of separation equipment with good material adaptability and a wide range of applications [1]. Due to the lack of scientific observation and testing methods, previous research on decanter centrifuges focused on strength, differential systems, wear resistance and specific applications, but paid little attention to the two-phase flow and the separation of
© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 78–85, 2019.



particles in the drum. The traditional theoretical analysis is too simplistic; as a result, there are too many correction factors and calculation errors to understand the motion law of the flow field [2]. Huang [3] simulated the distribution of axial and tangential velocity in a cylindrical bowl. Combining this with the specific characteristics of activated sludge, Ren [4] analyzed the relationship between pressure distribution and sediment rejection velocity in a spiral flow channel; Yu [5] used the DPM method to analyze the movement of particles in a helical flow channel at different concentrations. On this basis, Zhou [6] analyzed the influence of the helical conveyor on particle trajectory and separation. Jing [7] analyzed the influence of the drum's cone angle on the velocity field; Zhu [8] used an Euler method to simulate the transient changes in the process of solid phase deposition and described the accumulation and transportation of particles in a drum. With the development of computational fluid dynamics, numerical simulation has become an important means of analyzing the internal flow field of fluid machinery. Scholars around the world have done impressive work in the field of centrifuges [9–11], but numerical simulations of decanter centrifuges are rare.

2 Methods and Reliability Verification

2.1 Physical Model and Meshing

The physical model is based on the LW-250 decanter, as shown in Fig. 1, and the dimensions of the model are given in Table 1. The structure is simplified appropriately, with the following assumptions: (1) the drum is filled with liquid and flows steadily; (2) the liquid phase is a continuous incompressible fluid; (3) the solid phase consists of uniform particles with uniform density, and no flocculation, breakage or deformation occurs during the movement; (4) gravity and the thermal effect of sediment extrusion are ignored.

Table 1. LW-250 decanter's parameters

Inner diameter of drum D/mm: 250
Number of bars N: 8
Length of drum L/mm: 1000
Leaf angle θ/(°): 0
Lead of screw S/mm: 60

Talc slurry with a volume fraction of 20% was used for testing and simulation. The size distribution was measured by a Malvern laser particle sizer: the median diameter is d50 = 28.7 μm, and the volume-weighted mean size is d[4, 3] = 32.6 μm. The separation in the decanter was simulated with a single particle size d = 30 μm, and the packing limit of particle accumulation was set to its theoretical maximum of 0.74 (Tables 2 and 3).

Table 2. Physical parameters of feed

Solid particle size d/μm: 30
Solid density ρs/(kg·m−3): 2621.2
Liquid density ρl/(kg·m−3): 998.2
Liquid viscosity μl/(Pa·s): 0.001


Table 3. Operating parameters of centrifuges

Processing capacity Q/(m³·h−1): 2.12
Liquid pool depth h/mm: 20
Drum speed n1/rpm: 4000
Speed difference Δn/rpm: 20



Fig. 1. Physical model of decanter centrifuges

The model is established with Creo Parametric and Gambit. The feed pipe, overflow port and the center of the helical flow passage use structured hexahedral meshes, while the other regions use unstructured tetrahedral meshes. After verification of mesh independence, the final mesh count is about 540 thousand. The mesh and mesh quality are shown in Fig. 2.




Fig. 2. Sketch map of mesh quality (panel labels: subsidence region, transition zone, dewatering region)


2.2 Boundary Conditions and Solving Strategies

The inlet boundary condition adopts a velocity inlet. According to the processing capacity, the inlet flow velocity is 1.2 m/s. The overflow and discharge ports adopt outflow boundaries, with mass-flow weights estimated from the actual flow conditions as 0.45:0.55. The walls contacting the fluid use the no-slip condition. The speed difference between the screw and the drum is specified by SRF: the whole drum of the decanter centrifuge (including the pre-rotating cavity) rotates, while the shaft of the screw conveyor and the spiral wall are given a certain differential speed.



The internal flow of a horizontal spiral (decanter) centrifuge is complex, so it is more appropriate to adopt the RSM turbulence model with its higher accuracy. The interaction between the solid and liquid phases is calculated by the Mixture model. Although the Mixture model uses a single-fluid method, it allows the phases to interpenetrate and to move at different speeds, thus reflecting the flow conditions in the drum more comprehensively. The calculations are performed on a workstation; the pressure-velocity coupling uses the SIMPLE algorithm, the PRESTO scheme is chosen for the pressure term because it suits large pressure gradients, and the QUICK scheme with third-order accuracy is selected for the other difference terms. After 45,000 steps of flow field calculation, the residual curves stabilized, and the solid-phase volume fraction on each monitoring surface no longer changed with further iterations. The flow field was thus considered stabilized.

3 Results and Discussion

3.1 Effect of Length-Diameter Ratio on Separation Performance

The inner diameter D2 of a decanter centrifuge is the parameter that determines its capacity. At a given D2, a reasonable selection of the length-diameter ratio L/D2 is very important. The effective length L of the drum is the total length from the drum's big-end flange to the discharge port, including the subsidence zone, the transition zone and the dewatering zone (Fig. 2). At a given drum cone angle and liquid pool depth, the length of the subsidence zone and the combined length of the transition and dewatering zones are determined by the cylindrical length L1 and the cone length L2 of the drum, respectively.

Effect of the Beach Length
An increase of the beach length, i.e. the length of the drum cone section, leads to a decrease in the radius of the discharge end. This strengthens the squeezing effect of the solid bed, forcing the debris to dewater further and thus greatly improving the slag solid content W (see Figs. 3 and 4). Meanwhile, particles which could not complete settling would likely be entrained by the liquid and then escape via the overflow. Thus, as shown in Fig. 4, the recovery rate E decreases as the beach length increases.

Influence of the Length of a Cylindrical Drum
In a countercurrent decanter centrifuge, the feed flows directly toward the narrower end of the drum, turns back at the discharging end and flows toward the broader end; the clarified liquid travels along the inside of the spiral channel and leaves through the overflow, while the pre-swirled particles remain in the drum. In the latter case, the time required


H. Yuan et al.

Fig. 3. Effect of cone length on solid volume fraction in drum

Fig. 4. Influence of the beach length on separation performance in a decanter centrifuge

for the particles to pass through the entire subsidence area along the drum axial direction is the shortest residence time t1 of the particles, referred to here as the "residence time":

t1 = L1 / uz

where uz is the average axial velocity of the particles inside the flow channel. It can be calculated from the area between the curve and the coordinate axis in Fig. 5. Since the length of a cylindrical drum has little effect on the flow field, uz is taken to be 0.45 m/s.



Fig. 5. Influence of the length of a cylindrical drum on the axial velocity

According to the calculation above, the residence time t1 of the particles in the drum is much longer than the time t2 required for the particles to settle to the inner wall of the drum for the separation of talc under the existing operating conditions. This means that even if the particles are agitated by local vortices, they still have enough time to complete settling, ensuring that the solid phase is recovered; this is the main reason why the overflow fluid remains clear. As can be seen from Figs. 6 and 7, the length of the cylindrical drum L1 has almost no effect on the slag solid content W in the simulation, while the solid phase recovery E increases with L1. This effect would be more significant for feeds with finer solids.
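The t1 versus t2 comparison can be reproduced approximately from the paper's feed and operating data, assuming Stokes drag (which is optimistic at this rotation speed), a cylindrical length L1 of 0.6 m and a mean settling radius of 0.105 m; these two geometric values, and treating the pool depth as the settling distance, are assumptions for illustration.

```python
import math

# Feed and operating data from Tables 2 and 3; L1 and R are assumed.
RHO_S, RHO_L, MU = 2621.2, 998.2, 0.001   # kg/m3, kg/m3, Pa*s
D_P = 30e-6                                # particle diameter, m
N1 = 4000.0                                # drum speed, rpm
POOL = 0.020                               # liquid pool depth, m
L1 = 0.6                                   # assumed cylindrical length, m
R = 0.105                                  # assumed mean settling radius, m
UZ = 0.45                                  # mean axial velocity, m/s (Fig. 5)

omega = 2 * math.pi * N1 / 60.0            # angular speed, rad/s
a_c = omega ** 2 * R                       # centrifugal acceleration, m/s2
# Stokes settling velocity with gravity replaced by a_c
v_settle = (RHO_S - RHO_L) * D_P ** 2 * a_c / (18 * MU)

t1 = L1 / UZ                               # residence time in subsidence zone
t2 = POOL / v_settle                       # time to cross the liquid pool
```

With these numbers t1 comes out on the order of a second while t2 is on the order of tens of milliseconds, consistent with the claim that t1 >> t2 and hence with the clear overflow.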

Fig. 6. Influence of the cylindrical length on the solid phase distribution in the drum



Fig. 7. Influence of the cylindrical length on separation performance in a decanter centrifuge


3.2 Influence of Liquid Pool Depth on Separation Performance

The liquid pool depth h of the decanter centrifuge usually needs to be adjusted according to the mean particle size of the feed. The fluid-level adjustment device is generally placed at the big end of the drum. As can be seen from Fig. 8, h has a weak influence on the slag solid content W, since the overflow port is far away from the dewatering zone. However, a small liquid pool depth h causes rapid outflow through the overflow, reducing the residence time of the solids and decreasing the separation efficiency. Therefore, for difficult separation tasks the liquid pool should be as deep as possible, h/(D2 − D1) = 0.4–0.5, which along the length of the settlement area is also conducive to sediment dehydration and washing.

Fig. 8. Influence of liquid pool depth on separation performance in a decanter centrifuge



4 Conclusions

(1) The Mixture multiphase flow model is used to consider the interaction between the liquid and solid phases, and the simulation results are in good agreement with the tests and the theoretical calculations. Compared with the DPM model, the Mixture model is more suitable for the high solids concentration at the small end of the drum, and the distribution of solid volume fraction within the drum can be obtained directly; the simulation results are clear and intuitive.
(2) Increasing the cone length L2 of the drum is beneficial to the slag solid content, but it also leads to a slight decrease in solid phase recovery; it is suitable for occasions requiring a higher slag solid content. Increasing the cylindrical length L1 of the drum can increase the solid phase recovery rate appropriately while having little effect on the slag solid content; it is suitable for occasions requiring high clarity.
(3) Increasing the depth of the liquid pool h can prolong the effective settlement area and increase the residence time of the fine particles, and it also hinders escaping particles to some extent. For material which is difficult to separate, the liquid pool should be as deep as possible to improve the clarification effect. Since the overflow at the big end is far from the drum's dewatering zone, the change in the slag solid content is very small.

References

1. Bell, G.R.A., Symons, D.D., Pearse, J.R.: Mathematical model for solids transport power in a decanter centrifuge. Chem. Eng. Sci. 107, 114–122 (2014)
2. Boychyn, M., Yim, S.S., Bulmer, M., et al.: Performance prediction of industrial centrifuges using scale-down models. Bioprocess Biosyst. Eng. 26, 385 (2004)
3. Fernández, X.R., Nirschl, H.: Multiphase CFD simulation of a solid bowl centrifuge. Chem. Eng. Technol. 32, 719–725 (2010)
4. Huang, Z.X., Qian, C.F., Fan, D.S., et al.: Numerical simulation of the velocity of the fluid surface in the drum of a sedimentation centrifuge. J. Beijing Univ. Chem. Technol. (Nat. Sci. Ed.) 34, 645–648 (2007)
5. Jing, B.D., Liu, J.G., Wang, B., et al.: Structure design and parametric optimization on drum cone angle of horizontal screw centrifuge. J. Mech. Eng. 49, 168–173 (2013)
6. Leung, W.F.: Inferring in-situ floc size, predicting solids recovery, and scaling-up using the Leung number in separating flocculated suspension in decanter centrifuges. Sep. Purif. Technol. 171, 69–79 (2016)
7. Sun, Q.C., Jin, D.W.: Centrifuge Principle of the Structure and Design Calculation. China Machine Press, Beijing (1987)
8. Yu, P., Lin, W., Wang, X.B., et al.: Velocity simulation analysis on centrifugal separation field of horizontal spiral centrifuge. J. Mech. Eng. 47, 151–157 (2011)
9. Yuan, H.X., Feng, B.: Separation Engineering. China Petrochemical Press, Beijing (2002)
10. Zhou, C.H., Ling, Y., Shen, W.J., et al.: Numerical study on sludge dewatering by horizontal decanter centrifuge. J. Mech. Eng. 16, 206–212 (2014)
11. Zhu, G., Tan, W., Yu, Y., et al.: Experimental and numerical study of the solid concentration distribution in a horizontal screw decanter centrifuge. Ind. Eng. Chem. Res. 52, 17249–17256 (2013)

LSTM Based Prediction and Time-Temperature Varying Rate Fusion for Hydropower Plant Anomaly Detection: A Case Study

Jin Yuan1,2, Yi Wang3, and Kesheng Wang2

1 College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai'an, China
[email protected]
2 Department of Mechanical and Industrial Engineering, NTNU, Trondheim, Norway
[email protected]
3 The School of Business, Plymouth University, Plymouth, UK
[email protected]

Abstract. Data-driven predictive maintenance is vital in hydropower plant management, since early detection of emerging problems can save invaluable time and cost. The overheating of bearings of turbines and generators is one of the major problems for the continuous operation of hydropower plants. A reliable forecast of bearing temperature helps designers in preparing future bearings and setting up the operating range of bearing temperatures. In this study, a fusion algorithm combining a Long Short Term Memory (LSTM) neural network based slide-window regression model with time-temperature varying rate based anomaly detection is developed for detecting component and temporal anomalies of a 56 MW Francis Pumped Storage Hydropower (PSH) plant in predictable and noisy domains. Data sets from all sensors were collected over a period of ten years, from 2007 to 2017, and used as the training and test datasets. The predicted upper guide bearing temperature values were compared with the actual bearing temperature values in order to verify the performance of the model. The data analysis results show that the anomaly detection is validated on the PSH plant.

Keywords: Hydropower plant · Anomaly detection · LSTM neural networks · Data-driven · Predictive maintenance

1 Introduction

Since hydropower is an environmentally friendly and renewable energy source, one sixth of the power produced in the world comes from hydropower, and 99% of all power production in Norway comes from hydropower [1]. The growing synergy among EU member states has made it possible for Norway to be selected as the "Green Battery" of

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 86–94, 2019.



Europe by developing Pumped Storage Hydropower (PSH) plants as a means of storage technology, the most feasible among all the energy storage technologies available today. Predictive maintenance for large-scale complex electromechanical systems has attracted extensive study in recent years [2]. Early fault or anomaly detection, before the emerging problem appears, is vital; it can trigger the necessary maintenance to avoid dangerous situations, thus helping to ensure equipment reliability and in-service time and to reduce failure rates and unscheduled downtimes [3]. Moreover, because predictive maintenance activities are scheduled ahead of time, Condition Based Maintenance (CBM) tends to be less costly than preventive maintenance [4, 5]. In the field of automation control of hydropower plants, real-time monitoring of operating equipment is crucial [6]. In real-time monitoring, different types of data from key components need to be collected, including temperature, vibration, water level, pressure, flow, etc. With the development of cloud computing and the Internet of Things (IoT), a large amount of historical and online CBM data of a hydropower plant can be stored in the cloud for predictive maintenance research and applications. The bearing temperature of the hydropower plant plays a vital role in the operation of the turbines and generators under working conditions [7]. Stable bearing temperatures in the turbine and generator are essential for their successful continuous operation [4]. Predictive maintenance decision-making depends on health statuses and deterioration trend prediction. Health prediction refers to the use of an accurate and effective model to monitor a key performance indicator as the health recession process of the component or system, and to predict the forthcoming health deterioration tendency by data mining and analysis of the current running state.
Long Short Term Memory (LSTM) neural networks [8] have powerful modeling capability for time sequences [9]: they can model long-term contextual information and extract representative features from latent variables, in addition to the neural network's general ability to fit nonlinearities. This paper proposes an LSTM-based method for bearing temperature prediction and increment-based anomaly detection in a hydropower plant. The proposed approach provides an alternative way to leverage and integrate features for anomaly detection, instead of relying on empirical knowledge of the hydraulic bearing and its cooling system of the hydropower plant. The remaining part of this paper is organized as follows. Section 2 introduces the setup and data collection of a pumped storage hydropower plant. Section 3 details the methodology of the framework of LSTM based prediction and time varying rate fusion for hydropower plant anomaly detection. Finally, the conclusions are given in Sect. 5.

2 Pumped Storage Hydropower Plant Data Collection

The pumped storage plant is installed with a 56 MW Francis reversible pump turbine with 91 GWh annual average production, which is also the most appropriate for Litjfossen. The condition surveillance system contains: power production and guide vane opening for turbine operation monitoring; bearing and oil temperature of the thrust bearing and upper guide bearing; vibration intensity of the turbine bearing, lower guide


J. Yuan et al.

bearing and upper guide bearing, indicated by maximum and mean amplitude; etc. A schematic overview of the key monitored components of the pumped storage hydropower plant is shown in Fig. 1. The dataset covers a period of ten years, from April 2007 to June 2017, sampled every hour, and was collected from the SCADA surveillance system of the pumped storage plant, as shown in Fig. 2.

Fig. 1. Schematic overview of key monitoring components of hydropower plant

Fig. 2. The Power generating and pumping condition of PSH plant

3 Framework of LSTM Based Prediction and Time Varying Rate Fusion for Hydropower Plant Anomaly Detection

One of the important goals of predictive maintenance is to use daily monitoring of temperature, vibration, and other status readings to predict failures in advance. The variance of the hydraulic bearing temperature of a PSH plant depends on whether the interface structures and contacts among the shafts, bearings and oil are normal, and further on the flow and temperature of the cooling water, the ambient temperature and the oil

LSTM Based Prediction and Time-Temperature Varying Rate Fusion


temperature. The thermodynamic balance produced by the corresponding mechanical and lubricating components at different working stages makes the process complex, but data-driven time series modeling algorithms can be leveraged via the co-relationships hidden in the historic data. In this case, considering that the data are collected at a low sampling frequency, two methods, one data-driven and one physical, are proposed and fused together. The first is to find the trend of change from the queried historic hourly values, and then predict when it will cross a given limit. This data-driven approach learns the patterns of variance, intrinsically avoids modeling the underlying physical mechanisms, and focuses on predicting from the relevant historical time series data. The second method is to find abnormal signs in the above data changes, i.e. anomaly detection. The flow of LSTM based prediction and time varying rate fusion for PSH plant anomaly detection is shown in Fig. 3.

Fig. 3. Flow of LSTM based prediction and time varying rate fusion for anomaly detection


LSTM Based Bearing Temperature Prediction

LSTM is one of the most successful modern recurrent neural network architectures for sequence learning tasks, and it can self-adaptively extract features from a time series. A memory cell is introduced to form simple nodes in a specific connectivity pattern, and such units ensure that the gradient can pass across many time steps without vanishing or exploding. LSTM networks have achieved great success with sequential data in a range of fields, including video analysis, information retrieval, natural language translation, and prediction of excess vibration events in aircraft engines.
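For reference, the standard LSTM cell of Hochreiter and Schmidhuber [8], with the commonly added forget gate, is governed by:

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

Here $x_t$ is the input at time $t$, $h_t$ the hidden state, $c_t$ the cell state, $\sigma$ the logistic sigmoid and $\odot$ the element-wise product. The memory cell $c_t$ is what lets the gradient pass across many time steps without vanishing or exploding.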



From the perspective of PSH plant maintenance, it is always desirable to predict the bearing temperature correctly. In this paper, a sliding-window LSTM is therefore used for one-step-ahead bearing temperature prediction of the PSH plant. The LSTM network structure is shown in Fig. 4. The input layer is resampled by a sliding window of width N over the historical sequence data. The output layer gives the one-step-ahead prediction. Three hidden LSTM layers and one fully connected layer are cascaded between the input and output layers, with 128 LSTM units and 512 dense units in the hidden layers at each time step. Window widths of N = 20, 30, 50 and 100 were tested for hourly prediction, and the best sliding window size, N = 50, was chosen empirically from the experimental results.

Fig. 4. Structure of LSTM based model for one-step-ahead bearing temperature prediction
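The sliding-window setup can be sketched as follows. This is an illustrative stand-in: the series is synthetic and only the window construction and the 70/30 chronological split are shown, not the LSTM fitting itself; the resulting windows would feed the stacked network of Fig. 4.

```python
import numpy as np

def make_windows(series, n=50):
    """Build (X, y) pairs for one-step-ahead prediction: each input row
    is a window of n past hourly samples; the target is the next sample."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + n] for i in range(len(series) - n)])
    y = series[n:]
    return X, y

# Synthetic hourly bearing-temperature series (stand-in for the SCADA data)
temps = 70.0 + 5.0 * np.sin(np.linspace(0.0, 20.0, 1000))
X, y = make_windows(temps, n=50)

# 70/30 chronological train/test split, as used in the paper
split = int(0.7 * len(X))
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
```

Splitting along the time axis (rather than shuffling) preserves the temporal ordering, so the test set genuinely lies in the model's future.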


Bearing Temperature Increment Based Anomaly Detection

Before a bearing failure, in most cases there will be an abnormal temperature change rate. It is not difficult for an LSTM to model the normal behavior in a stable operating state, but for transition states, such as the startup stage or fault conditions, exact prediction is difficult. However, this property can be exploited for anomaly detection: the LSTM is used to model normal behavior only. From the thermodynamic point of view, before the balance between generated heat and forced-cooling heat is reached, the heat variances over a given period during shutdown, startup and fault stages have different time constants and nonlinear lags. Practical experience shows that the time change rate of the bearing temperature is mainly affected by the operating conditions, controlled by the uniformity of the oil temperature in the bearing oil tank, the rate of rise of the unit speed, the concentricity between the inner and outer rings of the bearing, etc. For example, when the oil tank is contaminated with water, the bearing temperature will decrease suddenly. To utilize this physical



thermodynamic feature, the time-temperature varying rate can be used to recognize abnormal bearing temperatures in the monitoring system. A fusion approach is proposed in which the time-temperature varying rate measure is combined with the LSTM prediction method via a tail probability, as shown in Figs. 3 and 5.

Fig. 5. Data fusion between LSTM based prediction and time-temperature varying rate

Let the LSTM prediction error and the hourly increment of the bearing temperature be denoted $\Delta E$ and $\Delta T$ respectively. Both are combined with weights into a scalar nominal error $s_t$ representing the anomaly confidence:

$$s_t = \omega_1 \Delta E_t + \omega_2 \Delta T_t, \qquad \text{s.t. } \omega_1 + \omega_2 = 1 \tag{1}$$


Rather than using a fixed threshold directly as the recognition metric, the distribution of the nominal error can be used as a probabilistic metric to check the anomaly likelihood [10] of the current state, defined as how anomalous the current condition is based on both the data-driven prediction and the physical time-varying rate change. A sliding window of long-term size $W$ over the most recent nominal errors $s_t$ is continuously updated, and the anomaly likelihood is computed as:

$$L_t = 1 - Q\!\left(\frac{\tilde{\mu}_t - \mu_t}{\sigma_t}\right) \geq 1 - \epsilon \tag{2}$$


The distribution is assumed to be normal, sampled from the sliding window $W$, with sample mean $\mu_t$ and standard deviation $\sigma_t$; $\tilde{\mu}_t$ is defined as the mean value over a short-term moving average window of size $W'$ ($W' \ll W$). $\epsilon$ is a user-defined threshold: the condition is recognized as anomalous if the inequality in Eq. (2) is satisfied.

4 Results and Discussion

Because the sampling frequency of the PSH dataset is low (one sample per hour), and the instant within the hour at which the turbine starts or stops varies, it is difficult to capture the actual bearing thermodynamic process. Therefore, in this paper we consider only the pure working condition in each power generation and pumping cycle, and the start-up heads and shut-down tails are removed.




LSTM Prediction Validation

In this paper, the upper guide bearing temperature prediction is used as the demonstration case. The training and testing data are split 70%/30% along the time sequence, with 24014 points in total, as shown in the left panel of Fig. 6. There is good agreement between the model and the real values on the training data. On the test data, shown in the right panel of Fig. 6, the prediction of the trained LSTM model (green line) agrees reasonably with the real values (blue line) in the early stage, but the prediction error gradually increases throughout the test set. The lower right corner of Fig. 6 shows the situation in 2017, where there is a large difference between the training and the test set; however, the squared error on the normalized dataset does not exceed the upper bound of 0.025. An online learning mode with more frequent LSTM model updates may improve the result.

Fig. 6. LSTM based prediction of upper guide bearing temperature


Anomaly Validation

From the ten-year dataset, it was discovered that the temperature of the upper guide bearing while the turbine is active had been slowly increasing, although the typical temperature range for turbine bearings is about 70–80 °C, and the temperature remained steady under the safety boundary. The anomaly detection result of the fusion method is shown in Fig. 7. Four important anomaly conditions occurred over the ten years, as shown in Fig. 7a; the result for the upper guide bearing temperature is also validated by the maximum vibration of the upper guide bearing shown in Fig. 7b. In 2018, an inspection by GE Renewable Norway AS indicated that the cause of the steady temperature increase was most likely wear and tear or guide bearing skewness, since the oil temperature in the upper guide bearing and support bearing did not change noticeably during the period. Predictive maintenance using these analysis models could help to detect such problems earlier.



(a) Anomaly detected using upper guide bearing temperature

(b) Anomaly detected using maximum vibration of upper guide bearing

Fig. 7. Anomaly detected by fusion between LSTM and time-temperature varying rate

5 Conclusions

The detection of anomalies in data-driven predictive maintenance is becoming increasingly important in hydropower plant management, since early detection of emerging problems can save invaluable time and cost. In this paper, the upper guide bearing temperature in a PSH plant is used as a case study to predict failures of the turbine and generator in the PHM system. A fusion algorithm combining LSTM-based one-step-ahead prediction with time-temperature varying rate based anomaly detection is developed for detecting component and temporal anomalies in predictable and noisy domains. The prediction errors are acceptable, and the results provide a reference for capturing abnormal trends in the upper guide bearing temperature. The effectiveness of the anomaly detection model is validated on practical running data of a PSH plant. The data analysis results show acceptable accuracy and effectiveness for predictive maintenance of the PSH plant in engineering practice.

Acknowledgements. The work is supported by the MonitorX project, granted by the Research Council of Norway (grant no. 245317).

References

1. Kjølle, A.: Hydropower in Norway. Mechanical Equipment, Trondheim (2001)
2. Mobley, R.K.: An Introduction to Predictive Maintenance, 2nd edn. Butterworth-Heinemann, Boston (2003)
3. Wang, K., Wang, Y.: How AI affects the future predictive maintenance: a primer of deep learning. In: IWAMA 2017. Lecture Notes in Electrical Engineering, vol. 451 (2018)
4. Wang, Y., Ma, H.-S., Yang, J.-H., Wang, K.-S.: Industry 4.0: a way from mass customization to mass personalization production. Adv. Manuf. 5(4), 311–320 (2017)



5. Bram, J., Ruud, T., Tiedo, T.: The influence of practical factors on the benefits of condition-based maintenance over time-based maintenance. Reliab. Eng. Syst. Saf. 158, 21–30 (2017)
6. Gao, Z., Sheng, S.: Real-time monitoring, prognosis, and resilient control for wind turbine systems. Renew. Energy 116(B), 1–4 (2018)
7. Matheus, P.P., Licínio, C.P., Ricardo, K., Ernani, W.S., Fernanda, G.C., Luiz, M.: A case study on thrust bearing failures at the SÃO SIMÃO hydroelectric power plant. Case Stud. Therm. Eng. 1(1), 1–6 (2013)
8. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
9. Martin, L., Lars, K., Amy, L.: A review of unsupervised feature learning and deep learning for time-series modeling. Pattern Recogn. Lett. 42, 11–24 (2014)
10. Subutai, A., Alexander, L., Scott, P., Zuha, A.: Unsupervised real-time anomaly detection for streaming data. Neurocomputing 262, 134–147 (2017)

Wind Turbine System Modelling Using Bond Graph Method

Abdulbasit Mohammed¹ and Hirpa G. Lemu²

¹ Addis Ababa University of Science and Technology, Addis Ababa, Ethiopia
² University of Stavanger, Stavanger, Norway
[email protected]

Abstract. This paper discusses the use of bond graphs as a graphical method to model a wind turbine system. The purpose of the study reported in the article is to show some of the advantages gained by using the bond graph method in the analysis and modeling of wind turbines. A nonlinear model of a wind turbine generating system, comprising blade pitch, drive train, tower motion and generator, is modeled and presented. Using the bond graph method, particular focus is given to the blade and the drive train, and the difference from modeling with a classical mechanical method is shown. The model uses realistic parameters; however, no attempt is made to validate a specific wind turbine system. Simulations are carried out in the bond graph simulation software 20-sim. The model is validated using blade profile data of NACA 4415.

Keywords: Bond graph · Wind turbine · Dynamic modelling · Wind energy

1 Introduction

Wind energy is one of the globally fastest growing renewable energy sources. Because it does not produce any byproducts that are harmful to the environment, wind power generation is clean and non-polluting. Converting energy from wind is very different in nature from traditional generation, and thus the integration of wind power into the conventional power system requires further studies, including study of the system dynamics. According to Karnopp and colleagues [1], the analysis of classical power systems is relatively simple because the models of the power system components and controllers are well known, sufficiently standardized, and data are available. Regarding wind turbine modelling, however, problems are encountered due to lack of data and of control-system structures, which is attributed to the strong competition between wind turbine manufacturers. As a result, many studies use relatively simple models, mostly neglecting the control systems, which influences the reliability of the analytical results. The general objective of this paper is to conduct analytical modeling of a horizontal axis wind turbine with two blades using the bond graph method, with simulation in the 20-Sim software. To address this objective, the study covered modelling and dynamic behavior investigation of the aerodynamic and mechanical parts of a variable speed wind turbine with blade pitch angle control, modelling of each component of the wind turbine, such as blade, drive train and tower motion, and simulation of the overall wind turbine system. This article is divided into four sections. The first section gives the general introduction and the objectives of this article. Section 2 describes wind turbine modeling, with system and model descriptions of the different parts of the model, such as aerodynamic loads, drive train, tower motion and the complete system. The simulation results and the conclusions are presented in Sects. 3 and 4 respectively.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 95–105, 2019.

A. Mohammed and H. G. Lemu

2 Bond Graph Approach to Wind Turbine Modelling

A wind turbine (WT) is a multi-domain, multi-disciplinary complex system involving different technical areas. Though the technology is not new, concern for the sustainability of fossil fuel based energy has recently attracted research in this topic. Bond graph models of wind turbines can be found in several works. Among others, a detailed model of a blade has been proposed in which the aerodynamic forces acting on the blade are considered. To calculate the dynamic behavior, real data of a wind turbine are often used. Over the years there have been discussions about how to model the WT accurately. While some works reported dynamic analysis of a one-mass model [2, 3], others examined a two-mass model [4–6]. There are also examples where actual measured data from a WT are used and compared with both a 1-mass and a 2-mass model [7], with model validation using a recorded case obtained from a fixed speed, stall regulated wind turbine. In this article, a study conducted on a complete bond graph model is discussed. The study is based on data and parameters of a real 750 kW wind turbine [8].

2.1 Concept of Bond Graph

The basic elements of the bond graph method are illustrated in Fig. 1. As shown, a bond always exists between two elements. Two power variables are associated with each bond: a pair of conjugated variables, one the effort (e) and the other the flow (f). Energy flows through the bond in either direction. Ports are places where energy transfer takes place; they are also the contact points of sub-systems. Power bonds are used to transfer power between different parts of the system. A bond graph consists of nine elements, designated as Se, Sf, I, C, R, GY, TF, 0-junction and 1-junction (Table 1).

Fig. 1. Bond and ports of bond graph and symbol of power bond



Table 1. Elements, symbols and causalities of bond graphs

Elements              Symbol
Source of effort      Se
Source of flow        Sf
Resistive element     R
Compliance element    C
Inertial element      I
Transformer           TF
Gyrator               GY
0-junction            0
1-junction            1
These elements can be classified into three categories: one-port elements, two-port elements, and three- or multi-port elements. One-port elements are further categorized as active or passive: Se and Sf are active, while the I, C and R elements are passive. The two-port elements are TF and GY. Being a transformer, TF does not create, store or dissipate energy; it conserves power and transmits the factors of power with power scaling. The three-port or multi-port elements are the 0-junction and 1-junction. These junctions serve to interconnect elements into subsystem or system models. The efforts on a 0-junction are equal and the algebraic sum of the flows is zero; in a 1-junction the converse holds: the flows are equal and the algebraic sum of the efforts is zero [1].

2.2 Wind Turbine Model Description

The wind causes lift and drag forces on the blades, which generate a torque and turn the blades. The blades turn the low-speed shaft inside the nacelle. The low-speed shaft goes into a gearbox that increases the rotational speed transmitted to the generator by the high-speed shaft. Finally, the generator converts the mechanical power into electrical power. In other words, the torque produced by the blades is transferred to the gearbox through the main shaft and further to the generator. The wind turbine model consists of five inertias: the two blades, the hub, the gearbox and the generator (Fig. 2). The Euler-Bernoulli beam and blade element momentum models are used as the basis of the proposed model, where the global model is constructed by coupling the aerodynamic model with the structural model (Fig. 3). That is, the wind turbine blade is modeled as a cantilever beam. The Bernoulli-Euler method assumes uniform section beams and small deformations, while the blade exhibits large deformation. To apply this method, the blade is divided into three elements, and the deformations of the sections are added to obtain the total deformation of the blade. The beam element establishes the relation between the generalized Newtonian forces and the



Fig. 2. Wind turbine model description

generalized displacements at the ends of the element. The stiffness matrix is modelled as a 4-port C-field because of the four generalized displacements. In the bond graph model of the beam element, the inertias are lumped at the element's centre of gravity (CoG), and appending them to the 1-junctions represents the displacements and rotations at the CoG of the element [9]. The final simplified bond graph of the structural model of a flexible blade is shown in Fig. 4.
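The 4-port C-field corresponds to the standard Euler-Bernoulli beam element stiffness matrix (a textbook result, stated here for reference rather than reproduced from the paper), relating the transverse displacements and rotations at the two element ends:

```latex
[K_i] = \frac{EI}{l^3}
\begin{bmatrix}
 12  &  6l   & -12  &  6l  \\
 6l  &  4l^2 & -6l  &  2l^2 \\
-12  & -6l   &  12  & -6l  \\
 6l  &  2l^2 & -6l  &  4l^2
\end{bmatrix}
```

where $E$ is the Young's modulus, $I$ the second moment of area and $l$ the element length.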

Fig. 3. Blade model

In the model given in Fig. 4, $R_i$ stands for the structural damping matrix between the CoG of adjacent elements, obtained as a multiple of the stiffness matrix $[K_i]$ of the cantilever beam, with $\mu$ the structural damping coefficient:

$$R_i = \mu [K_i]$$




Fig. 4. Structural model of flexible blade with three section

Wind turbine aerodynamics is highly complex and nonlinear. The true flow of fluid passing a wind turbine is governed by first principles, i.e. the Navier-Stokes equations, which are too complex to use in modelling. The blade geometry and flow stream properties can be related to a differential rotor torque $dQ_i$ and a differential rotor thrust $dT_i$, expressed as

$$dT_i = \frac{1}{2}\rho V_{rel_i}^2 \left(C_{l_i}\cos\phi_i + C_{d_i}\sin\phi_i\right) c_i l_i$$

$$dQ_i = \frac{1}{2}\rho V_{rel_i}^2 \left(C_{l_i}\sin\phi_i - C_{d_i}\cos\phi_i\right) c_i l_i$$


By assuming that there is no interaction between the blade elements, the forces exerted on a blade element by the flow stream are determined solely by the planar (2D) lift and drag characteristics of the blade element airfoil shape and its orientation relative to the inflow. The local forces on the blade are the lift (L), drag (D) and normal (R) forces. The relation between these forces, including the tangential (T) force, is purely vectorial and determined by the angle of attack of the incoming flow, $\alpha_i$, and $\phi_i$ at the $i$th section. The velocity of the incoming flow stream $V_{rel_i}$ for the $i$th blade element relative to the wind velocity $V_w$ is thus obtained from

$$V_{rel_i} = \frac{V_w (1 - a_i)}{\sin \phi_i}$$


The tip speed ratio is one of the key parameters used to describe the performance of a wind turbine, and it is expressed as

$$\lambda_i = \frac{\omega_r r_i}{V_w}$$
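To make the blade-element relations concrete, the sketch below evaluates $V_{rel_i}$, $dT_i$, $dQ_i$ and $\lambda_i$ for a single element. All numerical values (airfoil coefficients, inflow angle, induction factor, geometry) are hypothetical, chosen only for illustration and not taken from the paper.

```python
import math

# Hypothetical blade-element values
rho, Vw = 1.225, 10.0                 # air density [kg/m^3], wind speed [m/s]
a_i, phi = 0.30, math.radians(15.0)   # axial induction factor, inflow angle
Cl, Cd = 1.1, 0.02                    # lift and drag coefficients
c_i, l_i = 1.5, 7.8                   # chord [m], element length [m]

Vrel = Vw * (1.0 - a_i) / math.sin(phi)             # relative inflow speed
q = 0.5 * rho * Vrel**2 * c_i * l_i                 # shared dynamic-pressure factor
dT = q * (Cl * math.cos(phi) + Cd * math.sin(phi))  # thrust contribution
dQ = q * (Cl * math.sin(phi) - Cd * math.cos(phi))  # torque-producing contribution

omega_r, r_i = 2.0, 11.7                            # rotor speed [rad/s], radius [m]
lam = omega_r * r_i / Vw                            # local tip speed ratio
```

Note how the small inflow angle makes the relative speed much larger than the wind speed, and the thrust contribution larger than the torque-producing one.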


For each blade section, the aerodynamic force is obtained by applying BEM theory. Thus, the aerodynamic force $F_i$ acting on the $i$th element ($i = 1, 2, \ldots, N$) is expressed as:



$$F_i = \frac{dQ_i}{V_w} = \underbrace{\frac{1}{2}\rho V_w \frac{(1-a_i)^2}{\sin^2\phi_i}\left(C_{l_i}\sin\phi_i - C_{d_i}\cos\phi_i\right) c_i l_i}_{GY_i}$$




where $GY_i$ represents the modulus of the gyrator element associated with the $i$th section, $V_w$ is the wind velocity, $\rho$ is the air density, $\phi_i$ is the wind inflow angle, and $C_{l_i}$ and $C_{d_i}$ are the dimensionless lift and drag coefficients respectively. As the bond graph model in Fig. 5 shows, a modulated gyrator (MGY) element is employed for these force relations, because the wind (i.e. the MSf source) is transformed into an aerodynamic source (Se source).

Fig. 5. Bond graph of aerodynamic model

Tower Motion: The turbine structure with the thrust force acting on it is sketched in Fig. 6. The tower movement is assumed not to influence the mechanical system; it only affects its input. The dynamic equation of the tower motion can be formulated as Eq. (6).

Fig. 6. Sketch of wind turbine structure and simplified bond graph of tower motion


$$m_t \ddot{Z} = F_t - D_t \dot{Z} - K_t Z \tag{6}$$



where $m_t$ is the tower mass, $K_t$ is the tower stiffness, $F_t$ is the wind force acting on the tower and $D_t$ is the tower damping.

Drive Train: Drive train models range from one- to six-mass models. For simplicity, a 2D model is assumed to be sufficient, and deriving the governing equations for this model is straightforward. The model (Fig. 7) consists of six 1-junctions and three 0-junctions. The rotor speed is described by the 1-junction that is connected to the inertia of the rotor. The inertia of the rotor and that of the generator have different angular speeds due to the dynamics of the system.
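The tower motion of Eq. (6) is a single-degree-of-freedom oscillator and can be integrated directly. A minimal sketch, with illustrative parameter values (not the paper's) and the wind thrust held constant so that the deflection settles at the static value $F_t/K_t$:

```python
# Integrating m_t*Z'' = F_t - D_t*Z' - K_t*Z with placeholder parameters.
def tower_response(Ft=1.0e5, mt=2.0e5, Dt=4.0e4, Kt=2.0e6,
                   dt=1e-3, t_end=60.0):
    Z, Zdot = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        Zddot = (Ft - Dt * Zdot - Kt * Z) / mt
        Zdot += Zddot * dt   # semi-implicit Euler: update velocity first,
        Z += Zdot * dt       # then position, which keeps the oscillation stable
    return Z

Z = tower_response()   # settles near the static deflection Ft/Kt = 0.05 m
```

With the values above the damped oscillation decays within about a minute of simulated time, leaving the static wind-thrust deflection.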

Fig. 7. Sketch of wind turbine and its bond graph model

Fig. 8. Assemblage bond graphs of wind turbine



It is for this reason that the 0-junction transfers the same torque. The 1-junction that connects the resistive and compliance elements indicates the difference in rotational speed between the two inertias. It also indicates that the compliance and resistive elements have the same rotational speed, but different torques. The generator rotational speed is then represented by the last 1-junction, which is connected to the generator inertia.

Complete System Modelling: The individual models presented in the previous sections are assembled as shown in Fig. 8. The main and high-speed shafts are also added to the simulation model.
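The junction structure described above translates directly into a two-mass drive train model: the 1-junction shared by the compliance and resistance elements carries the speed difference, while the 0-junctions transmit the same shaft torque to both inertias. A sketch with hypothetical inertia, stiffness and damping values (the gearbox ratio is folded out for brevity):

```python
# Two-mass drive train: rotor inertia Jr and generator inertia Jg coupled
# by a shaft modelled as spring K plus damper D. All values are placeholders.
def drive_train(Tr=1000.0, Tg=1000.0, Jr=500.0, Jg=50.0,
                K=2.0e4, D=500.0, dt=1e-4, t_end=20.0):
    wr, wg, dtheta = 0.0, 0.0, 0.0        # rotor speed, generator speed, shaft twist
    for _ in range(int(t_end / dt)):
        Ts = K * dtheta + D * (wr - wg)   # shaft torque, same on both sides
        wr += (Tr - Ts) / Jr * dt         # rotor inertia, aerodynamic torque in
        wg += (Ts - Tg) / Jg * dt         # generator inertia, generator torque out
        dtheta += (wr - wg) * dt          # compliance integrates the speed difference
    return wr, wg, Ts

wr, wg, Ts = drive_train()
```

With equal driving and braking torques the torsional oscillation dies out, the two speeds converge, and the shaft torque settles at the transmitted value of 1000 N m.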

3 Simulation and Results

To simulate the blade model, the aerodynamic expressions given previously are placed within each MGY element, so that its constitutive relation is altered. These expressions represent an iterative process involving updates of the axial and tangential induction factors and the wind inflow angle. Figure 9 shows the code flow diagram, in which the connection between the blade and the hub is assumed to be rigid ($V_{bound} = 0$) and the flow is unsteady. The blade has only one rotational degree of freedom. For the simulation, NACA 4415 blade data are used [10]; the distribution of twist and chord along the length of the blade is shown in Fig. 10.

Fig. 9. Flowchart code based on BEM

To validate that the bond graph models give a similar interpretation of the system, the inputs are set to zero and an initial value is inserted for the hub rotational speed. The two models are considered equal when they respond with similar dynamic behavior at different rotational speeds. The plots for $\omega_H$, $\omega_{GB}$ and $\omega_G$ are shown in Fig. 11.



Fig. 10. Twist and chord along the blade

Fig. 11. Simulation with 20-sim


Validation with Literature

This approach validates the model using the parameters from [11], which also describes a horizontal axis wind turbine by a bond graph model. All the wind turbine properties used in the simulation are documented in Table 2. In addition, the model of Bakka and Karimi [11] is rebuilt in CAMPG. Both models are then simulated

Table 2. Wind turbine properties used in the simulation

Blade sections:
Section 1: r1 = 3.9 m, c1 = 1.56 m, bt1 = 7.65°, M1 = 928 kg, J1 = 26 kg m², I0 = 0, I1 = 0.00077, li = 7.8 m
Section 2: r2 = 11.7 m, c2 = 1.459 m, bt2 = 4.53°, M2 = 560 kg, J2 = 13.78 kg m², I1 = 0.00077, I2 = 0.0005, li = 7.8 m
Section 3: r3 = 19.5 m, c3 = 0.8315 m, bt3 = 0.72°, M3 = 207 kg, J3 = 1.66 kg m², I2 = 0.0005, I3 = 6.2e-5, li = 7.8 m

Wind turbine properties:
Rotor inertia: 5.91e7 kg m²; Generator inertia: 500 kg m²; Drive train stiffness: 8.74e8 N/rad; Drive train damping: 8.35e7 N/rad-s; Gear ratio: 97
R = 23.2 m, E = 1.71e10, B = 2, ρ = 1.225, μ = 0.01, JT = 1000 kg m²



in MATLAB using the ODE15s solver. Figure 12 shows the hub angular velocity with respect to time, where the discrepancy between the two models can be observed. The similar trend of both plots is used to validate the model.

Fig. 12. Comparison of hub ang. velocity - Proposed model versus Bakka and Karimi

4 Conclusion

In this work, a flexible wind turbine is modelled using the bond graph method. Flexibility was introduced into the blades, the shafts and the tower, and a complete model describing the behavior of all the essential elements of the system was obtained with less difficulty than with other methods. In order to apply the aerodynamic force, the blade structure is considered to be a flexible body. The nonlinear model of the wind turbine consists of the drive train, pitching system, tower and generator. Though modelling dynamic systems using the classical approach and the bond graph approach is quite different, it is observed through the validation process of this study that the model outcomes are expressed by similar governing equations. It is also observed that the bond graph method gives a better understanding of what actually happens in the system.

References

1. Karnopp, D.C., Margolis, D.L., Rosenberg, R.C.: System Dynamics: Modeling and Simulation of Mechatronic Systems, 5th edn. Wiley (2012)
2. Abdin, E.S., Xu, W.: Control design and dynamic performance analysis of a wind turbine induction generator unit. IEEE Trans. Energy Convers. (1998)
3. Zubia, I., Ostolaza, X., Tapia, G., Tapia, A., Saenz, S.R.: Electrical fault simulation and dynamic response of a wind farm. In: International Conference on Power and Energy Systems (2001)
4. Carrillo, C., Feijoo, A.E., Cidras, J., Gonzalez, J.: Power fluctuations in an isolated wind plant. IEEE Trans. Energy Convers. 19(1) (2004)
5. Muyeen, S.M., Ali, M.H., et al.: Comparative study on transient stability analysis of wind turbine generator system using different drive train models. IET Renew. Power Gener. 1(2) (2007)



6. Petru, T., Thiringer, T.: Modeling of wind turbines for power system studies. IEEE Trans. Power Syst. 17(4) (2002)
7. Lydia, M., Immanuel, A., et al.: Advanced algorithms for wind turbine power curve modeling. IEEE Trans. Sustain. Energy (2013)
8. Yi, G., Jonathan, K., Robert, G.P.: Dynamic analysis of wind turbine planetary gears using an extended harmonic balance approach. In: International Conference on Noise and Vibration Engineering, Leuven, Belgium, 17–19 September (2011)
9. Lubosny, Z.: Wind Turbine Operation in Electric Power Systems. Springer, Germany (2003)
10. Xing, Y.: An inertia-capacitance beam substructure formulation based on bond graph terminology with applications to rotating beam and wind turbine rotor blades. Ph.D. Dissertation, Norwegian University of Science and Technology (2010)
11. Bakka, T., Karimi, H.R.: Bond graph modeling and simulation of wind turbine systems. J. Mech. Sci. Technol. 27(6), 1843–1852 (2013)

On Opportunities and Limitations of Additive Manufacturing Technology for Industry 4.0 Era

Hirpa G. Lemu
University of Stavanger, Stavanger, Norway
[email protected]

Abstract. This article presents an analysis of the role of additive manufacturing within the vision of Industry 4.0. The key focus areas of Industry 4.0 and the relevance of additive manufacturing in this paradigm shift within the manufacturing industry are highlighted. Based on the available literature, some of the central limitations in the current development level of additive manufacturing technology for the production of functional parts, such as mechanical behavior, surface finish, geometrical accuracy and production rate, are studied and reported. The study shows that, though additive manufacturing represents one of the central paradigms of Industry 4.0, namely smart machines that are decentralized units with local control intelligence and strong communication with other devices, it still requires further research and development.

Keywords: Additive manufacturing · 3D printing · Industry 4.0 · Fused deposition modeling · Selective laser melting

1 Introduction

Nowadays, the concept of "Industry 4.0" is appearing in manufacturing terminology, supported by the rapid progress of digitalization in the manufacturing sector. The concept continues the preceding sequence of transformations in the industrial sector driven by mechanization (Industry 1.0), electric drive (Industry 2.0) and computer technology (Industry 3.0). In a similar manner, as has been exhibited in other sectors such as mobile communication, Information Technology (IT) systems, e-commerce (banking systems) and the like, the advances in digital technology are transforming the way enterprises add value to their products and increase productivity. This transformation to "Industry 4.0" is expected to drive innovations that are realized by interconnecting diverse elements of digital technology, such as autonomous machines, robots, sensors, IT systems etc., into the manufacturing value chain. A number of enabler technologies (Fig. 1) have also been identified that interact with each other and enable autonomous operation, collection and analysis of data to monitor and optimize system performance and predict potential failures [1]. As illustrated in Fig. 1, additive manufacturing (AM) is a key enabler technology in the "Industry 4.0" concept.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 106–113, 2019.

On Opportunities and Limitations of Additive Manufacturing Technology


Fig. 1. Key enablers of “Industry 4.0” concept

The aim of the study reported in this article is to analyze the role of additive manufacturing within the Industry 4.0 concept based on the available literature and case studies, and to provide some outlooks for future research. Following this introduction, the significance and relevance of AM technology in the Industry 4.0 concept is presented in Sect. 2. Section 3 describes AM technology with focus on fused deposition modelling and selective laser melting, technologies that are used for production of functional parts. After brief outlooks for future research and development in Sect. 4, conclusions are given in Sect. 5.

2 Significance and Relevance of AM to Industry 4.0

AM technology is nowadays gaining high popularity, partly because the number of low-cost 3D printers is increasing exponentially, and partly because 3D printing is being adopted not only by average consumers and small businesses, but also by big companies in aerospace and automotive, which see the technology as a potential game changer. The technology is one of the fastest growing industries and is expected to bring about a paradigm shift in the manufacturing industry with significant impacts on national and global economies. There are many indications that AM will change the way the manufacturing industry does business in the future. Beyond engineering applications, home applications such as food printing, printing of high-fashion dresses and the like are also reported [2]. AM is considered a cornerstone of the current industrial phase designated as "Industry 4.0", with digitization of manufacturing in parallel with automation, the Internet of Things, Big Data and the like. As part of the national goal to digitize Norway, for instance, a Top Industry Center ("Toppindustrisenteret") was established at national level early in 2017, with AM as one of the focus areas [3]. It is stated that AM will serve as a key disruptive technology in the transformation to the digitized economy of the country, designated as "Norway 6.0".


H. G. Lemu

It is also argued in the literature that the digitization of the manufacturing process realized by AM can bring about a new era with huge potential to revolutionize manufacturing. In particular, AM, as one of the key enablers of Industry 4.0, enables the production of high-value, complex and customized products [4]. It leads to reduction of time-to-market and of the cost of manufacturing. AM is also seen as a potential game changer in the maintenance, repair and overhaul areas [5]. It will play a key role and will have huge economic, geopolitical, environmental, intellectual property, legal and security implications, including [6, 7]:

– changes in the functioning of business models;
– decisions on not only what to produce, but also where to produce;
– simplification of the supply chain: digital files are transported;
– creation of a sustainable manufacturing process that is environmentally friendly.

The above listed implications mean that simplifying the supply chain and moving more production activities closer to the end-user reduces the environmental impact of manufacturing processes and therefore contributes to sustainable manufacturing. Furthermore, parts are produced directly from 3D CAD files, which strengthens the end-user's position to influence an eco-efficient approach to manufacturing, one of the key elements of the visions for "Industry 4.0". AM is eco-efficient because products and services are created using processes that are non-polluting and conserve energy and natural resources, and because it allows the user to bypass many of the service providers and part manufacturers. Such a production process can be economically viable, and safe and healthy for producers and the community at large.

3 Brief Description of the General AM Process

In the last three decades, diverse variants of AM technology have been dynamically developed, with a range of printers that can print different materials including metals. Almost all of the technologies share the common characteristic that they build the 3D object by depositing material layer-by-layer and binding the layers together. However, the technologies are categorized based on the type of material used and the way the materials are fused together. Figure 2 shows the general categories of the main technologies commercialized in the early years of the technology's development.

3.1 Overview of the Printing Process

The general principle of layer manufacturing means that the steps or procedures employed by AM machines are almost identical. In general, the following typical steps are followed by all AM machines (depicted in Fig. 2). AM-based production of parts starts from a software model that fully describes the external geometry, developed in a CAD solid modeling system. The geometry data, as a 3D solid or surface representation, is passed from the CAD environment to the AM machine in the STL or object (OBJ) file format. These file formats are today supported by most solid modeling tools and are commonly used interface formats with AM machines.



Fig. 2. Steps of converting a 3D CAD model to a 3D physical object in AM process

The STL file format is derived from the file format of the initial commercial RP technology, STereoLithography. STL also stands for Standard Tessellation Language and is currently used as an industry-standard format to export geometry data to 3D printers. It represents the 3D model using information about the coordinates and outward surface normals of triangles. The output to the AM machine environment is a boundary representation of the 3D object, approximated by a mesh of triangles, where the STL file can be output in either binary or ASCII (text) format. The OBJ file format is also used as an exchange format by many software programs as an alternative to STL, particularly when information about colors or materials is desirable, because the STL file format lacks the ability to define and transfer data about materials or microstructure volumetrically. The format has both an ASCII form (.obj) and a binary form (.mod). Recently, a new file format for AM, referred to as the additive manufacturing file (AMF) format, was designed, particularly to enable interfacing of multi-material printing capacities [8]. This file format is intended to replace the standard STL file format and to serve as a universal file for describing the shape and composition of any 3D object to be fabricated on any AM machine. Contrary to the STL file format, AMF has functionalities that support transfer of data about color, materials, lattices, etc.
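As an illustration of this triangle-based representation, a minimal ASCII STL file with a single facet can be written and parsed back as follows. This is a toy sketch, independent of any particular printer toolchain; the normal and vertex values are arbitrary:

```python
# Write a minimal ASCII STL file (one triangular facet) and parse it back.
# Each facet carries an outward normal and three vertices, as described above.
stl_text = """solid demo
facet normal 0.0 0.0 1.0
  outer loop
    vertex 0.0 0.0 0.0
    vertex 1.0 0.0 0.0
    vertex 0.0 1.0 0.0
  endloop
endfacet
endsolid demo
"""

with open("demo.stl", "w") as f:
    f.write(stl_text)

# Parse the vertices back: each triangle is three 'vertex x y z' lines
vertices = []
with open("demo.stl") as f:
    for line in f:
        parts = line.split()
        if parts and parts[0] == "vertex":
            vertices.append(tuple(float(p) for p in parts[1:4]))

print(len(vertices))  # → 3
```

A real slicer reads thousands of such facets and reconstructs the closed boundary mesh before slicing it into layers.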

3.2 Fused Deposition Modelling Technology

FDM technology is one of the technologies developed to transform layer manufacturing from prototyping to additive manufacturing of functional parts directly from a digital model in a CAD system. An FDM machine prints the part using diverse thermoplastic materials, such as ABS and nylon, in a deposition-based process where a heated jet deposits the melted plastic material layer-by-layer in a similar way as an inkjet printer. The nozzle of the print head is controlled by a motor and extrudes plastic filament that is rapidly cooled by the surrounding low-temperature air. Though the fabricated part uses only one material, FDM machines can extrude two materials: the model material and the support-structure material. The latter is required to provide a support structure for horizontally overhanging features of the component.



The affordability combined with high material-usage efficiency has put FDM at the forefront, with great potential in several industrial sectors [9], mould fabrication [10], design of bio-medical devices [11] and tissue engineering [12]. Though popular among the available technologies in its class, FDM as an AM process for functional part production is still far from mature. For instance, practical tests of printed parts in our 3DP Lab indicate that the dimensional accuracy and surface finish are unpredictable and that the mechanism for controlling them is not straightforward. A comparison of the printed part with the CAD model (Fig. 3) using GOM Inspect V8 SR1 software shows a clear variation of the printed surface from place to place. The surface quality of the printed parts depends on several parameters, including printing orientation. To see this variation, 3D printed parts were produced on a FORTUS 450 machine with both 90° and 45° orientations. The surface qualities were then measured with an Alicona InfiniteFocus instrument. The measured results are given in Table 1 for comparison. The sample results show that the surface quality varies in terms of all measured parameters. For most of the parameters, the 90° orientation gives better surface quality, while the flatness is better for the 45° orientation (Table 1).

Fig. 3. Illustration of the printing process in FDM

Table 1. Values of surface quality measured in two different print orientations

Parameter   45° [µm]   90° [µm]
Sa           15.389     14.113
Sq           20.339     18.290
Sv           76.964     86.888
Sz          135.438    140.677
Ssk          −0.370     −0.227
Sku           3.312      3.249
FLTt        135.438    140.677

Notations: Sa = average height; Sq = root-mean-square height; Sv = maximum valley depth; Sz = maximum height; Ssk = skewness; Sku = kurtosis; FLTt = flatness using least-squares reference of the selected area.
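The areal surface-texture parameters reported in Table 1 (Sa, Sq, Sv, Sz, Ssk, Sku) are defined from the height deviations of the measured surface. A minimal sketch of their computation from a height map, using synthetic random data purely for illustration (not the measured samples; FLTt, which needs a fitted reference plane, is omitted):

```python
import numpy as np

def areal_roughness(z):
    """Areal surface-texture parameters from a height map z.

    Heights are taken relative to the mean plane (approximated here by
    the mean height); values carry the units of z (e.g. micrometres)."""
    d = z - z.mean()               # deviation from the mean plane
    sa = np.mean(np.abs(d))        # Sa: arithmetic mean height
    sq = np.sqrt(np.mean(d**2))    # Sq: root-mean-square height
    sv = -d.min()                  # Sv: maximum valley depth
    sz = d.max() - d.min()         # Sz: maximum height (peak to valley)
    ssk = np.mean(d**3) / sq**3    # Ssk: skewness of the height distribution
    sku = np.mean(d**4) / sq**4    # Sku: kurtosis of the height distribution
    return dict(Sa=sa, Sq=sq, Sv=sv, Sz=sz, Ssk=ssk, Sku=sku)

# Illustrative height map (Gaussian noise, for demonstration only)
rng = np.random.default_rng(0)
z = rng.normal(0.0, 18.0, size=(256, 256))
params = areal_roughness(z)
print({k: round(v, 3) for k, v in params.items()})
```

For a Gaussian surface, Ssk is close to 0 and Sku close to 3; the measured values in Table 1 (negative Ssk, Sku slightly above 3) indicate a mildly valley-dominated height distribution.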



The printing accuracy (both dimensional and geometrical) and surface finish can, in general, be improved by reducing the layer thickness. However, a lower layer thickness increases the fabrication time, leading to higher production costs. Furthermore, the lower bound of the layer thickness is limited by the in-built machine parameters [13, 14]. In this regard, further studies are required to investigate the influence of material-, machine- and operation-specific parameters, including printing orientation and slicing software, on the achievable precision.

3.3 Selective Laser Melting

Selective laser melting (SLM) belongs to the selective laser sintering (SLS) family of powder-bed fusion technologies [15] and is considered a viable AM technology. Advances in laser technology have contributed to the transition from SLS to SLM, where the latter can fully melt metal powder into dense parts by exposing the powder to the laser beam so that it solidifies upon cooling. Compared with SLS, the SLM process is difficult to control [16], and hence so is the quality of the product. But when compared with conventional metal casting, SLM provides products with a finer microstructure due to the higher cooling rate [17]. A review of the literature also shows that SLM-based AM is currently a hot research issue in diverse directions. To mention a few, recent research has focused on printing parameters [18], laser scanning strategies [15], mechanical and thermal behaviour, surface chemistry and characterization of different metals. However, the mechanical behaviour under load and the process capability are not sufficiently investigated. For instance, comparative studies of fatigue strength and tribological behaviour as a function of print strategy and other machine parameters are attractive research areas. In particular, parts produced by SLM commonly experience residual stresses due to the combined effect of high-temperature forming and the need for a support structure. This influences the process efficiency because of the required post-processing, including heat treatment and post-machining.

4 Outlook for Future Research and Developments

AM is at an infant stage, and its application to manufacturing functional products needs further research and development. In this section, a couple of the forefront research challenges and opportunities for the mechanical industry are highlighted.

4.1 Design for AM or Design to Print: Procedures, Standards and Benchmarking the Process

In terms of the AM-based new manufacturing approach, there is a need to look closely at the design process, particularly the necessary design procedures and the specification of design intent. The real optimization of a design is believed to be achievable for the AM process because geometric complexity is no longer a constraint for the manufacturing process. However, there still exist outstanding challenges to incorporating some manufacturing constraints into the optimization process. Some obvious constraints



related to part features, such as minimum/maximum thickness and limitations on horizontal overhang in view of the need for a support structure, are accounted for in available commercial software such as NASTRAN [19] and OptiStruct [20]. Topology optimization as implemented in such software seeks to find an optimal distribution of material based on certain parameters, volume limitations, loads, and other boundary conditions.

4.2 Multi-materials Printing and Influence of Parameters

While multiple-material printing for rapid prototyping is widely in use, AM processes that can handle multiple materials are limited and process dependent. For instance, FDM and similar processes that build the part by extrusion and jetting can use several nozzles to deposit the molten polymer and hence are inherently suited to multiple-material printing. On the other hand, powder-bed and liquid-bed based technologies, such as SLM/SLS and SLA respectively, are less suitable for multi-material printing. Though multi-material printing is achievable, how to determine the optimal material combinations to maximize the mechanical properties and save material consumption remains a research challenge. AM technology emerged from key developments in other disciplines, such as laser technology, numerical control (NC) of machine tools, the physical chemistry of materials and CAD technology. Therefore, technological advances within CAD, NC, laser and material technology influence the advances in AM technology.

5 Conclusion

In this article, the potential role of additive manufacturing technology within the Industry 4.0 concept and its key limitations have been described. Among others, the relevance and significance of AM technology from the Industry 4.0 point of view has been reviewed. The accuracy and surface quality of parts produced by the FDM process have been closely studied by varying the printing orientation. The study showed that better roughness quality is obtained with the 90° printing orientation, while the 45° orientation gives better flatness.

Acknowledgement. The support of Adugna D. Akessaa in printing the test samples and of Bob van Beek from ST Instruments BV in testing the samples is highly appreciated.

References

1. Wits, W.W., García, J.R.R., Becker, J.M.J.: How additive manufacturing enables more sustainable end-user maintenance, repair and overhaul (MRO) strategies. Procedia CIRP 40, 693–698 (2016)
2. Sun, J., Peng, Z., Yan, L.K., Fuh, J.Y.H., Hong, G.S.: 3D food printing—an innovative way of mass customization in food fabrication. Int. J. Bioprinting 1(1), 27–38 (2015)
3. Digital Norway webpage: Accessed 30 May 2018



4. Stock, T., Seliger, G.: Opportunities of sustainable manufacturing in Industry 4.0. Procedia CIRP 40, 536–541 (2016)
5. Bourell, D.L., Leu, M.C., Rosen, D.W.: Roadmap for Additive Manufacturing: Identifying the Future of Freeform Processing. University of Texas, Austin, USA (2009)
6. Garett, B.: 3D printing: new economic paradigms and strategic shifts. Glob. Policy 5(1), 70–76 (2014)
7. Campbell, T.A., Thomas, A., Olga, S.I.: Additive manufacturing as a disruptive technology: implications of three-dimensional printing. Technol. Innov. 15(1), 67–79 (2013)
8. ISO/ASTM 52915-16: Standard Specification for Additive Manufacturing File Format (AMF) Version 1.2
9. Ravari, M.K., Kadkhodaei, M., Badrossamay, M., Rezaei, R.: Numerical investigation on mechanical properties of cellular lattice structures fabricated by fused deposition modelling. Int. J. Mech. Sci. 88, 154–161 (2014)
10. Boschetto, A., Giordano, V., Veniali, F.: Modelling micro geometrical profiles in fused deposition process. Int. J. Adv. Manuf. Technol. 61(9–12), 945–956 (2012)
11. Gu, P., Li, L.: Fabrication of biomedical prototypes with locally controlled properties using FDM. CIRP Ann. Manufact. Technol. 51(1), 181–184 (2002)
12. He, Y., Xue, G.-H., Fu, J.-Z.: Fabrication of low cost soft tissue prostheses with the desktop 3D printer. Sci. Rep. 4, Article 6973 (2014)
13. Jin, Y., He, Y., Fu, J.-Z.: Quantitative analysis of surface profile in fused deposition modelling. Addit. Manufact. 8, 142–148 (2015)
14. Huang, T., Wang, S., He, K.: Quality control for fused deposition modeling based additive manufacturing: current research and future trends. In: The First International Conference on Reliable System Engineering, RP0266 (2015)
15. Parry, L., Ashcroft, I.A., Wildman, R.D.: Understanding the effect of laser scan strategy on residual stress in selective laser melting through thermo-mechanical simulation. Addit. Manufact. 12(A), 1–15 (2016)
16. Simonelli, M., Tse, Y., Tuck, C.: On the texture formation of selective laser melted Ti-6Al-4V. Metall. Mater. Trans. A Phys. Metall. Mater. Sci. 45, 2863–2872 (2014)
17. Jung, H.Y., Choi, S.J., et al.: Fabrication of Fe-based bulk metallic glass by selective laser melting: a parameter study. Mater. Des. 86, 703–708 (2015)
18. Fraunhofer ILT home page: Accessed 30 May 2018
19. MSC Nastran FE solver by MSC Software Corporation, 2 MacArthur Place, Santa Ana, California. Accessed 15 June 2018
20. Altair OptiStruct software. Accessed 30 May 2018

Operator 4.0 – Emerging Job Categories in Manufacturing

Harald Rødseth1, Ragnhild Eleftheriadis2, Eirin Lodgaard2, and Jon Martin Fordal1

1 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
{Harald.Rodseth,jon.m.fordal}
2 SINTEF Manufacturing AS, Raufoss, Norway
{ragnhild.eleftheriadis,eirin.lodgaard}

Abstract. With the trends of Industry 4.0 and the increased degree of digitalization in production plants, it is expected that future production plants will be much more adaptive, able both to self-optimize production parameters and to self-maintain standard activities. Although this will reduce manual operations, new work activities are expected in a cyber-physical production plant. For instance, the establishment of digital twins in cloud solutions enabled by the Internet of Things (IoT) can result in new crafts in maintenance analytics as well as more guided maintenance for the maintenance operator through augmented reality. In addition, more services from external personnel, such as the machine builder, are expected to be offered in Industry 4.0. Overall, it is of interest to identify and recommend qualification criteria relevant for a cyber-physical production plant that can be implemented in the organisation. The aim of this article is to evaluate the role of the operator as well as other relevant job categories in a cyber-physical production plant. The result of this paper is a recommended framework with qualification criteria for these job categories. Further research will require more case studies of this framework.

Keywords: Operator 4.0 · Shop floor operator · Maintenance personnel

1 Introduction

Sustaining competitiveness supported by information and communication technology is an important need in European manufacturing [10]. Several architectures have been developed for cyber-physical systems (CPS) that require human interaction from an operator. An example of such architectures is the 5C architecture, which integrates the sensor monitoring systems with the decision-making systems in the organisation [11]. This architecture demonstrates well-defined human interaction at the cognition level, where e.g. an online prognostics and health management platform supports the evaluation of machine degradation through a visual interface. Still, a more detailed description is needed to clarify what role the operator will have in such an architecture. From a cultural perspective, it at least requires both a willingness to change and a more open

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 114–121, 2019.



communication to succeed with the ground-breaking technologies offered by Industry 4.0 [19]. In Norwegian manufacturing, the research project CPS Plant aims to develop a framework for the Norwegian approach to the digital manufacturing industry based on the breakthrough technologies of Industry 4.0. Due to the increased automation in the operation of a CPS, it should be expected that the share of manual operations for the operator will be reduced, as manual operations are replaced with assisted operations and the automation level increases [16]. Although several technological solutions are offered for operators [18], it is also important to consider the future needs of the operator [7] as well as to establish a development path for Operator 4.0 based on Industry 4.0 principles [19]. The aim of this article is to investigate the role the operator will have in an Industry 4.0 environment and to propose a recommended framework of qualification criteria for this operator. The remainder of this article is structured as follows: Sect. 2 presents the opportunities with assistance systems for the future operator, whereas Sect. 3 proposes a framework for evaluating relevant criteria for this operator. Section 4 provides concluding remarks.

2 New Opportunities with Assistance Systems

The benefit of CPS is the improved decision support for operators. For example, the 5C architecture provides decision support in terms of visualization of degradation and "digital advice" in maintenance scheduling [12]. From an operator perspective, this interface is also denoted a decision support system (DSS) or assistance system and can have different modules, such as production status evaluation, adaptive decision logic, dynamic resource position detection and an operator device [9]. Figure 1 illustrates the cooperation between the operator and the assistance system [15]. Instead of manually collecting and analyzing the information with the help of the existing systems, the operator can now use the assistance system and carry out the production control partly automatically, with support from the DSS.
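The module structure described above can be sketched as a simple pipeline. The module names follow [9], but all sensor names, thresholds and decision logic below are hypothetical placeholders, not taken from any cited system:

```python
def production_status(sensors):
    """Production status evaluation: summarize the current state (toy logic)."""
    return {"backlog": sensors["queue_len"], "machine_ok": sensors["temp"] < 70}

def decision_logic(status):
    """Adaptive decision logic: propose an action from the evaluated status."""
    if not status["machine_ok"]:
        return "schedule maintenance check"
    return "increase feed rate" if status["backlog"] > 10 else "hold current plan"

def operator_device(advice):
    """Operator device: present the advice; the operator stays in the loop."""
    return f"ADVICE: {advice} (confirm or override)"

# Partly automatic production control with the operator making the final call
msg = operator_device(decision_logic(production_status({"queue_len": 14, "temp": 62})))
print(msg)  # → ADVICE: increase feed rate (confirm or override)
```

The point of the design is that the system proposes and the human disposes: the operator device never executes the action itself, mirroring the human-in-the-loop cooperation shown in Fig. 1.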

Fig. 1. Human-machine interface for decision making [15].


H. Rødseth et al.

3 Framework for Evaluating Operator 4.0

Figure 2 illustrates the recommended framework for qualification criteria of Operator 4.0. The aim of this framework is to ensure that the needs of future industry are met with the breakthrough technology offered by Industry 4.0 and that the organization ensures a ramp-up for this job category.

[Figure 2 shows three steps: Step 1: Identify stakeholders needs → Step 2: Identify technology → Step 3: Ramp up Operator 4.0]

Fig. 2. Framework for qualification criteria for Operator 4.0

Step 1: Identify Stakeholders Needs

This step aims to understand the needs of relevant stakeholders. A classical definition of a stakeholder is "…any group or individual who can affect or is affected by the achievement of the organisation's objectives" [4]. For manufacturing companies that map out a strategy for Operator 4.0 formalized in their own organisation's objectives, this broad definition will include a wide-ranging network comprising, among others, technology providers for Industry 4.0 as well as the users of the technology in the manufacturing environment. This article includes both existing and future employees in manufacturing when understanding their needs. In Swedish manufacturing, several surveys have been conducted to investigate the future demands on operators in manufacturing [7, 8]. These surveys describe the future shop-floor operator as involved in a self-controlled team with a high level of knowledge and dealing with increasingly extensive tasks. Even more, the results from these surveys clearly show that the future demands on operators include the ability to be innovative, creative and to get things done [7]. In addition, the need for updated IT-knowledge is critical, where an answer from a high-school student pinpoints the current gap in industry [8]: "IT-knowledge is as important during your work as in everyday life, though industry is possibly a little bit behind." Also in Norway, there are concrete initiatives to investigate the future role of the operator. An ongoing Norwegian project named "Skills" [20] gives some indications of how operator and core skills may develop: in future working life, besides the possibility to learn and develop, one has to master ICT as well as cultural and language understanding. To contribute to improvement work and innovation, future operators need communication and responsibility competencies, and it is crucial for skilled workers to understand the whole picture of value creation and the value chain, so that they can see the context of production processes and participate in optimizing production lines. To broaden the view of Operator 4.0, the competence needed by a maintenance technician specialist should be included as well, due to their involvement with the equipment being operated. The existing required competence for this category has been



identified in the standard EN 15628 for qualification of maintenance personnel [2]. Overall, the existing competence of the maintenance technician specialist includes independent performance of maintenance assignments. Further, this standard addresses the use of ICT systems as one key competence. When considering "maintenance employees", shop floor operators should be included in this category. For instance, the key innovation of the classical maintenance concept total productive maintenance (TPM) is that operators execute basic maintenance tasks on their own equipment [14]. This also seems to be the situation in Swedish industry, where the scope was not limited to assembly and machining tasks but also included preventive maintenance [8]. Based on contributions from maintenance experts in the Swedish manufacturing industry, a scenario for the maintenance function in 2030 was developed [1]. In this scenario, it is specified that maintenance employees will have a new digital competence as well as social competence. Table 1 summarizes the stakeholders needs identified in this article.

Table 1. Stakeholders needs representing future roles for Operator 4.0

Norwegian operators:
– Master ICT skills
– Skills in communication and responsibility
– Understand the whole picture for the value chain

Swedish shop-floor operators:
– Self-controlled team
– Ability to be innovative
– Updated IT-knowledge

Swedish maintenance employees:
– Digital competence, e.g. data analytics
– Social competence, e.g. interdisciplinary collaboration
– Continuous education and training
– Individuals have responsibility, authority, and autonomy

Step 2: Identify Industry 4.0 Technology for Cognition Level

This step aims to identify specific technologies based on the Industry 4.0 principles. Several studies have identified relevant technologies for Industry 4.0. An Operator 4.0 typology presented several technologies that would serve different roles for the operator [18]. The purpose of the technology is to improve production performance (e.g. reduced throughput time, reduced downtime and scrappage) where the operator interacts with the cyber-physical system. As an example, this interaction can be supported by the use of sensors, as they have been acknowledged for increasing built-in intelligence by providing information on the parameters being measured and identifying control states [22]. Thus, live sensor information with adjustable alert limits and a direct connection to the operator's smartphone/tablet is one possible technology. Overall, several categories of available technologies should be identified:

• Real-time feedback systems [13].
• Augmented reality with smartphones, tablets, and smartglasses [5].
• Personal digital assistants with speech recognition [1, 5, 13].
• Warning message systems for improper operation [13].


• Cobots: robots cooperating with shop-floor operators without fences between them [7].
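One of the technologies named above, live sensor information with adjustable alert limits, can be sketched minimally as follows. The sensor name and limit values are hypothetical; a real deployment would push the resulting message to the operator's smartphone or tablet:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertLimit:
    """One monitored parameter with operator-adjustable alert limits."""
    name: str
    low: float
    high: float

    def check(self, value: float) -> Optional[str]:
        """Return an alert message if the reading is outside the limits."""
        if value < self.low:
            return f"{self.name}: {value} below limit {self.low}"
        if value > self.high:
            return f"{self.name}: {value} above limit {self.high}"
        return None

# The operator adjusts limits at run time; out-of-range readings trigger alerts
limits = {"spindle_temp": AlertLimit("spindle_temp", 15.0, 70.0)}
limits["spindle_temp"].high = 65.0          # operator tightens the upper limit
alert = limits["spindle_temp"].check(68.2)
print(alert)  # → spindle_temp: 68.2 above limit 65.0
```

Keeping the limits adjustable, rather than hard-coded, is what puts the operator in control of the cyber-physical system rather than the other way around.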

Step 3: Ramp up of Operator 4.0

This last step aims to ramp up the role of Operator 4.0 so that it can be implemented in the organization. It is well known that new frameworks in technologies can be described as double S-curves (innovation steps). Figure 3 illustrates an example of such a learning curve, formed as a double S-curve, in strategic development in manufacturing [21].

Fig. 3. Visualization of a learning S-curve in strategic development in manufacturing [21].

To bring open innovation and self-learning of the employees a step further, the S-curve should be applied in the organisation. In the quality area, this can lead to new ways of using statistical and quantitative models for development, acquisition and decision making, which, by use of algorithms and reliable measurement systems, can be operated, corrected and assisted for further improvement and implemented rather fast in a production facility [3]. However, this depends strongly on the culture for organizational change and on management skills, which differ considerably from one country to another [6], so an implementation plan for the Industry 4.0 operator will depend much on the leadership in the organisation, the maturity development and the use of operator skills. A recommended approach for the leadership facing this challenge is to follow the feedback learning curve [17]. Figure 4 illustrates this learning curve, where the leader must facilitate both a learning stage towards acceptance in the organisation and an enlightenment stage in order to commit resources for actions in ramping up Operator 4.0 in the organisation.



Fig. 4. Feedback learning curve, adapted from [17].

4 Concluding Remarks and Future Outlook

This article has proposed a framework with qualification criteria relevant for Operator 4.0. The framework consists of three steps: (1) the stakeholders needs are identified, (2) relevant technology is identified, and (3) Operator 4.0 is ramped up in the organization. For step 1 in the framework, it is concluded that more studies are necessary for identifying relevant stakeholders. In this article, both existing and future employees were identified as relevant stakeholders. In future research, the specification of the stakeholders must be further elaborated. For the maintenance employees, this specification has already started in EN 15628 for qualification of maintenance personnel. Also for step 1, it is concluded that it is not straightforward to generalize the stakeholders needs across all countries and industry branches. Nevertheless, some learning should be expected between countries. For example, the experiences regarding future maintenance employees in Sweden have been shared in Norway, where it is expected that the Norwegian maintenance society can learn from them. For step 2, it is concluded that the technologies should be elaborated in more detail. For example, the technology readiness level (TRL) for the company should be specified in more detail. For step 3, it is concluded that different types of learning curves should be applied when ramping up Operator 4.0, in order to meet future needs regarding the evolution of a high knowledge level and extensive tasks. Overall, it is concluded that the framework proposed in this article should be developed further. Further research on this framework will require more testing in the Norwegian project CPS Plant as well as relevant case studies in Norway.

Acknowledgement. The authors wish to thank the research project CPS Plant for valuable input. The Research Council of Norway is funding CPS Plant.


H. Rødseth et al.

References

1. Bokrantz, J., Skoogh, A., Berlin, C., Stahre, J.: Maintenance in digitalised manufacturing: Delphi-based scenarios for 2030. Int. J. Prod. Econ. 191, 154–169 (2017). https://doi.org/10.1016/j.ijpe.2017.06.010
2. CEN: EN 15628: Maintenance – qualification of maintenance personnel (2014)
3. Eleftheriadis, R.J., Myklebust, O.: Industry 4.0 and cyber physical systems in a Norwegian industrial context. In: Advanced Manufacturing and Automation VII, pp. 491–499. Springer, Singapore (2018)
4. Freeman, R.E.: Strategic Management: A Stakeholder Approach. Pitman Series in Business and Public Policy. Pitman, Boston (1984)
5. Gorecky, D., Schmitt, M., Loskyll, M., Zühlke, D.: Human-machine-interaction in the Industry 4.0 era. In: Proceedings of the 12th IEEE International Conference on Industrial Informatics (INDIN 2014), pp. 289–294 (2014)
6. Hofstede, G.: Cultural dimensions in management and planning. Asia Pacific J. Manag. 1(2), 81–99 (1984)
7. Holm, M.: The future shop-floor operators, demands, requirements and interpretations. J. Manufact. Syst. 47, 35–42 (2018)
8. Holm, M., Adamson, G., Moore, P., Wang, L.: Why I want to be a future Swedish shop-floor operator. Procedia CIRP 41, 1101–1106 (2016)
9. Holm, M., Garcia, A.C., Adamson, G., Wang, L.: Adaptive decision support for shop-floor operators in automotive industry. Procedia CIRP, pp. 440–445 (2014). https://doi.org/10.1016/j.procir.2014.01.085
10. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the strategic initiative Industrie 4.0 (2013)
11. Lee, J., Bagheri, B., Kao, H.A.: A cyber-physical systems architecture for Industry 4.0-based manufacturing systems. Manufact. Lett. 3, 18–23 (2015). https://doi.org/10.1016/j.mfglet.2014.12.001
12. Lee, J., Jin, C., Bagheri, B.: Cyber physical systems for predictive production systems. Prod. Eng. Res. Dev. 11(2), 155–165 (2017)
13. Longo, F., Nicoletti, L., Padovano, A.: Smart operators in Industry 4.0: a human-centered approach to enhance operators' capabilities and competencies within the new smart factory context. Comput. Ind. Eng. 113(Supplement C), 144–159 (2017). https://doi.org/10.1016/j.cie.2017.09.016
14. Nakajima, S.: TPM Development Program: Implementing Total Productive Maintenance. Productivity Press, Cambridge (1989)
15. Nelles, J., Kuz, S., Mertens, A., Schlick, C.M.: Human-centered design of assistance systems for production planning and control: the role of the human in Industry 4.0. In: Proceedings of the IEEE International Conference on Industrial Technology, pp. 2099–2104 (2016)
16. Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 30(3), 286–297 (2000)
17. Parker, D.W.: Service Operations Management: The Total Experience. Edward Elgar Publishing Limited, Cheltenham (2012)
18. Romero, D., et al.: Towards an Operator 4.0 typology: a human-centric perspective on the fourth industrial revolution technologies. In: CIE 2016: 46th International Conference on Computers and Industrial Engineering (2016)



19. Schuh, G., Anderl, R., Gausemeier, J., ten Hompel, M., Wahlster, W.: Industrie 4.0 Maturity Index. Managing the Digital Transformation of Companies (acatech STUDY). Herbert Utz Verlag, Munich (2017)
20. Solem, A., Buvik, M.P., Finnestrand, H.G., Landmark, A.D., Magerøy, K., Ravn, J.E.: Fagarbeiderkompetanse. Kartlegging av dagens og fremtidens kompetansebehov i fagarbeiderrollen, i industri og bygg og anlegg [Skilled worker competence. Mapping of current and future competence needs in the skilled worker role, in industry and construction]. SINTEF Teknologi og samfunn (2016)
21. Westkämper, E.: Strategic development of factories under the influence of emergent technologies. CIRP Ann. 56(1), 419–422 (2007)
22. Yong, Z., Yikang, G., Vlatkovic, V., Xiaojuan, W.: Progress of smart sensor and smart sensor networks. In: Fifth World Congress on Intelligent Control and Automation (IEEE Cat. No. 04EX788), 15–19 June 2004, pp. 3600–3606 (2004)

Reliability Analysis of Centrifugal Pump Based on Small Sample Data

Hongfei Zhu1, Junfeng Pei1, Siyu Wang1(&), Jianjie Di2, and Xianru Huang2

1 School of Mechanical Engineering, Changzhou University, Changzhou 213016, Jiangsu, China
[email protected]
2 China Petroleum and Chemical Corporation Jinling Branch, Nanjing 210033, Jiangsu, China

Abstract. In the past, reliability analysis of centrifugal pumps has usually been performed by collecting a large amount of maintenance data. For failures of centrifugal pumps that are not very frequent, however, the corresponding maintenance data are also scarce. This paper proposes a reliability analysis method suited to the situation where a centrifugal pump has few maintenance data. The method uses the least squares method to estimate the Weibull distribution parameters from the small sample data; it then uses Monte Carlo sampling to expand the sample capacity, and the Weibull distribution parameters are re-estimated from the expanded samples. Finally, the reliability indices of the centrifugal pump are calculated and the reliability behaviour of the centrifugal pump under small sample data is predicted.

Keywords: Reliability · Least squares method · Monte Carlo method · Weibull distribution · Small sample data

1 Introduction

Reliability-centered maintenance technology is increasingly being used in all walks of life in China. It can effectively improve the modernization level of equipment management in China and achieve significant social and economic benefits. Centrifugal pumps are widely used in industrial production and occupy an important position. The operational reliability of centrifugal pumps directly influences whether production can be carried out, and it is related to safe production and the economic benefits of enterprises. Reliability analysis of a centrifugal pump usually requires a large amount of maintenance record data as the research basis, but in practice such maintenance records are often insufficient, so it is necessary to study reliability analysis methods for centrifugal pumps with little maintenance data. On this basis, this article proposes a reliability analysis method based on small sample data.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 122–130, 2019.

Reliability Analysis of Centrifugal Pump


2 Research Methods

The research method can be divided into three parts [1]:
(1) Determine the life distribution of the centrifugal pump and test the distribution for goodness of fit.
(2) Fit the distribution of the small sample data. The least squares method is used to obtain estimates of the two Weibull distribution parameters, the shape parameter m and the scale parameter η.
(3) Using the estimated shape parameter m and scale parameter η, draw Monte Carlo samples by the inverse function method, and estimate the two parameters of each resulting sample subset again by the least squares method. The mean time to failure MTTF and the reliability of the centrifugal pump can then be calculated from the estimated parameters.

2.1 K-S Test

For the collected data, a goodness-of-fit test is needed to determine which distribution the statistic obeys. The K-S test does not require grouping the collected data, is convenient to use, and loses no information contained in the collected data; it is an effective test method [2]. Assume that the number of samples is n. Following the construction of an empirical distribution function, formula (1) gives the piecewise cumulative frequency:

        ⎧ 0,    x < x1
Fn(x) = ⎨ i/n,  xi ≤ x < xi+1      (1)
        ⎩ 1,    x ≥ xn

where x1, x2, …, xn are the sample data after sorting and Fn(x) is the resulting step curve. Over the full range of the random variable X, the largest difference between Fn(x) and FX(x) is:

Dn = sup_{−∞ < x < +∞} |F(x) − Fn(x)| < Dn,α      (2)

where Dn is a random variable whose distribution depends on the number of samples n, and Dn,α is the critical value at the significance level α. If inequality (2) holds, the hypothesized distribution cannot be rejected at the significance level α; otherwise it should be rejected.
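As an illustration only (not code from the paper), the K-S procedure above can be sketched in Python with SciPy; `scipy.stats.kstest` computes the statistic Dn and a p-value directly, with no grouping of the data. The sample values and parameters below are synthetic placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "running time" data drawn from a Weibull distribution
# (shape m = 1.2, scale eta = 3300 h) -- illustrative values only.
m_true, eta_true = 1.2, 3300.0
times = eta_true * rng.weibull(m_true, size=16)

# K-S goodness-of-fit test against the hypothesized Weibull CDF.
d_n, p_value = stats.kstest(times, stats.weibull_min(c=m_true, scale=eta_true).cdf)

# The hypothesis is not rejected when the p-value exceeds the
# significance level alpha (equivalently, Dn < Dn,alpha).
alpha = 0.05
print(f"D_n = {d_n:.4f}, p = {p_value:.4f}, reject = {p_value < alpha}")
```

In practice the distribution with the highest hypothesis probability among the candidates (exponential, Weibull, normal) would be retained, as done in Sect. 3.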

2.2 Least Squares Method for Parameter Estimation

The least squares method is a curve-fitting method; it is one of the mathematical optimization techniques, finding the best-fitting function for the data by minimizing the squared error [3]. Assuming there are n sets of experimental data, n scatter points (xi, yi) can be plotted in the x0y coordinate system. If these points all fall near a straight line, the relation between x and y is close to the straight line y = ax + b, where a and b are undetermined constants obtained from the data. This method is the least squares method [4]. This paper combines median ranks with the least squares method to estimate the Weibull distribution parameters.

The two-parameter Weibull distribution function is:

F(t) = 1 − e^(−(t/η)^m),  m, η > 0      (3)

Rearranging (3) and taking the logarithm twice:

ln ln [1/(1 − F(t))] = m ln t − ln t0      (4)

Let Fi be the median rank at time ti [5]:

Fi = (i − 0.3)/(N + 0.4)      (5)

Then formula (4) can be written as:

ln ln [1/(1 − Fi)] = m ln ti − ln t0      (6)

Let y = ln ln [1/(1 − Fi)], x = ln t and b0 = −ln t0. This finally gives the regression equation:

y = b1 x + b0      (7)

From the observed data {xi = ln ti, yi = ln ln [1/(1 − Fi)], i = 1, 2, …, n}, linear regression of yi on xi yields b0 and b1, so that t0 = e^(−b0). The two Weibull parameters then follow.

Shape parameter m:

m = b1      (8)




Scale parameter η:

η = t0^(1/m)      (9)
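The median-rank least-squares procedure of Eqs. (4)–(9) can be sketched as follows (an illustrative Python sketch, not code from the paper; the function and variable names are the author's own, and the check data are synthetic):

```python
import numpy as np

def fit_weibull_lsq(times):
    """Estimate Weibull shape m and scale eta by median-rank least squares.

    y = ln ln[1/(1-F_i)] is regressed on x = ln t, with
    F_i = (i - 0.3)/(N + 0.4) the median rank of the sorted times.
    """
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    i = np.arange(1, n + 1)
    f_i = (i - 0.3) / (n + 0.4)          # median ranks, Eq. (5)
    x = np.log(t)
    y = np.log(np.log(1.0 / (1.0 - f_i)))
    b1, b0 = np.polyfit(x, y, 1)         # linear regression, Eq. (7)
    m = b1                               # shape parameter, Eq. (8)
    eta = np.exp(-b0) ** (1.0 / m)       # t0 = e^(-b0), eta = t0^(1/m), Eq. (9)
    return m, eta

# Quick check on synthetic data with known parameters.
rng = np.random.default_rng(1)
sample = 3000.0 * rng.weibull(1.5, size=200)
m_hat, eta_hat = fit_weibull_lsq(sample)
print(m_hat, eta_hat)
```

With a large enough sample the recovered (m, eta) should lie close to the generating values (1.5, 3000).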


2.3 Monte Carlo Sampling

The Monte Carlo method estimates data by sampling from the probability distribution, and statistical methods are used to obtain the reliability index [6]. The general idea is to establish a probability model, carry out numerical experiments to obtain sample values, apply statistical methods to test the data, and finally use the results as solutions to the engineering problem [7]. The estimated Weibull distribution parameters m and η are taken as known values and substituted into the sampling formula, producing 5 groups of regenerated samples with a sample size of 16 in each group. Sampling first generates a pseudo-random array e[n] uniformly distributed over the interval (0, 1); the inverse function method then yields random variables obeying the Weibull distribution. The sampling formula [8] is:

t[n] = η(−ln e[n])^(1/m)      (12)

The sampled times t[n] are sorted from smallest to largest and compared with the cutoff time of the original no-failure data (the longest running time in each set of no-failure running times is taken as the cutoff). If an individual sampled value is greater than the cutoff time, that value is replaced by the cutoff time. In this way, small-sample sampling data for the centrifugal pump are obtained.
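A minimal sketch of this inverse-transform sampling with the cutoff substitution (illustrative Python, not code from the paper; the function name and seed are the author's own):

```python
import numpy as np

def weibull_mc_samples(m, eta, t_cut, n_samples, n_groups, seed=0):
    """Draw Monte Carlo Weibull samples per Eq. (12), truncated at the cutoff.

    t[n] = eta * (-ln e[n])**(1/m), with e[n] uniform on (0, 1);
    any sampled time exceeding the cutoff t_cut is replaced by t_cut.
    """
    rng = np.random.default_rng(seed)
    e = rng.uniform(0.0, 1.0, size=(n_groups, n_samples))
    t = eta * (-np.log(e)) ** (1.0 / m)   # inverse-transform sampling
    t = np.minimum(t, t_cut)              # censor at the cutoff time
    return np.sort(t, axis=1)             # sort each group ascending

# Parameter values estimated in Sect. 3: m = 0.9274, eta = 3777.4, Ts = 7989 h.
groups = weibull_mc_samples(m=0.9274, eta=3777.4, t_cut=7989.0,
                            n_samples=16, n_groups=5)
print(groups.shape)
```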

3 Application Examples

The no-failure running times of the shallow-reduced three-line oil centrifugal pump in a petrochemical company's No. III atmospheric and vacuum distillation unit are shown in Table 1. The last running time is selected as the cutoff time, Ts = 7989 h.

Table 1. No-failure running time

No.  Running time/h    No.  Running time/h
1    107               9    3824
2    448               10   4304
3    520               11   4642
4    615               12   4743
5    1267              13   5266
6    1809              14   5394
7    3017              15   5459
8    3304              16   7989



The K-S test was applied to this group of no-failure running times, and the goodness-of-fit test was performed for the exponential, Weibull and normal distributions, respectively. The test results are shown in Table 2.

Table 2. Distribution fitting results

Distribution type         Hypothesis probability  Test statistic  Statistical threshold  Conclusion
Exponential distribution  –                       –               –                      –
Weibull distribution      0.3413                  0.2248          0.3273                 Obey
Normal distribution       –                       –               –                      –
From the results in Table 2, the hypothesis probability of the Weibull distribution for the no-failure running time of the centrifugal pump is the highest, greater than those under the exponential and normal distributions, and the Weibull test statistic is smaller than the statistical threshold. The no-failure running time of the centrifugal pump therefore obeys the Weibull distribution. Estimating the two Weibull parameters by the least squares method from the no-failure running times gives the regression y = b1x + b0 with b1 = 0.9274 and b0 = −7.6388. The Weibull parameter estimates obtained from formulas (8) and (9) are m = 0.9274 and η = 3777.4 (Fig. 1).

Fig. 1. Least squares fit image
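The parameter values above can be checked directly from Eqs. (8)–(9) (a quick verification sketch, not from the paper):

```python
import math

# Regression coefficients reported for the no-failure running times.
b1, b0 = 0.9274, -7.6388

m = b1                              # shape parameter, Eq. (8)
t0 = math.exp(-b0)                  # t0 = e^(-b0)
eta = t0 ** (1.0 / m)               # scale parameter, Eq. (9)
print(round(m, 4), round(eta, 1))   # close to the paper's 0.9274 and 3777.4
```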

The estimated shape parameter m and scale parameter η were substituted into formula (12) for sampling. Five groups of regenerated samples, each with a sample size of 16, were generated, as shown in Table 3.



Table 3. Sampled data

t1/h  t2/h  t3/h  t4/h  t5/h
13    1170  196   490   11
327   1307  950   1109  97
700   1842  1061  1110  105
706   2010  1193  1145  288
873   2389  1249  1251  548
1689  2805  1281  1323  613
1818  3158  1484  1493  971
4630  3223  1856  1622  973
6293  3477  2150  1772  1957
7288  3803  2961  2019  2537
7321  4229  3291  2209  2878
7989  5076  3860  2765  4397
7989  5352  4658  3927  4619
7989  6341  5092  4318  7236
7989  7989  7989  4796  7859
7989  7989  7989  6016  7989

Under this sampling, each group of data corresponds to a unique shape parameter m and scale parameter η. The least squares method is applied once more to estimate the parameters within each group of samples; for example, sampled data t1 can be used to estimate the shape parameter m and scale parameter η of that group, and likewise for t1 through t5. The weighted average of the five groups of parameters is then taken as the final shape parameter m and scale parameter η, and the correlation coefficient method is used to estimate the position parameter d of each group. The parameter estimates for each group are given in Table 4.

Table 4. Weibull three-parameter values

Sampling group  Shape parameter m  Scale parameter η  Position parameter d
t1              0.616              5261.6             596.1
t2              1.9437             4404.5             1167.8
t3              1.1905             3278.2             172.2
t4              1.6529             2642.1             467.9
t5              0.6144             2458.5             169.8

The weighted averages of Weibull's three parameters are calculated as the final estimates (Table 5). With Weibull's three parameters estimated from the small sample, the three parameters are substituted into the reliability calculation formulas under the Weibull distribution, and the failure probability density, reliability and failure rate of the centrifugal pump are then calculated.


Table 5. Weibull parameter average estimates

Shape parameter m  Scale parameter η  Position parameter d
1.1905             3278.2             514.76

(1) Failure probability density f(t) (Fig. 2):

f(t) = (1.1905/3278.2) · ((t − 514.76)/3278.2)^0.1905 · e^(−((t − 514.76)/3278.2)^1.1905)      (13)


From the failure probability density curve of the centrifugal pump, the extreme point of the curve is at 1387 h (58 days), indicating that most centrifugal pumps may require a certain degree of overhaul after operating smoothly for 1387 h.

Fig. 2. Centrifugal pump failure probability density curve

(2) Reliability R(t) (Fig. 3):

R(t) = e^(−((t − 514.76)/3278.2)^1.1905)      (14)

From the reliability curve of the centrifugal pump, the reliability curve fitted by the three-parameter Weibull distribution is similar to the fitting result of the source-data probability distribution, and it expresses the failure trend of the centrifugal pump.

(3) Failure rate λ(t) (Fig. 4):

λ(t) = (1.1905/3278.2) · ((t − 514.76)/3278.2)^0.1905      (15)




Fig. 3. Centrifugal pump reliability curve

Fig. 4. Centrifugal pump failure rate curve

From the failure rate curve of the centrifugal pump, it can be found that the failure rate gradually increases with the running time of the centrifugal pump. The calculated failure rate function is consistent with this rising trend.

(4) Average life of the centrifugal pump, MTBF:

MTBF = ∫₀^∞ t·f(t) dt = η · Γ(1/m + 1) = 3278.2 × Γ(1.8403) = 3090.1 h      (16)
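Formula (16) can be reproduced with the gamma function (an illustrative check, not from the paper):

```python
import math

m, eta = 1.1905, 3278.2          # Weibull shape and scale from Table 5

# MTBF = eta * Gamma(1/m + 1), Eq. (16)
mtbf = eta * math.gamma(1.0 / m + 1.0)
print(round(mtbf, 1))            # close to the paper's 3090.1 h
```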

Through reliability analysis based on the small-sample maintenance data of the company's shallow-reduced three-line oil centrifugal pump, the following operating reliability rule can be predicted for this pump: when the reliability of the centrifugal pump is 0.85, the running time is 1316 h, which can be used as the preventive maintenance cycle of the centrifugal pump. Table 6 lists the reliability, failure rate and failure probability density for the corresponding running times of the centrifugal pump, which can serve as a reference for the enterprise's production and equipment management.


Table 6. Centrifugal pump reliability indices

Reliability  Failure rate  Failure probability density  Running time/h
0.9          0.000251      0.000227                     989
0.8          0.000286      0.000229                     1443
0.7          0.000308      0.000216                     1897
0.6          0.000326      0.000195                     2374
0.5          0.000342      0.000172                     2901
0.4          0.000358      0.000144                     3552
0.3          0.000375      0.000112                     4366
0.2          0.000392      0.0000795                    5385

4 Conclusion

The life cycle of a centrifugal pump is directly related to the company's safe production and economic benefits, and a reasonable maintenance plan can greatly extend the pump's life cycle and reduce its maintenance costs. When little maintenance or trouble-free operation data is available, the reliability of a centrifugal pump can still be analyzed by processing the sample of no-failure running times with the method described in this article, so as to effectively predict and analyze the reliability of the centrifugal pump. The same method can be used for predicting and analyzing the reliability of other devices in the case of small sample data.

References

1. Li, Z., Dai, Z., Jiao, Y.: Bayesian-Monte Carlo evaluation method for protection reliability of small sample failure data. J. Electric Power Syst. Autom. 28(05), 9–14 (2016)
2. Yan, Y., Shengjie, L., Renji, Z.: Rapid prototyping and manufacturing technology: principle, representative technics, applications and development trends. Tsinghua Sci. Technol. 2009, 1–12 (2009)
3. Yitang, X.: Curve fitting based on least squares method and its application in Matlab. Electron. World 10, 102–103 (2013)
4. You, D.: Reliability evaluation of least squares method in Weibull distribution. J. Hubei Univ. Technol. 24(04), 34–45 (2009)
5. Nelson, W.: Accelerated Testing: Statistical Models, Test Plans, and Data Analysis. Wiley, New York (1990)
6. Luo, J., Luo, L.: Random simulation of non-Markovian repairable systems. Syst. Eng. Electron. 7, 41–47 (1996)
7. Wu, Y.F., Lewins, J.D.: Monte Carlo studies of engineering system reliability. Ann. Nucl. Energy 19, 825–859 (1992)
8. Sheng, Z., Xie, S., Pan, C.: Probability Theory and Mathematical Statistics. Higher Education Press, Beijing (2009)

Research on Horizontal Vibration of Traction Elevator

Lanzhong Guo and Xiaomei Jiang(&)

School of Mechanical Engineering, Jiangsu Key Laboratory of Elevator Intelligent Safety, Changshu Institute of Technology, Changshu, China
[email protected]

Abstract. With the continuous improvement of people's pursuit of life quality, elevator manufacturers are no longer concerned only with the safety and reliability of the elevator, but also with ride comfort. Vibration of the elevator is the main factor that influences ride comfort. Compared with the study of vertical vibration, there are relatively few studies on horizontal vibration at home and abroad. An 8-DOF dynamic model of the horizontal vibration of the car is established by combining theoretical research with computational simulation, considering the variable stiffness of the guide rail and the Hertz contact stiffness of the rolling guide shoe; the model is simulated using MATLAB and also verified by experiment. The influence of related parameters on the natural frequency is investigated. This is of great significance for reducing vibration during running and improving passenger comfort.

Keywords: Horizontal vibration · Dynamic modeling · Simulation · Experiment

1 Introduction

At present, the elevator is a necessity in modern life, yet the production technology of high-speed elevators is still in the hands of foreign elevator manufacturing enterprises. The fastest elevator in the world runs at 28.5 m/s, equivalent to 102 km/h. Although many domestic enterprises hold production licenses for speeds above 7 m/s, actual production of passenger elevators is still below 5 m/s. Vibration is an important evaluation index of elevator comfort. Small vibration amplitudes will not pose a threat to the health and personal security of passengers, but once the elevator vibration reaches a certain value, it will make passengers uncomfortable [1, 2]. Strengthening vibration analysis and control to reduce vibration can therefore improve the quality of the elevator. There are two important factors that affect the dynamic comfort of an elevator: one is vertical vibration, the other is horizontal vibration. The vertical vibration of the medium and low speed elevators produced by domestic enterprises can meet the requirements of the national standard [3, 4]. Compared with the study of vertical vibration, there are relatively few studies on horizontal vibration. One reason is that the widely used elevators are medium and low speed elevators, for which the impact of horizontal vibration on dynamic comfort is far smaller than that of vertical vibration; the other is

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 131–140, 2019.



that horizontal vibration is closely related to the form, structure and motion parameters of the elevator system, and that its vibration mechanism is complex and modeling is difficult [5]. With the increase of elevator speed, the influence of horizontal vibration becomes more and more obvious; research on the horizontal vibration of the elevator system is therefore more and more important. There are many factors that cause elevator horizontal vibration. The guide rail and guide shoe system is the main excitation source for horizontal vibration of the elevator system [6]. Roberts et al. deduced the conversion formulas for the displacement of, and external force on, the guide shoe into the mass-center coordinate system of the car [7]. Ken-Ichi Okamoto et al. put forward three kinds of guide excitation which may affect the horizontal vibration of the elevator car [8]. On the basis of wave theory, Zhu et al. established the boundary value problem of the vibration of the elevator car under fixed and moving traction-rope conditions, and obtained an analysis method for the natural vibration and response of the system [9].

2 Horizontal Vibration Dynamics Modeling

In the horizontal direction of the elevator, the physical model of the car is shown in Fig. 1. The whole elevator car consists of two parts: one is the car itself, the other is the car frame. The connection between the car and the car frame is not rigid; the two are connected by elastic media such as the vibration-damping rubber at the car bottom and the anti-sway rubber of the car wall. The stiffness of these elastic media has a certain influence on the horizontal vibration of the elevator.

Fig. 1. Physical model of an elevator car in the horizontal direction

Since the guide rail is assembled from sections connected one by one, each segment can be simplified as a beam fixed at both ends. As shown in Fig. 2, A and B of the beam correspond to the two ends of the guide rail segment, and point C corresponds to the rolling guide shoe. The position of C changes constantly during elevator operation.


Fig. 2. Guide rail model


Fig. 3. Contact model of rolling guide shoe and guide rail

During contact, the guide shoe is equivalent to applying a force F at point C. From strength of materials, the constraining forces at ends A and B are:

FA = F(b²/l²)(1 + 2a/l)
FB = F(a²/l²)(1 + 2b/l)

The deflection at point C is:

xc = (F/(EI)) · (a³b³/(3l³))

The stiffness at point C is:

k = F/xc = 3EI·l³/(a³b³)
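The position dependence of the rail stiffness k = 3EIl³/(a³b³) can be sketched numerically (illustrative Python; the EI and l values are placeholders, not from the paper). At midspan (a = b = l/2) it reduces to the classical fixed-fixed beam value 192EI/l³:

```python
def rail_stiffness(a, l, EI):
    """Point stiffness k = 3*EI*l^3 / (a^3 * b^3) of a fixed-fixed rail segment,
    with the guide shoe at distance a from end A and b = l - a from end B."""
    b = l - a
    return 3.0 * EI * l**3 / (a**3 * b**3)

# Placeholder segment properties (illustrative only).
EI = 2.1e11 * 5.0e-7   # steel E ~ 2.1e11 Pa times a small rail I, in N*m^2
l = 5.0                # rail segment length, m

k_mid = rail_stiffness(l / 2, l, EI)
print(k_mid, 192.0 * EI / l**3)   # identical at midspan
```

The stiffness is symmetric about midspan and grows rapidly as the shoe approaches either rail joint, which is the source of the "variable stiffness" excitation mentioned in the abstract.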

The elevator used in this experiment has rolling guide shoes. During elevator operation the rolling guide shoes roll purely along the guide rail. There is a Hertz contact force in the normal direction between the rolling bodies, and a creep force perpendicular to the contact area along the rolling direction of the guide wheel. Because the creep force is small and the left-right horizontal vibration of the car is mainly discussed, only the Hertz contact force along the normal direction is considered here. According to three-dimensional Hertz elastic contact theory, the rolling shoe and guide rail are simplified as the model shown in Fig. 3. There is a layer of rubber on the surface of the guide wheel of the rolling guide shoe, and its elastic modulus is much lower than that of the guide rail. Therefore, the normal contact deformation between the rolling guide shoe and the guide rail is mainly produced by the guide shoe, and the contact deformation is far smaller than the radius of the guide wheel. Suppose the force between the guide wheel and the guide rail is F, and the rail is treated as a cylinder of infinite radius. As shown in Fig. 3, a contact zone of width 2a is generated at the contact. The expressions for the contact width are:

a = √(4Fρ/(πbE))
ρ = ρ1ρ2/(ρ1 + ρ2)
E = E1E2/[(1 − μ1²)E2 + (1 − μ2²)E1]

where
F: the force between the guide rail and the shoe;
b: the thickness of the guide rail;
E1, E2: the elastic moduli of the rolling guide shoe and guide rail;
μ1, μ2: the Poisson ratios of the rolling guide shoe and guide rail;
ρ1, ρ2: the radii of the two cylinders.

The stiffness of the normal contact can then be obtained:

k2 = F/Δ = πbE/2
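A small numeric sketch of the equivalent-modulus and contact-stiffness formulas above (illustrative Python; the material values are placeholders, not from the paper):

```python
import math

def equivalent_modulus(E1, mu1, E2, mu2):
    """E = E1*E2 / ((1 - mu1^2)*E2 + (1 - mu2^2)*E1)."""
    return E1 * E2 / ((1.0 - mu1**2) * E2 + (1.0 - mu2**2) * E1)

def hertz_contact_stiffness(b, E):
    """Normal contact stiffness k2 = pi*b*E/2 for the line contact above."""
    return math.pi * b * E / 2.0

# Placeholder values: soft rubber tyre (E1) against a steel rail (E2).
E1, mu1 = 5.0e7, 0.48      # rubber layer on the guide wheel
E2, mu2 = 2.1e11, 0.30     # steel guide rail
E = equivalent_modulus(E1, mu1, E2, mu2)
k2 = hertz_contact_stiffness(b=0.016, E=E)   # rail thickness b = 16 mm (assumed)
print(E, k2)
```

Because E2 is much larger than E1, the equivalent modulus is dominated by the rubber layer, matching the statement that the contact deformation is mainly produced by the guide shoe.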

The vertical and horizontal vibrations of the elevator car are not coupled, so vertical vibration is ignored in the modeling of horizontal vibration, and the horizontal vibration of the elevator is divided into front-back vibration and left-right vibration. The two vibration models and their dynamic responses are similar; therefore, the left-right vibration of the car is mainly focused on here. The horizontal vibration physical model of the elevator is simplified from Fig. 1, and the horizontal vibration dynamic mathematical model of the elevator system shown in Fig. 4 is established.

Fig. 4. An 8 DOF model of horizontal vibration



O1 and O2: the centroid positions of the car and car frame, respectively;
m1, J1, θ1, y1: the mass, moment of inertia, rotational displacement and lateral displacement of the car;
m2, J2, θ2, y2: the mass, moment of inertia, rotational displacement and lateral displacement of the car frame;
k1, c1: the equivalent stiffness and damping of the shoe spring;
k2, c2: the Hertz normal contact stiffness and damping between guide shoe and guide rail;
k3, c3: the variable stiffness and damping of the guide rail;
k4, c4: the equivalent stiffness and damping of the isolation rubber between car wall and car frame;
k5, c5: the equivalent stiffness and damping of the damping rubber at the car bottom;
l1, l2: the distances from the upper and lower damping rubber of the car wall to the mass center of the car;
l3, l4: the distances from the upper and lower damping rubber of the car wall to the mass center of the car frame;
l5, l6: the distances from the upper and lower guide shoe damping rubber to the mass center of the car frame;
l7: the y-direction distance from the car-bottom damping rubber to the mass center of the car;
md1, md2, md3, md4: the masses of the four guide shoes;
z1, z2, z3, z4: the excitations of the guide rail on the guide shoes.

In Fig. 4, displacement is positive horizontally to the right, and angular displacement is positive counterclockwise. According to Newton's second law, the vibration differential equations of the 8-DOF car system are established:

m1ÿ1 − 2k4[2y2 − 2y1 + θ1(l1 − l2) + θ2(l4 − l3)] − 2c4[2ẏ2 − 2ẏ1 + θ̇1(l1 − l2) + θ̇2(l4 − l3)] = 0

J1θ̈1 − 2k4l1[y1 − y2 + θ2l3 − θ1l1] − 2k4l2[y2 − y1 + θ2l4 − θ1l2] − 2k5(θ2 − θ1)l7² − 2c4l1[ẏ1 − ẏ2 + θ̇2l3 − θ̇1l1] − 2c4l2[ẏ2 − ẏ1 + θ̇2l4 − θ̇1l2] − 2c5(θ̇2 − θ̇1)l7² = 0

m2ÿ2 − k1(yd1 + yd2 + yd3 + yd4 − 4y2 + 2θ2l5 − 2θ2l6) − k4[4y1 − 4y2] − k4[2θ1(l2 − l1) + 2θ2(l3 − l4)] − c1(ẏd1 + ẏd2 + ẏd3 + ẏd4 − 4ẏ2 + 2θ̇2l5 − 2θ̇2l6) − c4[4ẏ1 − 4ẏ2] − c4[2θ̇1(l2 − l1) + 2θ̇2(l3 − l4)] = 0

J2θ̈2 − k1l5(2y2 − 2θ2l5 − yd1 − yd3) − k1l6(2y2 − 2θ2l6 + yd2 + yd4) − k4l3(2y1 + 2y2 + 2θ1l1 − 2θ2l3) − k4l4(2y1 − 2y2 + 2θ1l2 − 2θ2l4) − 2k5(θ1 − θ2)l7² − c1l5(2ẏ2 − 2θ̇2l5 − ẏd1 − ẏd3) − c1l6(2ẏ2 − 2θ̇2l6 + ẏd2 + ẏd4) − c4l3(2ẏ1 + 2ẏ2 + 2θ̇1l1 − 2θ̇2l3) − c4l4(2ẏ1 − 2ẏ2 + 2θ̇1l2 − 2θ̇2l4) − 2c5(θ̇1 − θ̇2)l7² = 0

md1ÿd1 − k1(y2 − θ2l5 − yd1) + ks1yd1 − c1(ẏ2 − θ̇2l5 − ẏd1) + cs1ẏd1 = ks1z1 + cs1ż1

md2ÿd2 − k1(y2 + θ2l5 − yd2) + ks2yd2 − c1(ẏ2 + θ̇2l5 − ẏd2) + cs2ẏd2 = ks2z2 + cs2ż2

md3ÿd3 + k1(yd3 − y2 + θ2l5) + ks1yd3 + c1(ẏd3 − ẏ2 + θ̇2l5) + cs1ẏd3 = ks1z3 + cs1ż3

md4ÿd4 + k1(yd4 − y2 − θ2l5) + ks2yd4 + c1(ẏd4 − ẏ2 − θ̇2l5) + cs2ẏd4 = ks2z4 + cs2ż4

In matrix form:

[M]{ÿ} + [C]{ẏ} + [K]{y} = {F}


L. Guo and X. Jiang

Generalized vibration displacement vector: Y = {y1, θ1, y2, θ2, yd1, yd2, yd3, yd4}ᵀ
Generalized vibration velocity vector: Ẏ = {ẏ1, θ̇1, ẏ2, θ̇2, ẏd1, ẏd2, ẏd3, ẏd4}ᵀ
Generalized vibration acceleration vector: Ÿ = {ÿ1, θ̈1, ÿ2, θ̈2, ÿd1, ÿd2, ÿd3, ÿd4}ᵀ
System mass matrix: M = diag(m1, J1, m2, J2, md1, md2, md3, md4)

The system stiffness matrix K and damping matrix C are the 8 × 8 matrices obtained by collecting the coefficients of the displacements and velocities in the eight equations of motion above; their entries are combinations of k1, k4, k5, ks1, ks2 (respectively c1, c4, c5, cs1, cs2) with the lever arms l1 to l7.
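Given numerical values for M and K, the undamped natural frequencies follow from the generalized eigenvalue problem K·φ = ω²·M·φ. A minimal sketch (illustrative Python with a placeholder 2-DOF car/car-frame model, not the paper's 8-DOF data):

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder 2-DOF model: car coupled to car frame through rubber (k4),
# car frame restrained by the shoe spring (k1). Values are illustrative only.
m1, m2 = 1200.0, 800.0          # kg
k4, k1 = 1.5e5, 4.0e4           # N/m

M = np.diag([m1, m2])
K = np.array([[ k4,     -k4     ],
              [-k4,  k4 + k1    ]])

# Generalized symmetric eigenvalue problem K phi = w^2 M phi (undamped).
w2, phi = eigh(K, M)
freqs_hz = np.sqrt(w2) / (2.0 * np.pi)
print(freqs_hz)
```

The same call applies unchanged to the full 8 × 8 M and K of the model above; sweeping k1 or k4 over a range reproduces the kind of natural-frequency trends examined in Sect. 3.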

3 Horizontal Vibration Dynamics Analysis

The horizontal vibration frequency of the elevator is lower than the vertical vibration frequency and thus closer to the external excitation frequency. When the external excitation frequency is the same as or close to the horizontal natural frequency of the car, it can easily cause resonance of the car, which aggravates the horizontal vibration of the elevator car. Therefore, the natural frequency of elevator horizontal vibration can be changed to avoid resonance and reduce the horizontal vibration acceleration. From the study of vertical vibration, it is found that the stiffness of the vibration-damping rubber at the car bottom and the car load have a great influence on vertical vibration, so these factors cannot be changed lightly. The car-side damping rubber stiffness and the guide shoe spring stiffness have no effect on vertical vibration, so these two factors are chosen to investigate their influence on the natural frequency of the horizontal vibration of the elevator with no load. In Figs. 5 and 6, the natural frequencies of orders 1, 2 and 3 are shown from left to right. As shown in Fig. 5, when the car-side damping rubber stiffness is varied (0–2 × 10⁵ N/m), the first three natural frequencies of the elevator show an increasing trend, with the first and second natural frequencies increasing markedly; the natural frequency of the elevator changes greatly as the rubber stiffness is changed, while the third natural frequency varies relatively gently around 15.5 Hz. In Fig. 6, when the guide shoe spring stiffness is varied (0–1.2 × 10⁵ N/m), all three natural frequencies change obviously. The first natural frequency reaches its maximum near 2 × 10⁴ N/m and then levels off; the second and third natural frequencies increase markedly, and the natural frequency range changes greatly

Research on Horizontal Vibration of Traction Elevator


Fig. 5. Natural frequency (Hz) versus car side damping rubber stiffness (105 N/m)

Fig. 6. Natural frequency (Hz) versus guide shoe spring stiffness (104 N/m)

as the shoe spring stiffness varies. When the car's natural frequency is close to the external excitation frequency, the guide shoe spring stiffness can therefore be adjusted to move the natural frequency away from the excitation frequency. The measurement procedure is as follows. Move the elevator to the bottom floor and, as shown in Fig. 7, place the EVA625 comprehensive tester on the elevator floor; set the measuring time and other instrument parameters and trigger the recording. Close the elevator car door and run the car towards the middle floor, where the operator presses the emergency stop button. The vibration recorded over the whole measurement process is then imported into a computer for Fourier analysis, from which the natural frequency of the elevator's free vibration is obtained. To make the results more accurate, an emergency stop is performed once in the up direction and once in the down direction, and the results of the two experiments are averaged.

Fig. 7. Experiment field (no-load, halfload, full load)


L. Guo and X. Jiang

The natural frequency in the horizontal direction of the elevator is measured by taking the mid-layer braking at full load as an example. Figure 8 is the vibration displacement diagram for the elevator running up at full load and braking at the mid layer. When the Z-axis displacement curve enters its horizontal stage, the elevator has begun its emergency stop. Applying the Fourier transform to the Y-axis free vibration curve after this time point yields the frequency-domain data of the elevator's horizontal vibration. In the resulting spectrum, the dominant frequency is the natural frequency of the elevator in the horizontal direction (Fig. 9 shows the horizontal vibration frequency spectrum for full load, upward travel, mid-layer braking).
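The dominant-frequency extraction described here can be sketched as follows, with a synthetic decaying 5.6 Hz free vibration standing in for the measured Y-axis curve (the sampling rate and signal parameters are assumptions for illustration):

```python
import numpy as np

fs = 256.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 8, 1 / fs)                 # 8 s record after the stop
# Decaying sinusoid standing in for the measured free vibration.
y = np.exp(-0.3 * t) * np.sin(2 * np.pi * 5.6 * t)

# FFT of the free-vibration segment; keep the positive-frequency half.
Y = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1 / fs)

# The spectral peak is taken as the horizontal natural frequency.
f_dominant = freqs[np.argmax(Y)]
print(f_dominant)   # close to 5.6 Hz
```

The frequency resolution is 1/T (here 0.125 Hz for an 8 s record), which is why a sufficiently long free-vibration segment is needed to separate the closely spaced lower-order frequencies.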

Fig. 8. Horizontal vibration displacement of time domain (full load, mid-layer, up)

Fig. 9. Horizontal vibration frequency spectrum diagram (full load, middle layer, up)

The natural frequency of the elevator under three conditions of no load, half load and full load can be obtained by repeating the above process. The experimental value of the natural frequency is shown in Table 1. An 8 DOF vibration model is established for horizontal vibration of elevator, and the theoretical value of horizontal natural frequency is obtained by simulation. The specific value is shown in Table 2.



Table 1. Elevator horizontal vibration natural frequency (experimental value)

Natural frequency (Hz)  1st order  2nd order  3rd order
No load                 5.625      10.750     15.895
Half load               4.625      9.5        15.875
Full load               3.005      10.375     15.875

Table 2. Elevator horizontal vibration natural frequency (simulation value)

Natural frequency (Hz)  1st order  2nd order  3rd order
No load                 7.6633     10.0569    15.6318
Half load               6.3157     9.9171     15.6260
Full load               5.7275     9.8878     15.6243

4 Conclusion

Comparing the experimental values with the theoretical simulation values leads to the following conclusions. The natural frequencies of the horizontal vibration are, on the whole, lower than those of the vertical vibration, and the horizontal vibration spectrum is more complex, containing the natural frequencies together with other components. This is mainly because, in the experiment, weights were placed in the elevator to simulate the load and the instrument was not positioned at the centre of the car floor, so the car rotation frequency is mixed into the measured spectrum. The obvious dominant frequencies were therefore taken as the natural frequencies. Some experimental values are about 2 Hz lower than the theoretical values, mainly because the theoretical simulation neglects some factors, introducing a certain error. Apart from these individual data points, the errors between the theoretical and experimental natural frequencies of the other orders lie within the acceptable range, and the theoretical values agree well with the measurements. The model of the horizontal vibration of the elevator system and the simulation results obtained from it are therefore credible.

Acknowledgement. This work was supported by the Natural Science Research Major Project of Higher Education Institutions of Jiangsu Province (No. 17KJA460001).

References
1. ISO 8041:2005: Human response to vibration – Measuring instrumentation
2. GB/T 10058-2009: Technical conditions for elevators
3. Jiang, X.: Research on vibration control of traction elevator. In: International Industrial Informatics and Computer Engineering Conference (IIICEC 2015), pp. 2144–2147 (2015)
4. ISO 18738:2003: Lifts (elevators) – Measurement of lift ride quality



5. Fu, W., XiaoBo, L., Zhu, C.: Structural optimization to suppress elevator horizontal vibration using prototype. Acta Simulata Systematica Sinica (2005) 6. Lorsbach, G.P.: Analysis of elevator ride quality, vibration. Elevator World 51(6), 108–113 (2003) 7. Roberts, R.K.: Elevator active guidance system having a coordinated controller. US Patent 5652414 (1997) 8. Kenji, U., Okamoto, K.I., Takashi, Y., et al.: Active roller guide system for high-speed elevators. Elevator World 50(4), 90–92 (2002) 9. Zhu, W.D., Xu, G.Y.: Vibration of elevator cables with small bending stiffness. J. Sound Vib. 263(3), 679–699 (2003)

Research on Real-Time Monitoring Technology of Equipment Based on Augmented Reality

Lilan Liu1, Chen Jiang1, Zenggui Gao1(✉), and Yi Wang2

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, China
[email protected]
2 Business School, Plymouth University, Plymouth, UK

Abstract. In view of the complex information involved in the maintenance and management of workshop equipment and the high professional demands placed on maintenance personnel, this paper applies augmented reality technology to the operation monitoring and maintenance of equipment. With Thingworx Studio as the upper AR development platform, a KEPServerEX connection is used to obtain real-time data from the equipment. By integrating equipment operation data, equipment operation guidance and other information, an augmented reality monitoring and maintenance system for the equipment is built. Combined with the operating parameters, the health status of the equipment is diagnosed. Information can be displayed to the worker in an integrated virtual-real environment to guide the maintenance work, realizing visual management of the equipment on the production site.

Keywords: Augmented reality · Equipment monitoring · Maintenance guidance

1 Introduction

With the continuous development of information and digital construction in the manufacturing field, the monitoring and management of equipment, as the infrastructure of production activities, has become a precondition for improving production efficiency and realizing safe production. Traditional on-line equipment monitoring systems often require a large number of cables to be installed in the workshop. These cables are complicated and easily affected by wire resistance and distributed capacitance, which leads to large errors in the collected data [1]. Moreover, because the requirements for equipment operation information are ambiguous, people lack a visual and effective tool to express the requirement information accurately and clearly in the face of massive equipment monitoring data [2]. With the intelligent development of manufacturing, more and more equipment is becoming technology-intensive. Traditional maintenance methods rely on paper or electronic manuals; their efficiency is low and errors are difficult to control [3].

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 141–150, 2019.

Therefore, there is an urgent need for an interactive equipment monitoring system that is centered on


L. Liu et al.

maintenance personnel, driven by status information, and synchronised with guidance and maintenance. Such a system integrates the data of the entire lifecycle of the equipment and helps workers complete its monitoring and maintenance. Augmented Reality (AR) technology developed on the basis of virtual reality technology [4]. It superimposes computer-generated virtual objects, scenes, sounds or system hints onto the real scene, thereby enhancing the real scene and increasing the user's perception of the real world [5]. In the industrial field, some developed countries have begun to explore the application of augmented reality technology in future smart factories. The US Marine Corps uses an augmented reality auxiliary maintenance system (head-mounted displays) designed by Columbia University's Computer Graphics and User Interfaces Laboratory, applying augmented reality technology to the maintenance of armored turrets. Sony's TransVision augmented reality prototyping system can display a variety of auxiliary information to the user through a helmet display, including virtual instrument panels, the internal structure of the repaired equipment, and drawings of its parts [6]. At present, domestic research on and application of AR technology in the industrial field has only just started. In this paper, a real-time monitoring technology based on augmented reality is proposed. Based on the digital 3D model of the equipment, Thingworx Studio is used as the development platform of the AR application. Through Internet of Things technology, the equipment operation data, equipment operation guidance and other information are integrated to build augmented reality scenarios for the equipment.
In combination with the operating parameters, the failure mode of the equipment is analyzed to diagnose its health, and augmented reality technology is used to display the results to the workers and guide them in completing the maintenance work.

2 System Framework

2.1 Overall Scheme Design

The framework of real-time equipment monitoring based on augmented reality, shown in Fig. 1, is mainly composed of the device sensing terminal, the AR mobile application terminal and the AR system server. The image stream of the equipment identification code is captured by the camera, and the code is identified and positioned by tracking and registration technology. The pre-established virtual digital model (including the equipment geometry model, assembly features, context navigation, etc.) is registered to the corresponding parts of the real equipment to realize the seamless fusion of virtual and real scenes. The main functions of each part of the system are as follows. (1) AR system server: It contains all relevant data in the augmented-reality-based equipment monitoring system, including the equipment's graphic identification code library, 3D model data, operation and maintenance data, text manual data, etc. The data are stored in the server together with defined mechanisms for reading, invoking and modifying them, including the data structures and the data invocation mechanism.

Fig. 1. System overall scheme design drawing

(2) Mobile application terminal: It includes the target recognition module, the human-machine interaction module and the scene rendering output module. The application terminal calls the camera of the smart glasses or handheld device to scan the graphic code on the field equipment. The target recognition module detects and matches the identification code in the input video stream image. After the identification code is matched, the position and orientation of the identification map are calculated through pose estimation and identification map tracking. The 3D rendering engine then superimposes the virtual digital model from the server onto the real scene to complete the enhancement of equipment monitoring and maintenance. (3) Device sensing terminal: It is mainly composed of various sensors, A/D converters and interface circuits. The OPC standard is used to collect data from the control system, the condition monitoring system and the sensors of the equipment. Using the remote communication technology of Wireless Sensor Networks (WSNs), the real-time status data of the equipment is transmitted to the AR server for analysis and displayed through AR technology, so as to realize the information enhancement of the real equipment.

2.2 Design of Equipment Identification Code

The AR registration technology mainly depends on image recognition. Its forms can be divided into two basic categories: marker-based and marker-less. The marker-less method relies on natural features existing in the real scene and calculates the coordinates of the virtual object to be displayed through coordinate transformation. This method has difficulty distinguishing between units of the same type of equipment, and its recognition is not stable in a complex equipment field. Therefore, this paper uses marker-based AR registration. The customized hexagonal marker, shown in Fig. 2 below, mainly consists of five parts: ①



Contour, defined by the contrast between the two different colors of the Border and the Clear Space, is what the computer vision algorithm detects first. After finding the Contour, the algorithm looks for the Code and "reads" it to identify the value or "ID" encoded within the marker. ② Border, typically the most identifiable and defining shape within the marker. In Fig. 2, the Border is the outermost shape, made of six straight lines forming a hexagon. ③ Clear Space, which can be either inside or outside the Border and is required to guarantee enough contrast for the algorithm to detect the Contour. ④ Code, which consists of elements; the type and length of the encoded value/ID determine the number of elements. A unique code is generated by setting some of the elements to the dark or bright state. ⑤ Background, the placement for a custom image; it is not used to store information and can serve as a recognition image.

Fig. 2. Equipment identification code
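The element encoding of the Code region (④) can be pictured as a bit pattern; the following is a hypothetical decoder, assuming an 8-element code and a dark = 1 convention (the actual marker layout and encoding scheme are not specified in the text):

```python
# Each element of the marker's Code region is read as one bit:
# dark = 1, bright = 0 (assumed convention); the bit string is the ID.
def decode_marker_id(elements):
    """elements: list of element states, each 'dark' or 'bright'."""
    bits = ['1' if e == 'dark' else '0' for e in elements]
    return int(''.join(bits), 2)

pattern = ['dark', 'bright', 'bright', 'dark',
           'dark', 'dark', 'bright', 'dark']   # 8 elements -> 1 byte
print(decode_marker_id(pattern))   # 0b10011101 = 157
```

With n elements, 2^n distinct IDs are available, which is why the number of elements is fixed by the type and length of the value to be encoded.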

The system recognizes the marker in the input video stream, reads the code, and calculates the camera position, direction and other external parameters relative to the marker, taking the camera coordinate system as the reference coordinate system. When the user moves the camera to observe the real scene from different angles, the scene changes can be unified as the movement of the real part relative to the camera coordinate system.
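Taking the camera coordinate system as reference, a point given in the marker frame maps into camera coordinates through the estimated pose (R, t). A minimal sketch with an assumed pose; a real system would obtain R and t from the pose estimation step:

```python
import numpy as np

def to_camera_frame(p_marker, R, t):
    """Transform a point from marker coordinates to camera coordinates:
    p_cam = R @ p_marker + t."""
    return R @ p_marker + t

# Assumed pose: marker rotated 90 degrees about the camera Z axis,
# placed 0.5 m in front of the camera.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.0, 0.0, 0.5])

corner = np.array([0.1, 0.0, 0.0])        # a marker corner, marker frame
print(to_camera_frame(corner, R, t))      # ~ [0.0, 0.1, 0.5]
```

Registering a virtual part at a fixed marker-frame location then keeps it glued to the real equipment as the camera moves, since only (R, t) changes.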

3 Real-Time Monitoring of Equipment Operating Status Based on AR

3.1 Real-Time Acquisition of Equipment Operation Data

During the operation of the equipment, real-time data such as the operation and status of the on-site equipment can be obtained from the data fed back by the equipment's sensors and displayed in a visual augmented reality environment. This makes it possible to accurately and intuitively detect and locate abnormal situations during operation and to know the safety status of each piece of equipment in a timely manner. Data acquisition adopts the KEPServerEX industrial connectivity platform to provide a single source of industrial automation data to the upper application program. OPC is an industrial technical specification and standard for solving the



communication between application software and various device drivers. It standardizes the interface functions, so that field devices can be accessed in a unified way regardless of the form in which they exist. Configuring KEPServerEX is a key step in acquiring equipment data: an OPC server is set up in KEPServerEX to establish a connection between the OPC client and the PLC address bits to be read or written. To realize the communication between the AR application and the equipment, the data is transferred to the OPC server through the network interface of the automation equipment, using Ethernet or WLAN, transformed into the unified OPC format by KEPServerEX, and finally displayed in the upper AR application. The acquisition structure of the equipment is shown in Fig. 3 below. Because the various automation devices on site have different communication drivers, the communication protocols adopted differ. Among them, the Siemens PLC system is connected through the Siemens TCP/IP Ethernet protocol, the Schneider PLC system communicates through the Modbus Ethernet protocol, and electronic instruments and other devices are read through OPC.

Fig. 3. Communication structure diagram

KEPServerEX is configured by establishing a channel–device–tag hierarchy. A separate channel is established for each system of the equipment and the corresponding communication protocol is selected; a device object is created under the channel and the device's IP address is configured; finally, tags are created, specifying the data type and address information of each monitored item. The overall KEPServerEX configuration is shown in Fig. 4.

Fig. 4. KEPServerEX Configuration Diagram
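The channel–device–tag hierarchy can be pictured as a nested mapping; in the sketch below, the channel names, IP addresses and tag addresses are illustrative assumptions, not the project's actual configuration:

```python
# Illustrative KEPServerEX-style hierarchy: channel -> device -> tags.
config = {
    "SiemensChannel": {                       # Siemens TCP/IP Ethernet
        "AGV_PLC": {
            "ip": "192.168.0.10",
            "tags": {
                "BatteryLevel": {"address": "DB1.DBD0", "type": "Float"},
                "Speed":        {"address": "DB1.DBD4", "type": "Float"},
            },
        },
    },
    "ModbusChannel": {                        # Schneider Modbus Ethernet
        "Conveyor_PLC": {
            "ip": "192.168.0.20",
            "tags": {"MotorOn": {"address": "400001", "type": "Boolean"}},
        },
    },
}

def tag_address(channel, device, tag):
    """Resolve a tag's PLC address along the channel-device-tag path."""
    return config[channel][device]["tags"][tag]["address"]

print(tag_address("SiemensChannel", "AGV_PLC", "BatteryLevel"))  # DB1.DBD0
```

The upper AR application only ever sees the unified tag names; the per-protocol addressing stays inside the channel definitions, which is the point of the single-source OPC layer.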




3.2 Monitoring and Guidance of Equipment Status

Equipment Status Visualization
The server side uses Thingworx Studio, an AR application development platform launched by PTC. The real-time data of the equipment is fed into Thingworx through an interface from the KEPServerEX platform. With the help of the rich 2D and 3D controls of the Thingworx Studio platform, a virtual display interface is created over the equipment. The CAD engineering model of the equipment is converted into the "pvz" format, and the running data on the Thingworx platform is invoked to create a 3D virtual dashboard and analyze it, realizing real-time monitoring and over-limit alarming of the equipment. The feedback of equipment components in the alarm state is defined through JavaScript programming. For example, normal operation requires that the battery level of the on-site AGV not fall below 30%. While the real-time monitored value is above this threshold, the equipment operating state is normal; otherwise a low-power alarm is raised, and in the augmented reality scene the battery component flashes brightly, reminding maintenance personnel to charge the vehicle. The module can visualize elements including virtual parts, text/graphics context navigation, and virtual instrument panels of the equipment status. Through AR's virtual-real integration, massive equipment monitoring data is expressed intuitively and efficiently, helping equipment maintenance personnel analyze and evaluate equipment status, detect abnormalities and locate faulty parts in time (Fig. 5).

Fig. 5. Visual development of AR application
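The over-limit alarm rule (e.g. the 30% AGV battery threshold) reduces to a plain threshold check; the function name and example values below are illustrative, and the real system implements this logic in JavaScript inside Thingworx Studio:

```python
BATTERY_THRESHOLD = 30.0   # percent -- minimum level for normal operation

def check_battery(level):
    """Return the alarm state for a monitored battery-level value,
    mirroring the over-limit rule described in the text."""
    if level >= BATTERY_THRESHOLD:
        return "normal"
    # In the AR scene this state makes the battery component flash
    # brightly and prompts maintenance personnel to charge the vehicle.
    return "low_power_alarm"

print(check_battery(75.0))   # normal
print(check_battery(22.5))   # low_power_alarm
```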

Equipment Maintenance Guidance
Based on the real-time monitoring of the equipment with augmented reality, equipment maintenance guidance mainly provides a visual platform of



operation guidance for equipment maintenance. The maintenance guidance process designed for this system is as follows. Equipment maintenance personnel use the AR mobile terminal to carry out patrol inspection at the production site. When a part of the equipment needs to be maintained or replaced, the AR terminal camera captures the scene images and matches them against the maintenance guidance database. The corresponding virtual enhanced information is displayed on the screen of the terminal, including text, images, 3D mock-up disassembly, animation of the assembly process, etc. [7]. The specific implementation of augmented-reality-based equipment maintenance guidance is as follows. (1) Creation of the equipment model. The model is the basis of equipment maintenance guidance. The equipment model can come from different sources such as SolidWorks, Pro/E, CATIA, etc. The concrete step is to process model data from the different software sources into the Thingworx Studio standard data format, which serves as the model for subsequent AR program calls. (2) Production of the guide sequence. In the maintenance guidance application, 3D animation of the model is combined with the relevant CAD data to provide graphical information on specific configurations, and the equipment operating procedure is conveyed accurately and graphically through augmented reality technology. This paper uses the dedicated environment of Creo Illustrate to animate the imported equipment models, create detailed 3D animation sequences, and insert 3D annotations for warnings or instructions during guidance. (3) Editing and publishing of the AR scene. This paper uses the Thingworx Studio software, calling JavaScript and CSS styles to define the interaction of the AR scenario, associating the animation sequences of the maintenance instructions with the equipment components, and completing the publishing of the AR scene by configuring the AR server and the identification code ID.
The maintenance personnel activate the corresponding AR program through virtual buttons or other methods, and complete the corresponding maintenance work according to the guidance content.

4 Effect Analysis of Practical Application

The real-time monitoring system based on augmented reality designed in this paper has been successfully applied to the AGVs (Automated Guided Vehicles) in the production workshop of the Shanghai Key Laboratory of Intelligent Manufacturing & Robotics, solving the problems of data monitoring and maintenance operations for AGVs in workshop production. The AGV system is connected through a wireless network to collect the AGV's current coordinate position, running speed, battery level, current processing tasks, running health status, etc. As shown in Fig. 6, by scanning the identification code on the vehicle body, the system displays the equipment's data in real time in the form of virtual instrument panels, including equipment operation data and production data. For abnormal data, the system can retrieve and display the historical trend, assisting personnel in



Fig. 6. Real-time monitoring of AGV

analyzing the causes of the abnormalities; the system raises timely alarms and guides the worker to operate the equipment correctly for maintenance. As shown in Fig. 7, when the system is used for equipment maintenance, the system uses the component maintenance form to synchronously locate the parts that need maintenance and display the work to be carried out. During the maintenance process, personnel can call up electronic documents of the equipment at any time to consult and assist in completing the work. This greatly helps the operator complete the inspection and maintenance of the equipment's mechanical and electrical systems, significantly improving operational efficiency and reducing maintenance errors (Fig. 8).

Fig. 7. Maintenance guidance of AGV



Fig. 8. System application effect

5 Conclusion

This paper designs and implements an equipment monitoring and maintenance system based on augmented reality, using Thingworx Studio as the development platform. The main functions of the system include equipment data acquisition, processing and AR display. AR technology is applied to equipment monitoring and maintenance: through the integration of sensing technology, the operating status of the equipment can be understood at a glance. Operation monitoring and maintenance guidance of complex real equipment are carried out in a virtual-real interaction environment, realizing real-time maintenance and operation guidance of field equipment, reducing the complicated monitoring wiring in the workshop, and improving the operation quality and efficiency of the equipment in the production environment. The implementation of the system lays a foundation for the future full application of augmented reality technology in intelligent workshop production.

Acknowledgment. The authors would like to express their appreciation to mentors at Shanghai University and the Norwegian University of Science and Technology for their valuable comments and other help. Thanks to the pillar program supported by the Shanghai Science and Technology Committee of China (No. 17511104600), and to the Ministry of Industry and Information Technology for supporting the key project (No. TC17085JH).

References 1. Rao, Z., Shao, Y., Ma, J., Wang, S., Liu, L.: Wisdom workshop information perception and analysis. Modern Manuf. Eng. 5, 22–27 (2017) 2. Chen, Y., Li, Z.: Full states digitized modelling method and its application in maintenance field of hydroelectric generating units. J. Syst. Simul. 20(2), 499–505 (2008) 3. Zhao, X., Zuo, H., Xu, X.: Research on key techniques of augmented reality maintenance guiding system. China Mech. Eng. 19(6), 678–682 (2008) 4. Zhao, M., Liu, B., Wu, D.: Study on the techniques of augmented reality installation and maintenance system. Opt. Instrum. 34(2), 16–20 (2012)



5. Azuma, R., Baillot, Y., Behringer, R.: Recent advances in augmented reality. IEEE Comput. Graphics Appl. 21(6), 34–47 (2001)
6. Schwald, B., Figue, J., Chauvineau, E.: STARMATE: using augmented reality technology for computer-guided maintenance of complex mechanical elements. In: Proceedings of e2001: eBusiness and eWork, Venice, pp. 17–19. Cheshire Henbury (2001)
7. Xu, C., Li, S., Wang, J., Peng, T.: Occlusion handling in augmented reality based virtual assembly. Machinery Des. Manuf. 12, 256–258 (2009)

Research on the Relationship Between Sound and Speed of a DC Motor

Xiliang Zhang1(✉), Sujuan Wang1, Zhenyu Chen1, Zhiwei Shen2, Yuxin Zhong1, and Jingguan Yang1

1 College of Engineering, Shanghai Polytechnic University (SSPU), No. 2360 Jin Hai Road, Pudong, Shanghai, China
[email protected]
2 Faculty of Engineering, The University of New South Wales (UNSW Sydney), Sydney, NSW 2052, Australia

Abstract. It is a common phenomenon that a running motor produces a sound, and that the sound changes when the motor accelerates or decelerates. The underlying reason is surmised to be that the speed is a complex function of the sound. Sound is usually expressed in terms of amplitude and frequency via the Fourier transform, so the problem becomes finding the exact expression of this function with respect to frequency. If the function were known, it would provide a novel method for estimating the speed from the sound alone. This paper investigates the mechanism underlying the phenomenon and discovers a quantitative relationship between sound and speed. The application area of the framework is novel. Another contribution of this paper is the virtual dominant frequency, which is proposed to replace the dominant frequency.

Keywords: Sound · Speed · Fast Fourier transform · Virtual dominant frequency

1 Introduction

It is a common phenomenon that different speeds of a motor produce different sounds. From this observation, the question arises whether there is any underlying relationship between speed and sound. A natural approach is to use frequency to represent the sound, since in the signal processing area, signals such as sounds are routinely analysed in the frequency domain. The problem is therefore transformed into whether there is any link between speed and frequency. The most commonly used way to describe the changing frequency of a sound as the relative speed of the observer increases or decreases is the Doppler effect [1]. However, this formula is not suitable for the present case, since the sound source and the observer are at rest relative to each other and there is no relative speed. Therefore, it is necessary to find a new method to describe the relationship between speed and frequency.

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 151–158, 2019.

The Nonlinear AutoRegressive Moving Average with eXogenous input (NARMAX) model [4, 5] is considered an effective method for the problem above. Using a


X. Zhang et al.

nonlinear polynomial can effectively fit the complex input-output function. However, those results are not presented in this paper; they will be generated in the next few months, and the NARMAX model will be discussed later in the discussion section. Data-driven methods, such as deep learning [2, 3], are good choices too. Nevertheless, only a small amount of data was available, which would cause data-driven methods to lose efficacy. State-of-the-art methods such as Generative Adversarial Networks (GANs) [6] can generate data with the same distribution, and data generated by GANs can be used together with the real data to train Convolutional Neural Networks (CNNs) [7], which are used to fit the complex function between input and output. The results of using GANs to generate data will be presented in the next few months, and this approach will also be discussed later in the discussion section. There are two challenges. One is effectively eliminating the abnormal sound, which is unknown and random, and then reducing other noises. The other is obtaining the dominant frequency, whose order is unknown and probably changeable.

2 Related Work The Fast Fourier Transform (FFT) is the cornerstone of the entire framework, since it converts the sound into amplitude and frequency. It simplifies the sound input into a combination of frequency sequences of different magnitudes. It seems complicated, but it is much simpler to analyse. The FFT is a fast and efficient algorithm for the Discrete Fourier Transform (DFT) and commonly used in signal processing and other industrial area [8, 9]. In this research, the FFT was used to decompose the sound into a combination of signals of different frequencies and amplitudes, which were considered as the multi input signals for the framework. Noises are everywhere and cannot be avoided. Therefore, how to reduce noises is an important part in signal processing, since it affects the result directly. Usually, there are several methods, such as filters. The basic idea of filters is using expectation to reduce the effect of noises, e.g. the Kalman filter [10]. The disadvantage is that it cannot deal with colour noises directly. In addition, the linear structure is required [11]. There are other kinds of filters, which consider the problem in the complex frequency domain, for instance, Butterworth filter. However, it can only pass a range of frequency signal and block other frequency signals. Other commonly used methods are introduced in [12]. Abnormal points can significantly affect the results, too, except noises. Usually, these abnormal points are eliminated using statistical methods, such as 3r [13]. In this study, a new framework using statistics and filter was established to reduce noises. Data augmentation is a significant technique for function fitting, model estimation, and so on. In deep learning, the commonly used methods for data augmentation are rotating and clipping figures to obtain enough data for all the parameters. Down sampling is another similar approach. 
The main idea of data augmentation is to obtain enough data to fit the model or function better, at the risk of over-fitting. Many techniques can mitigate over-fitting, so in general the more data, the better the performance. GANs are another way to address the problem directly, since they can generate data of the same distribution for training the parameters. In this research, given that the sound is periodic, the data was divided by period to augment the raw data.
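The period-based augmentation described above can be sketched as follows (an illustration assuming the period is known from the measured motor frequency; function and variable names are hypothetical):

```python
import numpy as np

def split_into_periods(signal, sample_rate, motor_freq):
    """Divide a recording into equal-length segments, one row per rotation period."""
    samples_per_period = int(round(sample_rate / motor_freq))
    n_periods = len(signal) // samples_per_period
    trimmed = signal[:n_periods * samples_per_period]  # drop the incomplete tail
    return trimmed.reshape(n_periods, samples_per_period)

# 2 s of data at 1 kHz with a motor turning at 10 Hz -> 20 periods of 100 samples
periods = split_into_periods(np.random.randn(2000), 1000, 10)
print(periods.shape)  # (20, 100)
```

Each row can then be treated as one augmented sample, which is what enables the per-period statistics used later in the noise-reduction framework.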

Research on the Relationship between Sound and Speed of a DC Motor


3 Methods

3.1 Data Acquisition

Assuming the sound comes only from the motor, the experiment was conducted in a quiet laboratory to reduce noise; the motor was taken from a model car to simplify the experiment and was kept in the same location to eliminate the Doppler effect. The Recorder app of an iPhone 6s was used to record the sound of the motor. All data used in this paper was collected by the small system (shown in Fig. 1(a)) composed of a hardware part and a software part.

Fig. 1. The data acquisition system and the elements

Basically, the hardware part contained four elements, i.e. the motor driver, the motor, the NI-myRIO-1900, and the oscilloscope, and the software part referred to LabVIEW. The elements were specified as follows. The motor driver was connected as illustrated in Fig. 1(b). The V-M pin and the GND pin on the top left were connected to the motor for the power supply. The 5V pin and the GND pin on the bottom left were connected to the NI-myRIO-1900 to make the motor driver work properly. The PWMB pin provided the timing sequence to control the motor forwarding, reversing, or stopping; the timing sequence was set by the duty ratio, which was controlled from LabVIEW. The motor is an LT25GA75-370-EN, rated at 9 V and 86 RPM with a gear ratio of 1:75, as presented in Fig. 1(c). At the end of the motor there is a black coded disc used to read the rotation frequency from the oscilloscope. Using the simple formula ω_motor = 2πf_motor, the angular speed of the motor can be easily calculated, although it is not highly accurate. The NI-myRIO-1900 is displayed in Fig. 1(d). It has two ports, A and B; in the experiment only port A was used. Pin 29 was connected to the PWMB pin of the motor driver, and Pins 11 and 13 were used to set the logical values for the motor. More details on using the NI-myRIO instrument can be found in [14, 15]. The oscilloscope is shown in Fig. 1(e). It was primarily used to display the duty ratio set by LabVIEW and the motor rotation frequency. The LabVIEW interface is presented in Fig. 1(f). All the elements of the hardware part were connected, and LabVIEW was used to control the system by adjusting the duty ratio. More details on using LabVIEW can be found in [16, 17].



X. Zhang et al.


Using this system, data was collected and labelled by duty ratio, from 0.3 to 1. For each duty-ratio group, the angular speed, and hence the period, was easily calculated from the motor rotation frequency read off the oscilloscope. Since the motion is periodic, the data was divided by period; the data was thereby augmented, and statistical approaches could then be used to eliminate abnormal noise.

3.3 Noise Reduction

Given this complex situation, the noise could not be reduced directly by the methods mentioned above, so an effective noise-reduction method was needed and a framework was established for the purpose. It was composed of two parts: one eliminated the abnormal sound, and the other reduced the remaining noise. Assuming the noised data satisfied the normal distribution shown in Eq. (1), the periodically augmented data at the same duty ratio were compared, and any clearly abnormal period, i.e. one more than three standard deviations above or below the mean, was deleted. The justification is Eq. (2): the probability of data lying within that range is 99.73%, so the probability of lying outside it is 0.27%, a very small probability event that is unlikely to happen. On this basis, the method used to eliminate abnormal data in this study is sound.

p(x) ~ N(x̄, σ²)    (1)

where p(x) is the probability of the noised data x, ~ means it follows the stated distribution, and N(x̄, σ²) is the normal distribution with mean x̄ and variance σ².

p(x̄ − 3σ < x < x̄ + 3σ) = 99.73%    (2)
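The two-part noise-reduction framework can be sketched as follows (a hedged illustration: the per-period statistic used for the 3σ test is assumed here to be the period's mean amplitude, which the paper does not specify):

```python
import numpy as np

def reduce_noise(periods):
    """periods: 2-D array, one row per period of augmented data.
    Part 1: drop rows whose summary statistic lies outside mean ± 3σ (Eq. (2)).
    Part 2: average the surviving rows to suppress white noise."""
    stats = periods.mean(axis=1)               # one summary value per period
    mu, sigma = stats.mean(), stats.std()
    keep = np.abs(stats - mu) <= 3 * sigma     # 99.73% band of Eq. (1)
    return periods[keep].mean(axis=0)          # expectation over remaining periods

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))
periods = clean + 0.1 * rng.standard_normal((50, 100))
periods[3] += 10.0                             # inject one abnormal period
estimate = reduce_noise(periods)               # abnormal row rejected, noise averaged out
```

The averaged result approximates the clean per-period waveform, which is what the paper treats as the estimated data.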


After the first part, the sounds were still mingled with noise. However, that noise could be treated as white noise and removed by taking the expectation over all periods. Therefore, the last part calculated the expectation, which was taken as the estimated data.

3.4 Virtual Dominant Frequency

Data processed by the framework was transformed with the FFT, and the results were expressed by two parameters: frequency and amplitude. Analysis of the frequency-amplitude distribution showed that the dominant frequency alone could not express the relationship or carry enough information, i.e. other component frequencies also affected the result and contained information. Some results were obtained for the dominant frequency alone, but none of them were proved to have a quantitative relationship; those results are not



presented in this paper. Therefore, a pseudo-frequency, called in this paper the virtual dominant frequency, was proposed to solve the problem. The amplitude was normalized by Eq. (3):

p_i^(j) = a_i^(j) / Σ_{i=1..N} a_i^(j)    (3)

where p_i^(j) is the normalized amplitude of the ith component of the jth group of data, a_i^(j) is the amplitude of the ith component of the jth group, and N is the total number of components in the jth group. Note that the normalized amplitude can also be regarded as a weight or probability. The virtual dominant frequency can then be calculated by Eq. (4):

f_v^(j) = Σ_{i=1..N} p_i^(j) f_i^(j)    (4)

where f_v^(j) is the virtual dominant frequency of the jth group and f_i^(j) is the frequency of the ith component of the jth group. The problem then becomes finding the relationship between the speed and the virtual dominant frequency; using the least squares method, a quantitative relationship was discovered. Note that Eq. (4) can be considered a special case of power, i.e. the virtual dominant frequency reflects the total power spectrum. Theoretically it should be proportional to speed, on basic physical grounds.
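Equations (3) and (4) translate directly into code; this sketch (with illustrative names and inputs) normalizes the FFT amplitudes into weights and takes the amplitude-weighted mean frequency:

```python
import numpy as np

def virtual_dominant_frequency(freqs, amps):
    """Eq. (3): weights p_i = a_i / sum(a); Eq. (4): f_v = sum(p_i * f_i)."""
    weights = amps / amps.sum()
    return float(np.dot(weights, freqs))

# Two equal-amplitude components at 40 Hz and 60 Hz -> f_v is their mean
print(virtual_dominant_frequency(np.array([40.0, 60.0]),
                                 np.array([1.0, 1.0])))  # 50.0
```

Unlike picking the single largest FFT peak, this weighted mean lets every component frequency contribute in proportion to its amplitude, which is the point the text makes about the dominant frequency alone carrying too little information.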

4 Results

The whole process is described in Fig. 2. The raw data is the input. The augmentation part, the abnormal-data elimination part, and the expectation part compose the novel framework; the function of each part was described in the methods section. The data then enters the virtual-dominant-frequency stage, composed of a weight-calculation part and a virtual-dominant-frequency calculation part. After this calculation the data was fitted with the least squares method and finally sent to the output part, which finds the relationship. The results of the framework process are displayed in Fig. 3. Different duty ratios clearly had different numbers of raw data points and amplitudes (the maximum of the axis coordinate). After augmentation, all data in each duty-ratio group was divided into periods; different groups had different periods and numbers of data points. Eliminating abnormal periods reduced the number of periods but kept the same number of points in each period. Finally, the expectation was calculated to remove the white noise. The data emerging from the whole framework was taken as the estimated data, accurate enough for further analysis.



Fig. 2. The flow chart of this research

Fig. 3. The process of the framework

It should be noted that the period count of the last group was 3 after eliminating the abnormal data. This was probably because that group contained little data and was not as accurate as the others. Using the data above, the FFT was computed; after calculating the weights mentioned above, the virtual dominant frequency was obtained. The final result is shown in Fig. 4. The blue asterisks are the speeds corresponding to the virtual dominant frequencies, and the red line is the fit obtained with the least squares method. It is not a perfect fit, but it presents the quantitative relationship and verifies the earlier theoretical analysis. Furthermore, the speed can be estimated from the sound alone by the approach above. The conclusion is that speed and sound have a quantitative relationship, and the speed of a motor can be estimated from its sound alone by the method described.
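The final least squares fit of speed against virtual dominant frequency can be sketched as follows (the data pairs below are made up for illustration; the paper's measured values are not reproduced here):

```python
import numpy as np

# Hypothetical (virtual dominant frequency, speed) pairs
f_v = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
speed = np.array([2.1, 3.0, 4.2, 5.0, 6.1])       # rad/s, illustrative values

slope, intercept = np.polyfit(f_v, speed, deg=1)  # degree-1 least squares fit
predicted = slope * f_v + intercept               # the fitted red line of Fig. 4
```

`np.polyfit` minimizes the squared residuals of a polynomial fit; with `deg=1` it yields exactly the straight-line relationship the paper reports between speed and virtual dominant frequency.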



Fig. 4. The relationship between the virtual frequency and the speed

5 Discussion

Although this paper found the relationship between the sound and speed of the motor, it did so through the virtual dominant frequency. A direct method is NARMAX: polynomial fitting can obtain the relationship directly, without calculating the virtual dominant frequency. Another direct method is to use GANs to generate data with the same distribution and CNNs to fit the complex nonlinearity, training the model on the real and generated data together. Finally, if a recorder with better noise handling were used, the result would improve. In the future, tests on different kinds of motors will be carried out. In addition, using GANs to generate data for training CNNs will be tested, and the results will be presented in a few months. Furthermore, polynomial fitting based on the NARMAX model will be used to find the relationship, with results also expected in a few months. Finally, this approach will be checked on the actual motor instead of the electric one.

References

1. Ozernoy, L.M.: Precision measuring of velocities via the relativistic Doppler effect. Mon. Not. R. Astron. Soc. 291(4), L63–L66 (2018)
2. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436 (2015)
3. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. The MIT Press (2016)
4. Billings, S.A., Chen, S., Korenberg, M.J.: Identification of MIMO non-linear systems using a forward-regression orthogonal estimator. Int. J. Control 49(6), 2157–2189 (1989)
5. Billings, S.A., Zhu, Q.M.: Nonlinear model validation using correlation tests. Int. J. Control 60(6), 1107–1120 (1994)



6. Goodfellow, I.J., et al.: Generative adversarial networks. Adv. Neural Inf. Process. Syst. 3, 2672–2680 (2014)
7. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2017)
8. Oppenheim, A.V., Willsky, A.S., Nawab, S.H.: Signals and Systems, 2nd edn. Pearson (1996)
9. Allen, J.: Short term spectral analysis, synthesis, and modification by discrete Fourier transform. IEEE Trans. Acoust. Speech Signal Process. 25(3), 235–238 (2016)
10. Heemink, A.W., Verlaan, M., Segers, A.J.: Variance reduced ensemble Kalman filtering. Mon. Weather Rev. 129(613), 1718–1728 (2017)
11. Welch, G., Bishop, G.: An Introduction to the Kalman Filter. University of North Carolina at Chapel Hill (2006)
12. Vyas, A., Yu, S., Paik, J.: Wavelets and wavelet transform. In: Multiscale Transforms with Application to Image Processing. Springer (2018)
13. Kaushik, A., Singh, R.: An Introduction to Probability and Statistics. KK Publication (2017)
14. National Instruments: NI-myRIO-1900 User Guide and Specifications
15. National Instruments: NI-myRIO Project Essentials Guide
16. National Instruments: LabVIEW Code Interface Reference Manual, Part No. 320539D-01 (1998)
17. National Instruments: LabVIEW User Manual (1998)

Review and Analysis of Processing Principles and Applications of Self-healing Composite Materials

Yohannes Regassa¹, Belete Sirabizuh¹, and Hirpa G. Lemu²

¹ Addis Ababa Science and Technology University, Addis Ababa, Ethiopia
² University of Stavanger, Stavanger, Norway
[email protected]

Abstract. Researchers and industries have shown great interest in self-healing composite technology over the last two decades. Several innovations have been reported in the area, with particular focus on composites that heal micro- to macro-level cracks caused by fatigue loading. Self-healing materials based on matrices from polymers to metal foams are the most practiced and researched. In polymer matrix composites in particular, fracture propagates from nano- or micro-cracks, which may arise from laminar failure by fiber rupture, delamination, matrix rupture, or fiber bridging and pullout, the common composite failure modes that must be healed before they reach a severe stage. Another challenge in applying polymer composites to critical aircraft structural components such as wings and fins is maintaining uniform characteristics throughout the structure and ensuring maintainability, which is another opening for self-healing technology. Among the available techniques, capsule-based and vascular microencapsulation are the most commonly practiced. In a capsule-based system, a micro-encapsulated healing agent and a solid chemical catalyst are embedded within the polymer matrix. Self-healing technology opens an opportunity to design and produce autonomously maintained components that extend the fatigue life of precious parts exposed to dynamic loads, such as robotic arms, airplanes, and remote control devices. In this paper, the current state of the art in the field of self-healing technologies is reviewed and analyzed, and the research gaps are identified and discussed to suggest further research directions.

Keywords: Self-healing composites · Fatigue failure · Micro crack · Polymer composites

1 Introduction

Amid today's dynamic technological advancement, progressive and sustainable performance development of engineering materials is one of the key indices for modernizing engineering design philosophy to cope with the multi- and interdisciplinary engineering problems of multibody structures. For instance, the links and joints of machinery and equipment in the construction, aviation, aerospace, robotics, and automation industries are highly exposed to fatigue conditions that are potential causes of structural tears, cracks, and fractures. Multibody engineering structures have been

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 159–167, 2019.


Y. Regassa et al.

under intensive research aimed at improving material performance through numerical analysis, re-design, experimental testing, and new production methodologies such as additive manufacturing. Moreover, advances in self-healing hold promise for extending the fatigue life of structures [1]. Nowadays, composite materials are replacing metals and their alloys in advanced engineering applications such as aerospace, automotive, marine, and building components, particularly where lightweight construction must be combined with excellent mechanical properties. Hence, damage-resistant, durable composite materials are a primary goal of designers during product development; delamination between laminae, matrix rupture, and fiber bridging are failure modes routinely faced as challenges in engineering applications of composite products. Currently, maintaining parts made of composite materials is a challenge for airframe structural parts, including aircraft wings and fins, which raises mechanical efficiency issues. As a result, advanced methodologies and materials are becoming the technical tools to address such problems; self-healing polymer composite technology in particular appears to be a promising strategy for designing structures that repair crack and fracture initiation and/or propagation on their own. This technology simplifies the maintenance of composite structures with the aid of self-healing polymer materials, which can be activated by micro-crack initiation; some are designed to heal without any external stimulus, the crack itself rupturing the embedded capsules or vascular-lined agent to start the healing mechanism [2].
Smart materials, including self-healing and shape-memory materials, attract the interest of researchers seeking to investigate and advance healing mechanisms. Advances in their preparation have produced load-bearing self-healed structural systems with the capacity to heal cracks propagated from the micro to the macro scale. Today, developed healing materials range from polymers to metal foams in different states, from polymer matrices to ceramic matrices, and many more studies are in intensive progress. The embedded self-healing agent can serve either as a load carrier or as a healing agent in the assembled structural system. Upon crack formation, the self-healing agent responds in different modes, categorized as plastic, elastic, damage, and healing deformation, the common mechanical responses demonstrated by experimental investigation of self-healed specimens [3]. Self-healing polymers offer an extended life for polymer-based materials by automatically healing any crack that forms, whenever and wherever it lies in the stream of healing agent. Figure 1 [4] demonstrates the process of self-healing by microencapsulation with an epoxy matrix as the stimulant that activates the healing agent. This article reviews the state of the art and analyzes the gaps in recent research on the application of self-healing composite materials. It describes the processes involved in producing self-healing composites and briefly presents some key applications in mechanical structures and systems. After a rigorous review of related work, the key gaps and challenges in self-healing composite research are highlighted.

Review and Analysis of Processing Principles and Applications


Fig. 1. Micro encapsulation healing process and methods

2 Healing Process of Self-healing Polymer Composites

Extrinsic and intrinsic healing are the most common approaches to crack healing. Extrinsic healing relies on healing agents embedded in the matrix as an isolated phase: the liquid healing agent is carried in encapsulated containers or hollow fibers, together with a catalyst. When damage runs along the embedded capsules or networked hollow fibers, they rupture and release the healing agent; the catalyst dissolves in the matrix, and the catalyst-activated reaction takes place in the crack plane to heal the growing cracks. This arrests the cracks and prevents the crack growth that could lead to fracture of the structure. Intrinsic self-healing methods apply to materials whose molecular structures and chemical or physical bonds are designed to be smart, needing no external stimulus to activate the healing process [5]. Low-energy interaction can be obtained through reversible supramolecular reactions, one of the routes by which the embedded healing agent and catalyst react after being released by crack initiation; if this reaction is designed in a standard way, it can influence the overall properties of the healed material. Hydrogen bonding and metal coordination are possible avenues for obtaining such low-energy interactions. Biological wound-healing systems are the basis for both autonomic (intrinsic, without stimuli) and non-autonomic (extrinsic, with stimuli) healing [6].
Microcapsules, vascular networks, dissolved thermoplastics, reversible physical interactions, and reversible supramolecular interactions are promising methods among the researched and practiced self-healing mechanisms used to impart self-healing functionality to engineered structural systems [7]. A newly developed poly(dimethylsiloxane) (PDMS) elastomer type of self-healing material [8] encapsulates a resin linked with a separate cross-linker; the elastomer is functionalized by the microencapsulated healing material, which heals initiating micro-cracks, after which the healing agent continues the crack-repair process. Healing methods using microspheres can restore up to 100% of the virgin tear strength, as reported by Mauldin and Kessler [9]. The vascular healing approach, which uses a network of hollow fibres, is an extrinsic method in which the healing chemicals are embedded in a network of



capillaries; healing is triggered by crack initiation, the healing agent flows into the damaged area through capillary action and polymerizes, and healing is complete. This self-healing mechanism is preferred over the capsule-based method because, rather than serving only as a self-healing container, the vascular network can also act as reinforcement [10]. A thermoplastic self-healing polymer is selected for its good compatibility and is dissolved in the polymer matrix, yielding a homogeneous system even after damage. The dissolved thermoplastic is stimulated by the change in temperature and pressure caused by a crack, so the thermoplastic healing agent moves in and fills the cracks; this method gives promising healing efficiency [11]. Reversible physical interaction healing can be achieved with ionomer polymers containing ionic species, such as metal salts, that aggregate into reversible clusters, changing the mobility of the polymeric network and making them applicable to self-healing. For instance, the self-healing response of copolymers such as poly(ethylene-co-methacrylic acid) was examined [12] in ballistic testing, with crack initiation induced by compact loading; the conclusion drawn was that self-healing is triggered by the impact force of the projectile pressure on the test sample, which indicates a triggering route for the healing agent. Reversible supramolecular interactions are yet another healing method. Here, healing takes place through hydrogen-bonded urea groups connected by a siloxane-based backbone, with imine linkages used as reversible covalent bonds. Supramolecular interaction is a promising route to designing self-healing composites that need no stimulants.
Crack-healing efficiency for structural materials is judged by the ability of the cracked material to recover the mechanical integrity of the virgin (undamaged) material and regain its original fracture toughness, fracture energy, elastic stiffness, and ultimate strength. The self-healing process can proceed without external intervention, or it may need additional energy, such as heat and pressure, to heal the damaged plane. In polymer composites, two distinct self-healing approaches can be employed. In the first, the crack-mending process is initiated by an external stimulus: mechanical (crack size beyond a limit), thermal (heat generated by excessive friction), photo-induced (light entering the crack opening), or chemically induced. Several studies claim that efficient crack healing can be achieved through molecular diffusion and thermally reversible solid-state reactions. In the second approach, damage in the form of a crack triggers healing agents embedded in the material, so that fracture progress is arrested. Different techniques are used to measure the healing efficiency of today's healing agents; as summarized in Table 1, however, fracture toughness recovery is more commonly chosen in the field than peak load recovery [13].
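Healing efficiency as discussed above is commonly reported as the ratio of a recovered property to its virgin value; a minimal sketch (the numbers are illustrative, not measured data from any cited study):

```python
def healing_efficiency(healed_value, virgin_value):
    """Fraction of a mechanical property (e.g. fracture toughness) recovered
    by the healed specimen relative to the undamaged (virgin) specimen."""
    return healed_value / virgin_value

# Virgin fracture toughness 1.20 MPa*sqrt(m); healed specimen recovers 0.90
print(round(100 * healing_efficiency(0.90, 1.20), 1))  # 75.0 (% recovery)
```

The same ratio applies whichever property is tracked (peak fracture load, strain energy, or fracture toughness), which is why the measuring method must be stated alongside the efficiency, as Table 1 does.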



Table 1. Self-healing polymer systems under quasi-static fracture

| Material | Self-healing approach | Healing efficiency measuring method | Mode of loading | Healing eff. % |
| Epoxy/epoxy sphere phase/glass FRC | Capsule based | Peak fracture load | Mode I 3-point bend | 100 |
| Mendomer 401/carbon FRC | Intrinsic | Strain energy | Mode I 3-point bend | 94 |
| Epoxy-resin matrix | Vascular based | Fracture toughness | Mode I 4-point bend | 89–100 |
| 2MEP4F polymer | Intrinsic | Fracture toughness | Mode I CT | 83 |
| Epoxy | Intrinsic | Peak fracture load | Mode I CT | 68 |
| Epoxy (latent CuBr2(2-MeIm)4)/E-glass FRC | Capsule based | Fracture toughness | Mode I DCB | 68–79 |
| Epoxy/PCL phase | Intrinsic | Fracture toughness | Mode I SENB | >100 |
| Polymers 401 and 400 based | Intrinsic | Fracture toughness | Mode I SENB | 60 |
| Epoxy vinyl ester | Capsule based | Fracture toughness | Mode I TDCB | 30 |

Notations: CT = compact tension; DCB = double-cantilever beam; FRC = fiber-reinforced composite; PCL = poly-caprolactone; SENB = single-edge notched beam; SMA = shape memory alloy; TDCB = tapered double-cantilever beam.

3 Self-healing Polymer Composite Applications

Nowadays, polymer composites are among the most widely used materials across sectors, with applications ranging from coatings to the lamination of microelectronic devices; they are also used structurally in airplane wings and airframes, where they are highly susceptible to damage in the form of delamination and rupture-type cracks. These cracks often form deep within the structure, where detection is difficult and repair is nearly impossible. Under such conditions of undetectable failure, adding self-healing functionality to polymers provides a novel mitigation for this severe problem [9]. Self-healing polymer composite research and application span a very broad range of sectors; some of the common application areas are highlighted below. Medical sector: Artificial body replacements are one contribution of self-healing materials in medicine, restoring basic function. Biologically compatible self-healing composites are under progressive research to heal failed parts; artificial bone replacements, synthetic tooth fillings, and materials for replacing cracked or broken teeth are already widely used as self-healing composites that help patients sustain a fully functional lifetime [7]. Polymer-based self-healing materials have been recognized in the biomedical sector due to their



compatibility with the field's basic requirements, such as flexibility, light weight, easy processing, high strength-to-weight ratio, recyclability, availability, biodegradability, and non-toxicity, among other properties meeting the intended specifications, which make them prominent materials and processes in the medical sector. As a future opportunity, the mechanical and physical properties of such biomedical self-healing polymers can be improved through the inclusion of nanoparticles and the method of preparation [14]. Aerospace sector: Self-healing is one of the most notable fields of application for this sector because of its obligatory, uncompromising aviation safety requirements. Metal-matrix-based composites are a growing and promising branch of the self-healing concept for high-tech aerospace part repair [15]. Usually, self-healing materials are intended to extend the flight time of crack-suspected aircraft parts by repairing micro-damage as it occurs in flight. Repairing dynamic damage and the ability to maintain an impact-loaded part are unique advantages offered by self-healing composites [16]. Self-healing technology could also enable ceramic blades in place of nickel superalloy turbine blades; ceramic blades would allow higher operating temperatures and thus improve turbine efficiency. Many studies are under way in this area, spanning nano-composite to multi-composite self-healing approaches, so more promising results are yet to be realized [2]. Automotive sector: In December 2005, the Nissan automotive company announced a self-healing treatment for scratched car bodies named "scratch guard coating" [17].
This newly developed self-healing coating contains an elastic resin that protects the inner layer of the painted car surface from cracks in the surface coating and helps the paint regain its original state. Since scratch guard coating is a hydrophobic paint, its self-healing has been described as time- and condition-dependent: it heals surface scratches over anywhere from a single day to weeks, depending on the depth of the scratch and the ambient temperature, before the surface regains its original state. Research on self-healing coatings for different structures, material types, and applications is in progress, as reported by researchers and industries [18–21]. Renewable energy sector: Another area where self-healing materials can be used is renewable energy. It is well known that blades are the major component of a wind energy generation unit, supplying the input power. While turning through high winds, the blades are exposed to continuous as well as sudden loads, which can sometimes exceed the design wind loads. As a result, fatigue cracks develop in the blade structures and lead to failure. Such conditions can be monitored and maintained using a self-healing strategy, healing the fatigue-induced micro-cracks and implementing autonomous maintenance methods that provide a foolproof safety margin against catastrophic failure [22]. On the other hand, it is reported that the tensile strength of tested wind turbine blades was reduced by 25% and the 3-point bending flexural strength by 9%. The self-healing method was implemented on wind turbine blades made of E-glass fiber reinforced polymer matrix composite with borosilicate micro-tubes containing a healing agent.
The supply of the healing agent to the cracked area was claimed to be successful, and the approach is hoped to be a good healing methodology.



In this regard, a new method is demonstrated [23] for supplying the healing monomer uniformly throughout fiber-reinforced polymer composite wind turbine blades that are exposed to fatigue load and cyclic stress. Self-healing polymer materials applied with this method have a unique ability to heal inherently generated crack progression and thereby extend the lifetime of wind turbine blades, an arena in which research is progressing rapidly.

4 Gaps and Challenges of Self-healing Polymer Composites

Even though research output on self-healing polymer composites is at a very promising stage, challenges and gaps still hinder the practicality of the technology and its materials development. Some of the major research gaps and challenges are listed below:

1. Designing an integrated, inherent crack-healing material and technology through a novel self-healing mechanism remains challenging.
2. Container size is limited, potentially to the nano-scale range.
3. Materials that are simultaneously tough and self-healing cannot be designed and implemented by a single method with current technologies; this combination has not been adequately investigated.
4. Formulating multilayer coatings that provide self-healing functionality while maintaining extreme tolerances on surface finish is another existing challenge.
5. Crack-healing efficiency testing methods for intrinsic systems need to be diversified and standardized, as has been done for extrinsic systems, which commonly use the tapered double-cantilever beam to measure the healing efficiency of developed materials.
6. Although many promising and disruptive advances in self-healing technology have been shown, modeling and analysis of crack-healing kinetics, repeatability, reliability, and accuracy remain practical limitations for self-healing technologists and researchers.
7. Extensive work has addressed the chemical and physical aspects of self-healing polymer composites, whereas studies of the morphology of composite structures that absorb static or dynamic external loads as a life-extension methodology are limited.

5 Conclusion

Self-healing is an area that can be researched and exploited in a disruptive manner to improve current practice in medicine and health care, the automotive and aviation sectors, and the defense and construction sectors; the diversification of applications and the design and development of suitable materials are still at an infant stage. After a rigorous evaluation and review of the existing related research, the following concluding remarks are drawn.


Y. Regassa et al.

Micro-cracking and invisible fracture propagation are considered the initiators of most fracture failures of components and structures. They have a significant impact on the cost of maintenance and repair, where self-healing technology can contribute to life extension and fail-safe measures. Both intrinsic and extrinsic self-healing methods need further research to become cost effective. New and novel methods, such as nano- to micro-scale crack detectors and inherent self-healing mechanisms, especially bio-based and biocompatible healing mechanisms, are highly needed but not yet well researched or practiced. Much of today's research in self-healing technology concentrates on the development, processing and production of polymer-based self-healing composites, and the diversification of healing material types at large scale remains limited. Since self-healing composites are important in the aviation industry, among others, for locating fatigue-induced micro cracks and indicating suspect impact damage, crack and failure forecasting faces substantial challenges under off-design physical conditions such as humidity, pressure, temperature and vacuum, which remain practical gaps in the full application of self-healing technology. Currently available self-healing materials and technologies are not affordable; their cost, reliability, repeatability, compatibility, added structural weight, and the modeling and design for real operating conditions are the primary challenges that need further detailed technical research. These challenges can potentially be addressed through novel bio-based or bio-mimicked approaches that bring self-healing materials to market within a short time for different commercial applications.

References
1. Hia, I.L., Vahedi, V., Pasbakhsh, P.: Self-healing polymer composites: prospects, challenges, and applications. Polym. Rev. 56, 225–261 (2016)
2. Scheiner, M., Dickens, T.J., Okoli, O.: Progress towards self-healing polymers for composite structural applications. Polymer 83, 260–282 (2016)
3. Bekas, D.G., Tsirka, K., Baltzis, D., Paipetis, A.S.: Self-healing materials: a review of advances in materials, evaluation, characterization and monitoring techniques. Compos. B Eng. 87, 92–119 (2016)
4. Alaneme, K.K., Bodunrin, M.O.: Self-healing using metallic material systems – a review. Appl. Mater. Today 6, 9–15 (2017)
5. Garcia, S.J., Fischer, H.R.: Self-healing polymer systems: properties, synthesis and applications. In: Smart Polymers and Their Applications, pp. 271–298. Woodhead Publishing (2014)
6. White, S.R., Sottos, N.R., Geubelle, P.H., Moore, J.S., Kessler, M.R., Sriram, S.R.: Autonomic healing of polymer composites. Nature 415(6873), 817 (2002)
7. Ratna, D., Karger-Kocsis, J.: Recent advances in shape memory polymers and composites: a review. J. Mater. Sci. 43, 254 (2008)
8. Keller, M.W., White, S.R., Sottos, N.R.: A self-healing poly(dimethyl siloxane) elastomer. Adv. Func. Mater. 17(14), 2399–2404 (2007)
9. Mauldin, T.C., Kessler, M.R.: Self-healing polymers and composites. Int. Mater. Rev. 55, 317–346 (2010)

Review and Analysis of Processing Principles and Applications


10. Cuvellier, A., Torre-Muruzabal, A., et al.: Selection of healing agents for a vascular self-healing application. Polym. Test. 62, 302–310 (2017)
11. Crall, M.D., Keller, M.W.: Controlled placement of microcapsules in polymeric materials. In: Mechanics of Composite and Multi-functional Materials, vol. 7 (2017). 1007/978-3-319-41766-0_21
12. Aïssa, B., Therriault, D., Haddad, E., Jamroz, W.: Self-healing materials systems: overview of major approaches and recent developed technologies. Adv. Mater. Sci. Eng. 2012 (2012)
13. Blaiszik, B.J., Kramer, S.L.B., Olugebefola, S.C.: Self-healing polymers and composites. Annu. Rev. Mater. Res. 40(1), 179–211 (2010)
14. Benight, S.J., Wang, C., Tok, J.B.H., Bao, Z.: Stretchable and self-healing polymers and devices for electronic skin. Prog. Polym. Sci. 38, 1961–1977 (2013)
15. Das, R., Melchior, C., Karumbaiah, K.M.: Self-healing composites for aerospace applications. In: Advanced Composite Materials for Aerospace Engineering, Processing, Properties and Applications, pp. 333–364 (2016)
16. Awaja, F., Zhang, S.N., Tripathi, M., Nikiforov, A., Pugno, N.: Cracks, microcracks and fracture in polymer structures: formation, detection, autonomic repair. Prog. Mater. Sci. 83, 536–573 (2016)
17. Nissan Motor Co., Ltd.: Nissan develops world's first clear paint that repairs scratches on car surfaces (2005)
18. Yang, Z.: The self-healing composite anticorrosion coating. Phys. Procedia 18, 216–221 (2011)
19. Yabuki, A., Urushihara, W., Kinugasa, J., Sugano, K.: Self-healing properties of TiO2 particle-polymer composite coatings for protection of aluminum alloys against corrosion in seawater. Mater. Corros. 62(10), 907–912 (2011)
20. Lafont, U., van Zeijl, H., van der Zwaag, S.: Increasing the reliability of solid state lighting systems via self-healing approaches: a review. Microelectron. Reliab. 52(1), 71–89 (2012)
21. Hamdy, A.S., Butt, D.P.: Novel smart stannate based coatings of self-healing functionality for AZ91D magnesium alloy. Electrochim. Acta 97, 296–303 (2013)
22. Zainuddin, S., Arefin, T., et al.: Recovery and improvement in low-velocity impact properties of e-glass/epoxy composites through novel self-healing technique. Compos. Struct. 108, 277–286 (2014)
23. Matt, A.K.K., Strong, S., ElGammal, T., Amano, R.S.: Development of novel self-healing polymer composites for use in wind turbine blades. J. Energy Res. Technol. 137(5) (2015)

Scattered Parts for Robot Bin-Picking Based on the Universal V-REP Platform

Lin Zhang and Xu Zhang
Department of Mechanical Engineering and Automation, Shanghai University, Shanghai, China
[email protected]

Abstract. Robot bin-picking is a very important industrial application, and systems for picking scattered parts have recently become a new research direction; however, the dependence on a single brand of industrial robot is an obstacle to such systems. In this paper, the universal V-REP platform is adopted to communicate with the robot, and a practical solution for robot bin-picking based on this platform is proposed. The key techniques of robot bin-picking are discussed, such as three-dimensional laser scanning, point cloud registration, communication with the real robot, communication with V-REP, and V-REP simulation. The method solves the incompatibility between multi-brand robots in bin-picking systems and realizes a system for picking scattered parts based on the universal V-REP platform, providing a more efficient solution for future robot bin-picking.

Keywords: Three-dimensional laser scanning · Point cloud registration · Communication of real robot · Communication of V-REP · V-REP simulation

1 Introduction

With the continuous advance of robot vision guidance technology, bin-picking of scattered parts is gradually being applied in industrial production. A robot bin-picking system is generally developed on the simulation platform of the robot's own brand [1, 2]. The simulation platform displays the simulation scene and performs the path planning for picking scattered parts, so it plays a foundational role in the robotic picking system. If the simulation software platform is replaced, the entire system must be rebuilt. For example, the ABB simulation platform is RobotStudio [3] and the FANUC simulation platform is Roboguide [4]; both can display the simulation scene and plan picking paths, but each supports only robots of its own brand, which makes changing the robot a great obstacle. We therefore need a common simulation platform to solve the compatibility problem across brands. The robot simulator V-REP, with its integrated development environment, is based on a distributed control architecture: each object/model can be individually controlled via an embedded script, a plugin, a ROS node, a remote API client, or a custom solution. This makes V-REP very versatile and ideal for multi-robot applications. This paper introduces a method that uses the V-REP simulation platform to build a bin-picking system for scattered parts, which not only allows robots of different brands to be
© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 168–175, 2019.

Scattered Parts for Robot Bin-picking Based on the Universal


integrated into one robotic bin-picking system, but also loads the point cloud data from three-dimensional laser scanning into V-REP for display in the simulation scene. After verification, the method meets the requirements of practical application in terms of reliability and efficiency.

2 Building the Simulation Scene

A simulation scene for scattered-part bin-picking includes a simulation robot, a gripper, the robot base mount, the box of parts, and so on. First of all, we need to build a simulation robot: import a standard robot model, simplify it, add the shaft joints, build the mechanism tree, set the rotation range of each joint, set up a pair of inverse-kinematics elements, install the gripper, establish the robot reference coordinate system, and add a control script. Once built, the simulated robot is able to move, and its movement is controlled by the control script. The constructed simulation robot is shown in Fig. 1a.




Fig. 1. Simulation robot (a), installed gripper (b) and scene of simulation system (c)

It is also necessary to construct a simulation gripper, which requires importing a standard gripper model and simplifying it. If the gripper's motion is not required, no mechanism tree needs to be set up. The TCP coordinate frame of the gripper needs to be set on the gripper; it can be established at the closing position of the gripping jaws. If a sucker is used, the TCP frame should be set at the end of the suction cup, and the gripper must be correctly installed on the robot's end flange. The installed gripper is shown in Fig. 1b. Finally, the models of the robot base mount and the box of parts must be imported. Each model is first simplified and then placed at an accurate position and orientation. The physical robot can be used to calibrate this pose relative to the robot reference coordinate system [5, 6], and the model is placed in the simulation scene according to the calibrated pose. The constructed simulation scene is shown in Fig. 1c.


L. Zhang and X. Zhang

3 Motion Control of Robots

Motion control is divided into control of the physical robot and control of the simulation robot. For the physical robot, communication must be established between the host computer and the robot controller. For an ABB robot, for example, the PC SDK and PC Interface allow a C++ client in Visual Studio to access the RAPID program on the robot [3], modify the parameters of the RAPID program, and execute specified functions within it. For the simulation robot, the V-REP remote API can establish communication between the V-REP simulation environment and a C++ client [7]. Through the remote API, the C++ client can access the Lua scripts on the V-REP side; after the connection is set up, the client can read and write Lua script parameters and execute specified functions in the Lua script, thereby controlling the V-REP simulation from C++. To support motion control of multi-brand robots, the system needs a unified programming interface covering both the physical robots and the V-REP simulation robots. First, a universal robot controller base class is defined in C++; it declares the basic functions of robot bin-picking but does not itself control any robot. Derived classes are then defined for controlling the physical robot and the V-REP simulation robot; each derived class inherits from the universal controller base class and has the ability to control its robot. In addition, the robot needs a teaching function, which is mainly used for setting the intermediate points and placement poses in path planning. Teaching of the physical robot can be done with its teach pendant; the teaching function of the V-REP simulation robot is written in Lua on the V-REP side.
Teaching covers both motion control of the robot joints and motion control of the robot TCP. The custom UI for teaching the robot is shown in Fig. 2a; the UI is designed with the custom UI module in V-REP and controlled by a threaded child script.



Fig. 2. Custom UI for teaching robot (a) and V-REP part template (b)



4 System Framework

The framework of the scattered-parts bin-picking system based on the universal V-REP platform is summarized in Fig. 3.

Fig. 3. System framework

The framework is divided into the following sections.
Configuring the file paths. The paths that need to be configured include the V-REP part template, the part template for point cloud registration, the hand-eye calibration parameters, the point cloud data, and the poses of the registered parts. The V-REP part template is created in V-REP; its coordinate system must coincide with that of the part template used for point cloud registration. The V-REP part template carries the pick-up point objects and is saved in ttm format for import into the V-REP simulation scene; it is shown in Fig. 2b. The part template for point cloud registration is in STL or OBJ format. The hand-eye calibration parameters [5, 6] form the calibrated pose transformation matrix between the reference frame of the laser scanning system and the robot reference coordinate system. The point cloud data are the position data transformed from the laser scanning system's reference frame into the robot reference coordinate system. The poses of the registered parts are given relative to the robot reference coordinate system.



Selecting a robot to log in. The system searches for the robots that can be connected in the current environment; both V-REP simulation robots and physical robots are available.
Teaching the TCP pose and setting the bin-picking configuration. After connecting to the robot, the V-REP simulation robot is taught through the custom UI, while the physical robot is taught with its own teach pendant; the intermediate point and the placement pose are taught and their poses saved. The bin-picking configuration also needs to be set, including the pre-pickup offsets, pick-up offsets, and pre-positioning offsets.
Laser scanning for reconstruction. Laser scanning reconstruction is a three-dimensional reconstruction technique based on line-laser scanning and stereo vision [8, 9]. The laser scanning system first computes the depth information of the scene and then reconstructs the three-dimensional point cloud from it. For the efficiency of the subsequent point cloud registration, a function that removes the background from the scene is added; the background is removed on the basis of the background depth information [8].
Showing the point cloud in the V-REP scene. The point cloud reconstructed by the laser scanning system is expressed in the camera coordinate system. To place it in the robot reference coordinate system in the simulation scene, the pose relationship between the robot reference coordinate system and the camera reference coordinate system must be known. If the laser scanning system is mounted at a fixed position, the pose transformation between the camera coordinate system and the robot reference coordinate system needs to be calibrated. This transformation can be represented by a 4 × 4 rigid transformation matrix.
If the homogeneous coordinates of a point in the robot reference coordinate system and in the camera reference coordinate system are represented as $P = (x, y, z, 1)^T$ and $P_C = (X_C, Y_C, Z_C, 1)^T$ respectively, the following relationship holds, shown in formula (1):

$$\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} R & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{pmatrix} = M \begin{pmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{pmatrix} \qquad (1)$$


Here, R is a 3 × 3 orthogonal rotation matrix, T is a three-dimensional translation vector, and M is the resulting 4 × 4 matrix.
Point cloud registration. The point cloud registration algorithm [10, 11] matches the part template for point cloud registration against the reconstructed point cloud, accurately locating the model in the camera reference coordinate system of the laser scanning system. The registration result is the pose of the part's reference coordinate system relative to the camera reference coordinate system, together with a matching score. Adjusting the matching score threshold can significantly improve the accuracy of registration and reduce mismatches [10, 11].
Loading the model into V-REP. Using the hand-eye calibration matrix M, the part template of the point cloud registration is transformed into the robot reference



coordinate system, and the V-REP part template is loaded into the V-REP scene according to the pose of the part's reference coordinate system and the matching score.
Judging graspability. Before planning the path to grab a part, it must first be determined whether the part is graspable. The simGenerateIkPath() function of the regular API provided in V-REP can be used for this; it computes an inverse-kinematics path from the robot's current state to the target state [12, 13]. Collision check objects can also be passed to simGenerateIkPath() so that interference is considered during path computation. If interference cannot be avoided, simGenerateIkPath() returns nil and the part is marked as ungraspable. For a robotic picking system, the interference check [14] is essential. The collision objects are divided into two groups, a robot group and an environment group: the robot group includes the robot body and the gripper, and the environment group contains all other objects. The interference check prevents collisions between the robot and the environment.
Bin-picking. If the part is graspable, the robot's path to pick it up is planned. Once the intermediate point, the placement pose, the pre-pickup offsets, the pick-up offsets, and the pre-positioning offsets have all been passed to the bin-picking function, the robot plans the path and performs the pick. The simulation robot controller is connected first to watch the simulated motion; if no errors occur, the system switches to the physical robot so that it can grab the parts.

5 Experimental Verification

An experimental platform was built according to the system framework above to verify the reliability and efficiency of the system; it is shown in Fig. 5a. The host computer is the NUC kit of the Laser Vision laser scanning system running Windows 10, the picked parts are gallipots, and the robot is an ABB IRB1200-5/0.9. The process of point cloud registration is shown in Fig. 4a and b.




Fig. 4. Showing the point cloud (a), showing the V-REP parts (b) and judgment of graspability (c)



As seen in Fig. 4a and b, the point cloud data have been transformed into the robot reference coordinate system, and the V-REP part poses loaded after point cloud registration are accurate. From the start of scanning the scene to the loading of the V-REP part template, the average time over 10 experiments was 6123 ms, which shows that the system is highly efficient [15]. After the V-REP parts are loaded, the system judges whether each part can be grasped and marks the parts in different colors according to their graspability: green for graspable parts, red for ungraspable parts. The result is shown in Fig. 4c. The V-REP simulation robot and the physical robot performing bin-picking are shown in Fig. 5b and c.




Fig. 5. Experimental platform (a), V-REP robot bin-picking (b) and Practical robot bin-picking (c)

To further speed up the system's picking, the laser scan can be triggered while the robot, holding a part, moves to the first intermediate point. In one-key mode, no erroneous picks occurred over 10 experiments, and the average time to pick one part was 6345 ms.

6 Conclusion

To meet the need for a more universal simulation software platform for robotic bin-picking, this paper proposes a method for picking scattered parts based on the universal V-REP platform. The reliability and working efficiency of the system were verified through actual experiments, and both meet the requirements of practical applications.

Acknowledgment. This research was partially supported by the key research project of the Ministry of Science and Technology (Grant No. 2017YFB1301503) and the National Natural Science Foundation of China (Grant No. 51575332).



References
1. Liu, M.Y., Tuzel, O., Veeraraghavan, A., et al.: Fast object localization and pose estimation in heavy clutter for robotic bin picking. Int. J. Robot. Res. 31(8), 951–973 (2012)
2. Chang, W.C., Wu, C.H.: Eye-in-hand vision-based robotic bin-picking with active laser projection. Int. J. Adv. Manuf. Technol. 85(9–12), 2873–2885 (2016)
3. Cohal, V.: Some aspects concerning geometric forms automatically find images and ordering them using robotstudio simulation. Adv. Mater. Res. 1036, 760–763 (2014)
4. Connolly, C.: A new integrated robot vision system from FANUC Robotics. In: Information Technology Law, pp. 103–106. Routledge (2007)
5. Yu, C., Xi, J.: Simultaneous and on-line calibration of a robot-based inspecting system. Robot. Comput. Integr. Manufact. 49, 349–360 (2018)
6. Sui Bo, D., Dong, C.Q., Xiangyu, H., Li, W., Zhang Hua, A.: Hand-eye vision measuring system for articulate robots. Tsinghua Sci. Technol. 03, 356–362 (2004)
7. Farias, G., Fabregas, E., Peralta, E., et al.: A Khepera IV library for robotic control education using V-REP. IFAC World Congress (2017)
8. Zhuang, L., Zhang, X., Zhou, W.: A coarse-to-fine matching method in the line laser scanning system. In: International Workshop of Advanced Manufacturing and Automation, pp. 19–33. Springer, Singapore (2017)
9. Yu, Q., Yan, W., Fu, Z., Zhao, Y.: Service robot localization based on global vision and stereo vision. J. Donghua Univ. (English Edition) 29(03), 197–202 (2012)
10. Chen, J., Wu, X., Wang, M.Y., Li, X.: 3D shape modeling using a self-developed hand-held 3D laser scanner and an efficient HT-ICP point cloud registration algorithm. Opt. Laser Technol. (2013)
11. Nguyen, V.T., Tran, T.T., Cao, V.T., et al.: 3D point cloud registration based on the vector field representation. In: Pattern Recognition, pp. 491–495. IEEE (2014)
12. Wei, X.: The inverse kinematics analysis of Six-DOF robots. Sci. Technol. Vis. 9, 242–243 (2016)
13. Vasilyev, I.A., Lyashin, A.M.: Analytical solution to inverse kinematic problem for 6-DOF robot-manipulator. Autom. Remote Control 71(10), 2195–2199 (2010)
14. Wang, L., Li, Y., Wang, W., et al.: Interference checking approach with tolerance based on assembly dimension chain. CADDM 22(1), 84–88 (2012)
15. Consultant, R.B.: Random bin picking: has its time finally come? Assembly Autom. 34(3), 217–221 (2014)

Brain Network Analysis Based on Resting State Functional Magnetic Resonance Image

Xin Pan¹, Zhongyi Jiang¹, Suhong Wang², and Ling Zou¹


¹ School of Information Science and Engineering, Changzhou University, Changzhou, China
[email protected]
² The First People's Hospital of Changzhou, Changzhou, China

Abstract. In studies of brain cognition based on complex network models, the inner relationships of the brain network can be understood more deeply. This paper focuses on the construction of the brain functional network, applying the Pearson correlation method and the adaptive sparse representation method to resting-state functional magnetic resonance image data. Based on graph theory, the network attributes obtained with the two methods are analyzed and compared. The results show that both methods can effectively construct the brain network; the brain network model constructed by adaptive sparse representation is better and its "small world" property is more obvious, which is of great significance for medical diagnosis.

Keywords: Network analysis · Pearson correlation method · Adaptive sparse representation method

1 Introduction

In the early period of brain science research, it was generally believed that each brain area works independently. With the improvement of research methods, researchers found that the corresponding regions of the brain work together while processing a task [1]. A brain network is a network of a large number of different nodes, where each node, with its own tasks and functions, is a brain region that constantly shares information with the others [2]. Functional magnetic resonance imaging (fMRI) measures the changes of blood oxygen content in different brain regions during cognitive activity through the blood-oxygenation-level-dependent (BOLD) signal [3, 4], so as to obtain images of neural activity at different intensities [5]. Brain network analysis mainly observes the degree of connectivity between pairs of nodes [6]. There are many ways of defining a node; most commonly, the whole brain is divided by the automated anatomical labeling (AAL) template, and each region is a node [7]. There are likewise many ways of defining the connection between two nodes when building a brain functional network model. The Pearson correlation method is generally used, but it may lead to false connections [8]. Recently, Li [9] proposed a new method for calculating the correlation coefficients, adaptive sparse representation, which can avoid the occurrence of false connections.
© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 176–180, 2019.

Brain Network Analysis


This paper uses the Pearson correlation method and adaptive sparse representation to calculate the correlation coefficients of the brain functional networks of normal controls, and analyzes the network attributes of the constructed brain functional networks based on graph theory [10].

2 Method

All fMRI data in this paper are from the First People's Hospital of Changzhou (the Third Affiliated Hospital of Soochow University). A total of 33 normal subjects participated in the experiment; 3 who showed relatively large head movement during scanning were excluded. All participants were right-handed, with no statistical difference in sex or age. The study was approved by the Ethics Committee of the Third Affiliated Hospital of Soochow University; all subjects agreed to participate, and informed consent was signed by the parents. Resting-state data of all subjects were collected with a 1.5 T MRI scanner. Subjects were required to keep their heads still, close their eyes, stay relaxed, and avoid deliberate thinking as much as possible. The parameters were as follows: repetition time (TR) = 2000 ms, echo time (TE) = 40 ms, field of view (FOV) = 24 cm, flip angle (FA) = 90°, matrix = 64 × 64, slice thickness = 6 mm. 170 time points were collected, giving a scanning time of 360 s.

2.1

Pearson Correlation Analysis

The Pearson correlation analysis method is one of the most commonly used methods for constructing brain functional networks from fMRI data, and is used as the basis for judging whether there is an edge between two brain regions. Its calculation formula is:

$$r_{ij} = \frac{\sum_{t=1}^{T} [x_i(t) - \bar{X}_i][x_j(t) - \bar{X}_j]}{\sqrt{\sum_{t=1}^{T} [x_i(t) - \bar{X}_i]^2 \sum_{t=1}^{T} [x_j(t) - \bar{X}_j]^2}} \qquad (1)$$

where $x_i(t)$ is the BOLD time series of region $i$ and $\bar{X}_i$ is its mean over the $T$ time points.


The correlation between different brain regions depends on the absolute value of the correlation coefficient: a large absolute value indicates a strong correlation between the two regions, and a small absolute value indicates a weak one.

Adaptive Sparse Representation

Li [9] used the trace LASSO to calculate the brain network correlation coefficients; the l1-norm and the l2-norm are the two extreme cases of the trace LASSO:


X. Pan et al.

$$\|w\|_2 \le \|X\,\mathrm{Diag}(w)\|_* \le \|w\|_1 \qquad (2)$$


When the columns of the data X are highly correlated, the trace LASSO reduces to the l2 norm; when they are uncorrelated (orthogonal), it reduces to the l1 norm. The ASR problem can be formulated as

$$\min_{w} \|X\,\mathrm{Diag}(w)\|_* \quad \text{s.t.} \quad \|y - Xw\|_2 \le \epsilon \qquad (3)$$


In the construction of the associative matrix, the above formulation can be expressed as 1 min jjy  Xwjj22 þ kjjXDiagðwÞjj w 2


Here, λ > 0 in order to ensure a balance between the effects of the l1 norm and the l2 norm. The optimization problem of formulation (4) can be solved using the Alternating Direction Method (ADM) [11].

3 Results

Data from 30 normal subjects were used. For the ASR method, we found the regularization parameter λ = 0.45 to be the most suitable value. Figure 1 shows the correlation matrices obtained with the Pearson correlation method and the ASR method.

Fig. 1. Results of the correlation matrix (A) the correlation matrix averaged over 30 subjects by using the Pearson correlation method (B) the correlation matrix averaged over 30 subjects by using the ASR method

When constructing a resting-state brain functional network, a threshold must be set to determine the edges between nodes. As Fig. 2 shows, the average degree of the network gradually decreases as the threshold increases; once the threshold grows beyond a certain point, the network no longer has the "small world" attributes. It can be seen from Fig. 2(A) that the most suitable threshold range is [0, 0.3]. Meanwhile, it can be seen from Fig. 2(B) that



the most suitable range of the threshold is [0, 0.05]. In addition, at the same threshold, the average degree of the brain network built by ASR is significantly lower than that of the network constructed by the Pearson method, which indicates that the ASR method excludes more invalid connections.

Fig. 2. The average degree distribution with different thresholds (A) the average degree distribution in terms of the Pearson correlation method (B) the average degree distribution in terms of ASR method.

Then we use the complex network model to study the characteristics of the functional networks, choosing a random network of the same scale as the measurement criterion [6]. From Table 1, the small-world coefficient σ is 3.0626, which is greater than 1 at a threshold of 0.3, so the brain network constructed by the Pearson correlation method has the "small world" attribute. From Table 2, σ is 15.24, which is greater than 1 at a threshold of 0.006, so the brain network constructed by ASR also has the "small world" attribute. Moreover, the σ obtained with the ASR method is much larger than that obtained with the Pearson correlation method, which shows that the ASR method reflects the "small world" attribute better.

Table 1. Parameter results using the Pearson correlation method

         K        C       L       γ       λ       σ
NORMAL   10.0889  0.1727  0.9792  1.5405  0.5030  3.0626
Random   –        0.1121  1.9467  –       –       –

Table 2. Parameter results using adaptive sparse representation method C L c k r NORMAL 9.2 0.0201 0.0261 0.1966 0.0129 15.24 Random – 0.1022 2.0277 – – –
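The small-world parameters reported in the tables (clustering coefficient C, characteristic path length L, and the ratios γ = C/C_random, λ = L/L_random, σ = γ/λ) can be computed from a binary adjacency matrix with plain Python. This is an illustrative sketch, not the authors' implementation:

```python
from collections import deque

def clustering_coefficient(adj):
    """Average local clustering coefficient C of a binary adjacency matrix."""
    n = len(adj)
    total = 0.0
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]]
        k = len(nbrs)
        if k < 2:
            continue  # nodes with fewer than 2 neighbours contribute 0
        links = sum(adj[u][v] for u in nbrs for v in nbrs if u < v)
        total += 2.0 * links / (k * (k - 1))
    return total / n

def characteristic_path_length(adj):
    """Mean shortest-path length L over all connected node pairs (BFS)."""
    n = len(adj)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for t, d in dist.items() if t != s)
        pairs += len(dist) - 1
    return total / pairs if pairs else 0.0

def small_world_sigma(adj, adj_random):
    """sigma = (C/C_r) / (L/L_r); sigma > 1 indicates small-world structure."""
    gamma = clustering_coefficient(adj) / clustering_coefficient(adj_random)
    lam = characteristic_path_length(adj) / characteristic_path_length(adj_random)
    return gamma / lam
```

With the values in Table 1, γ = 0.1727/0.1121 ≈ 1.5405 and λ = 0.9792/1.9467 ≈ 0.5030, giving σ ≈ 3.0626, consistent with the text.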


X. Pan et al.

4 Conclusion

In this paper, the Pearson correlation method and the adaptive sparse representation method are used to calculate the average correlation coefficients of 30 normal subjects. In the process of establishing the brain functional network model, we determine the threshold according to the integrity of the network and the small-world attributes, and then analyze the connection characteristics of the brain functional network. Preliminary results show that both methods can construct effective brain functional networks, but the adaptive sparse representation method performs better.

Acknowledgments. This work is partially supported by the project of the Jiangsu Provincial Science and Technology Department (BE2018638), the Changzhou science and technology support program (CE20175043) and funding of science research in Changzhou University (ZMF18020069).

References

1. Chung, A.W., Schirmer, M.D., Krishnan, M.L., et al.: Characterising brain network topologies: a dynamic analysis approach using heat kernels. Neuroimage 141, 490–501 (2016)
2. Van Den Heuvel, M.P., Pol, H.E.H.: Exploring the brain network: a review on resting-state fMRI functional connectivity. Eur. Neuropsychopharmacol. 20(8), 519–534 (2010)
3. Fox, M.D., Raichle, M.E.: Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat. Rev. Neurosci. 8(9), 700 (2007)
4. Mateo, C., Knutsen, P.M., Tsai, P.S., et al.: Entrainment of arteriole vasomotor fluctuations by neural activity is a basis of blood-oxygenation-level-dependent "resting-state" connectivity. Neuron 96(4), 936–948.e3 (2017)
5. Bassett, D.S., Bullmore, E., Verchinski, B.A., et al.: Hierarchical organization of human cortical networks in health and schizophrenia. J. Neurosci. 28(37), 9239–9248 (2008)
6. Telesford, Q.K., Lynall, M.E., Vettel, J., et al.: Detection of functional brain network reconfiguration during task-driven cognitive states. NeuroImage 142, 198–210 (2016)
7. Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., et al.: Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15(1), 273–289 (2002)
8. Smith, S.M., Miller, K.L., Salimi-Khorshidi, G., et al.: Network modelling methods for FMRI. Neuroimage 54(2), 875–891 (2011)
9. Li, X., Wang, H.: Identification of functional networks in resting state fMRI data using adaptive sparse representation and affinity propagation clustering. Front. Neurosci. 9, 383 (2015)
10. Wheeler, A.L., Teixeira, C.M., Wang, A.H., et al.: Identification of a functional connectome for long-term fear memory in mice. PLoS Comput. Biol. 9(1), e1002853 (2013)
11. Lu, C., Feng, J., Lin, Z., et al.: Correlation adaptive subspace segmentation by trace lasso. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1345–1352 (2013)

Development of Bicycle Smart Factory and Exploration of Intelligent Manufacturing Talents Cultivation

Yu'an He(&)

Engineering College, Shanghai Polytechnic University, Shanghai 201209, China
[email protected]

Abstract. This paper introduces the background, ideas and objectives of the production line of the bicycle smart factory laboratory. The smart factory laboratory takes the low-carbon, environmentally friendly bicycle assembly and manufacturing process as an effective carrier for the realization of intelligent manufacturing. It is equipped with ten automated guided vehicle (AGV) trolleys and twelve manual or automatic intelligent assembly processes. It focuses on intelligent manufacturing technology, industrial robot technology, Internet of Things technology, and software and communication technology. The flexible assembly process of bicycles can be realized by sensor technology, customized orders, production planning and scheduling. Taking application-oriented undergraduate education and discipline construction as the service object, the paper also introduces the exploration of the talents cultivating mode of application-oriented intelligent manufacturing.

Keywords: Smart factory mode · Intelligent manufacturing · Talents cultivation

1 Introduction

As the main direction of "Made in China 2025", intelligent manufacturing is not only an important opportunity to realize the transformation and upgrade of the manufacturing industry in China, but also decides the success or failure of the strategy of making China a powerful manufacturing nation. Different from Germany and the United States, China, as a big manufacturing country, is still in a parallel development stage of Industry 2.0 and 3.0, stages that western countries have already passed through. China neither has as solid a foundation in traditional industrial fields as Germany, nor does it have advanced technology that guides the world's information technology development as the United States does. Intelligent manufacturing promotes digital technology, system integration technology, key technical equipment and complete sets of intelligent manufacturing equipment; it strengthens the basic support capability of software and standards, and continuously improves and popularizes new models of intelligent manufacturing, raising the level of information technology and of integration and sharing in the manufacturing industry. The construction of a new manufacturing system and the implementation of the intelligent manufacturing project not only shorten the cycle of product design and development, but also improve production efficiency and product quality while reducing operating costs, which can improve the adaptability and flexibility of the supply structure of the manufacturing industry and create new energy for economic growth. The plan of China is to focus on the development trend of the new information industry, to realize the development of advanced industrial technology, to achieve their cooperative development, and to promote their deep integration. A breakthrough point is the "Smart Factory".

According to the urgent needs of the "Made in China 2025" strategy for talents cultivation in the field of intelligent manufacturing and the "13th Five-Year" development plan of our university, the educational reform of the training of applied talents of engineering technology in the field of intelligent manufacturing conforms to the development requirements of the state and the university. After a new round of structural layout optimization and adjustment, our university insists on "career-oriented higher education". The professional resources of related disciplines such as mechanical engineering, computer science and technology, automation, communication engineering, environmental engineering and the Applied Art Institute have been effectively integrated, and the foundation of scientific research has been further improved. In order to actively promote the educational reform of the applied undergraduate programmes, the foundation and conditions are already available for the construction of the bicycle smart factory.

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 181–191, 2019.

2 Objective and Ideas of the Bicycle Smart Factory

2.1 Construction Objective

The bicycle smart factory laboratory is based on "Made in China 2025" and takes the low-carbon, environmentally friendly bicycle assembly process as an effective carrier to realize intelligent manufacturing. It takes the training of intelligent manufacturing application talents and discipline construction as the service object, focusing on intelligent manufacturing technology, industrial robot equipment technology, Internet of Things (IoT) technology, software and communication technology, sensing technology, customer personalized orders, production planning and production mode, in order to build the bicycle smart factory into a practice base with distinctive characteristics that integrates teaching and scientific research, mechanism innovation and open sharing.

2.2 Construction Ideas

The construction of the bicycle smart factory laboratory will be based on the basic concept of "Industry 4.0"; the laboratory consists of a management layer, hardware layer, software layer, function layer, target layer, data exchange and task monitoring. Its overall architecture is shown in Fig. 1.



Fig. 1. The architecture of bicycle smart factory

In the figure, SCM means supply chain management; CRM means customer relationship management; PCS means process control system, the base layer of ERP/MES/PCS.

Management Layer: Bicycle intelligent manufacturing is based on the industrial Internet of Things (IoT), and the hardware, software, data exchange and task collection of the smart factory are managed through the industrial IoT. The management layer is responsible for production organization management, system user management, role authority management and subsystem maintenance in the smart factory. It is the brain layer of the bicycle smart factory.

Hardware Layer: The hardware equipment layer comprises CNC machine tools, machining centers, industrial robots, RFID, routers, sensors and industrial IoT technology, and is the skeleton of the bicycle smart factory laboratory. It is the basic layer for realizing intelligent production.

Software Layer: Including HANA, ERP, MES, PLM, SCM, CRM and CAD/CAM, i.e. manufacturing execution system control, enterprise resource management and supply chain management software. The software layer connects the hardware layer, data exchange, task monitoring and the function layer, and is the blood layer of the environmentally friendly bicycle smart factory laboratory.

Function Layer: It refers to the functions of the laboratory: teachers and students can engage in engineering education innovation practice, skill and technology training, engineering design and engineering management, application technology development and intelligent manufacturing demonstration. It is the platform layer of the bicycle smart factory.


Y. He

Data Exchange Management and Acquisition Task Monitoring Management: Data exchange management provides data exchange service configuration, monitoring and log query; acquisition task monitoring management is responsible for task flow monitoring, task management, task log management and query management. Together they connect the software, hardware and function layers.

Target Layer: The smart factory includes all activities of engineering technology, manufacturing and the supply chain, together with their software and hardware resources, which can finally realize intelligent production and intelligent manufacturing. The intelligent factory focuses on intelligent production systems and processes, as well as the realization of networked distributed production facilities. Intelligent production mainly involves the whole enterprise's production logistics management, man-machine interaction and the application of additive manufacturing technology in the industrial production process.

Cloud Services Platform: A cloud hosting platform that supports the transformation of production services for the bicycle smart factory. It adopts internationally advanced in-memory computing, big data, cloud computing and Internet of Things technology in order to establish a professional and productive service cloud platform in line with domestic and international standards, and makes full use of the existing advantages of the engineering education resources of Shanghai Second University of Technology. It can provide training and transfer personnel for enterprises by putting the German Industry 4.0 program and training system onto the cloud platform and implanting it into the university's education and training program.

The bicycle smart factory can achieve five situations in the interconnected state, as shown in Fig. 2.

Fig. 2. Interconnection mode diagram of bicycle smart factory



1. To realize interconnection between workshop equipment and the business level.
2. To achieve interconnection between devices.
3. Consumers can place orders directly at the business layer via the e-business platform.
4. All equipment suppliers can be interconnected with the business network through the equipment management platform.
5. All device status information can be integrated into the device cloud for real-time invocation by device service providers and device quality inspectors.
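Situation 5, where device status flows into a device cloud for real-time invocation, can be illustrated with a toy in-memory registry (the names and structure here are our illustrative assumptions, not the laboratory's actual platform):

```python
from dataclasses import dataclass, field
import time

@dataclass
class DeviceStatus:
    device_id: str
    state: str                 # e.g. "running", "idle", "fault"
    timestamp: float = field(default_factory=time.time)

class DeviceCloud:
    """Toy 'device cloud': devices publish their latest status, and
    service providers or quality inspectors query it in real time."""

    def __init__(self):
        self._latest = {}      # device_id -> most recent DeviceStatus

    def publish(self, status):
        self._latest[status.device_id] = status

    def query(self, device_id):
        return self._latest[device_id]
```

A real deployment would replace the dictionary with a message broker or cloud database, but the publish/query split is the essence of the interconnection mode.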

3 Construction Plan of the Bicycle Smart Factory Laboratory

3.1 Overall Design of the Smart Factory Laboratory

The bicycle smart factory includes the following operations: picking up parts; assembling the front fork; assembling the transmission shaft and rear wheel; assembling the front wheel automatically; assembling the holder and crank; assembling the fork stem automatically; assembling the handlebar and adjusting the brakes; testing the bicycle; pre-adjusting before packaging the bicycle; packing the bicycle; assembling the wheel manually; and testing and adjusting the rim. The production process is illustrated in Fig. 3.



Fig. 3. Production process of the production line of intelligent manufacturing

The main functions, main equipment and intelligent manufacturing technology of each process are shown in Table 1.


Table 1. Name of process, main equipment and key technologies

OP1: Operation of picking up the parts (front fork, holder, transmission shaft)
  Main equipment: AGV trolley, barcode scanning, digital shelves, DATA screen
  Key technology: ERP order technology, digital barcode scanning technology, AGV principle and application technology

OP2: Operation of assembling the front fork
  Main equipment: tighten guns, presses, DATA screens, etc.
  Key technology: technology of assembling the front fork

OP3: Operation of assembling the transmission shaft and rear wheel
  Main equipment: tighten guns and sleeves, screwdrivers, DATA screens, etc.
  Key technology: technology of using tighten guns and common tools

OP4: Operation of assembling the front wheel automatically
  Main equipment: two six-axis industrial robots, camera and light source, vibration disk, fixture positioning, AGV positioning, etc.
  Key technology: principle and application technology of robots, robot vision technology, servo control and PLC application, sensing and detection technology, software application technology, etc.

OP5: Operation of assembling the holder and crank
  Main equipment: barcode scanning, tighten guns, sleeves, wrench
  Key technology: digital assembly technology for mechanical parts, tolerance and fit

OP6: Operation of assembling the fork stem automatically
  Main equipment: six-axis industrial robot, camera and light source, vibration disk, fixture positioning, AGV positioning, etc.
  Key technology: industrial robot vision technology, servo control and PLC application technology, sensing and detection technology, software application technology, etc.

OP7: Operation of assembling the handlebar and adjusting the brakes
  Main equipment: barcode scanning, tighten guns, sleeves, wrench
  Key technology: digital assembly technology for mechanical parts, tolerance and fit

OP8: Operation of testing the bicycle
  Main equipment: test manipulator and barometer, cross screwdriver, inner hexagon wrench, wrench, test equipment
  Key technology: working principle and application technology of the manipulator, testing software and application technology

OP9: Operation of pre-adjusting before packaging the bicycle
  Main equipment: barcode scanning, inner hexagon wrench, wrench, DATA screen
  Key technology: mechanical parts disassembly process

OP10: Operation of packing the bicycle
  Main equipment: sweep gun, spreader, computer host and printer, strapping machine, baler, conveyor belt
  Key technology: packaging technology, packaging equipment principle and application

OPM1: Operation of assembling the wheel manually
  Main equipment: air tightness tester, DATA screen, barometer, and spanner
  Key technology: pneumatic technology, digital assembly technology

OPM2: Operation of testing and adjusting the rim
  Main equipment: clamp, dial indicator, barometer, spanner
  Key technology: mechanical parts adjustment technology, dial indicator application technology

3.2 Main Process Work Flow

The bicycle smart factory assembly line includes 10 manual and automatic processes. In the practice of teaching and development, the manual procedures focus on the use of digital tools and the technical requirements of the assembly process. The automated processes let students understand the principles and applications of intelligent manufacturing technology, such as robots and their vision technology, PLC technology, sensor technology, software programming, etc. The flow of the operation of picking up the parts is shown in Fig. 4, the flow of the automatic assembly process for the fork stem is illustrated in Fig. 5, and the flow of the operation of assembling the front wheel automatically is shown in Fig. 6.

4 Exploration on Intelligent Manufacturing Talents Cultivation

With the bicycle assembly production process as the practice platform, related practice courses on intelligent manufacturing technology were developed. For example, the "Intelligent Assembly Project Training" course mainly focuses on the bicycle assembly process as the main manufacturing object, providing comprehensive engineering practice training based on intelligent manufacturing and "Industry 4.0" concepts and adopting an integrated theory-and-practice teaching mode and a modular teaching mode. The smart factory laboratory will serve most of our engineering undergraduate majors. The courses will focus on training intelligent manufacturing application-oriented talents for national and local economic development, and promote the teaching reform of other related courses.

4.1 Basic Principles of Curriculum Development

The course carries out the CDIO (Conceive, Design, Implement, and Operate) engineering education concept with bicycle assembly production as the main line, and improves students' ability to apply intelligent manufacturing professional knowledge. Through the study of the smart factory laboratory course, students can acquire the professional abilities that future intelligent manufacturing engineers and technicians




The flow is: the AGV car gets into the station and an ERP order is generated; the scanning gun scans the bar code on the AGV and the AGV is bound to the order; the intelligent rack is charged with the front fork, drive shaft and frame; the AGV leaves.

Fig. 4. Picking up the parts process

should possess, and can enhance their professional quality, social responsibility and professionalism. Through the course practice in the smart factory laboratory, students can understand the enterprise culture, management organization and management mode of modern intelligent manufacturing enterprises, the configuration characteristics of a discrete-manufacturing intelligent production site, equipment management methods, the digital development process of new products, the rational use of various resources, and so on. Combining the characteristics of different majors with the role-playing of different posts, the task-led project teaching mode enables students to go deep into different intelligent assembly processes, thus further improving their grasp of digital design and intelligent systems across the whole life cycle of intelligent manufacturing products. Through the bicycle assembly production line, students can understand bicycle intelligent assembly



The flow is: initialization of the camera and grab mechanism; the AGV gets in and RFID reads the pallet information; the AGV pallet is positioned; the vision camera takes a picture and positions the nozzle, retrying until positioned; the grab mechanism grabs the fork stem; the fork stem is inserted and tightened.

Fig. 5. Automatic assembly of the fork stem
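The control loop of Fig. 5, vision-based positioning with a retry until the pallet is positioned, can be sketched as follows (the `camera` and `gripper` interfaces are hypothetical placeholders, not the laboratory's actual control API):

```python
def assemble_fork_stem(camera, gripper, max_attempts=3):
    """Sketch of the Fig. 5 loop: the vision camera repeatedly takes a
    picture until the part is positioned, then the grab mechanism grabs,
    inserts and tightens the fork stem. Returns True on success."""
    for _ in range(max_attempts):
        pose = camera.locate()        # take a picture and try to position
        if pose is not None:          # the "Positioned?" decision box
            gripper.grab(pose)
            gripper.insert_and_tighten(pose)
            return True
    return False                      # positioning failed on every attempt
```

Bounding the retries (here `max_attempts`) is a design choice for the sketch; an unbounded loop would hang the station if positioning never succeeds.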

technology and manufacturing technology, and lay a solid foundation for the follow-up professional courses and graduation design.

4.2 Objective of Curriculum Construction

The development of practical courses for the bicycle smart factory should be based on the following objectives:
• Understand the organization and production management mode of the smart factory.
• Master the intelligent assembly process of a bicycle product by participating in the production practice of the smart factory.


The flow includes: initialization of the camera, robot and grab mechanism; the AGV gets in and RFID reads the pallet information; the trolley pallet is located; the vision camera takes pictures of the front fork fixing fixture and the front wheel, locating the nut tightening position and the grab position; the robot grabs and places the front wheel, tightens the front axle nut, upturns the front fork fixture and overturns the bicycle; the AGV leaves.

Fig. 6. Flow of the front wheel automatic assembly process

• Be familiar with the intelligent assembly process of the main assembly parts (such as wheel set installation); understand the general principles of assembly process planning and the basic methods of process analysis.
• Understand the types and characteristics of intelligent equipment and the layout of the smart factory.
• Master the use of common mechanical and electrical tools and typical intelligent tools; be familiar with the performance, structural characteristics and interconnection methods of typical process system components (equipment, fixtures, measuring tools, work pieces, etc.).
• Understand industrial robot and AGV car principles, applications and other related professional knowledge.
• Understand the basic methods of product quality control and data traceability.
• Understand the principles and usage of the MES system in the smart factory.
• Understand the intelligent storage and material distribution management system.
• Master the compilation and management of technical documentation.

5 Conclusion

The smart factory is the blue ocean for the manufacturing industry to achieve long-term and healthy development in the future. It can not only realize the transformation of the manufacturing industry and increase production profit, but also improve the quality and



update speed of products according to the actual operational performance of products and the personalized demands of users, so that enterprises can devote more energy to product innovation, research and development. With the concept of "Industry 4.0", the bicycle smart factory focuses on intelligent manufacturing technology, industrial robotic equipment technology, Internet of Things technology, software and communication technology, sensing technology, customer personalized orders, production planning and production scheduling. Through the development of relevant courses on intelligent manufacturing, the talent cultivation mode of applied intelligent manufacturing engineering is explored.


The Journey Towards World Class Maintenance with Profit Loss Indicator

Harald Rødseth(&), Jon Martin Fordal, and Per Schjølberg

Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim, Norway
{Harald.Rodseth,jon.m.fordal,per.schjolberg}

Abstract. To have a maintenance function in the company that ensures a competitive advantage in the world market requires world class maintenance (WCM). Through several periods in history, maintenance has shifted from reactive maintenance, fixing equipment when it breaks, towards more systematic analysis techniques such as root cause analysis. With the onset of digitalisation and the breakthrough technologies from Industry 4.0, more advanced analytics are expected in WCM. In particular, the profit loss indicator (PLI) has shown promising results in measuring e.g. time losses in production in monetary terms. Further, this indicator has also been proposed for inclusion in predictive maintenance. However, it has not been pointed out clearly which role PLI will have in WCM. The aim of this article is therefore to investigate the trends of WCM as well as how PLI can be included in this journey.

Keywords: Profit loss indicator · World class maintenance · Maintenance management

1 Introduction

With the global competition and the need for improving manufacturing performance, the focus on improving the maintenance function in the company has also increased [10]. This is supported by the maintenance expert Wireman, who addresses the importance of maintenance for being competitive in the market [17]. The concept world class maintenance (WCM) refers to a maintenance function in a company that ensures a competitive advantage in the world market [7, 17]. To evaluate if a company actually is at a WCM level, specific maintenance indicators are applied in the evaluation and denoted as WCM indicators [7]. An example of such an indicator is annual maintenance cost as a percentage of the replacement asset value of the equipment. In addition, WCM should also strive to reduce the hidden factory, which is quantified in terms of time losses and the indicator overall equipment effectiveness (OEE). It is pointed out by Nakajima that measurement of equipment effectiveness is value added to production through the equipment [11], and OEE should therefore be regarded as a WCM indicator as well. Although this indicator has demonstrated improved results in terms of reducing the hidden factory and time losses in industry, it is of interest to investigate the profit loss due to the hidden factory [15].

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 192–199, 2019.
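OEE is conventionally the product of availability, performance and quality. A minimal sketch from raw shop-floor figures (the variable names and this particular decomposition follow the common textbook formulation, not this paper):

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Overall equipment effectiveness from raw production figures.

    availability = run time / planned production time
    performance  = (ideal cycle time * total count) / run time
    quality      = good count / total count
    OEE          = availability * performance * quality
    """
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality
```

For example, a 480-minute shift with 60 minutes downtime, a 1-minute ideal cycle, 300 units produced and 285 good units gives an OEE of about 0.59; the gap up to 1.0 is one way to quantify the "hidden factory".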



The profit loss indicator (PLI) measures this property of the hidden factory and should be included as a WCM indicator. Although several demonstrations of PLI have been conducted, both at a strategic level [13] and at an operational level [15], there is a need to investigate in more detail how PLI will contribute to the journey towards WCM. Today, with the breakthrough technologies from Industry 4.0 [8], new opportunities have emerged in WCM. In particular, the concept Smart Maintenance is expected to contribute in Industry 4.0, formalized through standardisation work and strategic roadmaps in Industry 4.0 [3]. In light of the opportunities in Industry 4.0, the aim of this article is to investigate the trends of WCM as well as how PLI can be included in this journey. The structure of this article is as follows: Sect. 2 elaborates some trends in WCM, whereas Sect. 3 further elaborates how maintenance management will influence the value chain as a specific trend in WCM. Section 4 presents PLI and proposes a new structure of PLI. Future aspects of PLI are discussed and concluded in Sect. 5.
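As a rough illustration of the idea behind PLI, quantifying categorised production losses in monetary terms, one could write the following (the loss categories and cost rates are illustrative assumptions, not the exact model of the cited work):

```python
def profit_loss_indicator(loss_hours, cost_rates):
    """Monetary sum of categorised production losses.

    loss_hours: hours lost per category, e.g. {"downtime": 2.0, "speed loss": 1.0}
    cost_rates: lost profit per hour for each category, in some currency
    """
    return sum(hours * cost_rates[category]
               for category, hours in loss_hours.items())
```

Where OEE expresses the hidden factory as a dimensionless ratio, a PLI-style figure of this kind expresses it in money, which is what makes it usable for prioritising maintenance at both strategic and operational levels.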

2 Trends in World Class Maintenance

Table 1 presents some examples of trends in manufacturing and WCM, inspired by several literature studies of maintenance trends as well as our own experiences. In particular, trends in WCM are discussed in this article. The structure of the maintenance trends is based on an earlier structure [14]. As in manufacturing, the maintenance function has evolved over time. The trends listed within maintenance are meant as examples and not a complete list; instead, they should aid in showing how PLI can be a part of the journey towards WCM.

The time up to 1980 can be regarded as a period of cost focus, where maintenance was reactive in terms of corrective maintenance, ad-hoc planning and ad-hoc analysis. In the period 1980–2010 there was a shift towards quality focus, where concepts such as the Toyota Production System (TPS) influenced the period. In this period the maintenance function can be classified as Maturing maintenance, where the concept total productive maintenance (TPM) [11] was implemented with different degrees of success. This also included the application of the WCM indicator OEE and should be considered a systematic approach to reducing the hidden factory in terms of time losses in production. Also in this period, the maintenance management loop [12] was developed, enabling the company to ensure continuous improvement based on the principles of the Deming cycle. In this management loop, analysis methods such as root cause analysis (RCA) and life cycle cost (LCC) were applied. Considering the maturity matrix in predictive maintenance [5], the condition monitoring methods in terms of visual inspections, instrument inspections and real-time condition monitoring can be positioned in the period of Maturing maintenance.


Table 1. Trends in manufacturing and World Class Maintenance

Cost focus (-> 1980)
  Manufacturing (Mass Production): push policy, Gantt charts, motion & time study, assembly line, statistical sampling, inventory optimization, PERT/CPM, MRP
  Maintenance (Reactive maintenance): corrective maintenance, ad-hoc planning, ad-hoc analysis

Quality focus (1980–2010)
  Manufacturing (Lean Production): just-in-time, pull policy, electronic data interchange, TQM, Baldrige award, Kanban
  Maintenance (Maturing maintenance): TPM & TPS, CMMS, LCC, OEE, RCA, maintenance mgt. loop, condition monitoring (visual inspection, instrument inspection, real-time condition monitoring)

Customization focus (2010 ->)
  Manufacturing (Smart Manufacturing): economies of scope, global manufacturing, agile manufacturing, Internet-based manufacturing, IoT, data analytics, cyber-physical systems & Industry 4.0
  Maintenance (Smart Maintenance & Maintenance 4.0): Predictive Maintenance 4.0, LCP, remaining useful life (RUL), Green Maintenance, wireless sensors, miniaturisation, PLI, maintenance planning with a system perspective, digital competence and social competence, value driven maintenance

From 2010 and into the future, there seems to be a shift towards customization and the emergence of the period of Smart Maintenance and Maintenance 4.0. The enhancement of Smart Maintenance is in particular addressed in the German standardization roadmap for Industry 4.0 [3]. In this standardization roadmap it is pointed out that Smart Maintenance is an "enabler" of Industry 4.0, responsible for ensuring that the cyber-physical systems are kept available and efficient. The concepts of Smart Maintenance and Maintenance 4.0 are also included in Norwegian industry through e.g. the project CPS Plant, in which the importance of PLI as a central part of Maintenance 4.0 is pointed out. Considering the maturity matrix in predictive maintenance [5], the concept Predictive Maintenance 4.0 is emerging in this period. This concept includes continuous real-time monitoring of the asset, with alerts based on predictive analytics such as machine learning, and will as a result estimate the remaining useful life (RUL) of the asset. The trends within sustainability [16] and the circular economy lead to the concept of Green Maintenance. As an example, renovation projects of the asset will no longer accept disposal of old parts but rather remanufacture and re-use the parts. Another important element in this period will be miniaturization, where the reduction in size of computer devices combined with wireless sensors ensures

The Journey Towards World Class Maintenance with Profit Loss


that the same computing capability can be provided by, e.g., a smartphone instead of a computer in a control room. This can enable new services in terms of remote maintenance. It is also expected that PLI will be a suitable indicator within Smart Maintenance. For example, it has been demonstrated that this indicator can be applied in predictive maintenance and is also relevant for maintenance planning [14]. The importance of maintenance planning is also pointed out as essential in future digital maintenance, where maintenance planning with a system perspective is a probable scenario [2]. The aim of this type of maintenance planning is to optimize the performance of the entire manufacturing system by considering both the technical condition of the machine and the system perspective in terms of bottlenecks. Also, future competence requirements will include more digital competence, such as data analytics, as well as social competence in terms of interdisciplinary collaboration. Finally, it is expected that the maintenance function will become more value focused, where the value creation and contribution to profit are systemized and quantified [4]. The next section evaluates the relationship between maintenance management and the value chain.

3 Maintenance Management in the Value Chain

Through the four stages of industrial revolutions, industrialists have dramatically improved their level of performance. In addition, the industrialists' view on maintenance has developed from seeing maintenance as a necessary evil to an opportunity for gaining a competitive advantage [6], and utilizing the field of maintenance for improving value chain performance has become an important action for industrialists. A definition of the value chain is given as follows [1]: "The functions within a company that add value to the goods or services that the organisation sells to customers and for which it receives payment." It is well known that maintenance has a direct effect on the total operating cost of all manufacturing and production plants [6]. Thus, measuring maintenance performance is an efficient means for those aiming to increase value chain performance. For measuring maintenance performance, Key Performance Indicators (KPIs) can be regarded as a suitable tool [10]. Based on experience obtained from a Norwegian process industry company, the maintenance KPIs shown in Fig. 1 are used for this purpose. As shown, the company divided the maintenance indicators into four factors with appurtenant parameters of importance for the company. First, "Input" describes the current situation. Second, "Adjustment factors" give an overview of elements that affect maintenance quality. Third, "Condition" presents the status of critical parameters. Fourth, "Output" defines the parameters important for increasing the value chain performance of the company. Based on the definition of the value chain, it is expected that several parts of the maintenance function will influence the value chain. However, it remains to investigate in more detail the relationship between the maintenance indicators and the performance of the value chain.


H. Rødseth et al.

Fig. 1. Focus diagram for maintenance indicators supporting the value chain

4 PLI and Future Applications

PLI has evolved from OEE, driven by the need for a monetary indicator of the hidden factory. A suitable approach for calculating PLI is to structure the elements of this indicator along three dimensions. Figure 2 presents the approach for calculating PLI in terms of the PLI cube [15]. The hidden factory is divided along time loss and waste, the accounting perspective, and the perspective of the physical asset.

Fig. 2. The PLI cube [15].
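To make the cube's logic concrete, a PLI-style figure for a single asset can be sketched as the sum of monetized time losses: each hidden-factory loss category (the time loss and waste dimension) is multiplied by an accounting rate and summed for one physical asset. All category names, hours, and rates below are invented for illustration and are not taken from [15]:

```python
# Hypothetical sketch of a PLI-style calculation: each hidden-factory time
# loss (hours) is monetized with an accounting rate (cost per hour) for one
# physical asset. Categories, hours, and rates are illustrative assumptions.
LOSS_HOURS = {            # time loss and waste dimension
    "unplanned_downtime": 6.0,
    "speed_loss": 2.5,
    "quality_loss": 1.5,
}
RATE_PER_HOUR = {         # accounting dimension: lost contribution per hour
    "unplanned_downtime": 800.0,
    "speed_loss": 800.0,
    "quality_loss": 950.0,   # scrap adds material cost on top of lost margin
}

def profit_loss_indicator(loss_hours, rates):
    """Sum the monetized losses over all loss categories for one asset."""
    return sum(hours * rates[cat] for cat, hours in loss_hours.items())

pli = profit_loss_indicator(LOSS_HOURS, RATE_PER_HOUR)
print(f"PLI for the asset: {pli:.0f} (currency units)")
```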



Although the calculation of PLI based on this cube has been demonstrated in several case studies, it is still of interest to investigate alternative approaches for calculating PLI, for instance based on the application of the DuPont model [9] for profitability analysis, as well as on the structuring of the contribution of OEE to return on assets [18]. Figure 3 presents a proposed structure of PLI inspired by these works; it relates to the company level and will thereby affect the value chain.

Fig. 3. Proposed structure of PLI, inspired from [9, 18].
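The DuPont logic behind this proposed structure can be illustrated with a small calculation: return on assets (ROA) is the product of profit margin and asset turnover, so a profit loss recovered through maintenance feeds directly into ROA at the company level. All figures below are hypothetical:

```python
def return_on_assets(profit, sales, assets):
    """DuPont decomposition: ROA = (profit/sales) * (sales/assets)."""
    profit_margin = profit / sales
    asset_turnover = sales / assets
    return profit_margin * asset_turnover

# Illustrative numbers: recovering a profit loss of 0.5 through maintenance
# raises profit from 8.0 to 8.5 on the same sales and asset base.
baseline = return_on_assets(profit=8.0, sales=100.0, assets=80.0)
improved = return_on_assets(profit=8.5, sales=100.0, assets=80.0)
print(f"ROA: {baseline:.4f} -> {improved:.4f}")
```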

5 Future Aspects of PLI and Conclusions

The aim of this article was to investigate the trends of WCM as well as how PLI can be included in this journey. As shown in this article, there are many elements of WCM, and only some of them have been presented. The elements of WCM should aid the implementation of PLI. For example, the calculation of RUL based on machine learning can be combined with PLI and support maintenance planning [14]. With the technologies developed from wireless sensors and the principle of miniaturisation, it will be possible to have PLI in dashboards on small devices, where PLI information is provided in real time. Constructing a dashboard with both PLI and the maintenance indicators relevant for the value chain can result in better and faster decisions in remote maintenance. This will also ensure that WCM is value driven. More possible structures of PLI should be investigated. Although PLI has been tested in several case studies, the new approach of calculating PLI inspired by the DuPont model should be tested in more detail, since it can provide a new understanding of PLI at the company level and of how it will affect the value chain. It is concluded that PLI should be included in WCM. Future research will require testing new approaches for calculating PLI in case studies, as well as relating it to other maintenance indicators that affect the value chain.

Acknowledgement. The authors wish to thank the research project CPS-plant for valuable input. The Research Council of Norway is funding CPS-plant.



References

1. Blackstone, J.H., Cox, J.F. (eds.): APICS Dictionary. APICS, Alexandria, VA (2005)
2. Bokrantz, J., Skoogh, A., Berlin, C., Stahre, J.: Maintenance in digitalised manufacturing: Delphi-based scenarios for 2030. Int. J. Prod. Econ. 191, 154–169 (2017)
3. DIN: German Standardization Roadmap - Industry 4.0. Version 3. Berlin (2018)
4. Haarman, M., Delahay, G.: VDM XL - Value Driven Maintenance & Asset Management. Mainnovation (2016)
5. Haarman, M., Mulders, M., Vassiliadis, C.: Predictive Maintenance 4.0 - Predict the Unpredictable. PwC and Mainnovation (2017)
6. Han, T., Yang, B.-S.: Development of an e-maintenance system integrating advanced techniques. Comput. Ind. 57(6), 569–580 (2006)
7. Imam, S.F., Raza, J., Ratnayake, R.M.C.: World Class Maintenance (WCM): measurable indicators creating opportunities for the Norwegian oil and gas industry. In: 2013 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM 2013), 10–13 December 2013, Bangkok, Thailand, pp. 1479–1483. IEEE Computer Society (2014)
8. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for Implementing the Strategic Initiative INDUSTRIE 4.0 (2013)
9. Melvin, J., Boehlje, M., Dobbins, C., Gray, A.: The DuPont profitability analysis model: an application and evaluation of an e-learning tool. Agric. Finance Rev. 64(1), 75–89 (2004)
10. Muchiri, P., Pintelon, L.: Performance measurement using overall equipment effectiveness (OEE): literature review and practical application discussion. Int. J. Prod. Res. 46(13), 3517–3535 (2008)
11. Nakajima, S.: TPM Development Program: Implementing Total Productive Maintenance. Productivity Press, Cambridge (1989)
12. Norwegian Petroleum Directorate: Maintenance Baseline Study - A Method for Self-Assessment of Maintenance Management Systems. Downloaded from news/baseline-study-article361-878.html (1998)
13. Rødseth, H., Schjølberg, P., Kirknes, M., Bernhardsen, T.I.: Increased profit and technical condition through new KPIs in maintenance management. In: Koskinen, K.T., Kortelainen, H., Aaltonen, J., et al. (eds.) Proceedings of the 10th World Congress on Engineering Asset Management (WCEAM 2015), pp. 505–511. Springer International Publishing, Cham (2016)
14. Rødseth, H., Schjølberg, P., Marhaug, A.: Deep digital maintenance. Adv. Manufact. 5(4), 299–310 (2017)
15. Rødseth, H., Skarlo, T., Schjølberg, P.: Profit loss indicator: a novel maintenance indicator applied for integrated planning. Adv. Manufact. 3(2), 139–150 (2015)
16. Starr, A., Al-Najjar, B., Holmberg, K., Jantunen, E., Bellew, J., Albarbar, A.: Maintenance today and future trends. In: Holmberg, K., Adgar, A., Arnaiz, A., Jantunen, E., Mascolo, J., Mekid, S. (eds.) E-Maintenance, pp. 5–37. Springer, London (2010)



17. Wireman, T.: Developing Performance Indicators for Managing Maintenance. Industrial Press, New York (2005)
18. Zuashkiani, A., Rahmandad, H., Andrew, K.S.J.: Mapping the dynamics of overall equipment effectiveness to enhance asset management practices. J. Qual. Maintenance Eng. 17(1), 74–92 (2011)

Initiating Industrie 4.0 by Implementing Sensor Management – Improving Operational Availability

Jon Martin Fordal, Harald Rødseth, and Per Schjølberg

Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
{jon.m.fordal,harald.rodseth,per.schjolberg}

Abstract. To stay competitive in the future, industrialists must be prepared to adopt the imminent changes and new technologies associated with Industrie 4.0. These changes apply equally to the field of maintenance, which is also developing quickly. Sensors, along with analyses and competence, are among the most critical factors for Industrie 4.0, as they are the connectors between the digital and the physical world. The utilization of these sensors within maintenance is a relatively unexplored field. Thus, the aim of this paper is to present a novel concept for how sensor management can be linked to maintenance and thereby improve operational availability. The paper also presents an overview of sensor management and trends within maintenance.

Keywords: Sensor management · Predictive maintenance · Industrie 4.0

1 Introduction

Sensors are the connecting elements between the digital and the physical world. Thus, sensors are among the most critical factors for succeeding with Industrie 4.0, the Fourth Industrial Revolution [1]. The First Industrial Revolution started with the invention of the Spinning Jenny in 1764, a multi-spindle spinning frame that was a game changer for the textile industry in England. Since then, world industry has gone through two further industrial revolutions, both of them causing a "make or break" situation for industrialists [13]. Implementing electricity in production processes and producing in high volumes were areas of focus in the early 20th century, and are the main characteristics of the Second Industrial Revolution. Next, the introduction of electronics and computer technology for process automation and manufacturing significantly increased the level of performance for industrialists. This epoch is known as the Third Industrial Revolution. Currently, humankind is on the threshold of the Fourth Industrial Revolution, or "Industrie 4.0" [20]. Figure 1 gives an overview of the industrial revolutions. Along with the imminent changes that Industrie 4.0 and new technology are expected to cause for industrialists, an excellent opportunity arises for improving a company's maintenance performance. Throughout the years, the role of maintenance has evolved from the perception of being a hindrance to throughput and scheduling, to an
© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 200–207, 2019.



Fig. 1. The four stages of the industrial revolution [6].

opportunity for gaining a competitive edge by predicting failures and staying one step ahead of them. Savings can be tremendous, as maintenance costs are one of the biggest contributors to the total operating costs of all manufacturing and production plants. In some industries, maintenance costs can represent 15% to 40% of the cost of goods produced. In the years to come, and as a result of more automation and new technologies, maintenance will become increasingly important for improving availability, product quality, fulfilment of safety requirements, and plant cost-effectiveness [3]. This development of maintenance is illustrated in Fig. 2, where four "maintenance revolutions" (the latest trend being Predictive Maintenance 4.0) show the increasing level of reliability, and the need for data and statistics, as originally presented by PwC and Mainnovation [15].

Fig. 2. Levels of maturity within predictive maintenance [15].

Within the field of maintenance, all superordinate maintenance systems for data interpretation, such as e-maintenance and computerized maintenance management systems (CMMS), will be blind without the right sensors collecting data [1, 12]. In the years to come, data interpretation will be an essential enabler for following the development of maintenance technologies, which underpins the need for sensor management [16]. Figure 3 illustrates the progression of development within maintenance technologies.


J. M. Fordal et al.

Fig. 3. Development of maintenance technologies [16].

A body of research is available on sensor management [4, 7, 14, 17], but it is aimed primarily at applications within military systems (tracking), robots, and radars. The authors of [14] also note that researchers and engineers have felt the need for more work on sensor management, and [21] supports the unrealized potential of sensors as a maintenance tool. This is also supported by [17], where it is claimed that the field of wireless sensor networks has matured, but that the focus has been on environment monitoring, military, and homeland security applications. Thus, how sensor management can be connected to maintenance is a relatively unexplored field. With this in mind, the aim of this paper is to present a novel concept for how sensor management can be linked to maintenance to improve operational availability, where operational availability is defined as: "the probability that an item will be in an operable and committable state at the start of a mission when the mission is called for at any random point in time" [5]. The structure of this article is as follows: Sects. 2.1 and 2.2 present the state of the art within maintenance management and sensor management, followed by a proposal in Sect. 3 of a concept for a maintenance-centered approach to sensor management. Section 4 gives concluding remarks.
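One common way to operationalize this definition of operational availability, following the general availability literature rather than [5] specifically, is the ratio A_o = MTBM / (MTBM + MDT), where MTBM is the mean time between maintenance and MDT the mean downtime:

```python
def operational_availability(mtbm_hours, mdt_hours):
    """A_o = MTBM / (MTBM + MDT): the fraction of time an item is operable
    and committable, with all downtime (corrective, preventive, logistic
    and administrative delay) counted in MDT."""
    return mtbm_hours / (mtbm_hours + mdt_hours)

# Illustrative figures: 300 h mean time between maintenance, 12 h mean downtime.
print(f"A_o = {operational_availability(300.0, 12.0):.3f}")
```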

2 Maintenance Management and Sensor Management

2.1 Maintenance Management and E-Maintenance

To aid maintenance management with the increased level of information, to have it available in real time, and to support decision-making, a computerized maintenance management system (CMMS) is required [18]. In addition to developing preventive maintenance schedules based on maintenance history, a CMMS can also support condition-based maintenance (CBM). In a CMMS, a CBM strategy will trigger a task when target values have been exceeded or not reached, such as on-line temperature measurements [10]. The triggering of tasks can be based on more sophisticated decision criteria when e-maintenance is implemented in the company strategy. In addition to monitoring the state of a system, it will be possible to apply predictive maintenance



with failure prognostics built on specific degradation assessment algorithms and prognostics algorithms [12, 19]. In more detail, e-maintenance, along with expert systems, offers maintenance support and tools such as intelligent actuation and measurement (IAM) [19], as well as failure analyses and maintenance documentation [12]. E-maintenance has also been identified as essential for prognostics and health management (PHM) [2], and for implementing platforms based on a value-driven approach [11]. In particular, the notion of the smart sensor supports several capabilities in e-maintenance:

• real-time data acquisition from the physical asset;
• data processing based on predefined algorithms;
• data transferring; and
• connection to a networked environment.
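A minimal sketch of how these smart-sensor capabilities combine with the CBM triggering described in Sect. 2.1: acquire a reading, process it against a predefined limit, and hand a work-order trigger to the CMMS. All class, function, and threshold names are invented for illustration and do not represent a real CMMS API:

```python
from dataclasses import dataclass, field

@dataclass
class SmartSensor:
    """Illustrative smart sensor (hypothetical names, not a real API)."""
    name: str
    upper_limit: float                  # CBM target value, e.g. max temperature
    readings: list = field(default_factory=list)

    def acquire(self, value):
        # real-time data acquisition from the physical asset
        self.readings.append(value)

    def limit_exceeded(self):
        # data processing based on a predefined rule: last reading vs. limit
        return bool(self.readings) and self.readings[-1] > self.upper_limit

def cbm_trigger(sensor):
    """Return a work-order message for the CMMS when the CBM target value
    is exceeded, otherwise None."""
    if sensor.limit_exceeded():
        return f"CMMS task: inspect {sensor.name} (last reading {sensor.readings[-1]})"
    return None

bearing = SmartSensor("bearing-7 temperature", upper_limit=85.0)
bearing.acquire(78.2)
print(cbm_trigger(bearing))    # within limits: no task
bearing.acquire(91.4)
print(cbm_trigger(bearing))    # limit exceeded: a work order is triggered
```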

Although considered a technological component of e-maintenance, the application of sensors should also be included in management activities where, e.g., the strategy for connecting sensors to the physical asset is evaluated. Thus, there is a need for expanding smart sensors into sensor management.

2.2 Sensor Management – An Overview

The term sensor management was first used in the context of automatic control of sensor systems in military aircraft, where the goal was to control sensor resources to provide and present the most essential information (e.g., the most critical threats/alerts) to the pilot. Later, sensor management was actively used within "active vision" for applications in robotics, in order to improve robotic vision systems. In recent years, the development of sensor and communications technologies has led to a rapid growth of interest in the field of sensor management, and different applications in other areas are continually being developed [4]. The authors of [14] argue the need for sensor management, report its application in several domains, and suggest that more research should be performed in the field. A recent literature review indicates that sensor management within the field of maintenance is a rather untouched area. Sensor management is described in several ways in the literature. A generic statement of sensor management is presented in [14]: "to manage, co-ordinate and integrate the sensor usage to accomplish specific and often dynamic mission objectives." In [7], sensor management is treated as a "general strategy that controls sensing actions, including generating, prioritizing, and scheduling sensor selections and actions." Additionally, another description of the term is given by [4]: "control of the degrees of freedom in an agile sensor system to satisfy operational constraints and achieve operational objectives." Based on these descriptions, the authors propose the following definition of sensor management within maintenance: "Sensor management aims to optimize a configuration of sensors, with the goal of improving operational availability for a given system."



Lastly, regarding sensor control, the following sensor management guidelines are important to take into consideration [1]:

1. What benefit shall the sensor application generate?
2. Are the measurements already known? Which ones shall be captured?
3. How much installation space and which interfaces are available for the sensor system?
4. To which ambient conditions is the sensor system exposed?
5. Which characteristics shall the measuring signal have for the planned data interpretation?
6. What is the consequence of a sensor system failure/malfunction?
7. What is the target quantity for implementing the sensor system?

These questions establish a baseline of parameters that define the data collection required for sensor management.

3 A Maintenance Centered Approach for Sensor Management

According to [18], utilizing gathered data to provide information and insight to maintenance engineers and managers for making optimal maintenance decisions has always been a challenging task. However, a company in the process industry has discovered a way to take advantage of the latest sensor technology, namely wireless sensors for monitoring equipment, where two major advantages are experienced. First, the sensors have a battery lifetime of up to 15 years, with measurements taken every two seconds and transferred every two minutes (to cloud or local storage). Second, they can measure parameters such as temperature (surface, air, liquid), humidity, light, open/closed function, signal transmission (analog to digital), and pressure (vibration and laser distance are upcoming). Until now, experience shows several maintenance quick-wins from implementing these types of sensors, e.g.:

• Continuous measurement of equipment that previously was difficult/expensive to measure
• Easy access to data on smartphone/tablet/web-app
• Fast setup and an appealing user interface
• Adjustable control limits and an alarm function ensure quick action when deviations occur, as notifications are sent directly to the maintenance personnel responsible for the given equipment
• Reduced time spent on preventive maintenance rounds
• Maintenance personnel take ownership of the sensors and data, finding new areas for application, measurement points, and uses for troubleshooting.

Sensor technology is continuously improving and becoming more available in terms of both cost and connectivity, and accessibility of data is seen as a foundation for Cyber-Physical Systems and Industrie 4.0 [8, 9]. Thus, it is important to prepare organizations for how sensor systems can be implemented effectively to utilize their full



Fig. 4. Novel concept for sensor management within maintenance.

potential in terms of maintenance. Figure 4 presents a novel concept for sensor management applied to the maintenance function. The proposed concept focuses on improving operational availability, where the technical condition of a given piece of equipment or system is evaluated based on data collected by mounted wireless sensors. By using cloud storage, technical condition, historical data, and trends (along with notifications) are directly available on smartphones/tablets/web-apps. Adjustable control limits with an alarm function, which notifies maintenance personnel immediately when deviations occur, result in reduced time for initiating maintenance execution. The effect of the performed action can then be evaluated directly, by comparing historical data with the continuous stream of new measurements from the sensors. In summary, the initial concept focuses on simplicity and does not include advanced prediction analytics or decision support at this stage, but instead strives to maximize the value of the maintenance personnel's experience and knowledge.
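The evaluation step at the end of this concept (comparing historical data against the continuous stream of new measurements) can be sketched as a simple before/after comparison of means. The function name and the vibration figures are hypothetical:

```python
from statistics import mean

def maintenance_effect(before, after):
    """Relative change in the mean sensor reading after a maintenance action;
    a negative value means the monitored quantity (e.g. vibration) dropped."""
    return (mean(after) - mean(before)) / mean(before)

# Illustrative vibration levels (mm/s) before and after a bearing replacement.
history = [4.1, 4.3, 4.6, 4.8, 5.0]      # stored historical data
new_stream = [2.1, 2.0, 2.2, 2.1, 2.0]   # continuous stream of new measurements
print(f"Mean vibration changed by {maintenance_effect(history, new_stream):+.0%}")
```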

4 Conclusion

This article has proposed a novel concept for how sensor management can be linked to maintenance to improve operational availability. The concept presents the flow between technical condition, reporting, maintenance personnel, and maintenance execution, supported by sensor data and the benefits of cloud storage alongside web-apps for smartphones/tablets. The simplicity of this initial stage of the concept is expected to result in ease of implementation and to utilize maintenance personnel's experience and knowledge. The importance of sensor management within maintenance is also discussed, as sensors are the connecting element between the digital and the physical world. Current advancements suggest that sensors and data will be essential for following the latest trends in maintenance technologies, along with predictive maintenance, CMMS, and e-maintenance.



In summary, additional research is needed to further develop the concept proposed in this paper and to discover ways this concept can adopt other technologies, such as condition assessment and diagnostics/prognostics. Case studies based on the concept and research in these areas are suggested for further work.

References

1. Fleischer, J., Klee, B., Spohrer, A., Merz, S.: Guideline Sensors for Industrie 4.0. Options for Cost-Efficient Sensor Systems. VDMA Forum Industrie 4.0 (2018)
2. Guillén, A.J., Crespo, A., Macchi, M., Gómez, J.: On the role of prognostics and health management in advanced maintenance systems. Prod. Plann. Control 27(12), 991–1004 (2016)
3. Han, T., Yang, B.-S.: Development of an e-maintenance system integrating advanced techniques. Comput. Ind. 57(6), 569–580 (2006)
4. Hero, A.O., Cochran, D.: Sensor management: past, present, and future. IEEE Sens. J. 11(12), 3064–3075 (2011)
5. Jin, T., Xiang, Y., Cassady, R.: Understanding operational availability in performance-based logistics and maintenance services. In: 2013 Proceedings of the Annual Reliability and Maintainability Symposium (RAMS), 28–31 January 2013, pp. 1–6 (2013)
6. Kagermann, H., Helbig, J., Hellinger, A., Wahlster, W.: Recommendations for Implementing the Strategic Initiative Industrie 4.0: Securing the Future of German Manufacturing Industry; Final Report of the Industrie 4.0 Working Group. Forschungsunion, Acatech (2013)
7. Kahler, B., Blasch, E.: Sensor management fusion using operating conditions. In: 2008 IEEE National Aerospace and Electronics Conference, 16–18 July 2008, pp. 281–288. https://doi.org/10.1109/naecon.2008.4806559
8. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for Industry 4.0-based manufacturing systems. Manufact. Lett. 3, 18–23 (2015)
9. Lee, J., Jin, C., Bagheri, B.: Cyber physical systems for predictive production systems. Prod. Eng. Res. Dev. 11(2), 155–165 (2017)
10. Liebstückel, K.: Plant Maintenance with SAP - Practical Guide. SAP Press, Bonn, Boston (2014)
11. Macchi, M., Crespo Márquez, A., Holgado, M., Fumagalli, L., Barberá Martínez, L.: Value-driven engineering of e-maintenance platforms. J. Manufact. Technol. Manag. 25(4), 568–598 (2014)
12. Muller, A., Crespo Marquez, A., Iung, B.: On the concept of e-maintenance: review and current research. Reliab. Eng. Syst. Safety 93(8), 1165–1187 (2008)
13. Mæhlum, L.: Spinning Jenny. Store Norske Leksikon (2018)
14. Ng, G.W., Ng, K.H.: Sensor management – what, why and how. Inf. Fusion 1(2), 67–75 (2000)
15. PwC/Mainnovation: Predictive Maintenance 4.0 - Predict the Unpredictable (2017)
16. Qiu, H., Lee, J.: Near-Zero Downtime: Overview and Trends. Noria. Accessed 16 Aug 2018



17. Ramamurthy, H., Prabhu, B.S., Gadh, R., Madni, A.M.: Wireless industrial monitoring and control using a smart sensor platform. IEEE Sens. J. 7(5), 611–618 (2007)
18. Rastegari, A., Mobin, M.: Maintenance decision making, supported by computerized maintenance management system. In: 2016 Annual Reliability and Maintainability Symposium (RAMS), 25–28 January 2016, pp. 1–8 (2016)
19. Ucar, M., Qiu, R.G.: E-maintenance in support of e-automated manufacturing systems. J. Chin. Inst. Ind. Eng. 22(1), 1–10 (2005)
20. Webel, S.: "Industrie 4.0": Seven Facts to Know About the Future of Manufacturing. Siemens (2016). industry-and-automation/digtial-factory-trends-industrie-4-0.html. Accessed 14 Aug 2018
21. Yong, Z., Yikang, G., Vlatkovic, V., Xiaojuan, W.: Progress of smart sensor and smart sensor networks. In: Fifth World Congress on Intelligent Control and Automation (IEEE Cat. No. 04EX788), 15–19 June 2004, vol. 3604, pp. 3600–3606 (2004)

Manufacturing System and Technologies

A Prediction Method for the Ship Rust Removal Effect of Pre-mixed Abrasive Jet

Qing Guo1, Shuzhen Yang1,2, Minghui Fang1, and Tao Yu1

1 School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, People's Republic of China
[email protected]
2 College of Engineering, Shanghai Polytechnic University, Shanghai 201209, People's Republic of China

Abstract. Considering the limitations of traditional experiments and workers' experience in determining the process parameters for ship rust removal by abrasive jet, a prediction model for the ship rust removal effect of pre-mixed abrasive jet based on a least squares support vector machine and particle swarm optimization (PSO-LSSVM) is proposed. The model optimizes the parameters of the LSSVM through PSO, overcomes the subjectivity and blindness of LSSVM parameter selection, and improves the convergence speed and accuracy of the algorithm. The simulation results show that the PSO-optimized LSSVM prediction model is more accurate and faster than the BP neural network algorithm. The model can better reflect the process rule between the ship rust removal effect of pre-mixed abrasive jet and the process parameters, and the selection of processing parameters can be guided by the model.

Keywords: Ship rust removal · Abrasive jet · PSO-LSSVM · Prediction model

1 Introduction

Meeting the quality standard for ship surface pretreatment is a prerequisite for reaching the coating quality standard. Therefore, surface rust removal is a key step in ship manufacturing and ship repair. The quality of rust removal directly affects the uniformity and firmness of the protective film applied in pretreatment [1]. The abrasive water jet is a new type of jet that has developed rapidly since the 1980s and meets the needs of green development. It is widely used in rust removal. Among abrasive jets, the pre-mixed abrasive jet has attracted wide attention because of its uniform mixing of abrasive and water, high rust removal efficiency, and low equipment requirements [2, 3]. At present, the selection of ship rust removal parameters depends on workers' experience and single-factor experiments. However, there are many factors that affect the rust removal effect, and the relationship between the rust removal effect and the parameters is highly nonlinear. Simply relying on workers' experience and single-factor experiments can no longer meet the increasing requirements of rust removal [4]. Least squares support vector machines can be used for linear and nonlinear multivariate modeling, have fewer adjustment parameters, a faster learning speed, and higher classification and prediction accuracy. They can also reach high accuracy without
© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 211–217, 2019.


Q. Guo et al.

the large amount of background information and data [5]. In this paper, the nonlinear relationship between the rust removal effect and the process parameters is established by the LSSVM. At the same time, the global optimization capability of PSO is used to optimize the parameters of the LSSVM. A prediction model of the ship rust removal effect of the abrasive jet based on PSO-LSSVM is established, and the selection of the process parameters can be guided by this model.

2 Least Squares Support Vector Machine Algorithm

2.1 Least Squares Support Vector Machine

The least squares support vector machine uses the sum of squared errors of a least squares linear system as the loss function and transforms the optimization problem into the solution of a set of linear equations, so that fewer computing resources are required. It is widely used in the fields of pattern recognition and information fusion. Suppose the training sample set is $D = \{(x_i, y_i) \mid i = 1, 2, \ldots, n\}$, where $x_i$ represents the input data, $y_i$ the output data, and $n$ the number of samples. The optimization problem of the LSSVM in $\omega$ space can be expressed as:

$$\min_{\omega, b, e} J(\omega, e) = \frac{1}{2}\|\omega\|^2 + \frac{1}{2}\gamma \sum_{i=1}^{n} e_i^2 \qquad (2.1)$$

$$\text{s.t.}\quad y_i\left[\omega^{T} \varphi(x_i) + b\right] = 1 - e_i$$

In the formula, $\gamma$ is the penalty factor, which controls the degree of punishment of samples beyond the computation error, and $e_i$ denotes the relaxation (slack) factor. The Lagrange multiplier method is used to transform the original problem into a problem in a single set of parameters, the Lagrange multipliers $a_i$. The optimality conditions are:

$$\frac{\partial L}{\partial \omega} = 0 \;\rightarrow\; \omega = \sum_{i=1}^{n} a_i y_i \varphi(x_i)$$
$$\frac{\partial L}{\partial b} = 0 \;\rightarrow\; \sum_{i=1}^{n} a_i y_i = 0 \qquad (2.2)$$
$$\frac{\partial L}{\partial e_i} = 0 \;\rightarrow\; a_i = \gamma e_i$$
$$\frac{\partial L}{\partial a_i} = 0 \;\rightarrow\; y_i\left[\omega^{T} \varphi(x_i) + b\right] - 1 + e_i = 0$$

Based on these four conditions, the linear equations in $a$ and $b$ are obtained:

$$\begin{bmatrix} 0 & y^{T} \\ y & \Omega + I/\gamma \end{bmatrix}\begin{bmatrix} b \\ a \end{bmatrix} = \begin{bmatrix} 0 \\ 1_v \end{bmatrix} \qquad (2.3)$$


The linear regression equation of the LSSVM is obtained by solving for $\alpha$ and $b$ with the least squares method and choosing the radial basis function, whose generalization performance is better than that of other kernel functions:

$$y(x) = \sum_{i=1}^{n} \alpha_i \exp\left(-\frac{\|x - x_i\|^2}{2\sigma^2}\right) + b \qquad (2.4)$$
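As a concrete illustration, the linear system (2.3) and the regression equation (2.4) can be solved directly with a general-purpose linear algebra library. The sketch below is written in Python/NumPy rather than the MATLAB toolbox used in the paper; it uses the regression form of the LSSVM, in which the right-hand side of the system carries the targets $y$ directly (the constraint becomes $y_i = \omega^T\varphi(x_i)+b+e_i$). All function names are our own, not part of any toolbox.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # K[i, j] = exp(-||x1_i - x2_j||^2 / (2 sigma^2)), the kernel of Eq. (2.4)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, c, sigma):
    """Solve the LSSVM linear system (regression form of Eq. (2.3)) for b, alpha."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / c     # Omega + I/c, c is the penalty factor
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]            # b, alpha

def lssvm_predict(X_train, alpha, b, sigma, X_new):
    # Eq. (2.4): y(x) = sum_i alpha_i * k(x, x_i) + b
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Fitting a smooth one-dimensional function with a moderate penalty factor and kernel width recovers the training targets closely, which is the behavior the prediction model relies on.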

A Prediction Method for the Ship Rust Removal Effect



2.2 Optimization of LSSVM Parameters with PSO

In the LSSVM model, the selection of the penalty factor $c$ and the kernel function parameter $\sigma$ has a great influence on the prediction results, while the LSSVM itself offers a certain "blindness" in the selection of these parameters [6]. Particle swarm optimization (PSO) is an optimization algorithm based on swarm intelligence. In this paper, PSO is used to search for the optimal combination of the penalty factor $c$ and the kernel function parameter $\sigma$ of the LSSVM [7]. The concrete steps are as follows:

Step 1 Initialization: set the search range, the learning factors $c_1$ and $c_2$, the number of generations $E$, and the ranges of the penalty factor $c$ and kernel function parameter $\sigma$.

Step 2 Training: the position of each particle $i$ is taken as its individual optimal position; the training sample set $\{(x_i, y_i)\}$ is substituted into the linear equations (2.3), and the training error is used as the fitness value of each particle. The smaller the error, the better the fitness.

Step 3 Group best: find the fittest particle among the $k$ particles and set the iteration counter to 0.

Step 4 Update: update the positions and velocities of the particles and calculate their current fitness values at iteration $t$.

Step 5 Determine the individual and group optimal positions: compare the current fitness of each particle with its own best fitness so far; if it is better, take the current position as that particle's individual optimal position. Then compare each particle's optimal position with the fitness of the group optimal position; if it is better, take it as the group optimal position.
Step 6 Calculate the inertia weight: if the dynamic interval of the weight is contracted according to a convex function, a better balance between convergence speed and precision can be found and the comprehensive performance improved. With $w_{start} = w_{max}$ and $w_{end} = w_{min}$, the weight at iteration $k$ is

$$w(k) = w_{max} - (w_{max} - w_{min}) \cdot \left(k / k_{max}\right)^2$$

Then update the next generation of particles according to the velocity and position equations of standard PSO.


Step 7 Judgment of terminating conditions: If we achieve the preset goal or the maximum number of iterations, end the optimization and obtain the optimal penalty factor c and kernel function parameter r. Otherwise, turn to step 2 and continue the next iteration.



3 Prediction Model of the Ship Rust Removal Effect of the Abrasive Jet Based on PSO-LSSVM

3.1 Sampling and Preprocessing of Sample Data

The sample data were collected on a self-made prototype. River sand is chosen as the abrasive, because this medium does not pollute rivers. The angle between the jet and the corroded steel plate is 60°, and the moving speed of the hand-held spray gun is 400 mm/min. Preliminary experiments showed that when the system pressure exceeds 7 MPa the rust removal effect is good; however, if the pressure is too high, the operator finds it difficult to control the strong recoil of the spray gun, which is dangerous and also damaging to the equipment. When the target distance, i.e. the distance between the jet nozzle and the surface of the steel plate, is too large, the jet loses too much power and energy and cannot effectively break the rust spots on the ship surface. When the abrasive size is too small and the water-sand ratio too large, the rust removal effect is not obvious; when the abrasive size is too large and the water-sand ratio too small, the pipe and nozzle wear and clog. The rust removal effect is generally expressed by the Sa grade, divided into the four grades Sa1, Sa2, Sa2.5 and Sa3. To quantify the effect, these four grades are mapped to the values 1, 3, 4 and 5 respectively. Because of the limitations of the actual processing conditions, the sample data cannot be increased indefinitely, so typical data that reflect the process law well were selected. The system pressure, water-sand ratio, abrasive particle size and target distance are used as the inputs of the PSO-LSSVM prediction model, and the rust removal effect is taken as the model output; some of the data are shown in Table 1.

Table 1. Partial sample data

No.  System pressure (MPa)  Water/sand volume ratio  Abrasive particle size (mesh)  Target distance (mm)  Derusting effect
1    7    3.6   16   300   3.72
2    7    3.8   18   350   3.81
3    7    4     20   400   3.98
4    7    4.2   24   450   4.06
5    7    4.4   28   500   4.15
6    7    3.8   24   300   3.86
7    7    3.6   28   350   3.77
8    7    4.2   16   400   4.02
9    7    4.4   18   450   4.10
10   7    4     20   500   3.95
11   8    4     16   300   4.16
12   8    3.8   18   350   4.12
…
50   (rows 13 to 50 not reproduced in the extracted text)


The parameter levels are:

(1) The system pressure is 7, 8, 9, 10 and 11 MPa.
(2) The volume ratio of water to river sand is 3.6 (4.32:1.2), 3.8 (4.58:1.2), 4 (7.2:1.8), 4.2 (7.56:1.8) and 4.4 (7.92:1.8).
(3) The abrasive mesh number is 16, 18, 20, 24 and 28.
(4) The target distance is 300, 350, 400, 450 and 500 mm.

35 samples were used as training samples for the PSO-LSSVM prediction model, and 15 samples were used as test samples. To eliminate the influence of the different parameter dimensions on prediction speed, accuracy and generalization ability, the data are first normalized before being fed to the prediction model:

$$\hat{x}_i = \frac{x_i - x_{min}}{x_{max} - x_{min}} \qquad (3.1)$$

In the formula, $\hat{x}_i$ is the normalized value, $x_i$ is the initial value, $x_{min}$ is the minimum value, and $x_{max}$ is the maximum value.

3.2 Result Analysis

The algorithm was implemented on the MATLAB R2015b software platform by calling the functions initlssvm, trainlssvm and simlssvm of the least squares support vector machine toolbox, and the simulation experiment of the prediction model was carried out. The population size is 40, the maximum number of iterations is 500, and the termination tolerance is $10^{-5}$; $c_1 = c_2 = 2$, the initial inertia weight $w_{max} = 0.9$, and the final weight $w_{min} = 0.45$. The search range of the penalty factor $c$ of the LSSVM is $[1, 1000]$, and that of the kernel function parameter $\sigma^2$ is $[0.001, 100]$. After PSO optimization, the final LSSVM parameters $(c, \sigma)$ are $(6.5, 0.109)$. Figure 1 compares the predicted effect on the training samples with the real effect, and Fig. 2 compares the predicted and real effects on the test samples.

Fig. 1. Test results of the training set



Fig. 2. Prediction results of the test set

To further verify the prediction performance of the model, a BP neural network is used to predict the test set, again on the MATLAB R2015b software platform, calling the functions newff and train of the artificial neural network toolbox to train and test the network; its prediction results are shown in Fig. 3.

Fig. 3. Prediction results of BP neural network model for the test set

It is found that the prediction effect of the LSSVM is better than that of the BP neural network. The maximum relative error of the BP neural network is 3.56%, the minimum relative error is 0.05%, and the average relative error is 1.42%. The maximum relative error of the LSSVM is 2.47%, the minimum relative error is 0.05%, and the average relative error is 0.96%. It can be seen that the LSSVM can be effectively applied to small-sample prediction problems.
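The relative-error statistics used to compare the two models can be computed directly from the measured and predicted derusting values. A minimal sketch (the function name and the example numbers are ours, for illustration only):

```python
import numpy as np

def relative_error_stats(y_true, y_pred):
    # Per-sample relative error in percent, and the max/min/mean
    # summary statistics used to compare prediction models.
    r = np.abs((y_pred - y_true) / y_true) * 100.0
    return r.max(), r.min(), r.mean()
```

For example, measured effects `[4.0, 2.0]` against predictions `[4.1, 2.0]` give a maximum relative error of 2.5%, a minimum of 0% and a mean of 1.25%.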

4 Conclusion

Determining the ship rust removal process parameters of the pre-mixed abrasive jet by the traditional single-factor experiment method and workers' experience has clear shortcomings. A prediction model of the rust removal effect based on PSO-LSSVM is therefore proposed. The LSSVM captures the highly nonlinear relationship between the ship rust removal effect and the process parameters, and the particle swarm optimization algorithm optimizes the LSSVM parameters over the global range. The results show that the prediction model truly reflects the process rule between the rust removal effect of the pre-mixed abrasive jet and the process parameters: the maximum relative error is 2.47%, the minimum relative error is 0.05%, and the average relative error is 0.96%. Compared with the BP neural network, the prediction accuracy and convergence speed are significantly improved.

Acknowledgments. This work was supported by the Shanghai Committee of Science and Technology, China (Grant No. 17511107302) and the Project of Key Discipline of Shanghai Polytechnic University (No. XXKZD1603).

References

1. Xie, Y.F., Liu, H.W., Hu, Y.X.: Study on the method of determining the process parameters of ship plate laser rust removal. China Laser 4, 103–110 (2016)
2. Yang, G.: Simulation study on abrasive water jet derusting technology. Electr. Mech. Eng. 30(8), 929–932 (2013)
3. Liu, H., Wang, J., Kelson, N., et al.: A study of abrasive waterjet characteristics by CFD simulation. J. Mater. Process. Technol. 153–154(1), 488–493 (2004)
4. Cheng, P.F., Zhou, X.H., Tang, Y.: Experimental study of high pressure abrasive water jet descaling system for alloy wire rods. Surf. Technol. 45(4), 144–148 (2016)
5. Zhou, H.P., Xiong, B.J., Gui, H.X.: Application of least squares support vector machine optimization model in coal mine safety prediction. Surveying Mapp. Sci. 39(7), 150–154 (2014)
6. Jiang, J.N., Liang, B., Zhang, J.: Prediction of gas content based on particle swarm optimization least squares support vector machine. J. Liaoning Univ. Eng. Technol. 28(3), 363–366 (2009)
7. Dong, H.B., Li, D.J., Zhang, X.P.: A particle swarm optimization algorithm for dynamically adjusting inertial weights. Comput. Sci. 45(2) (2018)

A Review of Dynamic Control of the Rigid-Flexible Macro-Micro Manipulators

Xuan Gao, Zhenyu Hong, and Dongsheng Zhang
College of Aeronautical Engineering, Civil Aviation University of China, Tianjin 300300, China
[email protected]

Abstract. A review of the control strategies for rigid-flexible macro-micro manipulators is presented. This type of manipulator has great application potential owing to its high workspace-to-volume ratio and fast response. It generally consists of a cable-driven macro parallel mechanism and a rigid-driven micro mechanism serially mounted on the former. Because of the flexibility and redundant actuation of the macro mechanism, the two sub-mechanisms interact and the cable tension solutions are non-unique. To control macro-micro manipulators, it is essential to tackle the problems of dynamic modeling, decoupling and cable tension optimization. The methods for solving the dynamic problems are compared, and the targets, algorithms and evaluation indexes of cable tension optimization are summarized. Based on the dynamic analysis and cable tension optimization, the control methods of the global mechanism and the performance of kinematic and dynamic controllers are reviewed. Finally, the prospects of the methods for the three key issues, dynamic decoupling, cable tension optimization and dynamic control, are pointed out.

Keywords: Macro-micro manipulators · Dynamic decoupling · Cable tension optimization · Instantaneity and continuity · Dynamic control · Control stability

1 Introduction

In 1993, Sharon et al. [1] first introduced the macro-micro robot, which consists of a micro mechanism (small body) and a macro mechanism (large body). During operation, the macro manipulator moves on a large scale relative to the ground, while the micro manipulator moves on a small scale relative to the large body [2]. This paper focuses on macro-micro manipulators composed of a cable-driven parallel mechanism and a rigid-driven serial/parallel mechanism. The cable-driven mechanism has the advantages of a high workspace/volume ratio, fast response, simple structure, easy disassembly and low cost. The rigid mechanism, meanwhile, can tune and compensate the vibration caused by the flexibility and nonlinearity of the cables and by external excitation. More importantly, it enlarges the workspace and enhances the functionality of the flexible cable-driven manipulator. However, such a mechanism faces a critical problem: dynamic control.

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 218–225, 2019.



Recently, many research institutions and companies have designed products for different domains. For example, the RoboCrane [3], designed by Bostelman, can be used for aircraft maintenance, dual manipulation, cargo handling and cutting. The large adaptive reflector (LAR) telescope feed device [4–9] was developed by the National Research Council of Canada. For virtual sports and fetching cargo at high speed, the Falcon robot [10–12] was developed at Ritsumeikan University. The SkyCam [13, 14] and Spidercam [13, 15] are devised for positioning and shooting over a wide range. The NIMS series of robots, investigated by Borgstrom et al. [16], is aimed at environmental monitoring. The IPAnema series of robots developed by Pott et al. [17] is applied in planar and spatial configurations in industrial production. The Five-hundred-meter Aperture Spherical Telescope (FAST) [20–23] is the main application of macro-micro manipulators, used to search for the first generation of stars.

2 Dynamic Modeling and Decoupling

The dynamic model is the basis of the dynamic analysis, dynamic performance evaluation and optimization design of the macro-micro manipulator. The interaction between the macro and micro mechanisms causes dynamic coupling [24]. The coupling not only affects the position accuracy of the mechanism but also complicates the design of the controller. Decoupling should therefore be considered first for dynamic control.

2.1 Decoupling in Dynamic Modeling

To handle the dynamic coupling, Nenchev et al. [25, 26] proposed a decoupling scheme based on the null space of the so-called inertia coupling matrix and realized decoupled control of the compliant mechanism by workspace feedback control. In Ref. [27] the performance of this control method was examined by simulations, but the conditions for the existence of the reaction null space are stringent. Compared with the null-space approach, the decoupling method adopted in Refs. [28–30] is relatively convenient. The authors used singular perturbation theory and the concept of the integral manifold to divide the coupled dynamics into decoupled slow and fast time-varying subsystems, and then adopted different control strategies for each. Among them, Jian et al. [30] introduced the RISE control method to track the trajectory of the slow time-varying system and eliminate unmodeled errors; for the fast time-varying system, sliding mode control was adopted to suppress the vibration and decouple the nonlinear dynamic coupling. Nevertheless, the simulation is limited to a single open-chain flexible structure, and the applicability to multi-chain manipulators is not verified. A different perspective was proposed by George et al. in [31, 32]. The authors regarded the kinematic coupling force between the macro and micro mechanisms as an inertial damping force. The recursive Newton-Euler method was used to derive the global dynamic model, and the nonlinear coupling term was separated. By using the active damping method, the asymmetric coupling term with no real eigenvalues was transformed into a symmetric decoupled term with positive eigenvalues, and a decoupled dynamic model was then established.

2.2 Decoupling in Controller Design

Instead of decomposing the coupling term by mathematical methods, this line of work estimates it with control algorithms. In [24, 33], Lew et al. dealt with the rigid-flexible coupling by designing feedback controllers based on the vibration detected by sensors mounted on the platform; the input parameters of the controller could be adjusted to damp the inertial force. Similarly, Xu et al. [34] extracted the nonlinear coupling term from the dynamic equations of the system and established a PD controller to counteract the force of the micro mechanism acting on the macro mechanism, with fuzzy logic control adopted to tune the PD gains. Instead of designing controllers specifically for the coupling term, Yang et al. [35] employed an extended state observer (ESO) to estimate the nonlinear coupling term and the state variables; the Lagrange method was used to derive the dynamics of a two-link flexible manipulator and hence the nonlinear coupling term. This control scheme proved feasible and effective. For the macro-micro mechanism, it is more suitable to decouple during dynamic modeling. On the one hand, decoupling in the modeling process consumes less time than decoupling by controllers, which is more conducive to real-time control of the mechanism. On the other hand, it avoids further increasing the complexity of the controller.

3 Dynamic Control Methods

3.1 Cable Tension Optimization

Because cables cannot exert compressive forces, the cable-driven mechanism must be redundantly actuated to keep the end-effector fully controllable. As a result, the solution for the cable tensions is not unique, which calls for cable tension optimization. Gosselin et al. [36] addressed the optimization of the cable tension distribution of a redundantly actuated cable-driven parallel mechanism. The Euclidean norm and the p-norm (including the 4-norm and the infinity norm) of the deviation between the cable tensions and their mean values were chosen as optimization targets, and the tension distribution problem was transformed into the solution of univariate polynomial extrema. Wen et al. [37] used a modified gradient projection method to calculate the cable tensions at each position; tension-level indices were then introduced to provide a systematic and practical way of setting the desired cable tension level. Simulation results showed that the modified gradient projection method could quickly and smoothly solve the tensions along a given path. Gouttefarde et al. [38] described a general tension distribution method for an n-degree-of-freedom mechanism driven by n+2 cables: the set of feasible cable tensions was regarded as a convex polygon, a geometric algorithm was introduced to find the vertexes of the polygon rapidly, clockwise or anticlockwise, and iterative optimization from the starting point to a polygon vertex was implemented; however, the applicability of this method needs further verification. To apply an eight-cable parallel mechanism to rocket launch in low-gravity environments, Cao et al. [39] explored the optimization of cable tensions: based on a constant-direction vector force output model, the cable tensions were calculated with the closed-form cable-force optimization theory, and rules for the tension distribution were proposed for vector force outputs of different values in various directions. In the paper of Müller et al. [40], an improved puncture method was developed to compute minimum cable tension distributions for redundantly restrained parallel manipulators. Bo et al. [41] proposed a projection algorithm for quickly finding the two straight lines intersecting at the optimal point. However, the above studies considered neither the instantaneity nor the continuity of cable tension optimization. In the research of Su [42], an optimal model of the cable tension distribution was established on the basis of the Newton-Euler method and convex optimization theory, and the optimal cable tensions were obtained by polynomial optimization. To evaluate cable tension optimization methods, the author put forward the worst-case and average time cost of the tension solution as indexes of instantaneity, and the maximal absolute tension change and the tension change rate as indexes of continuity. Instantaneity and continuity are vital indexes for evaluating cable tension optimization methods.
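As an illustration of the 2-norm objectives discussed above, the tension vector that satisfies the equilibrium constraint A t = w while staying closest (in the Euclidean norm) to the mid-range tension has a closed-form solution via the pseudoinverse and null space of the structure matrix A. The NumPy sketch below is a generic textbook construction under our own naming, not any of the cited algorithms; the feasibility check against the tension limits is our addition.

```python
import numpy as np

def tension_distribution(A, w, t_min, t_max):
    """Among all tensions t with A @ t = w, return the one closest to the
    mid-range tension, plus a flag for whether it respects the limits."""
    t_mid = 0.5 * (t_min + t_max) * np.ones(A.shape[1])
    t_p = np.linalg.pinv(A) @ w              # minimum-norm particular solution
    # Null-space basis of A from the SVD (rows of Vt beyond the rank)
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > 1e-10).sum())
    N = Vt[rank:].T                          # columns span null(A)
    # Shift along the null space toward the mid-range tension
    lam = np.linalg.lstsq(N, t_mid - t_p, rcond=None)[0]
    t = t_p + N @ lam
    feasible = bool(np.all((t >= t_min) & (t <= t_max)))
    return t, feasible
```

When the unconstrained optimum violates a tension limit, iterative or vertex-search methods such as those cited above are needed; this closed form covers only the interior case.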
The system tends to accumulate error when instantaneity is insufficient, and a lack of continuity leads to uncontrollable vibration of the system. Therefore, cable tension optimization methods that take instantaneity and continuity as indexes are the research direction for the macro cable-driven mechanism.

3.2 Design of the Dynamic Controller

The dynamic controllers can be divided into two categories: overall controllers and independent controllers. Currently, independent controllers are widely used in engineering, since two independent controllers are relatively simple to design and can work together. To maintain stability, it is important to reduce the influence of external disturbance and internal perturbation in the controller design. In the patent of Yun et al. [43], a non-oscillation control algorithm for an industrial cable-driven crane was proposed to restrain the oscillation and reduce the position error of the load. Zi [44] explored a feedback control method for a cable-driven parallel robot with variable structure parameters, in which the motion of the system is controlled by changing parameters collected by the feedback module of the control system. In the research of Sharf et al. [45], the internal force imposed by the micro mechanism on the macro mechanism was obtained through dynamic analysis; with this internal force taken as a control variable, the vibration of the cable-driven macro mechanism was suppressed by controlling the motion of the rigid micro mechanism. Yoshikawa et al. [2] developed a quasi-static hybrid dynamic control algorithm for the macro-micro robot: the macro part was controlled by force control and the micro part by hybrid position/force control to compensate for the position and force errors. Kraus et al. [46] divided the working status of the cable-driven parallel mechanism into a non-contact state and a contact state; the control system consists of a force controller, a position controller, a speed controller and a brake controller, and different controllers are triggered in different states. Compared with Kraus, the dynamic controller in [47] for cable-driven camera robots is simpler, being built on a modified PD control algorithm. Many scholars have studied dynamic control methods for the large radio telescope. Cheng et al. [48] considered the macro-micro mechanism as a multi-body system, adopted the Newton-Euler method to analyze the dynamics of the Stewart platform, and designed a PD controller to control the system. In the paper of Lu et al. [49], a feedback control system for macro-micro radio telescopes was established based on PID control algorithms; controllers A, B and C were used to control, respectively, the position of the upper platform and the cable tensions, the orientation of the upper platform, and the position and orientation of the Stewart platform. Zi et al. [50] utilized a fuzzy plus proportional-integral control (FPPIC) method to control the wind-induced vibration in trajectory tracking of the feed cabin. To meet the required positioning and pointing accuracy of the radio telescope, the adaptive interaction algorithm was introduced by Duan et al. [51–53] to adjust the PID parameters in real time, and an adaptive interactive PID supervisory controller was designed to control the motion of the Stewart platform. Treating the slow motion of the mechanism as quasi-static, Du et al. [54] designed a nonlinear PD controller for the macro parallel mechanism to suppress the vibration of the end effector caused by the external environment and by changes in cable length.
The interaction of the two mechanisms and the external forces of the environment are always present during motion. The key point of dynamic controller design is therefore to maintain the stability of the manipulator under external disturbance and internal perturbation. The dynamic control methods of the macro-micro manipulator will be further investigated to improve its accuracy and stability of motion.
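To make the vibration-damping role of the PD controllers surveyed above concrete, the sketch below simulates PD control of a single undamped flexible mode, a mass-spring stand-in for one vibration mode of the cable-driven macro mechanism. It is a generic textbook illustration under our own assumptions (parameter values, semi-implicit Euler integration), not any of the cited controllers.

```python
def simulate_pd(kp, kd, x0=1.0, x_ref=0.0, m=1.0, k=4.0,
                dt=0.001, steps=20000):
    """Simulate PD control of one flexible mode m*x'' + k*x = u,
    with u = kp*(x_ref - x) - kd*x' (semi-implicit Euler)."""
    x, v = x0, 0.0
    for _ in range(steps):
        u = kp * (x_ref - x) - kd * v   # PD law: stiffness + active damping
        a = (u - k * x) / m             # plant: undamped spring mode
        v += a * dt                     # update velocity first (semi-implicit)
        x += v * dt
    return x, v
```

With kd = 0 the closed loop m x'' + (k + kp) x = 0 oscillates forever; the derivative term adds the damping kd x' that drives the mode to rest, which is exactly the role the micro mechanism or active damping plays for the flexible macro mechanism.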

4 Current and Future Developments

This paper reviews the current research results on macro-micro manipulators concerning dynamic decoupling and the design of dynamic controllers. Considering the particularity of the macro-micro manipulator, the future developments can be summarized as follows. The ideas of dynamic decoupling include decoupling in the modeling process and decoupling by the controller; in comparison, the former is more suitable for the macro-micro manipulator. Cable tension optimization methods that take instantaneity and continuity as goals are the research direction for the cable-driven macro mechanism. It is critical for dynamic controllers to maintain the stability of the manipulator under the influence of external disturbance and internal perturbation. The dynamic control methods of the macro-micro manipulator will continue to attract extensive attention from scholars.



Acknowledgements. This work was supported by the United National Science Funds and Civil Aviation Funds under Grant [No. U1733128], the National Natural Science Foundation of China under Grant [No. 51705519], and the Basic Research Funds for National Universities under Grants [No. 3122017024, 3122017042].

References

1. Sharon, A.: The macro/micro manipulator: an improved architecture for robot control. Robot. Comput. Integr. Manufact. 10(3), 209–222 (1993)
2. Yoshikawa, T., Harada, K., Matsumoto, A.: Hybrid position/force control of flexible-macro/rigid-micro manipulator systems. IEEE Trans. Robot. Autom. 12(4), 633–640 (1996)
3. Bostelman, R., Albus, J., Dagalakis, N., et al.: Applications of the NIST Robocrane. In: Proceedings of International Symposium on Robotics and Manufacturing, Maui, HI, pp. 403–407 (1994)
4. Dewdney, P., Nahon, M., Veidt, B.: The large adaptive reflector: a giant radio telescope with an aero twist. Can. Aeronaut. Space J. 48(4), 239–250 (2002)
5. Taghirad, H.D., Nahon, M.A.: Forward kinematics of a macro-micro parallel manipulator. In: Proceedings of the 2007 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 84–89. IEEE, Zurich, Switzerland (2007)
6. Taghirad, H.D., Nahon, M.A.: Jacobian analysis of a macro-micro parallel manipulator. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1–6. IEEE Xplore (2007)
7. Taghirad, H.D., Nahon, M.A.: Dynamic analysis of a macro-micro redundantly actuated parallel manipulator. Adv. Robot. 22(9), 949–981 (2008)
8. Taghirad, H.D., Nahon, M.A.: Kinematic analysis of a macro-micro redundantly actuated parallel manipulator. Adv. Robot. 22(6–7), 657–687 (2008)
9. Wiley, J.A.: Systems and methods for aerial cabled transportation. US Patent 8205835 (2012)
10. Kawamura, S., Ida, M., Wada, T., et al.: Development of a virtual sports machine using a wire drive system: a trial of virtual tennis. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 111–116. IEEE Xplore (1995)
11. Kawamura, S., Choe, W., Tanaka, S., et al.: Development of an ultrahigh speed robot FALCON using wire drive system. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 215–220. IEEE Xplore (1995)
12. Morizono, T., Kurahashi, K., Kawamura, S.: Realization of a virtual sports training system with parallel wire mechanism. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3025–3030. IEEE Xplore (1997)
13. Tang, X.: An overview of the development for cable-driven parallel manipulator. Adv. Mech. Eng. 2014(1), 1–9 (2014)
14. Thompson, R.R., Blackstone, M.S.: Three-dimensional moving camera assembly with an informational cover housing. US Patent 6873355 (2005)
15. Förster, D.: Position data-powered control system for camera and stage equipment for the automated targeting of defined mobile objects. DE Patent 202010013678 (2010)
16. Borgstrom, P.H., Borgstrom, N.P., Stealey, M.J., et al.: Design and implementation of NIMS3D, a 3-D cabled robot for actuated sensing applications. IEEE Trans. Robot. 25(2), 325–339 (2009)
17. Pott, A.: Forward kinematics and workspace determination of a wire robot for industrial applications. In: Advances in Robot Kinematics: Analysis and Design, pp. 451–458 (2008)



18. Duan, X.C., Qiu, Y.Y., Duan, B.Y.: Adaptive interactive PID supervisory control of the macro-micro parallel manipulator. J. Mech. Eng. 46(1), 10–17 (2010)
19. Nan, R.: Five hundred meter aperture spherical radio telescope (FAST). Sci. China Ser. G Phys. Mech. Astron. (02) (2006)
20. Li, H., Zhu, W.: Static stiffness analysis of flexible-cable-driven parallel mechanism. J. Mech. Eng. 46(03), 8–16 (2010)
21. Lu, Y.J.: Positioning and Orientating Control Study on the Feed Support System of a Large Radio Telescope. Tsinghua University (2007)
22. Duan, B.Y., Qiu, Y.Y.: Automatic switching and driving device for multi-beam feed of large-scale flexible radio telescope antenna. CN Patent (2006)
23. Du, J., Bao, H., Yang, D., et al.: Initial equilibrium configuration determination and shape adjustment of cable network structures. Mech. Based Des. Struct. Mach. 40(3), 277–291 (2012)
24. Lew, J.Y., Moon, S.M.: Acceleration feedback control of compliant base manipulators. In: Proceedings of the American Control Conference, vol. 3, pp. 1955–1959. IEEE (1999)
25. Nenchev, D.N., Yoshida, K., Uchiyama, M.: Reaction null-space based control of flexible structure mounted manipulator systems. In: Proceedings of the IEEE Conference on Decision and Control, vol. 4, pp. 4118–4123. IEEE (1996)
26. Nenchev, D.N., Yoshida, K., Vichitkulsawat, P., et al.: Experiments on reaction null-space based decoupled control of a flexible structure mounted manipulator system. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 3, pp. 2528–2534. IEEE Xplore (1997)
27. Nenchev, D.N.: Reaction null space of a multibody system with applications in robotics. Mech. Sci. 4(1), 97–112 (2013)
28. Ghorbel, F., Spong, M.W.: Adaptive integral manifold control of flexible joint robot manipulators. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 707–714. IEEE Xplore (1992)
29. Moallem, M., Khorasani, K., Patel, R.V.: An integral manifold approach for tip-position tracking of flexible multi-link manipulators. IEEE Trans. Robot. Autom. 13(6), 823–837 (1998)
30. Jian, L., Wen, T., Sun, F.: Adaptive RISE control of a multi-link flexible manipulator based on integral manifold approach. In: International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, pp. 1–6. IEEE (2014)
31. George, L.E., Book, W.J.: Inertial vibration damping control of a flexible base manipulator. JSME Int. J. 46(3), 798–806 (2003)
32. Pereira, E., Aphale, S.S., Feliu, V., et al.: Integral resonant control for vibration damping and precise tip-positioning of a single-link flexible manipulator. IEEE/ASME Trans. Mechatron. 16(2), 232–240 (2011)
33. Lew, J.Y., Moon, S.M.: A simple active damping control for compliant base manipulators. IEEE/ASME Trans. Mechatron. 6(3), 305–310 (2001)
34. Xu, W.L., Yang, T.W., Tso, S.K.: Dynamic control of a flexible macro-micro manipulator based on rigid dynamics with flexible state sensing. Mech. Mach. Theory 35(1), 41–53 (2000)
35. Yang, H., Yu, Y., Yuan, Y., et al.: Back-stepping control of two-link flexible manipulator based on an extended state observer. Adv. Space Res. 56(10), 2312–2322 (2015)
36. Gosselin, C., Grenier, M.: On the determination of the force distribution in overconstrained cable-driven parallel mechanisms. Meccanica 46(1), 3–15 (2011)

A Review of Dynamic Control of the Rigid-Flexible


37. Wen, B.L., Song, H.Y., Yang, G.: Optimization of tension distribution for cable-driven manipulators using tension-level index. IEEE/ASME Trans. Mechatron. 19(2), 676–683 (2014) 38. Gouttefarde, M., Lamaury, J., Reichert, C., et al.: A versatile tension distribution algorithm for n-DOF parallel robots driven by cables. IEEE Trans. Robot. 31(6), 1444–1457 (2015) 39. Ling, C.A.O., Xiaoqiang, T.A.N.G., Weifang, W.A.N.G.: Tension optimization and experimental research of parallel mechanism driven by 8 cables for constant vector force output. Robot 37(6), 641–647 (2015) 40. Müller, K., Reichert, C., Bruckmann, T.: Analysis of a real-time capable cable force computation method. In: Cable-Driven Parallel Robots, pp. 614–616. Springer (2015) 41. Bo, O., Shang, W.: Rapid optimization of tension distribution for cable-driven parallel manipulators with redundant cables. Chin. J. Mech. Eng. 29(2), 1–8 (2016) 42. Su Y.: Mechanical Analysis and Performance Optimization of the Cable-Driven Parallel Robot. Xidian University (2014) 43. Yun, J.S., Park, B.S., Lee, J.S., et al.: Velocity control method for preventing oscillations in crane: US, US 5550733 A (1996) 44. Zi, B., Qian, S., Liu, H., et al.: Variable structure parameter Rouxuo parallel robot system and control method: CN104440870A (2015) 45. Sharf, I.: Active damping of a large flexible manipulator with a short-reach robot. In: Proceedings of the American Control Conference, vol. 5, pp. 3329–3333. IEEE (1995) 46. Kraus, W., Miermeister, P., Schmidt, V., et al.: Hybrid position/force control of a cabledriven parallel robot with experimental evaluation 6(2), 119–125 (2015) 47. Wei, H., Qiu, Y., Su, Y.: Motion control strategy and stability analysis for high-speed cabledriven camera robots with cable inertia effects 13(5) (2016) 48. Cheng, Y., Ren, G., Dai, S.L.: The multi-body system modeling of the Gough-Stewart platform for vibration control. J. Sound Vib. 271(3–5), 599–614 (2004) 49. 
Lu, Y., Zhu, W., Ren, G.: Feedback control of a cable-driven Gough-Stewart platform. IEEE Trans. Robot. 22(1), 198–202 (2006) 50. Zi, B., Duan, B.Y., Du, J.L., et al.: Dynamic modeling and active control of a cablesuspended parallel robot. Mechatronics 18(1), 1–12 (2008) 51. Duan, X., Qiu, Y., Zhao, J.M.Z.: Motion prediction and supervisory control of the macro– micro parallel manipulator system. Robotica 29(7), 1005–1015 (2010) 52. Duan, X., Qiu, Y., Du, J., et al.: Supervisory control of a macro-micro parallel manipulator system. In: International Conference on Mechatronics and Automation, pp. 639–644. IEEE (2010) 53. Duan, X.C., Qiu, Y., Du, J., et al.: Real-time motion planning for the macro-micro parallel manipulator system 19(6), 4214–4219 (2011) 54. Du, J., Bao, H., Cui, C., et al.: Nonlinear PD control of a long-span cable-supporting manipulator in quasi-static motion. J. Dyn. Syst. Meas. Contr. 134(1), 011–022 (2012)

Analysis of Speech Enhancement Algorithm in Industrial Noise Environment

Lilan Liu1, Gan Sun1, Zenggui Gao1, and Yi Wang2

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, China
[email protected]
2 Business School, Plymouth University, Plymouth, UK

Abstract. With the application of intelligent speech technology in the field of intelligent manufacturing, it is necessary to propose a speech enhancement algorithm suitable for industrial noise. To solve this problem, four types of optimized speech enhancement algorithms were selected based on noise reduction capability and degree of distortion: the multi-band spectral subtraction algorithm, the Wiener algorithm based on a priori SNR estimation, minimum mean square error of log-spectral amplitude estimation, and the subspace method based on Eigen-value decomposition (EVD) embedded pre-whitening. Speech corrupted with industrial noise at low SNRs of −5 dB, 0 dB, and 5 dB was used for experimental analysis. The experimental results were evaluated by the segmented SNR, perceptual evaluation of speech quality (PESQ), and time-domain waveforms, indicating that the Wiener algorithm based on a priori SNR estimation eliminates noise better and yields a larger improvement in speech quality, so it is more suitable for industrial noise environments than the other three algorithms.

Keywords: Industrial noise · Speech enhancement · Wiener algorithm based on a priori SNR estimation · Low signal to noise ratio (SNR)

1 Introduction

Speech enhancement refers to recovering the original speech signal as much as possible from noisy speech, improving speech quality and intelligibility, and providing high-quality speech data for subsequent speech recognition, speech synthesis, and other work. At present, speech enhancement technology has great potential applications in communications, multimedia, human-computer interaction and other fields [1]. In order to apply intelligent speech technology to industrial production, and in response to the "Made in China 2025" plan, this paper selects the industrial noise environment as the research object, analyzes and compares the current mainstream speech enhancement algorithms, and identifies algorithms suitable for suppressing noise in the industrial environment. It is hoped that this paper will serve as a reference for the application of intelligent speech technology in industrial production.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 226–236, 2019.



The current classical speech enhancement algorithms include the following four categories: spectral subtraction, Wiener filtering, methods based on statistical models, and subspace algorithms. Spectral subtraction is a basic speech enhancement algorithm: the noise spectrum is subtracted from the noisy speech spectrum to obtain the clean signal spectrum. The Wiener filter algorithm calculates the enhanced signal by optimizing the mean square error criterion [2]. Algorithms based on statistical models treat the speech enhancement problem within a statistical estimation framework; the minimum mean square error (MMSE) algorithm belongs to this category. The subspace algorithm estimates the clean signal by removing the noisy vector components that fall in the "noise space." Under low-SNR conditions, the existing algorithms often suffer from phenomena such as signal distortion, noise residuals, and music noise generation. Therefore, signal processing must trade off signal distortion against noise cancellation [3]. This paper describes these four types of speech enhancement algorithms and performs simulation analysis and comparison.

2 Principle of Algorithms

2.1 Multi-band Spectral Subtraction Algorithm

Spectral subtraction assumes that the noise is additive, and the clean signal spectrum is obtained by subtracting the noise spectrum estimate from the noisy speech spectrum. Assume that the noisy speech $y(n)$ consists of clean speech $x(n)$ and additive noise $d(n)$:

$y(n) = x(n) + d(n)$ (1)


Taking the discrete-time Fourier transform of both sides gives:

$Y(\omega) = X(\omega) + D(\omega)$ (2)


We assume that $d(n)$ has zero mean and is uncorrelated with the clean signal, so the power spectrum subtraction estimate of the clean speech can be expressed as:

$|\hat{X}(\omega)|^2 = |Y(\omega)|^2 - |\hat{D}(\omega)|^2$ (3)

In the formula, $|\hat{X}(\omega)|$ is the clean signal magnitude spectrum estimate, and $|\hat{D}(\omega)|$ is the noise magnitude spectrum estimate obtained when there is no voice activity. Because the spectral subtraction algorithm produces music noise, Berouti [4] proposed an improved method that subtracts an overestimate of the noise spectrum and prevents the result from falling below a preset floor. The specific form is as follows:

$|\hat{X}(\omega)|^2 = \begin{cases} |Y(\omega)|^2 - \alpha|\hat{D}(\omega)|^2, & \text{if } |Y(\omega)|^2 > (\alpha + \beta)|\hat{D}(\omega)|^2 \\ \beta|\hat{D}(\omega)|^2, & \text{otherwise} \end{cases}$ (4)




Where $\alpha\,(\alpha \ge 1)$ is the over-subtraction factor and $\beta\,(0 < \beta \le 1)$ is the spectral floor parameter. The spectral subtraction algorithm proposed by Berouti assumes that the noise affects all spectral components equally, but this is not the case: the noise may affect some frequencies of the speech segment more than others. The literature [5] proposes the multi-band spectral subtraction (Mband) algorithm, which is expected to reduce speech distortion by estimating a subtraction factor for each frequency band. The multi-band spectral subtraction algorithm divides the speech spectrum into N non-overlapping sub-bands. The estimate of the clean speech signal spectrum in the k-th sub-band is:

$|\hat{X}_k(\omega_i)|^2 = |\bar{Y}_k(\omega_i)|^2 - \alpha_k \delta_k |\hat{D}_k(\omega_i)|^2, \quad b_k \le \omega_i \le e_k$ (5)

Where $\omega_i$ is the discrete frequency, $b_k$ and $e_k$ are the start and end frequencies of the k-th band, $\alpha_k$ is the over-subtraction factor of the k-th sub-band, and $\delta_k$ is the sub-band subtraction factor. $|\bar{Y}_k(\omega_i)|$ indicates that the amplitude spectrum has been smoothed to reduce fluctuations in the speech spectrum and improve speech quality. The weighted average estimate of the noisy signal amplitude spectrum is:

$|\bar{Y}_k(\omega_i)| = \sum_{j=-M}^{M} W_j |Y_{k-j}(\omega_i)|$ (6)

Where $W_j\,(0 < W_j < 1)$ is the weight assigned to each frame. $\alpha_k$ is a function of the signal-to-noise ratio of the k-th sub-band, as in Eq. (7):

$\alpha_k = \begin{cases} 4.75, & SNR_k < -5 \\ 4 - 0.15 \cdot SNR_k, & -5 \le SNR_k \le 20 \\ 1, & SNR_k > 20 \end{cases}$ (7)

The sub-band subtraction factor $\delta_k$ is determined by Eq. (8):

$\delta_k = \begin{cases} 1, & f_k \le 1\ \text{kHz} \\ 2.5, & 1\ \text{kHz} < f_k \le \frac{F_s}{2} - 2\ \text{kHz} \\ 1.5, & f_k > \frac{F_s}{2} - 2\ \text{kHz} \end{cases}$ (8)

Where $f_k$ is the upper frequency bound of the k-th sub-band and $F_s$ is the sampling frequency.
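As a concrete illustration of the multi-band rule in Eqs. (5)–(8), the following Python/NumPy sketch applies per-band over-subtraction with a spectral floor to one frame of power spectra. This is not the paper's own code (the simulations were done in Matlab); the function names, the default number of bands, and the floor value β are the author's assumptions, and the frame smoothing of Eq. (6) is omitted for brevity.

```python
import numpy as np

def alpha_of_snr(snr_db):
    """Over-subtraction factor as in Eq. (7)."""
    if snr_db < -5:
        return 4.75
    if snr_db <= 20:
        return 4.0 - 0.15 * snr_db
    return 1.0

def delta_of_band(f_upper_hz, fs):
    """Sub-band subtraction factor as in Eq. (8)."""
    if f_upper_hz <= 1000.0:
        return 1.0
    if f_upper_hz <= fs / 2.0 - 2000.0:
        return 2.5
    return 1.5

def mband_frame(Y2, D2, fs, n_bands=4, beta=0.002):
    """Per-band spectral subtraction for one frame (illustrative sketch).
    Y2, D2: |Y(w)|^2 and |D_hat(w)|^2 over the positive-frequency bins."""
    n = len(Y2)
    edges = np.linspace(0, n, n_bands + 1, dtype=int)
    X2 = np.empty_like(Y2)
    for b in range(n_bands):
        lo, hi = edges[b], edges[b + 1]
        # band SNR in dB from the summed powers
        snr_db = 10.0 * np.log10(Y2[lo:hi].sum() / max(D2[lo:hi].sum(), 1e-12))
        a = alpha_of_snr(snr_db)
        d = delta_of_band(hi / n * fs / 2.0, fs)
        # Eq. (5) with a spectral floor in the spirit of Eq. (4)
        X2[lo:hi] = np.maximum(Y2[lo:hi] - a * d * D2[lo:hi], beta * Y2[lo:hi])
    return X2
```

Because the subtracted term is non-negative and the floor is a fraction of the noisy power, the output always satisfies $0 \le |\hat{X}|^2 \le |Y|^2$.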

2.2 Wiener Algorithm Based on a Priori SNR Estimation

The Wiener filter speech enhancement method is a speech signal estimation method based on the minimum mean square error criterion. From Eq. (2), the frequency-domain gain of the Wiener filter is expressed as Eq. (9):

$G(m,k) = \dfrac{P_x(m,k)}{P_x(m,k) + P_d(m,k)}$ (9)




In the formula, m represents the frame number, k represents the frequency point, and $P_x(m,k)$ and $P_d(m,k)$ are the power spectral densities of clean speech and noise, respectively. Define the a priori SNR $SNR_{prio}(m,k)$:

$SNR_{prio}(m,k) = \dfrac{P_x(m,k)}{P_d(m,k)}$ (10)


The a priori SNR can be considered the actual SNR of the m-th spectral component. Substituting Eq. (10) into Eq. (9) gives the gain function expressed in terms of the a priori SNR:

$G(m,k) = \dfrac{SNR_{prio}(m,k)}{1 + SNR_{prio}(m,k)}$ (11)


A low-fluctuation estimate of the a priori SNR required by the Wiener gain function can be used to eliminate music noise [4]. Ephraim and Malah use a decision-directed approach to estimate the a priori SNR. This approach can be viewed as smoothing the a posteriori SNR in low-SNR regions and tracking it in high-SNR regions [6]. The a posteriori SNR is defined as:

$SNR_{post}(m,k) = \dfrac{|Y(m,k)|^2}{|\hat{D}(m,k)|^2}$ (12)


In the formula, $|Y(m,k)|^2$ denotes the power spectrum of the noisy speech signal in the m-th frame, and $|\hat{D}(m,k)|^2$ denotes the noise power spectrum estimate for the m-th frame. The a posteriori SNR can be seen as the SNR measured after the noise has been added to the m-th spectral component. Finally, the estimate of the a priori SNR is based on a weighted combination of the past estimate and the current a posteriori SNR:

$\widehat{SNR}_{prio}(m,k) = a \cdot \dfrac{|\hat{X}(m-1,k)|^2}{|\hat{D}(m-1,k)|^2} + (1-a) \cdot \max\left[SNR_{post}(m,k) - 1,\ 0\right]$ (13)

Where $|\hat{X}(m-1,k)|^2$ represents the power spectrum estimate of the clean speech in the previous frame and $|\hat{D}(m-1,k)|^2$ represents the noise power spectrum of the previous frame. $a$ is a smoothing constant that makes the a priori SNR estimate smooth and suppresses music noise. The best results are obtained when $a$ is 0.95–0.99 [7].
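The decision-directed recursion of Eq. (13) combined with the Wiener gain of Eq. (11) can be sketched as follows. This is an illustrative NumPy version, not the paper's Matlab code; the noise power is assumed stationary and pre-estimated, and the default a = 0.98 is simply one value inside the 0.95–0.99 range cited above.

```python
import numpy as np

def wiener_dd_gains(noisy_power, noise_power, a=0.98):
    """Decision-directed Wiener gains (illustrative sketch).
    noisy_power: (n_frames, n_bins) array of |Y(m,k)|^2.
    noise_power: (n_bins,) stationary noise estimate |D_hat(k)|^2."""
    noise_power = np.maximum(noise_power, 1e-12)
    snr_post = noisy_power / noise_power            # Eq. (12)
    gains = np.empty_like(noisy_power, dtype=float)
    prev_clean = np.zeros(noisy_power.shape[1])     # |X_hat(m-1,k)|^2, zero at start
    for m in range(noisy_power.shape[0]):
        snr_prio = (a * prev_clean / noise_power
                    + (1.0 - a) * np.maximum(snr_post[m] - 1.0, 0.0))  # Eq. (13)
        g = snr_prio / (1.0 + snr_prio)             # Eq. (11)
        gains[m] = g
        prev_clean = g ** 2 * noisy_power[m]        # enhanced power feeds the recursion
    return gains
```

Because the a priori SNR blends the previous frame's enhanced power with the current a posteriori SNR, the gain trajectory is smoothed across frames, which is exactly the property used to suppress music noise.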




2.3 Minimum Mean Square Error of Log-Spectral Amplitude Estimation (MMSE-LSA)

Because of the importance of the short-time spectral amplitude for speech intelligibility and quality, an optimal method for estimating the signal spectral amplitude from noisy speech is needed; that is, an estimator that minimizes the mean-squared error (MMSE) between the estimated and actual amplitudes:

$e = E\left\{\left(\hat{X}_k - X_k\right)^2\right\}$ (14)
Where $\hat{X}_k$ is the amplitude estimate of the spectrum at frequency k, and $X_k$ is the amplitude of the clean speech spectrum at k. Although the squared error of the amplitude spectrum is mathematically easy to handle, the log-spectral distortion criterion is still used when dealing with the speech spectral amplitude, because the human ear's auditory perception of sound intensity is proportional to the logarithm of the spectral amplitude [6]. Rewriting Eq. (14) to minimize the squared error of the log-spectral amplitude:

$e = E\left\{\left(\log\hat{X}_k - \log X_k\right)^2\right\}$ (15)


The logarithmic MMSE optimal estimator can be obtained as the conditional mean of $\log X_k$:

$\log\hat{X}_k = E\{\log X_k \mid Y(\omega_k)\}$ (16)
The solution of $E\{\log X_k \mid Y(\omega_k)\}$ is obtained from the moment-generating function of $\log X_k$ conditioned on $Y(\omega_k)$:

$E\{\log X_k \mid Y(\omega_k)\} = \dfrac{1}{2}\log c_k + \dfrac{1}{2}\log d_k + \dfrac{1}{2}\displaystyle\int_{d_k}^{\infty} \dfrac{e^{-t}}{t}\,dt$ (17)

Where $c_k$ is defined as $c_k = \dfrac{E\{|X(\omega_k)|^2\}}{1 + SNR_{prio}(k)}$, $E\{|X(\omega_k)|^2\}$ is the variance of the k-th spectral component, and $d_k = \dfrac{SNR_{prio}(k)}{1 + SNR_{prio}(k)}\,SNR_{post}(k)$.
Finally, substituting Eq. (17) into Eq. (16) yields the optimal logarithmic MMSE estimator:

$\hat{X}_k = \dfrac{SNR_{prio}(k)}{1 + SNR_{prio}(k)}\,\exp\left(\dfrac{1}{2}\displaystyle\int_{d_k}^{\infty} \dfrac{e^{-t}}{t}\,dt\right) Y_k$ (18)


MMSE uses prior information of speech, so it retains more speech components during speech enhancement.
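A numerical sketch of the MMSE-LSA gain implied by Eq. (18) is shown below. The exponential integral $\int_{d_k}^{\infty} e^{-t}/t\,dt$ is evaluated by simple trapezoidal quadrature so that only NumPy is needed; the function names and the quadrature settings are the author's illustrative choices, not from the paper.

```python
import numpy as np

def exp_integral(x, span=40.0, n=4001):
    """Numerical E1(x) = integral of exp(-t)/t from x to infinity (x > 0).
    The integrand is negligible beyond x + span, so the tail is truncated."""
    t = np.linspace(x, x + span, n)
    y = np.exp(-t) / t
    h = t[1] - t[0]
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoid rule

def mmse_lsa_gain(snr_prio, snr_post):
    """Gain implied by Eq. (18): X_hat = G * Y with
    G = xi/(1+xi) * exp(0.5 * E1(d_k)) and d_k = xi/(1+xi) * gamma."""
    xi_term = snr_prio / (1.0 + snr_prio)
    d_k = max(xi_term * snr_post, 1e-12)
    return xi_term * np.exp(0.5 * exp_integral(d_k))
```

For example, with $SNR_{prio} = 1$ and $SNR_{post} = 2$ the Wiener-like factor is 0.5 and $d_k = 1$, so the gain is $0.5\,e^{E_1(1)/2} \approx 0.56$ — slightly above the plain Wiener gain, reflecting the log-domain criterion.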




2.4 Subspace Method Based on Eigen-Value Decomposition (EVD) Embedded Pre-whitening

Based on the eigen-value decomposition of the signal covariance matrix, and assuming that the speech signal is uncorrelated with the noise, Eq. (1) can be rewritten in covariance matrix form:

$R_y = R_x + R_d$ (19)


Where $R_x$ and $R_d$ are the covariance matrices of the clean signal and the noise, respectively. Let $R_d = U_d \Lambda_d U_d^T$ be the eigen-value decomposition of the noise covariance matrix and $R_x = U_x \Lambda_x U_x^T$ be that of the clean signal covariance matrix. The optimal linear estimator for the subspace method is [4]:

$H_{opt} = R_x (R_x + \mu R_d)^{-1}$ (20)


Where $\mu$ is the Lagrange multiplier. Diagonalizing $R_x$ and $R_d$ simultaneously gives:

$V^T R_x V = \Lambda_P, \qquad V^T R_d V = I$ (21)

Where $\Lambda_P$ and $V$ are the eigen-value matrix and eigen-vector matrix of $\Sigma = R_d^{-1} R_x$. Substituting Eq. (21) into Eq. (20) yields the optimal linear estimator [8]:

$H_{opt} = V^{-T} \Lambda_P (\Lambda_P + \mu I)^{-1} V^T$ (22)
The estimate of $\mu$ in the formula controls the balance between residual noise and speech distortion, and thus affects the speech quality.
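The pre-whitened estimator of Eqs. (21)–(22) can be reproduced numerically with a Cholesky whitening step followed by a symmetric eigendecomposition. The sketch below is illustrative (not the paper's code); the clamping of tiny negative eigenvalues is an added numerical safeguard.

```python
import numpy as np

def subspace_estimator(Rx, Rd, mu=1.0):
    """Eq. (22): H = V^{-T} L (L + mu I)^{-1} V^T,
    where V^T Rx V = L (diagonal) and V^T Rd V = I, as in Eq. (21)."""
    Lc = np.linalg.cholesky(Rd)                # Rd = Lc Lc^T (pre-whitening)
    Li = np.linalg.inv(Lc)
    lam, Q = np.linalg.eigh(Li @ Rx @ Li.T)    # whitened clean-signal covariance
    lam = np.maximum(lam, 0.0)                 # guard against tiny negative eigenvalues
    V = Li.T @ Q                               # simultaneous diagonalizer of Eq. (21)
    gains = lam / (lam + mu)
    return np.linalg.inv(V.T) @ (gains[:, None] * V.T)
```

Because the diagonalized form is algebraically identical to Eq. (20), for positive definite $R_d$ the result equals $R_x (R_x + \mu R_d)^{-1}$, which gives a convenient self-check.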

3 Performance Evaluation

3.1 Segmented Signal to Noise Ratio (SNRseg)

Segmented SNR can be calculated either in the time domain or in the frequency domain, but a potential problem with the time-domain segmented SNR is that the signal energy during silent intervals of the speech signal is very small, leading to large negative values that bias the overall measurement. Therefore, this paper calculates the segmental SNR in the frequency domain and applies different weights to different frequency bands. The frequency-domain segmented SNR (FSNRseg) is defined as [4]:

$FSNRseg = \dfrac{10}{M} \displaystyle\sum_{m=0}^{M-1} \dfrac{\sum_{j=1}^{K} B_j \log_{10}\left[ F(m,j)^2 / \left(F(m,j) - \hat{F}(m,j)\right)^2 \right]}{\sum_{j=1}^{K} B_j}$ (23)



Where $B_j$ is the weight of the j-th band, K is the number of bands, M is the total number of signal frames, $F(m,j)$ is the filter-band amplitude of the j-th band of the m-th frame of the clean signal, and $\hat{F}(m,j)$ is the filter-band amplitude of the enhanced signal in the same frequency band.

3.2 Perceptual Evaluation of Speech Quality (PESQ)

ITU-T organized an evaluation in 2000 to select a new objective measure with reliable performance under various codec and network conditions, and Perceptual Evaluation of Speech Quality (PESQ) was selected as the recommendation. The PESQ score ranges from −0.5 to 4.5; the higher the PESQ score, the better the speech quality and the higher the intelligibility. It is defined as [9]:

$PESQ = 4.5 - 0.1\,D_{ind} - 0.0309\,A_{ind}$ (24)


Where $D_{ind}$ is the average disturbance value and $A_{ind}$ is the average asymmetrical disturbance value.
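As a concrete illustration of the FSNRseg measure defined in Eq. (23), it translates directly into a few lines of NumPy. The sketch below assumes the filter-band amplitudes have already been computed; the small floor added to the squared error is an illustrative safeguard against division by zero, not part of the paper's definition.

```python
import numpy as np

def fsnrseg(F_clean, F_enh, band_weights, floor=1e-12):
    """Frequency-domain segmental SNR of Eq. (23).
    F_clean, F_enh: (M, K) filter-band amplitudes of the clean and
    enhanced signals; band_weights: length-K weights B_j."""
    F_clean = np.asarray(F_clean, dtype=float)
    err2 = np.maximum((F_clean - np.asarray(F_enh, dtype=float)) ** 2, floor)
    ratio = np.maximum(F_clean ** 2, floor) / err2
    w = np.asarray(band_weights, dtype=float)
    # weighted band average per frame, then averaged over frames, times 10
    per_frame = (w * np.log10(ratio)).sum(axis=1) / w.sum()
    return 10.0 * per_frame.mean()
```

For instance, an enhanced signal whose band amplitudes are everywhere 90% of the clean ones has a band SNR of $10\log_{10}(1/0.1^2) = 20$ dB in every band and frame, so the score is exactly 20 dB regardless of the weights.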

4 Algorithm Simulation and Comparative Analysis

This paper performs experimental simulations of the above four types of typical algorithms: the multi-band spectral subtraction algorithm (Mband), the Wiener algorithm based on a priori SNR estimation (SNRprio-Wiener), minimum mean square error of log-spectral amplitude estimation (MMSE-LSA), and the subspace method based on Eigen-value decomposition embedded pre-whitening (EVD-Subspace). The data come from the factory noises factory1 and factory2 in the NOISEX-92 noise library. Factory1 noise was mainly recorded near board cutting and welding of electrical equipment; factory2 noise was mainly recorded in an automobile production workshop. In addition, a piece of noise data, factory3, was recorded under working conditions of a factory robotic arm and used as further simulation data for the four types of algorithms. Since research on speech enhancement at low SNR is of greater practical value, the experimental speech contains 3 different SNRs (−5 dB, 0 dB, 5 dB). The simulation experiments were performed in Matlab R2014a. The voice signals used in the experiments are in WAV format: sampling frequency 8 kHz, 16 bit, mono. Basic parameter settings: each frame is 20 ms long, with 50% overlap between frames; the window function is a Hamming window. The experimental results are the increases in the frequency-domain segmented SNR (FSNRseg) and the Perceptual Evaluation of Speech Quality (PESQ) after processing by each speech enhancement algorithm, denoted ΔFSNRseg and ΔPESQ. Tables 1, 2 and 3 show the simulation results for −5 dB, 0 dB and 5 dB, respectively, where f1, f2, and f3 represent noisy speech containing industrial noise factory1, factory2, and factory3, respectively, and the unit of the segmented SNR is dB.



Table 1. −5 dB simulation results

Algorithm        ΔFSNRseg/dB (f1, f2, f3)    ΔPESQ (f1, f2, f3)
Mband            0.029, 0.449, 0.022         −0.028, −0.052, 0.394
SNRprio-Wiener   1.032, 1.489, 0.457         0.032, 0.183, −0.178
MMSE-LSA         0.845, 1.158, 0.424         0.159, 0.285, −0.167
EVD-subspace     0.658, 0.110, −0.573        −0.078, −0.051, −0.611

Table 2. 0 dB simulation results

Algorithm        ΔFSNRseg/dB (f1, f2, f3)    ΔPESQ (f1, f2, f3)
Mband            0.543, 0.840, 0.486         −0.111, 0.117, −0.093
SNRprio-Wiener   1.852, 2.146, 0.884         0.256, 0.252, 0.062
MMSE-LSA         1.659, 1.713, 0.656         0.268, 0.339, −0.312
EVD-subspace     1.358, 0.496, −0.383        0.253, 0.164, −0.137

Table 3. 5 dB simulation results

Algorithm        ΔFSNRseg/dB (f1, f2, f3)    ΔPESQ (f1, f2, f3)
Mband            1.375, 1.585, 0.591         0.215, 0.379, 0.058
SNRprio-Wiener   2.814, 2.300, 0.970         0.402, 0.347, 0.093
MMSE-LSA         2.157, 1.666, 0.488         0.429, 0.427, 0.077
EVD-subspace     1.702, 0.403, −0.098        0.425, 0.220, 0.090

From the data in the tables, we can conclude that: (1) At the same SNR, the SNRprio-Wiener algorithm increases the frequency-domain segmented SNR the most, and its de-noising performance improves as the SNR of the noisy speech increases; it is followed by the MMSE-LSA, Mband and EVD-Subspace algorithms. (2) Comparing the increases in PESQ in the three cases, it is clear that the SNRprio-Wiener and MMSE-LSA algorithms effectively improve speech quality, and the performance of the SNRprio-Wiener algorithm is more stable. (3) Combining the performance of the four types of algorithms on the frequency-domain segmented SNR and PESQ, the Wiener algorithm based on a priori SNR estimation not only effectively eliminates noise while limiting speech distortion, but also effectively improves intelligibility, and is therefore suitable for speech enhancement in industrial noise. The FSNRseg and PESQ scores can only measure the overall residual noise and speech distortion of the enhanced speech; they cannot describe the details. Taking the 0 dB SNR case as an example, the time-domain waveforms of the



noisy speech with factory2 noise enhanced by the four types of algorithms are given in Figs. 1, 2, 3, 4, 5 and 6, where the characteristics of the enhanced speech signals can be observed and the above conclusions further verified. The abscissa is time in seconds; the ordinate is the normalized amplitude.

Fig. 1. Clean speech time domain waveform

Fig. 2. Noisy speech time domain waveform

Fig. 3. Waveform after Mband enhancement

Fig. 4. Waveform after SNRprio-Wiener enhancement



Fig. 5. Waveform after MMSE-LSA enhancement

Fig. 6. Waveform after Subspace enhancement




From these time-domain waveforms, it can be seen intuitively that all four types of speech enhancement algorithms eliminate noise to a certain extent. However, compared with the clean speech waveform, the enhanced speech also shows a weakened target-speech amplitude, especially at the end of the speech segment. It can be seen that the Mband, MMSE-LSA, and EVD-Subspace algorithms all damage the original speech components to a certain degree. In comparison, the Wiener algorithm based on a priori SNR estimation not only eliminates part of the noise but also preserves the original speech information as much as possible. Subjective listening is also consistent with the above simulation results.

5 Conclusion

This paper mainly studies four typical types of speech enhancement algorithms and their corresponding improvements; the basis of the optimization is to reach, as far as possible, a balance between signal distortion and noise elimination. The noise data used are all industrial noise, and the algorithms were simulated at low SNRs of −5 dB, 0 dB, and 5 dB. Based on the objective evaluation criteria of frequency-domain segmented SNR and PESQ, together with time-domain waveforms, the Wiener algorithm based on a priori SNR estimation is found to be suitable for speech enhancement in industrial noise environments. The experimental results in this paper provide a reference for the research and development of industrial speech sensors.

Acknowledgements. The authors would like to express appreciation to mentors in Shanghai University for their valuable comments and other help. Thanks go to the Shanghai Science and Technology Committee of China and the Postdoctoral Science Fund Project of China for funding; the program numbers are No. 17511109300 and No. 2018M632077.

Fund Project. Project of Shanghai Science and Technology Committee of China (No. 17511109300). Project of Postdoctoral Science Fund Project of China (No. 2018M632077).

References

1. Hu, Y.: Research on Microphone Array Speech Enhancement Algorithm. University of Electronic Science and Technology (2014)
2. Fangjie, W., Yun, J.: Speech enhancement algorithm for digital hearing aids based on Wiener filter. Electron. Devices 40(04), 1021–1025 (2017)
3. Ning, C., Wenju, L.: Signal subspace speech enhancement algorithm based on Gaussian-Laplace-Gamma model and human auditory masking effect. J. Acoust. (Chinese Edition) 34(06), 554–565 (2009)
4. Loizou, P.C.: Speech Enhancement: Theory and Practice. Translated by Gao, Y., et al. University of Electronic Science and Technology Press, pp. 87–276 (2012)
5. Kamath, S., Loizou, P.: A multi-band spectral subtraction method for enhancing speech corrupted by colored noise. In: IEEE International Conference on Acoustics, Speech and Signal Processing (2002)



6. Zhang, T.: Research on Speech Enhancement Algorithm Based on Time Domain Filter. University of Science and Technology of China (2009)
7. Koucheng, A.: Speech enhancement based on a priori SNR estimation and gain smoothing. J. Comput. Appl. 32(S1), 29–31+35 (2012)
8. Hu, Y., Loizou, P.: A generalized subspace approach for enhancing speech corrupted by colored noise. IEEE Trans. Speech Audio Process. 11, 334–341 (2003)
9. Yecai, G., Xiaoyan, C., Chao, W.: Improved Wiener filter post beam-forming algorithm for LCMV frequency division. J. Electron. Measur. Instrum. 31(10), 1646–1652 (2017)

Application of Machine Learning Methods for Prediction of Parts Quality in Thermoplastics Injection Molding

Olga Ogorodnyk1, Ole Vidar Lyngstad2, Mats Larsen3, Kesheng Wang4, and Kristian Martinsen1

1 Department of Manufacturing and Civil Engineering, Norwegian University of Science and Technology (NTNU), Teknologivegen 22, 2815 Gjøvik, Norway
{olga.ogorodnyk,kristian.martinsen}
2 Department of Materials Technology, SINTEF Raufoss Manufacturing, Postbox 163, 2831 Raufoss, Norway
[email protected]
3 Department of Production Technology, SINTEF Raufoss Manufacturing, Postbox 163, 2831 Raufoss, Norway
[email protected]
4 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
[email protected]

Abstract. Nowadays a significant part of plastic and, in particular, thermoplastic products of different sizes is manufactured using the injection molding process. Due to the complex nature of the changes that thermoplastic materials undergo during the different stages of the injection molding process, it is critically important to control the parameters that influence final part quality. In addition, injection molding requires high repeatability due to its wide application in mass production. As a result, it is necessary to be able to predict final product quality based on the values of critical process parameters. The following paper investigates the possibility of using Artificial Neural Networks (ANN), in particular the Multilayered Perceptron (MLP), as well as Decision Trees, such as J48, to create models for predicting the quality of dog bone specimens manufactured from high density polyethylene. A short theory overview of these two machine learning methods is provided, as well as a comparison of the obtained models' quality.

Keywords: Artificial neural network · Decision trees · Injection molding · Machine learning

1 Introduction

In 2016 there were 335 million metric tons of plastics produced worldwide and 60 million metric tons in Europe [1]. At the same time, more than one third of all plastic products are produced using the injection molding process [2], which makes injection molding

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 237–244, 2019.



one of the most frequently used processes for mass production of plastic parts for a variety of applications. The injection molding process includes such stages as plasticization, filling, injection, cooling and ejection [3]. First, material is fed into a heated barrel, where it is mixed and turned into molten plastic. The melt is then injected into a cavity with the help of injection pressure and a reciprocating screw, and afterwards packed with packing pressure to obtain a part with the desired shape. The molten plastic cools down and solidifies inside the mold, and the final part is then ejected. The process includes three main control loops: the control loop of machine parameters (speed, pressure, temperature), the control loop of process parameters (in-mold temperature and pressure) and the quality control loop [4]. In order to obtain a final part of high quality, it is necessary to use optimal machine and process parameters [5], which are not always easy to define and are often found through trial and error by injection molding machine operators based on their experience [3]. A problem with such an approach is that injection molding is a highly competitive industry, and it is no longer enough to rely on experience alone to determine the optimal parameters. It would be highly convenient if, in case machine/process parameters that may lead to production of defective parts are entered, the control system of the injection molding machine notified the operator that the parameters need to be adjusted. This is why the ability to predict part quality based on the values of the entered process and machine parameters is of high importance. Some of the most frequently occurring defects during injection molding are flash, short shot, sink mark, warpage and flow line [6]. "Low injection pressure, short injection time, and low mold temperature will easily lead to short shot, and low packing pressure and short cooling time will cause warpage" [7].
In this paper 41 machine and process parameters were logged during 160 machine runs. Models for prediction of final part quality were then built using Artificial Neural Networks (ANN) and Decision Trees machine learning (ML) algorithms. The proposed prediction models are able to distinguish only good or bad parts, without the possibility of categorizing which type of defect occurs. There are multiple studies where prediction models for injection molding are built with the help of different ML methods. For example, Yen et al. [8] use ANN to design runner dimensions to minimize warpage, Altan [9] uses Taguchi, ANOVA and ANN methods to minimize shrinkage, and Zhu and Chen [10] apply a fuzzy neural network approach to predict flash. In [11, 12] a genetic algorithm is used to obtain optimized process parameters and avoid warpage, while Che [13] uses particle swarm optimization combined with ANN to optimize product and mold costs for injection molding. To the authors' knowledge there are rare or no examples of the use of the Decision Trees method for training models in similar studies, which is why it will be interesting to compare its performance with that of ANN. The following sections give a broader description of the study setting, data collection, data processing, the methods used to build the prediction models, and a comparison of the models' quality.



2 Methodology

The described study was conducted with the use of an "ENGEL insert 130" vertical injection molding machine. The produced part is a standard dog bone specimen with 19 mm width and 165 mm length, as shown in Fig. 1. The material used is high density polyethylene.

Fig. 1. Dog bone specimen (W = 19 mm, L = 165 mm)

The Latin hypercube method in ModeFRONTIER [14] was used to create a design of experiment (DOE) to gather data for a dataset with both high and low quality of the target part. The DOE included 32 different combinations of parameters such as holding pressure, holding pressure time, backpressure, cooling time, injection speed, screw speed, barrel temperature and temperature of the tool/mold. Each combination was launched 5 times on the injection molding machine, giving 160 data samples at the end of the experiment. The dataset is slightly unbalanced, with 101 data samples representing defective parts and 59 samples representing good parts. During each run the values of 41 machine and process parameters were logged. After the data had been gathered, the Artificial Neural Network (Multilayered Perceptron) and Decision Trees (J48) methods were applied to the dataset in WEKA (Waikato Environment for Knowledge Analysis) [15]. ANN was chosen as one of the methods for prediction model building as it is often applied in similar studies [8–10], while Decision Trees was used to compare the ANN model with a model that is easier to interpret. In addition, it was of interest to see which parameters would become tree nodes and which values would be chosen as thresholds for the decision about final part quality. The methods were first applied to the full dataset with all 41 parameters included. As a next step, the Information Gain (InfoGain) feature selection method was used to identify the parameters containing the largest amount of information about the process. Afterwards the ML methods were applied to reduced parameter sets of 35 and 18 parameters. The following section gives a short theory overview of the applied ML and feature selection methods, and explains how the reduced numbers of parameters for the prediction models were chosen.
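The Latin hypercube design described above can be sketched in a few lines of Python/NumPy. The paper generated its DOE in ModeFRONTIER [14]; the parameter names and ranges below are hypothetical illustrations, not the actual values used in the experiment.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Draw one stratified sample per interval and dimension, with the
    strata shuffled independently per dimension (illustrative sketch).
    bounds: dict of parameter name -> (low, high)."""
    rng = np.random.default_rng(seed)
    names = list(bounds)
    d = len(names)
    # each row of 'strata' is a permutation of 0..n_samples-1
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples   # values in [0, 1)
    lo = np.array([bounds[k][0] for k in names], dtype=float)
    hi = np.array([bounds[k][1] for k in names], dtype=float)
    return names, lo + u * (hi - lo)

# hypothetical ranges for two of the varied parameters
names, doe = latin_hypercube(32, {"holding_pressure_bar": (100.0, 400.0),
                                  "cooling_time_s": (5.0, 30.0)}, seed=42)
```

Stratification guarantees that every 1/32-wide slice of each parameter's range contains exactly one of the 32 combinations; each combination would then be run 5 times, as in the experiment.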

3 Machine Learning Methods

Machine learning methods use statistical techniques to improve an algorithm's performance on a particular task. These methods "give better results when it comes to process modelling and forecasting, as they have higher precision and lower error


O. Ogorodnyk et al.

values compared to conventional modelling methods” [16]. In addition, they are not as resource consuming as regular optimization techniques [17]. However, before applying ML methods, it is important to pre-process the data. Data and features that have missing, incomplete or redundant values are recommended to be avoided, when possible. Feature selection can be one of the ways to select the most “meaningful” parameters/features in the obtained data. 3.1

Feature Selection

Feature selection is the process of selecting a subset of features that are most relevant/useful for model construction. It is also commonly used for dimensionality reduction, to decrease the amount of time and resources necessary to build a model. Feature selection methods allow choosing the most relevant features for the task and using them to train the model, removing redundant and correlated attributes/parameters. As mentioned before, Information Gain was used in this study to evaluate the quality of the parameters logged during the experiment. Information gain "is defined as the amount of information, obtained from the attribute" [18]. InfoGain takes values between 0 and 1; the bigger the value, the more relevant the attribute/parameter. The list of all parameters and their information gain scores is shown in Table 1. The prediction models were first trained using all 41 parameters; afterwards, 6 attributes (Machine time, Shot counter, Good parts counter, Bad parts counter, Parts counter and Machine date) were removed as irrelevant by meaning, and the models were built again with 35 parameters. Finally, all attributes with an information gain score equal to 0 were removed, and the models were trained once more using 18 attributes.
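As a sketch of what InfoGain computes (WEKA's InfoGainAttributeEval applies this after discretizing numeric attributes), for an attribute A and class C the score is H(C) − H(C | A):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(C) of a class label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """InfoGain(A) = H(class) - H(class | A) for a (discretized) attribute A."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder
```

A feature identical to the class label yields the full class entropy, while a constant feature yields 0, which is the criterion used above to drop the last attributes from the 35-parameter set.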

3.2 Artificial Neural Networks

Artificial Neural Networks (ANN) are a class of ML methods used for modeling and forecasting in a variety of areas of science, business and engineering. The main idea of the method is inspired by the biological neural networks found in animal brains. It relies on interconnected processing elements called neurons. These elements are organized in separate layers connected by weights. Such models are able to "learn" to perform tasks by considering samples related to the problem they are supposed to solve. Every time a new sample is "fed" to the network, the weights are adjusted in order to obtain a model that performs the task as well as possible. In this study, the ANN "learns" by processing samples of good and bad parts characterized by a number of relevant parameters. The Multilayer Perceptron (MLP) is one of the classic ANN models. It is based on a sequence of interconnected layers of neurons, where the layer-to-layer mapping is activated with a non-linear function. In this study, the sigmoid function is used as the activation function.
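The weight-adjustment loop described above can be illustrated with a minimal one-hidden-layer perceptron in NumPy. This is a generic sketch on toy two-cluster data (the study itself used WEKA's MultilayerPerceptron):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """One-hidden-layer perceptron; sigmoid activations, trained by backpropagation."""
    def __init__(self, n_in, n_hidden, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden)); self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1));    self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)

    def fit(self, X, y, epochs=3000):
        y = y.reshape(-1, 1)
        for _ in range(epochs):
            o = self.forward(X)
            # Propagate the error back and adjust the weights (cross-entropy loss,
            # so the output delta is simply o - y).
            d_out = (o - y) / len(X)
            d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
            self.W2 -= self.lr * self.h.T @ d_out; self.b2 -= self.lr * d_out.sum(0)
            self.W1 -= self.lr * X.T @ d_hid;      self.b1 -= self.lr * d_hid.sum(0)

    def predict(self, X):
        return (self.forward(X) > 0.5).astype(int).ravel()

# Toy "bad"/"good" samples: two well-separated clusters of process readings.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.2, (20, 2)), rng.normal(1.0, 0.2, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
net = TinyMLP(n_in=2, n_hidden=4)
net.fit(X, y)
```

Each epoch plays the role of "feeding" the samples to the network and adjusting the weights, exactly as described in the paragraph above.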

3.3 Decision Trees

Decision trees are a class of supervised learning algorithms. The main idea behind the method is to use training data to build a predictive model represented in the form of a tree

Application of Machine Learning Methods for Prediction


Table 1. Information Gain scores for parameters used in the study

[The two-column table (Parameter name, InfoGain score) listing the information gain scores of all 41 logged parameters could not be recovered from the source extraction.]
structure. The final goal is then to find the correct answer to a problem with the minimal possible number of decisions using the obtained model. However, this is not always possible due to noise and missing values in the data. The basic algorithm for learning a decision tree consists of the following steps: select the parameter or value that gives the "best" data split; create "child" nodes based on the split; run the algorithm recursively on the "child" nodes until a certain stopping criterion is reached (the tree is too large, or the number of examples left is too small). The J48 algorithm used in this study includes additional features such as handling of missing values and continuous attribute/parameter values, as well as decision tree pruning. The following section shows the results of applying the described methods to the obtained dataset.
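The tree-growing steps above can be sketched as a recursive procedure. This is a simplified sketch using the information gain split criterion; J48/C4.5 additionally handles missing values, uses gain ratio and prunes the tree:

```python
import numpy as np
from collections import Counter

def entropy(y):
    n = len(y)
    return -sum(c / n * np.log2(c / n) for c in Counter(y).values())

def best_split(X, y):
    """Step 1: pick the (feature, threshold) pair giving the best data split."""
    base, n = entropy(y), len(y)
    best_gain, best_j, best_t = 0.0, None, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            mask = X[:, j] <= t
            gain = base - (mask.sum() / n * entropy(y[mask])
                           + (~mask).sum() / n * entropy(y[~mask]))
            if gain > best_gain:
                best_gain, best_j, best_t = gain, j, t
    return best_j, best_t

def build_tree(X, y, min_samples=3):
    """Steps 2-3: create child nodes and recurse until a stopping criterion."""
    if len(set(y)) == 1 or len(y) < min_samples:
        return Counter(y).most_common(1)[0][0]      # leaf: majority class
    j, t = best_split(X, y)
    if j is None:                                   # no informative split left
        return Counter(y).most_common(1)[0][0]
    mask = X[:, j] <= t
    return (j, t, build_tree(X[mask], y[mask], min_samples),
            build_tree(X[~mask], y[~mask], min_samples))

def predict_one(node, x):
    """Follow thresholds down to a leaf and return its class."""
    while isinstance(node, tuple):
        j, t, left, right = node
        node = left if x[j] <= t else right
    return node
```

The internal nodes of the resulting tuple tree hold the chosen parameter index and threshold, mirroring how the J48 trees in this study expose which parameters and values decide final part quality.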



4 Results

The main goal of this study was to create prediction models capable of distinguishing between high- and low-quality parts based on machine and process parameters; specifically, dog bone specimens 19 mm wide and 165 mm long manufactured from high-density polyethylene. After training, the model is capable of notifying a machine operator that the parameters need to be adjusted so as not to produce defective parts. The simplified study procedure used to reach this goal is shown in Fig. 2.

Fig. 2. Simplified study procedure

The first method used to train the model is MLP; to verify the quality of the model, 10-fold cross validation was used. The algorithm was run three times, with 41, 35 and 18 parameters, based on the feature selection and on common-sense reasoning about the meaning of the logged parameters. The final architecture of the neural network includes 3 layers (input, hidden and output) with 22 neurons in the hidden layer ((number of parameters + number of classes)/2) for the first model, 3 layers and 19 hidden neurons for the second model, and 3 layers and 10 hidden neurons for the third. The quality of the final models was assessed with the Accuracy and ROC area metrics, shown in Table 2. The second method applied is Decision Trees (J48). Three models were trained, with the same numbers of features as for the ANN; 10-fold cross validation was also applied. Due to the ability of the J48 algorithm to prune the obtained decision trees, no matter how many features were used the resulting tree always included 6 nodes. Each tree included the following features: Cushion after holding pressure, Screw speed max, Injection time and Holding pressure. In addition to those four, the first model also included the Bad parts and Holding pressure time features, the second model Plasticizing time set max and Holding pressure time, while the third had Plasticizing time set max and Injection work.
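The hidden-layer sizing rule and the 10-fold split can be sketched as follows (a generic sketch; WEKA performs the stratified splitting internally):

```python
import math
import numpy as np

def hidden_size(n_features, n_classes=2):
    """Rule of thumb used above: (number of parameters + number of classes) / 2."""
    return math.ceil((n_features + n_classes) / 2)

def k_fold_indices(n, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs covering all n samples in k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    for fold in np.array_split(idx, k):
        yield np.setdiff1d(idx, fold), fold

# Matches the hidden-layer sizes reported for the three models:
sizes = [hidden_size(p) for p in (41, 35, 18)]  # -> [22, 19, 10]
```

Rounding (41 + 2)/2 up gives the 22 hidden neurons reported for the first model, and 19 and 10 for the 35- and 18-parameter models.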



Table 2. Comparison of obtained models' quality

Metric                    ANN (MLP)  Decision Tree (J48)
Accuracy (41 features)    88.75%     95.625%
ROC area (41 features)    0.942      0.957
Accuracy (35 features)    96.875%    96.25%
ROC area (35 features)    0.996      0.958
Accuracy (18 features)    99.375%    97.5%
ROC area (18 features)    0.994      0.968
Accuracy average          95%        96.45%

As Table 2 shows, both algorithms produce high-quality results, with an average accuracy of 95% correctly classified instances of good and bad parts for the ANN and 96.45% for the Decision Trees. Both algorithms show an increase in accuracy after removing features that do not contain much information about the process.
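The ROC area reported in Table 2 can be computed from ranked classifier scores. A minimal rank-based sketch (ignoring tied scores) on hypothetical values:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC = probability that a random positive is scored above a random negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Mann-Whitney U statistic normalized by the number of pos/neg pairs.
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A perfectly separating classifier scores 1.0, a random one about 0.5, which is why the ROC areas of 0.94-0.99 in Table 2 indicate strong separation between good and defective parts.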

5 Conclusions

In this study, experimental data was collected from an "ENGEL insert 130" vertical injection molding machine. The data includes 41 machine and process parameters from 160 machine runs based on variation of holding pressure, holding pressure time, backpressure, cooling time, injection speed, screw speed, barrel temperature and temperature of the tool/mold. The parameters were varied according to a DOE consisting of 32 combinations of the above-mentioned attributes. The obtained dataset includes 101 instances of bad and 59 instances of good parts. Due to the unbalanced dataset, 10-fold cross validation is used to increase the quality of the final models. The collected data is then pre-processed with the Information Gain feature selection algorithm. Six different quality prediction models are then built using the ANN (MLP) and Decision Tree (J48) methods (three models per method). The models are assessed using the accuracy and ROC area measures. The models with the highest accuracy rates are obtained using 18 parameters/features, for both ANN and Decision Trees. The highest accuracy rates are 99.375% and 97.5% for MLP and J48, respectively. In addition, the Decision Tree algorithm has shown that the main features used to make the final decision about part quality are: Cushion after holding pressure, Screw speed max, Injection time, Holding pressure, Holding pressure time, Plasticizing time set max and Injection work.

Acknowledgement. This research is funded by the Norwegian Research Council as a part of the MegaMould project.



References

1. Statista—The Statistics Portal: Global production of plastics since 1950 (2018) [cited 2018.20.07]. Available from:
2. Hernandez-Ortiz, J.P.: Polymer Processing: Modeling and Simulation. Hanser Gardner Publications, Cincinnati (2006)
3. Tsai, K.-M., Luo, H.-J.: An inverse model for injection molding of optical lens using artificial neural network coupled with genetic algorithm. J. Intell. Manuf. 28(2), 473–487 (2017)
4. Karbasi, H., Reiser, H.: Smart mold: real-time in-cavity data acquisition. In: First Annual Technical Showcase & Third Annual Workshop, Canada. Citeseer (2006)
5. Zhao, P., et al.: A nondestructive online method for monitoring the injection molding process by collecting and analyzing machine running data. Int. J. Adv. Manuf. Technol. 72(5–8), 765–777 (2014)
6. Rosato, D.V., Rosato, M.G.: Injection Molding Handbook. Springer Science & Business Media, Berlin (2012)
7. Kitayama, S., et al.: Multi-objective optimization of variable packing pressure profile and process parameters in plastic injection molding for minimizing warpage and cycle time. Int. J. Adv. Manuf. Technol. 92(9–12), 3991–3999 (2017)
8. Yen, C., et al.: An abductive neural network approach to the design of runner dimensions for the minimization of warpage in injection mouldings. J. Mater. Process. Technol. 174(1–3), 22–28 (2006)
9. Altan, M.: Reducing shrinkage in injection moldings via the Taguchi, ANOVA and neural network methods. Mater. Des. 31(1), 599–604 (2010)
10. Zhu, J., Chen, J.C.: Fuzzy neural network-based in-process mixed material-caused flash prediction (FNN-IPMFP) in injection molding operations. Int. J. Adv. Manuf. Technol. 29(3–4), 308–316 (2006)
11. Ozcelik, B., Erzurumlu, T.: Determination of effecting dimensional parameters on warpage of thin shell plastic parts using integrated response surface method and genetic algorithm. Int. Commun. Heat Mass Transfer 32(8), 1085–1094 (2005)
12. Yin, F., Mao, H., Hua, L.: A hybrid of back propagation neural network and genetic algorithm for optimization of injection molding process parameters. Mater. Des. 32(6), 3457–3464 (2011)
13. Che, Z.: PSO-based back-propagation artificial neural network for product and mold cost estimation of plastic injection molding. Comput. Ind. Eng. 58(4), 625–637 (2010)
14. ModeFRONTIER—Design optimization platform [cited 2018.20.07]. Available from:
15. WEKA—Waikato Environment for Knowledge Analysis [cited 2018.20.07]. Available from:
16. Ogorodnyk, O., Martinsen, K.: Monitoring and control for thermoplastics injection molding: a review. Procedia CIRP 67, 380–385 (2018)
17. Dang, X.-P.: General frameworks for optimization of plastic injection molding process parameters. Simul. Model. Pract. Theory 41, 15–27 (2014)
18. Kononenko, I., Kukar, M.: Machine Learning and Data Mining: Introduction to Principles and Algorithms. Horwood Publishing, Cambridge (2007)

Application of Machine Learning Methods to Improve Dimensional Accuracy in Additive Manufacturing

Ivanna Baturynska1, Oleksandr Semeniuta1(&), and Kesheng Wang2(&)

1 Department of Manufacturing and Civil Engineering, Norwegian University of Science and Technology (NTNU), Teknologivegen 22, 2815 Gjøvik, Norway
{ivanna.baturynska,oleksandr.semeniuta}
2 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
[email protected]

Abstract. Adoption of additive manufacturing for producing end-use products faces a range of limitations. For instance, the quality of AM-fabricated parts varies from run to run and from machine to machine. There is also a lack of standards developed for AM processes. Another limitation is inconsistent dimensional accuracy error, which is often outside the standard tolerancing range. To tackle these challenges, this work aims at predicting a scaling ratio for each part separately, depending on its placement, orientation and CAD characteristics. Recent attention to machine learning techniques as a tool for data analysis in additive manufacturing shows that methods such as classical artificial neural networks (ANN), e.g. the multi-layer perceptron (MLP), and convolutional neural networks (CNN) have great potential. For the data collected on a polymer powder bed fusion system (EOS P395), MLP outperformed CNN based on prediction accuracy and mean squared error. The predicted scaling ratio can be used to adjust the size of the parts before fabrication.

Keywords: Additive manufacturing · Artificial neural network · Convolutional neural network · Deep learning · Dimensional accuracy · Machine learning

1 Introduction

Additive manufacturing is a "process of joining materials to make parts from 3D model data, usually layer upon layer, as opposed to subtractive manufacturing and formative manufacturing methodologies" [1]. There are different types of additive manufacturing processes, categorized by characteristics such as the source of energy and the type of material. Different AM process categories require optimization of different process parameters. Therefore, AM processes with similar parameters can be investigated as one AM group, and results of optimization can be generalized within this group. Lately, the main attention has been set on optimization of additive manufacturing process parameters in order to improve the quality of fabricated products. Many studies report that

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 245–252, 2019.


I. Baturynska et al.

AM is already used to produce end-user products, but quality still remains an issue. For example, dimensional accuracy is still a problem for AM processes such as powder bed fusion [2–5]. Compared to the tolerance requirements defined in the DIN 16742:2013 standard [6] for the injection molding process, the dimensional accuracy error of AM exceeds the defined ranges [4]. Baturynska [4] made an attempt to improve dimensional accuracy by predicting dimensional features (thickness, length and width) with linear regression. However, only 4 out of 12 proposed models (separate models for each dimensional feature and part orientation) had accuracy higher than 75%. Based on these results, the author proposed using more advanced methods for predicting the scaling ratio, and hence improving dimensional accuracy. This paper is an extension of the work described in [4]; therefore, more advanced methods are used for data analysis and for improving dimensional accuracy prediction. In recent years, machine learning has become a viable option in the additive manufacturing domain as a means for building highly flexible models describing complex relationships between variables. One of the latest reviews on trends of machine learning in additive manufacturing [7] describes five categories of machine learning application: process parameters, quality enhancement, process monitoring and control, digital security, and additive manufacturing in general. The main focus is set on application of ANN, genetic algorithms (GA) and support vector machines (SVM). Fewer articles used deep neural networks, principal component analysis (PCA) and particle swarm optimization (PSO) [8–10]. While ANN is used to optimize process parameters and predict mechanical properties and porosity of the object, deep learning techniques have already been applied "to identify styles of 3D models" based on 2D images rendered from digital 3D models [7].
This paper investigates the applicability of two neural network models, namely the Multilayer Perceptron (MLP) and the Convolutional Neural Network (CNN), for predicting the scaling ratio for each additively manufactured part separately. The former model constitutes the classical ANN, while the latter is a deep learning model. The chosen techniques are described in Sect. 3. The comparison of the results of MLP and CNN is based on mean squared error (MSE) and prediction accuracy. As CNN is a deep learning model, its performance depends on the amount of training data, but it is less sensitive to noise in the data. MLP, on the other hand, is more sensitive to noise, but requires less data for training. The performance of the two chosen methods is compared in Sect. 4, and results on scaling ratio prediction are presented in Sect. 5.

2 Experimental Work

In this work, data is collected from the EOS P395 polymer powder bed fusion (PPBF) additive manufacturing process, with the material being Polyamide 2200 (PA12). Two identical runs were performed, with 358 samples being fabricated in each build. The build layout is designed in the Magics 20.0 software and is shown in Fig. 1. Each sample is labeled to support identification after fabrication. More details on the experiment are described in the earlier work [4].



Fig. 1. Placement and orientation of 358 specimens [4]

In order to minimize the dimensional accuracy error for each part separately, the scaling ratio for thickness, width and length was calculated as follows:

sr_i = y_i − y'_i

where i ∈ {1, 2, 3} (1 stands for thickness, 2 for width and 3 for length), sr_i is the scaling ratio of feature i, y_i is the designed dimension of feature i and y'_i is the measured dimension of feature i. The predicted value for each dimensional feature should be added to the designed value before fabrication. For more complex designs, a different type of scaling ratio would have to be proposed.
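A small numeric sketch of this compensation; the measured values below are hypothetical, for illustration only:

```python
import numpy as np

designed = np.array([4.0, 10.0, 80.0])    # y_i: designed thickness, width, length (mm)
measured = np.array([3.92, 9.85, 79.60])  # y'_i: hypothetical measured dimensions (mm)

scaling_ratio = designed - measured        # sr_i = y_i - y'_i
compensated = designed + scaling_ratio     # dimensions to use in the next build
```

If the predicted scaling ratio matches the actual shrinkage, fabricating the compensated dimensions yields parts close to the designed ones.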

3 Computational Intelligence Techniques for Improvement of Dimensional Accuracy

3.1 Data Pre-processing

Data analysis always requires clean and normalized data. This step is especially important when the parameters' value ranges differ. Application of machine learning requires normalization of the features in the training data. In this study, the impact of 20 different parameters on three dimensional features (thickness, width and length) is investigated (see Table 1). Part orientation is represented as four different groups, so the value of this parameter varies from 1 to 4, while the number of mesh triangles starts at ca. 1200 and increases up to ca. 7000. These ranges of parameter values have to be scaled to zero mean and unit variance. The work underlying this paper is based on Scikit-learn and TensorFlow with the Keras frontend. The original data is split into training (541 samples) and testing (136 samples)


Table 1. Description of input and output parameters

Input parameters: cent_coord_X, cent_coord_Y, cent_coord_Z, Orient_X, Orient_Y, Orient_Z, Orient_group, Weight, Build_numb, Shape_group, min_coord_X, min_coord_Y, min_coord_Z, max_coord_X, max_coord_Y, max_coord_Z, Num_mesh_trian, Num_mesh_points, Volume, Surface
Output parameters: Thickness, Width, Length
sets using train_test_split. Before training the models, the training data is scaled to zero mean and unit variance using StandardScaler.
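This split-and-scale step can be sketched with NumPy alone (synthetic placeholder data; the paper uses Scikit-learn's train_test_split and StandardScaler on the real 677-sample set):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data with very different ranges, e.g. Orient_group and Num_mesh_trian.
X = rng.uniform([1.0, 1200.0], [4.0, 7000.0], size=(677, 2))

# Shuffle, then split into 541 training and 136 testing samples.
idx = rng.permutation(len(X))
X_train, X_test = X[idx[:541]], X[idx[541:]]

# Standardize with statistics of the training set only (what StandardScaler does).
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train_s = (X_train - mu) / sigma
X_test_s = (X_test - mu) / sigma
```

Computing the mean and standard deviation on the training set only avoids leaking test-set information into the model.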

3.2 Multilayer Perceptron

Artificial Neural Networks constitute a class of machine learning models that allow defining complex non-linear relationships between input and output. The core idea behind ANNs is to construct a complex model as a network of processing functions and to learn the parameters of these functions using backpropagation. The latter is a method for computing the gradients of a cost function with respect to the functions' parameters by propagating the error back through the network architecture and applying the chain rule of differentiation. The Multilayer Perceptron (MLP) is the classical neural network model, based on a sequence of fully connected layers of neurons, where the linear layer-to-layer mapping is activated with a non-linear function. In this work, the MLP neural network is designed with Scikit-learn. In order to obtain a stable prediction every time, the MLP architecture was optimized by a trial-and-error approach: different combinations of the number of hidden layers and the number of nodes in each layer, as well as the various available activation functions, were manually tuned and applied in order to predict the scaling ratio of thickness, width and length. The stability of this neural network was evaluated based on 5 runs with the same architecture but different randomly chosen training and testing sets (Table 2). The final architecture of the MLP consists of one hidden layer with 35 nodes; 20 parameters are used as input and 3 dimensional features as outputs. The Rectified Linear Unit (ReLU) activation function is chosen because of its best performance.
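With Scikit-learn, the described architecture can be written as follows; random placeholder data stands in for the real standardized inputs and measured scaling ratios:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(541, 20))             # 20 standardized input parameters
y = rng.normal(scale=0.1, size=(541, 3))   # scaling ratios: thickness, width, length

# One hidden layer of 35 ReLU-activated nodes, as in the final architecture.
mlp = MLPRegressor(hidden_layer_sizes=(35,), activation='relu',
                   max_iter=500, random_state=0)
mlp.fit(X, y)
pred = mlp.predict(X[:5])                  # one predicted scaling ratio per feature
```

MLPRegressor supports multi-output regression directly, so one model predicts all three scaling ratios at once.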

3.3 Convolutional Neural Network

Convolutional Neural Networks are a class of deep neural networks. Their architecture is comprised of a series of convolutional layers, followed by flattening the multidimensional output tensor and feeding it to a series of fully-connected layers (the same as in an MLP). The convolutional layers provide spatial invariance by sliding a filter with shared weights over the data. These models are typically used in image recognition, language processing, and similar types of applications. The main interest in deep learning techniques is attributed to the possibility of ignoring noise in the data, and therefore minimizing the time spent on data preprocessing.



Table 2. Stability evaluation of multilayer perceptron architecture

Run   MSE        Accuracy  Number of iterations
1     0.0006358  0.919135  1131
2     0.0006402  0.868677  1333
3     0.0006770  0.839951  1553
4     0.0007860  0.865166  2271
5     0.0006876  0.892730  1664
Mean  0.0006853  0.87713   1590

Contrary to the traditional use cases, in this paper a CNN is used for regression and trained with a relatively small amount of data. The input data is one-dimensional (1D) and was preprocessed beforehand (more details in Sect. 3.1). The final architecture of the Convolutional Neural Network (see Fig. 2) was chosen in the same way as for the MLP.

Fig. 2. The final architecture of convolutional neural network

The CNN model is comprised of four convolutional layers, three max-pooling layers and three dropout layers; the latter are added to prevent overfitting. The output from the last dropout layer is flattened and fed to two fully connected layers, where the first layer is activated with ReLU and the second one with softmax. The Adam optimizer was chosen due to its computational efficiency and low memory requirements.

4 Prediction of Scaling Ratio of Dimensional Features by Using MLP and CNN

Dimensional accuracy error can be caused by different variations in the process, such as the temperature distribution in the build chamber. One normally requires deep knowledge of the process and the material in order to mathematically define this phenomenon. Typically, a single scaling ratio is proposed for the whole build chamber, or one for each coordinate axis (x, y and z). However, due to the different temperature distribution at different places in the build chamber, dimensional error will still be present, especially in the corners (see Fig. 1). This work applies MLP and CNN to predict the scaling ratio for each part separately. Using machine learning techniques allows incorporating any type of relationship between input and output, providing better prediction accuracy with lower mean squared error (MSE). Comparing the performance metrics for MLP and CNN, it is evident that prediction accuracy and MSE are better for MLP. As shown in Table 3, the known (measured) values for the dimensional features, presented as Y original, range between −1.0 and 1.0, and therefore the MSE should be smaller than 0.001 in order to minimize the accuracy error in the future.

Table 3. Prediction of scaling ratio for thickness, width and length using MLP and CNN techniques for a sample of 10 data points from the test set

MLP predicted               | Y original              | CNN predicted
[0.2289, 0.2626, −0.9517]   | [0.218, 0.274, −0.95]   | [0.2342, 0.4075, 0.3582]
[−0.1189, 0.1878, 0.1135]   | [−0.085, 0.249, 0.22]   | [0.4603, 0.52578, 0.0139]
[−0.0233, 0.1945, 0.2757]   | [−0.055, 0.098, 0.28]   | [0.1739, 0.4218, 0.4043]
[−0.0115, 0.0163, −0.2164]  | [−0.005, 0.101, −0.12]  | [0.1574, 0.3943, 0.4482]
[−0.0328, −0.1823, 0.1669]  | [−0.012, −0.281, 0.18]  | [0.3194, 0.4055, 0.2749]
[−0.2750, 0.0320, 0.0865]   | [−0.279, 0.025, 0.22]   | [0.2845, 0.2184, 0.4972]
[−0.1224, 0.1965, 0.3019]   | [−0.075, 0.177, 0.21]   | [0.1976, 0.4104, 0.3920]
[0.1235, 0.1964, 0.3570]    | [0.147, 0.202, 0.34]    | [0.1454, 0.4223, 0.4324]
[−0.0796, 0.0938, 0.0186]   | [−0.099, 0.11, 0.06]    | [0.2689, 0.3832, 0.3479]
MSE: 0.0005877              |                         | MSE: 0.097392
Accuracy: 0.887624          |                         | Accuracy: 0.764705

Although the results for CNN are not as good as for MLP, it is very important to keep in mind that the amount of data is an important factor in training deep learning models. As a rule of thumb, the more data is used to train a machine learning model, the better its performance will be. As such, for a dataset of fewer than 1000 points, the CNN results are relatively good, and they can be improved in the future when more experimental data is accumulated.



At the same time, the trained MLP can already be used to predict the scaling ratio for dimensional features, even for parts with simple but different shapes. Industries such as automotive, aerospace and medical can already benefit from the results described in this article. Incorporating the presented algorithms will allow decreasing the dimensional accuracy error while fully utilizing the build chamber space, thus decreasing the cost per part in one build.

5 Conclusion

In this work, data was collected from an experiment performed on the EOS P395 polymer powder bed fusion process. Two identical runs with the same process, build and material parameters were executed, with 358 samples in each run. This data was preprocessed and divided into training and testing samples. As input for algorithm training, 20 different parameters were chosen, and the scaling ratios for 3 dimensional features were defined as output. Two machine learning algorithms were applied for data analysis, and their results were compared based on two metrics (MSE and accuracy). The multi-layer perceptron outperformed the convolutional neural network and should be used in the future in order to minimize dimensional accuracy error. However, the results for the convolutional neural network show the possibility of using this method in the future, after more data is accumulated. Additional experiments with different material, process and build parameters will be beneficial for both the MLP and CNN algorithms.

Acknowledgment. This research is funded by the Norwegian Research Council as a part of the MKRAM project.

References

1. ISO/ASTM 52900-15: Standard Terminology for Additive Manufacturing—General Principles—Terminology (2015)
2. Zhao, X., Rosen, D.W.: A data mining approach in real-time measurement for polymer additive manufacturing process with exposure controlled projection lithography. J. Manufact. Syst. 43, 271–286 (2017)
3. Kamath, C.: Data mining and statistical inference in selective laser melting. Int. J. Adv. Manufact. Technol. 86, 1659–1677 (2016)
4. Baturynska, I.: Statistical analysis of dimensional accuracy in additive manufacturing considering STL model properties. Int. J. Adv. Manufact. Technol. 1–15 (2018)
5. Baturynska, I., Semeniuta, O., Martinsen, K.: Optimization of process parameters for powder bed fusion additive manufacturing by combination of machine learning and finite element method: a conceptual framework. Procedia CIRP 67, 227–232 (2018)
6. German Institute for Standardization: DIN 16742:2013 plastics mouldings: tolerances and acceptance conditions (2013)
7. Baumann, F., Sekulla, A., Hassler, M., Himpel, B., Pfeil, M.: Trends of machine learning in additive manufacturing. Int. J. Rapid Manufact. 10, 1–31 (in press)
8. Garg, A., Lam, J.S.L., Savalani, M.M.: A new computational intelligence approach in formulation of functional relationship of open porosity of the additive manufacturing process. Int. J. Adv. Manufact. Technol. 80, 555–565 (2015)
9. Samie Tootooni, M., Dsouza, A., Donovan, R., Rao, P.K., Kong, Z.J., Borgesen, P.: Classifying the dimensional variation in additive manufactured parts from laser-scanned three-dimensional point cloud data using machine learning approaches. J. Manufact. Sci. Eng. 139, 091005 (2017)
10. Negi, S., Sharma, R.K.: Study on shrinkage behaviour of laser sintered PA 3200GF specimens using RSM and ANN. Rapid Prototyp. J. 22, 645–659 (2016)

Design and Implementation of PCB Detection and Classification System Based on Machine Vision

Zhiwei Shen2(&), Sujuan Wang1(&), Jianfang Dou1, and Zimei Tu1

1 School of Engineering, Shanghai Polytechnic University, Shanghai, China
[email protected]
2 Faculty of Engineering, The University of New South Wales, Sydney, Australia
[email protected]

Abstract. In this paper, an optimization method that combines machine vision and automation technology is proposed and applied to the process of defect detection and classification of PCBs. This study focuses on automatic detection and sorting using a smart camera and an industrial robot to achieve a rapid process for detecting and sorting PCBs, instead of manually inspecting them one by one. In this paper, hardware selection and software design are mainly introduced. Several keys to this system are: the position of the PCB is determined by template matching with sub-pixel precision, which uses software to improve the accuracy and efficiency of positioning; the components are tested by gray-scale features. PCBs are sorted by an industrial robot according to the types of PCB defects detected by the smart camera. The experimental results show that the system can effectively detect and sort PCBs, and that it is feasible and practical.

Keywords: Machine vision · PCB inspection · Defect detection · Template matching

1 Introduction

With the rapid development of information technology, the electronics industry is also developing vigorously. At present, China has become a large producer of printed circuit boards (PCBs), with a world-leading output value. The quality of a PCB plays a vital role in the performance of electronic products. In the production process of PCBs, defect detection is an important step to ensure product quality. Manual detection and traditional equipment sometimes take a few seconds for one defect, and 2–3 h for testing a piece of PCB, which obviously cannot meet production needs. Nand et al. (2014, p. 5) argue that the machine vision detection and control technology used in this paper replaces the human hand with an industrial robot. Machine vision is utilized to conduct detection, measurement, analysis, judgment and decision-making. The main advantages of this system are: high intelligence, repeatability, abundant information acquisition, high detection speed and accuracy, and real-time operation.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 253–261, 2019.


Z. Shen et al.

In this PCB detection and classification system, a Datalogic smart camera is utilized as the testing device, the software Impact is applied to design the vision algorithms, and an ABB industrial robot is used to sort the PCBs. The system can read the QR code, which enables traceability, and detect components (R, C, L) with different defects. In the context of industrial production lines, it has bright development prospects and promising economic benefits.

2 Design Scheme

This paper introduces a PCB detection and classification system based on machine vision, covering both hardware and software design. The hardware part mainly consists of an intelligent camera, an industrial robot, an air pump, a PC, a connection box, a light source and a lens. The software part is made up of the vision software Impact and RobotStudio. The intelligent camera and the PC use network communication to transmit real-time processing results; the intelligent camera and the industrial robot exchange coordinate, angle and sorting information through serial communication. The smart camera sends an I/O signal to the air pump to control the state of suction and discharge. The PC can monitor the state of the smart camera and the robot in real time via the software.

3 Hardware Design

The hardware of the PCB detection and sorting system based on machine vision includes an AOI (Automatic Optical Inspection) platform and a sorting execution unit. The AOI platform consists of four parts: an intelligent camera, a lens, a light source and a bracket. In the AOI platform, the camera is the key to image quality; pixel count and CCD size are its important parameters.

3.1 Camera Choice

A Datalogic intelligent camera T49 is used in this system. The camera resolution is 2456 × 2058, and the CCD size is 2/3 in. With the same resolution as colour cameras, grayscale cameras have higher precision, especially on edges. In this case, the grayscale camera, which is cheaper than a colour camera, can meet the detection requirement.

3.2 Lenses and Light Sources Selection

The optical system is based on the principle of optical diffraction, and the light source is essential for the success of the system. Because the camera is positioned 50–70 cm from the object being measured, a lens with f = 50 mm was chosen so that the field of view is larger than the PCB. This does not cause a significant loss of accuracy. The light source is white and circular, with a diameter of 350 mm.

Design and Implementation of PCB Detection and Classification System



3.3 Robot Selection

The sorting execution unit has two parts: an ABB industrial robot and a vacuum air pump. The ABB robot is one of the most widely used robots, chosen here for its good compatibility. The air pump nozzle is fixed on the sixth axis of the robot to pick up the PCB. The composition of the complete system is shown in Fig. 1.

Fig. 1. The system building block diagram

4 Software Design

The software used in this system includes the vision software Impact and the robot simulation software RobotStudio. Impact is used for acquiring images, analysing and processing images, setting sorting conditions, communication programs, triggering programs and making the application display interface. RobotStudio is in charge of offline programming of the robot and simulating the results. The PC and the ABB robot maintain real-time communication to ensure the completion of the sorting procedure. When the smart camera receives the trigger signal indicating that there is a PCB on the platform, it starts to collect the image. The image is located by template matching with sub-pixel precision, which guarantees the positioning accuracy; compared with normal precision, sub-pixel precision offers a more exact location. The position of the PCB is the most significant piece of data, because every detection tool works relative to this location. Template matching in software avoids the need for hardware registration. The image is the determining factor in AOI. This system collects high-quality images, so the grayscales of the three kinds of components show obvious differences. Therefore, the detection of components is accomplished by comparing gray levels, which gives the system a high detection speed. All of the above is the core of the system.




4.1 Image Processing Flow Chart

There are six steps in image processing. The first step is to position the PCB through template matching, and its location is then computed. In the second step, the PCB is tested: the QR code on the current PCB is identified first. The next step is to determine whether a chip is present before testing the chip soldering pins. Continuous flux analysis is used to analyse the pin alignment. There are two situations: if there is a chip, edge enhancement is performed first; if there is no chip, the regularity of the chip welding pins can be judged directly. Then the pins are tested for bending, and the regions with missing or wrong components go through median filtering first to remove noise in the image, which improves the detection result. Finally, the coordinates, angles and sorting bits calculated by the smart camera are delivered to the robot through the serial ports. The different types of defects are sorted by the robot in cooperation with the air pump (Fig. 2).

Fig. 2. The flow chart of image processing


4.2 Software Tool – Impact

Impact is a software program for the Datalogic smart camera, used to edit vision programs and make user interfaces. The Impact software suite contains more than 120 detection tools and more than 50 user interface controls that help users create detection programs and develop user interfaces. Impact consists of vision program management (VPM) and control panel management (CPM); the results of image processing in VPM can be demonstrated in CPM.

4.3 Impact Design

Before digital image processing, pre-processing should be carried out to check whether the image quality meets the processing needs. In order to find objects against varying backgrounds, the images should be transformed, separated, positioned, and segmented. The image goes through the following steps: capture, pre-processing, segmentation, feature extraction, identification, measurement calculation and result output.

(1) Image Zooming
The camera collects 8-bit grayscale images, on which the features are detected. The first step of detection is locating, and the method used is template matching. Since template matching takes a long time to find the required features in the complete picture, the image is scaled to reduce the amount of image data. Therefore, image zooming is an important pre-processing step in this system. In the software, image zooming lets the user define the size of the image. For example, if a point on the original image is (2, 2), the corresponding point on the half-size image would be (1, 1). It is vital to note that once the camera is calibrated, the real coordinate values are not corrupted by the down-sampling. The next step is to link the tools with the original image processing to avoid any impact on accuracy: the resulting image is attached to the tool which needs to use it.

(2) Positioning
There are many kinds of positioning tools. In a practical situation, the actual features in the image decide which tool should be chosen, according to Yang et al. (2015). The surface features of a PCB are complex, but the gray-level differences are relatively clear. This system uses template matching with sub-pixel precision to locate the PCB. Its advantages are: (I) since the circle and line features on the PCB are suitable for positioning, template matching is reasonable; (II) the system does not rely on hardware registration or manual placement of the PCB, which usually introduces relative error; the detection tools move to find the PCB according to the coordinates from template matching; (III) Zhong and Zhang (2013, p. 2915) show that sub-pixel precision is applied to gain higher accuracy. After image zooming, template matching is used as shown in Fig. 3.

Fig. 3. Location tool usage
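The template-matching positioning described above can be sketched in a few lines of numpy. This is an illustrative stand-in, not Impact's proprietary algorithm: `match_template` and `subpixel_peak` are hypothetical helper names, and the sub-pixel step is a simple parabola fit over the correlation peak.

```python
import numpy as np

def match_template(image, template):
    """Exhaustive normalized cross-correlation over all placements."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            scores[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return scores

def subpixel_peak(scores):
    """Refine the integer peak with a 1-D parabola fit along each axis."""
    y, x = np.unravel_index(np.argmax(scores), scores.shape)
    def refine(c, lo, hi):
        # vertex offset of the parabola through (lo, c, hi)
        denom = lo - 2 * c + hi
        return 0.0 if denom == 0 else 0.5 * (lo - hi) / denom
    dy = refine(scores[y, x], scores[y - 1, x], scores[y + 1, x]) if 0 < y < scores.shape[0] - 1 else 0.0
    dx = refine(scores[y, x], scores[y, x - 1], scores[y, x + 1]) if 0 < x < scores.shape[1] - 1 else 0.0
    return y + dy, x + dx
```

The parabola fit is one common way to obtain sub-pixel accuracy from a discrete correlation surface; real tools may use more elaborate interpolation.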

(3) Threshold Segmentation
Exploiting the large grayscale differences between the various parts of the PCB surface in the grayscale image, different threshold values are obtained to inspect the PCB surface through grayscale features. Impact provides many tools for threshold segmentation. For example, Blob can find the center of the region within a grayscale range on the image and read its coordinates. Yang et al. (2013, p. 36) assert that Contrast can judge the average grayscale within a range and select the optimal one. The implementation steps are: select the detection area on the image; set the grayscale threshold; set the output result. The Blob working diagram is shown in Fig. 4.

Fig. 4. Blob tool
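The Blob idea, finding the centre of the pixels inside a grayscale band and reading its coordinates, can be approximated as below. `blob_center` is an assumed helper name and covers only the single-blob case, not the full vendor tool:

```python
import numpy as np

def blob_center(gray, lo, hi):
    """Centroid of the pixels whose grey level falls in [lo, hi].
    Assumes the thresholded region forms a single blob in the ROI."""
    mask = (gray >= lo) & (gray <= hi)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None           # nothing inside the grayscale band
    return xs.mean(), ys.mean()   # (u, v) image coordinates
```

For several separated blobs, a connected-component labelling step (e.g. `scipy.ndimage.label`) would be needed before taking per-blob centroids.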

(4) Edge Enhancement
Lu and Yin (2016, p. 5095) suggest that when the edge of the chip is not obvious, as in the left of Fig. 5, it is easier to recognize the pin after edge enhancement. The principle is shown in formulas (1.1) and (1.2). The setting of edge enhancement is: find the ROI that needs to be enhanced; set the angle of the edge enhancement; obtain the enhanced image. The contrast enhancement is shown in Fig. 5.

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A \qquad (1.1)$$

$$G = \sqrt{G_x^2 + G_y^2} \qquad (1.2)$$

Fig. 5. Edge enhancement contrast effect
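A minimal numpy rendering of formulas (1.1) and (1.2); `sobel_magnitude` is an illustrative name, and the loop-based convolution is written for clarity rather than speed:

```python
import numpy as np

def sobel_magnitude(a):
    """Gradient magnitude G = sqrt(Gx^2 + Gy^2) with the 3x3 Sobel
    kernels of formulas (1.1)/(1.2), computed on the image interior."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                      # the vertical Sobel kernel
    h, w = a.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):             # accumulate the 3x3 correlation
        for j in range(3):
            patch = a[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.sqrt(gx ** 2 + gy ** 2)
```

Since only the magnitude is taken, the sign convention (correlation vs. convolution) of the kernels does not matter here.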




(5) Multi-region Contrast Comparison
In the detection of chip soldering pins and components, the range of connected grayscale regions is determined by continuous flux analysis. The usage of the Contrast tool is: select all ROIs; iterate the ROI box over each ROI; determine the threshold form to be used; set a ratio to decide whether the amount of gray-level distribution is reasonable; determine the output content. Zhu and Du (2014, p. 326) demonstrate that this approach can perform grayscale detection in multiple regions simultaneously. As shown in Fig. 6, the Contrast–Multiple tool is used.

Fig. 6. Contrast–multiple tool
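The Contrast–Multiple procedure (iterate over the ROIs, apply a grayscale threshold, check a pass ratio) can be sketched as follows. `contrast_multiple` and its pass criterion are assumptions made for illustration:

```python
import numpy as np

def contrast_multiple(gray, rois, lo, hi, min_ratio):
    """For each ROI (y0, y1, x0, x1), pass if the fraction of pixels
    inside the grey-level band [lo, hi] reaches min_ratio."""
    results = []
    for y0, y1, x0, x1 in rois:
        roi = gray[y0:y1, x0:x1]
        in_band = ((roi >= lo) & (roi <= hi)).mean()
        results.append(bool(in_band >= min_ratio))
    return results
```

This checks all regions in a single pass over the ROI list, mirroring the simultaneous multi-region detection described above.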


4.4 Robot Software Design

The robot program mainly receives the coordinate information sent by the camera and sends information back to the camera as a trigger signal. Once the robot is ready, the camera starts to work. In this system, the trigger signals between the camera and the air pump are sent to the serial port of the camera by the robot.

5 The Experimental Results

5.1 The Display of VPM

The display of the PCB detection results and the user operation are mainly accomplished through VPM. Through the PC, users can control the start and stop of the system, change the variance, and select which detection objects to display. The system detection results and the operation interface are shown in Fig. 7.

5.2 The Experimental Data

After the completion of the PCB detection and classification platform, the existing PCBs were used for experiments. The recorded experimental results are shown in Table 1. Experiment 1: a batch of different PCBs was placed on the platform at the same angle for testing. Experiment 2: a batch of different PCBs was placed at different angles for testing. Experiment 3: one PCB was detected multiple times. Experiment 4: a batch of PCBs was tested. The experimental data show that as long as the PCB is positioned successfully, the detection of components and the results of detection and classification have high accuracy.

Fig. 7. The CPM interface in operation

Table 1. Experimental data summary

Nu. | Testing times | Successful times | Success rate | Positioning success rate | Successful component testing rate | Sorting accuracy rate
1   | 80            | 79               | 98.75%       | 98.75%                   | 98.75%                            | 98.75%
2   | 60            | 59               | 98.3%        | 98.3%                    | 98.3%                             | 98.3%
3   | 20            | 20               | 100%         | 100%                     | 100%                              | 100%
4   | 20            | 20               | 100%         | 100%                     | 100%                              | 100%

6 Conclusion

This research on a PCB detection and classification system based on machine vision has studied the application of AOI to PCB defect detection. The system has the following innovations: (1) Software registration is designed to speed up placement and saves cost, since the system works without hardware registration. (2) Better interactivity: in running mode, the interactive functions of CPM can display the ROI area, show pass or fail for each tool, adjust tolerances, provide password login, and adjust the ROI area. (3) With slight modifications to software and hardware, the system can also be applied to similar product quality inspection, making it versatile and practical. The system realizes the detection and classification of PCB defects, improves detection efficiency, saves labor costs and reduces the error detection rate. The experimental results show that the system is feasible, practical and easy to observe.



References

Lu, R., Yin, D.: Component surface defect detection based on image segmentation method. In: Chinese Control and Decision Conference, pp. 5093–5096 (2016). document/7531906/. Accessed 1 May 2018
Nand, G.K., Neogi, N.: Defect detection of steel surface using entropy segmentation. In: Annual IEEE India Conference, pp. 1–6 (2014). 7030439/. Accessed 1 Apr 2018
Yang, S., Yan, A., Chan, H.: Calibration of flicker meters in standards and calibration laboratory (SCL). In: Proceedings of the 2015 NCSL International Workshop and Symposium (2015)
Yang, Y., Chen, X., Tang, Y., Bao, Z.: Ribbon detection based on threshold segmentation after grey level transformation. J. Silk 50(9), 36–40 (2013)
Zhu, J., Du, X.: Advanced multi-scale fractal net evolution approach. Remote Sens. Technol. Appl. 29(2), 324–329 (2014)
Zhong, Y., Zhang, L.: Sub-pixel mapping based on artificial immune systems for remote sensing imagery. Pattern Recogn. 46(11), 2902–2926 (2013)

Diagnosis of Out-of-Control Signals in Multivariate Manufacturing Processes with Random Forests

Zheng Jian¹, Beixin Xia² (corresponding author), Chen Wang², and Zhaoyang Li²

¹ School of Mechatronics Engineering and Automation, Shanghai University, Shanghai, China
[email protected]
² School of Management, Shanghai University, Shanghai, China
[email protected]

Abstract. In a multivariate manufacturing process, there are two or more correlated quality characteristics that need to be monitored and controlled simultaneously, and hence various multivariate control charts are applied to determine whether a process is in control. Once a control chart gives an alarm, the next task is to determine the source(s) of the out-of-control signals. In this paper, a random forests model is developed to diagnose the source(s) of out-of-control signals in multivariate processes. A bivariate manufacturing process is used to compare the performance of the random forests model with a support vector machine (SVM) model. The results show that the random forests model performs better than the SVM model, and also indicate the effectiveness of the random forests model in identifying the source of out-of-control signals.

Keywords: Manufacturing quality control and management · Predictive maintenance · Diagnosis and prognosis of machines · Random forests

1 Introduction

In modern manufacturing processes, driven by the application of more advanced technology and industrial expansion, a manufacturing system usually involves two or more correlated quality characteristics, and hence it is essential to monitor and control all these quality variables simultaneously. In this situation, multivariate control charts are the most widely applied tools to reveal the state of the manufacturing process [1, 2]. When a control chart gives an alarm, quality engineers should search for the assignable causes and take the necessary corrections to bring the out-of-control process back to the in-control state [3]. Therefore, it is essential to determine the source(s) of the out-of-control signals. In the field of multiple quality diagnosis, various artificial intelligence techniques have recently been applied to identify the responsible variables when the process is considered to be out of control [4]. Low et al. [5] developed a neural network procedure for detecting variations of the variance in multivariate processes. Their simulation results demonstrated the superiority of the proposed model in process control while

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 262–267, 2019.

Diagnosis of Out-of-Control Signals in Multivariate Manufacturing Processes


multiple quality characteristics were simultaneously considered. Niaki and Abbasi [6] developed a multi-layer perceptron for identifying the source(s) of the out-of-control signals. Guh [7] proposed a neural network-based model that can identify and quantify the mean shifts in bivariate processes. To identify the source(s) of variance shifts, Cheng and Cheng [8] considered two classifiers based on artificial neural networks (ANN) and support vector machines (SVM) to interpret the out-of-control signals. Cheng et al. [9] proposed a model based on a least squares SVM to diagnose bivariate process abnormality in the covariance matrix. Recently, ensemble learning methods (bootstrap, boosting, etc.) have received strong interest. They consist in learning several weak classifiers to generate a classifier with a strong decision rule [10]. Among ensemble learning methods, random forests (RF) [11] have been shown to be a powerful tool for multi-class classification problems. In this study, a random forests model is developed to diagnose the source(s) of out-of-control signals in multivariate processes. A bivariate manufacturing process was used to compare the performance of the random forests model with a SVM model on this problem. Our results demonstrate that random forests are a promising tool to interpret the out-of-control signals. The remainder of this paper is organized as follows. First, the fundamental principles of random forests are reviewed briefly in Sect. 2. Then the simulation experiments and their results are described in Sect. 3. Conclusions are given in Sect. 4.

2 Review of Random Forests

Based on decision trees and combined with aggregation and bootstrap ideas, random forests were introduced by Breiman [11]. There are two key parameters in random forests: the number of trees (T) and the number of variables (mtry). In general, a model with more trees can reach a higher classification accuracy, at the cost of more complex computation and longer running time. According to [12], mtry is suggested to be set to the square root of the dimensionality of the input space. Random forests usually use information gain (see Eq. 1) and the Gini index (see Eq. 3) as splitting criteria. The principle of random forests is to combine many binary decision trees through different combining rules, such as simple averaging, weighted averaging and majority voting. Given a training set T = {(x₁, y₁), (x₂, y₂), …, (xₙ, yₙ)}, xᵢ is a K-dimensional feature vector of the ith sample and yᵢ is the label of xᵢ. Each decision tree is trained on the whole training set T. A is the number of classes in the initial training set. The kth feature has M possible values. The proportion of the jth class in T is denoted pⱼ, and Tᵐ is the set of samples taking the mth possible value of feature k. The information gain and Gini index are computed as follows:

Information gain:

$$\mathrm{Gain}(T,k) = \mathrm{Entropy}(T) - \sum_{m=1}^{M} \frac{|T^m|}{|T|}\,\mathrm{Entropy}(T^m) \qquad (1)$$

$$\mathrm{Entropy}(T) = -\sum_{j=1}^{A} p_j \log p_j \qquad (2)$$

Gini index:

$$\mathrm{GiniIndex}(T,k) = \sum_{m=1}^{M} \frac{|T^m|}{|T|}\,\mathrm{Gini}(T^m) \qquad (3)$$

$$\mathrm{Gini}(T) = \sum_{j=1}^{A} p_j\,(1 - p_j) \qquad (4)$$
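The splitting criteria of Eqs. (1)–(4) can be checked with a small numpy sketch. The function names are hypothetical, and log base 2 is assumed for the entropy (the paper does not state the base):

```python
import numpy as np

def entropy(labels):
    """Entropy(T) = -sum_j p_j log2(p_j) over the class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(labels, feature):
    """Gain(T, k) = Entropy(T) - sum_m |T^m|/|T| * Entropy(T^m),
    where T^m groups the samples by the m-th value of feature k."""
    gain = entropy(labels)
    for v in np.unique(feature):
        sub = labels[feature == v]
        gain -= sub.size / labels.size * entropy(sub)
    return gain
```

A feature that perfectly separates the classes yields a gain equal to the full entropy; a feature independent of the labels yields a gain of zero.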




3 Simulation Experiments

3.1 Generation of Simulation Data

In this study, we focus on using random forests to diagnose the source(s) of abnormal variable(s) occurring in multivariate processes. A Monte-Carlo simulation is therefore adopted to generate the required multivariate data. In a multivariate manufacturing process, there are usually two or more correlated quality characteristics. In this study, a bivariate manufacturing process model is adopted to generate the simulation data; it can be expressed as Eq. (5):

$$Y_t = \mu + F_t \qquad (5)$$

where $Y_t$ is the vector of quality characteristics at time t when the process is in control and $\mu$ is the process mean vector. $F_t$ follows a bivariate normal distribution with unit variances and correlation coefficient $\rho$, i.e. $\mu_0 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$. In this study we assume $\mu = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ and $\rho = 0.6$. The mean-shift pattern is adopted for the out-of-control process. If an out-of-control pattern occurs in the process at time t, this can be expressed as Eq. (6):

$$X_t = Y_t + \delta \qquad (6)$$

where $X_t$ is the vector of quality characteristics at time t when the process is out of control and $\delta = \begin{pmatrix} \delta_1 \\ \delta_2 \end{pmatrix}$ is the shift vector. Five different shift sizes are selected to cover the whole range of shift patterns ($\delta_1, \delta_2 = 1.0\sigma, 1.5\sigma, 2.0\sigma, 2.5\sigma, 3.0\sigma$; $\sigma = 1$ in this study). For generating the training data sets, we shift the mean vector as $(\mu_1 + 1.0\sigma_{11}, \mu_2)$, $(\mu_1, \mu_2 + 1.0\sigma_{22})$, $(\mu_1 + 1.0\sigma_{11}, \mu_2 + 1.0\sigma_{22})$ in this bivariate process.
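Generating the data of Eqs. (5) and (6) amounts to drawing correlated bivariate normal samples and adding the shift vector. A minimal numpy sketch (`simulate` is an assumed helper name):

```python
import numpy as np

def simulate(n, shift, rho=0.6, seed=0):
    """Monte-Carlo samples of X_t = mu + F_t + delta with mu = (0, 0),
    F_t bivariate normal (unit variances, correlation rho)."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    f = rng.multivariate_normal(np.zeros(2), cov, size=n)
    return f + np.asarray(shift, dtype=float)
```

For example, `simulate(500, (1.0, 0.0))` produces 500 observations of a Type A shift (variable 1 shifted by 1.0σ).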




3.2 Structure of Simulation Experiments

This section outlines the specific structure of the simulation experiments. In this study, a bivariate manufacturing process is considered to evaluate the performance of random forests for the diagnosis of out-of-control signals. The simulation experiments involve two stages: off-line learning and on-line evaluation. Figure 1 shows the scheme of the simulation process.

Fig. 1. Structure of simulation experiments

In the stage of off-line learning, five hundred examples were generated for each abnormal case, giving a total of 500 × 3 = 1500 examples as the training data set. The parameters T and mtry are set to 300 and 3, respectively. After inputting the training set to the random forests, a well-trained model is obtained. In the stage of on-line evaluation, 500 test data sets were generated to calculate the percentage of correct classification for each abnormal case. The possible abnormal cases can be divided into three types for a bivariate manufacturing process: Type A: shift in variable 1, (μ₁ + 1.0σ, μ₂); Type B: shift in variable 2, (μ₁, μ₂ + 1.0σ); Type C: shifts in both variables, (μ₁ + 1.0σ, μ₂ + 1.0σ).

3.3 Results and Discussions

In this study, a bivariate manufacturing process with three different abnormal cases (Type A, Type B, Type C) is designed to compare the performance of the random forests model with a SVM model. The relevant parameters are set as discussed above. The detailed simulation results are shown in Table 1.

Table 1. Comparison between the random forests model and the SVM method for the two-variable process

Model          | Shift magnitude | Type A | Type B | Type C | Correct classification percentage (%)
SVM            | (1.0 0.0)       | 403    | 0      | 97     | 80.60
               | (0.0 1.0)       | 25     | 398    | 77     | 79.60
               | (1.0 1.0)       | 83     | 59     | 358    | 71.60
               | (1.5 0.0)       | 452    | 0      | 48     | 90.40
               | (0.0 1.5)       | 0      | 476    | 24     | 95.20
               | (1.5 1.5)       | 7      | 3      | 490    | 98.00
               | (2.0 0.0)       | 480    | 0      | 20     | 96.00
               | (0.0 2.0)       | 0      | 497    | 3      | 99.40
               | (2.0 2.0)       | 0      | 1      | 499    | 99.80
               | (2.5 0.0)       | 493    | 0      | 7      | 98.60
               | (0.0 2.5)       | 0      | 492    | 8      | 98.40
               | (2.5 2.5)       | 0      | 0      | 500    | 100.00
               | (3.0 0.0)       | 497    | 0      | 3      | 99.40
               | (0.0 3.0)       | 0      | 500    | 0      | 100.00
               | (3.0 3.0)       | 0      | 0      | 500    | 100.00
               | Aggregate       |        |        |        | 93.80
Random forests | (1.0 0.0)       | 478    | 2      | 20     | 95.60
               | (0.0 1.0)       | 0      | 470    | 30     | 94.00
               | (1.0 1.0)       | 40     | 49     | 411    | 82.20
               | (1.5 0.0)       | 498    | 0      | 2      | 99.60
               | (0.0 1.5)       | 0      | 474    | 26     | 94.80
               | (1.5 1.5)       | 23     | 2      | 475    | 95.00
               | (2.0 0.0)       | 500    | 0      | 0      | 100.00
               | (0.0 2.0)       | 0      | 497    | 3      | 99.40
               | (2.0 2.0)       | 0      | 3      | 497    | 99.40
               | (2.5 0.0)       | 499    | 0      | 1      | 99.80
               | (0.0 2.5)       | 0      | 497    | 3      | 99.40
               | (2.5 2.5)       | 0      | 0      | 500    | 100.00
               | (3.0 0.0)       | 500    | 0      | 0      | 100.00
               | (0.0 3.0)       | 0      | 499    | 1      | 99.80
               | (3.0 3.0)       | 0      | 0      | 500    | 100.00
               | Aggregate       |        |        |        | 97.27

As shown in Table 1, the total average percentages of correct recognition of the two models are 93.80% and 97.27%, respectively. The random forests model shows a better performance than the SVM model in the diagnosis of out-of-control signals in multivariate manufacturing processes. The results therefore indicate that the random forests model is a promising tool for identifying the source(s) of out-of-control signals in the multivariate process.

4 Conclusion

In this paper, a random forests model is successfully applied to a bivariate manufacturing process to identify the source(s) of out-of-control signals in the multivariate process. The performance of the random forests model was evaluated by a simulation experiment in which it was compared with a SVM model. According to the simulation experiment, the random forests model shows a better performance owing to its ensemble learning characteristics. The results also indicate that the random forests model is a reliable tool for diagnosing out-of-control signals in multivariate manufacturing processes.

References

1. Abdella, G.M., Al-Khalifa, K.N., Kim, S., Jeong, M.K., Elsayed, E.A., Hamouda, A.M.: Variable selection-based multivariate cumulative sum control chart. Qual. Reliab. Eng. Int. 33(3), 565–578 (2017)
2. Naderkhani, F., Makis, V.: Economic design of multivariate Bayesian control chart with two sampling intervals. Int. J. Prod. Econ. 174, 29–42 (2016)
3. Yu, J., Xi, L., Zhou, X.: Identifying source(s) of out-of-control signals in multivariate manufacturing processes using selective neural network ensemble. Eng. Appl. Artif. Intell. 22(1), 141–152 (2009)
4. Cheng, C.S., Lee, H.T.: Diagnosing the variance shifts signal in multivariate process control using ensemble classifiers. J. Chin. Inst. Eng. 39(1), 64–73 (2016)
5. Low, C., Hsu, C.M., Yu, F.J.: Analysis of variations in a multi-variate process using neural networks. Int. J. Adv. Manufact. Technol. 22(11–12), 911–921 (2003)
6. Niaki, S.T.A., Abbasi, B.: Fault diagnosis in multivariate control charts using artificial neural networks. Qual. Reliab. Eng. Int. 21(8), 825–840 (2005)
7. Guh, R.S.: On-line identification and quantification of mean shifts in bivariate processes using a neural network-based approach. Qual. Reliab. Eng. Int. 23(3), 367–385 (2007)
8. Cheng, C.S., Cheng, H.P.: Identifying the source of variance shifts in the multivariate process using neural networks and support vector machines. Expert Syst. Appl. 35(1–2), 198–206 (2008)
9. Cheng, Z.Q., Ma, Y.Z., Bu, J.: Variance shifts identification model of bivariate process based on LS-SVM pattern recognizer. Commun. Stat. Simul. Comput. 40(2), 274–284 (2011)
10. Pelletier, C., Valero, S., Inglada, J., Champion, N., Dedieu, G.: Assessing the robustness of random forests to map land cover with high resolution satellite image time series over large areas. Remote Sens. Environ. 187, 156–168 (2016)
11. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
12. Micheletti, N., Foresti, L., Robert, S., et al.: Machine learning feature selection methods for landslide susceptibility mapping. Math. Geosci. 46(1), 33–57 (2014)

Effect of Processing Parameters on the Relative Density of AlSi10Mg Processed by Laser Powder Bed Fusion Even Wilberg Hovig(&), Håkon Dehli Holm, and Knut Sørby Norwegian University of Science and Technology, 7031 Trondheim, Norway [email protected]

Abstract. In order for a material processed by laser powder bed fusion (LPBF) to achieve the desired mechanical properties, the relative density of the material should be as high as possible. In this study, the effect of various processing parameters of an LPBF process has been investigated in order to increase the relative density of processed AlSi10Mg. The results show that pre-heating the powder and increasing the substrate temperature increase both the relative density and the build rate. In addition, a novel method to select the processing parameters, based on the laser penetration depth, has been proposed.

Keywords: Laser powder bed fusion · Additive manufacturing · AlSi10Mg · Relative density · Selective laser melting · Process parameter

1 Introduction

Laser powder bed fusion (LPBF) is a process where a component is manufactured layer by layer from a three-dimensional CAD model. The feedstock material is typically a pre-alloyed powder, which is distributed in a thin layer over a substrate of a (usually) similar alloy. The first layer is melted by a laser, before the substrate is moved down by the distance of one layer height, and a new layer of powder is distributed over the preceding layer. The second layer is then melted, and the process repeats until the finished geometry is produced. The substrate may or may not be heated, and the process takes place in a controlled atmosphere, usually with inert gas to minimize the oxygen content in the build chamber. LPBF has some specific challenges when compared to traditional manufacturing methods such as casting. The challenges include, but are not limited to, high cooling rates [1], elongated grain growth [2], and melt pool effects such as spatter [3]. The high cooling rates contribute to residual stresses in LPBF components, which tend to be compressive in the centre of the component and tensile near the edges [4]. This in turn may lead to cracks or distortion in the material. Distortion during LPBF processing can result in incomplete coating in subsequent layers. Another challenge with the high cooling rate is the possibility of forming keyhole pores. A keyhole pore is a result of rapid solidification and incomplete filling of molten material [5]. A keyhole pore is typically irregular in shape, and large in size (Ø > 100 µm). For LPBF of

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 268–276, 2019.

Effect of Processing Parameters on the Relative Density


AlSi10Mg, the high cooling rate also results in a very fine grain structure, which increases the tensile properties at the cost of ductility [6]. The direction of the thermal gradient is parallel to the build direction, perpendicular to the substrate, which leads to elongated grain growth parallel to the build direction. This produces a crystallographic texture, which in turn leads to anisotropic mechanical properties [2]. Spatter may refer to either droplet spatter or powder spatter [7]. Droplet spatter occurs due to the dynamics of the melt pool, and refers to the propulsion of spherical particles out of the melt pool [8]. Powder spatter occurs when powder particles are blown away by the metallic vapour [7]. The particle size of droplet spatter is up to three times larger than that of the powder particles, and the spatter contains more oxygen than the average component [7]. In addition to the increased oxygen content, the large particles may increase the risk of incomplete powder coating of the next layer, or disturb neighbouring layers, resulting in porosity in the bulk material. Another cause of porosity in LPBF AlSi10Mg is entrapment of gas, either hydrogen or the processing gas (usually argon) [5]. The powder feedstock may also contain trapped gas pores, which may manifest themselves in the processed material. In order to achieve the desired mechanical properties, the relative density of the processed material should be as high as possible [9], and the processing parameters must be selected to reduce the porosity. Several studies have investigated the effect of various processing parameters on the relative density of LPBF materials [5, 9, 10]. The key parameters are listed in Table 1.

Table 1. Key processing parameters for laser powder bed fusion.

Laser parameters        | Symbol | Unit
Laser power             | P      | W
Laser scan velocity     | v      | mm/s
Hatch spacing           | h      | µm
Laser focus diameter    | d      | µm
Build parameters        |        |
Layer thickness         | t      | µm
Substrate temperature   | Ts     | °C
Scan strategy           | –      | –

2 Materials and Methods

Two sets of 19 cubes of 10 × 10 × 10 mm³ were produced with different processing parameters in a Concept Laser M2 Cusing powder bed fusion machine. Continuous heating of the substrate was done with a heated platform designed and installed in the LPBF machine by the authors. The AlSi10Mg powder feedstock was supplied by Concept Laser, with the chemical composition listed in Table 2. Argon was used as the processing gas to keep the oxygen content below 0.2% during processing.


E. W. Hovig et al.

Table 2. Chemical composition of the powder feedstock according to the supplier.

  Si        Mg        Fe  Mn  Ti  Cu  Zn  C  Al
  9.0–11.0  0.2–0.45

  δx = 2·kc(3)·u·v + kc(4)·(r² + 2u²)
  δy = kc(3)·(r² + 2v²) + 2·kc(4)·u·v
  r² = u² + v²
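The tangential distortion terms above are reconstructed from a garbled extraction; the Bouguet-style coefficient names kc(3), kc(4) are carried over from the source. A sketch of the standard tangential distortion model, not the paper's exact implementation:

```python
def tangential_distortion(u, v, kc3, kc4):
    """Tangential distortion terms for normalized image coordinates (u, v):
        dx = 2*kc3*u*v + kc4*(r^2 + 2*u^2)
        dy = kc3*(r^2 + 2*v^2) + 2*kc4*u*v
        r^2 = u^2 + v^2
    """
    r2 = u * u + v * v
    dx = 2 * kc3 * u * v + kc4 * (r2 + 2 * u * u)
    dy = kc3 * (r2 + 2 * v * v) + 2 * kc4 * u * v
    return dx, dy

# Made-up coefficients for illustration:
dx, dy = tangential_distortion(0.1, 0.2, 0.001, -0.0005)
```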


According to the pinhole camera imaging model, the z coordinate of the light stripe in the target coordinate system is ZW = 0. With [u v 1]^T the image coordinates of the light stripe, and ri denoting the i-th column vector of the rotation matrix R, Eq. (3) is simplified into non-homogeneous linear equations, and the least squares method is used to solve for the 3D points in the plane target coordinate system.

One Dimensional Camera of Line Structured Light Probe Calibration

  [u, v, 1]^T = K [r1 r2 r3 t] [XW, YW, 0, 1]^T = K [r1 r2 t] [XW, YW, 1]^T    (3)
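Assuming the standard planar-homography reading of the model above — pixel coordinates obtained from K [r1 r2 t] applied to (XW, YW, 1) — a minimal forward-projection sketch (the intrinsic values below are made up):

```python
def project_planar_point(K, r1, r2, t, XW, YW):
    """Project a planar target point (XW, YW, 0) to pixel coordinates
    via the homography H = K [r1 r2 t]: p = K (XW*r1 + YW*r2 + t)."""
    p = [XW * r1[i] + YW * r2[i] + t[i] for i in range(3)]
    q = [sum(K[i][j] * p[j] for j in range(3)) for i in range(3)]
    return q[0] / q[2], q[1] / q[2]

# Made-up intrinsics and a fronto-parallel target one unit away:
K = [[800, 0, 640], [0, 800, 480], [0, 0, 1]]
u, v = project_planar_point(K, (1, 0, 0), (0, 1, 0), (0, 0, 1), 0.1, 0.2)
```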




The camera intrinsic matrix is

  K = | αu  0   u0 |
      | 0   αv  v0 |
      | 0   0   1  |

where αu and αv are the scale factors of the image u and v axes, and (u0, v0) are the principal point coordinates.

3.2.3 Fitting the Light Knife Plane Equation

The feature points in plane target coordinates, multiplied by the camera's extrinsic parameters, are converted into the camera coordinate system. The plane equation is fitted by the least squares method, and the pose of the light knife plane is calibrated from all the calibration feature points. The plane equation is Ax + By + Cz + D = 0.


With a0 = −A/C, a1 = −B/C, a2 = −D/C, the plane equation is fitted to a series of points such that Eq. (5) is minimized; a0, a1, a2 are solved and the plane equation of the light knife is obtained.

  S = Σᵢ (a0·xᵢ + a1·yᵢ + a2 − zᵢ)²    (5)
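Minimizing S over a0, a1, a2 reduces to solving the 3×3 normal equations. A self-contained sketch (Cramer's rule stands in for a linear-algebra library):

```python
def fit_plane(points):
    """Least-squares fit of z = a0*x + a1*y + a2 to 3-D points,
    minimizing S = sum (a0*x + a1*y + a2 - z)^2."""
    Sxx = Sxy = Sx = Syy = Sy = Sxz = Syz = Sz = 0.0
    for x, y, z in points:
        Sxx += x * x; Sxy += x * y; Sx += x
        Syy += y * y; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    # Normal equations: M [a0, a1, a2]^T = b
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, float(len(points))]]
    b = [Sxz, Syz, Sz]

    def det3(A):
        return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
              - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
              + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

    D = det3(M)
    sols = []
    for k in range(3):            # Cramer's rule, column k replaced by b
        Mk = [row[:] for row in M]
        for i in range(3):
            Mk[i][k] = b[i]
        sols.append(det3(Mk) / D)
    return sols  # [a0, a1, a2]

# Points exactly on z = 2x - y + 3 should recover (2, -1, 3):
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a0, a1, a2 = fit_plane(pts)
```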



3.2.4 Build a Virtual Camera to Express the Light Knife Plane

A virtual camera, with optical center and optical axis, is built on the light knife plane. The X axis is perpendicular to the light knife plane, and the identity matrix is used as the intrinsic matrix of the virtual camera. In the camera coordinate system, the origin (0, 0, 0), the Y-axis point (0, 1, 0) and the Z-axis point (0, 0, 1) are projected onto the light knife plane, and the projected points are used to create the virtual camera coordinate system. According to the principle of line-plane intersection, the line's direction vector is set equal to the plane's normal vector. The line is represented as Eq. (6) and the plane as Eq. (7); solving them gives the intersection of the line with the plane.

  x = m1 + v1·t
  y = m2 + v2·t                                   (6)
  z = m3 + v3·t

  v1·(x − n1) + v2·(y − n2) + v3·(z − n3) = 0     (7)
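With the line direction equal to the plane normal (v1, v2, v3), substituting Eq. (6) into Eq. (7) gives a closed form for t. A small sketch:

```python
def line_plane_intersection(m, v, n):
    """Intersection of the line x = m + t*v (Eq. 6) with the plane
    through n with normal v (Eq. 7). Since the line direction equals
    the plane normal, substituting gives t = v . (n - m) / (v . v)."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    t = dot(v, [ni - mi for ni, mi in zip(n, m)]) / dot(v, v)
    return [mi + t * vi for mi, vi in zip(m, v)]

# Project the origin onto the plane z = 1 (normal (0, 0, 1)):
p = line_plane_intersection([0, 0, 0], [0, 0, 1], [5, 7, 1])
# p == [0.0, 0.0, 1.0]
```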


Q. Zhan and X. Zhang

The virtual camera coordinate system is constructed as follows. First, a straight line is fitted from the first point to the second point to obtain the Z-axis unit vector, and the dot product is used to determine the normal direction from the first point to the second point. Then the third point, perpendicular to the Z axis, gives the Y-axis unit vector. Finally, according to the right-hand rule, the cross product of Z and Y gives the X-axis unit vector. The pose of the virtual camera relative to the camera coordinate system is given by the x, y, z axis unit vectors (rotation) and by the origin coordinates of the virtual camera coordinate system (translation).
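The frame construction described above can be sketched with dot and cross products. The helper below builds an orthonormal frame from a given Z direction and a reference point, which loosely mirrors the text (the exact point choices are an assumption):

```python
def build_frame(z_axis, ref_point):
    """Construct a frame: normalize the given Z axis, derive Y from
    the component of ref_point perpendicular to Z, and obtain X as
    the cross product Z x Y (as stated in the text)."""
    def norm(a):
        s = sum(x * x for x in a) ** 0.5
        return [x / s for x in a]

    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    z = norm(z_axis)
    d = dot(ref_point, z)
    perp = [p - d * zi for p, zi in zip(ref_point, z)]  # remove Z component
    y = norm(perp)
    x = cross(z, y)
    return x, y, z

x, y, z = build_frame([0, 0, 2], [1, 0, 5])
```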

4 Experiment and Analysis

4.1 Experimental System

In this experiment, we use a Point Grey CMLN-13S2M-CS CCD industrial camera with a resolution of 1296 × 964 pixels and a KOWA lens with a focal length of 12 mm. The line laser is a US Coherent STR laser, with a wavelength of 660 nm and a power of 35 mW.

4.2 Calibration Experiments

The circular pattern target image is used for iterative camera calibration (see Fig. 2), and the line structured light target image for line structured light probe calibration is shown in Fig. 3. At least six poses of the ring pattern target, with and without line structured light, are captured. The calibration interface is shown in Fig. 4, and the camera parameters are calibrated. Following the light knife plane calibration in step 3.2, the light strip center extraction results are shown in Fig. 5. The plane attitude equation fitted to the feature points for the calibration posture is [0.77945554; −0.452360302; −0.873902559; 745.171082].

Fig. 2. Target picture


Fig. 3. Line structured light target picture

Fig. 4. Calibration interface

Fig. 5. Strip center picture

4.3 Calibration Accuracy Analysis

The actual line structured light measurement system is shown in Fig. 6. After the iterative camera calibration, the ring center points in the world coordinate system are back-projected into the image coordinate system using the camera's intrinsic and extrinsic parameters. The deviations between the projected points and the actually extracted pixels are calculated to verify the accuracy of the iterative camera calibration. Two gauge blocks are stacked into a stepped workpiece, and the line structured light projector projects structured light onto it to obtain two lines L1 and L2. The average distance from each point on line L1 to line L2 is calculated and compared with the standard distance to verify the precision of the light knife plane calibration (see Fig. 7).

Fig. 6. The actual line structured light measurement system

Fig. 7. Verify the light knife plane calibration accuracy picture

The calibration experiment was repeated 10 times, as shown in Table 1. It can be seen from the table that the maximum error of the iterative camera calibration is within 0.085557 pixels, with an average error of 0.077220 pixels, and the standard deviation of each experiment is within 0.0026658 pixels. The maximum error of the light knife plane calibration is within 1.41906 mm, the mean error is 1.41517 mm, and the standard deviation of each experiment is less than 0.0012523 mm. The algorithm in this paper has high calibration precision, mainly because the iterative camera calibration and the light knife plane calibration yield more accurate target points and a more accurate plane equation of the light knife.

Table 1. Calibration experimental data

         Intrinsic/pixel             Laser plane/mm
  Index  Error     Std. deviation    Error    Std. deviation
  1      0.069142  0.0025544         1.41161  0.0011283
  2      0.080219  0.0012458         1.41407  0.0004994
  3      0.085557  0.0026658         1.41498  0.0001691
  4      0.077506  0.0008478         1.41124  0.0012452
  5      0.081413  0.0013529         1.41215  0.0010339
  6      0.075493  0.0006937         1.41539  0.0003341
  7      0.073463  0.0012081         1.41787  0.0008587
  8      0.079437  0.0007985         1.41801  0.0009358
  9      0.076668  0.0003069         1.41736  0.0007525
  10     0.073299  0.0012436         1.41906  0.0012523



5 Conclusions

In this paper, a line structured light probe calibration method based on light knife plane modeling is presented. Target images of a ring pattern, with at least two postures, are collected with and without the line structured light. Through iterative camera calibration and line structured light probe calibration, triangulation is used to obtain 3D measurements in the stereo vision model. The accuracy analysis shows that the average error of the iterative camera calibration is 0.077220 pixels, and the average error of the light knife plane calibration is 1.41517 mm. The calibration algorithm of the line structured light probe has high accuracy.

Acknowledgement. This research was partially supported by the Key Research Project of the Ministry of Science and Technology (Grant No. 2017YFB1301503) and the National Natural Science Foundation of China (Grant No. 51575332).

References

1. Niola, V., Rossi, C., Savino, S., et al.: A method for the calibration of a 3-D laser scanner. Rob. Comput. Integr. Manuf. 27(2), 478–484 (2011)
2. Vilaca, J., Fonseca, J.C., Pinho, A.M.: Calibration procedure for 3D measurement systems using two cameras and laser line. Opt. Laser Technol. 41(2), 112–119 (2009)
3. Dewar, R.: Self-generated targets for spatial calibration of structured light optical sectioning sensors with respect to an external coordinate system. In: Proceedings of Robots and Vision 88 Conference, pp. 5–13 (1988)
4. James, K.W.: Noncontact machine vision metrology within a CAD coordinate system. In: Proceedings of Autofact 88 Conference, pp. 9–17 (1988)
5. Zhao, Y., Chen, X.: The circular points-based camera self-calibration study on three classification algorithms. J. Comput. Inf. Syst. 7(4), 1140–1147 (2011)
6. Zhou, F.Q., Zhang, G.J.: Facilitated method to calibrate model parameters of vision sensor for surface measurement. Chin. J. Mech. Eng. 41(3), 175–179 (2005)
7. Zhou, F.Q., Zhang, G.J., Jiang, J.: Constructing feature points for calibrating a structured light vision sensor by viewing a plane from unknown orientations. Opt. Laser Eng. 43(10), 1056–1070 (2005)
8. Zhou, F.Q., Zhang, G.J.: Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations. Image Vis. Comput. 23(1), 59–67 (2005)
9. Hu, Z., Wang, Y., Yang, J., et al.: A novel calibration method for three-dimensional modeling system used in virtual maintenance. In: International Conference on Advanced Computer Control, pp. 301–303. IEEE (2010)
10. Santolaria, J., Guillomía, D., Cajal, C., et al.: Modelling and calibration technique of laser triangulation sensors for integration in robot arms and articulated arm coordinate measuring machines. Sensors 9(9), 7374 (2009)
11. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)
12. Moré, J.J.: The Levenberg-Marquardt algorithm: implementation and theory. Lecture Notes in Mathematics, vol. 630, pp. 105–116 (1977)

Optimization of Sample Size for Two-Point Diameter Verification in Coordinate Measurements

Petr Chelishchev and Knut Sørby

Department of Mechanical and Industrial Engineering, NTNU, 7491 Trondheim, Norway
[email protected]

Abstract. This paper investigates the possibility of reducing the sample size for inspection of two-point diameters with a coordinate measuring machine, by use of statistical methods. The statistical methods employ parametric and non-parametric statistics. As confirmed by the simulation results, it is possible to keep the 95% confidence level with a relatively small data sample. A low sample size is especially important for operative online dimension inspection on a CNC machine with immediate correction of a suspected part.

Keywords: Two-point diameter · Sample size · Test statistic · Model simulation · Measuring strategy · CMM

1 Introduction

The main goal of Geometrical Product Specifications (GPS) inspection is to verify that the geometry and the dimensions of a part are inside the tolerance limits specified by the drawing requirements, with some given confidence level. One of the important parameters of a measuring strategy on a coordinate measuring machine (CMM) is the number of measuring points [1]. On one hand, a large sample size provides better accuracy. On the other hand, a large sample size increases costs and time consumption in CMM inspection. The necessary sample size depends on many factors, such as the tolerance type (form, dimension etc.), the magnitude of deviation from the desired value, and its ratio to the tolerance interval. Thus, the proper choice of an optimal number of measuring points is a nontrivial task. Different approaches have been suggested to address the sample size problem. Parametric statistical principles based on a normal distribution of the measured variable have been suggested [2, 3]. A fuzzy logic approach to reduce the number of measuring points with a CMM was proposed by Cappetti et al. [4]. Other approaches with genetic algorithms [5], adaptive sample strategies with Kriging models [6] and combinations of analytical methods with uncertainty simulations [7] have been applied to solve the measuring strategy problem. In a previous paper, the authors estimated the optimal sample size for detecting 95% of the radius variation range (roundness form deviation of cylinder cross-sections) with 95% confidence level [8].

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 313–321, 2019.



In this paper, we have investigated the reduction of the number of measurements in two-point size verification of a circular feature. According to ISO 14405-1 [9], "the two-point size is the distance between two opposite points on an extracted integral linear feature of size". The two-point size of a cylinder is also called the "two-point diameter". The two-point size of a circular feature is illustrated in Fig. 1.

Fig. 1. Two-point diameter: a-actual cylinder; aʹ-section profile; b, bʹ-Gauss associated cylinder; c, cʹ-axis of Gauss associated cylinder; d-cylinder median line; e-Gauss associated circle (of section); f-Gauss associated circle center of e; g-actual local size (two-point diameter), the straight line between two opposite points P1 and P2, which goes through the center f

Research in the sample strategy field generally focuses on the evaluation of geometrical form deviations (e.g. roundness, flatness) [5, 7, 8, 11]. In this paper, we consider the case where the diameter tolerance band is assumed to be larger than the variation of the two-point diameter within one single part. The parameters of interest in part inspection are the mean value and the variation of the two-point diameter.

2 Method and Material

We use the statistical hypothesis test approach to analyse the measurement data from the CMM. The data sets are transformed and estimated before they are applied in the simulation. A more detailed description is given below.

2.1 Experimental Data

In this case study, we have inspected an internal cylinder of an aluminium workpiece with internal diameter 60 mm and length 130 mm. The measurements were performed on a Leitz PMM-C-600 coordinate measuring machine with an analogue probe. The least squares cylinder method was used to establish the cylinder axis, which is the z-axis of the coordinate system. Three cross-sections (A, B, C) were measured: the first at the top, the second in the middle and the third at the bottom, respectively. There are 500 measuring points in each cross-section, from which 250 diameter values Di were calculated:

  Di = √((xi − xi+250)² + (yi − yi+250)²),   i = 1…250    (1)
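Equation (1) pairs point i with its opposite point i + 250. A sketch for an arbitrary even number of points:

```python
import math

def two_point_diameters(points):
    """Two-point diameters from 2*n points sampled around a
    cross-section: point i is paired with the opposite point i + n
    (Eq. 1 in the text, with n = 250)."""
    n = len(points) // 2
    return [math.hypot(points[i][0] - points[i + n][0],
                       points[i][1] - points[i + n][1])
            for i in range(n)]

# Four points on a circle of radius 30 mm -> both diameters are 60 mm:
pts = [(30, 0), (0, 30), (-30, 0), (0, -30)]
ds = two_point_diameters(pts)
```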




For simplicity of presentation and further data processing, the array Di has been transformed into ξi = (D̄ − Di) × 1000, where D̄ is the mean value of the diameter in each cross-section. We assume that all lines connecting two opposite points agree with the two-point diameter definition shown in Fig. 1. In order to derive the shape of the probability density function (pdf) f(ξ) of the standardized variable ξi, the kernel density estimator (KDE) has been used [12]:

  f̂(ξ) = (1/(bn)) Σᵢ₌₁ⁿ K((ξ − ξᵢ)/b)    (2)

We have applied the Epanechnikov kernel K, the default MATLAB bandwidth b, and the sample size n = 250. The estimated pdfs fA(ξ), fB(ξ), fC(ξ) for the three cross-sections are shown in Fig. 2.

Fig. 2. The KDE f̂A(ξ), f̂B(ξ), f̂C(ξ) based on 250 variables, and the adjusted normal distribution
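The kernel density estimate in Eq. (2) with the Epanechnikov kernel, K(u) = 0.75(1 − u²) for |u| ≤ 1, can be sketched as follows (the MATLAB default bandwidth is not reproduced here; b is passed in explicitly):

```python
def epanechnikov_kde(xs, data, b):
    """KDE f(x) = 1/(b*n) * sum_i K((x - x_i)/b) with the
    Epanechnikov kernel K(u) = 0.75*(1 - u^2) for |u| <= 1."""
    n = len(data)

    def K(u):
        return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

    return [sum(K((x - xi) / b) for xi in data) / (b * n) for x in xs]

# Density of a single data point at 0 with bandwidth 1 peaks at 0.75:
f = epanechnikov_kde([0.0], [0.0], 1.0)
# f == [0.75]
```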

At this point, for further purposes, we need to adjust the standard deviation σ0 of the normal distribution in such a way that the six-sigma interval covers any of the cross-section variation ranges. At the same time, we want this value to be as small as possible to avoid an unnecessary reduction of the tolerance interval. In this particular case study, the standard deviation σ0 = 1.6 µm has been chosen. The pdf objects with estimated parameters will be used in the simulation to generate random measurements of the workpiece, in order to evaluate the influence of the sample size on the two-point diameter verification.




2.2 Statistical Method and Simulation

In order to estimate the influence of the sample size, we apply a statistical hypothesis test with a test statistic and a pre-specified significance level [13]. First of all, this method tests the sample strategy that was used for the CMM inspection. In other words, we establish a statistical method to evaluate whether the chosen sample size is sufficient or not. Next, we consider whether such a method can be applied for verification of the two-point diameter. We apply the conventional two-sample hypothesis test, with some small modifications, to a non-standard problem. The principle of these modifications is illustrated in Fig. 3. Examples of the unknown non-normal distributions of the workpiece diameter variable are depicted as 5, 6, and 7. As we do not know in advance in which direction from the nominal size the deviation of the workpiece size can occur, two independent hypothesis tests must be prepared; however, only one of them is carried out in each single case.

Fig. 3. Statistical hypothesis tests for verification of two-point diameter: 1-lower tolerance limit; 2-upper tolerance limit; 3-upper Gauss (null hypothesis); 4-lower Gauss (null hypothesis); 5-KDE object with large deviation; 6-KDE object with medium deviation; 7-KDE object with small deviation; 8-lower boundary of the statistical test; 9-upper boundary of the statistical test

The mean values of both normal distributions are located such that the distance between one of the means and the corresponding tolerance limit (1 – lower, 2 – upper) equals 3σ0 on each side, as shown in Fig. 3. The cases when the sample mean is below or above the mean of the Gauss curves (Fig. 3) correspond to the null hypotheses H0L or H0U. When one of the null hypotheses is accepted, a special procedure with a large number of points is suggested. Thereby, the method does not reject a part, but recommends a larger sample size when necessary. As noted above, we need to test only one of the two hypotheses at a time, either for the lower tolerance limit (LTL) or for the upper tolerance limit (UTL). We consider the tolerance H7 for a 60 mm diameter hole as an example, according to ISO 286-1 (EI = 0, ES = +30 µm) [14]. As long as the diameter



variable Di has been transformed, the calculation of the tolerance limits is simplified: LTL = EI and UTL = ES. The theoretical mean values are then μ0L = LTL + 3σ0 for the null hypothesis H0L, and μ0U = UTL − 3σ0 for H0U (for the lower and the upper tolerance limits respectively). Thereby, the tested hypotheses H0L and H0U are formulated in this way: the sample mean of the measurements is equal to one of the theoretical means of the normal distributions N(μ0L, σ0/√n), N(μ0U, σ0/√n), i.e. either μS = μ0L (for H0L) or μS = μ0U (for H0U); hence the alternative hypotheses H1L and H1U are μS > μ0L and μS < μ0U respectively. We use the sample mean ξ̄ = (1/n) Σᵢ₌₁ⁿ ξᵢ (to estimate μS) as the test statistic. The sample mean is computed from the sample data generated by the KDE, which has an unknown non-normal distribution K(μS, σS). The alternative hypotheses H1L, H1U assume that the variation range of the sample data is inside the critical range Vk. The critical range is then defined by the lower bound ξkL for the LTL and the upper bound ξkU for the UTL:

  ξkL = u₁₋α · √(σ0²/n) + μ0L,    (3)

  ξkU = uα · √(σ0²/n) + μ0U,    (4)

where uα and u₁₋α are the quantiles of levels α and 1 − α respectively for N(0, 1), and n is the number of observations. We have used the significance level α = 0.05 in the statistical simulation (i.e. u₀.₀₅ = −1.645 and u₀.₉₅ = 1.645).

3 Simulation Results

When using two-sample tests of hypotheses, we must be aware of the significant tolerance interval reduction, especially for the small sample sizes. The computed results for the boundaries and the critical range (Eqs. 3 and 4) are shown in Table 1.

Table 1. Reduction of the 60 mm H7 tolerance interval (α = 0.05, σ0 = 1.6)

  Sample size  Lower bound, µm  Upper bound, µm  Critical range, µm  Tolerance reduction, %
  5            6.0              24.0             18.0                40.00
  10           5.6              24.4             18.8                37.33
  15           5.5              24.5             19.0                36.67
  20           5.4              24.6             19.2                36.00
  30           5.3              24.7             19.4                35.33
  40           5.2              24.8             19.6                34.67
  50           5.2              24.8             19.6                34.67
  60           5.1              24.9             19.8                34.00
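Equations (3) and (4) with u₀.₉₅ = 1.645 reproduce Table 1. A quick check for the 60H7 example (LTL = 0, UTL = 30 µm, σ0 = 1.6 µm):

```python
def critical_bounds(n, sigma0=1.6, ltl=0.0, utl=30.0, u=1.645):
    """Lower and upper critical bounds, Eqs. (3) and (4):
    mu_0^L = LTL + 3*sigma0, mu_0^U = UTL - 3*sigma0, each shifted
    inwards by u * sqrt(sigma0^2 / n)."""
    mu_l = ltl + 3 * sigma0
    mu_u = utl - 3 * sigma0
    shift = u * (sigma0 ** 2 / n) ** 0.5
    return mu_l + shift, mu_u - shift

lo, hi = critical_bounds(5)
reduction = 100 * (1 - (hi - lo) / 30)  # tolerance reduction in percent
```

For n = 5 this gives bounds of about 6.0 µm and 24.0 µm, i.e. roughly a 40% reduction, matching the first row of Table 1.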



After the boundaries of the critical range are established, we can proceed with the simulation. N = 10⁵ iterations have been simulated for each sample size n. We consider the deviation of the mean μS from the LTL, as denoted by 5 and 6 in Fig. 3. Three different kernel distributions have been simulated for the mean difference δjL, with the values δ1L = σ0, δ2L = 0.5·σ0, and δ3L = 0, such that μS = μ0L + δL. The sample mean ξ̄ is compared with either the lower bound ξkL or the upper bound ξkU of Table 1. When the condition ξ̄ > ξkL or ξ̄ < ξkU is fulfilled, the iteration is assigned 1 (0 otherwise) and summed into the counters C0L or C0U; the rejecting rates η0L or η0U are then calculated as C0L/N or C0U/N respectively. The simulation results for each cross-section A, B and C are presented in Tables 2, 3 and 4 (the simulation results for the opposite side, UTL, are similar and not presented in the paper).

Table 2. Rejecting rates η0L of the sample mean location for Section A

  Mean difference, δjL   Sample size, n
                         5     10    15    20    30    40    50
  δ1L = σ0               0.78  0.98  1     1     1     1     1
  δ2L = 0.5·σ0           0.22  0.46  0.65  0.79  0.93  0.98  1
  δ3L = 0                0.02  0.02  0.02  0.02  0.02  0.02  0.02

Table 3. Rejecting rates η0L of the sample mean location for Section B

  Mean difference, δjL   Sample size, n
                         5     10    15    20    30    40    50
  δ1L = σ0               0.82  0.99  1     1     1     1     1
  δ2L = 0.5·σ0           0.21  0.46  0.66  0.82  0.95  0.99  1
  δ3L = 0                0.01  0.01  0.01  0.01  0.01  0.01  0.01

Table 4. Rejecting rates η0L of the sample mean location for Section C

  Mean difference, δjL   Sample size, n
                         5     10    15    20    30    40    50    60    65
  δ1L = σ0               0.75  0.95  0.99  1.00  1.00  1.00  1.00  1.00  1.00
  δ2L = 0.5·σ0           0.29  0.47  0.63  0.75  0.88  0.95  0.98  0.99  1.00
  δ3L = 0                0.03  0.03  0.03  0.03  0.03  0.03  0.03  0.03  0.03
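The simulation loop can be sketched as below. The paper draws samples from the cross-section KDE objects; a normal distribution is substituted here as a stand-in, so the rates only roughly approximate Tables 2, 3 and 4.

```python
import random

def rejection_rate(n, delta, sigma0=1.6, u95=1.645, iters=20000, seed=1):
    """Monte Carlo estimate of the rejecting rate eta_0^L: draw samples
    of size n with mean mu_0^L + delta, and count how often the sample
    mean exceeds the lower critical bound xi_k^L (Eq. 3).
    A normal distribution stands in for the paper's KDE objects."""
    rng = random.Random(seed)
    mu_l = 0.0 + 3 * sigma0                  # LTL = 0 after the transformation
    bound = u95 * sigma0 / n ** 0.5 + mu_l   # Eq. (3)
    hits = 0
    for _ in range(iters):
        sample_mean = sum(rng.gauss(mu_l + delta, sigma0) for _ in range(n)) / n
        if sample_mean > bound:
            hits += 1
    return hits / iters

r1 = rejection_rate(5, 1.6)   # delta = sigma0: high rejecting rate
r3 = rejection_rate(5, 0.0)   # delta = 0: rate close to alpha = 0.05
```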




4 Discussion of Results

There are four important categories of measurements regarding the simulation results of the data mean μS location. The first category G (good parts) corresponds to the intersection of the two subsets {μS ≥ μ0L + σ0} ∩ {μS ≤ μ0U − σ0}. The second category T (transitional parts) includes the subsets {μ0L < μS < μ0L + σ0} and {μ0U − σ0 < μS < μ0U}. The third category S (suspected parts) belongs to {μS ≤ μ0L} and {μS ≥ μ0U}, which is equivalent (according to the terms in Sect. 2.2) to the fourth category F (fail parts) of subsets {μS < LTL} and {μS > UTL}. Obviously, all the boundaries are fuzzy, but they help to clarify the simulation results. For illustrational purposes, we presume a uniform distribution U(0, 30) of the manufacturing process over a long time period. The tolerance interval can be illustrated as in Fig. 4. Thus, the content of each region (S, T, G respectively) can be easily evaluated:

  ∫₀^(3σ0) (1/30) du = 0.16,    (5)

  ∫_(3σ0)^(4σ0) (1/30) du = 0.05,    (6)

  ∫_(μ0L+σ0)^(μ0U−σ0) (1/30) du = 0.57    (7)

Fig. 4. The dimension tolerance interval based on the uniform distribution assumption
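The region contents 0.16, 0.05 and 0.57 follow directly from the uniform U(0, 30) assumption with σ0 = 1.6 µm; a quick check:

```python
def region_contents(sigma0=1.6, tol_width=30.0):
    """Contents of the S, T and G regions (Eqs. 5-7) under the
    uniform(0, 30) assumption; S and T are one-sided, G spans the
    middle of the tolerance interval."""
    s = 3 * sigma0 / tol_width                 # Eq. (5): [0, 3*sigma0]
    t = (4 * sigma0 - 3 * sigma0) / tol_width  # Eq. (6): [3*sigma0, 4*sigma0]
    g = (tol_width - 8 * sigma0) / tol_width   # Eq. (7): middle region
    return s, t, g

s, t, g = region_contents()
# 2*s + g gives the ~89% of cases referred to in Sect. 5
```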




A correct decision about the data mean location μS for the S category can be made with at least 95% probability (the accepting rate 1 − η0L, δ3L = 0) even with the five-observation sample size; moreover, the decision is independent of the sample size. Similarly, for the G category, the correct conclusion about a low deviation of the mean value can be made with at least 95% probability (the rejecting rate η0L, δ1L = σ0) for the ten-observation sample size. According to Tables 2, 3 and 4 (δ2L = 0.5·σ0), for the transitional category T a large sample of over 40 observations might be required to confirm the compliance of the size with the tolerance limits with 95% confidence level. Nevertheless, the T regions amount to only 10%, according to (6), of the whole uniform distribution (Fig. 4). Even with the 10-observation sample (corresponding to 20 measuring points for the two-point diameter), we are still able to make the right decision in about 45% of the cases, without exceeding the 5% error probability.

5 Conclusion

In this paper, we have investigated the number of required points for measurement of the two-point diameter, and performed a case study on a part with 60 mm diameter and H7 tolerance. The reliability of the method strongly depends on the proper choice of the six-sigma range. The method reduces the tolerance interval size (acceptance interval) to some degree, but this is compensated by a significant reduction of the sample size, by more than a factor of four relative to [8] (for approximately 89% of cases, estimated with (5) and (7) under the assumption of a uniform distribution of the process, Fig. 4). Such a reduction of the sample strategy will be in high demand for online inspection on CNC machines, where a suspected part can be corrected immediately.

References

1. De Chiffre, L., Hansen, H.N., Andreasen, J.L., Savio, E., Carmignato, S.: Geometrical Metrology and Machine Testing. DTU Mechanical Engineering (2015)
2. Yau, H.-T., Menq, C.-H.: An automated dimensional inspection environment for manufactured parts using coordinate measuring machines. Int. J. Prod. Res. 31(11), 1517–1536 (1992)
3. Bernard, C.J., Chiu, S.-D.: Form tolerance-based measurement points determination with CMM. J. Intell. Manuf. 13(2), 101–108 (2002)
4. Cappetti, N., Naddeo, A., Villecco, F.: Fuzzy approach to measures correction on coordinate measuring machines: the case of hole-diameter verification. Measurement 93, 41–47 (2016)
5. Cui, C., Fu, S., Huang, F.: Research on the uncertainties from different form error evaluation methods by CMM sampling. Int. J. Adv. Manuf. Technol. 43, 136–145 (2008)
6. Barbato, G., Barini, E.M., Pedone, P., Romano, D., Vicario, G.: Sampling point sequential determination by kriging for tolerance verification with CMM. In: Proceedings of the 9th Biennial ASME Conference on Engineering Systems Design and Analysis, ESDA 2008, p. 10. ASME, Israel (2008)
7. Ruffa, S., Panciani, G.D., Ricci, F., Vicario, G.: Assessing measurement uncertainty in CMM measurements: comparison of different approaches. Int. J. Metrol. Qual. Eng. 4, 163–168 (2013)
8. Chelishchev, P., Popov, A., Sørby, K.: Robust estimation of optimal sample size for CMM measurements with statistical tolerance limits. In: The 2nd International Conference on Mechanical, System and Control Engineering, ICMSC 2018, MATEC Web of Conferences, p. 5 (2018)
9. ISO 14405-1: Geometrical product specifications (GPS)—Dimensional tolerancing—Part 1: Linear sizes (2016)
10. Raghunandan, R., Rao, V.P.: Selection of sampling points for accurate evaluation of flatness error using coordinate measuring machine. J. Mater. Process. Tech. 202(1), 240–245 (2008)
11. Alexandre, B.T.: Introduction to Nonparametric Estimation. Springer, New York (2009)
12. Ronald, E.W., Raymond, H.M., Sharon, L.M., Ye, K.: Probability & Statistics for Engineers & Scientists, 8th edn. Pearson, London (2007)
13. ISO 286-1: Geometrical product specifications (GPS)—ISO code system for tolerances on linear sizes—Part 1: Basis of tolerances, deviations and fits (2010)

Paper Currency Sorting Equipment Based on Rotary Structure

Lizheng Pan, Dashuai Zhu, Shigang She, Jing Ding, and Zeming Yin

School of Mechanical Engineering, Changzhou University, Changzhou 213164, China [email protected]

Abstract. Aiming at the defects of traditional paper currency sorting equipment, such as jamming of paper currency, high manufacturing cost, large size, and inconvenient maintenance, a rotary structure is proposed for the design of paper currency sorting equipment. The proposed design method not only effectively simplifies the structural design but also reduces the transmission distance of paper currency, which avoids the defects of traditional equipment. Meanwhile, multi-information fusion sensing technology is employed in the control system design, which makes each module work smoothly and collaboratively. The experimental results indicate that equipment based on the proposed design method performs well.

Keywords: Rotary structure · Paper currency · Sorting equipment · Short transmission distance · Multi-information fusion

1 Introduction

With the flourishing development of our country's economy, money transactions are widely needed in urban bus companies, trade, banks and other industries. However, the poor automation level of cash processing services increases the pressure on staff and affects economic gain, so it is necessary to design dedicated paper currency sorting equipment. Researchers have carried out many experimental studies on banknote sorting equipment, and there are many related research results. Sargano et al. [1] designed an intelligent paper currency recognition system with robust features from Pakistani banknotes and a three-layer feed-forward backpropagation neural network (BPN) for intelligent classification; the results indicated that the designed system worked with good recognition ability. In Ref. [2], a product architecture design method was employed to guide the design and production of paper currency sorting equipment with two objectives: minimization of total supply chain costs, and maximization of the total supply chain compatibility index. In addition, many other recognition algorithms have been used for paper currency sorting, such as an attention detection system [3], fully agents [4], projection characteristics matching [5], distinctive point extraction and recognition [6], Gaussian mixture models based on structural risk minimization [7], and so on. Recently, many contributions have focused on the recognition

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 322–327, 2019.



algorithm, and few on the system mechanism design. However, due to the disadvantages of high price, complicated structure and frequent jamming of paper currency, it is necessary to design new paper currency sorting equipment with low cost, an optimized structure, and easy separation of paper currency. In order to effectively overcome the defects of traditional paper currency sorting equipment, such as jamming of paper currency, high manufacturing cost, large size and inconvenient maintenance, this paper presents new paper currency sorting equipment with a rotary structure, which effectively simplifies the mechanism design and reduces the transmission distance of paper currency.

2 Mechanical Structure Design

As far as paper currency sorting equipment is concerned, the paper currency identification technology is at present quite mature, but the system structure design needs to be improved. The defects of traditional paper currency sorting equipment, such as jamming of paper currency, high manufacturing cost, large size and inconvenient maintenance, are mainly caused by the complex system structure and the long transmission distance of paper currency. Therefore, a new system structure that effectively simplifies the mechanism design and reduces the transmission distance of paper currency helps to avoid the disadvantages of traditional equipment.

2.1 System Structure

In order to reduce the paper currency transmission distance and improve sorting efficiency, a rotating mechanism design method is employed. The proposed revolving sorting design breaks with the previous design concept in which the paper currency is the moving object: the movement of a rotating sorting collection plate replaces the transmission of the currency, which effectively simplifies the mechanism design and shortens the transmission distance of the currency. The mechanical structure of the paper currency sorting equipment based on a rotary structure consists of four parts: the paper currency transmission mechanism, the rotary sorting mechanism, the mechanism for taking out paper currency, and the rack assembly.

2.2 Design of Main Mechanism

2.2.1 Paper Currency Transmission Mechanism

The paper currency transmission mechanism twists the notes out one by one for the next sorting step; during sorting, this twisting is repeated until the last note has been processed. To improve the wear resistance of the rubber ring during twisting, the diameter of the twisting wheel is increased and a ring groove is machined into its outer circle. The designed paper currency transmission mechanism is shown in Fig. 1. In Fig. 1, the numbers denote: 1. Board for picking up paper currency; 2. Banknote identification module; 3. Feeding banknote board; 4. Receiving station sensor; 5. Twisting wheel; 6. Feeding wheel; 7. Banknote pressing wheel; 8. Stationary barrier; 9. White light source; 10. Bobbin tape; 11. Pulley for twisting banknote; 12. Pulley for motor; 13. Pulley for feeding banknote.

L. Pan et al.

Fig. 1. 3D design drawing of paper currency transmission mechanism

2.2.2 Rotary Sorting Mechanism

The rotary sorting mechanism collects the sorted notes in the corresponding note boxes. Considering the Chinese currency denominations in circulation, and to reduce the note transmission distance, the sorting plate is designed as a six-prism rotary structure with a fixed angle, which twists and sorts banknotes synchronously. The transmission mechanism adopts a modular design that sorts and dispenses banknotes continuously, which avoids jamming. The note take-out mechanism, designed so that sorted bills can be removed conveniently, uses an auxiliary guide that allows the platform to shift smoothly and improves overall system maintainability. The main rack is made of plexiglass, which offers good overall mechanical properties. The designed rotary sorting mechanism and the complete rotary paper currency sorting equipment are shown in Figs. 2 and 3, respectively. In Fig. 2, the numbers denote: 1. Sorting tray; 2. Motor cabinet; 3. Coupling; 4. Motor; 5. Ball thrust bearing; 6. Ring flange; 7. Infrared transmitter.

Paper Currency Sorting Equipment Based on Rotary Structure

Fig. 2. The rotary sorting mechanism


Fig. 3. The paper currency sorting equipment

3 Control System Design

The control system is divided into three parts: the main control module, the paper currency recognition module, and the sensor detection module. An STC-series microcontroller serves as the main control module; it detects sensor signals, controls the motors, displays information, and scans key presses. Two TCS320 chips form the paper currency recognition module, which detects the colour of the RMB watermark and portrait; a colour-space conversion from RGB to HSV then determines the banknote value from the converted hue and brightness. The sensor detection module uses multi-sensor information fusion to ensure control effectiveness and accuracy. The control system workflow is shown in Fig. 4.

Fig. 4. Control system workflow diagram

4 Experiments and Results

According to Sects. 2.2.1 and 2.2.2, the paper currency sorting equipment was built; a photograph of the real product is shown in Fig. 3. To verify the practicality and effectiveness of the equipment, five RMB denominations were used in the experiments: 5, 10, 20, 50, and 100 Yuan. Over multiple experiments, the colour tone (hue) feature values of the different denominations were summarized as Table 1. As Table 1 shows, the hue feature values differ greatly between denominations, so the colour algorithm is an effective classification method for RMB.

Table 1. The colour tone feature value of different banknote values

Banknote value (Yuan)   5    10   20   50   100
H (colour tone)         280  240  12   140  300

To verify the effectiveness of the designed paper currency sorting equipment, 100 pieces of each denomination (500 pieces in total) were mixed together, and 100 pieces were selected at random in each run. The average identification results of the five banknote-sorting experiments are shown in Table 2.

Table 2. Average identification results

Banknote value (Yuan)   5       10      20      50      100
Recognition rate (%)    100.00  100.00  100.00  100.00  100.00

From Table 2, the banknote sorting equipment based on the rotary structure design and control system design achieves 100% recognition accuracy. The experimental results show that the proposed rotary paper currency sorting equipment performs well.
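The hue-based classification described above can be sketched in a few lines. The sketch below (not the authors' implementation) converts an RGB reading to a hue angle with Python's standard `colorsys` and picks the denomination whose Table 1 hue is nearest on the hue circle; the RGB inputs are made-up examples, not measured TCS320 sensor data:

```python
import colorsys

# Reference hue (H, degrees) per denomination, taken from Table 1
REFERENCE_HUES = {5: 280, 10: 240, 20: 12, 50: 140, 100: 300}

def hue_degrees(r, g, b):
    """Convert an RGB reading (0-255 per channel) to a hue angle in degrees."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

def classify_banknote(r, g, b):
    """Return the denomination whose reference hue is closest on the hue circle."""
    h = hue_degrees(r, g, b)
    def dist(denom):
        d = abs(h - REFERENCE_HUES[denom]) % 360.0
        return min(d, 360.0 - d)   # wrap-around distance on the 0-360 circle
    return min(REFERENCE_HUES, key=dist)
```

For example, a reading whose hue lands near 240 degrees would be classified as a 10-Yuan note.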

5 Conclusion

To reduce the disadvantages of traditional currency sorting equipment, a rotary structure design method is adopted, in which the movement of a rotating sorting collection plate replaces the transmission of the banknotes. The proposed design method effectively simplifies the mechanism and reduces the transmission distance of the paper currency. The designed control system keeps each module working smoothly and collaboratively. The experimental results verify the effectiveness of the paper currency sorting equipment based on the rotary structure.

Acknowledgments. This research was financially supported by the National Natural Science Foundation of China (61773078), the Open Foundation of the Remote Measurement and Control Key Lab of Jiangsu Province (YCCK201303), and the Industrial Technology Project Foundation of the Changzhou Government (CE20175040).


Precision Analysis of the Underwater Laser Scanning System to Measure Benthic Organisms

Pingping He, Xu Zhang, Jinbo Li, Liangliang Xie, and Dawei Tu
Department of Mechanical Engineering and Automation, Shanghai University, Shanghai, China
[email protected], [email protected]

Abstract. In order to realize high-speed, large-scale three-dimensional measurement of benthic organisms, an underwater laser scanning system is designed. First, a mathematical model of underwater imaging is established, expressing light propagation, light refraction, and the transformation between pixels and rays. Then, the precision of the underwater laser scanning system is analyzed: the effects of the camera lens, focal length, and baseline distance on the precision of 3D reconstruction are studied. Finally, the system is calibrated and its precision is verified with standard targets whose distances are known in advance. A precision-evaluation experiment with a ball bar shows that the RMS error of the system reaches 0.87 mm at depths between 2 m and 3 m.

Keywords: Three-dimensional measurement · Underwater laser scanning system · 3D reconstruction

1 Introduction

The three-dimensional accurate measurement and geomorphological survey of underwater organisms are of great significance and application value in the exploration of submarine resources, marine development, underwater detection, and underwater counter-terrorism. Traditional deep-sea biological investigation, however, cannot meet the data needs of current deep-sea research. Although deep-sea submersibles are equipped with high-definition underwater cameras [1], the investigation of deep-sea biology remains at the qualitative stage of "viewing" without "measuring". Therefore, using underwater line structured light [2] to measure organisms accurately in the ocean environment is an important development direction for underwater measurement technology [3].

The underwater laser scanning system sits in a sealed housing with glass windows. The laser beam propagates in air, passes through the glass surface, and illuminates the measured object in the water. The light reflected from the biological surface passes back through the glass into the camera, which is in air. Multi-layer refraction occurs as the light crosses the different media, so the measurement equations, system characteristics, and system parameters are completely different from those in air [4]; conventional structured-light techniques [5] produce errors and are difficult to apply directly.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 328–336, 2019.

The stereo vision method proposed by Abdo [6] can measure sessile benthic organisms: a dedicated stereoscopic digital still camera and a 3D reconstruction software program combine multiple views into a biological model. The scanning procedure is complicated and slow; although its reconstruction accuracy is high, it cannot meet the requirements of high speed and wide range. Xie [7] proposed an underwater line structured-light self-scanning three-dimensional measuring technique in which an algorithmic compensation principle corrects the errors caused by light refraction. Although it compensates for refraction across different media, its precision is limited because a detailed mathematical model is not available. In this paper, three-dimensional measurement is achieved by the triangulation principle, and high-accuracy correspondences are obtained by active laser scanning [8]. Through theoretical modelling, the spatial three-dimensional coordinate expression of the underwater triangulation ranging system is obtained, the influence of the main system parameters on measurement accuracy is analyzed, and a laser high-speed line-scanning 3D deep-sea biological measurement system is researched and designed.

2 The Model of the Underwater Laser Scanning System

The working principle and overall structure of the system are shown in Fig. 1. The laser 1 generates a line structured light that passes via the mirror 2 and the window 5 and projects onto the surface of the measured object, forming a deformed light stripe. The diffuse light reflected from the stripe passes through the windows 4 and 6 and is captured by the two-dimensional cameras 3 and 7. The two cameras acquire two-dimensional images of the light stripe modulated by the object surface, and the three-dimensional coordinates of the measured biological surface are calculated using the light-field-based multi-layer refractive imaging model [9] together with underwater laser fringe matching and three-dimensional reconstruction. Rotating the mirror 2 sweeps the light plane over the entire object, so its complete three-dimensional data can be obtained. In Fig. 1(a), the focal length of the left and right cameras is f, their baseline distance is d, and both principal points are q0. The distance between the optical centre and the glass is g, and the thickness of the glass is t. The refractive indices of air, glass, and seawater are n0, n1, n2, respectively. The coordinates of a space point in the left camera coordinate system are (x, y, z), and the pixel coordinates of its image points in the left and right cameras are ql and qr. The model is shown in Fig. 1(b). A straight line is represented by a four-dimensional light field (u, v, s, t). Light is affected by propagation distance, refraction, reflection, etc.; each effect can be expressed as a transformation of the light field.



Fig. 1. System schematic (a) and system vision model (b)

The two light-field coordinate systems are completely parallel, separated only by a movement distance d along the optical axis:

L' = T(d)\,L, \qquad T(d) = \begin{bmatrix} 1 & 0 & d & 0 \\ 0 & 1 & 0 & d \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (1)


Refraction conforms to Snell's law. With refractive indices n_1 and n_2 for the incident and refracting media, incidence angle \theta_1 and refraction angle \theta_2:

n_1 \sin\theta_1 = n_2 \sin\theta_2, \qquad \sin\theta_1 = \frac{\sqrt{s^2+t^2}}{\sqrt{1+s^2+t^2}}, \qquad \sin\theta_2 = \frac{\sqrt{s'^2+t'^2}}{\sqrt{1+s'^2+t'^2}}   (2)

Because the refracted ray remains in the plane spanned by the incident ray and the surface normal, the following constraint is satisfied:

s' t = s t'   (3)

The direction of the refracted light is then:

s' = \frac{s}{\sqrt{s^2+t^2}}\sqrt{s'^2+t'^2}, \qquad t' = \frac{t}{\sqrt{s^2+t^2}}\sqrt{s'^2+t'^2}   (4)


The above refraction process is represented by Eq. (5):

[u', v', s', t']^{T} = Re(n_1, n_2)\,[u, v, s, t]^{T}   (5)

A pixel of the camera determines a ray, which can be visualized as the straight line through the optical centre and the image point. The left camera image point is q_l = [x_1, y_1], the right camera image point is q_r = [x_2, y_2], and the principal point is q_0 = [x_0, y_0]. The coordinate unit here is mm.



The light from the left camera point:

L_{q_l} = \left[\, 0, \; 0, \; \frac{x_0 - x_1}{f}, \; \frac{y_0 - y_1}{f} \,\right]^{T}   (6)

The light from the right camera point:

L_{q_r} = \left[\, d, \; 0, \; \frac{x_0 - x_2}{f}, \; \frac{y_0 - y_2}{f} \,\right]^{T}   (7)



Taking the left camera light as an example, the ray passes through the "air-glass-seawater" interfaces and eventually reaches an object point at unknown depth. The pixel light travels as in Eq. (8):

L_{object} = T(depth) \cdot Re(n_1, n_2) \cdot T(t) \cdot Re(n_0, n_1) \cdot T(g) \cdot L_{q_l}   (8)

The right camera is treated in the same way:

L_{object} = T(depth) \cdot Re(n_1, n_2) \cdot T(t) \cdot Re(n_0, n_1) \cdot T(g) \cdot L_{q_r}   (9)

When Eqs. (8) and (9) refer to the same object point, the two rays intersect at the same depth, which gives the constraint:

depth = \frac{u_1 - u_2}{s_2 - s_1} = \frac{v_1 - v_2}{t_2 - t_1}   (10)

where (u_1, v_1, s_1, t_1) and (u_2, v_2, s_2, t_2) are the outgoing rays from the glass into the seawater.
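The chain of operators in Eqs. (1)-(10) can be expressed compactly in code. The sketch below (Python/NumPy, not the authors' implementation; the function names are ours) handles rays refracted at planes perpendicular to the optical axis, as in the model above:

```python
import numpy as np

def translate(ray, d):
    """Eq. (1): propagate a light-field ray (u, v, s, t) a distance d along the axis."""
    u, v, s, t = ray
    return np.array([u + d * s, v + d * t, s, t])

def refract(ray, n1, n2):
    """Eqs. (2)-(4): refract the ray direction at a plane normal to the optical axis."""
    u, v, s, t = ray
    r2 = s ** 2 + t ** 2
    if r2 == 0:
        return np.array([u, v, s, t], dtype=float)   # normal incidence: unchanged
    sin1 = np.sqrt(r2) / np.sqrt(1 + r2)
    sin2 = n1 / n2 * sin1                            # Snell's law, Eq. (2)
    m = np.sqrt(sin2 ** 2 / (1 - sin2 ** 2))         # magnitude of (s', t')
    k = m / np.sqrt(r2)                              # (s', t') parallel to (s, t), Eq. (3)
    return np.array([u, v, k * s, k * t])

def pixel_ray(x, y, principal_point, f, u0=0.0):
    """Eqs. (6)-(7): the ray defined by image point (x, y) of a camera at offset u0."""
    x0, y0 = principal_point
    return np.array([u0, 0.0, (x0 - x) / f, (y0 - y) / f])

def depth_from_pair(ray_l, ray_r):
    """Eq. (10): depth at which two outgoing rays intersect."""
    u1, v1, s1, t1 = ray_l
    u2, v2, s2, t2 = ray_r
    return (u1 - u2) / (s2 - s1)
```

A left/right pixel pair would be converted with `pixel_ray`, pushed through T(g), Re(n0, n1), T(t), Re(n1, n2) for each camera, and then intersected with `depth_from_pair`.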

3 The System Design and Precision Analysis

The parameters related to system measurement accuracy are the baseline distance, the focal length, and the image pixel detection accuracy. The following mainly analyzes the effects of the baseline distance and the pixel detection accuracy on the instrument's ranging accuracy, in order to determine the system parameters.

3.1 Lens and Focal Length

The distance between the measuring system and the measured object is 3 m, and the scanning field of view is 2 m × 2 m. At 3 m, the fields of view of 4 mm, 5 mm, and 6 mm lenses are 3.2 m × 2.4 m, 2.7 m × 2.1 m, and 2.4 m × 1.8 m, respectively. The system therefore uses a lens with a focal length of 5 mm.

3.2 Camera Baseline Distance

To meet the accuracy requirements of the system, precision analysis experiments were performed for different baseline distances. With a feature detection accuracy of 0.5 pixels, the precision analysis was carried out at baseline distances of 800 mm, 900 mm, 1000 mm, 1200 mm, and 1500 mm; the results are shown in Table 1. The measurement error clearly decreases as the baseline distance increases.

Table 1. Baseline distance influence on measurement accuracy at 3 m (0.5-pixel parallax matching accuracy)

Baseline distance (mm)   800     900     1000    1200    1500
X maximum error (mm)     1.3375  1.2064  1.0979  0.9203  0.6648
Y maximum error (mm)     0.6938  0.6252  0.5681  0.4772  0.3502
Z maximum error (mm)     2.8947  2.5760  2.3189  1.9304  1.5447
Maximum error (mm)       3.1153  2.8043  2.5524  2.1409  1.6620
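The trend in Table 1 matches the standard stereo triangulation error model dz ≈ z² · δd / (f · b). The sketch below uses this in-air approximation (it ignores the multi-layer refraction modelled above, so it is illustrative only); the pixel focal length of 1451.5 is taken from the calibration in Sect. 4:

```python
# Simplified in-air stereo depth-error model: dz ≈ z^2 · δd / (f · b)
# z_mm: working distance; baseline_mm: baseline b; f_px: focal length in
# pixels (1451.5 px, from the intrinsic calibration); disparity_err_px: δd.
def depth_error(z_mm, baseline_mm, f_px=1451.5, disparity_err_px=0.5):
    return z_mm ** 2 * disparity_err_px / (f_px * baseline_mm)

# Predicted depth uncertainty (mm) at 3 m for the baselines of Table 1
errors = {b: depth_error(3000.0, b) for b in (800, 900, 1000, 1200, 1500)}
```

This predicts roughly 3.9 mm at b = 800 mm down to about 2.1 mm at b = 1500 mm, the same order of magnitude and the same monotonic trend as the simulated maximum errors in Table 1.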

Figure 2 shows the accuracy test chart for a baseline distance of 1200 mm.

Fig. 2. Three-dimensional measurement point XYZ error (a) and distance measurement error (b) at a baseline distance of 1200 mm

4 Experiments

The underwater laser scanning system was developed. It consists of a 1000 nw laser, two cameras (resolution 2048 × 1536) with 5 mm lenses, a stepper motor, a highly reflective mirror, and an industrial computer. To verify the stability and reconstruction accuracy of the proposed method, system calibration (including accuracy verification) and a three-dimensional measurement experiment were carried out.




4.1 System Calibration

System calibration is divided into air calibration and underwater calibration. The calibration board used in air is a 14 × 11 concentric-circle pattern; the board used underwater is a checkerboard with 7 × 11 squares of size 35 mm. For the calibration in air, the two cameras are calibrated with an iterative calibration method [9]: more than 15 pictures are taken from different poses to estimate the intrinsics of the two cameras. The left and right camera calibration results are shown in Table 2.

Table 2. Calibration result of intrinsic parameters

                 Left camera              Right camera
Focal length     [1451.5687, 1451.5107]   [1453.9694, 1453.8842]
Principal point  [1028.2296, 777.0740]    [1028.5065, 787.3090]
Pixel error      0.02533                  0.02493

Next, the relative position and orientation of the left and right cameras are determined by stereo calibration of the system. The pose relationship of the two cameras is given by the rotation R and translation T:

R = \begin{bmatrix} 0.9051 & 0.0018 & 0.4151 \\ 0.0083 & 0.9997 & 0.0220 \\ 0.4251 & 0.0234 & 0.9049 \end{bmatrix}, \qquad T = [\,1148.6938, \; 10.7468, \; 258.0317\,]

For the underwater parameter calibration, the multi-layer refraction model is used [10]. The refractive index of water is μ_w = 1.333 and that of air is μ_a = 1.000; the calibration results for the left and right cameras are shown in Table 3.

Table 3. Calibration result of multi-layer flat refractive geometry

           Left camera                  Right camera
Distance   7.0876                       3.0000
Normal     [−0.0024, 0.0081, 0.9999]    [0.0082, −0.0101, 0.9999]

After all parameters have been calibrated, the accuracy must be verified. In this paper, we take two pairs of underwater checkerboard images and test them with the calibrated parameters. The error distributions in the x and y directions of the small squares in one pair of images are shown in Fig. 3(a, b). Figure 3(c, d) shows the error distribution of the small-square edges and of the two diagonals.



Fig. 3. Test results


4.2 The Precision Test with the Standard Ball Bar

In the experiment, the system was mounted above a pool with a depth of 3.5 m, a length of 3 m, and a width of 3 m; the depth measured by the system was 3 m. The sealed system is completely immersed in the water and mounted on the basin with the measuring window facing downwards. A ball bar with an accuracy better than 0.05 mm was placed at fixed positions in the pool; its two balls have a diameter of 150.13 mm and a centre distance of 497.72 mm. The ten positions of the target balls are shown in Fig. 4.

Fig. 4. Target ball positions

As Fig. 5 shows, the RMS errors of the centre distance, ball 1, and ball 2 are 0.87 mm, 0.53 mm, and 0.74 mm, respectively. The ball-diameter error is largest at the tenth position (maximum 1.172 mm), because there the image lies at the edge of the field of view, where distortion is greatest. The centre-distance error is largest at the eighth position (maximum 2.418 mm), because the ball bar sits at the edge of the depth of field at the farthest distance, so the object occupies the fewest pixels in the image. The results indicate that the system achieves high measurement accuracy at a distance of 3 m under water.
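The RMS figures above can be reproduced from the fitted measurements in the obvious way. In the sketch below, only the 497.72 mm nominal centre distance comes from the paper; the ten "measured" values are made-up placeholders standing in for the fitted centre distances at the ten ball-bar positions:

```python
import math

NOMINAL_CENTER_DISTANCE = 497.72  # mm, ball-bar ground truth from the paper

def rms_error(measured, nominal):
    """Root-mean-square deviation of a list of measurements from the nominal value."""
    return math.sqrt(sum((m - nominal) ** 2 for m in measured) / len(measured))

# Hypothetical fitted centre distances (mm) at the ten ball-bar positions
measured = [497.8, 497.1, 498.2, 497.5, 497.9, 496.9, 498.0, 500.1, 497.6, 498.3]
rms = rms_error(measured, NOMINAL_CENTER_DISTANCE)
```

With these placeholder values the RMS comes out to about 0.86 mm, comparable to the 0.87 mm reported; note how the single outlier at the eighth position dominates the result.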



Fig. 5. Experimental results

5 Conclusion

In this paper, an underwater laser scanning system is designed and studied. A mathematical model of underwater imaging is established that accounts for the propagation of light, the refraction of light, and the transformation between pixels and rays. The effects of the camera lens, focal length, and baseline distance on three-dimensional reconstruction precision are analyzed. The system parameters are then calculated, and a checkerboard is used to verify the precision of the calibrated parameters. Through three-dimensional reconstruction of a standard target ball bar of known precision in the experimental pool, the ball diameters and the centre distance are fitted and the ball-bar error is calculated. The experimental results show that at depths between 2 m and 3 m the RMS error of the system reaches 0.87 mm, and the system can realize high-speed, large-scale measurement of underwater organisms.

Acknowledgment. This research was partially supported by the National Nature Science Foundation of China (Grant No. 51575332 and No. 61673252) and the key research project of the Ministry of Science and Technology (Grant No. 2016YFC0302401).

References

1. Pan, Y., Liu, Y., Ye, S., Fu, X.: Design of underwater camera controlling system based on MAX16802B. Comput. Meas. Control 32(2), 116–118 (2012)
2. Wei, Z.: Study on Underwater 3D Structured Light Measurement and Its Application in Vision Guidance and Positioning. Ocean University of China (2015)
3. Yu, S.C.: Development of real-time acoustic image recognition system using by autonomous marine vehicle. Ocean Eng. 35(1), 90–105 (2008)
4. Li, C., Zhang, X., Tu, D.: Deflectometry measurement method of single-camera monitoring. Acta Optica Sinica 37(10), 1012007 (2017)
5. Zhuang, L., Zhang, X., Zhou, W.: A coarse-to-fine matching method in the line laser scanning system. In: International Workshop of Advanced Manufacturing and Automation, pp. 19–33. Springer, Singapore (2017)
6. Abdo, D.A., Seager, J.W., Harvey, E.S., McDonald, J.I., Kendrick, G.A., Shortis, M.R.: Efficiently measuring complex sessile epibenthic organisms using a novel photogrammetric technique. J. Exp. Mar. Biol. Ecol. 339(1), 120–133 (2006)
7. Xie, Z., Li, X., Xin, S., et al.: Underwater line structured-light self-scan three-dimension measuring technology. Chin. Laser 37(8), 2010–2014 (2010)
8. Zhang, X., Fei, K., Tu, D.: Modeling and analysis of synchronized scanning triangulation. Optoelectron. Laser 26(2), 295–302 (2015)
9. Datta, A., Kim, J.S., et al.: Accurate camera calibration using iterative refinement of control points. In: Computer Vision Workshops (ICCV Workshops), pp. 285–299. IEEE (2009)
10. Agrawal, A., Ramalingam, S., Taguchi, Y.: A theory of multi-layer flat refractive geometry. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 3346–3353 (2012)

Recognition Algorithm Based on Convolution Neural Network for the Mechanical Parts

Duan Suolin, Yin Congcong, and Liu Maomao
School of Mechanical Engineering, Changzhou University, Changzhou 213164, China
[email protected]

Abstract. Traditional mechanical-part identification algorithms require relevant features to be designed and extracted by hand, which makes the process complex and computationally expensive, and their identification accuracy is easily affected by the diversity of part morphologies. To address this, a mechanical-part identification algorithm based on a convolutional neural network is proposed in this paper. The Leaky ReLU function is used as the activation function, the pooling method is improved, and an SVM classifier is incorporated to construct a convolutional neural network, WorkNet-2, for the recognition of mechanical parts. In recognition experiments on four common kinds of mechanical parts, the trained WorkNet-2 network reached an accuracy of 97.82% on the test set. The experimental results show that, compared with traditional part-recognition algorithms, this algorithm extracts high-level features of the target parts, is little affected by the diversity of part shapes, achieves a higher recognition rate, and offers good real-time performance.

Keywords: Identification of the parts · Extraction of feature · Convolutional Neural Network · Pooling method

1 Introduction

The recognition of mechanical parts is an important application of computer vision technology in the field of industrial robots [1]. At present, common vision-based methods for machine-part recognition include the SIFT (Scale Invariant Feature Transform) algorithm, the SURF (Speeded Up Robust Features) algorithm, geometric invariant moments combined with a BP neural network, template matching, and others [2–5]. These traditional algorithms require complex preprocessing, their feature extraction is involved, and they are greatly affected by the diversity of part morphology [6]. For this reason, convolutional neural network (CNN) theory is introduced into the recognition of mechanical parts.

A convolutional neural network is a type of deep learning model that autonomously learns high-level image features during training [7]. Compared with the hand-designed features of traditional algorithms, classifying images with learned high-level features gives strong generalization ability and high recognition efficiency [8].

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 337–347, 2019.

In recent years, CNNs have been widely used in image recognition and detection and have achieved fruitful results. The literature [9] applies CNNs to the recognition of automobile models, greatly improving the efficiency of traffic-flow monitoring. The literature [10] introduces a CNN for traffic-sign recognition to improve the accuracy of sign identification in driverless systems. The literature [11] uses a CNN to fuse colour and depth data for visual recognition, improving target-recognition accuracy. The literature [12] uses a CNN to identify chess pieces for an industrial sorting robot, achieving accurate positioning and recognition of the pieces on a conveyor belt. However, there is relatively little research on applying CNNs to the identification of mechanical parts. Therefore, to address the shortcomings of traditional part-identification algorithms, the Leaky ReLU function is used as the activation function, a median-value pooling method is adopted on the basis of the traditional CNN, and an SVM (Support Vector Machine) classifier is combined with the network to construct a convolutional neural network for the recognition of mechanical parts. Experiments show that this method requires no complicated preprocessing, has good real-time performance, is little affected by the diversity of part morphology, and achieves a high recognition rate for both simple and complex parts.

2 Convolutional Neural Network (CNN)

A CNN is an efficient recognition method developed in recent years that has attracted widespread attention. It is a deep neural network with a convolutional structure, shown schematically in Fig. 1.

Fig. 1. Schematic diagram of convolutional neural network structure (input layer, convolution layer, pooling layer, fully connected layer, output layer)


2.1 Convolution Layer

The convolution layer is the feature-extraction layer of a CNN. A learnable convolution kernel convolves the input feature maps from the previous layer to extract local features, and the feature map of the current layer is then obtained through the activation function. A CNN usually has multiple convolutional layers. The first convolutional layer generally extracts low-level features such as edges and lines; the higher the layer, the sparser the extracted features and the lower their interpretability, but the stronger their expressive power. Because this convolutional structure can extract high-level features of the image, a CNN generalizes very well. The convolution computation of the convolution layer is:

x_j^l = f\left( \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l \right)   (1)

where x_j^l is the feature map of the j-th output channel of convolutional layer l; f(·) is an activation function (common choices are the Sigmoid, tanh, and ReLU functions); x_i^{l-1} is the feature map of the i-th channel output by the previous layer; M_j is the set of input channels connected to output channel j; k_{ij}^l is the convolution kernel connecting the i-th input channel to the j-th output channel; and b_j^l is the bias of the j-th output channel feature map.
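Eq. (1) can be sketched directly in NumPy. The minimal sketch below (not the WorkNet-2 implementation) computes one output channel as a "valid" cross-correlation, the usual deep-learning convention for convolution, followed by an activation:

```python
import numpy as np

def conv_layer_forward(inputs, kernels, bias, activation):
    """Eq. (1): one output map x_j = f(sum_i x_i * k_ij + b_j), 'valid' windows."""
    C, H, W = inputs.shape          # input channels, height, width
    _, kh, kw = kernels.shape       # one (kh, kw) kernel per input channel
    out = np.zeros((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # sum over input channels i of the windowed product with k_ij
            out[r, c] = np.sum(inputs[:, r:r + kh, c:c + kw] * kernels)
    return activation(out + bias)

relu = lambda z: np.maximum(z, 0.0)
```

For a 1-channel 3 × 3 input of ones and a 2 × 2 kernel of ones, every output entry of the resulting 2 × 2 map is 4.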

2.2 Pooling Layer

The pooling layer is generally placed after a convolution layer and pools the input feature maps. Pooling reduces the feature dimensions, and thus the computational complexity of the network. It also reduces the resolution of the feature maps, so that the extracted features gain a degree of invariance to translation, scaling, and other transformations. The pooling operation is illustrated in Fig. 2.

Fig. 2. Schematic diagram of the pooling operation process

The output of the pooling layer is calculated as:

x_j^l = f\left( \beta_j^l \, \mathrm{down}(x_j^{l-1}) + b_j^l \right)   (2)

where x_j^l is the feature map of the j-th output channel of pooling layer l; β_j^l is a weight coefficient, related to the combination in the next convolutional layer; down(·) is a downsampling function, commonly maximum downsampling, mean downsampling, or max-two-mean sampling; and b_j^l is the bias of the j-th channel feature map.
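Eq. (2) with non-overlapping 2 × 2 windows can be sketched as follows (an illustrative sketch, not the paper's code, with scalar β and b per channel and the identity as f):

```python
import numpy as np

def pool_layer_forward(x, beta=1.0, bias=0.0, size=2, mode="mean"):
    """Eq. (2): beta * down(x) + b over non-overlapping size x size windows."""
    H, W = x.shape
    h, w = H // size, W // size
    # group pixels into (h, size, w, size) windows, then reduce each window
    windows = x[:h * size, :w * size].reshape(h, size, w, size)
    down = windows.mean(axis=(1, 3)) if mode == "mean" else windows.max(axis=(1, 3))
    return beta * down + bias
```

Note how mean downsampling of a window holding [8, 0, 0, 0] yields 2, i.e. a/4 for a = 8, which is exactly the feature-weakening effect discussed for Fig. 3 below.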




2.3 Fully Connected Layer

After the last convolution or pooling layer, one or more fully connected layers form a fully connected network. All feature maps from the last convolutional or pooling layer are flattened into a one-dimensional feature vector that serves as the input to the fully connected network, which classifies the input features and outputs the result through the output layer. Neurons in the fully connected layer are computed in the same way as in a traditional neural network.
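The flattening-plus-classification step can be sketched as follows (a softmax output layer is assumed for concreteness; the weight shapes are illustrative, not taken from WorkNet-2):

```python
import numpy as np

def fully_connected_forward(feature_maps, W, b):
    """Flatten all final feature maps into a 1-D vector, then one dense layer."""
    v = np.concatenate([fm.ravel() for fm in feature_maps])  # 1-D feature vector
    logits = W @ v + b
    e = np.exp(logits - logits.max())                        # numerically stable softmax
    return e / e.sum()
```

With zero weights and bias, every class receives equal probability; non-zero weights shift the probability mass toward the class with the largest logit.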

3 Recognition Model of Mechanical Parts Based on Convolution Neural Network The existing CNN models are mostly based on common data sets, such as Mnist handwritten sample sets, Caltech 101 data sets, VOC data sets, etc. These data sets do not include mechanical parts commonly used in industrial production, and the image background, size, are also significantly different from those of actual mechanical parts. Therefore, these existing network models cannot be directly applied to the identification of mechanical parts. In order for CNN to be used in the identification of mechanical parts, it is necessary to improve the depth, convolution kernel size, activation function and classifier of traditional CNN. 3.1

3.1 Selection of Activation Functions

The activation function in a neural network introduces nonlinear factors and enhances the network's expressive power. Common activation functions in traditional CNNs include the sigmoid function, the tanh function, and the ReLU function. The most popular is ReLU: it has no saturation region, converges quickly, and is simple to compute. However, when a very large gradient flows through a ReLU neuron, the neuron may no longer be activated by any data after the parameter update [13]. The probability of this phenomenon is related to the learning rate: the greater the learning rate, the greater the probability of occurrence. To improve the training efficiency and effectiveness of the CNN, the Leaky ReLU function was chosen as the activation function [14]. Its expression is:

$$f(x) = \begin{cases} x, & x \ge 0 \\ ax, & x < 0 \end{cases}$$

where $a$ is a small constant that preserves some negative-axis information; it is set to 0.05.
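A direct implementation of the Leaky ReLU expression, with the paper's value a = 0.05 as the default:

```python
import numpy as np

def leaky_relu(x, a=0.05):
    """Leaky ReLU: x for x >= 0, a*x for x < 0 (a = 0.05 as in the text)."""
    return np.where(x >= 0, x, a * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # [-0.1  0.   3. ]
```

Unlike plain ReLU, negative inputs keep a small nonzero gradient a, so a neuron cannot become permanently inactive.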

3.2 Improvement of Pooling Method

Recognition Algorithm Based on Convolution Neural Network

The pooling methods commonly used in the pooling layer of a traditional CNN are average pooling and maximum pooling. Average pooling takes the average of the data in the pooling window as the pooling result; maximum pooling takes the maximum value in the window. Two common data distributions in a 2 × 2 pooling window are shown in Fig. 3.











Fig. 3. Schematic diagram of data distribution in pooling window

In Fig. 3a, the largest of the four pixel values in the pooling window is a and the rest are 0. With average pooling, the result is a/4, which significantly weakens the original features in the window. In Fig. 3b, there are four different pixel values in the pooling window. With maximum pooling, the result is a1, which also weakens the original features in the window. Therefore, when the pixel values in the pooling window are unknown, applying average pooling or maximum pooling directly may weaken the features and reduce the final recognition rate. A currently popular pooling method is the spatial pyramid pooling proposed in [15], which mainly solves the problem that a traditional CNN must receive a fixed-size input image in object detection. In the application scenario studied in this paper, the camera and the worktable holding the parts are both fixed, so the input image size does not vary. Therefore, to improve the efficiency of the algorithm, an improved pooling method combining the advantages of average pooling and maximum pooling is proposed. It is calculated as follows:

$$S_{ij} = T/2 + b \tag{4}$$

$$T = \frac{1}{c^2}\sum_{i=1}^{c}\sum_{j=1}^{c} F_{ij} + \frac{1}{2}\,\mathrm{sumM2}(F_{ij}) \tag{5}$$

In Eqs. (4) and (5), $S_{ij}$ is the final pooling result; $F_{ij}$ is the value of each element in the pooling domain; $b$ is a bias, which can be set to zero in many cases; $\mathrm{sumM2}(\cdot)$ denotes the sum of the two largest values in the pooling domain; and $T$ is the sum of the average pooling result and the max-two-mean pooling result. With these improvements, both the mean and the mean of the two largest values in the pooling domain contribute to the pooling result, so that more satisfactory results are obtained in the two cases shown in Fig. 3.
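One possible reading of Eqs. (4) and (5) — averaging the mean-pooling result with the mean of the two largest window values — can be sketched as follows. The interpretation of sumM2 as "sum of the two largest values" is an assumption.

```python
import numpy as np

def improved_pool(window, b=0.0):
    """S = T/2 + b, where T = mean(window) + mean of the two largest values.

    Midpoint of average pooling and 'max-two mean' pooling, following one
    reading of Eqs. (4)-(5) (an assumption, not the authors' exact code).
    """
    flat = np.sort(window.ravel())
    avg = flat.mean()           # average-pooling result
    max2 = flat[-2:].mean()     # mean of the two largest values
    return (avg + max2) / 2.0 + b

# Fig. 3a case: one large value a, the rest zero
a = 8.0
w1 = np.array([[a, 0.0], [0.0, 0.0]])
print(improved_pool(w1))  # (a/4 + a/2)/2 = 3a/8 = 3.0
```

For the single-spike window the result 3a/8 sits between the average-pooling output a/4 and the maximum a, so the dominant feature is weakened less than by averaging alone.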




3.3 Selection of Classifier

The common classifiers include the softmax classifier, the SVM classifier, and others. The classification layer in a traditional CNN uses the softmax classifier. In the softmax classifier, a large probability for a certain class means an accurate classification result has been obtained; however, the classifier will still continue to reduce the loss function until the probability of that class is close to 1 [16]. Such training is clearly inefficient and wastes computing resources. The SVM classifier only processes incorrectly classified samples and performs no further computation on samples that are already classified correctly. This improves both the training speed and the generalization ability of the model [17]. Therefore, the SVM classifier is used in the machine-part recognition network model.
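The contrast drawn here between the two classifiers can be illustrated with toy loss computations. This is a sketch, not the paper's training code, and the score values are made up: once the correct class wins by the margin, the multiclass hinge (SVM) loss is exactly zero, while cross-entropy still produces a loss to drive further updates.

```python
import numpy as np

def softmax_ce(scores, y):
    """Cross-entropy loss: stays positive even for confident correct scores."""
    e = np.exp(scores - scores.max())
    return -np.log(e[y] / e.sum())

def multiclass_hinge(scores, y, margin=1.0):
    """SVM hinge loss: exactly zero once the correct class wins by >= margin."""
    loss = np.maximum(0.0, scores - scores[y] + margin)
    loss[y] = 0.0
    return loss.sum()

scores = np.array([5.0, 1.0, 0.5])  # class 0 clearly wins
print(multiclass_hinge(scores, 0))  # 0.0 -- no further gradient
print(softmax_ce(scores, 0) > 0)    # True -- still produces loss
```

This zero-loss region for well-classified samples is why the SVM head skips correctly classified samples during training.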

3.4 Design of Network Structure

Currently, many convolutional neural network architectures are popular, mainly including LeNet-5, AlexNet, VGGNet, and GoogLeNet. Among them, AlexNet, VGGNet, and GoogLeNet are large-scale deep convolutional neural networks with very high image recognition accuracy. However, their structures are quite complex, and training is not only time consuming but also requires huge amounts of sample data. LeNet-5 is a streamlined convolutional neural network for handwritten digit recognition that achieves a recognition accuracy of over 98% on the MNIST data set. Although it is not as powerful as the large-scale deep networks, LeNet-5 can easily outperform traditional identification methods with a small amount of resources. The LeNet-5 network has seven layers in total: convolutional layers C1, C3, and C5, pooling layers S2 and S4, a fully connected layer F6, and an output layer. In this paper, the LeNet-5 network is chosen as the research object, to be improved for workpiece recognition in an industrial environment. After many simulation experiments, it was found that the following modifications yield better training results: delete the C5 layer and connect the pooling layer S4 directly to the fully connected layer; reduce the number of convolution kernels in the C3 layer to 12 and modify the connections between the C3 and S2 layers (the specific connection scheme is shown in Fig. 4); reduce the number of fully connected neurons to 32; and change the output layer size to 4. The modified network structure is named WorkNet-1.
Based on the WorkNet-1 network, the sigmoid function is replaced by the Leaky ReLU activation function; the original pooling method is replaced by the improved pooling method described in Sect. 3.2; and the Dropout regularization method is used to avoid over-fitting during training. The network with these improvements is named WorkNet-2. Because the network structure itself is unchanged, the parameters and connections in each layer of WorkNet-2 are identical to those of WorkNet-1; see Table 1 for the specific values.



Fig. 4. The connection between the C3 layer and the S2 layer

Table 1. Layer parameters in the WorkNet network

Layer | Size and number of feature maps | Size of convolution kernel | Training parameters | Number of connections
C1    | 96 × 96 × 6                     | 5 × 5                      | 156                 | 1437696
S2    | 48 × 48 × 6                     | 2 × 2                      | 12                  | 69120
C3    | 44 × 44 × 12                    | 5 × 5                      | 1062                | 2056032
S4    | 22 × 22 × 12                    | 2 × 2                      | 24                  | 29040
F5    | 1 × 1 × 32                      | 1 × 1                      | 185888              | 185888

4 Experiments and Results Analysis

4.1 Data Preparation

A training set of 2,400 images and a test set of 800 images were established for four common mechanical parts: gears, bearings, bolts, and nuts, with 600 images of each part in the training set and 200 of each in the test set. To ensure the validity of the data set, images were captured continuously while moving and rotating the camera, yielding samples at different positions and orientations. After collection, the samples were batch labeled and uniformly resized to 100 × 100 pixels. Some sample pictures of the parts are shown in Fig. 5.

4.2 Training Analysis

The error curve recorded during WorkNet-1 training is shown in Fig. 6. By the 70th iteration, the error rate is below 5%. After 200 training iterations, the recognition accuracy of WorkNet-1 on the test set reached 96.35%. The error curve recorded during WorkNet-2 training is shown in Fig. 7. The figure shows that the error curve of WorkNet-2 converges faster than that of WorkNet-1. The trained WorkNet-2 network has an accuracy of 97.82% on the test set,



Fig. 5. Part of the sample pictures

Fig. 6. Training error curve of WorkNet-1 network

and its recognition rate is 1.5% higher than that of WorkNet-1. Because WorkNet-2 uses the Dropout regularization method during training, training efficiency is improved: 200 iterations over the training set take about 19 h, saving 1.5 h compared with the 20.5 h needed by WorkNet-1. Through this series of improvements, the final WorkNet-2 network model achieves better recognition of mechanical parts.

4.3 Visual Analysis of Features

Because the feature maps extracted by the various layers of the WorkNet-2 network are numerous and of inconsistent sizes, only some of them are displayed in this



Fig. 7. Training error curve of WorkNet-2 network

paper, and their sizes are uniformly adjusted. Figure 8 shows some of the feature maps of the four part images at the C1 and C3 layers, where columns 2–4 are feature maps extracted at the C1 layer and columns 5–7 are feature maps extracted at the C3 layer. The figure shows that the C1 layer extracts foreground and background information from the image. The feature map in column 2 is clearly the image with its background subtracted, leaving only the workpiece information. The feature maps in columns 3 and 4 represent the foreground and background of the image, respectively. Evidently, the C1 layer extracts relatively low-level features. Traces of the low-level features can still be seen in the high-level feature maps of columns 5–7, which are more abstract and harder to describe. In particular, the feature map in column 5 resembles three-dimensional feature information obtained under virtual light projection.

Fig. 8. Feature visualization



The features extracted by a convolutional neural network generally progress layer by layer from simple to complex, from low level to high level, and from concrete to abstract. The simple analysis above shows that the feature maps extracted by the WorkNet-2 network follow this progressive relationship. Through this layer-by-layer combination, the network aggregates the extracted features into the fully connected layer, the classification layer then classifies these rich features, and accurate classification results are finally obtained.

5 Conclusion

Traditional mechanical-part identification algorithms require relevant features to be designed and extracted manually; the process is complex and easily affected by the diversity of part morphology. To address this, a workpiece recognition method based on a convolutional neural network is proposed. On the basis of the LeNet-5 network, a new network, WorkNet-2, was constructed by improving the activation function, the pooling method, and the network structure, and Dropout-related techniques were used during training to avoid overfitting. Finally, features of some parts were extracted by the trained WorkNet-2 network and analyzed visually. Experiments show that the trained WorkNet-2 network achieves an accuracy of 97.82% on the test set and can recognize mechanical parts well.

Acknowledgments. This study was funded by the project of the Jiangsu science and technology plan (BEK2013671) and by grants from the Jiangsu Province University Academic Degree Graduates Scientific Research and Innovation Plan (KYLX16_0630).

References

1. Zhao, P.: Machine Vision Theory and Application, pp. 28–29. Electronic Industry Press (2011)
2. Yang, D., Zhang, Z.: Research on the method of workpiece location and recognition using image segmentation. Small Microcomput. Syst. 37(9), 2070–2073 (2016)
3. Liu, P., Shen, Y., Yao, F., et al.: Region-based moving object detection using HU moments. In: IEEE International Conference on Information and Automation, pp. 1590–1593. IEEE (2015)
4. Cao, J., Yuan, A., Yu, L.: A part recognition algorithm based on SURF feature. Comput. Appl. Softw. 32(1), 186–189 (2015)
5. He, X., Xie, Q., Xu, H.: Part recognition system based on LabVIEW and BP neural network. Instrum. Technol. Sens. (1), 119–122 (2017)
6. Chang, L., Deng, X., Zhou, M., et al.: Convolution neural network in image understanding. Autom. J. 42(9), 1300–1312 (2016)
7. Dong, J., Jin, L., Zhou, F.: Summary of convolution neural network research. Acta Comput. Sci. 40(6), 1229–1251 (2017)
8. Li, T., Li, X., Ye, M.: Summary of research on target detection based on convolution neural network. Comput. Appl. Res. 34(10), 2881–2886 (2017)
9. Deng, L., Wang, Z.J.: Vehicle recognition based on deep convolution neural network. Comput. Appl. Res. 33(3), 930–932 (2016)



10. Gold, J., Liu, W., Wang, X.: Traffic sign recognition based on optimized convolution neural network. Comput. Appl. 37(2), 530–534 (2017)
11. Hua, K.L., Hsiao, Y.S., Sanchez-Riera, J., et al.: A comparative study of data fusion for RGB-D based visual recognition. Pattern Recognit. Lett. 73(10), 1–6 (2016)
12. Huang, G., Sun, L., Wu, X.: Fast vision recognition and localization algorithm for industrial sorting robot based on deep learning. Robot 38(6), 711–719 (2016)
13. Chen, T., Wang, N., Xu, B., et al.: Empirical evaluation of rectified activations in convolutional network. Comput. Sci. 209–213 (2015)
14. He, K., Ren, S., Zhang, X., et al.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: IEEE International Conference on Computer Vision, pp. 1026–1034 (2015)
15. He, K., Ren, S., Zhang, X., et al.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2015)
16. Jiang, S.: Image Recognition based on Convolution Neural Network, pp. 14–17. Jilin University (2017)
17. Dong, L., Li, P., Xiao, H., et al.: A cloud image detection method based on SVM vector machine. Neurocomputing 169, 34–42 (2015)

The Research of Three-Dimensional Morphology Recovery of Image Sequence Based on Focusing Method

Qian Zhan

School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
[email protected]

Abstract. In view of the limited depth of field of industrial microlenses and the need for a sequence of images to capture in-focus views of a microscopic sample, this paper studies a non-contact method for measuring 3D surface morphology based on depth of field. A series of images at different focusing depths is taken with an industrial microlens, and a new 3D shape recovery algorithm is proposed. The 3D surface of the object is recovered from the image sequence by combining image preprocessing, a focusing evaluation function, and image post-processing. The Laplacian function, which has high robustness and high precision, is used as the test function to calculate the three-dimensional depth information, and a Gaussian filter is used to optimize the depth map. Experimental results on solder balls on printed circuit boards show that the proposed algorithm can effectively restore the 3D surface morphology of objects with small errors.

Keywords: Image processing · Focusing evaluation function · Focusing morphology recovery

1 Introduction

3D surface morphology measurement has wide application prospects in automatic processing, product quality inspection, physical imitation, machine vision, art sculpture, and other fields, and has become a hot research topic at home and abroad. Non-contact optical measurement is widely applied because it is non-destructive and fast. The main focus-based algorithms derive from the Nayar method [1]. Darrell and Wohn first proposed using a Laplacian–Gaussian pyramid to find the frame with the largest region of high-frequency energy, thereby recovering the morphology [2]. The research group led by S.K. Nayar systematically studied focus-based depth recovery technology [3] and provided a basic research direction for later shape-from-focus work. Many scholars have improved the peak-search algorithm of the focus curve, analyzed the influence of the uncertainty of the focus peak on the recovery of morphology information [4, 5], adopted adaptive reconstruction to restore 3D morphology [6], and studied the
© Springer Nature Singapore Pte Ltd. 2019. K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 348–355, 2019.



influence of quadric-surface correction by least-squares fitting [7] to improve the precision of morphology recovery. In 2005, Arimitsu Yokota et al. used an FPGA to process depth-interpolation data and finally achieved real-time morphology recovery [8]. Ohba and Jesus designed a fully focused microscope in which the imaging system parameters are changed by piezoelectric ceramics to achieve relatively high frequency response and accuracy [9–11]. Based on an analysis of the experimental objects and environment, a new 3D measurement and restoration method is proposed in this paper: it starts with image preprocessing and post-processing of the depth information, uses a focusing function with good focusing performance as the calculation function, and applies a Gaussian filtering algorithm. A 3D image with less noise is thus obtained.

2 The Principle and Method of 3D Morphology Recovery

2.1 Microimaging Process

The focal plane of a microscope is limited and satisfies the thin-lens imaging formula 1/u + 1/v = 1/f, where u is the object distance, v the image distance, and f the lens focal length. For an industrial microscopic system, the image distance v is fixed, so the corresponding object distance u is determined. Therefore, no matter how the lens moves along the Z axis, only the parts within the depth of field form a clear image on the sensing plane of the image acquisition device; parts outside the depth of field produce only a blurred image on the CCD image plane. The images obtained by the industrial microscope and the acquisition equipment are thus in focus only within the depth of field.
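The thin-lens relation can be rearranged to give the in-focus object distance for a fixed image distance. The numbers below are hypothetical, not the actual parameters of the system described here.

```python
def object_distance(f, v):
    """Solve 1/u + 1/v = 1/f for the object distance u (same units as f and v)."""
    return 1.0 / (1.0 / f - 1.0 / v)

# Hypothetical values: focal length 10 mm, fixed image distance 50 mm
print(object_distance(10.0, 50.0))  # 12.5 (mm)
```

With v fixed, u is uniquely determined, which is exactly why only one plane of the sample is sharp in any single frame.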

2.2 The Principle of 3D Morphology Recovery

The basic idea of Depth From Focus (DFF) is as follows. First, by adjusting the Z-axis position of the lens, a sequence of images of the microscopic sample is obtained, with the whole sequence covering the full Z-axis range. Then the depth information is recovered accurately by focusing analysis, so that 3D reconstruction and measurement can be done from the two-dimensional image sequence. As shown in Fig. 1, a total of K images, numbered 1 to K, are taken over the time interval 0–T. For any point (x, y), the degree of focus in each image is computed with the focusing evaluation operator, and the image in which the point is most sharply focused determines the height value at that point.
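The per-pixel argmax over the stack of focus measures can be sketched as follows. This is a toy illustration with random focus values; the list of lens heights is hypothetical.

```python
import numpy as np

def depth_from_focus(stack, z_heights):
    """stack: (K, H, W) focus-measure values; pick the best-focus frame per pixel."""
    idx = np.argmax(stack, axis=0)     # (H, W) index of the sharpest frame
    return np.asarray(z_heights)[idx]  # map frame index to lens height

K, H, W = 4, 2, 2
rng = np.random.default_rng(0)
stack = rng.random((K, H, W))          # stand-in for real focus measures
z = depth_from_focus(stack, [0.0, 0.1, 0.2, 0.3])
print(z.shape)  # (2, 2)
```

The result is the height matrix described in Sect. 3: each pixel carries the Z position at which it was sharpest.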



Fig. 1. Principle diagram of focusing morphology

3 3D Shape Recovery Algorithm Based on Improved Image Processing Technology

The surface of a 3D morphology recovered by basic DFF contains considerable noise, more in the low-frequency regions than in the high-frequency regions, and the resulting model is not suitable for visual inspection. An improved 3D shape recovery algorithm is therefore proposed in this paper. After the desired image sequence is collected, image preprocessing is carried out first. Then a focus-sharpness analysis is performed on each image in the sequence, the clarity at each position is evaluated, and the results are recorded for the next step. Next, the variation of the sharpness value across the images at each coordinate position is analyzed, the focus height at that position is calculated, and the focus heights of all coordinates are collected into a height matrix of the surface. Finally, combined with the height information of the object, the 3D surface is reconstructed, and an accurate 3D shape is obtained through image post-processing.

3.1 Image Preprocessing Algorithm

Due to the restrictions of the experimental environment and hardware, the captured images are not of high quality: the contrast is weak and there is considerable noise. For this reason, some preprocessing is necessary before the depth information of the image is calculated.

3.2 Selection of the Focusing Evaluation Function

For human vision, judging whether an image is in focus mainly means judging whether the observed image is clear, and the same holds for a computer. Because a focused image is clearer than a defocused one, it contains more information and detail and therefore more high-frequency components. There is thus a strong relationship between focus and the high-frequency content of the image, and an appropriate focusing evaluation function can be selected on this principle. In this paper, the mean square deviation function and the Laplacian function, which have good robustness and accuracy, are used as reference evaluation functions.




3.3 Analysis of the Focusing Evaluation Function Curves

To analyze the focusing performance, single-peak property, and interference resistance of the focusing evaluation functions above, their focusing characteristic curves are shown in Figs. 2 and 3.

Fig. 2. Mean square deviation function



Fig. 3. Laplacian function

As can be seen from the figures, the two functions have good single-peak performance, good unbiasedness, and high sensitivity, and are suitable as focusing evaluation functions. To verify their focusing performance further, the two focusing characteristic curves are smoothed and filtered. The smoothed focusing characteristic curves are shown in Figs. 4 and 5.

Fig. 4. Mean square deviation after smoothing

Fig. 5. Laplacian after smoothing

These graphs show that the smoothed focusing characteristic curves still maintain good single-peak performance, unbiasedness, strong anti-interference ability, and high sensitivity. Therefore, this paper selects the mean square deviation function and the Laplacian function as the follow-up focusing evaluation test



functions, which serve as the test functions for height calculation in the 3D morphology computation.

3.4 The Post-processing Algorithm for Image Depth Information

After the image sequence is preprocessed, the surface contour is obtained by using the selected focusing evaluation function to calculate the depth information of the object surface. However, the resulting depth map still contains noise that hinders observation of the surface profile, so post-processing must be applied to obtain a more accurate depth map and a clearer morphology.

4 Experiment and Analysis of 3D Morphologies

To recover the 3D shape of the objects above, a four-axis motion platform was designed as the motion control part. The lens used for image acquisition is an industrial zoom microlens mounted on the Z axis of the platform; its depth of field is smaller than that of a general industrial lens, making it suitable for surface measurement of small objects. The designed device is shown in Fig. 6. At the beginning, the object in the lens field is manually brought to the defocused state. The lens is then driven in fixed steps from far to near; the starting height is 95 mm and the final height 90 mm. The step distance along Z is

Fig. 6. Experimental platform device for 3D topography measurement

set to 100 µm, and one frame is collected at each step, yielding a series of photographs that change from blurred to clear to blurred, 50 photos in total. The size of the calculation window was 10 × 10. The maximum-value method is used for the 3D morphology recovery. The experimental object is the solder ball on the circuit board.

4.1 Recovery of the 3D Shape of Solder Balls on Printed Circuit Boards

The focusing evaluation functions with high robustness are used as the test focus evaluation functions. The 3D surface recovery experiment on the solder balls of the PCB is carried out by combining image preprocessing and image post-processing into the focusing shape restoration algorithm proposed in this



Fig. 7. Pictures of some collected samples

paper. Some of the collected sample pictures, which show a trend from blurred to clear to blurred, are given in Fig. 7. The proposed algorithm and the traditional 3D shape recovery algorithm were both applied to the 50 collected frames. The simulation results obtained in MATLAB are shown in the following figures; the vertical axis is the relative depth normalized after simulation, because the corresponding optical system parameters are not available.

Fig. 8. 3D recovery results of mean square deviation function

Fig. 9. 3D recovery results of Laplacian function



Figure 8a–d show the results of recovering the 3D shape of the solder ball on the PCB surface using the mean square deviation function as the test evaluation function. Figure 9a–d show the corresponding results using the Laplacian function.

Table 1. Quantitative performance analysis (mean square error)

Method                         | Our method | Traditional method | Single image preprocessing | Single image postprocessing
Mean square deviation function | 0.331      | 2.551              | 0.997                      | 0.772
Laplacian function             | –          | –                  | –                          | –

4.2 Analysis of the Results of 3D Morphologies

To analyze the experimental results, the mean square error between each point on the recovered surface and the actual object surface is selected to characterize the measured 3D surface morphology. The mean square error data obtained in MATLAB are listed in Table 1, which shows that the mean square error of the algorithm proposed in this paper is lower than the others, and that the Laplacian function performs better.
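The mean-square-error criterion used in Table 1 is straightforward to compute; the arrays below are illustrative only.

```python
import numpy as np

def surface_mse(measured, reference):
    """Mean square error between a recovered depth map and a reference surface."""
    return float(np.mean((np.asarray(measured) - np.asarray(reference)) ** 2))

ref = np.zeros((3, 3))      # stand-in reference surface
meas = ref + 0.1            # recovered surface with a uniform 0.1 offset
print(round(surface_mse(meas, ref), 6))  # 0.01
```

A uniform error of 0.1 everywhere gives an MSE of 0.01, i.e. the squared per-point error averaged over the surface.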

5 Conclusion

In this paper, based on sequence images with different focus settings and on focus-based morphology recovery technology, we propose a new algorithm that combines image preprocessing and post-processing to restore the 3D morphology of solder balls on a PCB surface. First, an appropriate focusing evaluation function is selected as the focusing function for the depth information. Then, through measurement and analysis of solder balls on PCB surfaces, which are widely used in industry, several algorithms are compared. The experimental results show that the proposed algorithm has high performance and reliability and recovers the surface morphology completely, which is significant for subsequent surface quality inspection and analysis. Further improvements to the accuracy and efficiency of the algorithm are planned.

Acknowledgement. This research was partially supported by the key research project of the Ministry of Science and Technology (Grant No. 2017YFB1301503) and the National Natural Science Foundation of China (Grant No. 51575332).



References

1. Nayar, S.K.: Shape from focus system for rough surface. In: Proceedings of the Image Understanding Workshop, pp. 539–606 (1992)
2. Darrell, T., Wohn, K.: Pyramid based depth from focus. In: Proceedings of the Computer Vision and Pattern Recognition, pp. 504–509 (1988)
3. Nayar, S.K., Nakagawa, Y.: Shape from focus. IEEE Trans. Pattern Anal. Mach. Intell. 16, 824–831 (1994)
4. Xiong, Y., Shafer, S.A.: Depth from focusing and defocusing. In: Proceedings of the Computer Vision and Pattern Recognition, pp. 68–73 (1993)
5. Xiong, Y., Shafer, S.: Depth from focusing and defocusing. Technical Report CMU-RI-TR-93-07, The Robotics Institute, Carnegie Mellon University, pp. 1–28 (1993)
6. Helmli, F.S., Scherer, S.: Adaptive shape from focus with an error estimation in light microscopy. In: Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis, Pula, Croatia, pp. 188–193 (2001)
7. Blahusch, G., Eckstein, W., Steger, C.: Calibration of curvature of field for depth from focus. In: ISPRS Archives, vol. XXXIV, Part 3/W8, pp. 173–177. Munich, 17–19 September 2003
8. Yokota, A., Yoshida, T., Kashiyama, H., et al.: High-speed sensing system for depth estimation based on depth-from-focus. In: IEEE International Symposium on Circuits and Systems, ISCAS, vol. 1, pp. 564–567 (2005)
9. Ohba, K., Ortega, J.C.P., et al.: Implementation of real time micro VR camera. IEEJ Trans. Sens. Micromach. 120(E6), 264–271 (2000)
10. Jesus, C., Ortega, P., Ohba, K., et al.: Real-time VR camera system. In: Proceedings of the Fourth Asian Conference on Computer Vision, vol. 9, pp. 500–513 (2000)
11. Ohba, K., Ortega, J.C.P., Tanie, K., et al.: Microscopic vision system with all-in-focus and depth images. Mach. Vis. Appl. 15, 55–62 (2003)

Research on Motion Planning of Seven Degree of Freedom Manipulator Based on DDPG

Li-lan Liu¹, En-lai Chen¹, Zeng-gui Gao¹, and Yi Wang²

¹ Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, China
[email protected]
² Business School, Plymouth University, Plymouth, UK

Abstract. For the motion control of a seven degree of freedom manipulator, the traditional inverse kinematics solution has many problems, such as demanding modeling skills, difficulty in solving the equation matrix, and a huge amount of calculation. In this paper, reinforcement learning is applied to a seven degree of freedom manipulator. To cope with the large state space and continuous actions in RL, a neural network is used to map the state space to the action space. The action selection network and the action evaluation network are constructed with the Actor-Critic framework, and the action selection policy is learned by RL training based on DDPG. Finally, the effectiveness of the method is tested with a Baxter robot in the Gazebo simulator.

Keywords: Reinforcement learning · DDPG · Actor-Critic · Neural network

1 Introduction

In recent years, manipulators have been widely used in industry. The seven degree of freedom (DoF) manipulator has a humanoid structure and can achieve more complex movements and accomplish more difficult tasks such as grasping and sorting [1]. Tracking and moving safely is a key technology for manipulators. Common methods for motion control include probabilistic road maps, rapidly exploring random trees, artificial potential fields, genetic algorithms, neural networks, and so on [2–4]. Document [5] uses particle swarm optimization to optimize the joint angles of a manipulator. In document [6], a Gaussian distribution is applied in the rapidly exploring random tree algorithm to control motion. Document [2] uses an ant prediction algorithm to plan moving paths in complex obstacle environments. These methods usually sample a variety of kinematic configurations of the manipulator and check against the kinematics equations whether each meets the requirements. Another class of methods establishes a kinematics model or even a dynamics model and then solves for the joint angles by inverse kinematics. If the goal or an obstacle moves, these methods must recalculate; as the number of DoF increases, the amount of computation becomes huge, so real-time dynamic control is difficult to achieve. In addition, these methods usually rely on global information about the environment, a precise physical model, and prior knowledge, so in actual tasks they lack generalization and are sensitive to the physical model.
© Springer Nature Singapore Pte Ltd. 2019. K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 356–367, 2019.

Research on Motion Planning of Seven Degree


Reinforcement learning (RL) is a method that learns a mapping from the state of the environment to the action space and obtains the optimal action policy when the state transition probability is unknown. Compared with other methods, RL does not rely on an environment model or any prior knowledge [7]. RL studies an agent that chooses actions in an environment according to an action policy; through continuous interaction with the environment it obtains rewards and achieves the expected goal by maximizing the long-term reward [8]. The action policy outputs the next action according to the current state, which requires little computation [4]. In this paper, RL based on the Deep Deterministic Policy Gradient (DDPG) is studied and used to deal with the continuous real-time control of a manipulator. To enable RL to deal with continuous actions, one neural network is used to construct the policy network for action selection, and another neural network approximates the value network for action evaluation. The research object is the Baxter dual-arm seven-DoF robot; the joint states and the goal position are used as the state. The action policy is trained within the Actor-Critic framework, with experience replay and parameter normalization added to promote convergence. To improve stability, each neural network is duplicated: one copy updates its parameters in real time, while the other keeps its parameters fixed for a while to stabilize the gradient computation and is updated after a certain number of rounds. During training, the goal position is constantly changed, so that the algorithm can learn more situations and increase its robustness. Finally, the effectiveness is tested with Baxter in the Gazebo simulator.

2 Algorithm

Almost all RL is based on the Markov Decision Process (MDP). An MDP contains five elements (S, A, P, R, γ), where S is the state set of the agent and the environment, with a state denoted s ∈ S; A is the action set, with an action denoted a ∈ A; P is the system model, that is, the state transition probability, where P(s′|s, a) is the probability that state s transitions to state s′ under action a; R is the reward function; and γ is a discount factor, 0 ≤ γ ≤ 1 [4]. An MDP has the Markov property: the next state depends only on the current state, which greatly reduces the complexity of the problem. The objective function of RL usually takes two forms. The first is the value function, defined as the expected discounted cumulative reward:

$$V^{\pi}(s) = E_{\pi}\left[ r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots \mid s_t = s \right]$$


The second is the action-value function, also known as the Q function, which gives the expected discounted cumulative reward after taking action a under policy π in state s:

$$Q^{\pi}(s, a) = E_{\pi}\left[ r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots \mid s_t = s, a_t = a \right]$$
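As a concrete numeric illustration (not from the paper), the discounted sum inside both expectations can be computed from a sample reward sequence; the rewards and γ below are made-up example values.

```python
# Hedged illustration of the discounted return underlying V(s) and Q(s, a).
# The reward sequence and gamma are made-up example values.
def discounted_return(rewards, gamma):
    """Return sum of gamma**k * r_{t+k} over a finite trajectory."""
    total = 0.0
    for k, r in enumerate(rewards):
        total += (gamma ** k) * r
    return total

rewards = [1.0, 0.0, 2.0]   # r_t, r_{t+1}, r_{t+2}
print(discounted_return(rewards, 0.9))  # 1 + 0.9*0 + 0.81*2 = 2.62
```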



L. Liu et al.



The Actor-Critic framework is used to approximate the parts that are difficult to obtain. The action policy structure is called the Actor; the value estimation function is called the Critic [9]. Actor-Critic updates two parameter sets: the Critic updates the parameter w of the action-value function, and the Actor updates the policy parameter θ according to the Critic's evaluation. The Critic evaluates the error by Temporal Difference (TD) [10]:

$$\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$$


where V(·) is the value function. The TD error is used to evaluate the action a_t: if the TD error is positive, the tendency to select a_t is strengthened; otherwise, it is weakened. If p(s, a) denotes the Actor's policy parameter, that is, the action selection tendency, the preference can be changed by updating p(s, a):

$$p(s_t, a_t) \leftarrow p(s_t, a_t) + \beta \delta_t$$

where β is a positive step factor. Actor-Critic is an iterative process that trains the policy network and the Q network through the interaction of the environment, the Actor and the Critic. Only a small amount of computation is needed to select actions, so it can be applied to infinite and continuous action spaces. The structure is shown in Fig. 1.

Fig. 1. Actor-Critic framework
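The Critic's TD update and the Actor's preference update described above can be sketched in a minimal tabular form (illustrative only; the paper uses neural networks, and the step sizes alpha and beta are assumed values):

```python
# Hedged tabular sketch of one Actor-Critic step (the paper uses neural
# networks instead of tables; alpha and beta are assumed step sizes).
def actor_critic_step(V, p, s, a, r, s_next, gamma=0.9, alpha=0.1, beta=0.1):
    delta = r + gamma * V[s_next] - V[s]   # TD error (Critic's evaluation)
    V[s] += alpha * delta                  # Critic: move V(s) toward the target
    p[(s, a)] += beta * delta              # Actor: adjust preference for a
    return delta

V = {0: 0.0, 1: 0.0}
p = {(0, "a"): 0.0}
delta = actor_critic_step(V, p, s=0, a="a", r=1.0, s_next=1)
print(delta, V[0], p[(0, "a")])   # 1.0 0.1 0.1
```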



DDPG, based on the Actor-Critic framework, can be used to solve RL problems with continuous actions. A policy network and a Q network are constructed to approximate the action policy function and the value function respectively. Policy network: a neural network approximates the action policy function; its parameter is θ, and the action at each step is obtained by a_t = μ(s_t). Q network: the Q function is not easy to calculate, so it is approximated by another neural network with parameter w, using the DQN method [11].



Behavior policy β: to balance exploration and exploitation of the action space, noise is added to the action, and the noise is gradually attenuated. If a single neural network is used for each approximation, its parameters keep changing while the gradient is being computed, so training becomes unstable [12]. Therefore, a copy of the policy network and of the Q network is created:

policy network: online μ(s|θ) (gradient update of θ); target μ′(s|θ′) (soft update of θ′)
Q network: online Q(s, a|w) (gradient update of w); target Q′(s, a|w′) (soft update of w′)

The online network and the target network have the same structure. The online network updates its parameters in real time through gradient descent; the target network parameters are updated after a certain number of steps through a soft update:

$$\theta' \leftarrow \tau \theta + (1 - \tau)\theta', \qquad w' \leftarrow \tau w + (1 - \tau)w'$$
where τ generally takes 0.001. The target network parameters change only slightly and are used to compute the gradient of the online network, so the gradient calculation is stable and convergence is easier, at the cost of a slower process.
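The soft update can be sketched as follows; plain lists and τ = 0.001 stand in for the network weight tensors:

```python
# Hedged sketch of the soft update theta' <- tau*theta + (1 - tau)*theta'.
# Plain lists stand in for the network weight tensors.
def soft_update(target, online, tau=0.001):
    return [tau * w + (1.0 - tau) * wt for w, wt in zip(online, target)]

theta_online = [1.0, -2.0]
theta_target = [0.0, 0.0]
theta_target = soft_update(theta_target, theta_online)
print(theta_target)   # [0.001, -0.002]
```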

2.3 Optimization Goal

The error L(w) of the Q network is defined as the mean squared error (MSE); the Monte-Carlo method samples a batch of data in place of the true mean:

$$L(w) = \frac{1}{N}\sum_i \left( y_i - Q(s_i, a_i \mid w) \right)^2$$


where y_i can be regarded as the "label" in supervised learning:

$$y_i = r_i + \gamma\, Q'\left(s_{i+1},\, \mu'(s_{i+1} \mid \theta') \,\middle|\, w'\right)$$


The policy μ is evaluated using the function J_β(μ), called the performance objective:

$$J_{\beta}(\mu) = \int \rho^{\beta}(s)\, Q^{\mu}(s, \mu(s))\, ds = E_{s \sim \rho^{\beta}}\left[ Q^{\mu}(s, \mu(s)) \right]$$



where Q^μ(s, μ(s)) is the Q value generated in every state by the action chosen according to policy μ. That is, J_β(μ) is the expected value of Q^μ(s, μ(s)) when s is distributed according to ρ^β. The optimization objective is to minimize L(w) of the Q network and maximize J_β(μ) at the same time. To minimize L(w), back-propagation (BP) is used. To maximize J_β(μ), the gradient can be computed as in [13], using the Monte-Carlo method:



$$\nabla_{\theta} J_{\beta}(\mu) \approx \frac{1}{N}\sum_i \nabla_a Q(s, a \mid w)\big|_{s=s_i, a=a_i} \cdot \nabla_{\theta}\, \mu(s \mid \theta)\big|_{s=s_i}$$


The transition tuples (s_t, a_t, r_t, s_{t+1}) are stored in a buffer D. Random samples are drawn from D for experience replay, which breaks the correlation between consecutive samples so that the parameters do not overfit the data of any single period. The training procedure is as follows and is shown in Fig. 2.

Fig. 2. Network training procedure
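The replay buffer D with random mini-batch sampling might be sketched as follows; the capacity and batch size here are small illustrative values, not the paper's settings:

```python
# Hedged sketch of the replay buffer D: store (s_t, a_t, r_t, s_{t+1})
# tuples and sample random mini-batches; capacity and batch size are
# small illustrative values, not the paper's 1.5 million and 32.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        self.data = deque(maxlen=capacity)   # oldest tuples are evicted

    def store(self, s, a, r, s_next):
        self.data.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(list(self.data), batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(10):
    buf.store(t, 0.0, 1.0, t + 1)
batch = buf.sample(4)
print(len(batch))   # 4
```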



3 Model Establishment and Experiment

Baxter is a human-robot collaborative robot with two seven-DoF arms driven by series elastic actuators (SEA) [14]. The joints are denoted in order as S0, S1, E0, E1, W0, W1, W2, as shown in Fig. 3.

Fig. 3. Nomenclature of each joint


Kinematic Model

The base coordinate frame of Baxter is located at the center of the trunk chassis. Each joint takes its joint center as the origin, with the rotation axis as the Z axis; the X and Y axes satisfy the right-hand rule. The joint coordinate frames are established according to the reference attitude [14], as shown in Fig. 4.

Fig. 4. Joints coordinate

The kinematic model of Baxter is based on coordinate transformations. The transformation matrix between adjacent coordinate frames is:

$${}^{i}_{i+1}T = R_{\theta}(\theta_i) \cdot T_{xyz}(P_i) \cdot R_z(\alpha_i^z) \cdot R_y(\alpha_i^y) \cdot R_x(\alpha_i^x)$$




where R_a is the rotation matrix around axis a, and T_a is the translation matrix along axis a. The variables P_i, α_i^x, α_i^y, α_i^z are the fixed offsets of the joints, as shown in Table 1. The joint limits are shown in Table 2, where left and right are determined from Baxter's point of view. θ_i is the rotation angle of the joint. Thus, the forward kinematic model of a single arm, from the base frame to the end, can be obtained:

$${}^{0}_{8}T = {}^{0}_{1}T \cdot {}^{1}_{2}T \cdot {}^{2}_{3}T \cdot {}^{3}_{4}T \cdot {}^{4}_{5}T \cdot {}^{5}_{6}T \cdot {}^{6}_{7}T \cdot {}^{7}_{8}T$$
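The chained multiplication of homogeneous transforms can be sketched as below; the two pure translations are illustrative stand-ins, not Baxter's full parameter set:

```python
# Hedged sketch of chaining homogeneous transforms 0_8T = 0_1T * ... * 7_8T.
# 4x4 matrices as nested lists; the two pure translations below are
# illustrative stand-ins, not Baxter's full parameter set.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def chain(transforms):
    T = [[1 if i == j else 0 for j in range(4)] for i in range(4)]  # identity
    for M in transforms:
        T = mat_mul(T, M)
    return T

T = chain([translation(69.0, 0.0, 270.35), translation(102.0, 0.0, 0.0)])
print([row[3] for row in T[:3]])   # end position [171.0, 0.0, 270.35]
```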


Table 1. Joint parameters

Transition  x/mm     y/mm                               z/mm              r/rad   p/rad   g/rad
0→1         64.027   259.027 (left) / −259.027 (right)  129.626           0.0     0.0     π/4 (left) / −π/4 (right)
1→2         69.0     0.0                                270.35            −π/2    0.0     0.0
2→3         102.0    0.0                                0.0               π/2     0.0     π/2
3→4         69.0     0.0                                262.35            0.0     0.0     π/2
4→5         103.6    0.0                                0.0               π/2     −π/2    π/2
5→6         10.0     0.0                                270.69            0.0     0.0     π/2
6→7         116.0    0.0                                0.0               π/2     −π/2    π/2
7→8         0.0      0.0                                158.6 + 112.678   0.0     0.0     0.0

Table 2. Joint limits (rad)

Joint    S0       S1      E0       E1     W0      W1       W2
Minimum  −1.7016  −2.147  −3.0541  −0.05  −3.059  −1.5707  −3.059
Maximum  1.7016   1.047   3.0541   2.618  3.059   2.094    3.059
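A small helper (not part of the paper's method) that clamps a commanded joint vector to the limits of Table 2 might look like:

```python
# Hedged helper (not part of the paper's method): clamp a commanded 7-joint
# configuration to the limits of Table 2, in the order S0, S1, E0, E1, W0, W1, W2.
LOWER = [-1.7016, -2.147, -3.0541, -0.05, -3.059, -1.5707, -3.059]
UPPER = [1.7016, 1.047, 3.0541, 2.618, 3.059, 2.094, 3.059]

def clamp_joints(q):
    return [min(max(qi, lo), hi) for qi, lo, hi in zip(q, LOWER, UPPER)]

q = clamp_joints([2.0, 0.0, -4.0, 1.0, 0.0, 3.0, 0.0])
print(q)   # [1.7016, 0.0, -3.0541, 1.0, 0.0, 2.094, 0.0]
```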


Algorithm Building and Training

Python 3 and TensorFlow are used to build the program. The maximum number of episodes is 8000 with at most 500 steps per episode; the discount factor γ is 0.9; the replay capacity is 1.5 million sets; and each batch contains 32 sets of data. The Actor network contains two hidden layers with 80 and 60 neurons; the Critic network contains two hidden layers with 50 and 40 neurons. To focus on the main working area and reduce computation, the workspace is limited to X from 500 to 1000 mm, Y from −350 to 350 mm, and Z from −200 to 400 mm in the base frame. The joint angles are measured by the robot's own sensors, and the coordinate position of each joint is obtained from the forward kinematic model. During training, Baxter's right arm is controlled to approach the goal position while avoiding the left arm, and the left arm moves randomly as an obstacle. The goal coordinates are given randomly and refreshed after a certain number of episodes, so that the learning results are more robust. The joint angles, the goal position and the relative distance are taken as the state. The state is input into the Actor network, and the next action is



outputted. An action is a vector of seven elements, each of which corresponds to one joint. After an action is executed, a reward is assigned according to the position of the end of the arm. The reward rules are as follows:

$$R(s) = \begin{cases} -k_1 \cdot \lVert finger_{pose} - goal_{pose} \rVert, & \text{at any time in motion} \\ -k_2 \cdot \lVert finger_{pose} - central_{pose} \rVert, & \text{on leaving the workspace} \\ 4, & \text{on arriving at the goal} \\ -2, & \text{when the two arms collide} \end{cases} \tag{14}$$

where finger_pose is the end of the manipulator, goal_pose is the goal, and central_pose is the center of the workspace. k_1 and k_2 are normalization factors that keep the magnitude of the distance-based rewards between 0 and 1. During training, the changes of the neuron parameters are shown in Fig. 5. For the convenience of gradient descent, a variable a_loss is set up that is negatively correlated with J_β(μ) of policy μ; its record is shown in Fig. 6. It can be seen that a_loss decreases with fluctuations, that is, J_β(μ) is increasing.
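The piecewise reward rule can be sketched as follows. The negative signs on the distance terms and on the collision penalty, and the values of k1 and k2, are assumptions where the source formatting is ambiguous:

```python
# Hedged sketch of the reward rule of Eq. (14). The negative signs on the
# distance terms and the -2 collision penalty are assumptions where the
# source formatting is ambiguous; k1 and k2 are assumed normalization values.
def reward(dist_to_goal, dist_to_center, left_workspace, arrived, collided,
           k1=0.001, k2=0.001):
    if collided:
        return -2.0                      # the two arms collided
    if arrived:
        return 4.0                       # end of arm reached the goal
    if left_workspace:
        return -k2 * dist_to_center      # penalize leaving the workspace
    return -k1 * dist_to_goal            # shaping term during motion

print(reward(500.0, 0.0, False, False, False))  # -0.5
print(reward(0.0, 0.0, False, True, False))     # 4.0
```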

Fig. 5. The changes of neuron parameters



Fig. 6. Negative correlation change of cumulative reward expectation

The change of L(w) of the Q network is shown in Fig. 7. It can be seen that the data fluctuate within a low range, possibly affected by the added noise.

Fig. 7. The error change of Q network

The algorithmic structure is shown in Fig. 8.

Fig. 8. Algorithmic structure




Simulation Test

After training, the Actor network is the action policy. The current state, consisting of the joint angles and the goal position, is input into the Actor network, and the next action of the right arm is output by forward propagation. The Gazebo simulator is used for testing. The joint angles of the simulator are obtained through ROS. The goal position is given randomly. The joint angles and goal position are input into the Actor network to obtain the movement control information for the arm. Meanwhile, it is determined whether the end of the arm has reached the goal position. The simulator is shown in Fig. 9.

Fig. 9. Baxter in Gazebo simulator

After multiple random tests of the goal position, in most cases the end of the arm can reach the goal position within 3 s, which meets the real-time requirement. In the failure cases, judging from the posture, the main cause is that multiple joints move to their extreme positions; the joints then interfere with one another, and the arm becomes stuck in place and cannot move normally, as shown in Fig. 10.

Fig. 10. Multiple joints movements to their extreme positions

4 Summary

RL based on DDPG can deal with continuous action spaces and balance the contradiction between exploration and exploitation in RL. For the seven-DoF manipulator, after training is completed the Actor network performs a forward calculation according to the environment state, and the speed of action control meets the real-time control requirements. It can handle different goal positions. However, there is still the problem that multiple joints sometimes run to their limits and get stuck. In future work, we can increase the size of the networks to improve the approximation of the policy and the Q function. For arms with a large number of DoF, such as the seven-DoF manipulator, we should consider how to keep joints away from their limits during motion and how to avoid the "dead zone" in which multiple joints have entered their limit positions and the arm cannot work properly.

Acknowledgements. The authors would like to express their appreciation to mentors at Shanghai University for their valuable comments and other help. This work was supported by the Shanghai Municipal Commission of Economy and Informatization of China under program No. 2017-GYHLW-01037, and by the Shanghai Science and Technology Committee of China under program No. 17511109300.

References

1. Tsarouchi, P., Makris, S., Michalos, G., et al.: Robotized assembly process using dual arm robot. Procedia CIRP 23(3), 47–52 (2014)
2. Shen, J., Gu, G.C., Liu, H.B.: Mobile robot path planning based on hierarchical reinforcement learning in unknown dynamic environment. Robot 28(5), 544–547+552 (2006)
3. Liu, Y.X., Wang, L.L., Hang, X.S., Tang, Q.: A survey of coordinated control for dual manipulators. J. Inner Mongolia Univ. (Nat. Sci. Ed.) 48(4), 471–480 (2017)
4. Wang, Z., Hu, L.S.: Industrial manipulator path planning based on deep Q-learning. Control Instrum. Chem. Ind. 45(2), 141–145+171 (2018)
5. Zhang, S., Li, S.: Reinforcement learning based obstacle avoidance for robotic manipulator. Mach. Des. Manuf. 8, 140–142 (2007)
6. Li, Y., Shao, J.: A revised Gaussian distribution sampling scheme based on RRT* algorithms in robot motion planning. In: International Conference on Control, Automation and Robotics, Nagoya, pp. 22–26 (2017)
7. Xu, X.: Sequential anomaly detection based on temporal-difference learning: principles, models and case studies. Appl. Soft Comput. 10(3), 859–867 (2010)
8. Zhou, W., Yu, Y.: Summarize of hierarchical reinforcement learning. CAAI Trans. Intell. Syst. 12(05), 590–594 (2017)
9. Xia, L.: Reinforcement learning with continuous state-continuous action. Comput. Knowl. Technol. 7(19), 4669–4672 (2011)
10. Chen, X., Gao, Y., Fan, S., Yu, Y.: Kernel-based continuous-action actor-critic learning. PR & AI 27(02), 103–110 (2014)
11. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529 (2015)



12. Silver, D., Lever, G., Heess, N., Degris, T., et al.: Deterministic policy gradient algorithms. In: International Conference on Machine Learning, pp. 387–395 (2014) 13. Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., et al.: Continuous control with deep reinforcement learning. Comput. Sci. 8(6), A187 (2015) 14. Chen, X., Huang, Y., Zhang, X.: Kinematic Calibration method of Baxter robot based on screw-axis measurement and Kalman Filter. J. Vibr. Measur. Diagn. 37(5), 970–977+1066– 1067 (2017)

Research on Straightness Error Evaluation Method Based on Search Algorithm of Beetle

Chen Wang1, Cong Ren1, Baorui Li2, Yi Wang3, and Kesheng Wang2

1


School of Mechanical Engineering, Hubei University of Automotive Technology, Shiyan 442002, China
[email protected]
2 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai 200072, China
3 School of Business, Plymouth University, Plymouth, UK

Abstract. In order to improve the accuracy of spatial straightness assessment, a mathematical model of spatial straightness is established, and the objective function is solved by a variable-step beetle antennae search algorithm. The method mainly involves the search step and the step length of the beetle. The original search algorithm has limited precision and easily falls into local optima, so the variable-step method is designed to enhance the diversity of the algorithm and improve the calculation precision. Finally, the model is solved and the calculation results are obtained according to the termination criterion. Comparison with the results of other methods verifies the feasibility and superiority of this algorithm.

Keywords: Spatial straightness · Search algorithm of beetle antennae · Variable step

1 Introduction

With the continuous development of precision manufacturing technology, digital measurement of parts has become a key step in the product lifecycle. Among the evaluation elements of parts, spatial straightness is a key form-and-position element for tubes and shafts, and the accuracy of its evaluation largely affects the evaluation results for the whole part. In the relevant international and national standards, the main algorithms for evaluating straightness errors are the minimum zone method, the least squares method and intelligent optimization algorithms. Intelligent optimization algorithms such as the genetic algorithm and particle swarm optimization have been widely used in spatial straightness error assessment, but these algorithms depend heavily on their parameters, their calculation speed is slow, their accuracy is not high enough, and their robustness needs to be further strengthened. Huang Fugui obtained experimental data for eight different kinds of straight lines on a coordinate measuring machine and presented two algorithm models,

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 368–374, 2019.

Research on Straightness Error Evaluation Method


which were then evaluated by the least squares method and the least envelope zone method respectively. The experimental results indicate that both methods conform to the minimum condition [1]. Zhang Xinbao presented an evaluation algorithm that rotates the axis of the enclosing cylinder in a determined direction, addressing the problems that spatial straightness error evaluation can be too coarse or that evolutionary algorithms spend too much time; the algorithm is very robust because of its determined search direction and absence of iteration [2]. Wu Hulling proposed a hybrid calculation method based on the Monte Carlo method and the GUM method for the problem of straightness measurement uncertainty evaluation [3]. With the development of swarm intelligence optimization algorithms, various new algorithms have emerged constantly, reducing the complexity and time-consuming calculation of previous algorithms. Liao Ping proposed a method of calculating 3-D line error based on the genetic algorithm, using numeric encoding in the genetic algorithm to solve the spatial straightness model. The algorithm was simulated and verified with actual data, showing that it can be realized easily on a computer, finds the global optimal solution, approaches the true value in theory, and has a fast convergence rate [4]. Mao Jian proposed a method to evaluate spatial straightness errors using particle swarm optimization, setting up a mathematical model of spatial straightness for the particle swarm algorithm to solve. Good results were achieved with actual data measured by a three-coordinate measuring machine and other measurement systems [5]. Ye Ming presented a hybrid optimization algorithm combining the least squares algorithm and the artificial fish swarm algorithm to solve the straightness error; the computational results have higher accuracy than the genetic algorithm and particle swarm optimization [6].

At present, new intelligent optimization algorithms for evaluating the spatial straightness error are characterized by high precision, high speed and so on. Therefore, designing or improving a new intelligent optimization algorithm to solve the spatial straightness mathematical model is a good solution.

2 Problem Modeling

First, the measured part is determined and the spatial straight-line measurement data of the part are obtained with a coordinate measuring machine. The spatial straightness error is the deviation of the actual line from the ideal straight line, that is, the diameter of the smallest cylinder that contains all the measuring points, as shown in Fig. 1.

Fig. 1. Spatial straightness schematic diagram (measuring points, the actual line, and the ideal straight line L)


C. Wang et al.

The point-direction parametric equation of the spatial line is Eq. (1):

$$\frac{x - x_1}{a} = \frac{y - y_1}{b} = \frac{z - z_1}{c} \tag{1}$$

The distance from a point to the spatial line is given by Eq. (2), and the mathematical model for spatial straightness error evaluation is established, where (x1, y1, z1) is a fixed point on the axis L of the ideal cylinder containing all measuring points, (a, b, c) is the direction vector of L in the x, y and z directions, and f is the spatial straightness sought. The objective function is Eq. (3):

$$f = 2\sqrt{\frac{\begin{vmatrix} x - x_1 & y - y_1 \\ a & b \end{vmatrix}^2 + \begin{vmatrix} x - x_1 & z - z_1 \\ a & c \end{vmatrix}^2 + \begin{vmatrix} y - y_1 & z - z_1 \\ b & c \end{vmatrix}^2}{a^2 + b^2 + c^2}} \tag{2}$$

$$F = \min\left(\max f(x_1, y_1, z_1, a, b, c)\right) \tag{3}$$
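Equation (2) can be implemented directly; below is a minimal sketch, with a sanity check against a point at unit distance from the z-axis:

```python
# Hedged implementation of Eq. (2): twice the distance from a measuring
# point (x, y, z) to the line through (x1, y1, z1) with direction (a, b, c).
import math

def f_straightness(x, y, z, x1, y1, z1, a, b, c):
    d1 = (x - x1) * b - (y - y1) * a   # the three 2x2 determinants of Eq. (2)
    d2 = (x - x1) * c - (z - z1) * a
    d3 = (y - y1) * c - (z - z1) * b
    return 2.0 * math.sqrt((d1 ** 2 + d2 ** 2 + d3 ** 2) / (a ** 2 + b ** 2 + c ** 2))

# Sanity check: a point at perpendicular distance 1 from the z-axis gives f = 2.
print(f_straightness(1.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0))  # 2.0
```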

3 Solution Algorithm

3.1 Beetle Antennae Algorithm

The beetle antennae search algorithm (BAS) is a new intelligent optimization algorithm proposed by Jiang and Li in 2017, which has the advantages of fast solution speed and high accuracy [7]. BAS, also called the beetle algorithm, was developed under the inspiration of the beetle's foraging principle. The beetle forages based on the smell of food using its two long antennae: if the scent intensity received by the left antenna is higher than that received by the right one, the beetle moves to the left, and vice versa. The aim of the beetle is to find the point with the globally largest odor value, and Jiang and Li turned this behavior into an intelligent optimization algorithm for efficient function optimization. BAS is similar to particle swarm optimization, the genetic algorithm and other intelligent optimization algorithms in that it does not require a specific form of the objective function or gradient information, so it can achieve efficient optimization. Compared with the particle swarm algorithm, the BAS search requires only one individual, that is, one beetle, so the computational cost is greatly reduced. However, the original BAS algorithm has a fixed step size, which limits search efficiency and accuracy in both global and local search. In view of these problems, a variable-step beetle antennae search algorithm is designed in this paper, in which the step starts large and then becomes smaller: the start is equivalent to a rough search to determine the approximate range, and the later fine search refines it, so that the combination balances accuracy and global convergence speed.




3.2 Algorithm Flowchart

Read the measurement data and substitute it into the objective function (3). Initialize the beetle antennae search algorithm, mainly including: the variable-step parameter Eta, the distance d0 between the beetle's antennae, the beetle step length step, the number of iterations n, and the problem dimension D; the random initial solution is x = rands(D, 1), a random D-dimensional vector. Then calculate the coordinates of the beetle's left antenna:

$$X_L = x + d_0 \cdot dir / 2$$

and of its right antenna:

$$X_R = x - d_0 \cdot dir / 2$$

where dir = rands(D, 1) is a random D-dimensional direction vector. Calculate the antenna intensities (the fitness values):

$$F_{left} = f(X_L), \qquad F_{right} = f(X_R)$$

Use the variable-step method to calculate the next position of the beetle:

$$x \leftarrow \begin{cases} x + Eta \cdot step \cdot dir, & F_{left} < F_{right} \\ x - Eta \cdot step \cdot dir, & F_{left} > F_{right} \end{cases}$$

Finally, judge the termination condition: if the number of iterations reaches the maximum, the calculation terminates; otherwise the loop continues. The fitness value when the iteration terminates is the spatial straightness error of the measuring points, and the position coordinates are the solution satisfying the objective function (3), that is, the parameters of the equation of the spatial straight line. The algorithm flowchart is shown in Fig. 2.
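The whole loop might be sketched as follows for minimization; the sphere test function, dimension and hyper-parameters are illustrative, not the paper's settings:

```python
# Hedged sketch of the variable-step BAS loop for minimization. The sphere
# test function, dimension and hyper-parameters are illustrative, not the
# paper's settings.
import math
import random

def vsbas(f, dim, iters=200, step=1.0, d0=0.5, eta=0.95, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]   # random initial solution
    best = f(x)
    for _ in range(iters):
        d = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(di * di for di in d)) or 1.0
        d = [di / norm for di in d]                        # random unit direction
        xl = [xi + d0 * di / 2.0 for xi, di in zip(x, d)]  # left antenna
        xr = [xi - d0 * di / 2.0 for xi, di in zip(x, d)]  # right antenna
        sign = 1.0 if f(xl) < f(xr) else -1.0              # move toward smaller f
        x = [xi + sign * eta * step * di for xi, di in zip(x, d)]
        step *= eta                                        # variable (shrinking) step
        best = min(best, f(x))
    return best

best = vsbas(lambda v: sum(vi * vi for vi in v), dim=3)
print(best)   # best value found (approaches the minimum 0 of the sphere function)
```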

4 Experimental Verification

To further verify the model and algorithm proposed in this paper, actual spatial straightness measurement data for a part, obtained on a three-coordinate measuring machine, were used; the measurement results are shown in Table 1. The parameters of the VSBAS algorithm are set as follows: population size NP = 50 and variable-step parameter Eta = 0.95. The algorithm is implemented in MATLAB R2016a [8] on a high-performance computer with 8 GB of RAM and a dual-core 3.20 GHz CPU, running Windows 10 Professional.



Fig. 2. Algorithm flowchart

Table 1. Measured center coordinates for the straightness measurement

No.  X     Y      Z        X       Y      Z
1    6.60  15.58  65.46    76.83   15.46  89.46
2    6.49  15.59  53.4     86.72   15.49  71.16
3    6.95  15.55  101.46   96.39   15.52  42.46
4    6.16  15.56  65.43    106.13  15.51  24.36
5    6.33  15.43  30.46    116.47  15.53  53.46
6    6.72  15.57  83.46    126.95  15.54  87.56

Table 2 shows the final results obtained using the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) [9] and the VSBAS algorithm. The straightness error obtained by the VSBAS algorithm is the smallest, that is, the most accurate. Compared with the other algorithms, VSBAS improves the results considerably, showing that the variable-step beetle antennae search algorithm solves the model better.

Table 2. Straightness error calculation results

Method  Iterations  Straightness error
GA      300         0.0753
PSO     300         0.0701
VSBAS   300         0.0632



Figure 3 shows the iteration curves of the GA, PSO and VSBAS algorithms. As shown in Fig. 3, GA converges the slowest; PSO converges faster and more accurately than GA; and the VSBAS algorithm is superior to the other algorithms in both accuracy and convergence.

Fig. 3. Iterative curve of the algorithm

5 Conclusion

In order to improve the accuracy of spatial straightness evaluation, this paper establishes a mathematical model of spatial straightness and solves it with the variable-step beetle antennae search algorithm. Experimental verification and comparison show that the variable-step beetle antennae search algorithm is superior to the other algorithms in accuracy and convergence. The intelligent optimization algorithm is simple and easy to understand for the spatial straightness error problem, is convenient to solve by programming, and achieves good results when applied to measurement data in the field of spatial straightness error assessment.

Acknowledgment. This research work was supported by the Key Laboratory of Automotive Power Train and Electronics (Hubei University of Automotive Technology, No. ZDK1201703) and the youth project of the Hubei Provincial Department of Education Foundation, China (No. Q20181801).

References

1. Huang, F., Cui, C.: Comparison of evaluating precision of straightness error between least square method and least envelope zone method. Opt. Precis. Eng. 15(6), 889–893 (2007)
2. Zhang, X., Xie, J.: Evaluating spatial straightness error by approaching minimum enclosure cylinder. J. Huazhong Univ. Sci. Technol. (Nat. Sci. Ed.) 39(12), 6–9 (2011)
3. Wu, H.: Straightness measurement uncertainty evaluation based on Monte Carlo method and GUM method. Tool Technol. 51(5), 104–107 (2017)
4. Liao, P., Yu, S.: The solution of spatial straightness error based on genetic algorithm. J. Cent. S. Univ. (Nat. Sci. Ed.) 6, 586–588 (1998)



5. Mao, J., Cao, Y.: Evaluation method for spatial straightness errors based on particle swarm optimization. J. Eng. Des. 13(5), 291–294 (2006)
6. Ye, M., Tang, D.: Study on the evaluation of straightness error via hybrid least squares and artificial fish swarm algorithm. Mech. Sci. Tech. Aerosp. Eng. 33(7), 1013–1017 (2014)
7. Jiang, X., Li, S.: BAS: beetle antennae search algorithm for optimization problems. arXiv preprint arXiv:1710.10724 (2017)
8. Lin, Z., Zhou, J.: Data processing of straightness error based on MATLAB. Tool Technol. 42(3), 84–87 (2008)
9. Wang, C., Yang, Y., Yuan, H., et al.: NC cutting parameters multi-objective optimization based on hybrid particle swarm algorithm. Mod. Manuf. Eng. (3), 77–82 (2017)

Production Management

Analysis of Machine Failure Sorting Based on Directed Graph and DEMATEL

Min Ji

School of Intelligent Manufacturing and Control Engineering, College of Engineering, Shanghai Polytechnic University, No. 2360, Jinhai Road, Shanghai, China
[email protected]

Abstract. The modern manufacturing industry is increasingly large-scale, flexible and intelligent, and machine failures are correspondingly more complicated. In order to analyze the importance levels of failures and focus on the most probable failure during the production process, this paper proposes a machine failure sorting model based on a directed graph and DEMATEL, which can eliminate the uncertainty of expert judgment, determine the direction of failures, and sort each failure according to its centrality degree. An empirical analysis is presented to show the calculation process step by step. This model can help decision-makers quickly locate the important failures and reduce the production problems and costs caused by failure.

Keywords: Machine failure sorting · DEMATEL · Directed graph

1 Introduction

In the modern manufacturing industry, machine failures are more and more complicated [1]. In order to understand the mechanism of failure, this paper introduces a method based on a directed graph model and DEMATEL to analyze the impact on failure of the reliability of each subsystem, the interaction between subsystems, and internal and external interference factors. In recent years, machine failure analysis has gradually become a research hotspot worldwide. Many methods have been applied to fault analysis, such as neural networks [2], support vector machines [3], fault tree analysis (FTA) [4] and FMECA [5]. The neural network has been successfully applied to fault diagnosis because of its simple structure and rapid training process; however, its diagnosis process is difficult to interpret, its structure and parameters are difficult to determine, and the training results are easily trapped in a local optimum. Based on the principle of structural risk minimization, the SVM maps low-dimensional data into a high-dimensional space to classify the data through an optimal hyperplane; it has high inference accuracy and good adaptability and is well suited to small sample data, but determining the optimal hyperplane takes a long time, and the selection of the kernel function depends on the operator's experience and requires repeated experiments. FMECA is mainly used to define, identify and eliminate known or potential faults of the production system [6] by using the Risk

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 377–382, 2019.


M. Ji

Priority Number (RPN) to determine the impact of the failure. However, the risk factors are difficult to obtain an accurate assessment and the calculation of RPN also has some controversy. The FTA method analyzes various failure causes from the overall to the part according to the dendritic structure [7]. This method does not need to train data, but the establishment of the fault tree requires an accurate analytical model, and this method cannot show the knowledge used in the solution [8]. In view of the above problems, this paper proposes a method based on DEMATEL and directed graph model. The failure-directed graph was used to establish the interactions between failures, and the DEMATEL model was used to determine the key failure factors according to the centrality degrees.

2 DG-DEMATEL Method

Based on the fault-directed graph model, this study uses the DEMATEL model to sort machine failures. Unlike methods such as AHP, DEMATEL does not require the elements to be independent, and it can determine the interrelatedness of the various elements of the system. Among many failures, it can distinguish the cause failures from the result failures and sort the failures according to their centrality degrees, which provides a basis for decision making. The DEMATEL model is an effective method for failure analysis and identification. It is particularly effective for complex systems, especially those with uncertain relationships, and is widely used in many fields [9–11]. By determining the degree of mutual influence between the failures, it uses matrix theory to calculate the structural relationships between the failures and the strengths of their influences, and establishes a system structure model of the failures [12]. However, the traditional DEMATEL model invites experts to evaluate the relationships between the failures from experience, which is subjective, and the criteria are easily interpreted differently. We therefore introduce the failure-directed graph model to describe the interactions between the failures, which ensures the objectivity of DEMATEL and eliminates the subjective influence. The specific algorithm is as follows.

First, analyze the direct relationships between the failures and calculate the strength of each relationship from the fault-directed graph, where 0 means no influence and N is the number of times one failure has caused another. A direct influence matrix A can be constructed:

A = \left(a_{ij}\right)_{n\times n} =
\begin{bmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{n1} & \cdots & a_{nn}
\end{bmatrix} \quad (1)

where a_{ij} is the number of times failure j was caused by failure i. Next, normalize the direct influence matrix A so that all element values lie between 0 and 1:

Analysis of Machine Failure Sorting Based on Directed Graph

X = \left(x_{ij}\right)_{n\times n} = \frac{A}{\max\left(\max_{1\le i\le n}\sum_{j=1}^{n} a_{ij},\ \max_{1\le j\le n}\sum_{i=1}^{n} a_{ij}\right)} \quad (2)

Construct a comprehensive influence matrix T, where E denotes the identity matrix:

T = X\,(E - X)^{-1} \quad (3)


Calculate the degree of influence D and the degree of being influenced R of each failure:

D = \left(t_i\right)_{n\times 1} = \left(\sum_{j=1}^{n} t_{ij}\right)_{n\times 1} \quad (4)

R = \left(t_j\right)_{1\times n} = \left(\sum_{i=1}^{n} t_{ij}\right)_{1\times n} \quad (5)

Differentiate the cause failures from the result failures and calculate the centrality of each failure. If D_i - R_i > 0, failure i is a cause failure; if D_i - R_i < 0, failure i is a result failure. Result failures are influenced by cause failures. The centrality degrees D_i + R_i can then be arranged in descending order: the larger the value, the greater the importance of the failure.
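The steps above reduce to a few matrix operations. The following is a minimal sketch of ours, not code from the paper (the helper name `dematel` is our own):

```python
import numpy as np

def dematel(A: np.ndarray):
    """Run the DG-DEMATEL steps on a direct influence matrix A,
    where A[i, j] counts how often failure j was caused by failure i.
    Returns (D, R, centrality D + R, cause degree D - R)."""
    n = A.shape[0]
    # Normalize by the larger of the maximum row sum and the maximum
    # column sum, so every entry of X lies in [0, 1].
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    X = A / s
    # Comprehensive influence matrix T = X (E - X)^{-1}.
    T = X @ np.linalg.inv(np.eye(n) - X)
    D = T.sum(axis=1)  # degree of influence (row sums of T)
    R = T.sum(axis=0)  # degree of being influenced (column sums of T)
    return D, R, D + R, D - R
```

For a simple chain 1 → 2 → 3, for instance, failure 1 comes out with D − R > 0 (a cause failure) and failure 3 with D − R < 0 (a result failure).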

3 Empirical Analysis

We assume a machine failure system as shown in Fig. 1, which consists of 10 subsystems with complex connections among them.

Fig. 1. The failure directed graph

According to the above steps, the direct influence matrix A (shown in Table 1) can be obtained, where a_{ij} is the number of times failure j was caused by failure i; the larger the value, the stronger the direct influence of failure i on failure j.


Table 1. The direct influence matrix A

 i\j   1  2  3  4  5  6  7  8  9  10
  1    0  1  1  0  0  0  0  0  0  0
  2    0  0  0  0  1  1  0  0  0  0
  3    0  0  0  0  4  0  0  0  0  0
  4    0  0  1  0  0  0  0  0  0  0
  5    0  0  0  0  0  0  1  0  0  0
  6    0  0  0  0  0  0  1  2  1  0
  7    0  0  0  0  0  0  0  0  0  0
  8    0  2  0  0  0  0  0  0  0  0
  9    0  0  0  0  0  0  0  0  0  0
 10    0  0  0  0  0  0  0  3  0  0

Normalize the direct influence matrix A, and construct a comprehensive influence matrix T (as shown in Table 2).

Table 2. The synthetic influence matrix T

 i\j   1     2       3    4     5       6       7       8       9      10
  1    0  0.2066  0.2   0  0.2013  0.0413  0.0485  0.0165  0.0083   0
  2    0  0.0331  0     0  0.2066  0.2066  0.0826  0.0826  0.0413   0
  3    0  0       0     0  0.8     0       0.16    0       0        0
  4    0  0       0.2   0  0.16    0       0.032   0       0        0
  5    0  0       0     0  0       0       0.2     0       0        0
  6    0  0.1653  0     0  0.0331  0.0331  0.2132  0.4132  0.2066   0
  7    0  0       0     0  0       0       0       0       0        0
  8    0  0.4132  0     0  0.0826  0.0826  0.0331  0.0331  0.0165   0
  9    0  0       0     0  0       0       0       0       0        0
 10    0  0.2479  0     0  0.0496  0.0496  0.0198  0.6198  0.0099   0

According to the comprehensive influence matrix T, the influence degree and the degree of being influenced of each failure are calculated. The centrality degree D_i + R_i of failure i indicates its importance in the system: the larger the value, the more important it is. The cause degree D_i − R_i indicates the degree to which failure i influences the other failures (as shown in Table 3). From the table, the cause failures are 1, 3, 4, 6 and 10; among them, failures 10 and 1 have the highest cause degrees and failure 4 the lowest. The result failures are 2, 5, 7, 8 and 9. The descending order of centrality degrees is 8, 5, 2, 6, 3, 10, 7, 1, 4 and 9. According to the centrality degrees, we can divide the failures into three classes. The first class contains the most important failures, 8, 5 and 2, which should receive special attention. The second class contains the next most important failures: 6, 3, 10, 7 and 1. After excluding the first class,



Table 3. D, R, D − R and D + R of each failure

 Failure i      1        2       3      4       5        6        7        8        9      10
 D_i         0.7226   0.6529  0.96   0.392   0.2     1.0645   0       0.6612   0       0.9967
 R_i         0        1.0661  0.4    0       1.5332  0.4132   0.7893  1.1653   0.2826  0
 D_i + R_i   0.7226   1.719   1.36   0.392   1.7332  1.4777   0.7893  1.8264   0.2826  0.9967
 D_i − R_i   0.7226  −0.4132  0.56   0.392  −1.3332  0.6512  −0.7893 −0.5041  −0.2826  0.9967

the second class can be the focus of attention. The third class, failures 4 and 9, has the lowest degree of importance in the entire system. As the example shows, the DEMATEL algorithm, applied to the fault-directed graph, can identify the key failures affecting the operation of the system and sort the failures by degree of influence, helping decision-makers understand the failures, focus on the key ones, and improve the operational stability of the production system.
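As a cross-check, the values and ranking of the empirical analysis can be reproduced from the Table 1 matrix in a few lines of NumPy. This is our sketch, not the author's code, and it assumes Table 1 is read row-wise as a_{ij}:

```python
import numpy as np

# Direct influence matrix A from Table 1 (A[i, j] = times failure j
# was caused by failure i).
A = np.array([
    [0, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 4, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 2, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 2, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 3, 0, 0],
], dtype=float)

# Normalize, build T = X (E - X)^{-1}, then take row and column sums.
X = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())
T = X @ np.linalg.inv(np.eye(10) - X)
D, R = T.sum(axis=1), T.sum(axis=0)

# Failures ranked by centrality D + R, most important first (1-based).
ranking = (np.argsort(-(D + R)) + 1).tolist()
print(ranking)  # [8, 5, 2, 6, 3, 10, 7, 1, 4, 9]
```

The computed values round to those of Table 3 (e.g. D_1 = 0.7226, D_10 − R_10 = 0.9967), and the centrality ranking matches the order given in the text.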

4 Conclusion

In the DG-DEMATEL model, the directed graph objectively describes the direct and indirect influence relationships between failures and their strengths, while the DEMATEL model calculates the cause degree, result degree and centrality degree of each failure. According to the centrality degrees, we can find the subsystems in which failures frequently occur and locate weak points in reliability. The cause and result degrees determine the direction of influence between failures, so that a failure can be located quickly.

Acknowledgements. This work is supported by the Young Teacher Training Project of Shanghai Municipal Education Commission (Grant No. ZZegd16007).

References

1. Xia, J., Wang, H.: Fault cause analysis of complex manufacturing system based on DEMATEL-ISM. J. Beijing Inf. Sci. Technol. Univ. (Nat. Sci. Ed.) (1) (2018)
2. Chine, W., Mellit, A., Lughi, V., et al.: A novel fault diagnosis technique for photovoltaic systems based on artificial neural networks. Renewable Energy 90, 501–512 (2016)
3. Patel, R.A., Bhalja, B.R.: Condition monitoring and fault diagnosis of induction motor using support vector machine. Electr. Mach. Power Syst. 44(6), 683–692 (2016)
4. Dongiovanni, D.N., Iesmantas, T.: Failure rate modeling using fault tree analysis and Bayesian network: DEMO pulsed operation turbine study case. Fusion Eng. Des. 109–111, 613–617 (2016)
5. Cai, Z., Sun, S., Si, S., Wang, N.: Modeling of failure prediction Bayesian network based on FMECA. Syst. Eng. Theor. Pract. 33(1), 187–193 (2013)
6. Zhai, S.: Research on failure analysis of automotive components quality based on after-sale data. Chongqing University (2016)
7. Liang, F., Jiang, H., Guo, Y., Zhu, M.: Analysis welding robots diagnosis based on fault tree analysis. J. Mech. Elect. Eng. 31(8), 1067–1070 (2014)
8. Zong, Q., Li, G., Guo, M.: Design of diagnostic expert system for elevator system based on FTA. Control Eng. China 20(2), 305–308 (2013)
9. Wu, Q., Wu, C., Kuang, H.: Influencing factors identification of transportation low-carbonization capacity based on the RBF-DEMATEL model. Sci. Res. Manag. 34(10), 131–137 (2013)
10. Yang, Y., Yang, S., Zhang, M., Zheng, T.: An approach to the travel reservation app assessment based on DEMATEL. Tourism Tribune 43(2), 64–74 (2016)
11. Sun, H., Cheng, X., Dai, M., Wang, X., Lang, H.: Study on the influence factors and evaluation index system of regional flood disaster resilience based on DEMATEL method, taking Chaohu basin as a case. Resour. Environ. Yangtze Basin 24(9), 1577–1583 (2015)
12. Tzeng, G.H., Chiang, C.H., Li, C.W.: Evaluating intertwined effects in e-learning programs: a novel hybrid MCDM model based on factor analysis and DEMATEL. Expert Syst. Appl. 32(4), 1028–1044 (2007)

Applying Decision Tree in Food Industry – A Case Study

James Mugridge and Yi Wang
The School of Business, Plymouth University, Plymouth, UK
[email protected]

Abstract. The managers of Naked Necessities Ltd have to decide whether the company should open a café selling hot food or just cold snacks; the company also has the option to carry on trading as it is and not open a café. When faced with such a decision, management should first identify whether the decision to be made is qualitative or quantitative, as this determines the tools and models that should be used. This is a financial decision concerning numerical data, so a quantitative approach is advised. A decision tree is a clear way to represent complex data in a simple graphical form, and its calculations can be used to create scenarios and outcomes of the decision. If management has established clear objectives for the decision, the decision-making process can be made relatively simple by using a decision tree. Management should be aware that one of the main criticisms of the decision tree model is that it is prone to bias during the probability phase. The literature suggests that as much historical, numerical data as possible should be fed into the calculations, as using more numerical data increases the validity of the model's results.

Keywords: Decision tree · Decision making · Critical analysis · Business decision

1 Introduction

As a new business, one of its core objectives will be to grow and increase revenue. This is supported by Bonnet et al. [1], who state that small to medium enterprises should be focused on increasing market share. For the purpose of meeting its growth objectives, the business is considering opening a café alongside its retail enterprise. The decision to be made has three main options: serve cold snacks, provide hot food, or do not open a café. Hou et al. [2] state that in a situation such as this, management should use all of the tools at their disposal to make an informed decision. In the case of Naked Necessities, the management need to decide whether opening a café is the best course of action for the business. This is an example of strategic decision making and could determine the success of the business, as Papazov [3] explains in the article "Strategic Financial Decisions for Small Business Growth". Management should set a core objective to aid in the decision-making process; in this case, it would be to increase revenue and profits. This is an important decision for the company, and Love and Roper [4] warn that a strategic growth decision of this nature could require a substantial capital investment. It is therefore not a decision to be made without a great deal of consideration. According to the literature, as much evidence, data, and experience as is available should be considered [5]. The aim of this data gathering is to reduce the risk involved in the decision by minimizing uncertainties [6].

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 383–388, 2019.

2 Decision Making

Bieber et al. [7] explain that some decisions involve the more subjective opinions of the directors and management, or of the customer base. Lewin et al. [8] agree with this view and suggest that these types of decision may be better served by a qualitative decision-making tool. A qualitative view of decision making is likely to have originated from a psychologist's or sociologist's perspective on solving management problems [9]. A qualitative decision-making tool such as shared decision making (SDM) would be more useful if, for example, the organization were trying to decide whether the products stocked in the café should be organic. This example is supported by Ai et al. [10], who state that when criteria are subject to opinion, a qualitative decision-making tool should be used. The quantitative dimension of decision making relates to the elements that utilise numerical values. The quantitative paradigm of decision making is likely to have originated from statisticians, primarily concerned with data and numerical figures used to calculate and solve management issues [11]. Tillal et al. [12], in the article "Quantitative and qualitative decision-making methods in simulation modelling", describe the differences between qualitative and quantitative approaches to decision making. The difference in paradigms is a result of many academic disciplines exploring their own ways of dealing with decision making, as Gawlik [13] describes. As a result, there are many different ways to approach and solve decision-making problems. Galanc et al. [14] take the view that there may not be such a sharp distinction between the approaches, suggesting that a purely quantitative approach would eliminate the role of the manager as a decision maker. They suggest that even with quantitative approaches to decision making, a certain amount of judgement and subjective assumption must be made. It is important, therefore, to view the models as aids in making decisions rather than as final answers.

3 Solution

The core objective of Naked Necessities is to increase profits as a by-product of growth. The decision to be made is complex but can be aided by data analysis. Naked Necessities has two main options: open a café with hot food, or serve only cold food. Alternatively, Naked Necessities could do nothing and continue trading without the café. Most of the outcomes of the decision can be quantified. The cost of creating the café can be calculated using existing data, both for providing just cold food and for providing hot food. The potential earnings from the café can also be calculated, although this requires some level of assumption; they can be split into low and high earnings to provide two sub-scenarios. Decision trees are used to assess the probability of outcomes and expected results. For the decision Naked Necessities is trying to make, it would be advisable to use a decision tree to calculate the potential outcomes, following the quantitative modelling literature [10]. In Naked Necessities' situation, doing nothing would not cost anything and should not impact current profits, so that branch has no net result. The next phase of the tree is represented by a circle, where the option has uncertainty and two possible outcomes. In reality, this can be split further into more possible outcomes, but for simplicity it has been kept as two in the example. The probability of the café earning a high or a low revenue under each option should be calculated based on an expected number of visits per day. This can be represented as a probability, with 1 being full certainty of an outcome and 0 being impossible; the probabilities for an option must total 1. This is represented in Table 1.

Table 1. Historic data and assumptions on probability

                             Open café with hot food   Open café with cold food
Cost of option               £190,000
High sales (annual profit)   £300,000
Low sales (annual profit)    £150,000
Probability
From the data provided in the example, if the company chose option 2 (serving cold snacks), it would lose £40,000 over the year, whereas option 1 would make the company a £50,000 gain. Doing nothing would neither make nor lose Naked Necessities anything. In terms of ranking, option 1 is the best; doing nothing is the second most favourable; and over a single year, option 2 would be inadvisable, as the company would lose money.
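The expected-monetary-value arithmetic behind such a branch can be sketched briefly. The 0.6/0.4 probability split below is our illustrative assumption (the probability row of Table 1 is incomplete in the source); it happens to reproduce the £50,000 annual gain quoted for the hot-food option:

```python
# Expected monetary value (EMV) for one branch of a decision tree.
# The 0.6/0.4 probabilities are illustrative assumptions, not figures
# taken from the paper's table.
def emv(cost, outcomes):
    """outcomes: iterable of (probability, payoff) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in outcomes) - cost

hot_food = emv(190_000, [(0.6, 300_000), (0.4, 150_000)])
print(f"£{hot_food:,.0f}")  # £50,000
```

Each chance node contributes its probability-weighted payoff; subtracting the branch cost gives the expected net result used to rank the options.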

4 Critical Evaluation of the Solution

Decision trees are a useful tool for quantitative decision making. The tool still requires some assumptions, but risk can be mitigated through extensive data acquisition and analysis. Jiuyong et al. [15] make the case that once the data have been formulated and calculations made to determine the costs of options and the probabilities of uncertainties, the decision tree is relatively simple to construct. Gepp et al. [16] add that the outcomes of the decision tree are transparent, clear, and provide simple-to-understand figures for management. This can be particularly useful in the case of Naked Necessities, where the main objective is to increase revenue.



As demonstrated in the example above, simple figures were determined from the calculations, making the choice between the options relatively straightforward for management. Decision trees can be expanded to consider all possible outcomes of a decision choice. De Ville [17] explains how complex data can be presented in a decision tree, which is useful for management when complicated decisions need to be made. Correa et al. [18] add that decision trees not only deal with complex data but can also be useful when dealing with sensitive financial decisions. Aharoni [19] states that probabilities are often assumptions, even when based on data, and argues that these assumptions introduce uncertainty into the actual outcomes. Although a decision tree calculation is designed to account for uncertainty, it is at the stage of assigning probabilities that decision trees have some of their limitations. Therefore, Kang et al. [20] suggest that decision trees should be used in conjunction with managers' experience and with as much available data and modelling as possible. Huysmans et al. [21] highlight a major limitation of decision trees, arguing that even a small change in the input data has the potential to cause large changes in the outcomes. Combined with the limitations of making assumptions, this brings into question the validity of decision trees as a method of modelling quantitative decisions. For more straightforward management decisions, such as the Naked Necessities example, the decision tree model can be well suited. One of the major issues highlighted by Mays et al. [22] is that decision trees do not accommodate non-numerical factors; for a decision that requires a more qualitative approach, decision trees would be a poor choice of model. Opinion-based, more subjective decisions would require a model from a more psychological perspective, drawing on the opinion and experience of the manager. Although decision trees are relatively simple to create once the data are collated, Sim et al. [23] argue that the acquisition, research, and processing of the data can be time-consuming. Hodgkinson et al. [24] explore how decision trees can be subject to bias from the manager during creation of the data, and can sometimes be prone to error as a result.

5 Conclusion

As supported by the literature, if there is enough numerical data available to create accurate probabilities, then a decision tree can be a useful way to represent a decision graphically. The calculations leave the viewer with a simple set of likely outcomes for each choice, which is easy for management to read and understand. The literature suggests that the company should first set its objectives for the outcome of the decision; this can aid in selecting which decision-making tool to use. The outcome that best serves the company's objectives is then selected; in the example given, that is the outcome which made the company the most profit. When presented with a decision, management should first identify whether the decision to be made is qualitative or quantitative. As the literature states, this will influence the tool that should be used. If a decision tree is selected to support the



decision making process, management should be careful not to add bias to the probability phase of the calculation. This is important in ascertaining accurate results.

References

1. Bonnet, J., Le Pape, N., Nelson, T.: The route to high growth: patterns of financial and operational decisions for new firms in France. Contemp. Entrepreneurship 40(2), 95–110 (2016)
2. Hou, I., Huang, C., Tsai, H., Lo, Y.: Research on decision making regarding high-business-strategy café menu selection. Int. J. Comput. Sci. Inf. Technol. 7(2), 89 (2015)
3. Papazov, E.: A "reverse" approach to coordination of strategic and tactical financial decisions for small business growth. Procedia - Soc. Behav. Sci. 156(1), 161–165 (2014)
4. Love, H., Roper, S.: SME innovation, exporting and growth: a review of existing evidence. Int. Small Bus. J. 33(1), 28–48 (2015)
5. Brynjolfsson, E., McElheran, K.: The rapid adoption of data-driven decision-making. Am. Econ. Rev. 106(5), 133–139 (2016)
6. Van der Kleij, M., Vermeulen, A., Schildkamp, K., Eggen, J.: Integrating data-based decision making, assessment for learning and diagnostic testing in formative assessment. Assess. Educ. Principles Policy Pract. 22(3), 324–343 (2015)
7. Bieber, C., Müller, G., Blumenstiel, K., Schneider, A., Richter, A., Wilke, S., Hartmann, M., Eich, W.: Long-term effects of a shared decision-making intervention on physician–patient interaction and outcome in fibromyalgia: a qualitative and quantitative 1 year follow-up of a randomized controlled trial. Patient Educ. Counsel. 63(3), 357–366 (2006)
8. Lewin, S., Glenton, C., Munthe-Kaas, H., Carlsen, B., Colvin, J., Gülmezoglu, M., Noyes, J., Booth, A., Garside, R., Rashidian, A.: Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses. PLoS Med. 12(10), 18–30 (2015)
9. Teale, M., Dispenza, V., Flynn, J., Currie, D.: Management Decision Making, pp. 10–24. Pearson Education Ltd., London (2003)
10. Ai, J., Brockett, L., Wang, T.: Optimal enterprise risk management and decision making with shared and dependent risks. J. Risk Insur. 84(4), 1127–1169 (2017)
11. Kakeneno, R., Brugha, M.: Usability of Nomology-based methodologies in supporting problem structuring across cultures: the case of participatory decision-making in Tanzania rural communities. CEJOR 25(2), 393–415 (2017)
12. Tillal, E., Zahir, I., Ray, P., Love, P.: Quantitative and qualitative decision-making methods in simulation modelling. Manag. Decis. 40(1), 64–73 (2002)
13. Gawlik, R.: Encompassing the work-life balance into early career decision-making of future employees through the analytic hierarchy process. Adv. Intell. Syst. Comput. Ser. 594 (2016)
14. Galanc, T., Kolwzan, W., Pieronek, J., Skowronek-Gradziel, A.: Logic and risk as qualitative and quantitative dimensions of decision-making process. Oper. Res. Decisions 26(3), 21–42 (2016)
15. Jiuyong, L., Saisai, M., Thuc, L., Lin, L., Jixue, L.: Causal decision trees. IEEE Trans. Knowl. Data Eng. 29(2), 257–271 (2017)
16. Gepp, A., Kumar, K., Bhattacharya, S.: Business failure prediction using decision trees. J. Forecast. 29(6), 536–555 (2010)
17. De Ville, B.: Decision trees. Comput. Stat. 5(6), 448–455 (2013)



18. Correa, A., Djamila, B., Ottersten, B.: Example-dependent cost-sensitive decision trees. Expert Syst. Appl. 42(19), 6609–6619 (2015)
19. Aharoni, Y.: The foreign investment decision process. In: International Business Strategy, pp. 24–34. Routledge (2015)
20. Kang, N., Feinberg, M., Papalambros, Y.: Integrated decision making in electric vehicle and charging station location network design. J. Mech. Des. 137(6), 61402 (2015)
21. Huysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., Baesens, B.: An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decis. Support Syst. 51(1), 141–154 (2011)
22. Mays, N., Pope, C., Popay, J.: Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. J. Health Serv. Res. Policy 10(1), 6–20 (2005)
23. Sim, Y., Teh, S., Ismail, I.: Improved boosted decision tree algorithms by adaptive apriori and post-pruning for predicting obstructive sleep apnea. Adv. Sci. Lett. 24(3), 1680–1684 (2018)
24. Hodgkinson, P., Maule, J., Bown, J., Pearman, D., Glaister, W.: Further reflections on the elimination of framing bias in strategic decision making. Strateg. Manag. J. 23(11), 1069–1076 (2002)

Applying Decision Tree in National Health Service

Freddy Youd and Yi Wang
The School of Business, Plymouth University, Plymouth, UK
[email protected]

Abstract. Managers are often required to make decisions that will ultimately affect an entire organisation. Such decisions therefore need to be structured and based upon the reasonable evidence available to the manager. As with every organisation, managers are likely to face a number of problems that require effective and appropriate solutions. The problem identified in this report is the high level of resources allocated to the recruitment process of the National Health Service. A clear solution is identified in the use of decision trees. This problem-solving method has proven to remain relevant in the modern era, with a number of benefits available to any organisation using decision trees. As discussed throughout this report, using decision trees will enable managers to structure, and provide the correct evidence behind, any managerial decision-making process in an organisation.

Keywords: Decision tree · National Health Service · Cost effective · Product/service design

1 Introduction

Managers are often at the top of an organisation, having to make decisions that can have an overall and lasting impact on the firm. These decisions will often involve a significant commitment of resources, with significant impact on the firm as a whole and on its long-term performance [1]. With organisations growing significantly since the turn of the century, there are many challenges that organisations need to overcome in order to succeed. Decision-making problems include supplier selection, the application of a specific technology, improving group decision making in a multicultural environment, and improving collaboration between marketers and designers. These are just a few of the problems that require effective solutions. The NHS is currently the fifth biggest employer in the world [2]. The organisation faces a number of recruitment issues year on year, particularly in identifying the right talent for individual roles [3]. With health care technology on the rise, there is a constant demand for highly skilled employees trained in this new technology. The problem the NHS faces is making sure the right applicants are applying for the right roles within the organisation. These problems require solutions that are cost-effective and efficient, and a wide range of solutions is available to firms. These include multi-criteria decision making, quality function deployment, game theory, and decision trees. To overcome the problem of wasting resources on ineligible applicants, pre-employment decision trees can be used as a solution. To explore this problem and solution further, this report includes a literature review assessing the concept and content surrounding decision trees, with further assessment of how beneficial this solution can be to modern organisations. Following this review, a more in-depth approach explores the recruitment problem the NHS is currently facing. A suitable solution is then highlighted, examining its key strengths and weaknesses so that it can be critically evaluated, establishing whether or not it is the best solution for this problem.

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 389–393, 2019.

2 The Problem

The NHS employs an average of five thousand full-time members of staff each year [4]. This has a significant impact on all areas of the NHS. Despite its best recruitment methods, the NHS still spends over £400 million a year on agency staff [5]. A further problem the NHS faces is employee turnover, which can be linked to the recruitment process, with the NHS potentially employing unsuitable candidates for available roles. It may be that the job is not for them, or simply that their skills do not fit the criteria the NHS is looking for. However, employing such staff is having a detrimental effect throughout NHS England and Wales: figures show that at least ten per cent of qualified nurses leave the NHS within a year of qualification [6]. The growing problem within the NHS is the time, money and labour wasted on matching an individual's ability to the organisational need. The NHS's current recruitment strategy is in urgent need of change, given the latest developments in health care technology. It is now essential that the NHS employs the most suitable candidates, highly skilled and experienced with the latest technologies. However, there is no mechanism in place to discourage applicants who may not have the experience or attributes to carry out the job role. Therefore, a clear recruitment strategy is required to prevent a mismatch between an applicant's skills and actual job demands [3]. This problem carries a greater risk in the health care system, given that the quality of patient care is at stake. It is not only affecting the NHS but other organisations as well, being relatable to any organisation recruiting high volumes of individuals. Hei [5] explores how companies hiring software engineers struggle to find the ideal candidate among a large number of employee profiles; the problem is the inability to filter the top candidates from these lists.



3 Solution

For the NHS to overcome this problem, a suitable and effective solution is required, one that is the most appropriate fit for the problem. As discussed in the literature review, the use of decision trees is the most relevant solution, and this is discussed in further detail below. As stated by [3], pre-employment decision trees can play an integral part in the recruitment process. They can be used to encourage the applicant either to apply for the role or to disregard the position completely. The main use of the decision tree is to let an applicant follow the pathways of the structure and so determine whether or not their skills and attributes fit the role. This efficient approach can be applied across a wide range of organisations and is effective for the majority of recruitment strategies. Although a decision tree cannot stop an applicant from applying, it does provide the individual with a set of attributes to match against themselves. For organisations adopting this approach to recruitment, it provides managers with clear decisions based on the applicant's responses [8]. Further evidence is provided below on the benefits and limitations of using this solution.
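The pathway idea can be sketched as a small rule-based tree. The attributes below are hypothetical illustrations of ours, not criteria taken from the NHS or from the paper:

```python
# Hypothetical pre-employment screening tree. The attribute names
# ("registered_clinician", "trained_on_current_technology") are
# illustrative only; the paper does not specify the NHS's actual criteria.
def screen(applicant: dict) -> str:
    """Walk an applicant through a yes/no pathway and return a recommendation."""
    if not applicant.get("registered_clinician", False):
        return "do not apply"
    if not applicant.get("trained_on_current_technology", False):
        return "apply after further training"
    return "apply"

print(screen({"registered_clinician": True,
              "trained_on_current_technology": False}))  # apply after further training
```

Each internal node is one yes/no question, and each leaf is a recommendation, so an applicant (or a manager) can trace a single path from root to leaf and see exactly which attribute produced the outcome.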

4 Critical Evaluation of the Solution

For managers, using decision trees in the hiring of employees will bring a host of advantages to the organisation. One of the main benefits of using decision trees for this problem is that they provide managers with a broad framework for identifying the key attributes of an individual [9]. Managers can use this information to match these skills to the organisation itself, giving them a better idea of how compatible the two parties are. Decision trees also allow the manager to state the attributes required for the role, so that interested candidates can match their own skills against those required and then decide whether or not to apply. A further benefit of decision trees for recruitment problems is that no prior assumptions are made about the nature of the candidate [10]. The approach also reduces administration costs: pre-employment decision trees eliminate the need to keep a file for an applicant who does not successfully navigate the tree [3].

Having analysed the benefits of decision trees for this problem, several disadvantages also need to be considered. As mentioned in the literature review, decision trees can become overcomplicated and difficult to understand [7], which is a problem when analysing the data. However, the example used in this report can hardly be described as overcomplicated, given its small number of branches and leaves. Decision trees may also introduce a degree of bias; the example used here could potentially discourage certain applicants from applying. Although a decision tree can save organisations time, creating a tree to suit an individual organisation can prove to be extremely time-consuming [11]. Perhaps the most significant limitation is that this type of decision tree can only act as a guide on whether or not to apply for a position. The tree cannot stop an applicant from applying, even if they conclude that they do not meet the criteria. This counters one of the benefits mentioned above: the tree may not, in fact, save the organisation any time. Managers will therefore still need to make decisions, but can use this process as a guide throughout the recruitment stages.

F. Youd and Y. Wang

5 Conclusion

This report has explored a variety of problems currently facing managers. Having examined the NHS in detail, it is evident how much time, money and resource is spent on its recruitment process. The challenges the NHS faces in recruiting and retaining employees have been made clear, and a solution is required to make this process more efficient and effective. This report has challenged the original process by providing the NHS with an alternative for recruiting new applicants. A clear example of how a decision tree could be used within the NHS gives managers more structured decisions on which applicants are best suited to the organisation. The decision tree saves both money and resources, and benefits both the applicant and the organisation by saving each other's time. This approach has been highlighted as the most appropriate solution to the problem, given the criteria it meets in addressing the challenges of the NHS's recruitment process. A number of benefits and limitations have been highlighted throughout this report, including several ways in which the example decision tree could be improved. If the NHS were to adopt this solution for its recruitment problem, it could be expanded in the future; this type of decision tree could bring several other benefits to the organisation. One possibility the NHS could explore is using decision trees to match applicants against one another, providing managers with support when deciding which applicant to hire. Having weighed up the advantages and disadvantages of decision trees, it is evident that they remain appropriate in modern times. By using the above solution, the NHS could gain a more highly skilled and suitable talent pool of applicants.

References

1. Chambers, D., et al.: Strategic decision-making processes: the role of management and context. Strateg. Manag. J. 19(1), 115–147 (1998)
2. Nuffield Trust: The NHS is the world's fifth largest employer. Nuffield Trust (2017)



3. Liberman, A., Rotarius, T.: Pre-employment decision trees: job applicant self-selection. Health Care Manager 18(4), 48–54 (2000)
4. The King's Fund: NHS FTE staffing numbers outside general practice, 2010–17. The King's Fund (2017)
5. Bodkin, H.: NHS spending on agency staff increases despite control measures. The Telegraph (2016)
6. Newman, K., et al.: The nurses' satisfaction, service quality and nurse retention chain. J. Manag. Med. 16(4), 271–291 (2002)
7. Hei, S.: A decision tree approach to filter candidates for software engineering jobs using GitHub data. Comput. Sci. J. 1(1), 1–34 (2015)
8. Delen, D., et al.: Measuring firm performance using financial ratios. Expert Syst. Appl. 40(10), 3970–3983 (2013)
9. Duncan, R.: What is the right organisational structure? Decision tree analysis provides the answer. Organ. Dyn. 7(3), 59–80 (1979)
10. Zhao, Y., Zhang, Y.: Comparison of decision tree methods for finding active objects. Adv. Space Res. 41(12), 1955–1959 (2008)
11. Ville, B.D.: Decision trees. Comput. Stat. 5(6), 448–455 (2013)

Cognitive Maintenance for High-End Equipment and Manufacturing

Yi Wang1, Kesheng Wang2,3, and Guohong Dai3

1 School of Business, Plymouth University, Plymouth, UK
[email protected]
2 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]
3 Changzhou University, Changzhou, China

Abstract. Traditionally, predicting impending failures and mitigating downtime in manufacturing facilities requires combining many techniques, both quantitative and qualitative, such as smart sensors, high-end intelligent equipment, smart networks, the Internet of Things (IoT), Artificial Intelligence (AI), business analytics for decision-making and the Internet of Services (IoS). Based on the Industry 4.0 concept, Cognitive Maintenance (CM), or Intelligent Predictive Maintenance (IPdM), systems use intelligent data analysis and decision-making techniques to offer maintenance professionals working with high-end equipment the potential to optimize maintenance tasks in real time, maximizing the useful life of equipment and manufacturing assets while avoiding disruption to operations. In this paper, we present the impact of CM on high-end equipment, the framework of a CM system and a case study. Some lessons learned from implementing CM systems in industry are discussed.

Keywords: Cognitive maintenance · Industry 4.0 · High-end equipment · Predictive maintenance · Data analysis · Green Monitor · Smart manufacturing

1 Introduction

High-end equipment and manufacturing industries must continuously keep their equipment and manufacturing processes in order to achieve the expected production rates for products of increasing complexity and high quality. In recent years, the growth of Cognitive Maintenance (CM), or Intelligent Predictive Maintenance (IPdM), has been most pronounced in these industries. CM is a recent maintenance strategy, based on the AI and big data techniques of Industry 4.0, applied to high-end and expensive equipment. CM not only optimizes equipment uptime and performance, but also reduces the time and labor costs of preventive maintenance checks. Industry 4.0 provides a systematic way to develop CM systems, in which more powerful sensors, together with big data analytics, offer an unprecedented opportunity to track the performance and health condition of high-end equipment. However, industry still only

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 394–400, 2019.



spends 15% of its total maintenance costs on predictive (versus reactive or preventive) maintenance, because CM is not easy to implement in practice; more research and practical experience are needed. In the future, it will be essential to cultivate talent who know how to use data to assist in maintenance. Such talent can also help industries that still rely on preventive maintenance to upgrade to high-value predictive maintenance.

A series of new technologies, for example sensors, IoT, cloud computing, big data, data analysis and smart decision-making, has been developed to support a new industrial revolution: Industry 4.0, the fourth industrial revolution. In Industry 4.0, the following potential working areas will be created in smart manufacturing or the smart factory: big-data-driven quality control; robot-automated production; smart logistics vehicles; production line simulation; supply networks; cognitive maintenance; machines as a service; self-organizing production; additive manufacturing of complex parts; augmented work, maintenance and service; and more.

The paper is organized in five sections, as follows. Section 1 introduces the importance of Cognitive Maintenance systems for the high-end equipment industry. The framework of CM is described in Sect. 2. The Green Monitor project, a case of CM, is presented in Sect. 3. In Sect. 4, we discuss the lessons we have learned from implementing the concept of CM in high-end equipment industries. Conclusions and future developments are given in Sect. 5.

2 The Framework of the CM System

Cognitive Maintenance (CM) aims to maximize the useful life of equipment and its components while avoiding unplanned downtime and minimizing planned downtime. With the advent of Industry 4.0 in production, companies can leverage new technologies to monitor and gain deeper insight into their operations in real time, turning a typical high-end equipment facility into a smart factory. Simply put, a smart factory is one equipped with technology enabling machine-to-machine (M2M) and machine-to-human (M2H) communication, in tandem with analytical and cognitive technologies, so that decisions are made correctly and on time. A CM system utilizes data from various sources, such as critical equipment sensors, enterprise resource planning (ERP) systems, computerized maintenance management systems (CMMS) and production data. Smart factory management systems couple these data with advanced prediction models and analytical tools to predict failures and address them proactively. Additionally, over time, machine-learning technology can increase the accuracy of the predictive algorithms, leading to even better performance.

The various monitoring and sensor systems in high-end equipment require data mining and decision support technologies for accurate fault diagnosis and prognosis of different machines and components. A systematic framework based on these technologies for predicting faults and the Remaining Useful Life (RUL) of machine centers is therefore necessary. Figure 1 presents such a framework, used to develop a cognitive maintenance system for high-end equipment for the purpose of fault diagnosis and prognosis.


Y. Wang et al.

Fig. 1. The framework of CM system [2]

The CM solution offers innovative new possibilities for many industries. Big data are collected by Cyber-Physical Systems (CPS) and transmitted via the Internet of Things (IoT). The health condition of the machine or process is monitored automatically. Using AI and data mining, we can find abnormal patterns that indicate machine failures. The resulting decisions use the Internet of Services (IoS) to allow maintenance professionals to take suitable actions, for example stopping a machine before it breaks down, or carrying out corrective maintenance effectively and safely. In this way, unplanned shutdowns can be avoided. The framework consists of the following modules:

• CPS module for monitoring the condition of equipment. The main function of the CPS module is to connect sensors and equipment in order to monitor the condition of the equipment. Manufacturing systems include equipment, manufacturing processes and products. Many kinds of sensors are used in different CPS, for example vibration sensors, acoustic emission sensors, ultrasonic sensors, thermal sensors and computer vision systems.

• Internet of Things (IoT) and cloud computing module. The sensor signals from physical equipment are transmitted through digital networks or an IoT system to local databases or a cloud data center for further use. Some advanced machines have their own SCADA system; sensor data can be fused with SCADA data for better analysis in the data mining module.

• Data Mining (DM) and Knowledge Discovery (KD) module. The most important and difficult tasks in CM are to diagnose the reasons for equipment failure and to predict the Remaining Useful Life (RUL) of components and equipment. Many companies interested in implementing CM lack the data analysis skills related to knowledge discovery and data mining technologies. Our data mining module contains four functions: 1. signal preprocessing; 2. feature extraction; 3. fault diagnosis; and 4. fault prognosis. Data-driven methods based on Computational Intelligence, or soft computing, have been applied to solve diagnosis and prognosis in CM.

• Internet of Services (IoS) module. The results of the data mining module can be used for business applications. The IoS module consists of three functions: 1. key performance indicators; 2. maintenance planning and optimization; and 3. failure correction and compensation. The last function is connected with the CPS system, using the results of the data mining module to make physical changes to components and equipment and keep the manufacturing system in order.

The details of the framework of CM can be found in reference [2].
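As an illustration, the first three functions of the data mining module can be sketched in a few lines. The RMS alarm threshold and the signals below are invented for illustration; a real CM system would use trained diagnosis models rather than a fixed rule:

```python
# A minimal sketch of signal preprocessing, feature extraction and fault
# diagnosis for a vibration signal. Threshold and data are hypothetical.
import math

def preprocess(signal):
    """Remove the DC offset (mean) from a raw vibration signal."""
    mean = sum(signal) / len(signal)
    return [x - mean for x in signal]

def extract_features(signal):
    """Compute RMS and peak amplitude, two common vibration features."""
    rms = math.sqrt(sum(x * x for x in signal) / len(signal))
    peak = max(abs(x) for x in signal)
    return {"rms": rms, "peak": peak}

def diagnose(features, rms_threshold=1.0):
    """Flag a fault when RMS exceeds a (hypothetical) alarm threshold."""
    return "fault" if features["rms"] > rms_threshold else "normal"

healthy = preprocess([0.1, -0.1, 0.2, -0.2])   # low-amplitude vibration
worn = preprocess([2.0, -2.0, 2.5, -2.5])      # high-amplitude vibration
print(diagnose(extract_features(healthy)))  # -> normal
print(diagnose(extract_features(worn)))     # -> fault
```

The fourth function, fault prognosis, would replace the fixed threshold with a model that extrapolates the feature trend to estimate RUL.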

3 Green Monitor – CM Implementation Case

The purpose of high-end equipment in manufacturing is to make high-quality products with fast, accurate and safe machining processes. If an unexpected failure happens, it can cause a devastating accident and financial losses for the company. Early detection and prediction of faults, keeping the machine in order, is therefore an important success factor in a competitive market. Recently, most advanced industries have run Cognitive Maintenance programs focusing on monitoring; big data, IoT and cloud computing; fault diagnosis and prognosis; and novel intelligent techniques for data analysis in the high-end equipment industry.

To implement the proposed CM framework, the Norwegian University of Science and Technology (NTNU) cooperated with InterConsult Bulgaria (ICB) and Kongsberg Terotech (KTT) on the Green Monitor project, supported by the Norwegian Research Council. In the project, NTNU, as the research institute, developed the main framework and strategy of CM for monitoring and fault detection of the high-end equipment: an important and complex vertical machine center. ICB provided assistance in developing an IoT and cloud-based service and integrated software, empirical knowledge and data sources for the project. KTT is the maintenance company responsible for keeping the machine center in good operating condition. The machine center is a critical piece of equipment in the company's manufacturing; it was selected as the test bed because it is easy to set up new sensors and to collect existing data from its control system. A further aim of the project is to decrease the cost of faults, defects, maintenance and energy during the manufacturing process through remote condition monitoring and data analysis models based on shallow and deep Artificial Neural Network techniques.

As shown in Fig. 2, in the CPS module the data are collected from KTT's remote customer site, which gathers sensor data, SCADA data and maintenance history data and transmits them to the data analysis center (NTNU) through the IoT and cloud computing module (ICB). In the data mining and knowledge discovery module, the data analysis and mining process is carried out. In the IoS module, the results and decisions made by NTNU are returned to the customer (KTT) for further decision-making by maintenance experts, who execute maintenance actions, scheduling, correction and compensation of errors, for example changing the schedule, preparing the logistics of parts, or adjusting and repairing components of the machine center. More details on the project can be found in the articles [1–5].



Fig. 2. The Green Monitor system of IPdM in machine center maintenance

4 Lessons We Have Learned

Maintenance strategy and processes are the core elements of any successful maintenance organization. It is important to note that while technology is a key enabler, it is only one of the factors for success. Not all companies require the same level of cognitive maintenance. Assessing your organization's mission requirements and maintenance program maturity should be the first step of a predictive maintenance project. You can then build a prototype with one well-suited piece of equipment to gain experience and knowledge. The equipment chosen for the prototype should be highly integral to operations and should fail with some regularity, in order to create baseline predictive algorithms. Through several practical implementations of CM systems for high-end equipment in different industries, such as machine centers, wind turbines, elevators and hydropower plants, we have learned the following lessons:

• Define the business objective within the company. It is of paramount importance to define the business objective clearly and to decide which critical components or equipment to focus on in the factory or enterprise. These components and equipment must be able to offer enough data, covering both normal and abnormal conditions, to develop a correct analysis model. Examples of critical sub-systems are the gearbox of a wind turbine, a fan disk of a jet engine, a turbocharger of a diesel engine and backlash compensation in machine centers.



• Understand the full nature of the equipment. We have to understand the characteristics of the equipment, including its individual function, its relationship to other machines and devices in the system, and the cascading effects that will result if a piece of equipment fails, for example the impacts on processes and products, operators and managers, maintenance activities, and company reputation if a single machine fails while in production.

• Start with a small dataset. Putting all the company's data into a very large database for analysis is not the first step. Start with a small dataset and learn what you need for the big one: quantify the characteristics of this dataset, discover the quality, the relationships and the mutual information of the signals, develop methods to deal with missing data, and choose a database system that suits you. When you have gathered enough knowledge about your data, your project is ready to scale up. Examples of small datasets are data from one wind turbine instead of the whole wind park, or data from one jet engine of an airline instead of all engines of the whole fleet.

• Big data make the maintenance process faster and more efficient. Even though the Green Monitor project did not focus on solutions for data collection and storage, exchanging experience with different data collection systems and platforms was part of the project's activities. A big data platform is one requirement for providing easy and effective access to monitoring and sensor data. Current maintenance practices focus on periodic, sometimes manual, data analysis based on a pre-defined library of undesired events. The techniques used include exceedance detection, statistical analysis and trend analysis. Big data technologies can make this process much faster and much more efficient by automating the existing techniques.
For this purpose, the company should hire good data engineers capable of designing the right data infrastructure for automation and continuous monitoring, data collection and data analysis.

• Deep Learning (DL) for future CM. A Cognitive Maintenance strategy is a data-driven approach that uses AI, machine learning and computational intelligence methods to determine the health condition of equipment and to perform maintenance preemptively, avoiding adverse machine performance. In CM, data are collected over time to monitor the state of the machine and then analyzed to find patterns that predict failures. As big data can now be collected easily and effectively, modern algorithms need to be developed. Based on our experiments, Deep Learning is among the most fruitful tools for big data analysis. There are many types of Deep Learning models, and we have to study which model is best for a given analysis purpose. For example, Long Short-Term Memory (LSTM) networks can be used to predict the Remaining Useful Life (RUL) of equipment, since they are designed to learn from sequences of data.
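Before an LSTM can be trained for RUL prediction, a run-to-failure time series must be cut into labeled windows. The sketch below shows this preparation step on a hypothetical six-cycle degradation run; the window length and sensor readings are invented for illustration:

```python
# Slice a run-to-failure sensor series into fixed-length windows, each labeled
# with the remaining useful life (cycles until the final, failing cycle).
def make_rul_windows(series, window):
    """Return (window, rul_label) pairs from one run-to-failure series.

    The label for a window ending at index `end` is the number of cycles
    remaining before the last recorded cycle (assumed to be failure).
    """
    samples = []
    n = len(series)
    for end in range(window, n + 1):
        rul = n - end  # cycles left after this window
        samples.append((series[end - window:end], rul))
    return samples

# One degradation run of 6 cycles, with a reading drifting toward failure.
run = [0.1, 0.2, 0.4, 0.7, 1.1, 1.6]
samples = make_rul_windows(run, window=3)
print(samples[0])   # -> ([0.1, 0.2, 0.4], 3)
print(samples[-1])  # -> ([0.7, 1.1, 1.6], 0)
```

These (sequence, label) pairs are exactly the shape of training data a sequence model such as an LSTM consumes: the network sees a short history of readings and learns to regress the remaining cycles.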



5 Conclusions

The CM framework developed here shows a systematic approach to cognitive maintenance in machine centers, following the pioneering contributors to the field and based on data mining and deep learning techniques. Data mining technologies will play a core role in the cognitive maintenance of machine centers, owing to their complexity and high machining precision. Some lessons from the Green Monitor practice are listed in this paper. The first step is to scope the project by defining the business objective. Within that scope, narrow the focus to a critical piece of equipment or sub-system and try to show first results as soon as possible using a small dataset. Give your data scientists the freedom to use their preferred tools, and let them choose interpretable and explainable models.

How to integrate data mining algorithms, especially Deep Learning algorithms, into cognitive maintenance systems (sensor allocation; data collection, cleaning and transmission; maintenance management and service) will be a hot topic and a key success factor for industries pursuing the prospects of Industry 4.0 in the future.

References

1. Wang, K., Li, Z., Braaten, J., Yu, Q.: Interpretation and compensation of backlash error data in machine centers for intelligent predictive maintenance using ANNs. Adv. Manuf. 3(2), 97–104 (2015)
2. Wang, K.: Key technologies in intelligent predictive maintenance (IPdM): a framework of intelligent faults diagnosis and prognosis system (IFDaPS). Adv. Mater. Res. 1039, 490–505 (2014)
3. Zhang, Z., Wang, K.: Wind turbine fault detection based on SCADA data analysis using ANN. Adv. Manuf. 2(1), 70–78 (2014)
4. Wang, Y., Ma, H., Yang, J., Wang, K.: Industry 4.0: a way from mass customization to mass personalization production. Adv. Manuf. 5(4), 311–320 (2017)
5. Li, Z., Wang, Y., Wang, K.: A data-driven method based on deep belief networks for backlash error prediction in machining centers. J. Intell. Manuf. (2017)
6. Li, Z., Wang, Y., Wang, K.: Intelligent predictive maintenance for fault diagnosis and prognosis in machine centers: Industry 4.0 scenario. Adv. Manuf. 5(4), 377–387 (2017)

Decision-Making and Supplier Trust

Abbie Buchan and Yi Wang

The School of Business, Plymouth University, Plymouth, UK
[email protected]

Abstract. This report highlights a problem in supplier management decision-making and shows how such decisions can affect an organisation and its supply chains. It underlines that there are necessary steps and theories to consider when making important decisions, and that specific models can be applied to assist them. Decision-making is a daily procedure for management, who make decisions every day concerning many aspects of their departments; people are said to recognise a decision as wrong only on reflection. Although there are strategic models that can help aid these decisions, not every decision made is successful, and not every decision-maker knows to follow these models for guidance.

Keywords: Tesco · Decision making · Scoring model · Business decision

1 Introduction

Tesco's 2013 horse meat scandal highlighted a supplier management decision-making problem: which of the suppliers within its supply chain can Tesco trust [1]? Tesco was very publicly caught out by the Ministry of Agriculture for advertising that its Everyday Value frozen beef burgers and lasagne contained 100% beef when they contained traces of horse meat DNA, supplied by its trusted, chosen suppliers [2]. Because Tesco provides such a wide range of beef products to its consumers, it has a wide range of suppliers around the UK and Ireland. Having suppliers so widely spread means Tesco has a long supply chain, which allows long-term supply relationships to be built with many meat suppliers [3]. However, this type of supply chain requires transparency, communication and trust, which Tesco's suppliers did not provide. The scandal made public that Tesco's supplier management had made the wrong decisions when trusting and picking suppliers for its meat, as those suppliers failed to follow guidelines, such as the standards on food safety and quality set by Tesco [4]. This management decision-making problem left Tesco with unreliable suppliers and unreliable Tesco Value products sold to consumers throughout the UK.

Many decisions would have needed to be made by Tesco's supplier management, such as setting and implementing guidelines for Tesco's supply chain [5]. However, one of the main decisions regarding their

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 401–405, 2019.


A. Buchan and Y. Wang

supply chain is developing the criteria by which suppliers are chosen and the minimum requirements for becoming one of Tesco's beef suppliers [6]. Büyüközkan [7] makes the point that the success of a supply chain depends heavily on its suppliers, so the supplier selection problem requires major research beforehand. Tesco's supplier management failed to check its suppliers and to ensure they were meeting Tesco's requirements against the criteria that had been set, such as sourcing all meat and ingredients only from the UK and Ireland [1]. This supports the view that Tesco trusted its suppliers to meet the criteria without following through with thorough quality checks and specific guidelines. As a result of these decisions, the level of uncertainty and risk was heightened. Risk needs to be pre-analysed to create pre-loss objectives [8], but this was not done, as Tesco was unaware of its suppliers sourcing from outside the UK and Ireland. This made it very difficult for Tesco to prepare for the losses that followed, which affected the organisation widely.

2 Solution

Supplier management must consider applying more in-depth criteria against which to assess potential suppliers before working with them, to ensure complete supplier trust and to rebuild both its own trust and that of its consumers [9]. It is clear that Tesco has suffered extensively from this particular decision-making problem within its supply chain [10], creating the need for a detailed solution that ensures suppliers are fully aware of their duties.

Multiple-criteria decision-making is said to be the selection of the best from a set of alternatives, each of which is evaluated against multiple criteria [11]. This leads to the scoring model, said to be the easiest way to identify the best decision alternative in a multi-criteria decision problem (Moore 1969). The model allows quantitative and qualitative factors to be taken into consideration in the final decision [12], and comparing scores across criteria lets the decision maker (DM) see why a particular alternative has the highest score and is therefore the best decision [13]. The steps are: list the decision criteria; assign a weight to each criterion; rate how well each decision alternative satisfies each criterion; compute the score for each alternative; and finally rank the alternatives from highest to lowest score [14]. The fourth step, computing the score, allows a mathematical model to be constructed to give the DM a numerical ranking: Sj = Σi wi rij, where Sj is the score of decision alternative j, wi the weight of criterion i, and rij the rating of alternative j on criterion i [13]. The DM can then apply this against the criteria that will influence the selection of alternatives. Yoon and Hwang [15] note that the DM gains a thorough understanding of the functional relationships among the components when sufficient data exist to establish a statistical relationship: the more information the DM can obtain, the easier it is to create a useful scoring model to aid the final decision.
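The steps above can be sketched in a few lines of code. The criteria, weights and ratings below are invented purely for illustration; they are not Tesco's actual supplier criteria:

```python
# An illustrative scoring model for supplier selection, following
# S_j = sum_i (w_i * r_ij). Criteria, weights and ratings are hypothetical.

def score(weights, ratings):
    """Weighted score of one alternative: sum of weight * rating per criterion."""
    return sum(w * r for w, r in zip(weights, ratings))

def rank_suppliers(weights, suppliers):
    """Return (name, score) pairs sorted from best to worst."""
    scored = [(name, score(weights, ratings)) for name, ratings in suppliers.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Criteria: traceability, food-safety record, price competitiveness.
weights = [0.5, 0.3, 0.2]
suppliers = {
    "Supplier A": [9, 8, 5],  # strong traceability, weaker on price
    "Supplier B": [5, 6, 9],  # cheap but weaker traceability
}
print(rank_suppliers(weights, suppliers))
```

Here the heavier weight on traceability, the criterion at the heart of the horse meat scandal, is what lifts Supplier A above the cheaper Supplier B, illustrating how the weights encode management's priorities.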



The scoring model aids multi-criteria decisions, but there is always an element of risk to consider. Awati [16] makes the point that the weighting of criteria and the rating of decision alternatives are highly subjective and do not account for cognitive bias. The scoring model can therefore be used in decision-making, but there are also risks that could affect the organisation, depending on who makes the final decision and under what circumstances they interpret the data from the model.

3 Critical Evaluation of the Solution One of the scoring model’s main strengths is that it is relatively quick to compose and carry out, thus, making it less time consuming and more efficient for the DM [12]. This allows supplier management to gather all of their data and create the model without having to set aside a large amount of time. Therefore, allowing management to make numerous important decisions, this will increase management’s level of efficiency and productivity. Hence, this strength accommodates for the need of quick decisions which allows management to cope with demands such as the market, management/employee decisions and organisational decisions which will allow for overall success [17]. Having this high level of productivity will also aid the organisation in increasing their profitability because management will be able to process numerous decisions, these decisions could increase their profits as they are able to analyse more than a handful of them. Another strength associated with the scoring model is that it is very specific data which has been gathered, thus, allowing the DM to follow the simplistic steps to compose their scoring model. Jansma [18] highlighted that the scoring model allows organisations to individually identify key criteria, therefore, allowing the DM to asses all data very specifically which will ensure that the final decision was the best decision which could have been made for management and the organisation. The scoring model supports many organisations as they are all different dependant on their goals and objectives. This shows that the scoring model can be used in a wide range of businesses and their management decision-making process because the model is very versatile and doesn’t need a specific problem in order to be carried out. An additional strength is that the model ensures that the decision-making process is more consistent. 
Having this consistency means the decision process becomes almost automated, ensuring that the same methodology is used in each decision [19]. Embedding this process within the core of management creates a routine: once an important decision needs to be made, this model will be used, so everyone is aware of the data and criteria needed before a decision can be made. The first limitation is that the scoring model is often thought of as considerably less accurate and reliable in its ability to process data than other approaches and models [12]. This could lead management to explore other approaches, as the scoring model may not be seen as the most reliable model to use. The potential for unreliability may expose management to elements of risk which were not accounted for before choosing the scoring model. Having these elements of risk could mean management wouldn't have created


A. Buchan and Y. Wang

pre-loss and post-loss objectives [8], which could jeopardise the survival of the organisation and increase societal disengagement. Reliability and risk are fundamentals that management want to control as far as they can; without that control, they are more likely to encounter negatives whilst using the scoring model in decision-making. A further limitation is that management can inherit an ethos of calculating numbers instead of promoting critical thinking [20]. Management and their teams may get caught up in the use of the scoring model and forget to evaluate their own knowledge beforehand (refer to appendix 4), eliminating the possibility that they could carry out their decision-making without the need for the scoring model.
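The scoring-model procedure discussed above (listing the decision criteria, assigning each a weight, rating each alternative, and ranking the weighted totals) can be sketched in a few lines of Python. The criteria, weights and supplier ratings below are hypothetical illustrations, not data from the Tesco case.

```python
# Minimal weighted-scoring sketch. Criteria, weights and supplier ratings
# are hypothetical; a real DM would substitute their own.

def weighted_score(ratings, weights):
    """Weighted sum of criterion ratings; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * ratings[c] for c in weights)

weights = {"quality": 0.4, "price": 0.3, "traceability": 0.3}
suppliers = {
    "Supplier A": {"quality": 8, "price": 6, "traceability": 9},
    "Supplier B": {"quality": 7, "price": 9, "traceability": 5},
}
scores = {name: weighted_score(r, weights) for name, r in suppliers.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

Ranking the totals also makes the subjectivity that Awati warns about explicit: changing the weights changes the ranking, which is exactly the sensitivity a DM should test before committing to a supplier.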

4 Conclusion

Tesco's decision-making problem lies in the amount of trust they give to the suppliers in their supply chain without checking that those suppliers are meeting Tesco's policies and criteria. These high levels of trust initially left Tesco unaware of its part in the horse meat scandal of 2013, because its suppliers were not supplying 100% beef products within Tesco's value range. The scoring model is the chosen model to incorporate into the decision-making process to ensure that no such supplier trust issue occurs again. The scoring model follows specific steps, such as listing the decision-making criteria and assigning a weight to each criterion, allowing the DM to construct a specific scoring model using quantitative and qualitative data. The scoring model has both strengths and limitations when used by the DM. Strengths include the model being efficient and not very time-consuming to construct, increasing management's productivity. On the other hand, the lack of an explicit model structure means that management could waste time constructing the model rather than analysing their data. Although the strengths and limitations are balanced, the limitations could outweigh the strengths with regard to the survival and profitability of the organisation if they are not taken into consideration or the risks have not been identified. Overall, the scoring model is a good aid for a DM within supplier management, but there is a gap in the literature concerning a core structure for the scoring model, which meant it was not possible to include a diagram of a basic scoring model to assist management decision-making.

References

1. Madichie, N., Yamoah, A.: Revisiting the European horsemeat scandal: the role of power asymmetry in the food supply chain crisis. Thunderbird Int. Bus. Rev. 59(6), 664–672 (2016)
2. Ministry of Agriculture: Food Engineering & Ingredients 25(5), 13 (2000)

Decision-Making and Supplier Trust


3. Lindgreen, A., Hingley, M.: The impact of food safety and animal welfare policies on supply chain management: the case of the Tesco meat supply chain. Br. Food J. 105(6), 328–349 (2003)
4. Trienekens, J., Zuurbier, P.: Quality and safety standards in the food industry, developments and challenges. Int. J. Prod. Econ. 113(1), 107–122 (2008)
5. Ellram, L., Siferd, S.: Total cost of ownership: a key concept in strategic cost management decisions. J. Bus. Logist. 19(1), 55–84 (1998)
6. Business gateway: Choosing and managing suppliers (2014). business-guides/grow-and-improve/suppliers-and-outsourcing/choosing-and-managing-suppliers. Accessed 15 Apr 2018
7. Büyüközkan, G.: An integrated fuzzy multi-criteria group decision-making approach for green supplier evaluation. Int. J. Prod. Res. 50(11), 2892–2909 (2011)
8. Haimes, Y.: Risk Modelling, Assessment, and Management. Wiley, New York (2015)
9. Ho, W., Xu, X., Dey, P.: Multi-criteria decision-making approaches for supplier evaluation and selection. Eur. J. Oper. Res. 202(1), 16–24 (2010)
10. Masud, A., Ravindran, A.: Multiple criteria decision making. Oper. Res. Manag. Sci. 5, 1–35 (2008)
11. Xu, Z.: Multi-person multi-attribute decision making models under intuitionistic environment. Fuzzy Optim. Decis. Making 6(3), 221–236 (2007)
12. Sproles, G., Kendall, E.: A methodology for profiling consumers' decision-making styles. J. Consum. Aff. 20(2), 267–278 (1986)
13. Yoon, P., Hwang, C.: Multiple Attribute Decision Making: An Introduction. Sage Publications, London (1995)
14. Awati, K.: On the limitations of scoring method for risk analysis (2009). https://eight2late. Accessed 24 Apr 2018
15. Miller, C., Ireland, D.: Intuition in strategic decision making: friend or foe in the fast-paced 21st century? Acad. Manag. Perspect. 19(1), 45–50 (2005)
16. Jansma, M.: How scoring helps your business (2018). blog/what-are-the-advantages-scoring-models/. Accessed 25 Apr 2018
17. Roberts, N.: 5 reasons to use risk scoring models for your organization (2015). Accessed 25 Apr 2018
18. Fox, J.: Frequentist vs. Bayesian statistics: resources to help you choose (2011). https:// Accessed 25 Apr 2018

Groups Decision Making Under Uncertain Conditions in Relation—A Volkswagen Case Study

Arran Roddy and Yi Wang(&)

The School of Business, Plymouth University, Plymouth, UK
[email protected]

Abstract. This report analyses the role of management in relation to the functions of decision-making, conceptualising the necessary process steps and exploring the scientific underpinning of decision-making techniques. Using the Volkswagen Group (VW) as a case study, its decision-making approaches are analysed in relation to the 2015 corporate deception scandal. The initial part of the report engages existing literature, using frameworks and governance to identify ethical concerns of the international economy and their impact on VW. Discussion of decision trees then offers a measure for handling uncertainty in VW's decision-making, engaging with whether the production of 'greener' vehicles will assist in recovering lost market share. The report closes with a critical analysis of decision trees.

Keywords: Volkswagen · Decision making · Critical analysis

1 Introduction

In 2015, Volkswagen was found to have rigged emission tests so that its diesel vehicles seemingly emitted fewer pollutants than they did [1]. Actual VW emissions were up to forty times higher than the cheated test results [2]. Hou et al. [3] concluded that the scandal posed a significant threat to people's health internationally. Over six years, VW diesels injected 36.7 million kg of nitrogen oxides into the environment [4]. Nitrogen oxide is a primary component of smog and is linked to heart disease, bronchitis and premature death [5, 6]. The excess emissions caused by VW diesel cars have been estimated to cost 45,000 disability-adjusted life years [7]. Finally, the abundance of nitrogen oxide released contributes to acid rain and loss of habitat, adding to natural resource degradation [8]. This resonates with the veil of ignorance, an ethical concept under which organisations compromise future generations. However, the potential production of electric cars by VW presents an opportunity to reduce emissions whilst maintaining economic objectives [9], coinciding with the Brundtland Commission's view of sustainable development as 'founding durable prosperity for future humanity' [10]. Therefore, VW's management must decide whether to manufacture less hostile products such as e-vehicles or continue with pollutant diesel vehicles.

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 406–410, 2019.



It is documented that the deception stemmed from a lack of technology, prompting Volkswagen to cheat in emission tests [11]. CEO Martin Winterkorn, who resigned in 2015, stated he was not aware of the precise number of engineers involved, implying that it was not a corporate decision [12]. Kirkley [13] suggests that examining firm decisions can be a hindrance to understanding the underlying processes, and individual decisions can therefore be difficult to identify. However, Painter and Martins [14] uncovered that VW engineers may already have known about the higher emissions prior to 2015. This immediately highlights the decision-making techniques within Volkswagen's corporate culture. Furthermore, the assembly line is forced to work under critically centralised conditions [15], operating on a 'need to know' basis, which suggests management maintain a locus of control [16]. Upon the revelation of the scandal, the company's stock price dropped by over 30% (Fig. 1), highlighting the important role of collective reputation in economics and the social sciences [17]. The emission scandal also wiped billions from the value of Volkswagen [18], whilst competitors benefited as demand for VW vehicles dropped from 40% to 30% [19]. Therefore, in order to cope with the crisis, VW announced that employee bonuses would be reduced substantially [20], leading to a considerable diminution of variable remuneration.

Fig. 1. End-of-day stock price of the Volkswagen Group [5, p. 7].

The Volkswagen Group faces re-establishing credibility with its direct and indirect stakeholders after its unethical behaviour. The literature has documented the deterioration of positive public sentiment towards VW, with competitors gaining market share as a result. VW's management must assess solutions to decide how to protect global health and recapture public trust.

2 Solution

Decision trees (DT) are hierarchical structures that classify data based on attributes of the underlying classes [2]. A DT model provides a graphical representation of possible statistical solutions based upon a classification algorithm [22]. It can be further described as a sequential process design, calculating the probability of each result to identify the expected value, EV = Σ(probability × outcome), of a decision [23].


A. Roddy and Y. Wang

Probabilities within decision methodology reflect the maximum likelihood of a result [24]. DTs are thus a scientific data-mining technique for understanding how to reduce the uncertainty of a firm's decision [25]. The choices for VW are complex, with dependent exogenous cultural input and output variables not being directly comparable [26]. This results from a single attribute function becoming partitioned according to its attribute values. Maintaining a holistic perspective of all subgroups in the DT therefore improves the accuracy of expected-value prediction relative to regression-based approaches [22].
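The expected-value calculation EV = Σ(probability × outcome) can be made concrete with a toy tree. The two branches, their probabilities and the payoffs below are invented purely for illustration; they are not estimates for VW.

```python
# Expected value of each decision branch: EV = sum(p_i * outcome_i).
# Probabilities and payoffs are invented for illustration only.

def expected_value(branch):
    """branch: list of (probability, outcome) pairs for one decision."""
    assert abs(sum(p for p, _ in branch) - 1.0) < 1e-9
    return sum(p * outcome for p, outcome in branch)

decision_tree = {
    # (probability, payoff in arbitrary units)
    "launch e-vehicles": [(0.6, 120.0), (0.4, -30.0)],
    "continue diesel":   [(0.3, 80.0), (0.7, -50.0)],
}
evs = {choice: expected_value(b) for choice, b in decision_tree.items()}
best_choice = max(evs, key=evs.get)
```

Under these invented numbers the e-vehicle branch has EV 60 against −11 for diesel, which is the kind of comparison the sequential DT process is meant to surface.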

3 Discussion

However, the classification algorithm of a top-down decision tree can become demanding and complex. Calculating information entropy with logarithmic algorithms demands time and grows the tree [14]. For instance, an increase in tree depth, from root to end node, jeopardises the comprehensibility of the model [20], as with more decisions comes less accuracy in the expected results. To counter this, prematurely stopping tree growth when a minimum level of impurity is reached ensures generalisability of results [27]. Nonetheless, stopping criteria can degrade a tree's performance [18]. Pruning therefore allows a tree to first overfit the data set and then be trimmed by removing subsets. Esposito et al. [21] indicate that various pruning methods lead to over-pruning, and thus to smaller and inaccurate trees. Therefore, to achieve optimal results that can be generalised, VW need to integrate a stopping mechanism for when the splitting improvement of a node becomes too small. Berry et al. [28] state that the target proportion of an end node needs to be between 0.25 and 1.00% of the data set to advance the tree and ascertain a degree of reliability. However, if the stopping criteria are too tight, the tree will be underfit [23]. Decision trees work best when more information is available. When a tree is constructed with missing values, it is difficult to make detailed adjustments without compromising the tree's integrity; as more information becomes known, slight modifications of a variable can lead to contrasting data sets. Therefore, variables must be concrete for the results to be conclusive [8]. Sun and Hu [29] argue that effective attribute selection can significantly improve both the time needed to construct a tree and its interpretability, further stating that the "rationality of the relationship among the selected attributes for the tree model improve the sequential flow".
They conclude that the purpose is to avoid a heuristic bias, improving information gain and tree performance. For organisations such as VW, this data-mining technique will be costly: complex trees require the individuals creating them to have advanced knowledge of quantitative statistical analysis, and the algorithms contained in decision trees are complicated and require expertise, so training will need to be provided [4]. The goal of decision trees is to conclude the EV of a nominal or numerical target variable based upon several preceding input variables, an inductive inference method [22] that has become an essential step in the process of knowledge discovery [14]. For VW, a complex tree is probable, making it crucial that attribute selection is without



bias. Furthermore, missing values must be minimal to avoid noise. This will help VW analyse data and convert it into meaningful information in deciding whether to proceed with the focus on diesel vehicles or with ethically approved electric motors [30].
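The quantities the discussion turns on, information entropy, the gain from a split, and a minimum-leaf stopping rule in the 0.25–1.00% range suggested by Berry et al. [28], can be sketched as follows; the toy class labels are illustrative only.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, subsets):
    """Entropy reduction achieved by splitting `parent` into `subsets`."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in subsets)

def should_stop(node_labels, dataset_size, min_leaf_fraction=0.005):
    """Stop splitting when a node is pure or holds under ~0.5% of the data."""
    return entropy(node_labels) == 0.0 or len(node_labels) < min_leaf_fraction * dataset_size

parent = ["defect"] * 5 + ["ok"] * 5
split = [["defect"] * 4 + ["ok"], ["ok"] * 4 + ["defect"]]
gain = information_gain(parent, split)   # roughly 0.278 bits
```

A split is only worth taking when its gain exceeds some minimum and `should_stop` is false for the children; tightening `min_leaf_fraction` is precisely the underfitting risk noted above.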

4 Conclusion

In this report the Volkswagen Group has been examined as a case study of uncertainty in management decision-making. The evaluation of decision trees has illustrated that criteria must be implemented to avoid complexity and noise. The critical analysis suggests that features of the methodology have distinct advantages and disadvantages. Nonetheless, decision trees represent a popular measure for managers, with the need to manage risk and uncertainty becoming paramount. Furthermore, the classification algorithm of decision trees represents a method to correctly classify outcomes and diminish uncertainty, supported further by VW obtaining higher levels of information input to benefit feature selection and minimise unknown values. The model is particularly useful to management as it evaluates potential responses of the consumer population under various strategies, thus satisfying the constraints and improving accuracy. A potential improvement for decision trees in future research would be to define an attribute selection algorithm for multi-class data sets that consumes less time and space.

References

1. Tarsa, K.: Won't get fooled again: why VW's emissions deception is illegal in Europe and how to improve the EU's auto regulatory system. Boston Coll. Int. Comp. Law Rev. 40, 315–341 (2017)
2. Burki, T.: Diesel cars and health: the Volkswagen emissions scandal. Lancet Respir. Med. 3(11), 838–839 (2015)
3. Hou, L., Zhang, K., Luthin, M., Baccarelli, A.: Public health impact and economic costs of Volkswagen's lack of compliance with the United States' emission standards. Int. J. Environ. Res. Public Health 13(9), 891 (2016)
4. Chossière, G., Malina, R., Ashok, A., Dedoussi, I., Eastham, S., Speth, R., Barrett, S.: Public health impacts of excess NOx emissions from Volkswagen diesel passenger vehicles in Germany. Environ. Res. Lett. 12(3) (2017)
5. Mansouri, N.: A case study of Volkswagen unethical practice in diesel emission test. Int. J. Sci. Eng. Appl. 5(4), 211–216 (2016)
6. Holland, S., Mansur, E., Muller, N., Yates, A.: Response to comment on "damages and expected deaths due to excess NOx emissions from 2009–2015 Volkswagen diesel vehicles". Environ. Sci. Technol. 50(7), 4137–4138 (2016)
7. Oldenkamp, O., Van Zelm, R., Huijbregts, M.: Valuing the human health damage caused by the fraud of Volkswagen. Environ. Pollut. 212, 121–127 (2016)
8. Jonson, J., Borken-Kleefeld, J., Simpson, D., Nyíri, A., Posch, M., Heyes, C.: Impact of excess NOx emissions from diesel cars on air quality, public health and eutrophication in Europe. Environ. Res. Lett. 12(9) (2017)
9. Volkswagen: Strategy (2018). Accessed 22 Apr 2018



10. World Commission on Environment and Development: Our Common Future. Oxford University Press, Oxford (1987)
11. Bovens, L.: The ethics of dieselgate. Midwest Stud. Philos. 40(1), 262–283 (2016)
12. Trope, R., Ressler, E.: Mettle fatigue: VW's single-point-of-failure ethics. IEEE Secur. Priv. 14(1), 12–30 (2016)
13. Kirkley, W.: Creating ventures: decision factors in new venture creation. Asia Pac. J. Innov. Entrep. 10(1), 151–167 (2016)
14. Painter, C., Martins, J.: Organisational communication management during the Volkswagen diesel emissions scandal: a hermeneutic study in attribution, crisis management, and information orientation. Knowl. Process. Manag. 24(3), 204–218 (2017)
15. Mačaitytė, I., Virbašiūtė, G.: Volkswagen emission scandal and corporate social responsibility – a case study. Bus. Ethics Leadersh. 2(1), 6–13 (2018)
16. Arora, J.: Corporate governance: a farce at Volkswagen? Case J. 13(6), 685–703 (2017)
17. Tirole, J.: A theory of collective reputations (with applications to the persistence of corruption and to firm quality). Rev. Econ. Stud. 63(1), 1 (1996)
18. Jung, J., Park, S.: Case study: Volkswagen's diesel emissions scandal. Thunderbird Int. Bus. Rev. 59(1), 127–137 (2016)
19. Georgeevski, B., Al Qudah, A.: The effect of the Volkswagen scandal (a comparative case study). Res. J. Financ. Account. 7(2), 55–57 (2016)
20. Castille, C., Fultz, A.: How does collaborative cheating emerge? A case study of the Volkswagen emissions scandal. In: Proceedings of the 51st Hawaii International Conference on System Sciences, pp. 94–103 (2018)
21. Chopra, T., Acharya, J.: Decision tree based approach for fault diagnosis in process control system. Int. J. Comput. Commun. Instrum. Eng. 4(1), 162–165 (2017)
22. Wang, Y., Li, Y., Song, Y., Rong, X., Zhang, S.: Improvement of ID3 algorithm based on simplified information entropy and coordination degree. Algorithms 10(4), 124 (2017)
23. Kamiński, B., Jakubczyk, M., Szufel, P.: A framework for sensitivity analysis of decision trees. Cent. Eur. J. Oper. Res. 26(1), 135–159 (2017)
24. McIver, D., Friedl, M.: Using prior probabilities in decision-tree classification of remotely sensed data. Remote Sens. Environ. 81(2–3), 253–261 (2002)
25. Rojas, W., Meneses, C.: Graphical representation and exploratory visualization for decision trees in the KDD process. Procedia Soc. Behav. Sci. 73, 136–144 (2013)
26. Venkatasubramaniam, A., Wolfson, J., Mitchell, N., Barnes, T., Jaka, M., French, S.: Decision trees in epidemiological research. Emerg. Themes Epidemiol. 14(1) (2017)
27. Bohanec, M., Bratko, I.: Trading accuracy for simplicity in decision trees. Mach. Learn. 15(3), 223–250 (1994)
28. Berry, M., Linoff, G.: Mastering Data Mining. Wiley Computer Publications, New York (2000)
29. Sun, H., Hu, X.: Attribute selection for decision tree learning with class constraints. Chemometr. Intell. Lab. Syst. 163, 16–23 (2017)
30. Farid, D., Rahman, C.: Assigning weights to training instances increases classification accuracy. Int. J. Data Min. Knowl. Manag. Process. 3(1), 13–25 (2013)

Health Detection System for Skyscrapers

Lili Kou, Xiaojun Jiang, and Qin Qin(&)

Shanghai Polytechnic University, Measurement & Control Technology and Equipment, Jinhai Road No. 2360, Pudong New Area, Shanghai, China
{llkou,xjjiang,qinqin}

Abstract. This work presents a health detection system for skyscrapers based on single-chip microcontrollers. The design is divided into two hardware systems: the core board and the sensor board. The core chip of the sensor system is an STC89-series chip, equipped with MEMS sensors, fixed at the top of the skyscraper to detect the maximum shaking of the building, and it sends vibration signals wirelessly to the core board. The core board is based on an STM32-series chip and receives the vibration signals wirelessly in real time. Tests have shown that the system can detect a minimum acceleration of 0.01 g and a minimum angle of 0.01°. Connected to the Internet of Things, the health detection system provides a good solution for building monitoring.

Keywords: Health detection · Internet of Things · MEMS sensors

1 Introduction

The process of modern urbanization has brought the rapid rise of skyscrapers. These tall buildings have become symbols of cities and enterprises. By 2017, China had 802 buildings with a height of more than 152 m and six of the world's ten tallest buildings; by 2022, China will own 1318 skyscrapers. In pursuit of altitude, high-rise buildings are also pursuing intelligence. The scope of intellectualization includes not only security and home automation but also building health monitoring [1, 2]. According to China's technical specification for concrete structures of tall buildings, for buildings not more than 150 m tall the limiting ratio of top displacement to height ranges from 1/550 to 1/1000 depending on the structural system, while for buildings of 250 m or more the ratio shall not exceed 1/500. In addition, the pulsation and torsional force measurements of buildings are important indicators for building health monitoring [3]. However, the health monitoring of high-rise buildings faces difficult problems such as sensor placement, long-distance transmission and multi-point data collection [4]. The wireless Internet of Things makes it possible to solve these problems [5–7]. This paper uses short-distance and long-distance wireless devices to collect and transmit the data of several simulated measurement points, and demonstrates the feasibility of the scheme. The rest of this paper is organized as follows: Sect. 2 illustrates the design method; the testing of the system is evaluated in Sect. 3; finally, conclusions are given in Sect. 4.
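As a worked illustration, the displacement limits quoted above translate directly into an allowable top displacement for a given building height. The function and the example heights are illustrative only; the actual specification differentiates limits by structural system.

```python
# Allowable top displacement implied by a displacement-to-height ratio limit.
# Illustrative only; real limits depend on the structural system.

def allowable_displacement_mm(height_m, ratio_limit):
    """Convert a building height (m) and ratio limit into millimetres."""
    return height_m * 1000.0 * ratio_limit

# A 150 m building with a 1/550 limit allows roughly 273 mm at the top;
# a 250 m building held to 1/500 allows 500 mm.
limit_150 = allowable_displacement_mm(150, 1 / 550)
limit_250 = allowable_displacement_mm(250, 1 / 500)
```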

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 411–417, 2019.


L. Kou et al.

2 System Design

To meet the requirements of multi-sensor acquisition and transmission of signals, we adopted a star-shaped network; the structure is shown in Fig. 1.

Fig. 1. Architecture of the wireless detection system: the host computer connects through a long-distance wireless relay module to the central MCU control unit, which communicates via short-range and long-range wireless modules with the corresponding short-range and long-range sensors.

The system is composed of a multi-sensor acquisition unit, data processing, and a wireless transmission and reception network. Different types of wireless devices are used depending on the transmission distance. Details are as follows:

2.1 Sensor Node Group

The monitoring content of high-rise buildings is mainly divided into two categories: load and structural response. The load includes the weights in the building, wind, temperature, earthquakes, etc.; the structural response includes strain, acceleration, displacement, and the like [8]. The structural response signal can be detected by acceleration-type (tilt, sway) sensors or deformation-type (stress) sensors [9, 10]. The acceleration sensor mainly detects the sway velocity and inclination angle of the building; sensor selection is based chiefly on the range and accuracy of measurement. The deformation sensor mainly detects and prevents the deformation and fracture of load-bearing parts, and is mainly arranged in the main bearing beams and columns of frame-type buildings. This paper focuses on detecting the first-order response sloshing and the irregular vibration signals of buildings. The top of a high-rise building sloshes slowly under the influence of wind pressure and the vibration frequency of the building itself; the amplitude of the sloshing is large, but the acceleration is small. Therefore, a sensor with a relatively large angular range, such as the MPU6050, is needed. For the irregular vibration caused by movement of the earth's crust, traffic, and the movement of objects in the building, the amplitude of the displacement is relatively small and the frequency irregular, so a high-sensitivity accelerometer is needed; the SD1221L sensor is used to detect such signals.




2.2 Determine the Sampling Frequency

For the building's own first-order response deformation (tilting sway) signal, it may take several days to complete a measurement cycle, so the sampling period is above 1 s. For measuring irregular vibration signals, the sampling rate needs to be above 1 kHz.

2.3 Signal Filtering

In wireless detection of a building, noise, air flow and interference signals not originating in the tested structure directly affect the analysis and processing of the signals. Therefore, the detected vibration signal must be filtered. In addition to the filter circuit designed into the hardware, a digital filtering algorithm is required at the signal front end, i.e. in the program of the MCU at the sensor node. A Kalman filter is added specifically for the MPU6050, and a Butterworth low-pass filter unit is added for the SD1221L sensor.

2.4 Wireless Transmission Network Module Selection

One of the big problems with wireless transmission is the high bit error rate [8–10], so a module with strong interference resistance is needed. At the same time, the module needs to work at a suitable transmission rate, because for a given chip, increasing the transmission rate often leads to a higher error rate. The wireless transmission module is selected according to the transmission distance. For sensors on the same floor, an RF module with small power and short transmission distance can be selected; correspondingly, when the distance between the relay and the central server reaches several hundred metres, a chip with strong power and strong anti-interference performance is required. For these design requirements, we use two wireless transmission modules. One is the NRF24L01, which is suitable for transmission distances of about 15 m; it is mounted on an STC89C52-series MCU to transmit and receive data for communication between the sensor and the relay. The other is the STR-15, used for communication between the relay and the server. The STR-15 wireless transmission chip, equipped with a SANT-302 rod antenna, can realize point-to-multipoint wireless communication, includes a serial communication protocol, provides 8 channels, and its transmission distance can reach more than 1000 m.

2.5 Analysis of Information Transmission Structure

Based on the sensor types and wireless transmission characteristics, this paper adopts a two-layer wireless sensor network topology. The network consists of common sensor nodes, local master nodes (LSM, local site master) and a central master node (CSM, central site master). The sensor nodes communicate with their local LSM to form the low-level network; communication between LSMs and between the LSMs and the CSM forms the high-level network. The sensor nodes send the collected information to the LSM through the lower-layer network, and the LSM sends the data


L. Kou et al.

to the CSM through the upper-layer network. In this paper, the CSM uses an STM32-series MCU to complete data storage and forwarding.

2.6 Program Design


In the programming, the location of the node in the network and the type of wireless transmission module are considered. The program flow chart of the CSM is shown in Fig. 2.


Fig. 2. Program flow chart of the CSM: the MCU waits for the PC's receive-data command, performs a verification judgment, cyclically receives data, applies a CRC check, and sends packets to the PC.
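The verification and CRC steps in Fig. 2 can be sketched on the host side. The 16-bit CRC variant (CRC-16/CCITT-FALSE, polynomial 0x1021, initial value 0xFFFF) and the 2-byte big-endian trailer are assumptions made for illustration; the paper does not specify which check the firmware uses.

```python
# Host-side sketch of the packet check in the CSM flow of Fig. 2.
# The CRC variant and framing are assumptions, not taken from the paper.

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def make_packet(payload: bytes) -> bytes:
    """Append the CRC as a 2-byte big-endian trailer."""
    return payload + crc16_ccitt(payload).to_bytes(2, "big")

def verify_packet(packet: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, trailer = packet[:-2], int.from_bytes(packet[-2:], "big")
    return crc16_ccitt(payload) == trailer
```

A corrupted byte anywhere in the payload or trailer makes `verify_packet` return False, which is the point at which the CSM would discard the packet rather than forward it to the PC.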

3 Measurement and Results

We use a swaying platform to simulate the sloshing of the building, fixing the MPU6050 on the central axis of the platform so that the sway angle is between −5° and +5°, with one cycle taking about 40 min. For testing irregular noise, the SD1221L sensor was used; an ultrasonic generator simulates irregular noise with a vibration frequency of 1 kHz and a time period of 5 s.

Figure 3 shows the measurement results of the MPU6050 signal after Kalman filtering. A sloshing cycle is completed in 40 min, the sway angle is in the range of −5° to +5°, a minimum vibration of 0.01° can be measured, and the noise is small. There is a deviation of about 0.3° at the start of the test, but this does not affect the overall measurement results.

Fig. 3. Simulated building angle measurement. The abscissa is time (measurement period 40 min); the ordinate is the sway angle, which lies approximately between −5° and +5°.

A part of the measurement data of the irregular noise is shown in Fig. 4: the abscissa is time, and the ordinate is acceleration, which varies between −0.5 g and +0.5 g. The measurement data is a mixture of multiple frequency signals.

Fig. 4. Simulated building irregular signal measurement. The abscissa is time (ms); the ordinate is acceleration (g).
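The Kalman filtering applied to the MPU6050 angle signal (Sect. 2.3) can be illustrated with a minimal scalar filter. The process- and measurement-noise variances below are assumptions chosen for the sketch, not the parameters used in the firmware.

```python
# Minimal scalar Kalman filter for smoothing angle readings.
# q and r (process / measurement noise variances) are illustrative values.

class ScalarKalman:
    def __init__(self, q=1e-4, r=0.04, x0=0.0, p0=1.0):
        self.q, self.r = q, r        # process / measurement noise variances
        self.x, self.p = x0, p0      # state estimate and its variance

    def update(self, z):
        self.p += self.q                     # predict: variance grows
        k = self.p / (self.p + self.r)       # Kalman gain
        self.x += k * (z - self.x)           # correct towards measurement z
        self.p *= 1.0 - k                    # posterior variance shrinks
        return self.x

kf = ScalarKalman()
noisy_angles = [0.00, 0.12, -0.08, 0.05, 0.02]     # simulated samples, degrees
smoothed = [kf.update(z) for z in noisy_angles]
```

Each estimate is a convex combination of the previous estimate and the new measurement, so the filtered trace stays within the range of the raw samples while its variance shrinks.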

The CPU constantly receives sensor data, and if it is not processed the data will grow indefinitely. Therefore, while analyzing and processing the data, invalid data must be continuously deleted. Combining the analysis of the two signals above: the building's first-order response signal forms a good periodicity; if there is a step change or a cliff-type signal change, the system records the signal time and the sensor position and sends out a warning. If there is no abnormality, the signal is discarded after 1 week. The data detected by the SD1221L sensor is affected by the sloshing signal. However, after a long period of recording, we determined that the amplitude of the acceleration does not exceed the threshold of −0.5 g to +0.5 g. In the field measurement, the value is smaller than 200 mg, and the sensitivity of the SD1221L is 2000 mV/g, giving a resolution of 0.01 g. If the value exceeds the



multi-threshold range, the time and position of the acceleration will be recorded. Similarly, if the data is within the threshold range, it will be deleted periodically after 1 h.
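The threshold logic described above (record time and position when a sample leaves the ±0.5 g band, discard in-range samples) can be sketched as follows. The sample tuples and sensor IDs are illustrative assumptions.

```python
# Threshold triage for the acceleration channel: keep only out-of-band
# samples (time, sensor position, value); in-band data is discarded,
# mirroring the periodic deletion described in the text.

THRESHOLD_G = 0.5

def triage(samples):
    """samples: iterable of (timestamp_s, sensor_id, acceleration_g)."""
    return [(t, sid, a) for t, sid, a in samples if abs(a) > THRESHOLD_G]

events = triage([(0.0, "S1", 0.02), (5.0, "S1", -0.63), (9.0, "S2", 0.48)])
```

Only the −0.63 g sample survives triage; it would be logged with its time and sensor position, while the in-band readings are dropped.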

4 Conclusions

In order to detect the first-order response sloshing of high-rise buildings and irregular vibration signals, two sensors, the MPU6050 and the SD1221L, are used as sensor acquisition nodes. A wireless transmission network is composed of two wireless transmission chips, the NRF24L01 and the STR-15. The first-order response vibration of the building and the irregular vibration signal are measured on the simulated measurement platform. The measured first-order response signal reflects the periodicity of the sloshing, with a vibration resolution of 0.01°. The irregular signal shows that the equipment has relatively high sensitivity, matching 0.01 g. The two wireless chips used in this paper have the characteristics of low cost, flexible operation and stable performance. This paper uses a star network structure, in which communication between relays cannot be carried out; if a relay fails, the downstream sensors cannot deliver their signals. Therefore, a mesh communication structure is needed in subsequent work.


Integrated Production Plan Scheduling for Steel Making-Continuous Casting-Hot Strip Based on SCMA

Lilan Liu(1), Pengfei Sun(1), Zenggui Gao(1) (corresponding author), and Yi Wang(2)

(1) Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shangda Road 99, Baoshan, Shanghai, China
[email protected]
(2) Business School, Plymouth University, Plymouth, UK

Abstract. This paper studies the coordinated matching problem of three key processes in steel production: steel making, continuous casting and hot strip rolling. An integrated steel making-continuous casting-hot strip rolling production plan scheduling model is built based on a packing model and a vehicle routing model. This paper proposes a new Symbiosis Coevolution Multi-objective optimal Algorithm (SCMA), which combines a genetic algorithm (GA), an artificial bee colony (ABC) algorithm and a firefly algorithm (FA), to solve the integrated plan scheduling problem. In this new algorithm, the hot strip rolling plan is scheduled first by FA, considering resource constraints, energy constraints and process constraints. Then, based on the production needs of hot strip rolling, casting plans and charge plans with no constraints are generated as the initial population, and the casting plan and charge plan are scheduled by ABC and GA considering their production constraints. The availability and feasibility of the proposed method for solving this integrated production plan scheduling problem have been verified by comparison with single-process scheduling results.

Keywords: Multiple production scheduling · Packing model · Vehicle routing model · Symbiosis coevolution multi-objective algorithm

1 Introduction

Production plan management is an important method for improving the production management level. Especially in steel enterprises, resources and energy are wasted seriously, inventories increase year by year and enterprise margins decrease year by year, so steel enterprises face a serious situation and must upgrade quickly. Therefore, research on the production plan management of steel enterprises plays an important role in enhancing the interlinks among different processes, improving production efficiency, reducing production cost and guaranteeing product quality. Steel making,

Fund Project: Financed by the Ministry of Industry and Information Technology, China Manufacturing 2025 key project, No. TC17085JH.
© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 418–431, 2019.



continuous casting and hot strip rolling are the three key processes, which are closely connected and processed in order: the upstream product is the material of the downstream process, and the whole production process runs at high temperature. In order to guarantee the continuity of production, improve the production capacity, reduce the energy consumption and reduce the inventory, the production plan must be scheduled by integrating the multiple processes. Recently, research on plan scheduling of steel making, continuous casting and hot strip rolling has achieved some success; however, most of it focuses on the scheduling of a single process. Research on the charge plan is as follows. Chai et al. [1] proposed a model considering furnace capacity and slab width, based on the shortcomings of the charge plan, and used a hybrid variable neighborhood search algorithm and an iterated local search algorithm to solve it. Karmarkar [2] proposed a one-dimensional packing model to solve this NP-hard problem. Lodi et al. [3] built a three-dimensional packing model and used a tabu search algorithm with a heuristic neighborhood to solve the model. Yang et al. [4] built a traveling salesman model minimizing the casting penalty cost, the pouring penalty cost and the unselected-charge penalty cost, and proposed a hybrid heuristic cross-entropy algorithm to solve it. Prins [5] proposed an available coding method to define a solution of the VRP and introduced a local search algorithm to improve GA; the experimental results indicated that the modified GA can solve the VRP well. An effective hybrid discrete FA was introduced by Karthikeyan and Asokan [6] for solving multi-objective flexible job shop scheduling problems. Production plan scheduling of a single process can arrange the production tasks of that process well, ensuring production quality and production efficiency.
However, it doesn’t consider the production capacity and production needs of adjacent processes, so it can’t satisfy the continuous regulation of whole production. The integration production schedule of multiple process with steel making, continuous casting and hot strip rolling is the heated point in domestic and foreign. IBM company [7] always provide IT solutions for AK Steel company and developed PDOS software of steel production plan scheduling which has a core competitive power. Zhang and Li [8] gave an integration strategy and built a satisfied constraints model based on particle swarm optimization (PSO). Ma et al. [9] proposed a two loop control strategy of model control and parameter control to solve the steel making-continuous casting-hot strip rolling integration production plan scheduling problem. This paper research the coordination matching problem of three key process of steel making, continuous casting and hot strip rolling in steel production. The steel makingcontinuous casting-hot strip rolling integration production plan scheduling model is built based on packing model and vehicle routing model. This paper proposes a new SCMA which combine GA, ABC and FA to solve this scheduling problem.

2 Mathematic Model

When scheduling the plan of a single process, we can only ensure the best schedule for that single process. We cannot ensure the production coordination of adjacent processes, because single-process scheduling only considers the production constraints and the optimization objective of the single process, lacking consideration of the production needs of the adjacent



process. In this paper, the integrated scheduling model of steel making, continuous casting and hot strip rolling is built based on research on TDPM, MVRPM and HVRPM. The integrated scheduling problem is described as follows: the number of slabs is n, and the slab hardness, slab length, slab width, slab thickness, slab type and slab weight are given. Considering the constraints of steel making, continuous casting and hot strip rolling, m_s charge plans, m_c casting plans and m_r hot strip rolling plans are scheduled at the same time. In order to describe the scheduling model, the model parameters and variables are defined as follows:

n_s: slab amount. n_zs: staple-material slab amount. n_ts: warm-up-material slab amount. n_c: number of charge plans in one casting. m_s: total number of charge plans. m_c: number of casting plans. m_r: number of hot strip rolling units. m_h: number of high-temperature slabs. i_s: slab number (i_s = 0, 1, 2, ..., n), where i_s = 0 is the virtual slab. i_c: charge number (i_c = 0, 1, 2, ..., n), where i_c = 0 is the virtual charge. w_i: weight of slab i_s. L_i: length of slab i_s. w_ic: weight of charge i_c. Q_ic: open-order weight of charge i_c. T^s_min, T^s_max: minimum and maximum furnace capacity. T^r_min, T^r_max: minimum and maximum rolling length. T^c_min, T^c_max: minimum and maximum casting charge weight. g_is: steel grade of slab i_s. w_is: width of slab i_s. T_is: hardness of slab i_s. l_is: length of slab i_s. g_ic: steel grade of charge i_c. w_ic: width of charge i_c. T_ic: thickness of charge i_c. PS_{i_s j_s}: penalty cost of slab i_s and slab j_s in one charge plan, PS_{i_s j_s} = PSw_{i_s j_s} + PSt_{i_s j_s} + PSg_{i_s j_s}, where the three terms are the penalty costs of width difference, thickness difference and steel-grade difference. PC_{i_c j_c}: penalty cost of charge i_c and charge j_c in one casting plan, PC_{i_c j_c} = PCw_{i_c j_c} + PCt_{i_c j_c} + PCg_{i_c j_c}, defined analogously. PR_{i_s j_s z}: process penalty cost of slab i_s and the adjacent slab j_s among the staple-material slabs, PR_{i_s j_s z} = Pw_{i_s j_s z} + Pt_{i_s j_s z} + Ph_{i_s j_s z}, where the three terms are the penalty costs of width difference, thickness difference and hardness difference. PR_{i_s j_s t}: process penalty cost of slab i_s and the adjacent slab j_s among the warm-up-material slabs, defined analogously. PE^z_{ij}: energy-jump penalty cost of adjacent slabs among the staple materials. PE^t_{ij}: energy-jump penalty cost of adjacent slabs among the warm-up materials. Prize: the prize for high-temperature slabs.

With the above notation, the steel making-continuous casting-hot strip rolling integrated plan scheduling problem can be formulated as follows:

S_1 = \min\left( a_1 \sum_{i=1}^{n}\sum_{j=1}^{m} P_{lj}\,(T - W_i X_{ij}) + a_2 \sum_{k=1}^{m_c}\sum_{i=0}^{n_c} (T_C - WC_i X_{Cik}) + a_3 \sum_{k=1}^{m_r}\sum_{i=0}^{n_r} (L_{\max} - L_{ik} y_{ik}) \right)   (2.1)

S_2 = \min\left( b_1 \sum_{i=1}^{n_s}\sum_{j=1}^{n_s}\sum_{k_s=1}^{m_s} PS_{i_s j_s}\, XS_{i_s j_s k_s} + b_2 \sum_{i=1}^{n_c}\sum_{j=1}^{n_c}\sum_{k_c=1}^{m_c} PC_{i_c j_c}\, XC_{i_c j_c k_c} + b_3 \sum_{i=1}^{n_r}\sum_{j=1}^{n_r}\sum_{k_r=1}^{m_r} (PR_{i_s j_s z} + PR_{i_s j_s t})\, XR_{i_s j_s k_r} \right)   (2.2)

S_3 = \min\left( \sum_{i_s=1}^{n_s}\sum_{j_s=1}^{n_s}\sum_{k_r=1}^{m_r} PE^{z}_{ij}\, XR^{z}_{i_s j_s k_r} + \sum_{i_s=1}^{n_s}\sum_{j_s=1}^{n_s}\sum_{k_r=1}^{m_r} PE^{t}_{ij}\, XR^{t}_{i_s j_s k_r} \right)   (2.3)

S_4 = \min\, d_1 \sum_{i_c=1}^{n_s} P^{z}_{i_c} Q_{i_c} - \max\, d_2 \sum_{k=1}^{m_r} (Prize \cdot mh_k)   (2.4)
The first part of formula (2.1) minimizes the penalty cost of unused furnace capacity, the second part minimizes the penalty cost of unused tundish capacity, and the third part minimizes the penalty cost of unused production capacity of the rolling machine. The first part of formula (2.2) minimizes the process penalty cost of the charge plan, the second part minimizes the process penalty cost of the casting plan, and the third part minimizes the process penalty cost of the rolling plan. Formula (2.3) is the penalty cost of energy-consumption jumps between adjacent slabs. The first part of formula (2.4) maximizes the number of high-temperature slabs, and the second part minimizes the weight of open orders.

Subject to:

1. Each slab must be scheduled in exactly one charge plan: \sum_{k_s=1}^{m_s} XS_{i_s k_s} = 1, i_s = 1, 2, ..., n_s.
2. Each charge must be scheduled in exactly one casting plan: \sum_{k_c=1}^{m_c} XC_{i_c k_c} = 1, i_c = 1, 2, ..., n_c.
3. Each slab must be scheduled in exactly one rolling plan: \sum_{k_r=1}^{m_r} XR_{i_s k_r} = 1, i_s = 1, 2, ..., n_s.
4. The limit of minimum and maximum furnace capacity: T^s_{min} \le \sum_{i_s=1}^{n_s} W_{i_s} XS_{i_s k_s} \le T^s_{max}, k_s = 1, 2, ..., m_s.
5. The limit of minimum and maximum casting charge weight: T^c_{min} \le \sum_{i_c} XC_{i_c k_c} WC_{i_c} \le T^c_{max}, k_c = 1, 2, ..., m_c.
6. The rolling-length limits of the staple and warm-up materials: T^r_{min} \le \sum_{i_s=1}^{n_s} W_{i_s} XR^{z}_{i_s j_s k_r} \le T^r_{max} and T^t_{min} \le \sum_{i_s=1}^{n_s} W_{i_s} XR^{t}_{i_s j_s k_r} \le T^t_{max}.
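Constraint 4 and the first penalty term of objective (2.1) can be illustrated for a single candidate charge plan. The sketch below uses the 150 t maximum furnace capacity stated in the experimental section, a hypothetical 100 t minimum, and the first five slab weights of Table 1 interpreted in kilograms and converted to tonnes:

```python
T_S_MIN, T_S_MAX = 100.0, 150.0   # furnace capacity window in tonnes
                                  # (150 t maximum from the experiments; 100 t minimum is hypothetical)

def charge_feasible(slab_weights_t):
    """Constraint 4: the total weight of the slabs assigned to one charge
    plan must lie within the furnace capacity window."""
    total = sum(slab_weights_t)
    return T_S_MIN <= total <= T_S_MAX

def unused_capacity_penalty(slab_weights_t, a1=1.0):
    """First term of objective (2.1): penalise furnace capacity left unused."""
    return a1 * max(0.0, T_S_MAX - sum(slab_weights_t))

# First five slab weights of Table 1, read in kilograms and converted to tonnes.
charge = [27.37, 27.00, 26.70, 27.29, 27.36]
print(charge_feasible(charge))          # True: 135.72 t lies inside [100, 150]
print(unused_capacity_penalty(charge))  # about 14.28 t of unused capacity
```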

3 Algorithm Design for the Integrated Plan Scheduling Problem

3.1 Coevolution Algorithm

Coevolution was first proposed by Ehrlich and Raven [9] to discuss the mutual influence of insects and plants in the evolutionary process. The research content of coevolution is



very wide, including coevolution between competing species, coevolution between parasites and hosts, coevolution between predators and prey, and symbiotic coevolution among multiple populations. In this paper, SCMA is proposed to solve the coevolution problem among the charge plan population, the casting plan population and the rolling plan population. Firstly, the integrated plan scheduling problem is decomposed into three subproblems: the charge plan problem, the casting problem and the rolling problem. Each of the three subproblems is solved by one evolutionary algorithm: GA, ABC or FA. When solving a multi-population evolutionary problem, every evolutionary population is only a partial solution of the whole problem; the solutions of the three evolutionary populations constitute a complete solution. The strategy of SCMA is as follows:

Step 1: Set up an empty external population for saving the elite solutions.
Step 2: Initialize n sub-populations, each of which corresponds to a part of the solution.
Step 3: Combine individuals of one specific population with the other populations and select the optimal individuals as the complete solution.
Step 4: Search for the optimal individual of each population by GA, ABC and FA.
Step 5: Repeat Steps 3 and 4 until enough iterations are reached. Then compare the optimal individual solution with the solutions in the external population; use the current optimal solution to replace inferior solutions of the external population, so that the external population gradually tends to the optimum. Repeat Steps 2-5 until the maximum number of iterations of the external population is reached.

3.2 Coevolution Algorithm

In this paper, SCMA is proposed to solve the integrated plan scheduling model. In this new algorithm, the hot strip rolling plan is scheduled first by FA, considering resource constraints, energy constraints and process constraints. Then, based on the production needs of hot strip rolling, casting plans and charge plans with no constraints are generated as the initial population, and the casting plan and charge plan are scheduled by ABC and GA considering their production constraints.

3.2.1 GA Design for the Sub-population of the Charge Plan
GA was proposed by Holland [10] in his 1975 book on adaptation in natural and artificial systems. It is a simultaneous optimization method with multiple parameters and multiple populations that imitates the natural evolution process. The key operations of GA for solving the charge plan are as follows. (1) Coding: code the solution of the charge plan first; each chromosome represents one solution of the charge plan, and the initial population is generated from the coded chromosomes. (2) Constructing the fitness function: construct the fitness function based on the optimization objective and calculate the fitness value with it. Chromosomes with high fitness are retained with high probability, and chromosomes with low fitness are eliminated; in the evolution procedure, the retained chromosomes build a new population to search for the optimal solution. (3) Crossover operation: the genes of the parents are inherited by the next generation through the crossover of the parent chromosomes; the generation of offspring is the optimization process that can generate new solutions. (4) Mutation operation: while generating new chromosomes, genes may be mutated with a certain probability to ensure the diversity of the population.

3.2.2 ABC Design for the Sub-population of the Casting Plan
ABC [11] is a swarm intelligence algorithm based on the foraging behavior of natural bees. The basic bee colony model includes three key elements: the food source, the employed bees and the unemployed bees. Two foraging behaviors, recruiting and abandoning, can be chosen. When solving the casting problem by ABC, each food source represents a solution of the casting plan, and the value of the food source represents the quality of the solution (the objective value of the casting). The number of solutions equals the number of employed bees. The key operations of ABC for solving the casting plan are as follows. Employed bees search for food sources based on formula (3.1), in which j \in \{1, 2, ..., d\}, k \in \{1, 2, ..., N\}, j \ne k, and r_{ij} \in [-1, 1]:

x_{ij} = x_{ij} + r_{ij}\,(x_{ij} - x_{kj})   (3.1)


In each loop, the employed bees do a neighborhood search based on formula (3.2), in which x_{ij} is coordinate j of solution i, v_{ij} is the new solution, and a is the neighborhood search coefficient, a random number between −1 and 1:

v_{ij} = x_{ij} + a\,(x_{ij} - x_{kj}), \quad i \ne k   (3.2)

The food source is replaced based on formula (3.3), in which f_v and f_x are the fitness values of v_i and x_i, respectively:

x_{i+1} = \begin{cases} v_{ij}, & f_v \le f_x \\ x_{ij}, & f_v > f_x \end{cases}   (3.3)

The fitness value of a food source is calculated by formula (3.4):

fit(i) = \begin{cases} 1/(1 + f_i), & f_i \ge 0 \\ |f_i|, & f_i < 0 \end{cases}   (3.4)
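Formulas (3.2)-(3.4) amount to a random neighborhood move followed by a greedy replacement. The sketch below applies them to an illustrative continuous objective (minimising the distance to the all-ones vector), not to the casting objective itself:

```python
import random

random.seed(42)

def fitness(f):
    """Formula (3.4): map an objective value f to a fitness value."""
    return 1.0 / (1.0 + f) if f >= 0 else abs(f)

def neighbour(population, i):
    """Formula (3.2): perturb one coordinate of solution i relative to a
    randomly chosen partner solution k (k != i), with a in [-1, 1]."""
    x = population[i]
    k = random.choice([idx for idx in range(len(population)) if idx != i])
    j = random.randrange(len(x))
    a = random.uniform(-1.0, 1.0)
    v = list(x)
    v[j] = x[j] + a * (x[j] - population[k][j])
    return v

def greedy_replace(x, v, objective):
    """Formula (3.3): keep the candidate only if its fitness is no worse."""
    return v if fitness(objective(v)) >= fitness(objective(x)) else x

objective = lambda x: sum((xi - 1.0) ** 2 for xi in x)   # illustrative, >= 0
population = [[random.uniform(-2.0, 2.0) for _ in range(4)] for _ in range(6)]
initial_best = min(map(objective, population))
for _ in range(200):                        # repeated employed-bee phase
    for i in range(len(population)):
        population[i] = greedy_replace(population[i], neighbour(population, i), objective)
best = min(map(objective, population))
print(initial_best, "->", best)             # the best objective never worsens
```

Because this toy objective is nonnegative, a higher fitness corresponds to a lower objective value, so the greedy replacement of (3.3) never worsens a food source.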


3.2.3 FA Design for the Sub-population of the Rolling Plan
The firefly algorithm (FA) [12] was proposed by the Cambridge scholar Yang in 2009. It is a stochastic swarm intelligence optimization algorithm, following the particle swarm algorithm, the genetic algorithm and the simulated annealing algorithm. The key operations of FA for solving the rolling plan are as follows [12].



1. The relative fluorescence intensity I of a firefly at location x_i is calculated by formula (3.5), in which I_0 is the maximum fluorescence, directly proportional to the value of the objective function, a is the light volatilization coefficient, and r_{ij} is the distance between fireflies i and j:

I(x_i) = I_0 \cdot e^{-a r_{ij}}   (3.5)

2. Formula (3.6) gives the attraction degree of the firefly at location x_i, where b_0 is the maximum attraction degree, i.e., the attraction at the light source:

b(x_i) = b_0 \cdot e^{-a r_{ij}}   (3.6)

3. In iteration t, the location update of firefly i moving toward firefly j follows formula (3.7):

x_i(t+1) = x_i(t) + b \cdot (x_j(t) - x_i(t)) + \phi \cdot (rand - 1/2)   (3.7)


x_i and x_j are the spatial locations of fireflies i and j, respectively; \phi is the step-size factor, a constant between 0 and 1; rand is a random number between 0 and 1.

3.2.4 SCMA Design for Integrated Plan Scheduling Optimization
Based on the charge plan population, the casting plan population and the rolling plan population, SCMA is used to optimize the three populations simultaneously and finally find the optimal solution of the integrated plan scheduling. The steps of the optimization process are shown in Fig. 1:

Step 1: Initialize the parameters of FA, GA and ABC simultaneously. FA_n is the size of the firefly population; set FA_n = 20, b_0 = 10, a = 0.5 and r = 2. GA_n is the size of the chromosome population; set GA_n = 20. GA_P1 is the crossover probability; set GA_P1 = 0.8. GA_P2 is the mutation probability; set GA_P2 = 0.2. ABC_n is the size of the bee population; set ABC_n = 20. The maximum number of iterations is t_max.
Step 2: The production contracts are the scheduling objects. Considering the constraints of the rolling plan and the charge plan, initialize the rolling plan sub-population with the production contracts.
Step 3: Search for the optimal firefly of the rolling plan sub-population based on formulas (3.5)-(3.7). Based on the objectives of minimizing the unused capacity of the rolling machine, minimizing the process penalty cost and minimizing the energy-consumption jump of adjacent slabs, calculate the fitness function of the firefly population and preserve the offspring generation.
Step 4: Based on the rolling plan of Step 3, slabs are grouped by steel type and steel grade to obtain the original charge plan. Based on the key GA operations for the charge plan problem in Sect. 3.2.1, search for the optimal solution of the charge sub-population and preserve the optimal solution (the charge sub-plan).



Fig. 1. Integrated production plan scheduling based on SCMA

Step 5: Based on the charge sub-plan of Step 4, the charge sub-plans are grouped by slab width to generate the original casting sub-population. Search for the optimal casting plan with ABC and calculate the order of the casting slabs from the optimal solution. Considering the production capacity of the rolling machine and the order of the casting slabs, the original rolling plan obtained in Step 3 is adjusted.
Step 6: Repeat the steps above until t > t_max or the termination conditions are satisfied. Output the results of the integrated production plan, including the optimal rolling plan, the optimal charge plan and the optimal casting plan.
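Steps 1-6 reduce to a cooperative loop in which each sub-plan is improved against the current state of the others and an external elite solution is kept. The sketch below uses one stub perturbation move per sub-optimiser in place of the full FA, GA and ABC designs above, and a toy combined cost; all names and numbers are illustrative:

```python
import random

def evaluate(parts):
    """Stand-in for the combined objectives (2.1)-(2.4): a toy cost to minimise."""
    return sum(sum(p) for p in parts.values())

def sub_optimise(parts, key):
    """One cooperative move: perturb one sub-plan (FA for rolling, GA for charge,
    ABC for casting in the paper; a single stub move here) and accept it only if
    the COMBINED solution improves."""
    cand = dict(parts)
    cand[key] = [max(0.0, x + random.uniform(-0.2, 0.2)) for x in parts[key]]
    return cand if evaluate(cand) < evaluate(parts) else parts

def scma(outer_iters=20, inner_iters=10, seed=1):
    random.seed(seed)
    # Steps 1-2: initialise the three sub-plans (one representative plan each)
    parts = {k: [random.random() for _ in range(5)]
             for k in ("rolling", "charge", "casting")}
    elite = (evaluate(parts), parts)              # external elite archive
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Steps 3-5: each sub-plan is optimised against the current others
            for key in ("rolling", "charge", "casting"):
                parts = sub_optimise(parts, key)
        # replace the elite solution whenever the current one is better
        if evaluate(parts) < elite[0]:
            elite = (evaluate(parts), dict(parts))
    return elite

cost, plans = scma()
print(cost)   # the combined cost only decreases during the search
```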

4 Experiments and Result Analysis

4.1 Original Data Analysis

MATLAB 7.0 is used to solve this problem. The hardware is as follows: the CPU is 2.5 GHz and the memory is 2 GB. This paper uses 800 contracts for the experiments. The original experimental data are shown in Table 1. The parameters are set as follows: the total size of the firefly population is 10, the maximum attraction degree b_0 = 5, the chromosome population GA_n = 100, the crossover probability GA_P1 = 0.8, the mutation probability GA_P2 = 0.1, the spatial dimension d = 10 and the maximum number of iterations r_max = 300. The maximum furnace capacity is 150 t, the maximum rolling length of warm-up material is 6 km and the maximum rolling length of staple material is 54 km. The original data are 800 slabs sourced from a steel company. The information in the original data includes steel type,


L. Liu et al.

slab width, slab thickness, slab weight, slab length, slab hardness, steel grade, energy consumption of different steel types, time of the slab in the furnace and DHCR (direct hot rolling).

Table 1. Original data

Num  Steel type  Width  Thickness  Weight  Length  Hardness  Steel grade  Energy consumption  Time  DHCR
1    JT5823A8    1650   250        27370   9420    5         2            66.883              337   0
2    JT5823A8    1650   250        27000   9410    5         2            66.883              341   0
3    JT5823A8    1650   250        26700   8890    5         2            66.883              341   0
4    JT5823A8    1650   250        27290   8960    5         2            66.883              340   0
5    JT5823A8    1650   250        27360   10300   5         2            66.883              340   0
6    AP1055E5    1300   210        26650   8050    1         2            81.999              0     1
7    AP1055E5    1300   210        27060   9090    1         2            81.999              0     1
8    AP1055E5    1300   210        27180   9090    1         2            81.999              0     1
9    AP1055E5    1300   210        27340   9090    1         2            81.999              241   0
10   AP1055E5    1300   210        27140   9090    1         2            81.999              240   0
11   DT3850D1    1496   246        27120   9090    2         2            78.852              238   0
12   DT3850D1    1496   246        26510   10900   2         2            78.852              241   0
13   DT3850D1    1496   246        26640   9830    2         2            78.852              241   0
14   DT3850D1    1546   246        26980   9840    2         2            78.852              0     1
15   DT3850D1    1496   246        26900   9000    2         2            78.852              0     1
16   DU6892A1    1340   220        36650   10930   6         2            104.549             235   0
17   DU6892A1    1340   220        36730   11800   6         2            104.549             0     1
18   NT6610D8    1550   220        26440   8300    7         2            64.185              238   0
19   NT6610D8    1550   220        26830   11810   7         2            64.185              0     1
20   JT5823A8    1650   250        35000   11810   5         2            66.883              233   0
21   JT5823A8    1650   250        28210   5330    5         1            66.883              224   0
22   JT5823A8    1650   250        28100   11320   5         1            66.883              224   0
23   JT5823A8    1650   250        26990   4830    5         2            66.883              219   0
24   JT5823A8    1650   250        28260   5780    5         2            66.883              223   0
25   JT5823A8    1650   250        28020   5000    5         2            66.883              219   0
26   JT5823A8    1650   250        31670   4960    5         1            66.883              223   0
27   DU6892A1    1340   220        38810   4970    6         4            104.549             345   0
28   IR4250E1    1300   250        29510   11800   4         2            94.995              340   0
29   IR4250E1    1300   250        29680   9300    4         3            94.995              338   0
30   IR4250E1    1300   250        29680   9300    4         3            94.995              341   0
31   JU6822A8    1650   250        37320   11160   6         3            195.547             213   0
32   JU6822A8    1650   250        37590   8970    6         3            195.547             213   0
33   JU6822A8    1650   250        37120   8780    6         3            195.547             0     1
34   JU6822A8    1650   250        37140   9060    6         3            195.547             0     1
35   JU6822A8    1650   250        37180   8780    6         3            195.547             308   0
36   JU6822A8    1650   250        37150   9010    6         3            195.547             321   0
37   JU6822A8    1650   250        32040   9010    6         3            195.547             323   0
38   JT5823A8    1650   250        37123   8980    5         3            66.883              332   0
39   JT5823A8    1650   250        37070   11080   5         3            66.883              332   0
40   JT5823A8    1650   250        21500   8110    5         3            66.883              331   0
41   JT5823A8    1650   250        21850   8110    5         3            66.883              341   0
42   JT5823A8    1650   250        21690   8050    5         4            66.883              339   0
43   JU6822A8    1620   250        21740   8250    6         4            66.883              326   0
44   JT5823A8    1650   250        22090   8410    5         4            66.883              340   0
45   JT5823A8    1650   250        21560   8670    5         4            66.883              0     1
46   JT5823A8    1650   250        21540   8300    5         4            66.883              0     1
47   JT5823A8    1650   250        21780   8110    5         4            66.883              336   0
48   JT5823A8    1650   250        21520   8330    5         4            66.883              340   0
49   JT5823A8    1650   250        21530   8180    5         4            66.883              342   0
50   JT5823A8    1650   250        21670   10040   5         4            66.883              344   0
4.2 Comparisons of Solution Quality

In the experiments on integrated scheduling and single-process scheduling, the data of Table 1 are used as the original data. The integrated scheduling solutions obtained by SCMA are compared with the steel-making, continuous casting and hot rolling solutions obtained by GA, ABC and FA, respectively. The comparison results are analysed as follows. The rolling scheduling results obtained from SCMA and FA are compared first. As the total number of rolling plans is too large, 10 rolling scheduling results are compared. The hot rolling scheduling results obtained by SCMA are as follows. The total objective values of the 10 hot rolling plans are 2595, 2761, 2395, 2827, 2710, 2710, 2482, 2597, 2560 and 2664, respectively. The average rolling lengths of the 10 hot rolling plans are 795 m, 910 m, 695 m, 925 m, 905 m, 905 m, 755 m, 795 m, 795 m and 810 m, respectively. The direct hot rolling ratios of the 10 hot rolling plans are 29.3%, 27.1%, 31.2%, 27.0%, 26.8%, 29.1%, 32.3%, 31.0%, 29.1% and 28.9%, respectively. The specification penalty costs of the 10 hot rolling plans are 1690, 2120, 1530, 2300, 2120, 2100, 1660, 1680, 1520 and 1710, respectively. The hot rolling scheduling results obtained by FA are as follows. The total objective values of the 10 hot rolling plans are 2779, 2919, 2585, 2976, 2841, 2761, 2670, 2705, 2812 and 2848, respectively. The average rolling lengths of the 10 hot rolling plans are 790 m, 905 m, 655 m, 825 m, 820 m, 820 m, 810 m, 815 m, 860 m and 871 m, respectively. The direct hot rolling ratios of the 10 hot rolling plans are 20.1%, 20.5%, 25.8%, 21.3%, 22.6%, 23.5%, 19.8%, 28.5%, 19.6% and 19.5%, respectively. The specification penalty costs of the 10 hot rolling plans are 1680, 2230, 1410, 2260, 2150, 2160, 1820, 1900, 1460 and 1660, respectively.
The comparisons of objective function value, average rolling length, specification penalty cost and direct hot rolling ratio obtained by SCMA and FA are shown in Figs. 2(a), 2(b), 2(c) and 2(d), respectively. As can be seen in Fig. 2(a), the objective values of the rolling plans obtained by SCMA are less than those obtained by FA. As can be seen in Fig. 2(b), the rolling lengths of the first six rolling plans obtained by SCMA are better than those of FA, while the rolling lengths of the other four rolling plans obtained by SCMA are not better than those of FA. As can be seen in Fig. 2(c),



Fig. 2(a). Comparison of objective value

Fig. 2(b). Comparison of average rolling length

Fig. 2(c). Comparison of specification penalty cost

Fig. 2(d). Comparison of direct hot rolling ratio

the specification penalty costs of some rolling plans obtained by SCMA are higher than those of FA. As can be seen in Fig. 2(d), the direct hot rolling ratios of the rolling plans obtained by SCMA are higher than those of FA. The charge plan scheduling results obtained from SCMA and GA are compared next. As the total number of charge plans is too large, 10 charge plan scheduling results are compared. The charge plan scheduling results obtained by SCMA are as follows. The total objective values of the 10 charge plans are 495, 530, 565, 505, 490, 450, 565, 620, 585 and 590, respectively. The real production weights of the 10 charge plans are 143 t, 142.5 t, 142.5 t, 145 t, 147 t, 142.5 t, 145 t, 142.5 t, 142.5 t and 142.5 t, respectively. The open-order weights of the 10 charge plans are 0 t, 1.5 t, 3 t, 0 t, 0 t, 3.2 t, 0 t, 5.3 t, 3.5 t and 4.1 t, respectively. The specification penalty costs of the 10 charge plans are 165, 210, 120, 300, 220, 210, 160, 180, 150 and 165, respectively. The charge plan scheduling results obtained by GA are as follows. The total objective values of the 10 charge plans are 510, 525, 580, 495, 560, 550, 565, 615, 580 and 585, respectively. The real production weights of the 10 charge plans are 142.5 t, 143 t, 142.5 t, 143 t, 145 t, 142.5 t, 143 t, 142.5 t, 142.5 t and 142.5 t, respectively. The open-order weights of the 10 charge plans are 1.5 t, 2 t, 3.5 t, 0 t, 1.0 t, 3.2 t, 1.6 t, 5.3 t, 3.6 t and 4.3 t, respectively. The specification penalty costs of the 10 charge plans are 150, 165, 115, 305, 280, 165, 175, 190, 145 and 160, respectively. The comparisons of objective function value,



real production weight, open-order weight and specification penalty cost obtained by SCMA and GA are shown in Figs. 3(a), 3(b), 3(c) and 3(d), respectively. As can be seen in Fig. 3(a), the objective values of the charge plans obtained by SCMA are less than those obtained by GA. As can be seen in Fig. 3(b), the real production weights obtained by SCMA are higher than those of GA. As can be seen in Fig. 3(c), the open-order weights obtained by SCMA are less than those of GA. As can be seen in Fig. 3(d), the specification penalty costs of some charge plans obtained by SCMA are better than those of GA.

Fig. 3(a). Comparison of objective value

Fig. 3(b). Comparison of real production weight

Fig. 3(c). Comparison of open order weight

Fig. 3(d). Comparison of specification penalty cost

The casting plan scheduling results obtained from SCMA and ABC are compared last. As the total number of casting plans is too large, 10 casting plan scheduling results are compared. The casting plan scheduling results obtained by SCMA are as follows. The total objective values of the 10 casting plans are 228, 365, 694, 360, 395, 137, 290, 365, 605 and 550, respectively. The real production weights of the 10 casting plans are 743.5 t, 1055 t, 1152 t, 1148 t, 292 t, 430.5 t, 732.5 t, 1140 t, 1859 t and 1173 t, respectively. The specification penalty costs of the 10 casting plans are 70, 110, 350, 130, 30, 0, 50, 60, 260 and 260, respectively. The extra penalty costs of the 10 casting plans are 115, 170, 220, 100, 0, 30, 160, 110, 150 and 170, respectively. The casting plan scheduling results obtained by ABC are as follows. The total objective values of the 10 casting plans are 260, 435, 793, 369, 406, 156, 307, 386, 595 and 595, respectively. The real production weights of the 10


L. Liu et al.

casting plans are 685t, 879t, 956t, 988t, 292t, 395.5t, 658.5t, 1030t, 1560t and 985t, respectively. The specification penalty costs of the 10 casting plans are 85, 135, 405, 155, 55, 20, 65, 68, 280 and 268, respectively. The extra penalty costs of the 10 casting plans are 125, 180, 255, 130, 50, 35, 65, 125, 165 and 180, respectively. The comparisons of the total objective values, real production weights, specification penalty costs and extra penalty costs obtained by SCMA and ABC are shown in Figs. 4(a), 4(b), 4(c) and 4(d), respectively. As can be seen in Fig. 4(a), the objective values of the casting plans obtained by SCMA are lower than those obtained by ABC. As can be seen in Fig. 4(b), the real production weights obtained by SCMA are higher than those of ABC. As can be seen in Fig. 4(c), the specification penalty costs obtained by SCMA are lower than those of ABC. As can be seen in Fig. 4(d), the extra penalty costs of some of the casting plans obtained by SCMA are better than those of ABC.

Fig. 4(a). Comparison of objective value

Fig. 4(b). Comparison of real production weight

Fig. 4(c). Comparison of specification penalty cost

Fig. 4(d). Comparison of extra penalty cost

From the above comparisons, the following conclusions can be drawn. Integrated scheduling of steel making, continuous casting and hot rolling by SCMA obtains better solutions than FA, GA and ABC. In particular, SCMA can improve the production efficiency of the whole production process. Moreover, it can reduce the open order weight and energy consumption in the whole production process.

Integrated Production Plan Scheduling


5 Conclusions

Based on the research on the packing model for charge planning and the vehicle routing models for the casting plan and hot rolling plan, an integrated production plan scheduling model of steel making, continuous casting and hot rolling has been built. SCMA is proposed to optimize the charge plan sub-population, casting plan sub-population and hot rolling plan sub-population simultaneously. During the optimizing process, information from the three sub-populations can be communicated between them, which can obtain a globally optimal solution for the charge plan, casting plan and rolling plan. The integrated production plan is scheduled by maximizing the capacity of the machines, including furnace, conticaster and rolling mill; minimizing the process penalty costs of the three production plans, i.e. the charge plan, casting plan and rolling plan; minimizing the energy consumption cost; maximizing the direct hot rolling ratio; and minimizing the open order weight. The effectiveness of the integrated model and the co-evolutionary algorithm has been verified. Through the comparison of integrated production plan scheduling with single-process plan scheduling, the production efficiency of the machines, including furnace, conticaster and rolling mill, has been improved based on SCMA. The open order weight and energy consumption have been reduced based on SCMA. The specification penalty cost increases slightly. In summary, the results of integrated production plan scheduling are better than those of single-process production plan scheduling.

Acknowledgement. The authors would like to thank Shanghai Baoshan Iron & Steel Co., Ltd. and Shanghai Baosight Software Co., Ltd. for providing us with the real steel production data and the corresponding manual scheduling results, as well as for evaluating the scheduling results produced by the proposed method. This work was financed by the Ministry of Industry and Information Technology, China Manufacturing 2025 key project No. TC17085JH.


Knowledge Sharing in Product Development Teams

Eirin Lodgaard and Kjersti Øverbø Schulte

SINTEF Raufoss Manufacturing AS, Raufoss, Norway
{eirin.lodgaard,kjersti.schulte}

Abstract. The ability to innovate and introduce viable new products is critical to the competitive position of Norwegian industries in increasingly dynamic marketplaces. How well collaboration among team members functions determines the extent and effectiveness of integration in the design and development of products. The process of knowledge sharing furthermore represents a critical asset to a product development team and its capacity to innovate new products in an increasingly global and demanding market. This study therefore uses semi-structured interviews to examine the factors that influence knowledge sharing within product development teams. It is concluded that more structured and formal ways of knowledge sharing may play an important role in the future. As teams become more dispersed, this will force more attention on innovative communication capabilities, with the aim of creating better prerequisites for knowledge sharing.

Keywords: Product development · Knowledge sharing

1 Introduction

Long-term business sustainability depends on the ability to acquire knowledge throughout an organization, knowledge which can promote the development of better products and production processes. Knowledge development, sharing and application are considered crucial to the development of new and improved products in mature high-tech industries, such as the automobile parts industry, and for survival in a global market. Aluminium is an important product in Norway, both historically and economically. Some companies and industries have developed competence and knowledge in aluminium engineering and production/manufacturing over a long timeframe and have succeeded in developing and producing competitive products in a global market. Successful new product introduction, however, requires the ability to share and integrate existing organizational and inter-organizational knowledge [17]. This article therefore attempts to shed light on how teams in product development and production development share knowledge to maintain quality and speed in new product development.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 432–438, 2019.



2 Knowledge Sharing in Product Development Teams

There are many definitions of product development (PD). Two definitions are: “Product development is the sum of activities that have their outcome in a realized need and lead to production and sale of a product that satisfies the customer’s need” [1] and “is a human-centered procedure for developing competitive products and services of high quality, within a reasonable amount of time, and with an excellent price-performance ratio” [21]. This is in line with the definition of the PD process: “the sequence of steps or activities that an enterprise employs to conceive, design and commercialize a product” [20].

The existing literature furthermore emphasizes that collaboration across functional units and teams is essential to succeed with product development. How well-functioning this collaboration is determines the extent and effectiveness of integration in product design and development [24]. Reasons for using teams in product development include the complexity of the problem and the need for knowledge, know-how and experience from several areas of expertise.

Drucker [6] predicted that competitive advantage in the future would be determined by knowledge resources. Knowledge sharing has been argued to contribute to innovation capabilities and to be vital to a company’s performance [10]. One aspect is the process of knowledge sharing among team members in a PD project. Knowledge sharing at team level represents a critical asset in product development and should therefore be carefully managed.

Nonaka [13] defines knowledge as what a person knows, meaning that knowledge is a dynamic and ongoing process. Knowledge sharing refers to employees mutually exchanging their knowledge and contributing to knowledge application [23]. Polanyi [15] differentiates between explicit and tacit knowledge, with both explicit and tacit knowledge sharing playing key roles in the transformation of knowledge.
Explicit knowledge takes forms that are institutionalized within an organization [5]. This knowledge can normally be easily captured, codified and transmitted. Tacit knowledge sharing, in contrast, takes place through face-to-face interaction, the key being the willingness and capability of individuals to share what they know and to use what they learn [8]. Tacit knowledge, which overlaps with embodied knowledge, comprises highly individualized trained skills, for example building a violin, where body and mind are coordinated and where the knowledge is difficult to share verbally with others [22].

Knowledge sharing among team members is positively associated with team effectiveness outcomes [18]. Bakker et al. [3] claim that the longer a team has existed and the higher the level of team cohesiveness, the more likely team members are to share knowledge. A study by Srivastava et al. [16] highlighted that empowering leadership has a positive effect and increases participation in knowledge sharing within a team. An important finding to take into account is that team members who consider themselves a minority in some way are less likely to share knowledge with other team members [14]. It is also important to consider how to transfer knowledge from experts to novices [7].

Getting team members to share knowledge effectively is difficult. Most of the work conducted in a PD team is invisible. Ideation and decisions take place in engineers’ minds. Team members sit at computers writing reports, performing different types of
analysis and carrying out other tasks that are difficult for other team members to discern. Dispersed teams are, due to globalization, becoming more widespread, a development that will not make these challenges easier in the future. One reason for this is that dispersed teams reduce the opportunities to see what others are working on and with whom. There has, in the past few decades, been an increase in the use of information and communication technology to make knowledge visible [11]. The potential benefits of this have led many organizations to invest in knowledge management systems that facilitate the collection, storage and distribution of knowledge [23]. Unfortunately, organizations often fail to share knowledge despite the use of such management systems [2]. An important reason for this is a lack of consideration of how interpersonal context and individual characteristics influence knowledge sharing [4].

3 Research Design

Face-to-face interviews are an essential source of evidence and are commonly used in the collection of information. Well-informed interviewees can provide important insight into the facts of a topic as well as their opinions [25]. Eight semi-structured in-depth interviews, divided across two companies, were conducted to gain insight into the factors that influence knowledge sharing between team members in a PD project; this was the data collection technique used in this work. The companies are involved in the automotive supplier industry. The research team has 8 years of experience from five research projects in one of the companies and 6 years of experience from two research projects in the other. All the research projects have included the PD field. This has given us a large amount of detailed document studies, observations and informal interviews on the PD process and knowledge sharing.

An interview guide was outlined prior to the interviews to ensure that the same questions were asked of all participants. The interviews took the form of semi-structured conversations, in which the informants were asked prewritten questions but were allowed to answer freely. The questions were formulated around knowledge sharing within PD teams, covering both facts and the informants’ opinions on the topic. The same questions were asked of multiple professionals, as recommended by Karlsson [9]. The semi-structured interviews were therefore answered by people working in product design, process design, engineering, and project management.

4 Results and Discussion

The companies involved in the study are small subsidiaries of a global corporation, their main business being development and production in the light-weight materials segment. They are engaged in the design, production and marketing of products to customers worldwide. PD activities are closely related to product design and manufacturing design and extend from the concept and design phase to testing and the trial production phase. Both operate as PD headquarters for the development of
components in the light-weight material segment and as first-tier suppliers to original equipment manufacturers in the automotive industry. They furthermore maintain long-term relationships with strategically important customers worldwide.

Multidisciplinary teams are the normal way to organize product development projects, as was described by all informants. They emphasized that if new products and technologies are to be brought from idea to market, it is necessary to have teams that consist of individuals with different knowledge who work together efficiently. The opinion of the informants is, more specifically, that teamwork is an enabler through which you listen for the unique knowledge that one person has and the others do not. Achieving this depends substantially on knowledge sharing among team members. This was true for the informants involved in this study.

The close connection between being action-oriented and pleasing the customer seems to be at the heart of PD. The valued approach is a high level of focus on pleasing the customer through a good technological solution, based on the high level of competence in the company and an emphasis on high speed. It is apparent that they are very solution-oriented in the way they work in PD projects. Their goal is, more specifically, to produce a new product more cheaply and with better performance or more embedded functions. They have a strong belief that their knowledge and the way they work in a multidisciplinary team mean that they will always manage to deliver a better product design and to produce the new design more efficiently and at a higher level of quality. This illustrates the importance of thinking of efficient knowledge sharing as an investment that puts a company in a position to meet future customer expectations and requirements.

Informal knowledge sharing is today reported as being widespread.
Formal ways of sharing knowledge among team members are not utilized to the same degree. The interview subjects also emphasized that they are too informal and could be more formal with respect to documentation and communication of the knowledge created. They emphasized, however, that the informal approach they apply is less time consuming and is sufficient to achieve a product design that can satisfy customers.

Knowledge sharing can be performed through both formal and informal interaction. In formal interaction, information and communication technology can act as a facilitating tool. The challenge is, however, the need to integrate and align this with team design and the process of collaboration and communication [26]. A team is, in this case, normally organized through formal and informal physical meetings, net meetings, telephone meetings and e-mail. Participation in meetings ranges from active involvement to passive observation. A formal meeting is defined as a planned interaction with several predefined actions and a purpose. An informal meeting is unplanned face-to-face communication and may be more or less random, such as a discussion by the coffee machine, in the corridor, at lunch, during a walk-through or after a formal meeting. The informants highlighted that both the informal and the formal meeting arena were of importance to the sharing of knowledge between team members.

Documenting useful knowledge and making it visible and easy to access when needed was seen as being difficult. A common approach identified in this case study for obtaining the information or competence needed on a specific topic was to ask the person known to possess the required knowledge. It was apparent that they experienced searching a management system for documented knowledge to
be more difficult than asking those who they know have the knowledge. This view is supported by Szulanski [19], who stated that face-to-face interaction is still an indispensable mechanism for knowledge sharing, particularly when more complicated knowledge is involved. Important pieces of knowledge can, however, be lost due to poor visibility when problems are solved through informal talk.

Knowledge sharing within a team was considered to be an important issue. Finding a good knowledge sharing solution was, however, not so easy. One explanation was that knowledge created during the PD process was seen as difficult to document due to its complex nature and because it is accomplished through the cooperation of many people performing different functions. The interviewees nevertheless highlighted the importance of making knowledge more visible, to improve knowledge sharing within a team and its future use. This aspect is of importance when developing innovative knowledge management systems and the action-oriented approach.

Another vital issue raised during the interviews was that the transparency of a small organization makes knowledge more accessible in an informal way than in a larger structure of dispersed teams. The interviewees explained that their geographical closeness (resources gathered together at one location) provides them with a unique opportunity to share knowledge with others. One challenge the companies experience, as small subsidiaries of a global corporation, is dispersed teams. An example explained to us was a PD team that was geographically distributed across two countries. Net meetings, digital project rooms and phone calls were used. This was perceived to function well when the team members know each other well. Some of the informants considered team members meeting in person at the first meeting to be of great value. Online meetings are seen to be sufficient after this.
All of the interviewees agreed, as stated in Zakaria et al. [26], that more dispersed teams require innovative communication capabilities for the teams to work together effectively. It is important for those who work in dispersed teams to create efficient collaboration strategies that solve the problem of not being physically at the same site. The focus of the informants was on managing collaboration and finding the best solution using the available tools, which include net meetings, video, phones and e-mails. One could argue, based on the above findings, that it is very important to focus on communication visibility between team members in a dispersed team, and on increasing their awareness of who knows what and who knows whom [11].

5 Concluding Remarks

The process of sharing knowledge among team members in a PD team is a critical asset to innovating new products in an increasingly demanding global market [10]. This study has, however, identified that there are still opportunities to improve how team members perform knowledge sharing. This study reveals that a balance has to be found between a formal and an informal structure if successful knowledge sharing is to be achieved. This should, more specifically, be closely related to one major aspect: that people in PD are driven by requests for their knowledge and skills. The close connection between action-oriented work and informal communication seems to be at the heart of PD.



This study indicates that more structured and formal ways of knowledge sharing are needed in the future. Created knowledge should, furthermore, be shared in a more visible way among all team members. Experienced people, however, expect freedom in their work to apply their special competence. It is therefore important to agree upon a level of standardization [12]. The rise of dispersed teams [18] will furthermore force more attention on innovative communication capabilities, with new and improved visible communication technologies potentially playing an important role in this.

More research is required to strengthen these findings. Further research will be undertaken to outline different strategies of knowledge sharing, giving a more comprehensive picture of the factors that influence the flow of knowledge sharing.

Acknowledgement. The research was funded by the Research Council of Norway.

References

1. Andreasen, M.M., Buur, J.: Problemløsning, Konstruktion, Produktudvikling. The Technical University of Denmark, Lyngby (1984)
2. Babcock, P.: Shedding light on knowledge management. HR Mag. 49(5), 46–50 (2004)
3. Bakker, M., Leenders, R.T.A.J., Gabbay, S.M., Kratzer, J., Van Engelen, J.M.L.: Is trust really social capital? Knowledge sharing in product development projects. Learn. Organ. 13(6), 594–605 (2006)
4. Carter, C., Scarbrough, H.: Towards a second generation of KM? The people management challenge. Educ. Train. 43(4), 215–224 (2001)
5. Coakes, E.: Storing and sharing knowledge: supporting the management of knowledge made explicit in transnational organisations. Learn. Organ. 13, 579–593 (2006)
6. Drucker, P.F.: Managing for the Future: The 1990s and Beyond. Truman Talley Books/Plume, New York (1993)
7. Hinds, P.J., Patterson, M., Pfeffer, J.: Bothered by abstraction: the effect of expertise on knowledge transfer and subsequent novice performance. J. Appl. Psychol. 86, 1232–1243 (2001)
8. Holste, J.S., Fields, D.: Trust and tacit knowledge sharing and use. J. Knowl. Manag. 14, 128–140 (2010)
9. Karlsson, C. (ed.): Researching Operations Management. Routledge, New York (2009)
10. Kogut, B., Zander, U.: What firms do? Coordination, identity, and learning. Organ. Sci., 502–518 (1996)
11. Leonardi, P.M.: Social media, knowledge sharing, and innovation: toward a theory of communication visibility. Inf. Syst. Res. 25(4), 796–816 (2014)
12. Mintzberg, H.: The Structuring of Organizations: A Synthesis of the Research. Prentice-Hall, Englewood Cliffs (1979)
13. Nonaka, I.: A dynamic theory of organizational knowledge creation. Organ. Sci. 5(1), 14–37 (1994)
14. Ojha, A.K.: Impact of team demography on knowledge sharing in software project teams. South Asian J. Manag. 12(3), 67–78 (2005)
15. Polanyi, M.: The Tacit Dimension. Routledge & Kegan Paul, London (1966)
16. Srivastava, A., Bartol, K.M., Locke, E.A.: Empowering leadership in management teams: effects on knowledge sharing, efficacy, and performance. Acad. Manag. J. 49(6), 1239–1251 (2006)
17. Stalk, G., Evans, P.: Competing on capabilities: the new rules of corporate strategy. Harv. Bus. Rev., 57–69, March–April 1992
18. Staples, S., Webster, J.: Exploring the effects of trust, task interdependence and virtualness on knowledge sharing in teams. Inf. Syst. J. 18, 617–640 (2008)
19. Szulanski, G.: The process of knowledge transfer: a diachronic analysis of stickiness. Organ. Behav. Hum. Decis. Process. 82, 9–27 (2000)
20. Ulrich, K.T., Eppinger, S.D.: Product Design and Development. McGraw-Hill, New York (2011)
21. Vajna, S., Burchardt, C.: Dynamic development structures of integrated product development. J. Eng. Des. 9(1), 3–15 (2010)
22. von Krogh, G., Roos, J.: Organizational Epistemology. Macmillan Press, London (1995)
23. Wang, S., Noe, R.A.: Knowledge sharing: a review and directions for future research. Hum. Resour. Manag. Rev. 20, 115–131 (2010)
24. Wheelwright, S., Clark, K.: Revolutionizing Product Development. The Free Press, New York (1992)
25. Yin, R.K.: Case Study Research: Design and Methods, 4th edn. Sage, Beverly Hills (2009)
26. Zakaria, N., Amelinckx, A., Wilemon, D.: Working together apart? Building a knowledge-sharing culture for global virtual teams. Creat. Innov. Manag. 13(1), 15–29 (2004)

Multi-site Production Planning in a Fresh Fish Production Environment

Quan Yu and Jan Ola Strandhagen

Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
[email protected]

Abstract. Making an appropriate production plan in a production network environment is not easy, given the required coordination between facilities and transshipment. This paper presents a case study of production network modeling for a fresh fish product producer, with the cutting process as the main task in the filet production. Because the cutting process produces coupled finished products according to the specific cutting pattern, the objective product is produced together with its by-products, which are also profitable. The paper proposes a mixed integer linear programming (MILP) model aiming at maximizing the profit under the condition of meeting the demand, taking into consideration product profit, labor cost, transshipment cost, and transshipment time. The model is validated with real data from the producer.

Keywords: Multi-site production · MILP · Transshipment

1 Introduction

Production networks are used for improving operational efficiency, reducing production cost, and increasing competitiveness [1, 2]. Coordination between facilities is always necessary to balance the utilization of raw materials and capacity [3], especially in fresh food production [4]. Making an appropriate production plan becomes more complex with the increase of the production network size [5], the differences between production capacities, the number of workers, the use of parallel machines [6], etc.

This paper proposes a mixed integer linear programming (MILP) model for the production network of the case company in the fish industry, which produces fresh fish filets as its major profitable products. Because of the characteristics of filet production, the producer on the one hand must fulfill orders from sellers, and on the other hand needs to push the more profitable by-products to the market. In addition, the producer must pay workers for a full shift, regardless of production volume. For extra shifts on the weekend, the payment is 30% higher than on weekdays. Consequently, the central manager has to decide how to redistribute the raw materials within the production network, and even close some plants down to reduce the labor cost. The proposed model can thereby be used as a quantitative decision support tool for the central manager to coordinate production.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 439–447, 2019.
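The shift-payment rule described above can be made concrete with a small sketch (the rate, head-count and helper name are illustrative assumptions, not the producer's data):

```python
def shift_labor_cost(hourly_rate: float, workers: int, shift_hours: float,
                     weekend: bool = False) -> float:
    """Cost of one shift: workers are paid for the full shift regardless
    of production volume; weekend shifts are paid 30% more."""
    cost = hourly_rate * workers * shift_hours
    return cost * 1.3 if weekend else cost

weekday_cost = shift_labor_cost(250.0, 12, 7.5)
weekend_cost = shift_labor_cost(250.0, 12, 7.5, weekend=True)
assert weekend_cost == weekday_cost * 1.3
```

Because the full shift is always paid, producing a small volume in an open plant can cost more per kilogram than concentrating production and closing a plant, which is exactly the trade-off the model has to capture.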



2 Description of the Production Network

The case company has a production network with three types of facilities, i.e., big plants, small plants and receiving stations. The differences between them are shown in Table 1.

Table 1. Differences between facility types

Facility type       Receives raw materials   Production   Capacity   Workers
Big plant           Yes                      Yes          Higher     More
Small plant         Yes                      Yes          Lower      Fewer
Receiving station   Yes                      No           –          –

Fresh fish is received as raw material by every facility and is afterwards redistributed within the network for higher facility utilization and lower cost. Facilities are grouped geographically, which affects the transshipment time of the raw materials (Fig. 1).

Fig. 1. A production network

Cutting is the main task of this fresh fish production. The finished products are filets cut from the fish. Each fish can be cut by different patterns to make different products simultaneously. The major objective is therefore to meet the demand and maximize the profit margin by distributing raw materials to plants and selecting the most profitable cutting patterns.
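The coupling between products under a cutting pattern can be sketched as follows; the pattern names, conversion rates and prices are hypothetical, standing in for the conversion-rate constants c^{PE}_{pu} used later in the model:

```python
# Hypothetical unit sale prices (per kg) and cutting patterns: per kg of
# raw fish, each pattern yields a fixed, coupled mix of finished products.
prices = {"loin": 12.0, "tail_filet": 7.0, "trimmings": 2.0}
patterns = {
    "A": {"loin": 0.35, "tail_filet": 0.20, "trimmings": 0.15},
    "B": {"loin": 0.25, "tail_filet": 0.35, "trimmings": 0.10},
}

def revenue_per_kg(pattern: dict) -> float:
    """Revenue from cutting 1 kg of raw material with a given pattern:
    the objective product and its by-products are produced together."""
    return sum(rate * prices[u] for u, rate in pattern.items())

# Select the most profitable pattern for this (toy) price set.
best = max(patterns, key=lambda p: revenue_per_kg(patterns[p]))
```

In the full model the pattern choice is made jointly with the raw material distribution, since demand for each product must also be met.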

3 Production Planning Model

3.1 Notation

We first introduce the notation of sets, indices, constants, and variables in the model.

• Sets.
  L        set of lines in the production network, L = {1, 2, ..., N^L};
  L(l)     set of lines belonging to facility l, L(l) ⊆ L;

• Indices.
  i        raw material (i = 1, 2, ..., N^R);
  j        production line (j = 1, 2, ..., N^L);
  μ, n     cycle (μ, n = 1, 2, ..., N^C);
  k        working day (k = 1, 2, ..., N^d);
  p        cutting pattern (p = 1, 2, ..., N^P);
  u        finished product (u = 1, 2, ..., N^E);
  l, m     facility (l, m = 1, 2, ..., N^F).

• Constants.
  C_{ij}        capacity of raw material i at line j;
  d_u           demand (orders) for product u in a planning horizon;
  F^R           fill rate (service level);
  N^C           number of cycles (shifts);
  N^d           number of working days;
  N^E           number of finished product types;
  N^F           number of facilities;
  N^L           number of production lines;
  N^P           number of cutting patterns;
  N^R           number of raw material types;
  N_j^W         real number of workers at line j;
  N_j^{W,max}   maximum number of workers at line j;
  T_n^C         time of cycle (shift) n in hours;
  c_u^E         unit sale price of finished product u;
  c_n^L         unit labor cost in cycle n;
  c^T           coefficient vector of unit transshipment costs of raw materials;
  c_{l,m}^T     coefficient of unit transshipment cost from facility l to facility m;
  c_i^R         unit purchase cost of raw material i;
  c_{pu}^{PE}   conversion rate of end product u by pattern p;
  c_{ip}^{RP}   switch of pattern p for raw material i;
  q^R           quantity vector of direct supply of raw materials (before internal transshipment);
  q_{iln}^R     direct supply of raw material i at facility l in cycle n;
  s_{lm}        transshipment time from facility l to facility m, measured in cycles.

• Decision variables. The decision variables fall into three categories: expense of raw materials, internal transshipment of raw materials, and open/close status of facilities.
  x             raw materials allocated to facilities;
  y             finished products;
  δ             transshipped raw materials within the network;
  δ_{lmin}      transshipped raw material i from facility l to facility m in cycle n;
  δ̂_{mliμn}    received raw material i at facility l in cycle n that was transshipped from facility m in cycle μ, μ = n − s_{ml};
  ξ             open/close status of facilities in the network;
  ξ_{jn}        open/close status of line j in cycle n;
  Δq_{iln}^R    change of raw material i at facility l in cycle n;
  q̂_{iln}^R    supply of raw material i at facility l in cycle n after transshipment.


Objective Function

We define the objective function z = f(x, δ, ξ), where x, δ and ξ are respectively the decision variables for raw material expense, raw material transshipment and the open/close status of plants, and z is the total profit, i.e., the total sales z^E minus the raw material cost z^R, labor cost z^L, inventory cost z^I and transshipment cost z^T:

    z = z^E - z^R - z^L - z^I - z^T,

where z^I is a constant according to the producer, and

    z^E = \sum_{u} c_u^E \Big( \sum_{i,j,p,n} y_{uijpn} \Big),

    z^R = \sum_{i} c_i^R \Big( \sum_{j,p,n} x_{ijpn} \Big),

    z^L = \sum_{j,n} c_n^L \max\Big( T_n^C,\ \sum_{i,p} \frac{x_{ijpn}}{C_{ij}} \cdot \frac{N_j^{W,max}}{N_j^W} \Big),

    z^T = \sum_{l,m} c_{l,m}^T \delta_{l,m}.

Since workers are paid for a full shift regardless of production volume, the labor cost of each line in each cycle is bounded from below by the full cycle time T_n^C.
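A minimal numerical sketch of the profit computation (toy dimensions and made-up coefficients; the worker ratio N_j^{W,max}/N_j^W is taken as 1 here for brevity, and z^I as a constant, as stated above):

```python
# Toy instance: 1 product, 1 raw material, 1 line, 1 pattern, 2 cycles.
c_E = 10.0             # unit sale price of the finished product
c_R = 4.0              # unit purchase cost of the raw material
c_L = [50.0, 50.0]     # unit labor cost per cycle
c_T = 0.5              # unit transshipment cost for the single facility pair
T_C = [8.0, 8.0]       # cycle (shift) times in hours
C = 100.0              # line capacity, kg of raw material per hour
z_I = 20.0             # inventory cost, a constant according to the producer

x = [600.0, 300.0]     # raw material processed per cycle (kg)
y = [420.0, 210.0]     # finished product per cycle (kg)
delta = 100.0          # transshipped raw material (kg)

z_E = c_E * sum(y)
z_R = c_R * sum(x)
# Workers are paid at least the full shift: max(cycle time, production time).
z_L = sum(c_L[n] * max(T_C[n], x[n] / C) for n in range(2))
z_T = c_T * delta
z = z_E - z_R - z_L - z_I - z_T
```

Here both cycles finish within the shift (6 h and 3 h of work), so the full 8-hour shifts dominate the labor term.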


Constraint by Raw Material Availability. The total amount of processed raw material i must be less than or equal to the available raw material, i.e.,

    \sum_{n'=1}^{n} \sum_{j,p} x_{ijpn'} \le \sum_{n'=1}^{n} \sum_{l} q_{iln'}^R, \quad \forall i, n.




Constraint by Demand. The demand for product u should be met with a fill rate F^R:

    F^R d_u \le \sum_{i,j,n,p} y_{uijnp} \le d_u.


Constraint by Cycle Capacity Related to Open/Close Status Variables. The total production time in each cycle must be less than or equal to the corresponding cycle time:

    \sum_{i,p} \frac{x_{ijpn}}{C_{ij}} \le T_n^C \, \xi_{jn},

Where njn 2 f0; 1g representing the open/close status of a plant. If plant j is closed down in cycle n, njn ¼ 0, which means the corresponding capacity is also 0. Constraints by Transshipment. These constraints are set regarding the effect of raw materials re-distribution from the perspective of available quantity and time. 1. Each transshipment variable is non-negative. dlmin  0 2. Cumulative raw material i transported out from facility l should be less or equal to its cumulative direct supply in cycle n. The cumulative direct supply here excludes those transported in by internal transshipment. n X X


n¼1 mjm6¼l

n X



3. Raw material i expense at facility l in cycle n should be less or equal to its supply, including its direct supply and those transported in. n X X

xijpn & 

n¼1 j2LðlÞ;p

n   X qRlin þ DqRlin n¼1

where q^R_{lin} and Δq^R_{lin} are respectively the direct supply and the change of raw material i at facility l in cycle n. The change consists of both outbound and inbound movements of materials between facilities in each cycle. The former is represented by −Σ_m d_{lmin} while the latter is represented by Δq^T_{lin}, s.t.

Δq^R_{lin} = −Σ_m d_{lmin} + Δq^T_{lin}.

Δq^T_{lin} = Σ_m \hat{d}_{mlin}, meaning that all raw material i arriving at facility l in cycle n is aggregated, where \hat{d}_{mlin} = d_{mlin'}|_{n' = n − s_{ml}}. If materials shipped out cannot be received within the same planning horizon, they are rectified to 0 but the incurred costs are still counted, meaning that they only generate transshipment cost and are unavailable in the network. The following constraint is used to avoid such unnecessary transport:
Σ_{ν=1}^{n} ( Σ_{j∈L(l),p} x_{ijpν} + Σ_m d_{lmiν} + Σ_m \hat{d}_{mliν} ) ≤ Σ_{ν=1}^{n} q^R_{liν}, ∀ l, i, n,

where \hat{d}_{mliν} = d_{mliν'}|_{ν' = ν − s_{ml}}.
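The timing logic behind these constraints — a shipment sent from facility l in cycle n arrives s_{lm} cycles later, and is rectified to 0 if its arrival would fall outside the planning horizon — can be sketched as follows. The function and data are illustrative, not the paper's implementation:

```python
def received(shipments, transit, horizon):
    """Aggregate raw material received at each facility per cycle.

    shipments: {(l, m, i, n): qty} shipped from l to m in cycle n
    transit:   {(l, m): cycles}    transshipment time from l to m
    A shipment arrives at m in cycle n + transit[l, m]; if that falls
    outside the planning horizon it is dropped on the receiving side
    (unavailable), although its cost would still be counted.
    """
    inbound = {}
    for (l, m, i, n), qty in shipments.items():
        arrive = n + transit[l, m]
        if arrive <= horizon:  # receivable within this horizon
            inbound[m, i, arrive] = inbound.get((m, i, arrive), 0.0) + qty
    return inbound

ship = {("l1", "l2", "i1", 3): 8.0,   # arrives in cycle 5
        ("l1", "l2", "i1", 9): 4.0}   # would arrive in cycle 11 -> rectified
print(received(ship, {("l1", "l2"): 2}, horizon=10))
# {('l2', 'i1', 5): 8.0}
```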


MILP Model Construction

The constraints are written in the form Ax ≤ b so that conventional LP algorithms can be applied. The coefficient matrix A and the vector b are assembled from:

1. A_x and b_x, related to the raw-material expense decision variables only;
2. A_d and b_d, related to the transshipment decision variables only;
3. (A^{x,d}_x | A^{x,d}_d) and b^{x,d}, related to both raw-material expense and transshipment decision variables;
4. (A^{x,ξ}_x | A^{x,ξ}_ξ) and b^{x,ξ}, related to both raw-material expense and facility open/close status decision variables.

Stacking these blocks row-wise, with zeros in the positions of variable groups a block does not touch,

A = ( A_x ; A_d ; A^{x,d}_x A^{x,d}_d ; A^{x,ξ}_x A^{x,ξ}_ξ ),  b = ( b_x ; b_d ; b^{x,d} ; b^{x,ξ} ).
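A minimal sketch of this block-wise assembly, with zero padding for the variable groups a constraint block does not touch; the group names 'x', 'd', 'xi' and the tiny example rows are ours, purely for illustration:

```python
def assemble_blocks(blocks, widths):
    """Stack constraint blocks into one coefficient matrix A and vector b.

    blocks: list of (row_blocks, b_part), where row_blocks maps a variable
    group name ('x', 'd', 'xi') to its coefficient rows; groups absent
    from a block are zero-padded, mirroring the sparse structure of A.
    """
    order = ["x", "d", "xi"]
    A, b = [], []
    for row_blocks, b_part in blocks:
        for r in range(len(b_part)):
            row = []
            for g in order:
                if g in row_blocks:
                    row.extend(row_blocks[g][r])
                else:
                    row.extend([0.0] * widths[g])  # zero block
            A.append(row)
        b.extend(b_part)
    return A, b

widths = {"x": 2, "d": 1, "xi": 1}
blocks = [
    ({"x": [[1.0, 1.0]]}, [10.0]),                 # A_x x <= b_x
    ({"d": [[1.0]]}, [5.0]),                       # A_d d <= b_d
    ({"x": [[1.0, 0.0]], "d": [[-1.0]]}, [3.0]),   # coupled x, d rows
    ({"x": [[0.0, 1.0]], "xi": [[-8.0]]}, [0.0]),  # coupled x, xi rows
]
A, b = assemble_blocks(blocks, widths)
print(A)
print(b)
```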

4 Validation and Discussion

We used sample data from the case company to validate the proposed model. There are 11 facilities, including three big plants, four small plants and four receiving stations, which belong to three regions as shown in Fig. 2. Each facility has only one filleting line, whose capacity depends on the number of workers. We selected 10 major raw materials and five major finished products cut from them. Each raw material has only one cutting pattern in the test data. The conversion rates from cutting pattern to finished products are shown in Table 2.



Fig. 2. Tested production network and transportation time

Table 2. Conversion rates from cutting pattern to finished products

Plan   | Product 1 | Product 2 | Product 3 | Product 4 | Product 5 | Applied raw material no.
Plan 1 | 0.03      | 0.10      | 0.08      | 0.07      | 0.24      | 10
Plan 2 | 0.02      | 0.08      | 0.06      | 0.06      | 0.19      | 3, 6
Plan 3 | 0.07      | 0.08      | 0.08      | 0.08      | 0.21      | 9
Plan 4 | 0.05      | 0.06      | 0.07      | 0.06      | 0.17      | 1, 2, 5
Plan 5 | 0.08      | 0.08      | 0.08      | –         | 0.28      | 8
Plan 6 | 0.06      | 0.06      | 0.06      | –         | 0.22      | 4, 7
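As we read Table 2, the output of each finished product is the raw-material quantity scaled by the plan's conversion rate. A small sketch using the rates of plans 1 and 2; the helper name is ours:

```python
# Conversion rates as read from Table 2 (plans 1 and 2 only, products 1-5).
RATES = {
    1: {1: 0.03, 2: 0.10, 3: 0.08, 4: 0.07, 5: 0.24},
    2: {1: 0.02, 2: 0.08, 3: 0.06, 4: 0.06, 5: 0.19},
}

def yield_from_plan(plan, raw_kg):
    """Finished-product output (kg) when raw_kg of raw material is cut by a plan."""
    return {u: rate * raw_kg for u, rate in RATES[plan].items()}

out = yield_from_plan(1, 100.0)
print(round(out[5], 6))  # about 24.0 kg of product 5 from 100 kg cut with plan 1
```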

Figure 3(a) shows that big plant 2, small plant 2 and small plant 4 receive most of the raw materials. Figure 3(b) shows that the raw materials are used up within a planning horizon. In fact, small plant 1 received 345 kg of raw materials in shift 13, which is not visible in the figure; there is no production for them because the labor cost would exceed the possible profit and there is not enough time for other facilities to receive them by transport. Regarding the expense of raw materials, Fig. 3(c) shows that big plant 2, small plant 2 and small plant 4 are the top three, in accordance with the raw-material supply; the first uses more raw materials than its direct supply while the last two run oppositely. Figure 3(d) shows that raw materials are mainly shipped out from small plant 4, receiving station 1, receiving station 2 and small plant 2. These raw materials go to big plant 2 and big plant 1, respectively, as shown in Fig. 3(e). Figure 3(f) shows that, with the test data, the most frequently used facilities are, successively, those listed in Table 3. The results in Table 3 can be seen as a balance between lowering labor cost (using small plants as much as possible) and lowering transshipment cost (using nearby plants as much as possible).



Fig. 3. Test results: (a) cumulative supply of raw materials; (b) cumulative inventory of raw materials; (c) cumulative expense of raw materials; (d) cumulative outbound of raw materials; (e) cumulative inbound of raw materials; (f) cumulative utilization of lines

Table 3. Facility utilization

Facilities | Small plant 2 | Small plant 4 | Big plant 2 | Small plant 3 | Small plant 1 | Big plant 1
Workers    | 8             | 15            | 26          | 10            | 12            | 24

5 Conclusion

This paper presents a general MILP model addressing the production planning problem of a company in the fish industry, where cutting is the main production task. The model aims at maximizing profit in consideration of labor cost, product profit,



transshipment cost, and transshipment time. The model is validated with test data from the case company. Because historical production data were not available for the test, a comparison between the model's solution and the actual plan is missing, which limits the evaluation of the model's performance. This will be investigated in future work. Acknowledgement. This work was supported by the Research Council of Norway through the Qualifish project.

References 1. Behnamian, J., Fatemi Ghomi, S.M.T.: A survey of multi-factory scheduling. J. Intell. Manuf. 27, 231–249 (2016) 2. Chan, H.K., Chung, S.H.: Optimisation approaches for distributed scheduling problems. Int. J. Prod. Res. 51, 2571–2577 (2013) 3. Tomasgard, A., Høeg, E.: A supply chain optimization model for the Norwegian meat cooperative. Appl. Stoch. Program. 253–276 (2005) 4. Vahdani, B., Niaki, S.T.A., Aslanzade, S.: Production-inventory-routing coordination with capacity and time window constraints for perishable products: heuristic and meta-heuristic algorithms. J. Clean. Prod. 161, 598–618 (2017) 5. Behnamian, J.: Heterogeneous networked cooperative scheduling with anarchic particle swarm optimization. IEEE Trans. Eng. Manage. 64, 166–178 (2017) 6. Yu, Q., Nehzati, T., Hedenstierna, C.P.T., Strandhagen, J.O.: Scheduling Fresh Food Production Networks, pp. 148–156. Springer International Publishing (2017)

Product Design in Food Industry: A McDonald's Case

Polly Dugmore and Yi Wang
The School of Business, Plymouth University, Plymouth, UK
[email protected]

Abstract. McDonald’s is a well-established fast food chain globally however, this has resulted in numerous problems being faced. The decision-making problem being discussed in this report is that of food product design. An issue arose when customers began to question the meat quality of the chicken nuggets and burgers served by the chain. Consumers discovered that the meat being used was not 100% chicken breast and incorporated some less desirable cuts of meat. Overall, this fell as a food product design decision making problem by McDonald’s as they chose to use cheaper meat. Literature regarding the theory behind food product design will be evaluated and analysed in order to deduce quality function deployment (QFD) as a potential solution to the problem faced by McDonald’s. The QFD method will be scrutinised and both benefits and drawbacks of the concept discussed in order to demonstrate the value that this chosen solution could provide. Keywords: Food industry Business decision

 Quality function deployment  Product design

1 Introduction

In the food industry, the food product quality that consumers perceive should be considered at the product design stage, as this ensures adequate quality in the final product. This quality element is necessary to ensure sales: without what consumers perceive to be a decent level of quality, the product will not reach maximum sales, due to lower customer satisfaction [1]. Sajadi and Rizzuto [2] investigate how consumer perception can influence both satisfaction and loyalty to fast food chains. Their key finding suggests that consumer perception of meal quality in fast food chains such as McDonald's is very much subjective. This said, it was found that some measures could be standardised to a certain minimum level of quality in order to save costs [3]. One alternative approach to food product design is to work backwards from the primary consumers: establish what characteristics are desired from the food product and incorporate these into the initial design. Costa et al. [4, p. 403] suggest that in order to design a consumer-oriented food product it is highly important to "translate all of the subjective customer needs in order to create an overall objective product''. © Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 448–452, 2019.

Product Design in Food Industry


This theoretical approach would identify the choice criteria that consumers use when evaluating and deciding between products. Effectively using the views and opinions of consumers prior to the realisation of a product can reduce the problems that arise once the product has been launched [5]: the desired characteristics have already been made clear and can be incorporated into the food product design process. Bogue and Sorenson [6] state that large organisations require not only information from internal sources when evaluating potential new product development opportunities; external sources are also crucial for making well-informed business decisions. Customer knowledge is considered the most valuable resource when designing a new product [7]. It allows the company to understand and acknowledge customer desires at the concept stage of development. Effective use of customer knowledge can not only ensure customer satisfaction with a product once it is developed but also serve as a way of generating innovative ideas for future products, ensuring long-term benefit to the organisation [8]. Rudolph [9] suggests that re-design and re-development may be necessary to overcome food product design problems, proposing a milestone structure that clearly defines points at which to review progress: in the product definition phase, the implementation phase and the product introduction phase. On the other hand, Ortega et al. [10] suggest that the two key tasks of food product design are to develop the product characteristics and then to sell the product to consumers, requiring technological and marketing capabilities respectively. Companies that adopt this philosophy focus upon exploitation and exploration.

2 Quality Function Deployment

Costa et al. [4] suggest that QFD is an adaptable approach for translating desired customer quality demands and incorporating them into the product development process. Further, Benner et al. [12] explain that QFD involves the construction of numerous matrices. Customer quality demands must be interpreted in order to develop the product most appropriate to the consumer and thereby, hopefully, attain high sales [13]. According to Baran and Yildiz [13], QFD can provide organisations with a competitive advantage over others, particularly in the food industry, as it allows a quick response to changing customer demands. The method enables successful product development and design because customer requirements are integrated into the product design process from the outset [14]. The intent of QFD for food-related products is to improve customer satisfaction and loyalty [13]. Benner et al. [15] believe that QFD can improve the product not only in the near future but also in the longer term. The use of QFD in food product design ensures that quality is instilled into the product itself, which can be referred to as "designed-in quality'' as opposed to "inspected-in quality'' [16, p. 469]. This moves the organisation away from carrying out quality inspections once the product is complete, an approach that would potentially entail both higher costs and more time compared to designing products by analysing customer needs at the start.


P. Dugmore and Y. Wang

One model within QFD is the House of Quality (HOQ). Martins and Aspinwall (2001) state that the HOQ, also referred to as the A-1 matrix, is considered the initial building platform for QFD. The HOQ matrix helps researchers identify customer requirements (the 'whats') and the technical characteristics of the product that influence those requirements (the 'hows') (Shen et al. 2000). The matrix format provides a structured approach to transforming customer requirements and ensures prioritisation of certain technical measures, which can then be implemented in the development process [17]. Only the 'hows' that are highly prioritised by the company, and therefore pose a risk to organisational performance, are taken to the next stages of QFD [18]. Cristiano et al. [19] add that the HOQ clearly links customer requirements to process requirements. According to Kumar et al. [11], the generic nature of the methodology for developing an HOQ means the concept can be applied to a wide variety of situations and is not limited to specific industries. Research into the use of HOQ in the food industry is limited; however, its ability to be "adapted to the requirements of a particular problem, makes it a very strong and reliable system to use in several sectors'' [20, p. 21]. On the other hand, there are some negative factors in using HOQ, such as the large number of consumer demands [21], possible interactions between different attributes, and the fact that some product requirements can affect more than one type of consumer demand [22]. The use of the HOQ model enables organisations to maintain their competitive position and can aid strategic growth in the market, provided that the matrix is constructed and analysed with the company's strategic goals in mind [23].
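The prioritisation step of the HOQ matrix can be sketched as a weighted sum: each technical characteristic's score is the sum of customer-importance weights multiplied by relationship strengths. The WHATs, HOWs and the strength values below are hypothetical, chosen only to illustrate the mechanics:

```python
def hoq_priorities(weights, relationships):
    """House of Quality prioritisation: score each technical characteristic
    (HOW) as the customer-importance-weighted sum of its relationship
    strengths to the customer requirements (WHATs), highest first."""
    scores = {}
    for how, column in relationships.items():
        scores[how] = sum(weights[what] * strength
                          for what, strength in column.items())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative WHATs/HOWs for a chicken-nugget redesign (hypothetical data).
weights = {"100% breast meat": 5, "taste": 4, "price": 3}
relationships = {
    "meat sourcing spec":   {"100% breast meat": 9, "taste": 3, "price": 1},
    "recipe reformulation": {"100% breast meat": 3, "taste": 9, "price": 2},
    "supplier cost audit":  {"100% breast meat": 1, "taste": 1, "price": 9},
}
for how, score in hoq_priorities(weights, relationships):
    print(how, score)
```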

3 Critical Evaluation of the Solution

QFD benefits an organisation by allowing the company to gather consumer opinions and understand what is desired from the proposed product [4]. On the other hand, a limitation of QFD is the difficulty of distinguishing between basic and superior qualities [12]: it may be challenging to separate the quality factors that absolutely must be integrated into a food product from the qualities that go beyond this level and offer an extra selling point and potential competitive advantage. Furthermore, Cardoso et al. [24] mention another drawback, the difficulty of effectively interpreting customer needs, as qualitative data is subjective and may be perceived differently by each individual. Not only would sales increase, but scandals would also be minimised, as any issues regarding consumer moral values or expectations would have been identified at the concept stage, minimising the risk of rejection [18]. Although research on and application of QFD in the food design industry is limited, potential design cost reductions and reduced design time have been reported [25]. The concept of QFD was originally developed and widely adopted by manufacturing companies for product development purposes [26]. There is a limitation in the sense that it may be difficult to fulfil all of the customer requirements expressed [21]. However, Jeong and Oh [26, p. 376] suggest that the QFD method should be viewed as a "comprehensive model integrating market



demands into internal research and development (R and D) activities'', maximising the potential for a positive reception of the product by customers once it is introduced into the market. A further limitation of QFD is that implementation issues may arise if external strategic planning teams are introduced [27], because people external to the company may not fully understand the organisational strategy.

4 Conclusion

The decision-making problem of food design for McDonald's regarding its burger meat quality was initially introduced. The literature review highlighted several key aspects of the topic area. A recurring idea was that, during new food product design, it is important to consider what consumers will believe to be high quality and wish to purchase. The overall conclusion to be drawn from the literature review is the importance of acknowledging and scrutinising customer feedback and needs regarding food products. Furthermore, integrating the criteria developed by consumers into the food product design process has the potential to maximise customer satisfaction, loyalty and sales. It became apparent that being able to understand and interpret consumer quality demands can enable an organisation to do more than simply please customers: there is an opportunity to improve brand loyalty, customer satisfaction and overall company reputation, aiding the development of a competitive advantage in the market.

References 1. Awati, K.: On the limitations of scoring method for risk analysis. Available at: Reich, Y. and Paz, A., 2008. Managing product quality, risk, and resources through resource quality function deployment. J. Eng. Des. 19(3), 249–267 (2009) 2. Etemad-Sajadi, R., Rizzuto, D.: The antecedents of consumer satisfaction and loyalty in fast food industry: a cross-national comparison between Chinese and Swiss consumers. Int. J. Qual. Reliab. Manag. 30(7), 780–798 (2013) 3. Mattsson, J.: Food product development: a consumer- led text analytic approach to generate preference structures. Br. Food J. 109(3), 246–259 (2007) 4. Costa, A., Dekker, M., Jongen, W.: An overview of means-end theory: potential application in consumer-oriented food product design. Trends Food Sci. Technol. 15(7–8), 403–415 (2004) 5. Schifferestein, H.: Employing consumer research for creating new and engaging food experiences in a changing world. Curr. Opin. Food Sci. 2, 27–32 (2015) 6. Bogue, J., Sorenson, D.: Managing customer knowledge during the concept development stage of the new food product development process. J. Int. Food Agribus. Mark. 21(2–3), 149–165 (2007) 7. Ryynanen, T., Hakatie, A.: We must have the wrong consumers - a case study on new food product development failure. Br. Food J. 116(4), 707–722 (2014)



8. Garcia, M., Carlos, P., Felipe, I., Briz, J., Morais, F., Navarro, M.: Quality function deployment: can improve innovation efficiency in the food industry? Int. Eur. Forum Syst. Dyn. Innov. Food Netw. 15(7), 23–31 (2007)
9. Rudolph, M.: The food product development process. Br. Food J. 97(3), 3–11 (1995)
10. Ortega, A., Garcia, M., Santos, V.: Effectuation-causation: what happens in new product development? Manag. Decis. 55(8), 1717–1735 (2017)
11. Kumar, A., Jiju, A., Tej, S.: Integrating quality function deployment and benchmarking to achieve greater profitability. Benchmarking Int. J. 13(3), 290–310 (2006)
12. Cohen, L.: Quality Function Deployment: How to Make QFD Work for You. Addison-Wesley, Reading (1995)
13. Baran, Z., Yildiz, M.S.: Quality function deployment and application on a fast food restaurant. Int. J. Bus. Soc. Sci. 6(9), 122–131 (2015)
14. Ozgener, S.: Quality function deployment: a teamwork approach. Total Qual. Manag. Bus. Excel. 14(9), 969–978 (2003)
15. Benner, M., Geerts, R., Linnemann, A., Jongen, W., Folstar, P., Cnossen, H.: A chain information model for structured knowledge management: towards effective and efficient food product improvement. Trends Food Sci. Technol. 14(11), 469–477 (2003)
16. Guinta, L., Praizler, N.: The QFD Book: The Team Approach to Solving Problems and Satisfying Customers Through Quality Function Deployment. Amacom, New York (1993)
17. Park, S., Ham, S., Lee, M.: How to improve the promotion of Korean beef barbecue, bulgogi, for international customers. An application of quality function deployment. Appetite 59(2), 324–332 (2012)
18. Chan, L., Wu, M.: Quality function deployment: a literature review. Eur. J. Oper. Res. 143(3), 463–497 (2002)
19. Cristiano, J., Liker, J., White, C.: Customer-driven product development through quality function deployment in the U.S. and Japan. J. Prod. Innov. Manag. 17(4), 286–308 (2003)
20. Pelsmaeker, S., Gellynck, Z., Delbaere, C., Declercq, N., Dewettinck, K.: Consumer-driven product development and improvement combined with sensory analysis: a case-study for European filled chocolates. Food Qual. Prefer. 41, 20–29 (2015)
21. Elias, A., Nieta, G., Jesus, A., Marcos, G., Cendon, A., Limas, J.: A new device for dosing additives in the food industry using quality function deployment. J. Food Process. Eng. 37(4), 387–395 (2014)
22. Lev, B., Shin, W.: Quality function deployment: integrating customer requirements into production design. Interfaces 22(4), 117–118 (1992)
23. Lu, M.: House of quality in a minute: quality function deployment. TQM Mag. 19(4), 379 (2006)
24. Cardoso, J., Filho, N., Miguel, P.: Application of quality function deployment for the development of an organic product. Food Qual. Prefer. 40(A), 180–190 (2015)
25. Kaulio, M.: Customer, consumer and user involvement in product development: a framework and a review of selected methods. Total Qual. Manag. 9(1), 141–149 (1998)
26. Jeong, M., Oh, H.: Quality function deployment: an extended framework for service quality and customer satisfaction in the hospitality industry. Hosp. Manag. 17(4), 375–390 (1998)
27. Killen, C., Walker, M., Hunt, R.: Strategic planning using QFD. Int. J. Qual. Reliab. Manag. 22(1), 17–29 (2005)

Research and Practice of Bilingual Teaching in Fundamental of Control Engineering

Wangping Wu¹ and Tongshu Hua²
¹ School of Mechanical Engineering, Changzhou University, Changzhou 213164, People's Republic of China
[email protected]
² Changzhou University Huaide College, Jingjiang 214513, People's Republic of China

Abstract. This work is based on the teaching practice of Fundamental of Control Engineering. It summarizes and analyzes the teaching effect in terms of the teaching books, teaching methods and evaluation methods used. At the same time, some suggestions for improving the effect of bilingual teaching in Fundamental of Control Engineering are given.

Keywords: Bilingual teaching · Control Engineering · Research and practice

1 Introduction

College students need to become internationalized talents with an international view, consciousness and communication ability. Beyond fluency in oral English, students should be able to read and study the English vocabulary of their major, which is one of the important goals of China's higher-education reforms. Bilingual teaching has become one of the most popular modes for training professionals who will be competent in the future [1, 2]. Fundamental of Control Engineering is one of the important basic courses in engineering colleges and provides a foundation for understanding specialty English. For the major of Material Forming and Control Engineering, with one class of 30 students, bilingual teaching in Fundamental of Control Engineering was offered alongside common foreign-language teaching for the first time. We have systematically researched and explored this approach since 2014. The authors have taught this course to undergraduate students with bilingual teaching over three semesters of practice, with good results. We summarize and analyze the teaching effect in terms of the teaching books, teaching methods and evaluation methods.

2 Teaching Books

Teaching books are materials used not only by teachers in teaching but also by students as a learning reference. The teaching materials for the general technology have been carefully planned and compiled for undergraduate students in accordance with the requirements of the obligatory course and of the bilingual teaching mode [3]. © Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 453–456, 2019.


W. Wu and T. Hua

To achieve a good teaching effect, the selection of teaching books plays an important role because students' English levels differ. A textbook written in English contains a great deal of specialized vocabulary, which is difficult for students with a poor level of English; over time, such students lose interest in the course and end up learning neither the specialized English nor the professional knowledge well. Therefore, the following books have been introduced and are both used in the bilingual teaching. (i) One book is written in Chinese, titled "Fundamental of Control Engineering" by Jiwei Wang and Zhenshun Wu. It serves as a reference book for students, especially those with poor English. (ii) Excellent original foreign textbooks have been imported into China, which naturally introduces advanced teaching ideas and methods; such textbooks can improve and strengthen students' ability to use the English language. We chose the book "Elements of Control System" by Sudhir Gupta. This book is suitable for two- or four-year college programs requiring an in-depth understanding of control systems, and for a one-semester university course at the freshman level. The book has three obvious advantages [4]: first, the definitions of concepts are strict and detailed, and less derived from pure mathematics; second, the textbook has a strong engineering-practice background and a large number of examples combined with engineering practice; third, the English is fluent and easy to read. At the same time, the course Material Processing Professional English is scheduled in the same semester, which helps students learn some professional English vocabulary early; in turn, the course of control engineering is strengthened and improved to some extent.

3 Teaching Methods

PowerPoint courseware is presented to the students, especially for principles illustrated with pictures. The blackboard is used at times when the knowledge is difficult for students to understand, for example when the time domain is transformed to the s-domain by the Laplace transform, and in Chapter 7, Common Transfer Functions. Examples are given in the classroom: control system engineering offers many examples, from easy mathematical models to difficult ones. Students are interested in the examples, which are easy to understand and focus on in class. We state definitions and theorems in English, and then use Chinese to explain them and to analyze examples. The difficult and important points are mainly taught in the classroom.

3.1 English Multimedia Teaching

In class, resources from multimedia and the network should be fully used. Many pictures, animations and videos are inserted into the courseware, which makes the complex and abstract theoretical knowledge easier to understand.

Research and Practice of Bilingual Teaching


Some videos can be downloaded from MOOC platforms in China and from YouTube abroad. English multimedia teaching improves students' interest in learning.

3.2 Listing Examples for Teaching

A familiar control system is introduced as the teaching example. The control system is first introduced by establishing mathematical models of each of its parts, then drawing the block diagram to obtain the transfer function of the system, then analyzing its stability, and finally solving for the time response, steady-state error and overshoot. Further, the root locus and Bode diagram are produced with MATLAB software, and different correction methods are applied to improve the stability of the system. Teaching through such worked examples deepens understanding of the professional knowledge.

3.3 Discussion in the Class

When we meet difficult problems in the teaching process, we organize small groups of 3–5 students to discuss the problem in English and Chinese. This teaching method helps improve students' English ability.
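The worked example of Sect. 3.2 (time response and overshoot) can also be illustrated outside MATLAB. The sketch below numerically integrates the unit-step response of a standard second-order system and estimates its percentage overshoot; the parameters are illustrative, not from the course materials:

```python
def step_overshoot(zeta, wn=1.0, dt=1e-3, t_end=20.0):
    """Simulate the unit-step response of G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
    by semi-implicit Euler integration and return the percentage overshoot."""
    y, v = 0.0, 0.0   # output and its time derivative
    peak = 0.0
    t = 0.0
    while t < t_end:
        a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v  # y'' for a unit step input
        v += a * dt
        y += v * dt
        peak = max(peak, y)
        t += dt
    return 100.0 * max(peak - 1.0, 0.0)

# zeta = 0.5 gives roughly 16% overshoot
# (analytic: 100 * exp(-pi*zeta/sqrt(1 - zeta**2)) ≈ 16.3%)
print(round(step_overshoot(0.5), 1))  # close to 16.3
```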

4 Evaluation Methods

Examination is not only an assessment method but, more importantly, a form that reflects students' grasp of the knowledge. Examination aims at the reinforcement of education, relieves stress, and inspires the passion of the students [5].

4.1 Diversification of Evaluation

We evaluate students' learning from two parts: usual performance and examination results. For the usual performance, a small quiz and a mid-term examination are given in class after several chapters, and questions are sometimes asked for students to answer. Attendance, assignments, communication in the foreign language, the small quiz and the mid-term examination account for 10% of the course grade; homework accounts for another 10%; the experimental report accounts for 10%; and the final examination accounts for 70%.

4.2 Strengthen the Evaluation of English Ability

One of the aims of bilingual teaching is to improve English ability in oral expression, reading, listening and writing. In the teaching process, the textbook, courseware and reference materials are in English. We try to talk and communicate in English, which helps improve oral English in an English-learning environment. We ask students to do homework in English and to write it on the blackboard or on the computer.
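The component weighting described in Sect. 4.1 amounts to a simple weighted average. A sketch in Python, where the component names are ours and we read "final course grade accounts for 70%" as the weight of the final examination:

```python
# Weights as read from Sect. 4.1: usual performance 10%, homework 10%,
# experimental report 10%, final examination 70%.
WEIGHTS = {"participation": 0.10, "homework": 0.10, "experiment": 0.10, "final": 0.70}

def course_grade(scores):
    """Weighted course grade; each component score is on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

g = course_grade({"participation": 90, "homework": 80, "experiment": 85, "final": 75})
print(round(g, 2))  # 78.0
```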



5 Teaching Effects

At present, many students are reluctant to answer questions actively in class, which is a big problem. Therefore, teachers must call on students by name to answer. We promise that a correct answer earns extra points toward the ordinary-performance grade. At the same time, we ask easy questions of students with poor English and difficult questions of students with good English, so that every student participates in the activity. After a short period, some students became interested in the course and could read the textbook; students began to answer questions actively, and the quality of homework improved to some extent. Sometimes students do not understand a question, so their answer is not right; they like to ask teachers about these difficult questions, obtain some guidance, and finally arrive at a good answer. At the end of the term, students achieve good results, and only 1–3 students fail the exam each semester.

6 Conclusions

It is urgent for teachers to change the teaching mode to improve students' ability of practical use. Teachers should not only lecture on the necessary theories and skills in class but also use communicative teaching methods and various teaching strategies to improve students' English-language application and communicative competence. On the other hand, the study time should be extended: because the study time is short, some examples cannot be covered in class. Acknowledgments. This work has been supported by the Project of Higher Education and Teaching Reform of the School of Mechanical Engineering (Grant Number: JXJY2017011) from Changzhou University.

References 1. Liu, T.Y.: Research on bilingual teaching in modern control engineering. J. High. Educ. 12, 40–41 (2015) 2. Yuan, S.M., Liu, Q.: The probe and practice of bilingual teaching. J. High. Educ. Res. 34, 83–85 (2011) 3. Yang, Y.J.: Practice and suggestion on bilingual teaching in fundamental of control engineering. Theory Pract. Educ. 27, 188–189 (2007) 4. Gupta, S.: Elements of Control System. China Machine Press (2004) 5. Cao, D.L., Su, K.Q., Liang, B.S., Wang, R.: Research and practice on engineering mathematical bilingual. Procedia Eng. 15, 4105–4109 (2011)

Research on Assembly Line Planning and Simulation Technology of Vacuum Circuit Breaker

Wenhua Zhu¹ and Xuqian Zhang²

¹ Engineer Training Center, Shanghai Polytechnic University, Shanghai, China
[email protected]
² School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
[email protected]

Abstract. In order to improve the production level of vacuum circuit breakers, this paper studies the assembly line planning and simulation technology of the vacuum circuit breaker. First, the structure and assembly process of the vacuum circuit breaker are analyzed. Based on a study of assembly line layout planning methodology, a modified systematic layout planning (SLP) method is put forward, and simulation methods are applied to verify and optimize the layout scheme. Mathematical modeling and line balancing methods are studied, and a physical simulation method is proposed to present the assembly line's spatial relationships both statically and dynamically. Finally, a case study illustrates the feasibility of the proposed approach for improving the production efficiency of the vacuum circuit breaker.

Keywords: Vacuum circuit breaker · Assembly line planning · Mathematical simulation · Physical simulation

1 Foreword As a very important piece of electricity distribution equipment, the vacuum circuit breaker has seen a great expansion in market demand. A survey report shows that production level is the key factor affecting the development of Chinese vacuum circuit breakers [1]. To meet the market demand for vacuum circuit breakers, it is urgent to speed up production and improve the production level. Many vacuum circuit breaker manufacturers still use a scattered approach to assembly, which seriously reduces production efficiency and makes it hard to meet production demand. In view of these assembly line planning problems, this paper proposes vacuum circuit breaker assembly line planning methods and assembly line simulation techniques.

Shanghai Pujiang talent plan project (No. 16PJC040). © Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 457–466, 2019.



2 Related Work An assembly line is an effective combination of a material system and operating devices, and is widely used in the manufacturing industry [2]. Planning and layout design is the main work in designing an assembly line. According to the product process, capacity needs and projected costs, it determines the quantity and spatial location of the assembly equipment [3]. A well-planned assembly line can not only ensure normal production needs, but also take full advantage of the production capacity of the equipment and reduce material transport time [4]. Many layout planning methods have been applied to production line design since the 1960s, such as the mathematical graphical method, layout model analysis and interactive graphical analysis [5–8]. The Systematic Layout Planning (SLP) method is a representative method proposed by Richard Muther in 1961 [9]. It is a process-oriented layout method and is widely used in both industry and academia. Illinois State University has studied production line layout planning: following the procedures of the SLP method, the production line is divided into several units and a relation diagram is built; a reasonable layout is then generated by a computer program, and a corresponding evaluation is provided [10]. Yuhang Chen from Beijing Jiaotong University studied production line layout design both theoretically and practically; by analyzing the relations between unit areas according to the SLP method, they planned the facility and storage layout [11]. Although there is much research on production line layout, and the SLP method is the most widely used, traditional SLP methods cannot satisfy the demands of modern enterprises' production line planning. In view of these problems, this paper focuses on how to improve the manufacturing level of vacuum circuit breakers from the perspective of assembly line planning and simulation.

3 Vacuum Circuit Breaker Assembly Line Planning and Simulation

3.1 Structure and Assembly Process

The assembly sequence greatly depends on the product structure, so it is significant to analyze the structure and assembly process of vacuum circuit breakers. 3.1.1 Vacuum Circuit Breaker Structure The vacuum circuit breaker is designed on the modular design principle: the breaker is divided into many modules and each module is standardized, which also supports large-scale standardized production. The outside and inside structures of the BE-12/T-type vacuum circuit breaker are shown in Figs. 1 and 2.


Fig. 1. Vacuum circuit breaker outside structure




Fig. 2. Vacuum circuit breaker inside structure




3.1.2 Vacuum Circuit Breaker Assembly Process Assembly work is carried out in accordance with the process sequence. In an assembly line, the product is assembled step by step at stations, each of which is responsible for one or a few assembly processes. The vacuum circuit breaker assembly process is shown in Fig. 3.

Fig. 3. Vacuum circuit breaker assembly process


3.2 Assembly Line Planning and Layout Technique

The purpose of assembly line planning and layout is to make reasonable arrangements of stations and facilities, determining the scale of the production system as well as its configuration. 3.2.1 Assembly Line Layout Principle For the rational allocation of system resources, the planning and layout of the vacuum circuit breaker assembly process follow certain principles when each module's assembly line is analyzed, planned and laid out. The shortest-route principle, the reasonable-area principle, specialization, synergy, relevance, division of labor, flexibility and security are the main principles of assembly line planning. Because of changes in market demand or equipment capability, the flexible layout of the vacuum circuit breaker assembly line should also be taken into account.



3.2.2 Design Mode of SLP and the Improved Process Systematic Layout Planning (SLP) is a strongly logical layout design method, which combines the logistics and the relevancy of work units to obtain a reasonable layout scheme. It is based on the system process and specific business data. Through quantitative and qualitative analysis, it ascertains the relationships of the various functional areas and of logistics. In combination with non-logistics analysis, it can generate a scheme with compact space and smooth logistics [12, 13]. The SLP method summarizes planning and layout issues into five basic elements: product, quantity, route, supporting service and time [14]. These five elements are the key to solving the planning and layout issue. The design mode of the SLP method is shown in Fig. 4.

Fig. 4. SLP method procedures

Fig. 5. Modified SLP method procedures

There are some deficiencies, such as a lack of logistics strategy planning, flexibility and dynamic route analysis, when the SLP method is applied to vacuum circuit breaker manufacturing system layout [15, 16]. In consideration of these problems, a modified SLP method is proposed. Combining the strengths of the traditional SLP method with the characteristics of the vacuum circuit breaker manufacturing system, the layout design process using the modified SLP method is shown in Fig. 5. With the modified SLP we can find optimal initial solutions more efficiently, experience the layout intuitively in 3D simulation, and analyze the dynamic route of logistics.
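The closeness-rating step at the heart of the SLP procedure can be sketched in a few lines of code. The rating letters follow the conventional SLP vocabulary (A/E/I/O/U/X); the unit names, numeric scores and weighting factor below are illustrative assumptions, not values from this paper:

```python
# Hypothetical sketch of the SLP closeness-rating step: logistics and
# non-logistics ratings for each pair of work units are combined into a
# single score, and pairs are ranked for placement. Scores and weight
# are illustrative assumptions.

RATING_SCORES = {"A": 4, "E": 3, "I": 2, "O": 1, "U": 0, "X": -1}

def combined_rating(logistics, non_logistics, weight=0.5):
    """Weighted sum of logistics and non-logistics closeness scores."""
    return (weight * RATING_SCORES[logistics]
            + (1 - weight) * RATING_SCORES[non_logistics])

# Pairwise ratings between assembly-line work units (invented data).
pairs = {
    ("storage", "pre-assembly"): ("A", "I"),
    ("pre-assembly", "main assembly"): ("A", "E"),
    ("main assembly", "testing"): ("E", "A"),
    ("storage", "testing"): ("U", "O"),
}

# Rank unit pairs by combined closeness; high-scoring pairs should be
# placed adjacent in the relative position diagram.
ranked = sorted(pairs.items(),
                key=lambda kv: combined_rating(*kv[1]),
                reverse=True)
for (u, v), (log_r, non_r) in ranked:
    print(f"{u} <-> {v}: {combined_rating(log_r, non_r):.1f}")
```

In a full SLP pass this ranking would feed the relation diagram and the subsequent area correlation diagram; the sketch only shows how the quantitative and qualitative analyses can be merged into one ordering.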




3.3 Assembly Line System Simulation Techniques

Mathematical simulation focuses on the logical sequence and the nodes of the process, while physical simulation is committed to building the physical-world scenario in static and dynamic space [17]. 3.3.1 Assembly Line Mathematical Simulation Techniques A mathematical model of an assembly line contains all kinds of equipment and process information; this paper decomposes the whole assembly line model into many hierarchical models, as shown in Fig. 6. Object-oriented modeling and programming languages are a good way to achieve mathematical modeling, and some commercial production system simulation software can achieve these functions.

Fig. 6. Assembly line hierarchy model
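As a minimal illustration of what such a mathematical model computes, the following sketch simulates a serial line of stations as discrete events. The process times and part count are invented example data, not figures from the case study; commercial tools such as Plant Simulation provide far richer models:

```python
# Toy discrete-event model of a serial assembly line: each part visits
# the stations in order, and each station processes one part at a time.
# Process times (minutes) and part count are illustrative assumptions.

def simulate_line(process_times, n_parts):
    """Return the makespan and per-station utilization of a serial line."""
    free_at = [0.0] * len(process_times)   # when each station is next free
    busy = [0.0] * len(process_times)      # accumulated processing time
    makespan = 0.0
    for _ in range(n_parts):
        t = 0.0                            # every part is released at time 0
        for s, pt in enumerate(process_times):
            start = max(t, free_at[s])     # queue until the station is free
            t = start + pt
            free_at[s] = t
            busy[s] += pt
        makespan = max(makespan, t)
    utilization = [b / makespan for b in busy]
    return makespan, utilization

makespan, util = simulate_line([4.0, 6.0, 5.0], n_parts=10)
print(makespan, util)  # the 6.0-minute station is the bottleneck
```

Utilization data of this kind, obtained from the simulation, is exactly the input that the line balancing step consumes.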

Because of differences in process time, the workload differs between the stations into which the processes are divided. The purpose of assembly line balancing is to make the operation time of each station as similar as possible by adjusting the load of each station, and at the same time to reduce the idle time of each station to a minimum [18]. Once the equipment utilization data is obtained from the simulation, assembly line balancing and optimization can be carried out in accordance with the rules of eliminate, combine, rearrange and simplify (ECRS). 3.3.2 Assembly Line Physical Simulation Techniques It is possible to simulate the characteristics of the physical world in virtual models. Physical simulation can accurately demonstrate device shapes and positions as well as spatial relations; it can also simulate all the movements in an assembly line and present a working assembly line vividly. Through physical simulation, most layout deficiencies and material flow problems can be found and avoided before implementation [19]. Based on the 3D geometry models of products, equipment, plant and workers, assembly line physical simulation presents the plant scene and simulates the assembly line movements through a series of control and feedback information [20]. It imitates the real material flow and process procedures by controlling the movements of parts and equipment. Interaction is an important characteristic of physical simulation, and is the way for the designer to participate in the manufacturing process in real time. For instance, when an equipment item is selected with the mouse, it can be highlighted and its process operations demonstrated.
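The balancing objective described in Sect. 3.3.1 can be made concrete with the usual line balance efficiency measure. The station times below are assumed example numbers, not data from the case:

```python
# Illustrative sketch: line balance efficiency before and after an
# ECRS-style reassignment of tasks. Station times are assumed data.

def balance_rate(station_times):
    """Total work content divided by (number of stations x cycle time)."""
    cycle = max(station_times)             # the slowest station sets the pace
    return sum(station_times) / (len(station_times) * cycle)

before = [4.0, 6.0, 5.0, 3.0]              # uneven station workloads
after = [4.5, 4.5, 4.5, 4.5]               # after combining/rearranging tasks

print(f"before: {balance_rate(before):.0%}")   # before: 75%
print(f"after:  {balance_rate(after):.0%}")    # after:  100%
```

Eliminating, combining, rearranging or simplifying tasks moves work content from overloaded to underloaded stations, which raises this ratio and shrinks idle time at the same cycle time.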



4 Application Example The BVE-12/T-type vacuum circuit breaker is the research object of this paper, and the methods proposed above are applied to plan its assembly line. Taking the software Plant Simulation as the assembly line discrete event simulation tool, the bottleneck and balance of the assembly line are analyzed. Then the physical simulation scene is created in an interactive way, with the software 3DVIA Studio as the physical simulation development platform, to simulate the layout scheme and assembly process of the assembly line.

4.1 Assembly Line Planning and Layout

The modified SLP method is applied to plan the assembly line of the vacuum circuit breaker and build up the preliminary program. According to the characteristics of vacuum circuit breakers and the output requirement, the product-principle layout form is adopted. By qualitative and quantitative analysis of the logistics and non-logistics relationships between operating units, the consolidated relation diagram of operating units is built up. According to the mutual positions of operating units indicated by the relation diagram, the relative position diagram of operating units is drawn. Scaling the position diagram to the real size of each area gives the operating unit area correlation diagram. The resulting vacuum circuit breaker assembly line layout scheme is shown in Fig. 7.

Fig. 7. Vacuum circuit breaker assembly line layout scheme


4.2 Assembly Line Mathematical Simulation

Using the object-oriented modeling method and a high-level programming language, the creation of the assembly line mathematical model is realized and the entire assembly line is visualized. Figure 8 shows the view layer of the model, which represents the assembly line equipment, stations and so on.



Fig. 8. Assembly line visual model

The mathematical model of the vacuum circuit breaker assembly line includes the work schedule of the equipment in the system, such as machine start time, downtime and rest time. The production scheduling plan is shown in Fig. 9 and the station process time assignment is shown in Fig. 10. The logistics scheduling policies are mainly implemented by programming in the SimTalk language. Figure 11 shows the logistics path selection method for spare parts.

Fig. 9. Production scheduling

Fig. 10. M0 station process time assignment

Fig. 12. Station workload analysis

Fig. 11. Material path selection method




The scheme is balanced and optimized according to the ECRS rules and then simulated again, basically achieving satisfactory performance. After balancing and optimization of the assembly line, the equipment utilization is relatively uniform and the production line efficiency is improved. Figure 12 shows the station workloads compared before and after balance optimization.

4.3 Assembly Line Physical Simulation

Firstly, the 3D models of the vacuum circuit breaker and of the assembly line devices are created in the virtual assembly line scene according to the layout scheme. The assembly movements and interaction behaviors can also be easily developed in the 3DVIA Studio software. Then the spatial layout of the assembly line and the plant logistics can be experienced in the physical simulation scene. By means of interaction we can adjust the positions of devices or participate in the assembly process, so as to minimize potential layout or assembly process problems (Figs. 13, 14 and 15).

Fig. 13. Assembly line physical simulation scene

Fig. 14. Parts assembly behavior definition

Fig. 15. Interaction behavior definition

In this way, an assembly line for the BE-12/T vacuum circuit breaker was planned; its productivity was verified and the line balanced by mathematical simulation, and layout problems were minimized by participating in the assembly process through physical simulation. This verified the feasibility of these methods for planning and optimizing a vacuum circuit breaker assembly line.



5 Conclusion The simulation results show that implementing the optimization scheme can significantly improve the production efficiency of the target assembly line, and can serve as a useful reference for the modeling and optimization of similar manufacturing or logistics systems.

References 1. 2012–2016 China vacuum circuit breaker market research and strategic investment consulting report. Insight & Info Consulting Ltd. (2013) 2. Becker, C., Scholl, A.: A survey on problems and methods in generalized assembly line balancing. Eur. J. Oper. Res. 168(3), 694–715 (2006) 3. Rashid, M.F.F.: A review on assembly sequence planning and assembly line balancing optimisation using soft computing approaches. Int. J. Adv. Manuf. Technol. 59, 335–349 (2012) 4. Zhao, Y., Zhao, Q., Jia, Q.-S.: Event-based optimization for dispatching policies in material handling systems of general assembly lines. In: Proceedings of the 47th IEEE Conference on Decision and Control, pp. 2173–2178 (2008) 5. McMullen, P.R., Frazier, G.V.: A heuristic for solving mixed-model line balancing problems with stochastic task durations and parallel stations. Int. J. Prod. Econ. 51, 177–190 (1997) 6. Kelleret, P., Tchernev, N., Force, C.: Object-oriented methodology for FMS modeling and simulation. Int. J. Comput. Integr. Manuf. 10(6), 405–434 (1997) 7. Healy, K.J.: The use of event graphs in simulation modeling construction. In: Proceedings of the 1993 Winter Simulation Conference, pp. 689–693 (1993) 8. Paul, R.J.: Activity cycle diagrams and the three phase method. In: Proceedings of the Winter Simulation Conference, Atlanta, Georgia, pp. 123–131 (1993) 9. Chen, J.: Efficiency Improving Operational Manual. Haitian Press (2006) 10. Chen, R., Shihua, M.: Production and Operation Management. Advanced Education Press (2010) 11. Chen, Y., Hu, J., Liu, J.: Study on the design and optimization of enterprise assembly-line layout. Logist. Technol. 30(1), 116–119 (2011) 12. Ardavan, A.-V., Gilbert, L.: Loop based facility planning and material handling. Eur. J. Oper. Res. 164(1), 1–11 (2005) 13. Li, J., Blumenfeld, D.E., Huang, N., Alden, J.M.: Throughput analysis of production systems: recent advances and future topics. Int. J. Prod. Res. 47(14), 3823–3851 (2009) 14. Tak, C.S., Yadav, L.: Improvement in layout design using SLP of a small size manufacturing unit. IOSR J. Eng. 2(10), 01–07 (2012) 15. Le, L.I.: Improved SLP method based on dynamic simulation and its application in Wuhan Cigarette Warehouse. Logist. Technol. 30(12), 132–134 (2011) 16. Zhang, J., Hua, C., Lin, W.: Research on improvement of traditional SLP method. Mark. Mod. 639(2), 22–24 (2011) 17. Nielebock, S.: From discrete event simulation to virtual reality environments. In: Computer Safety, Reliability, and Security. Lecture Notes in Computer Science, vol. 7613, pp. 508–516 (2012)



18. Mustafa, H.: Productivity study and line balancing of GGMG & CALICO production line. Adv. Mater. Res. 576, 700–704 (2012) 19. Bongers, B., van der Veer, G.C.: Towards a multimodal interaction space: categorization and applications. Pers. Ubiquit. Comput. 11, 609–619 (2007) 20. Liverania, A., Caligiana, G.: A CAD-augmented reality integrated environment for assembly sequence check and interactive validation. Concurr. Eng.: Res. Appl. 12(2), 67–77 (2005)

Shop Floor Teams and Motivating Factors for Continuous Improvement

Eirin Lodgaard and Linda Perez Johannessen

SINTEF Raufoss Manufacturing AS, Raufoss, Norway
{eirin.lodgaard,Linda.Johannessen}

Abstract. Manufacturing companies continuously strive to sustain their competitive advantages worldwide. One strategy for improving competitiveness is the implementation of continuous improvement (CI) programs based on the active participation of the company's entire workforce. CI emphasizes the involvement of everyone, including shop floor operators, and working together to make incremental improvements through ongoing efforts. This study focuses on one aspect of CI, namely improvement activities by shop floor teams and what motivates them to actively participate in this process. Semi-structured interviews have therefore been conducted in a manufacturing company that has for a number of years systematically implemented a continuous improvement program involving shop floor teams, the aim being to identify the motivating factors influencing successful implementation. Raising awareness of these motivating factors may help a manufacturing organization adopt a correct approach and increase the odds of the successful implementation of a CI program.

Keywords: Continuous improvement · Shop floor · Motivation factors

1 Introduction A key to survival and success in a demanding market lies in the capability of a manufacturing company to increase productivity and profitability whilst improving quality and providing greater value to the customer. Manufacturing organizations are currently encountering the need to implement modern management approaches such as continuous improvement (CI), lean manufacturing, Lean Six Sigma and Total Quality Management to improve their competitiveness and to meet the challenges posed by the contemporary competitive environment. As a result, many are seeking to implement a CI process in all aspects of their operation. Unfortunately, designing, executing and sustaining CI are not straightforward tasks, and the application of CI within the manufacturing industry presents a variety of challenges [18]. The disappointment around and failure of CI reported by many organizations derives primarily from a lack of understanding of the behavioral dimension [4]. Too great a focus on CI tools and techniques, and too little on human factors and on how CI behavior patterns emerge in the workplace, is reported as a pitfall in a study by Ljungström and Klefsjö [19].

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 467–474, 2019.



Much research has been conducted on CI success factors and enablers [14]. There has, however, been little focus on the motivation factors at the shop floor level in day-to-day work. This article therefore focuses on one particular aspect of CI, namely improvement activities by shop floor teams and the motivating factors for active participation in this process.

2 Continuous Improvement CI can be seen as a culture of sustained improvement that targets the elimination of waste in all systems and processes in an organization [6]. Another definition of CI is "a company-wide process of focused and continuous incremental innovation" [5]. Both definitions emphasize the involvement of everyone, including shop floor operators, and working together to make incremental improvements through ongoing efforts. A study by Garcia-Sabater and Marin-Garcia [14] identified the main enablers for the successful implementation of CI: management commitment, leadership, a need to measure progress, motivation of workers, resources, cultural issues and working in cross-functional teams. These findings are also supported by other authors [5, 17, 22].

2.1 Organizing for Continuous Improvement

CI organization can take place at three levels: the management level, in the organization strategy; the team level, in terms of broadly defined tasks; and the individual level, in the improvement of day-to-day tasks [16]. Bhuiyan and Baghel [6] exemplify this in the management's responsibility to address CI at each of these levels. Shop floor operators may be involved at the team and individual levels. A shop floor team is a group of people who are responsible for trying to improve day-to-day operational activities and their own practices and performance [11]. One common approach to improvement activated by the individual is the use of a suggestion system [2], in which individuals make suggestions for improvements which are implemented by specialists. Deniels [12] claims that shop floor operators are the experts and the ones who are going to solve their problems; all they need is some direction in their approach to CI to actualize this.

2.2 Motivating Factors for Continuous Improvement

Active participation by empowered shop floor operators is, according to the definition of CI, a key factor in CI programs [1, 3], not only as executors of developed improvements but as a source of ideas in the improvement process. By that definition, implementing improvement without involving shop floor operators means the improvement will not occur. As described above, the motivation of employees is identified as one of the enablers by leading authors in the literature on the sustainability of CI [1, 4], and confirmed by a later study [14]. CI progress and achieving high performance are dependent on the



employees working well together and enjoying doing so. Furthermore, the achievement of high performance depends on the ability of those involved, the work environment and, last but not least, the motivation to embark on improvement activities [9]. Based on research in the US manufacturing industry, Cheser [10] claims that CI programs generate a positive increase in motivation and a change in people's attitudes. By incorporating the use of the enablers, the behaviors of individuals, and the organizational ability they reflect, gradually become more ingrained, and people begin to behave in the new way naturally [7, 8]. Recognizing achievements and effort will also influence motivation in a positive way [15]. A facilitator and the recognition of management are required if the involvement of operators is to be achieved [14]. A facilitator is seen to be of high value in achieving maximum impact on both employee attitudes and problem solving ability [13, 23]. A study by Berling [3] claims that experience of teamwork in achieving improvement has a positive influence on people's engagement, training in teamwork being a precondition. Direct contact and communication between individuals and their leader is of ultimate importance [26]. However, CI programs call for substantial management commitment from all levels of management and a willingness to invest in people [6].

3 Research Design The purpose of this case study was to understand the patterns between CI activities in shop floor teams and the motivating factors for actively embarking on them. A case study is an appropriate approach when the thing being studied is in progress or has recently been in progress and where research variables cannot be controlled [27]. This study met those requirements: the situation was ongoing for the case company. To identify the motivating factors, we interviewed operators working in different shop floor teams in the same manufacturing department. Twelve semi-structured interviews were carried out across 7 shop floor teams; this was the data collection technique. The questions were formulated to cover both how the teams approached improvement activities and their opinions of what could have been done differently. Face-to-face interviews are an essential source of evidence and are commonly used to collect information; well-informed interviewees can provide important insight into facts as well as opinions [27]. The interviews were conducted as semi-structured conversations, in which the informants were asked prewritten questions but allowed to answer freely. The aim was, however, to create an informal setting in which the interviewee would open up and provide rich data. All interviews were tape recorded and transcribed to provide an accurate representation of what was actually said [27]. Permission was obtained in advance from the production manager, in addition to verbal permission from each participant before the interviews were recorded. All interviews were conducted at the company's production site.



The data was coded and categorized. Coding is a way of classifying collected data into smaller pieces and reducing it into categories [21, 25]. The collected data was assembled into an array, with the categories in rows and the evidence from the semi-structured conversations in columns.

4 Results and Discussion The case company is a large subsidiary of a global corporation located in Norway. It develops and produces complex, high-end products of high quality for a global market. The annual turnover is approximately USD 220 million, and in 2017 the company had around 700 employees. The long production series, long cycle times and high price of the company's products mean that the company is dissimilar to other production companies. Production is a mix of highly automated manufacturing and craftsmanship. The case company has been working with CI for decades, but only in the last two years has it aggregated its focus to work more systematically with CI, aided by external consultants. The main purpose of the CI approach for this company is to systematically reduce waste in the workplace and increase the flow of the manufacturing processes. CI may be useful if the desired result is to reduce waste, streamline the flow of the manufacturing process and improve the quality of the new product [6]. It was apparent that the operators were motivated to produce high quality parts, which is a good starting point for improvement activities. The company started this two-year period by designing team boards for each of the shop floor teams and making these a meeting area. Team board meeting participants are the operators, their leader and sometimes also process engineers and the maintenance and production managers. The core activity of team board meetings is the exchange of information about production numbers and CI activities. One tool used by all shop floor teams is a suggestion system on their team board, on which they report both suggestions and implemented issues. The leader looks at these proposals together with the operators, and they decide together whether a suggestion should be put into effect and who should be in charge of it. The inspiration of the operators determines the topics of the suggestions [24]. Considerable organizational support is required to process, evaluate and act upon the suggestions and convert them into actual improvements. Most of the interviewees claimed that not all improvement initiatives went without problems. For example, operators in our case complained that their suggestions for improvements were not always followed up in a serious way: either they felt that they got little or no information about what happened to their suggestions after it was decided to act upon them, or it took far too long to implement a solution, so their motivation often decreased strongly [11]. The main reason for low levels of progress was that the teams needed someone outside the team to assist them. The improvements suggested by the shop floor teams were small improvements; the problem and solution were therefore clear without detailed analysis being required. They emphasized, however, that small improvements had led to significant results. This insight had matured over the years of improvement activities. All interviewees were very clear about the importance of their involvement in CI. Almost



all wanted to be included in CI, most being eager to learn more about it; only a few had no interest and walked away when their machine stopped or broke down. After two years of systematic implementation of CI, operators interestingly tended to become impatient with the progress of improvement activities. Their impatience indicates that they wanted CI to progress one step further; they wanted more. Despite the positive results from this way of working, informants believed that the manufacturing department must invest more effort in becoming an excellent CI organization. This perception of low progress is likely to be a source of frustration for shop floor operators, motivation being lowered relative to the time and energy spent on CI. This partly relates to involving shop floor operators in more complex problem solving and cross-functional teamwork, more specifically their contribution to improving the flow of the manufacturing processes. The company had a traditional hierarchical organization, and cross-functional teams working on manufacturing process problem solving had yet to be implemented across the whole company. Operators were less concerned about building knowledge in the use of problem solving tools and practices and more focused on the need to extend their knowledge of the production machines they used in their workplace. More specifically, they wanted to be involved in more complex problems based on their knowledge of and skill with the manufacturing machines they used. All the operators emphasized that they knew the machines best, that it is they who can see, hear and "feel" the machine. Most of the operators did not see the need for training in the use of improvement tools and practices; this type of knowledge was for managers and support functions. None of the informants were, however, willing to miss the opportunity to expand their knowledge of CI tools and practices when asked. A plausible interpretation is that many operators at the shop floor level were so unfamiliar with problem solving tools and practices that they did not quite know what to expect and did not know how such tools and practices could support their day-to-day work. Operators were not necessarily expected to use problem solving tools and practices or to solve complex problems. Existing research shows that an important driving force behind the success of CI programs is knowledge of CI combined with the knowledge and skills to perform the necessary activities [20]. The operators emphasized the need for facilitators in their day-to-day work area to accelerate the speed of CI activities. One reason for this was their experience with an external consultant, whom they considered to have a high level of CI knowledge and who helped them in a positive way with facilitation. This view is supported by Ringen et al. [23], who claim that the use of a facilitator can create enthusiasm, involvement and motivation. Another interesting result is that the operators considered management's CI knowledge and skills to be, in general, inadequate, and it was apparent that this influenced their motivation. They also felt the absence of leadership acting as a facilitator and helping them to speed up the CI process. This study thus contributes to the existing knowledge that leadership is a premise for good motivation of human resources in the implementation process [15]. This, combined with expansion to involve operators in cross-functional teams solving more complex problems, indicates a better way of achieving more motivated operators at the shop floor level for the case company.


E. Lodgaard and L. P. Johannessen

Most of the operators argued that the number of improvements reported by management does not motivate them, as the focus is on the count rather than on the improvement achieved. They also feel this disregards the purpose of the improvement: they see leaders talking up the numbers without focusing on the content, and question whether this helps them at all. A study by Deniels [12] shows that enabling operators to establish their own measures has a positive effect on achieving fundamental improvement on the shop floor. It could thereby be argued that the company could have achieved more highly motivated operators had they been part of the process of defining the measure.

5 Concluding Remarks

CI is seen as a valuable approach to achieving improvement at the shop floor level. However, implementing CI on the shop floor is not a trivial task. The above review indicates that operators in shop floor teams are motivated to contribute to improvement activities under certain conditions, and the degree to which the motivating factors are present will influence how successful CI efforts will be. The interviews confirm the research literature but also extend the knowledge of motivating factors. The main finding is that involvement, being taken seriously and the opportunity to contribute expertise is a good starting point. Furthermore, this study indicates that CI at a certain level of progress does motivate operators in shop floor teams to spend time and energy on CI. On the other hand, it reveals that having highly skilled people around them to facilitate their improvement activities is necessary: more specifically, to support them at their workplace when needed and to help them determine what to focus on next and how to proceed. This paper provides insight into the motivating factors behind shop floor teams' contribution to successful CI implementation and its challenges. Raising awareness of these motivating factors may help a manufacturing organization arrive at the correct approach and increase the odds of a successful CI program. More research is needed to strengthen the results and increase their generalizability.

Acknowledgement. The research was funded by the Research Council of Norway.

References

1. Bateman, N.: Sustainability: the elusive element of process improvement. Int. J. Oper. Prod. Manag. 25(3–4), 261–276 (2005)
2. Berger, A.: Continuous improvement and kaizen: standardization and organizational designs. J. Integr. Manuf. Syst. 8(2), 110–117 (1997)
3. Berling, C.: The human side of continuous improvement. Int. J. Hum. Resour. Dev. Manag. 1(2/3/4), 183–191 (2001)
4. Bessant, J., Caffyn, S., Gallagher, M.: An evolutionary model of continuous improvement behavior. Technovation 21(2), 67–77 (2001)

Shop Floor Teams and Motivating Factors


5. Bessant, J., Caffyn, S., Gilbert, J., Harding, R., Webb, S.: Rediscovering continuous improvement. Technovation 14(1), 17–29 (1994)
6. Bhuiyan, N., Baghel, A.: An overview of continuous improvement: from the past to the present. Manag. Decis. 43(5), 761–771 (2005)
7. Caffyn, S.: Extending continuous improvement to the new product development process. R&D Manag. 27, 253–267 (1997)
8. Caffyn, S., Grantham, A.: Fostering continuous improvement within new product development processes. Int. J. Technol. Manag. 26(8), 843–856 (2003)
9. Castka, P., Bamber, C.J., Sharp, J.M., Belohoubek, P.: Factors affecting successful implementation of high performance teams. Team Perform. Manag. 7(7/8), 123–134 (2001)
10. Cheser, R.: The effect of Japanese kaizen on employee motivation in US manufacturing. Int. J. Organ. Anal. 6(3), 197–217 (1998)
11. de Lange-Ros, E., Boer, H.: Theory and practices of continuous improvement in shop-floor teams. Int. J. Technol. Manag. 22(4), 344–358 (2001)
12. Deniels, R.C.: Performance measurement at Sharp and driving continuous improvement on the shop floor. Eng. Manag. J. 5(5), 211–218 (1995)
13. Farris, J.A., Van Aken, E.M., Doolen, T.L., Worley, J.: Critical success factors for human resource outcomes in Kaizen events: an empirical study. Int. J. Prod. Econ. 177, 42–65 (2009)
14. Garcia-Sabater, J.J., Marin-Garcia, J.A.: Can we still talk about continuous improvement? Rethinking enablers and inhibitors for successful implementation. Int. J. Technol. Manag. 55(1), 28–42 (2011)
15. García, J.L., Maldonado, A.A., Alvarado, A., Rivera, D.G.: Human critical success factors for Kaizen and its impacts in industrial performance. Int. J. Adv. Manuf. Technol. 70, 2187–2198 (2014)
16. Imai, M.: Kaizen: The Key to Japan's Competitive Success, 1st edn. Random House, New York (1986)
17. Jorgensen, F., Boer, H., Gertsen, F.: Jump-starting continuous improvement through self-assessment. Int. J. Oper. Prod. Manag. 23(10), 716–722 (2003)
18. Lillrank, P., Shani, A.B.R., Lindberg, P.: Continuous improvement: exploring alternative organizational designs. Total Qual. Manag. 12, 41–55 (2001)
19. Ljungström, M., Klefsjö, B.: Implementation obstacles for a work development-oriented TQM strategy. Total Qual. Manag. 13(5), 621–634 (2002)
20. Lodgaard, E., Gamme, I., Aasland, K.: Success factors for PDCA as continuous improvement method in product development. In: Emmanouilidis, C., Taisch, M., Kiritsis, D. (eds.) Advances in Production Management Systems. Competitive Manufacturing for Innovative Products and Services. IFIP Advances in Information and Communication Technology, vol. 397, pp. 645–652. Springer, Berlin, Heidelberg (2013)
21. Miles, M.B., Huberman, A.M.: Qualitative Data Analysis: A Sourcebook. Sage Publications, Beverly Hills (1994)
22. Rich, N., Bateman, N.: Companies' perceptions of inhibitors and enablers for process improvement activities. Int. J. Oper. Prod. Manag. 23(2), 185–199 (2003)
23. Ringen, G., Lodgaard, E., Langeland, C.: Continuous improvement in a product development environment. In: 2nd Nordic Conference on Product Lifecycle Management, Chalmers University of Technology, Göteborg, Sweden (2009)
24. Tennant, C., Roberts, P.: Hoshin Kanri: a tool for strategic policy deployment. Knowl. Process Manag. 8(4), 262–269 (2001)
25. Tjora, A.: Kvalitative forskningsmetoder i praksis, 2nd edn. Gyldendal Akademisk (2010)



26. Wickens, P.D.: Production management: Japanese and British approaches. In: IEE Proceedings A: Science, Measurement and Technology (1990)
27. Yin, R.K.: Case Study Research: Design and Methods, 4th edn. Sage Publications, Beverly Hills (2009)

Structural Modelling and Automation of Technological Processes Within Net-Centric Industrial Workshop Based on Network Methods of Planning

Vsevolod Kotlyarov, Igor Chernorutsky, Pavel Drobintsev, Nikita Voinov, and Alexey Tolstoles

Institute of Computer Science and Technology, Peter the Great St. Petersburg Polytechnic University, Polytechnicheskaya 29, Saint Petersburg 195251, Russia
[email protected]

Abstract. The aim of this paper is to provide an approach to the planning and optimization of small-scale net-centric manufacturing. Automation of small-scale net-centric manufacturing in machinery requires the solution of many tasks, such as automated formalization of technological processes (implying conversion of existing and new production documentation into technological paths of automated manufacturing), distribution of workshop equipment, materials and tools between technological paths, monitoring of concurrent processes of supply and execution, analysis, network planning, and manufacturing optimization considering miscellaneous criteria distributed among three levels of the industrial network.

Keywords: Net-centric control · Formal description of technological processes · Technology macro-operation · Analysis of technological processes based on network methods · Technology chain model · Direct and inverse tasks of workshop control · Optimization and planning of technological processes · Multi-criteria planning

1 Introduction

The use of industrial Internet networks with net-centric control sets the driving trend for the future manufacturing of material goods and services. The bright future of this approach is beyond doubt, provided these complex net-centric systems function with high reliability and flexible control of the technological processes of small-scale and single-part manufacturing [1]. Three levels of control are specified in this work (Fig. 1):

1. Technological macro-operations of machines, robots and other objects, providing control actions and gathering data about object condition in the network;
2. Technological processes (execution control of sequences of technological macro-operations);
3. Multi-criteria hierarchical optimization and manufacturing planning.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 475–488, 2019.



Fig. 1. Three levels of control of net-centric machinery industrial workshop

A technological process (TP) is a set of interdependent elementary operations which must be executed to produce a specified number of goods of a specific nomenclature from a corresponding number of billets. It is also supposed that manufacturing requires groups of machines and manipulators, as well as means of delivering billets, tools and finished goods from one workshop area to another. The technology for modeling TPs described in this paper is directed towards solving several main problems [2–7]. The first is the selection of an optimal (rational) scenario of TP implementation according to time criteria, considering concurrent work of equipment and downtime spent waiting for equipment to become free after executing previous operations. The second is the determination of resource reserves while executing separate non-critical operations. The third task is to determine the direction of TP modernization so as to reduce the overall implementation time and provide maximal economy of resources while producing the same nomenclature of goods. For example, if some machine has long downtime, it can be replaced by a slower and cheaper one. To formalize a TP and create its corresponding structure, it is necessary to define specific finite sets, such as the set of produced goods of defined nomenclature in specific amounts. A corresponding number of billets is required to produce these goods. The common model shall also consider the use of several different billets, processed on different equipment, to produce one particular product. The whole TP eventually boils down to the execution of a specific set of standard actions (ai), from the delivery of billets and tools from the warehouse to the machines, to the transfer of produced goods of specific nomenclature to the warehouse. The time for each operation is specified, and some operations can be executed simultaneously. Obviously, downtime of



equipment is not desired. The description of the TP and the specification of the set of operations (ai) allow the existing precedence links between separate operations to be established. For example, producing a detail on a machine is impossible without delivering the corresponding billet from the warehouse. At the same time, the machine must be free (this is also an operation), as well as all the manipulators required for delivering and placing the detail. As a result, a technological table T(ai) is created, whose number of rows equals the number of macro-operations (ai). Each row indicates which operations a given operation is based on (for example, the machine is free, the billet and cutting tools are prepared and installed). Later operations are based on other operations (for example, the machine has finished processing the previous detail, the adjuster has arrived to set a billet, etc.). Unlike standard approaches to the modeling of TPs and manufacturing [7–13], the network methods described below in terms of a common formal model make it quite simple to create an implementation chart of the technological process, analyze the implementation, determine bottlenecks and provide ways to optimize the manufacturing cycle. An important quality criterion of technological process implementation is the time required for its complete execution. Network methods allow this time to be calculated, considering the possibility of simultaneous execution of some operations, and the critical path to be constructed. At the same time, the existing reserves of time and the critical operations affecting the overall time of the technological process are evaluated. For example, if the critical path contains an operation performed by an adjuster (with a corresponding implementation time), this can be a signal to improve the process by replacing this person with a faster mechanical manipulator for this operation. At the same time, if some operations do not belong to the critical path and have reserves of time, their requirements can be reduced, which saves resources.
The most time-consuming procedure at the planning stage is the creation of the technological table. A method for its creation based on the principles of dynamic programming is proposed. The idea is the following. Analysis of the technological chain is performed from its end, i.e. from the state in which all details have been produced and placed in the warehouse for finished goods. To be placed in the warehouse, a detail must be delivered there, which is only possible if it has been processed, taken off the machine and placed on a pallet. For this, it has to be taken off by a free manipulator and delivered to an empty pallet. The manipulator can only be free if it has completed its previous operation, and so on. Thus the process goes from the end to the beginning, which is what is required to create the table T(ai).
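The backward construction described above can be sketched as a breadth-first walk over a prerequisite relation. The operation names and the relation below are illustrative assumptions, not taken from the paper's tables; only the traversal direction (from the finished-goods state back to the start) follows the text.

```python
from collections import deque

# Hypothetical prerequisite relation: for each macro-operation, the
# operations it directly relies on (names are illustrative only).
PREREQS = {
    "detail_in_warehouse": ["detail_on_pallet"],
    "detail_on_pallet":    ["detail_taken_off", "pallet_empty"],
    "detail_taken_off":    ["processing_done", "manipulator_free"],
    "processing_done":     ["billet_installed"],
    "manipulator_free":    [],
    "pallet_empty":        [],
    "billet_installed":    [],
}

def build_table(goal, prereqs):
    """Walk the chain from its end (the goal state) back to the
    beginning, emitting one row of T(ai) per reachable operation."""
    table, seen, queue = [], set(), deque([goal])
    while queue:
        op = queue.popleft()
        if op in seen:
            continue
        seen.add(op)
        deps = prereqs.get(op, [])
        table.append((op, deps))
        queue.extend(deps)
    return table

for op, deps in build_table("detail_in_warehouse", PREREQS):
    print(op, "<-", deps)
```

The first emitted row is the goal state itself; each subsequent row is discovered only because some already-emitted operation relies on it, mirroring the end-to-beginning analysis in the text.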

2 Description of the Structure of a Technological Process

Stage 1. Specification of manufactured products. The number of each single production unit ("detail") and the required amount of details (sets D and ND) are specified. For example:

D = (D1, D2, D3, D4, D5, D6, D7, D8, D9, D10),

ND = (4, 5, 5, 5, 3, 3, 3, 5, 3, 3).




Stage 2. Specification of the technological equipment for the implementation of the current technological process (set TO of equipment objects). For example:

1. Lathe LB300 (C1)
2. Turning and milling machine Multus B300 (C2)
3. Turning and milling machine Multus B300 (C3)
4. Manipulator in processing area (M)
5. Pallet 1 (P1) – pallet with billets
6. Pallet 2 (P2) – pallet with produced details
7. Pallet 3 (P3) – pallet with cutting tools (cutting tools from the previous operation)
8. Pallet 4 (P4) – pallet with cutting tools (cutting tools for the next operation)

In the current case: TO = (C1, C2, C3, M, P1, P2, P3, P4)


Stage 3. Specification of the processing paths for each detail (which actions shall be performed in which order, and which machines shall be used). Here it is important to consider possible interdependence of separate stages of the technological process, so formal means of specifying such interdependence are required. For example, execution of some operation with a specific detail can only be completed after execution of other operations with this detail or its part (Table 1).

Table 1. Example of a specific sequence of operations for producing details on machines

№ of queue | LB 3000 (C1) | Multus B300 (C2) | Multus B300 (C3)
1 | D1 Op.1 | D2 Op.1 | D3 Op.1
2 | D4 Op.1 | D2 Op.2 | D3 Op.2
3 | D4 Op.2 | D5 Op.1 | D8 Op.1
4 | D7 Op.1 | D7 Op.2 | D8 Op.2
5 | D5 Op.2 | D6 Op.1 | D10 Op.1
6 | D9 Op.1 | |

This sequence can be strictly specified, or it can be just one of the possible and allowed sequences; the overall model of the technological process shall contain special mechanisms to formalize this situation and describe possible interdependencies. Formally, specification of a concrete processing sequence implies specifying certain technological sets. It is assumed that the time of continuous billet stay on a machine corresponds to one operation. It is also assumed that, in the common case, one machine can carry out different operations of processing the same detail at different times.

Stage 4. Specification of the times of the operations producing the details (Table 2).

Stage 5. Specification of the nomenclature of cutting tools (Table 3).

Stage 6. Specification of the manipulators involved in the process and providing execution of operations on the machines:

Table 2. Estimation of processing time

Detail № | Operation № | Estimation (min)
D1 | 1 | 3.6
D2 | 1 | 1.65
D2 | 2 | 1.8
D3 | 1 | 2.5
D3 | 2 | 6.05
D4 | 1 | 0.65
D4 | 2 | 1.6
D5 | 1 | 5.6
D5 | 2 | 6.9
D6 | 1 | 8.8
D7 | 1 | 1.3
D7 | 2 | 8.62
D8 | 1 | 0.9
D8 | 2 | 1.3
D9 | 1 | 7.2
D10 | 1 | 6.1

Table 3. Example of specification of the nomenclature of cutting tools

Detail № | Operation № | Cutting tool №
D1 | 1 | 1, 3, 51, 52, 6, 7
D2 | 1 | 1, 51
D2 | 2 | 1, 8
D3 | 1 | 1, 3, 51, 10
D3 | 2 | 1, 51, 9, 9, 51
D4 | 1 | 1, 51
D4 | 2 | 1, 10, 8
D5 | 1 | 1, 51, 51, 51
D5 | 2 | 1, 3, 10, 8
D6 | 1 | 1, 51, 51, 52
D7 | 1 | 1, 3
D7 | 2 | 1, 51, 52, 6, 9, 9
D8 | 1 | 1, 51
D8 | 2 | 1, 51, 10
D9 | 1 | 1, 3, 51, 10
D10 | 1 | 1, 51, 10, 4

• Mechanical hand (M1)
• Adjuster (M2)
• Pallets (P№, where № is the pallet number – M3, M4, …, MN)




Thus a set of manipulators is specified as M = (M1, M2, …, MN).


Stage 7. Specification of the allowed actions for each manipulator.

• For M1:
M11 = C2–C1 – movement from machine 2 (C2) to machine 1 (C1)
M12 = CD–C1 – taking off a detail (CD) from machine 1 (C1)
M13 = D1–1(2/5) – Detail 1 (D1), operation 1, the second detail from a set of size 5 (2/5)
M14 = УD–C1 – placing a detail (УD) onto machine 1 (C1)
M15 = D1–1(3/5) – Detail 1 (D1), operation 1, the third detail from a set of size 5 (3/5)
…

We obtain a set M1 = (M11, M12, …, M1S),


where S is the number of possible actions of manipulator M1. A set of execution times of all possible manipulations is also specified: TM1 = (TM11, TM12, …, TM1S).


• For M2:
M21 = C1–C2 – movement of the adjuster from machine 1 (C1) to machine 2 (C2)
M22 = (С–C2)/(D2–1) – taking away (С) the cutting tool from machine 2 (С–C2) after processing detail 2 in operation 1 (D2–1)
M23 = (У–C2)/(D2–2) – placing the cutting tool onto machine 2 (У–C2) to process detail 2 in operation 2 (D2–2)

and so on; all possible actions of manipulator M2 (the adjuster in this case) are listed. Thus a set of K elements is obtained: TM2 = (TM21, TM22, …, TM2K).


• For each pallet P1, P2, P3, P4 (manipulators M3, M4, M5, M6) the following operations are possible (examples are for pallet M3):
M31 = (AC–TO)/(D1–1) – transition of the pallet from the automated warehouse (AC) to a machine (TO) with cutting tools to perform operation 1 for detail D1
M32 = (TO–AC)/(D3–2) – transition of the pallet from a machine (TO) to the automated warehouse (AC) with billets/produced details/cutting tools for detail 3 in operation 2 (D3–2)

and so on.



After processing all manipulators and their operations, the full set of possible operations is obtained, together with a corresponding rectangular matrix of size M×N, where M is the number of rows (equal to the overall number of manipulators) and N is the number of columns corresponding to the possible operations performed by each manipulator. This matrix will be referred to as the M-matrix.

Stage 8. Specification of the time of each operation in the M-matrix. For example, for the current case we have the following (Table 4):

Table 4. Estimated times of pallet and mechanical hand transitions

Transition | Time, min
Pallets from AC to TO | 0.25
Mechanical hand C1–C2, C2–C3 | 0.25
Mechanical hand C1–C3 | 0.5
Mechanical hand: taking off/placing of a billet | 0.5

For the specified time characteristics, the overall time of the technological process is estimated at 3.5 h.
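As an illustration only, the specification data of Stages 1, 2 and 4 can be held in plain data structures. The values below are copied from the example sets and Table 2; the arithmetic at the end is merely a lower bound on pure machining time for one batch (it ignores transport, tool changes and concurrency) and is not the 3.5 h schedule estimate above.

```python
# Sketch of the Stage 1-4 specification as plain Python structures.
D  = ["D1", "D2", "D3", "D4", "D5", "D6", "D7", "D8", "D9", "D10"]
ND = [4, 5, 5, 5, 3, 3, 3, 5, 3, 3]              # required amounts
TO = ["C1", "C2", "C3", "M", "P1", "P2", "P3", "P4"]  # equipment set

# Processing-time estimates (Table 2): (detail, operation) -> minutes.
T_OP = {
    ("D1", 1): 3.6,  ("D2", 1): 1.65, ("D2", 2): 1.8,
    ("D3", 1): 2.5,  ("D3", 2): 6.05, ("D4", 1): 0.65, ("D4", 2): 1.6,
    ("D5", 1): 5.6,  ("D5", 2): 6.9,  ("D6", 1): 8.8,
    ("D7", 1): 1.3,  ("D7", 2): 8.62, ("D8", 1): 0.9,  ("D8", 2): 1.3,
    ("D9", 1): 7.2,  ("D10", 1): 6.1,
}

# Total machining time for the whole batch, ignoring transport and
# concurrency -- a lower bound only, summed over all three machines.
total = sum(t * ND[D.index(d)] for (d, op), t in T_OP.items())
print(round(total, 2), "machine-minutes")
```

Keeping the specification in such structures is what later stages (the M-matrix and the technological table) operate on.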

3 Technological Matrix Analysis with Network Methods

Let us construct a technological matrix. Each of its operations can be started only after the end of the operations it relies on (listed in the third column of Table 5). This is the only logical limitation on the process, and many operations can be performed simultaneously. Dashes in the third column mean that those operations are independent and can be started at any time. It is assumed that each operation relies only on operations with lower order numbers in Table 5. This can always be achieved by proper ordering of the operations and their renumbering; the algorithm for such renumbering is presented below in the corresponding section.

Table 5. Technological matrix T(ai)

№ | Operation ai | Operations which it relies on | Execution time ti
1 | a1 | – | t1
2 | a2 | – | t2
3 | a3 | ai, …, ak | t3
4 | a4 | aj, …, am | t4
… | … | … | …
n | an | al, …, ar | tn



For a given technological chain (TC) the following direct tasks can be solved:

1. Determine the total time for the implementation of the specified TC and a list of bottlenecks, i.e. its critical operations.
2. Determine the time reserves for all non-critical operations in order to further optimize the TC.
3. Identify the most "threatened" operations, whose performance is most important.

The inverse tasks (optimization tasks) form the essence of the management of the industrial workshop. They are solved on the basis of the algorithms for the direct tasks. The goals of optimization can differ. For example, the task may be to minimize the total time for the implementation of the TC by accelerating certain operations with additional investments of necessary reserves. Optimization can be carried out already at the design stage of a given TC. Obviously, critical operations are accelerated first. However, the mathematical optimization models must take into account that, as the TC is varied during the optimization process, operations that are not critical in the initial variant can become critical, and vice versa. The following optimization tasks can be formulated:

Task 1. What amount of additional resources should be allocated so that the total time for the implementation of the TC does not exceed the set value T0 and the additional investments are minimal?

Task 2. Another situation concerns the redistribution of fixed resources between individual operations in order to minimize the total time for the implementation of the TC (optimal transfer of resources from non-critical operations to critical ones).

Task 3. It may happen that the calculated implementation time T of the TC is less than the specified value T0. How should the available time reserve T0 − T be directed towards saving resources and a corresponding improvement of the technological process?
The necessary algorithms, including algorithms for solving the direct problems, are developed in the form of appropriate mathematical models, implemented both at the stage of initial design of the technological chain and in online management mode.

3.1 Model of the Technological Chain

A finite ordered set of operations performed in the workshop,

B = {b1, b2, …, bn},

and the mapping f, which determines the logical links between operations and their mutual conditionality,

f: B → M(B),

where M(B) is the set of all subsets of the set B, are defined.




Then we have the correspondence: ∀i: bi → f(bi) ∈ M(B).


Definition. The pair ⟨B, f⟩ is called the technological chain (TC). We introduce some natural restrictions on the structure and properties of the mapping f.

Definition. Let X = {bk1, …, bks} ∈ M(B).


Then the index of the set X is the integer I(X) = max{k1, …, ks}.


Definition. The mapping f is called almost regular if the inequalities ∀i: I{f(bi)} < i


are satisfied.

Theorem. Any mapping f can be made almost regular if the elements bi of the set B are renumbered appropriately.

Proof. We introduce the notion of rank for each element bi of the set B. The element bi has rank r(bi) = 1 if and only if f(bi) = ∅ (the empty set). Further, the operation bj has rank k, r(bj) = k, if the following condition is satisfied. Let

f(bj) = {bk1, …, bks} ∈ M(B).

Then we require that

r(bki) ≤ k − 1 for all i ∈ {k1, …, ks},

∃i ∈ {k1, …, ks}: r(bki) = k − 1.

Acting by induction along the chain k = 1, 2, …, m (in the indicated order), all operations from the set B receive their rank (this is also the condition for the end of the ranking process). Next, we renumber the elements of the set B, starting with the elements of the first rank, then proceeding to operations of the second rank, etc. Within each rank the numbering is arbitrary (for definiteness, from the minor to the senior). From the construction it is obvious that the newly constructed mapping is almost regular. We keep the old designation f for it. In this case the operations bi receive new numbers, and the set B = {b1, b2, …, bn} is transformed into an ordered set A = {a1, a2, …, an}.
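The rank construction used in the proof can be sketched directly; `ranks` and `renumber` are hypothetical helper names, and the toy chain at the bottom is an assumed example, not one of the paper's.

```python
def ranks(B, f):
    """Rank of an operation: r(b) = 1 iff f(b) is empty; otherwise
    1 + the maximal rank among its prerequisites, so that every
    prerequisite has rank <= k-1 and at least one has rank k-1."""
    r = {}
    while len(r) < len(B):
        for b in B:
            deps = f.get(b, set())
            if b not in r and all(d in r for d in deps):
                r[b] = 1 if not deps else 1 + max(r[d] for d in deps)
    return r

def renumber(B, f):
    """Renumber operations rank by rank; in the resulting order every
    prerequisite precedes its dependents (an 'almost regular' numbering)."""
    r = ranks(B, f)
    return sorted(B, key=lambda b: (r[b], b))

# Toy chain: b3 relies on b1 and b2, b4 relies on b3.
f = {"b4": {"b3"}, "b3": {"b1", "b2"}, "b1": set(), "b2": set()}
order = renumber(["b4", "b3", "b1", "b2"], f)
print(order)
```

The sketch assumes the dependency relation is acyclic, which the text's induction argument also presupposes.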



In the following, for the elements of the new set A and for the set itself we retain the old designations of the operations, and the mapping f will initially be assumed almost regular. Let us consider another transformation of the mapping f, called "cleaning". Let

∃i: f(bi) = {bk1, …, bkl} ∈ M(B),

∃bm ∈ {bk1, …, bkl}, ∃bs ∈ {bk1, …, bkl}: s < i, bm ∈ f(bs).


Then the element bm is removed from the set f(bi) = {bk1, …, bkl}.

Definition. If the cleaning of an almost regular mapping f is performed for all possible numbers i, then the "cleaned" almost regular mapping is called regular.

After cleaning we obtain a mapping different from f; here we also retain the old designation. Thus, we will initially assume that a technological chain of the form ⟨B, f⟩ contains a regular mapping f: B → M(B). Suppose that a set B and a regular mapping f: B → M(B) are given. If each element bi ∈ B is put in correspondence with a certain strictly positive number ti = φ(bi), then the triple ⟨B, f, φ⟩ is called the model of the technological chain (TC model). Further, when solving the problem of managing the industrial workshop, we will assume that the TC model is initially given.
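A minimal sketch of the "cleaning" step, assuming prerequisite sets are stored as Python sets; as in the text, a prerequisite bm is dropped from f(bi) whenever another prerequisite bs of bi already lists bm among its own prerequisites, so the dependency is implied. The function name and toy example are assumptions.

```python
def clean(f):
    """'Cleaning' of an almost regular mapping: drop bm from f(bi)
    when some other prerequisite bs of bi already has bm in f(bs)."""
    g = {b: set(deps) for b, deps in f.items()}
    for b, deps in g.items():
        redundant = {m for m in deps
                     for s in deps
                     if s != m and m in f.get(s, set())}
        g[b] = deps - redundant
    return g

# b3 lists b1 directly, but b2 already depends on b1,
# so the direct link b3 -> b1 is removed by cleaning.
f = {"b3": {"b1", "b2"}, "b2": {"b1"}, "b1": set()}
print(clean(f))
```

Note the paper's rule is a one-step check (bm ∈ f(bs)), not a full transitive-closure reduction; the sketch follows the one-step reading.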

3.2 Direct Tasks for Managing the Industrial Workshop

Let us pass to algorithms for the analysis of TC models and the solution of the corresponding direct tasks. Next we consider time-based technological chains, where ti is interpreted as the time interval necessary to perform operation bi. We introduce the necessary notation:

si – the lowest possible starting time of the i-th operation bi, counted from the beginning of the technological chain implementation;
Ti – the lowest possible end time of the i-th operation bi, counted from the beginning of the technological chain implementation.

Clearly, Ti = si + ti.


Using the introduced variables and the original TC model, it is possible to calculate all times si, Ti for all operations B = {b1, b2, …, bn}. Each set f(bi) is associated with the number

si = max{Tk1, Tk2, …, Tks},




where f(bi) = {bk1, …, bks},


where i = 1, 2, …, n. Writing these relations for all operations b1, b2, …, bn, by the assumed regularity of the mapping f one obtains the corresponding sequences of numbers {si}, {Ti}, i = 1, 2, …, n.


Then, by construction, the total execution time of the given technological chain is T = max{Ti}.


From this information it is possible to determine the so-called critical operations, which determine the completion time of the entire technological chain. To do this, it is sufficient to perform the following steps.

Step 1. Find the operation bi for which T = max{Ti}; this operation bi is the first critical operation.

Step 2. For the found bi, find the corresponding number si: si = max{Tk1, Tk2, …, Tks}.


Step 3. Find the set of indices Argmax{Tk1, Tk2, …, Tks}.


The operations on which this maximum is reached correspond to the second critical macro-operation, counting from the end (there may be several, one per maximum in the Argmax set).

Step 4. Continue the process until the list of operations b1, …, bn is exhausted.
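Steps 1–4 together with the forward recursion for si and Ti can be sketched as follows. `analyse` is a hypothetical name and the toy chain is assumed; ties between equally late prerequisites (several critical operations, as the text notes) are not enumerated here, and one representative critical path is returned.

```python
def analyse(ops):
    """Direct task: earliest start/finish times and total time T.
    `ops` maps each operation to (duration, prerequisites), listed so
    that prerequisites precede dependents (an almost regular order)."""
    s, T = {}, {}
    for b, (t, deps) in ops.items():
        s[b] = max((T[d] for d in deps), default=0.0)  # si = max Tk
        T[b] = s[b] + t                                # Ti = si + ti
    total = max(T.values())                            # T = max{Ti}
    # Backtrack from the latest-finishing operation (Steps 1-4),
    # always moving to a prerequisite with maximal finish time.
    path, b = [], max(T, key=T.get)
    while True:
        path.append(b)
        deps = ops[b][1]
        if not deps:
            break
        b = max(deps, key=lambda d: T[d])
    return total, list(reversed(path))

ops = {  # duration (min), prerequisites -- toy chain
    "a1": (2.0, []),
    "a2": (3.0, []),
    "a3": (4.0, ["a1", "a2"]),
    "a4": (1.0, ["a3"]),
}
total, critical = analyse(ops)
print(total, critical)
```

Here a1 has a time reserve (it can start up to one minute late without delaying a3), while a2, a3 and a4 are critical.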

4 Hierarchical Structure of the Task of Managing the Industrial Workshop

When constructing an automated system for a net-centric industrial workshop, the following should be taken into account.

1. In the process of implementing a specific work schedule (optimal in a certain sense), various unforeseen failures are possible: breakage of machines, shortage of components, unforeseen delays in individual macro-operations, etc. Therefore, the control system should provide a continuous monitoring mode for the entire process and a regime for promptly rescheduling the remaining work under the new conditions with the aim of optimizing it. Thus, it is necessary to correct the



process of performing the set of necessary operations in real time, taking into account the optimization requirements and the formulated optimality criteria.

2. In addition, when forming the structure of the control system, it is necessary to take into account the possibility of multi-criteria statements of the optimization tasks, when several particular indicators of the quality of the work of the industrial workshop are specified. In this case, the task of ensuring operation of the industrial workshop corresponding to a certain Pareto-optimal regime can be posed. Generally, it is advisable to use some physically justified convolution of the vector optimality criterion and proceed to optimization according to the corresponding generalized criterion.

3. Another point must be considered when solving management tasks for an industrial workshop with a hierarchical structure. It is assumed that each macro-operation, in turn, is a controlled process and consists of macro-operations of the subordinate (lower) level. In this case, there is a need to organize the interaction of subprocesses at different hierarchical levels, both among themselves and with the main control center. For this reason, it is advisable to apply the principles of net-centric control and methods of coordination in hierarchical systems. It is also advisable to use methods of hierarchical construction of Pareto sets at various technological levels [1].

All three of the above factors are present simultaneously in real industrial workshop tasks, which significantly complicates the management process as a whole, as well as its algorithmic support and software.

4.1 Inverse Tasks for Managing the Industrial Workshop

The three possible basic statements provided above for optimizing the operation of the industrial workshop correspond to certain levels of a specific complex production process and relate to the inverse tasks of modeling the process of automating the functioning of a net-centric industrial workshop. For each hierarchical level, one can formulate its own optimization goals and pose its own inverse tasks. Naturally, the inverse problems are based on the direct methods described in Sect. 3.2.

Task 1. Time interpretation. Estimate the amount of additional resources needed to ensure that the total time for the implementation of the technological chain does not exceed the specified value T0 while the additional investments are minimal.

Let the model of the technological chain at a specific hierarchical level of the general process of functioning of the industrial workshop, ⟨B, f, φ⟩, be given, and let the critical path bk = (bk1, bk2, …, bkm) be found as a result of solving the direct task of analysis. Let the execution time of the entire set of operations be

T = Σj φ(bkj) > T0 (summation over j = 1, …, m),


where T0 is the specified time for all necessary operations. The requirements for the process are violated, since it is required that the inequality T ≤ T0 be satisfied.



However, reserves are present, and reserve models can be different. In this case, we can assume that the allocation of a certain additional resource ri to the operation bi decreases the value of ti = φ(bi) by the value

Δti = di(ri),

where d ¼ ðd1 ; d2 ; . . .; dn Þ is a given resource vector function. The number of components of a given function is equal to the total number of macro-operations considered at a given hierarchical level. A natural statement of the problem arises. It is required to choose the resource vector r ¼ ðr1 ; r2 ; . . .; rm Þ in such a way that, firstly, the following inequality is satisfied X     Tr ¼ fu bj  dj rj g  T0 ; ð29Þ k

where the summation is performed over the critical operations of the new critical path, obtained in accordance with the final distribution of the additional resources (for example, performance, machinery downtime, production costs, etc.), and, second, the total resource used is minimal:

$$r = \sum_i r_i \to \min.$$


It is important to note that, when selecting the resource vector, for each next "test" value of this vector in the optimization process, the direct task of analysis must be solved anew to calculate the next new critical path.
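The direct task of analysis, finding the critical path of an acyclic technological chain $\langle B, f, u \rangle$, can be sketched as follows (a minimal Python illustration; the operation names and durations are invented for the example, and the successor mapping plays the role of the technological matrix):

```python
from functools import lru_cache

def critical_path(ops, succ):
    """Direct task of analysis: critical path of a technological chain.

    ops  maps operation -> duration u(b)
    succ maps operation -> list of successor operations (acyclic)
    Returns (total time T, list of operations on the critical path).
    """
    @lru_cache(maxsize=None)
    def longest_from(b):
        # Longest (critical) continuation starting at operation b
        tails = [longest_from(s) for s in succ.get(b, [])]
        best_t, best_path = max(tails, default=(0.0, []))
        return ops[b] + best_t, [b] + best_path

    # Start operations are those that are nobody's successor
    starts = set(ops) - {s for ss in succ.values() for s in ss}
    return max(longest_from(b) for b in starts)

# Toy chain: b1 -> b2 -> b4 and b1 -> b3 -> b4
ops = {"b1": 2.0, "b2": 4.0, "b3": 1.0, "b4": 3.0}
succ = {"b1": ["b2", "b3"], "b2": ["b4"], "b3": ["b4"]}
T, path = critical_path(ops, succ)   # T = 9.0, path = b1, b2, b4
```

In the inverse task, each trial resource vector $r$ shortens some durations in `ops` via $d_j(r_j)$, after which `critical_path` must be called again, exactly the repeated direct task described above.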

5 Conclusion

A method is proposed for ordering the set of elementary operations that make up the technological process. The search of operations begins "from the end", as in dynamic programming. As a result, a technological matrix is constructed that defines a model of the technological process as a set of interrelated and interdependent elementary operations. On the basis of this model, the analysis task is solved, which allows defining the critical and non-critical operations with network methods and constructing a schedule for the implementation of the technological process. Principles of controlling the technological process on the basis of solving inverse tasks are indicated.

Acknowledgements. The work was financially supported by the Ministry of Education and Science of the Russian Federation in the framework of the Federal Targeted Program for Research and Development in Priority Areas of Advancement of the Russian Scientific and Technological Complex for 2014-2020 (№ 14.584.21.0022, ID RFMEFI58417X0022).


V. Kotlyarov et al.

References
1. Voinov, N., Chernorutsky, I., Drobintsev, P., Kotlyarov, V.: An approach to net-centric control automation of technological processes within industrial IoT systems. Adv. Manuf. 5(4), 388–393 (2017)
2. Dobrzyński, M., Przybylski, W.: Simulation research on the tool cycle in automated manufacturing system at selected tool duplication levels. Adv. Manuf. Sci. Technol. 36(3), 55–66 (2012)
3. Grzesik, W., Niesłony, P., Bartoszuk, M.: Modelling of the cutting process – analytical and simulation methods. Adv. Manuf. Sci. Technol. 33(1), 5–29 (2009)
4. Serguei, A., Abdelhakim, F., Christophe: A generic production rules-based system for on-line simulation, decision making and discrete process control. Int. J. Prod. Econ. 112, 62–76 (2008)
5. Adam, R., Kotzé, P., Van der Merwe, A.: Acceptance of enterprise resource planning systems by small manufacturing enterprises. In: Proceedings of the 13th International Conference on Enterprise Information Systems, Beijing, China (2011)
6. Fritzson, P.: Introduction to Modeling and Simulation of Technical and Physical Systems with Modelica. Wiley-IEEE Press (2011)
7. Dong, S., Medeiros, D.: Minimising schedule cost via simulation optimisation: an application in pipe manufacturing. Int. J. Prod. Res. 50, 831–841 (2012)
8. Cicirelli, F., Furfaro, A., Nigro, L.: Modelling and simulation of complex manufacturing systems using statechart-based actors. Simul. Model. Pract. Theory 19, 685–703 (2011)
9. Neumann, M., Westkämper, E.: Method for situation-based modeling and simulation of assembly systems. Procedia CIRP 7, 413–418 (2013)
10. Neumann, M., Constantinescu, C., Westkämper, E.: Method for multi-scale modeling and simulation of assembly systems. Procedia CIRP 3, 406–411 (2012)
11. Rolón, M., Martínez, E.: Agent-based modeling and simulation of an autonomic manufacturing execution system. Comput. Ind. 63, 53–78 (2012)
12. Zbib, N., Pach, C., Sallez, Y., Trentesaux, D.: Heterarchical production control in manufacturing systems using the potential fields concept. J. Intell. Manuf. 23, 1649–1670 (2012)
13. Akartunali, K., Miller, A.J.: A heuristic approach for big bucket multi-level production planning problems. Eur. J. Oper. Res. 193(2), 396–411 (2009)

Student Learning Information Collection and Analysis System Based on Mobile Platform

Chuanhong Zhou, Chong Zhang, and Chao Dai

School of Mechatronic Engineering and Automation, Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai, China
[email protected], [email protected], [email protected]

Abstract. This article presents a student learning situation collection and analysis system based on a mobile platform. Built on data mining technology and the Kivy mobile development framework, the system uses the knowledge-point information that users enter daily to reason about their learning situation and push a learning plan, so that users can enter and manage knowledge-point information easily, forming a smart personal knowledge-point collection application. The aim is to help users better manage and review knowledge points.

Keywords: Student learning situation collection and analysis system · Kivy · Reasoning of learning

1 Introduction

With the constant breakthroughs in science and technology, people have more and more channels for obtaining information, and their dependence on information keeps growing. With smart devices now ubiquitous, people usually want to download dedicated apps on their smartphones to obtain the information they want in real time. In teaching especially, people hope not only to obtain information through teaching software, but also to have it record information about their learning situation, classify that situation, and recommend learning plans [1]. However, most current software only implements recording and exporting user data; very little software classifies user data and infers accurate, reasonable results from that classification. This system can therefore make up for these deficiencies in the market.

The technology adopted by this system to collect information is the Kivy mobile development framework. It is not only portable but, once compiled, runs on Windows, Linux, Android, iOS, and other systems, with good compatibility. Most importantly, the way we collect information differs from that of most other apps. The way we collect information is

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 489–497, 2019.


C. Zhou et al.

by entering information related to our own learning, and that information can be entered in a variety of ways. In summary, in today's fast-paced life, as people demand ever more intelligence, convenience, comfort, and efficiency, intelligent personal tools and products will become more and more important.

2 Mixed Reasoning Mechanism Based on Rule-Based Reasoning and Bayesian Reasoning

Rule-based reasoning and Bayesian reasoning [2] each have their own advantages and disadvantages. This paper combines the two organically into a hybrid reasoning mechanism. The inference process is shown in Fig. 1.

Fig. 1. Mixed reasoning flow chart based on rule reasoning and Bayesian inference

The implementation process is as follows: (1) First, the system performs search matching against the rules in the rule base using the given feature facts. (2) If rule reasoning retrieves a matching rule in the rule base, the system obtains the conclusion about the learning of the knowledge point through the rule reasoning mechanism, and the hybrid reasoning process ends. (3) If rule reasoning cannot derive a result, the system starts the Bayesian reasoning program. Based on the training data in the knowledge base, the Bayesian algorithm calculates the probability of each type of learning situation given the specified feature attributes [3]. Finally, by comparing the probability values of the various types of learning situations, the final reasoning result is obtained, and the hybrid reasoning process ends.
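A minimal Python sketch of this hybrid mechanism, assuming rules are (conditions, conclusion) pairs and training data are (features, class) pairs (both representations are our illustration, not the paper's actual implementation):

```python
from collections import Counter

def rule_reasoning(facts, rule_base):
    """Return the conclusion of the first rule whose conditions all
    match the given feature facts, or None if no rule matches."""
    for conditions, conclusion in rule_base:
        if all(facts.get(k) == v for k, v in conditions.items()):
            return conclusion
    return None

def naive_bayes_reasoning(facts, training):
    """Fallback: pick the learning-situation class c maximizing
    P(c) * prod_k P(feature_k | c), estimated from training data
    with Laplace smoothing."""
    classes = Counter(c for _, c in training)
    best, best_p = None, -1.0
    for c, n in classes.items():
        p = n / len(training)
        for k, v in facts.items():
            hits = sum(1 for f, cc in training if cc == c and f.get(k) == v)
            p *= (hits + 1) / (n + 2)        # Laplace smoothing
        if p > best_p:
            best, best_p = c, p
    return best

def hybrid_reasoning(facts, rule_base, training):
    """Step (2): try rule reasoning; step (3): Bayesian fallback."""
    result = rule_reasoning(facts, rule_base)
    return result if result is not None else naive_bayes_reasoning(facts, training)

# Hypothetical rule base and training data
rules = [({"score": "high"}, "mastered")]
training = [({"score": "low"}, "weak"),
            ({"score": "low"}, "weak"),
            ({"score": "high"}, "mastered")]
hybrid_reasoning({"score": "high"}, rules, training)   # rule fires
hybrid_reasoning({"score": "low"}, rules, training)    # Bayes fallback
```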

Student Learning Information Collection and Analysis System


3 System Requirements Analysis

Prior to the actual design of the system, we needed to analyze the system's requirements and, combining the requirements analysis with the system architecture, make a reasonable analysis of the system's functions, so that the system can achieve the intended functionality.

3.1 Functional Requirements Analysis

The purpose of developing a student learning situation collection and analysis system based on a mobile platform is to create an intelligent platform that makes it convenient for individuals to collect and manage personal learning knowledge. The system's learning-situation recommendation technology can help users keep track of their learning situation for each knowledge point in time and receive the system's recommended learning plan, ultimately helping the user improve learning efficiency. Specifically, the system can be divided into four major functional modules.
1. The main interface module, which should include login, registration, logout, information acquisition, data management, and other major sub-modules.
2. The information collection module, composed of different sub-modules or functions. It must be able to collect different types of information, such as texts, audio, pictures, and videos, and manage this data in a unified way.
3. The learning plan recommendation module, which uses data mining technology to reason about and classify the learning situation of each knowledge point, formulates a corresponding review plan according to the forgetting curve, and recommends it to the user for reference.
4. The data management module, which can add, delete, and review redundant or useless stored data and provides data import/export functions.

3.2 Interface Requirements Analysis

General interface requirements: the interface should be friendly, with a reasonable layout, soft and comfortable colors, and comfortable button sizes and positions. Uniform background images and icons should be used as far as possible so that all parts of the software present a unified style.

Special interface requirements: the interface should reflect the style and characteristics of student learning. The software icons are graphics related to student learning, and the interface background should carry a logo related to learning and education. In particular, the main interface should reflect elements related to student learning.



4 Detailed Design of the System

Based on the system requirements analysis described in the previous chapter, this chapter describes the overall structural design of the mobile-platform-based learning collection system. Through an overall analysis of the system, the detailed design and implementation are discussed in terms of the design flow, the functional module design, and the system database design.

4.1 System Design Flow

After the system software starts up and runs normally, the various modules of the system begin to work in coordination and complete the personalized recommendation for the user according to the following process. The user can review knowledge points according to the system-recommended learning plan and can also customize the study plan and other information through the system's feedback mechanism. The system design flow is shown in Fig. 2.

Fig. 2. System design flow chart




4.2 System Function Module Design

The functional module design of this system is divided into four modules: the information collection module, the data processing module, the learning situation inference module, and the prediction result push module.
A. Information Collection Module
It collects the user's knowledge points, the name of the current knowledge point, the learning situation, and other remarks, and writes the collected data to the database for storage, preparing for further data processing and analysis [4].
B. Data Processing Module
Because some of the collected data may be missing, duplicated, redundant, or non-standard, we need to clean the data. At the same time, if the collected data are continuous, we need to discretize them to facilitate the subsequent calculation of the mixed reasoning results for the learning situation.
C. Learning Situation Inference Module
This module is the core of the entire system and includes the following parts:
Rule inference: For the processed data, rule reasoning is performed first, and the results of rule inference determine the user's current learning situation for a given knowledge point.
Bayesian reasoning [5]: When rule reasoning does not produce a result, that is, there is no matching rule in the rule base, the program starts the Bayesian reasoning method, which calculates the probability of each type of learning situation; the most probable learning situation serves as the final reasoning result.
D. Learning Plan Push Module
The learning plan is generated according to the forgetting curve: a knowledge-point review rule table is established from the forgetting curve, and the review content is pushed to the user at each review node.
At the same time, the user can customize and give feedback on the recommended learning plan. In addition, according to the degree of overlap between user-defined knowledge points and the associated knowledge content, the system helps users develop personalized recommendations.
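Generating such a review rule table can be sketched in a few lines of Python; the specific interval values below are a common spaced-repetition choice and an assumption on our part, not taken from the paper:

```python
from datetime import date, timedelta

# Review intervals (in days) loosely following the Ebbinghaus
# forgetting curve: reviews cluster early, then spread out.
# The exact values are an assumed rule table.
REVIEW_INTERVALS = [1, 2, 4, 7, 15, 30]

def review_schedule(save_date, intervals=REVIEW_INTERVALS):
    """Build the knowledge-point review rule table: one reminder
    date per interval after the day the point was saved."""
    return [save_date + timedelta(days=d) for d in intervals]

plan = review_schedule(date(2018, 9, 1))
# first reminder on 2018-09-02, last on 2018-10-01
```

At each review node the push module would compare today's date against the stored schedule and remind the user of the corresponding content.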

4.3 System Database Design

(1) The data model design of the system
The database usually uses an ER diagram (Entity-Relationship Diagram) to build a data model and describe the relationships between entities. The ER diagram provides methods for representing entities, attributes, and relationships that can visually describe the



relationships between the data in the system. According to the previous analysis of functional requirements, the ER diagram of this system can be shown in Fig. 3.

Fig. 3. System ER diagram



(2) System database table design
Since the amount of data in this system is not large, and in order to express the relationships between data conveniently, manage the data, store system data in a structured way, improve data sharing and independence, and reduce data redundancy, we chose the open-source lightweight database SQLite as the system database. The main table of the system serves as an example; its structure is shown in Table 1:

Table 1. The structure of the main table

Field name        Field type  Field meaning      Allowed empty  Explain
info_id           Int         Record number      No             Primary key
owner_id          Int         User number        No             Foreign key
kind_of           Varchar     Subject type       No
info_name         Varchar     Content name       No
key_words         Varchar     Keyword            No
difficulty_level  Varchar     Difficulty degree  No
save_date         Datetime    Save time          No
file_path         Varchar     Save path          No
backup1           Varchar     Spare field 1      Yes
backup2           Varchar     Spare field 2      Yes
backup3           Varchar     Spare field 3      Yes
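Table 1 can be expressed directly as SQLite DDL. The following sketch mirrors the table, with NOT NULL constraints inferred from the "allowed empty" column; the table name and sample row are our illustration, not the paper's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # in-memory DB for the example
conn.execute("""
    CREATE TABLE main_info (
        info_id          INTEGER PRIMARY KEY,   -- record number
        owner_id         INTEGER NOT NULL,      -- user number (foreign key)
        kind_of          VARCHAR NOT NULL,      -- subject type
        info_name        VARCHAR NOT NULL,      -- content name
        key_words        VARCHAR NOT NULL,
        difficulty_level VARCHAR NOT NULL,
        save_date        DATETIME NOT NULL,
        file_path        VARCHAR NOT NULL,
        backup1          VARCHAR,               -- spare fields may be empty
        backup2          VARCHAR,
        backup3          VARCHAR
    )
""")
conn.execute(
    "INSERT INTO main_info VALUES (1, 42, 'math', 'derivatives', "
    "'calculus', 'hard', '2018-09-25 10:00:00', '/data/notes/1.txt', "
    "NULL, NULL, NULL)")
row = conn.execute("SELECT info_name FROM main_info").fetchone()
```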

5 System Implementation

The mobile-platform-based student learning collection system is built on the Kivy graphical interface framework. It is developed with the PyCharm IDE in the open-source Python language, and calls the NumPy scientific computing library and the Python-based Pyke rule engine library. The following uses some knowledge-point information as an example to verify the implementation process of the system.
The first stage: input of knowledge point information. Start the innovative teaching app and enter the account and password on the login page to reach the main interface; then select the "text editing" button to enter the text editing interface; fill in the corresponding contents and, after confirming them, click the "Save" button to complete the input of one knowledge point.
The second stage: data processing. Data processing is done automatically by a program written into the system. The basic processing includes data type conversion, data transcoding, default value setting, and so on.
The third stage: implementation of the hybrid inference mechanism based on rule-based reasoning and Bayesian inference [6]. The system first calls the RBR rule inference algorithm library, assigns the knowledge-point object to be measured, and then uses the get_result method in the class library to generate the inference result. If the result



is empty, the system falls back to the BN Bayesian inference algorithm library. Similarly, the value of the object to be measured is assigned, and the final reasoning result is obtained through the get_result method in the class library. In this process, the system also uses the try…except mechanism to capture exceptions that occur during the calculation and saves them in a timely manner to a log file for subsequent system maintenance and debugging.
The fourth stage: realization of the learning plan recommendation. According to the forgetting curve and the knowledge acquired in the previous stages, the learning plan information is generated and presented to the user on the system interface [7, 8]. The specific process is shown in Fig. 4:

Fig. 4. Implementation of the learning plan recommendation

6 Conclusion

This project is a simple and convenient student learning situation collection and analysis system developed with Python's Kivy framework and data mining technology. It initially realizes the user's demand for a real-time



information collection service, laying a solid foundation for building intelligent information collection and learning-situation push services in the future. It also provides a reference and experience for the design and implementation of information acquisition systems in other industries.

Acknowledgments. This work is supported by the Ministry of Education Cooperative Education Collaborative Project "Internet plus Intelligent Manufacturing Innovation and Entrepreneurship Education Platform Construction (No. T.08-0109-17-101)". We thank the Shanghai Key Laboratory of Intelligent Manufacturing and Robotics for assistance with our work.

References
1. Liu, W.: State estimation for discrete-time Markov jump linear systems with time-correlated measurement noise. Automatica 76 (2017)
2. Siuly, Wang, H., Zhang, Y.: Detection of motor imagery EEG signals employing Naïve Bayes based learning process. Measurement 86 (2016)
3. Basistov, Y.A., Yanovskii, Y.G.: Comparison of image recognition efficiency of Bayes, correlation, and modified Hopfield network algorithms. Pattern Recognit. Image Anal. 26(4) (2016)
4. Lv, L., Zhang, Q., Zeng, S., Wu, H.: Traffic classification of power communication network based on improved hidden Naive Bayes algorithm. In: 4th International Conference on Electrical & Electronics Engineering and Computer Science (ICEEECS 2016) (2016)
5. Ren, B., Shi, Y.: Research on spam filter based on improved Naive Bayes and KNN algorithm. In: 4th International Conference on Machinery, Materials and Computing Technology (2016)
6. Quan, W., Jiang, S., Han, C., Zhang, C., Jiang, Z.: Research on Bayes matting algorithm based on Gaussian mixture model. In: International Symposium on Multispectral Image Processing and Pattern Recognition (2015)
7. Liu, T., Li, R., Bu, Q.: A radar clutter suppression method based on fuzzy reasoning. In: Other Conferences (2016)
8. Tronco, T.R., Garrich, M., César, A.C., de Lacerda Rocha, M.: Cognitive algorithm using fuzzy reasoning for software-defined optical network. Photonic Netw. Commun. 32(2) (2016)

Task Modulated Cortical Response During Emotion Regulation: A TMS Evoked Potential Study

Wenjie Li¹,², Yingjie Li¹, and Dan Cao¹

¹ School of Communication and Information Engineering, Qianweichang College, Shanghai University, Shanghai 200444, China
[email protected]
² School of Information Science and Engineering, Changzhou University, Changzhou 213164, China

Abstract. Cognitive reappraisal is a strategy that achieves successful emotion regulation through cognitive change. Temporal dynamics, cortical oscillations, and functional networks have been studied in existing neuroimaging research. However, direct evidence of the neural activity involved in cognitive emotion regulation is still lacking, and it is not clear how the regions of the brain network interact with each other. We used synchronous TMS-EEG to study brain responses to stimulation of the dorsolateral prefrontal cortex during cognitive reappraisal tasks, and analyzed the TMS evoked potentials. There was a significant difference in N100 amplitude at Cz between the resting state and the emotional tasks: N100 in the resting state was significantly greater than in the task states under TMS, reflecting an inhibitory effect of the emotional tasks on TMS evoked potentials. This effect was absent in the sham TMS condition, showing a difference between stimulation conditions. No effect was found between the three emotional tasks.

Keywords: Emotion · Cognitive reappraisal · TMS evoked potentials · N100


1 Introduction

Emotion regulation is a key skill that plays an important role in human social activities. Cognitive reappraisal is a strategy that achieves successful emotion regulation through cognitive change. Reappraisal of emotional stimuli can reduce the effects of negative emotions, and this activity is especially related to the prefrontal cortex [1, 2]. Existing fMRI research has revealed that reappraisal recruits frontal and parietal regions to modulate emotional responding in the amygdala [3], but the neural mechanism of emotion regulation remains unclear. One EEG study examined the temporal dynamics of emotion regulation and reported changes in the late positive potential during the regulation process [4]. A cortical oscillation study suggested the left middle/inferior frontal cortex as part of the regulation network [5]. Although existing studies have found a number of electrophysiological phenomena and brain regions associated with emotion regulation, we still lack

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 498–502, 2019.

Task Modulated Cortical Response During Emotion Regulation


evidence of direct neural activity involved in cognitive emotion regulation within the functioning brain network, and it is not clear how the regions of the brain network interact with each other during emotion regulation.

Transcranial magnetic stimulation (TMS) is a magnetic method for stimulating small regions of the brain that provides a non-invasive means to trigger or modulate neural activity. A series of TMS evoked potentials (TEPs) have been reported using single-pulse TMS. The TEPs are named P30, N45, P60, N100, P180, and so on, according to their latency after TMS onset. N100 and P180 are the most reproducible components. N100 in particular has been reported in diverse functional paradigms, and Cz has been confirmed as the site with the highest amplitudes [6]. The analysis of TEPs can provide information about how the neural signal is distributed and modulated during cognitive processing [7]. Mattavelli et al. used TMS-EEG to explore cortical excitability at rest and during two different face processing tasks, and found that performing face tasks significantly modulated TEPs recorded at the occipital and temporal electrodes [8]. By applying TMS to a specific functional brain region, cortical-cortical connectivity and brain functional status can be analysed by observing evoked potentials in the region of interest.

We used TMS-EEG to explore the cortical response while subjects were at rest or performing emotional tasks. Studies have shown that the reduction of negative emotion during regulation is associated with prefrontal cortical activity, especially in the right dorsolateral prefrontal cortex (DLPFC) [9], so we applied single-pulse TMS to the right DLPFC.

2 Materials and Methods

We investigated the effects of TMS intervention on emotion regulation with the TMS-EEG technique. Single-pulse TMS or sham TMS was applied to the right dorsolateral prefrontal cortex, and the neural responses were analyzed under the different tasks and stimulation conditions.
Twenty-four healthy volunteers (mean age = 23.17 ± 2.83 years, mean education = 17.46 ± 1.54 years) participated in this study. All subjects were screened with the Self-Rating Anxiety Scale (SAS), Self-Rating Depression Scale (SDS), Five Factor Model (FFM), Emotion Regulation Self-Efficacy Scale (ERS), and Toronto Alexithymia Scale (TAS), confirming that all subjects met the experimental requirements.
An affective picture emotion regulation task was carried out according to the paradigm and experimental procedure. Each stimulus sequence contained a task description box of 2000 ms, a random interval of 1000 ms, an affective picture, a valence judgment box, and an arousal decision box. The affective pictures, collected from the International Affective Picture System, comprised 30 neutral pictures and 90 negative pictures selected as stimuli. The emotion regulation tasks were divided into natural gaze at neutral pictures, natural gaze at negative pictures, and cognitive reappraisal of negative pictures (Fig. 1).


W. Li et al.

Fig. 1. Schematic diagram of stimulus sequence

Monophasic TMS pulses were delivered using a figure-of-eight coil connected to a MagPro X100 magnetic stimulator (Denmark; maximum output strength 2.5 T); the coil model is MCF-B70. The coil was positioned over the right DLPFC using neuronavigation. Stimulation was performed at 120% of the resting motor threshold (RMT) intensity, where RMT was determined as the minimum intensity required to evoke at least 5 motor evoked potentials greater than 50 µV in amplitude. EEG was recorded from 64 surface electrodes mounted in an elastic cap at a sampling frequency of 5000 Hz.
EEG data processing was performed using FieldTrip [10] and custom scripts on the MATLAB platform. EEG data were epoched around the TMS pulse (−500 to 2000 ms), baseline corrected (−500 to −110 ms), and bad trials were discarded. Data containing large TMS artifacts were cut out, and the blanks were filled by cubic interpolation. The EEG data were then downsampled to 1000 Hz and detrended. Independent component analysis (ICA) was used to remove the remaining muscle and EOG artefacts. Finally, the data were low-pass filtered with a cutoff frequency of 20 Hz.
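The artifact-removal step, cutting out the samples around the TMS pulse and filling the blank by cubic interpolation, can be sketched in NumPy as follows (the paper's actual pipeline uses FieldTrip in MATLAB; the artifact window, fitting margin, and synthetic signal here are purely illustrative):

```python
import numpy as np

def interpolate_artifact(signal, start, stop, margin=20):
    """Cut out the samples containing the large TMS artifact
    (start..stop, in samples) and fill the blank with a cubic
    polynomial fitted to `margin` clean samples on each side."""
    idx = np.r_[np.arange(start - margin, start),
                np.arange(stop, stop + margin)]
    # Centre x-values at `start` for better numerical conditioning
    coeffs = np.polyfit(idx - start, signal[idx], deg=3)
    out = signal.copy()
    gap = np.arange(start, stop)
    out[gap] = np.polyval(coeffs, gap - start)
    return out

# Synthetic trial: slow wave plus a huge artifact at samples 500-530
t = np.arange(2500)
eeg = np.sin(2 * np.pi * t / 1000.0)
eeg[500:530] += 100.0                       # simulated TMS pulse artifact
clean = interpolate_artifact(eeg, 500, 530)
```

After this step the cleaned epochs can be downsampled, detrended, passed to ICA, and low-pass filtered as described above.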

3 Results

TMS-induced artifacts were successfully removed, and the neural responses to stimulation were analyzed. The TEP waveforms were inspected, and task modulation effects, task comparisons, and stimulation condition comparisons were obtained using statistical methods.
For the resting state, the trials were averaged for each subject to obtain the TEPs. For the task state, subjects executed the three tasks with and without stimulation, and trials were averaged for each task under each stimulation condition. To isolate the TEPs in the task state, trials of the TMS no-task condition were subtracted from the TMS task conditions, yielding TEPs for each of the three tasks; the same was done for the sham TMS condition.
TEPs were observed in each subject, with N100 and P180 the most reproducible components. The amplitudes of N100 were examined at different electrodes, confirming that Cz showed the greatest N100 amplitude. After grand averaging, the TEP waveforms on Cz for the TMS group are shown in Fig. 2 and for the sham TMS group in Fig. 3. Both TMS and sham TMS evoked N100 and P180. P180 in the TMS group was larger than in the sham group, whereas N100 was smaller. TEPs in the resting state had a larger amplitude than in the three task states, and there was no obvious difference between the three tasks.



Fig. 2. TEP waveforms on Cz for TMS group

Fig. 3. TEP waveforms on Cz for sham group

Statistical analysis was carried out on the amplitudes of N100 and P180, where the amplitude was calculated as the average signal within ±5 ms of the peak latency. We examined whether the N100 and P180 components were influenced by the tasks. A repeated measures ANOVA with 4 task states (rest, negative, neutral, and reappraisal) as within-subjects factors was conducted for both the TMS group and the sham TMS group. N100 was examined first. There was a significant effect of task state on N100 amplitude for the TMS group (F[1, 9] = 3.08, p = .044), and preplanned



contrasts indicated that N100 in the resting state was significantly greater than during natural gaze at negative pictures (F[1, 9] = 6.63, p = .030) and natural gaze at neutral pictures (F[1, 9] = 5.91, p = .038). There was a trend for N100 in the resting state to be greater than in the cognitive reappraisal task (F[1, 9] = 4.83, p = .056). The same statistical method was used for the sham group, but there was no effect on N100. P180 was then analyzed, and no significant effect was found.
A repeated measures ANOVA on TEP amplitude with 3 emotional task states (negative, neutral, and reappraisal) as within-subjects factors was conducted to test for differences between the three emotional tasks; the results showed no significant difference. We also compared the mean resting-state TEPs between the TMS and sham TMS conditions using an independent-samples t-test, and found no significant difference.
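The amplitude measure used here, the mean signal within ±5 ms of the peak latency, can be sketched as follows (a NumPy illustration on a synthetic N100-like waveform; the 80–130 ms search window is our assumption, not stated in the paper):

```python
import numpy as np

def n100_amplitude(tep, fs=1000, search=(80, 130), half_win_ms=5):
    """Find the N100 peak (most negative deflection within the assumed
    search window, in ms after TMS onset) and return its amplitude as
    the mean signal within +/-5 ms of the peak latency."""
    lo = int(search[0] * fs / 1000)
    hi = int(search[1] * fs / 1000)
    peak = lo + int(np.argmin(tep[lo:hi]))          # peak latency (samples)
    w = int(half_win_ms * fs / 1000)
    return float(tep[peak - w:peak + w + 1].mean())

t = np.arange(300) / 1000.0                          # 0-299 ms at 1 kHz
# Synthetic N100-like dip: -5 uV Gaussian centred at 100 ms
tep = -5.0 * np.exp(-((t - 0.100) ** 2) / (2 * 0.010 ** 2))
amp = n100_amplitude(tep)                            # roughly -4.8 uV
```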

References
1. Ochsner, K.N., Gross, J.J.: The neural architecture of emotion regulation. In: Gross, J.J. (ed.) Handbook of Emotion Regulation, pp. 87–109. Guilford Press, New York (2007)
2. Maxwell, C.: A Treatise on Electricity and Magnetism, vol. 2, 3rd edn., pp. 68–73. Clarendon, Oxford (1892)
3. Buhle, J.T., Silvers, J.A., Wager, T.D., Lopez, R., Onyemekwu, C., et al.: Cognitive reappraisal of emotion: a meta-analysis of human neuroimaging studies. Cereb. Cortex 24, 2981–2990 (2014)
4. Thiruchselvam, R., Blechert, J., Sheppes, G., Rydstrom, A., Gross, J.J.: The temporal dynamics of emotion regulation: an EEG study of distraction and reappraisal 87(1), 84–92 (2011)
5. Ertl, M., Hildebrandt, M., Ourina, K., Leicht, G., Mulert, C.: Emotion regulation by cognitive reappraisal–the role of frontal theta oscillations. NeuroImage 81, 412–421 (2013)
6. Du, X., Choa, F.-S., Summerfelt, A., Rowland, L.M., Chiappelli, J., et al.: N100 as a generic cortical electrophysiological marker based on decomposition of TMS evoked potentials across five anatomic locations. Exp. Brain Res. 235(1), 69–81 (2017)
7. Miniussi, C., Thut, G.: Combining TMS and EEG offers new prospects in cognitive neuroscience. Brain Topogr. 22, 249–256 (2010)
8. Mattavelli, G., Rosanova, M., Casali, A.G., Papagno, C., Lauro, L.J.R.: Top-down interference and cortical responsiveness in face processing: a TMS-EEG study. NeuroImage 76, 24–32 (2013)
9. Kalisch, R., Wiech, K., Herrmann, K., Dolan, R.J.: Neural correlates of self-distraction from anxiety and a process model of cognitive emotional regulation. J. Cogn. Neurosci. 18, 1266–1276 (2006)
10. Oostenveld, R., Fries, P., Maris, E., Schoffelen, J.-M.: FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. (2011)
11. Rogasch, N.C., Thomson, R.H., Farzan, F., Fitzgibbon, B.M., Bailey, N.W.: Removing artefacts from TMS-EEG recordings using independent component analysis: importance for assessing prefrontal and motor cortex network properties. NeuroImage 101, 425–439 (2014)

The Research on the Framework of Machine Fault Diagnosis in Intelligent Manufacturing

Min Ji

School of Intelligent Manufacturing and Control Engineering, College of Engineering, Shanghai Polytechnic University, No. 2360, Jinhai Road, Shanghai, China
[email protected]

Abstract. Against the background of intelligent manufacturing, fault diagnosis is becoming increasingly important. To reduce the time and space cost of processing massive multidimensional data in machine learning fault diagnosis algorithms, this paper proposes a fault diagnosis framework based on DEMATEL and the Support Vector Machine. The DEMATEL algorithm reduces the dimensionality of the data, while the Support Vector Machine handles multi-class classification of small samples with non-linear features and classifies the fault data after dimensionality reduction. The proposed framework improves the efficiency of fault diagnosis effectively and supports the further development of intelligent manufacturing.

Keywords: DEMATEL · SVM · Fault diagnosis · Intelligent manufacturing

1 Introduction

With the further development of intelligent manufacturing, the fault diagnosis of machine equipment is of great significance. In the past, machine fault diagnosis was based on the experience of experts. With the spread of sensor technology, more and more sensors are installed on machine equipment to acquire status parameters in real time, and many intelligent learning algorithms have replaced the traditional method of diagnosing faults from expert experience [1]. Discovering fault sources quickly, accurately and in a timely manner is key to a stable and orderly intelligent manufacturing process. Because of the complexity of machine fault diagnosis and the interactions among subsystems, conventional fault diagnosis models based on linear hypotheses have difficulty achieving the desired results [1]. Applying machine learning algorithms to fault classification is therefore an important approach. Over the past several decades, many classification algorithms have been developed, such as decision trees [2], random forests [3], neural networks [4, 5], Bayesian theory [6] and so on. However, the large number of sensors placed on a machine to acquire real-time running state parameters produces massive multidimensional data, which increases the computation time and space of the fault diagnosis algorithm [1] and can thus affect real-time diagnosis. This paper therefore studies these two types of problems.

© Springer Nature Singapore Pte Ltd. 2019
K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 503–508, 2019.



The Support Vector Machine (SVM) is a machine learning theory that performs well on small samples and can solve small-sample, non-linear classification problems. It not only ensures high learning ability but also takes generalization ability into account, balancing the reliability and complexity of the model according to the limited sample information [7]. The DEMATEL method is effective for factor analysis and identification; it is particularly suited to complex social problems, especially systems with uncertain factors, and is widely used in many fields [8, 9]. In this paper, the DEMATEL model is adopted to reduce the dimensionality of the machine running state parameters and to select the important state parameters as inputs to the Support Vector Machine fault diagnosis model.

2 Dimension Reduction of Fault Data Based on DEMATEL

The DEMATEL model was proposed in 1972 to study and solve complex and difficult problems by applying graph theory and matrix theory to system factor analysis [10]. It can untangle groups of interconnected problems and identify feasible solutions by classification. Through matrix calculus, it visually shows and simplifies the logical relationships between the problems. The method makes full use of experts' experience and knowledge to deal with complex problems, and overcomes the difficulty traditional methods have in analyzing systems with multiple subsystems or components [10]. Among many parameters, it can identify the cause parameters and the result parameters, and rank the parameters by their centrality degrees, which indicate the importance of each parameter. The specific algorithm is as follows.

Experts are invited to evaluate the relationships between the state parameters from experience. The degree of mutual influence among the state parameters is scored on a Likert scale, where 0 means no influence and N denotes the degree of influence of one state parameter on another. A direct influence matrix A can be constructed:

\[
A = \left( a_{ij} \right)_{n \times n} =
\begin{bmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{n1} & \cdots & a_{nn}
\end{bmatrix}
\]
Here \(a_{ij}\) denotes the degree of influence of state parameter i on state parameter j. Normalize the direct influence matrix A so that the element values of X lie between 0 and 1:

\[
X = \left( x_{ij} \right)_{n \times n} =
\frac{A}{\max\left( \max_{1 \le i \le n} \sum_{j=1}^{n} a_{ij},\; \max_{1 \le j \le n} \sum_{i=1}^{n} a_{ij} \right)}
\]

Construct a comprehensive influence matrix T:

\[
T = X \left( E - X \right)^{-1}
\]

where E is the identity matrix.
Calculate the degree of influence D and the degree of being influenced R of each parameter:

\[
D = \left( t_i \right)_{n \times 1} = \left( \sum_{j=1}^{n} t_{ij} \right)_{n \times 1},
\qquad
R = \left( t_j \right)_{1 \times n} = \left( \sum_{i=1}^{n} t_{ij} \right)_{1 \times n}
\]

Then calculate the cause degree, the result degree and the centrality degree of each state parameter. If \(D_i - R_i > 0\), parameter i is a cause parameter. If \(D_i - R_i < 0\), parameter i is a result parameter, which is influenced by the cause parameters. The centrality degrees \(D_i + R_i\) can be arranged in descending order to indicate the importance of the parameters.
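The DEMATEL steps above can be sketched in a few lines of NumPy. This is a minimal illustration: the `dematel` function name and the toy expert scores are assumptions for the example, not from the paper.

```python
import numpy as np

def dematel(A):
    """DEMATEL influence measures from a direct influence matrix A.

    Returns (D, R): D (row sums of T) is each parameter's degree of
    influence, R (column sums of T) its degree of being influenced.
    """
    A = np.asarray(A, dtype=float)
    # Normalize by the larger of the max row sum and max column sum
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    X = A / s
    # Comprehensive influence matrix T = X (E - X)^{-1}
    T = X @ np.linalg.inv(np.eye(len(A)) - X)
    return T.sum(axis=1), T.sum(axis=0)

# Toy 3-parameter example; the expert scores are purely illustrative
A = [[0, 3, 2],
     [1, 0, 3],
     [1, 2, 0]]
D, R = dematel(A)
centrality = D + R             # importance: sort descending to pick SVM inputs
cause = D - R                  # > 0: cause parameter, < 0: result parameter
ranking = np.argsort(centrality)[::-1]
```

Parameters at the top of `ranking` would be the candidates to feed into the SVM model in the next section.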

3 SVM Classification Algorithm

In 1995, Cortes and Vapnik first proposed the Support Vector Machine theory, an emerging machine learning theory that performs well on small samples. In essence, an SVM is a kind of feedforward neural network [11], and it shows many unique advantages. It is grounded in statistical learning theory, breaks the limitations of traditional machine learning theory, and solves problems that traditional machine learning cannot. The optimization goal of neural network algorithms is empirical risk minimization, which is prone to overfitting: strong learning ability but poor generalization. The Support Vector Machine is based on structural risk minimization and the Vapnik-Chervonenkis dimension, which preserves high learning ability while also taking generalization ability into account [12].

The basic idea of the SVM classifier is as follows: a non-linear function \(\varphi(\cdot)\) maps the training data from a low-dimensional space, where they are not linearly separable, to a high-dimensional space where they are; the optimal separating hyperplane is then constructed in that high-dimensional space, and finally the discrimination function of the classifier is obtained. The hyperplane is

\[
\left( w \cdot \varphi(x) \right) + b = 0
\]

The discrimination function is:

\[
y(x) = \operatorname{sign}\left[ \left( w \cdot \varphi(x) \right) + b \right]
\]

The optimal separating hyperplane problem can be described as:

\[
\begin{cases}
\min\limits_{w,\, b,\, \xi} \ \dfrac{1}{2} \left\| w \right\|^2 + C \sum\limits_{i=1}^{l} \xi_i \\
\text{s.t.} \ y_i \left( \left( w \cdot \varphi(x_i) \right) + b \right) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, 2, \dots, l
\end{cases}
\]

The dual of this quadratic programming problem is:

\[
\begin{cases}
\max \ L = \sum\limits_{i=1}^{l} a_i - \dfrac{1}{2} \sum\limits_{i=1}^{l} \sum\limits_{j=1}^{l} a_i a_j y_i y_j \left( \varphi(x_i) \cdot \varphi(x_j) \right)
= \sum\limits_{i=1}^{l} a_i - \dfrac{1}{2} \sum\limits_{i=1}^{l} \sum\limits_{j=1}^{l} a_i a_j y_i y_j K\left( x_i, x_j \right) \\
\text{s.t.} \ \sum\limits_{i=1}^{l} y_i a_i = 0 \\
\phantom{\text{s.t.}} \ 0 \le a_i \le C, \quad i = 1, 2, \dots, l
\end{cases}
\]

\(K(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j)\) is the kernel function, and \(a_i\), \(a_j\) are Lagrange multipliers. The optimal separating hyperplane is:

\[
g(x) = \sum_{i=1}^{l} \hat{a}_i y_i \left( \varphi(x_i) \cdot \varphi(x) \right) + \hat{b}
= \sum_{i=1}^{l} \hat{a}_i y_i K\left( x_i, x \right) + \hat{b}
\]

\(\hat{b}\) can be calculated according to:

\[
\hat{b} = y_j - \hat{w} \cdot \varphi(x_j)
= y_j - \sum_{i=1}^{l} \hat{a}_i y_i \left( \varphi(x_i) \cdot \varphi(x_j) \right)
= y_j - \sum_{i=1}^{l} \hat{a}_i y_i K\left( x_i, x_j \right)
\]

where \(\hat{a}_i\) is the estimated value of \(a_i\).
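As a check on the formulas above, the decision function \(g(x)\) can be reconstructed from a fitted scikit-learn `SVC`, whose `dual_coef_` attribute stores the products \(\hat{a}_i y_i\) for the support vectors and whose `intercept_` is \(\hat{b}\). The synthetic data and kernel settings below are illustrative assumptions, not from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Small synthetic two-class problem (illustrative data only)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # classes not linearly separable

clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)

# g(x) = sum_i a_i y_i K(x_i, x) + b: dual_coef_ stores a_i * y_i for the
# support vectors x_i, and intercept_ is b
K = rbf_kernel(clf.support_vectors_, X, gamma=1.0)
g = clf.dual_coef_ @ K + clf.intercept_

# Matches scikit-learn's own decision function
assert np.allclose(g.ravel(), clf.decision_function(X))
```

The sign of \(g(x)\) then gives the class prediction, exactly as in the discrimination function \(y(x) = \operatorname{sign}[g(x)]\).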

4 The Framework of Fault Diagnosis

The framework of fault diagnosis based on DEMATEL and SVM can be divided into four stages (Fig. 1). The first stage is the pre-processing phase, which includes determining the fault diagnosis object and collecting historical data. Depending on the machine equipment, different sensors are deployed to capture the machine operating parameters. The historical data are later used to generate the model. The second stage is the dimensionality reduction of the fault data. After obtaining the machine operating parameters, a DEMATEL model is constructed to analyze the relationships between the parameters, identify the cause parameters and the result parameters, and rank the parameters by their centrality degrees. According to the centrality degrees, the important parameters are selected as the inputs of the SVM fault diagnosis model. The third stage is the data training phase. The parameter values of each fault state are divided into two groups: a training sample and a fault diagnosis classification sample. The training sample is the input of the SVM model. An appropriate kernel function is selected, and cross-validation with grid search is applied to optimize the model parameters, yielding the trained SVM fault diagnosis model.



Fig. 1. The framework of fault diagnosis based on DEMATEL and SVM

The fourth stage is the diagnosis and evaluation phase. Faults are diagnosed with the SVM model generated in the third stage, and the accuracy of the model is evaluated.
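The training and evaluation stages above can be sketched with scikit-learn. The dataset, parameter grid and pipeline below are illustrative assumptions standing in for the DEMATEL-selected parameters and labelled fault states, not the paper's actual data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for DEMATEL-selected state parameters and labelled fault states
X, y = make_classification(n_samples=200, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 3: cross-validated grid search over RBF-kernel hyperparameters
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1]}
search = GridSearchCV(pipe, grid, cv=5).fit(X_train, y_train)

# Stage 4: diagnose held-out samples and evaluate accuracy
accuracy = search.score(X_test, y_test)
```

Scaling inside the pipeline keeps the cross-validation honest: the scaler is refit on each training fold rather than on the full dataset.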

5 Conclusion

In this paper, we propose a framework for fault diagnosis based on DEMATEL and SVM. DEMATEL analyzes the relationships between the machine running parameters and ranks the parameters by importance. The SVM classifier then diagnoses faults based on the parameters selected by DEMATEL. This framework not only reduces the dimensionality of the multidimensional parameters but also improves diagnosis performance by using a machine learning algorithm. In the future, the proposed framework can be applied in practical production to verify its performance.

Acknowledgement. This work is supported by the Yong Teacher Training Project of Shanghai Municipal Education Commission (Grant No. ZZegd16007).

References

1. He, L., Wang, X., Han, B.: Fault diagnosis of diesel engine based on random forest and support vector machine. Navig. China 40(2), 29–33 (2017)
2. Yang, B.S., Lim, D.S., Tan, A.C.C.: VIBEX: an expert system for vibration fault diagnosis of rotating machinery using decision tree and decision table. Expert Syst. Appl. 28(4), 735–742 (2005)
3. Cerrada, M., Zurita, G., Cabrera, D., et al.: Fault diagnosis in spur gears based on genetic algorithm and random forest. Mech. Syst. Sign. Process. 70–71, 87–103 (2016)
4. Baldi, P., Blanke, M., Castaldi, P., et al.: Combined geometric and neural network approach to generic fault diagnosis in satellite actuators and sensors. IFAC Papersonline 49(17), 432–437 (2016)
5. Liu, J., Wang, F., Yang, Z.: Transformer fault diagnosis based on RBF neural network and adaptive genetic algorithm. Eng. J. Wuhan Univ. (2016)
6. Cai, B., Huang, L., Xie, M.: Bayesian networks in fault diagnosis. IEEE Trans. Ind. Inform. PP(99), 1 (2017)
7. Jiang, Z.: Research on fault diagnosis of nuclear power equipment based on support vector machine. Nanhua University (2010)
8. Zhou, X., Shi, Y., Deng, X., et al.: D-DEMATEL: a new method to identify critical success factors in emergency management. Saf. Sci. 91, 93–104 (2017)
9. Su, C.M., Horng, D.J., Tseng, M.L., et al.: Improving sustainable supply chain management using a novel hierarchical grey-DEMATEL approach. J. Clean. Prod. 134(5), 469–481 (2016)
10. Fu, J., Mao, Z., Feng, G.: A new method of prioritizing failures of harmony device based on FMECA and DEMATEL method. M&E Eng. Technol. (5), 96–99 (2014)
11. Haykin, S.: Neural Networks: A Comprehensive Foundation, pp. 71–80 (2008)
12. Zhang, S.: A survey of improved algorithm and application on support vector machine. J. Jiangsu Univ. Technol. 22(2), 14–17 (2016)

Utilization of MES System, Enablers and Disablers

Inger Gamme1 and Ragnhild J. Eleftheriadis2

1 SINTEF Raufoss Manufacturing AS (SRM), Product and Production Development, Raufoss, Norway
[email protected]
2 SINTEF Raufoss Manufacturing AS (SRM), Product and Production Development, Trondheim, Norway
[email protected]

Abstract. Pressure from endlessly changing global markets forces manufacturers to work continuously to improve the quality and reduce the cost of their products and processes. Knowing the status of what is being produced is therefore essential to succeeding in this work. Different types of software applications are often used to collect product and process data and to provide visualization for controlling the production processes. Despite advanced systems, manufacturers often experience difficulties related to information gathering, sharing and interpretation. Based on a case study of a single mass producer, this article presents experience data and illustrates possible challenges in succeeding with a MES system.

Keywords: Knowledge management · MES system · Performance

1 Introduction

In a continuously changing and challenging environment, European manufacturers are forced to continuously improve their processes. Production strategies and methods must therefore focus on achieving processes that optimize resource use and produce high-quality products more efficiently than is currently the case. The condition of the machines, the manufacturing processes and decision-making influence the degree of productivity, performance and product quality achieved. To remain competitive and maintain their leading manufacturing position, it is essential that European industry deliver high-quality products, produced at low cost in the most efficient way. To avoid errors or disturbances in processes, companies aim for a Zero-Defect Manufacturing system, and to achieve this, enterprises worldwide often implement powerful systems to acquire accurate and complex production data (Wang 2013). This study presents important contingencies for companies' capability to succeed with, fully utilize and gain sufficient feedback from a MES system. This qualitative exploratory case study reveals that several disablers prevent full utilization of such a system.

© Springer Nature Singapore Pte Ltd. 2019 K. Wang et al. (Eds.): IWAMA 2018, LNEE 484, pp. 509–514, 2019.



2 Theory

2.1


To monitor and control real-time and flexible processes, production departments often use self-developed databases or spreadsheets. Because the maintenance and integration of such systems become increasingly complex as they grow over time, software developers have been prompted to develop complete, integrated solutions. Such solutions, often mentioned in connection with Cyber Physical Systems (CPS) and Industry 4.0 technology, offer a common user interface and data management system, and are often referred to as Manufacturing Execution Systems (MES) (Saenz de Ugarte et al. 2009). Nowadays, industrial companies have many different IT solutions available, such as Enterprise Resource Planning (ERP), Manufacturing Execution Systems (MES), Product Lifecycle Management (PLM) systems etc., to support them in their daily challenges. Advanced software makes it possible to collect and store large amounts of data (Lee 2008; Lee et al. 2015). Though such systems have been proven to assist in solving daily work tasks, the data created is growing exponentially (Dhuieb et al. 2016). With complex processes, an extensive effort is often needed to collect, handle and analyse product and process quality data to detect faults (Tong et al. 2017). A MES has been found to improve the return on operational assets in addition to on-time delivery, inventory turns, gross margin and cash-flow performance. Via two-way communication, a MES offers mission-critical information about production activities across the enterprise and supply chain. Younus et al. (2010) categorize MES more as a method than as a specific software application. Furthermore, they claim that to succeed with a MES, it is important for organizations to develop a culture that supports the sharing of data across departments, and to focus on processes that improve the cooperation between the production and management functions involved in product manufacturing.

The definition of MES (Manufacturing Execution System) has historically tended to be shaped by what each software producer offered as the capacity of its own systems. Hence, the MESA organization, gathering the major actors of the market, proposed the following formal definition: "MES deliver information that enables the optimization of production activities from order launch to finished goods. Using current and accurate data, a MES guides, initiates, responds to and reports on plant activities as they occur. The resulting rapid response to changing conditions, coupled with a focus on reducing non-value-added activities, drives effective plant operations and processes" (MESA). Hänel and Felden (2011) emphasize the importance of considering the various manufacturing environments, with their diverse complexity of products and processes, when implementing MES systems. The quality-control functionality of a MES is connected to real-time measurements collected from the manufacturing shop floor, which secure good quality of the produced products through continuous control (Younus et al. 2010). Seeking to eliminate the impact of accidental or non-systematic factors is essential for quality-oriented process control. Such factors can appear frequently and with dissimilar characteristics. The process outcome can be influenced by several aspects, e.g. non-homogeneities of incoming material, errors in machine settings, or errors in the tools or maintenance of the production facilities (Westkämper and Warnecke 1994).





Tools such as Statistical Process Control (SPC) are often used by manufacturers to monitor process and product variables over time (Tong et al. 2017). Samples are taken from the production process at fixed or variable intervals, and statistics can be computed and displayed using statistical methods. Exact and complex datasets are stored in databases at the different stages of manufacturing, and these datasets are linked to products, machines, materials, processes, inventories, sales, markets, etc. Valuable information and knowledge can be extracted from these datasets, including patterns, trends, associations, dependencies and rules (Reynolds et al. 1988). According to Mason and Antony (2000), a lack of understanding of the potential benefits of SPC is one of the main reasons why SPC implementations fail. They state that the focus should not be on the control charts themselves, but rather on what should be the foundation for establishing SPC. A lack of understanding of the benefits of SPC has been cited in the literature as a hindrance to succeeding with SPC. In addition, the following issues are listed: lack of commitment from management; lack of education and training; misinterpretation of control charts leading to unsatisfactory actions; lacking knowledge of the important parameters to monitor; and SPC often being implemented as the result of a customer requirement rather than out of a need to improve the capability of the processes (Antony and Taner 2003). Bergquist and Albing (2006) performed a study within the Swedish industrial sector to find out to what degree statistical methods (SM) are used in workplaces, and how they are being used. Furthermore, they sought to find out what motivated the implementation of statistical methods such as SPC, or the decision not to implement them, and whether this differed by type of organization. Additionally, they looked for what would help increase the use of SM.
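As a minimal illustration of the control-chart computation described above, the following sketch derives Shewhart X-bar chart limits from rational subgroups. The function name, the constant value and the simulated data are standard textbook assumptions for the example, not drawn from the case study.

```python
import numpy as np

def xbar_limits(samples, a2=0.577):
    """Shewhart X-bar control limits from rational subgroups.

    `samples` is an (m, n) array of m subgroups of size n; a2 is the
    standard control-chart constant (0.577 for subgroups of size 5).
    """
    samples = np.asarray(samples, dtype=float)
    center = samples.mean()                 # grand mean (center line)
    rbar = np.ptp(samples, axis=1).mean()   # average subgroup range
    return center - a2 * rbar, center, center + a2 * rbar

rng = np.random.default_rng(1)
data = rng.normal(loc=10.0, scale=0.2, size=(25, 5))  # 25 subgroups of 5
lcl, center, ucl = xbar_limits(data)
subgroup_means = data.mean(axis=1)
out_of_control = (subgroup_means < lcl) | (subgroup_means > ucl)
```

Plotting `subgroup_means` against the center line and the two limits gives the control chart the text refers to; points outside the limits flag possible non-systematic (special-cause) variation.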
The main findings of the study showed that a lack of competence and too few resources were the main disablers for expanded use of SM. When manufacturers work towards measuring and improving the performance of their manufacturing process, defining the necessary Key Performance Indicators (KPIs) is essential. Furthermore, developing process performance improvement strategies that are balanced across all critical objectives and targeted to the manufacturin