Advances in Service and Industrial Robotics

This volume contains the proceedings of the RAAD 2018 conference, covering major areas of research and development in robotics. It provides an overview of the advances in robotics, more specifically in novel design and applications of robotic systems; dexterous grasping, handling and intelligent manipulation; intelligent cooperating and service robots; advanced robot control; human-robot interfaces; robot vision systems and visual servoing techniques; mobile robots; humanoid and walking robots; field and agricultural robotics; bio-inspired and swarm robotic systems; developments towards micro- and nano-scale robots; aerial, underwater and spatial robots; robot integration in holonic manufacturing; personal robots for ambient assisted living; medical robots and bionic prostheses; and intelligent information technologies for cognitive robots. The primary audience of the work are researchers as well as engineers in robotics and mechatronics.




Mechanisms and Machine Science 67

Nikos A. Aspragathos Panagiotis N. Koustoumpardis Vassilis C. Moulianitis Editors

Advances in Service and Industrial Robotics Proceedings of the 27th International Conference on Robotics in Alpe-Adria Danube Region (RAAD 2018)

Mechanisms and Machine Science Volume 67

Series editor
Marco Ceccarelli, LARM: Laboratory of Robotics and Mechatronics, DICeM: University of Cassino and South Latium, Via Di Biasio 43, 03043 Cassino (Fr), Italy, e-mail: [email protected]

Editorial Board Members
Alfonso Hernandez, Mechanical Engineering, University of the Basque Country, Bilbao, Vizcaya, Spain
Tian Huang, Department of Mechatronical Engineering, Tianjin University, Tianjin, China
Steven A. Velinsky, Mechanical and Aerospace Engineering, University of California Davis, Davis, California, USA
Yukio Takeda, Mechanical Engineering, Tokyo Institute of Technology, Tokyo, Japan
Burkhard Corves, Institute of Mechanism Theory, Machine Dynamics and Robotics, RWTH Aachen University, Aachen, Nordrhein-Westfalen, Germany

This book series establishes a well-defined forum for monographs, edited books, and proceedings on mechanical engineering with particular emphasis on MMS (Mechanism and Machine Science). The final goal is the publication of research that shows the development of mechanical engineering, and particularly MMS, in all technical aspects, even in very recent assessments. Published works share an approach by which technical details and formulations are discussed in terms of modern formalisms, with the aim of circulating research and technical achievements for use in professional, research, academic, and teaching activities. This technical approach is an essential characteristic of the series. By discussing technical details and formulations in terms of modern formalisms, the possibility is created not only to show technical developments but also to explain achievements for technical teaching and research activity today and in the future. The book series is intended to collect technical views on developments of the broad field of MMS in a unique frame that can be seen in its totality as an Encyclopaedia of MMS, with the additional purpose of archiving and teaching MMS achievements. Therefore, the book series will be of use not only for researchers and teachers in mechanical engineering but also for professionals and students, for their formation and future work.

More information about this series at http://www.springer.com/series/8779


Editors
Nikos A. Aspragathos, Department of Mechanical Engineering and Aeronautics, University of Patras, Patras, Greece
Panagiotis N. Koustoumpardis, Department of Mechanical Engineering and Aeronautics, University of Patras, Patras, Greece
Vassilis C. Moulianitis, Department of Product and Systems Design Engineering, University of the Aegean, Syros, Greece

ISSN 2211-0984  ISSN 2211-0992 (electronic)
Mechanisms and Machine Science
ISBN 978-3-030-00231-2  ISBN 978-3-030-00232-9 (eBook)
https://doi.org/10.1007/978-3-030-00232-9
Library of Congress Control Number: 2018953708
© Springer Nature Switzerland AG 2019
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The 27th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2018, is held in the Conference and Cultural Center of the University of Patras, Greece, 6–8 June 2018. Academic and industry researchers from the Alpe-Adria-Danube Region, as well as affiliated countries and their worldwide partners, are brought together in an international forum for presenting their research work and for exchanging and discussing new ideas towards the progress of robotics science and technology. The papers cover all the major areas of R&D and innovation in robotics, including human–robot interaction and collaboration, service robots, unmanned aerial vehicles, robot control, design and optimization, among others. In RAAD 2018, eight (8) special sessions were proposed. We would like to thank their organizers, who proposed and promoted their topics and collected the related papers. A special acknowledgement for their valuable collaboration is given to: Dan Popescu, Loretta Ichim, Giuseppe Quaglia, Marco Piras, George Nikolakopoulos, George Georgoulas, Med Amine Laribi, Giuseppe Carbone, Tadej Petrič, Kosta Jovanović, Timotej Gašpar, Martin Bem, Marialena Vagia, Eleni Kelasidi, Emmanouil Z. Psarakis, Georgios D. Evangelidis, Nikolaos Poulopoulos and Nefeli Lamprinou. With 88 submissions, a thorough peer review process was completed, with comprehensive comments for each paper covering its relevance, novelty, clarity and quality. We would like to thank all the reviewers for allocating their valuable time and effort and for their outstanding work in reviewing the assigned papers. This process resulted in the acceptance of 77 papers, with an additional half-month for the authors to revise their papers. We would like to thank all the authors for their effort and excellent work in improving their papers.


This book is the collection of the accepted papers presented at the conference, which is organized in 10 parts:

1. Human Robot Interaction and Collaboration: includes nine (9) papers with innovative methods for physical human–robot interaction, human–robot collaboration, collision detection, collided link identification, safety, variable stiffness mechanisms and actuators, and learning by demonstration or imitation.
2. Service Robots: includes five (5) papers demonstrating new robot capabilities for bathing assistance, elderly and disabled care, orthopaedic surgery, the identification of upper limb motion specifications for robot-assisted exercising, pneumatic artificial muscles and hand rehabilitation.
3. Unmanned Aerial Vehicles: includes five (5) papers dedicated to UAVs for various applications, such as forest monitoring and navigation in underground mines, and to visual methods for UAVs, such as photogrammetry, video stabilization and horizon line detection.
4. Mobile and Walking Robots: nine (9) papers contributing to the kinematic design of a passively steered 4WD mobile platform, upgrading an all-terrain autonomous robotic vehicle, a stability analysis for rough-terrain navigation of UGVs, mobile robots for precision agriculture and stair climbing, the design and testing of hexapod walking robots, and the kinematic and dynamic analysis and control of walking robots with elastic legs.
5. Robot Design and Optimization: the biggest part of the book, including twelve (12) papers with new approaches for the design of optimal and dexterous parallel and serial robots, design of mechanisms, mechatronic design of systems, end-effectors and sensory tools.
6. Robot Control: six (6) papers contributing new methods for robot control, including nonlinear and fuzzy control, gain scheduling, control of pneumatic systems, control of constrained, underactuated or hyper-redundant manipulators, and backstepping control.
7. Motion Planning and Trajectory Generation: the nine (9) papers in this part propose methods and approaches for motion and path planning, trajectory generation and navigation, applications in micro-assembly and additive manufacturing, as well as an agent-based method for group movements.
8. Robotic Vision Systems: eight (8) papers with research contributions in robotic vision systems for detection, localization, mapping, and eye localization and tracking, with applications in garments or medicine.
9. Industrial Robots and Applications: eight (8) papers on industrial robots, robot services and cloud manufacturing, cyber-physical systems and robotics, and methods for positioning systems and robot calibration.
10. Social Robotics: the last part includes six (6) papers dealing with new challenges in educational robotics, ethics and UAVs, educational kits and the development of cognitive skills.

This book collects the most recent influential works in robotics research and is a source of inspiration for future developments.


We would like to thank the RAAD Advisory Board, the International Scientific Committee and the National Organizing Committee for their valuable support. We are grateful to IFToMM and the University of Patras, which have supported RAAD 2018. We thank the members of the Robotics Group and the students of the Robotics Club of the Mechanical Engineering and Aeronautics Department, University of Patras, who helped in the organization of the conference. A special thanks to Mrs Georgia Kritikou, who, in addition to her scientific duties, acted as the Conference Secretary. Finally, we would like to thank the publisher Springer and its editorial staff for accepting and helping in the publication of this proceedings volume within the book series Mechanisms and Machine Science (MMS).

March 2018

N. A. Aspragathos P. N. Koustoumpardis V. C. Moulianitis

Organization

General Chair
Nikos A. Aspragathos, University of Patras, Greece

Co-chairs
Panagiotis Koustoumpardis, University of Patras, Greece
Vassilis Moulianitis, University of the Aegean, Greece

Advisory Board
Guido Belforte, Polytechnic of Turin, Italy
János F. Bitó, Centre of Robotics and Automation, Hungary
Štefan Havlík, Slovak Academy of Sciences, Slovakia
Peter Kopacek, Vienna University of Technology, Austria
Alberto Rovetta, Polytechnic of Milan, Italy
Imre J. Rudas, Óbuda University, Budapest, Hungary

National Organizing Committee
Panagiotis Koustoumpardis, University of Patras, Greece
Vassilis Moulianitis, University of the Aegean, Greece
Emmanouil Z. Psarakis, University of Patras, Greece
Gabriel Mansour, Aristotle University of Thessaloniki, Greece
Antonios Gasteratos, Democritus University of Thrace, Greece
Nikolaos Tsourveloudis, Technical University of Crete, Greece


International Scientific Committee (RAAD ISC)
Nikos A. Aspragathos, University of Patras, Greece
Karsten Berns, University of Kaiserslautern, Germany
Theodor Borangiu, Polytechnic University of Bucharest, Romania
Ivana Budinská, Slovak Academy of Sciences, Slovakia
Marco Ceccarelli, University of Cassino, Italy
Karol Dobrovodský, Slovak Academy of Sciences, Slovakia
Carlo Ferraresi (ISC Deputy Chair), Polytechnic of Turin, Italy
Nick Andrei Ivanescu, Polytechnic University of Bucharest, Romania
Roman Kamnik, University of Ljubljana, Slovenia
Gernot Kronreif, ACMIT GmbH, Austria
Andreas Mueller, Johannes Kepler University Linz, Austria
Ivan Petrovic, University of Zagreb, Croatia
Doina Pîsla, Technical University of Cluj-Napoca, Romania
Alexander Rodic, Institute Mihailo Pupin, Belgrade, Serbia
Jozsef K. Tar, Óbuda University, Budapest, Hungary
Said Zeghloul, Poitiers University, France
Leon Žlajpah (ISC Chair), Jožef Stefan Institute, Ljubljana, Slovenia

General Secretary
Georgia Kritikou, University of Patras, Greece

List of Reviewers RAAD 2018 We gratefully acknowledge the contribution of the following reviewers who reviewed papers for the 27th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2018. Amanatiadis, Angelos Andrikopoulos, Georgios Angeli, Stefano Argyros, Antonis Arvanitakis, Ioannis Azariadis, Philip Bader, Markus Balaska, Vasiliki Belforte, Guido Bevec, Robert Bilalis, Nikolaos

Birbilis, George Boiadjiev, George Bompos, Dimitrios Borangiu, Theodor Brandstötter, Mathias Buchegger, Klaus Budinska, Ivana Carbone, Giuseppe Ceccarelli, Marco Chatzis, Ioannis Chikurtev, Denis


Dan, Popescu Dermatas, Evangelos Dimeas, Fotios Dobrovodský, Karol Doitsidis, Lefteris Domozi, Zsolt Duta, Luminita Ferraresi, Carlo Gams, Andrej Gašpar, Timotej Gasteratos, Antonios Gattringer, Hubert Gordic, Zavisa Hehenberger, Peter Henrich, Dominik Hernández Martínez, Eusebio Eduardo Hofbaur, Michael Ionita, Silviu Ivanescu, Mircea Ivanescu, Nick Iversen, Nikolaj Jörgl, Matthias Jovanovic, Kosta Jovanovic, Milos Kaburlasos, Vassilis Kamnik, Roman Karastoyanov, Dimitar Karsten, Berns Kelasidi, Eleni Keshtkar, Sajjad Kladis, Georgios P. Koskinopoulou, Maria Kosmopoulos, Dimitrios Kostavelis, Ioannis Koumboulis, Fotis Koutsabasis, Panayiotis Kraljic, David Krastev, Evgeniy Kronreif, Gernot Laribi, Med Amine Lazarou, Panagiotis Lingua, Andrea Logozzo, Silvia Loizou, Savvas


Mahmoud, Abdelnasser Makris, Sotirios Mansour, Gabriel Mariolis, Ioannis Miatliuk, Kanstantsin Mueller, Andreas Nikolakopoulos, Pantelis Nitulescu, Mircea Pachidis, Theodore Papageorgiou, Dimitrios Papageorgiou, Xanthi Papanikos, Paraskevas Petric, Tadej Piperidis, Savvas Piras, Marco Pisla, Doina Psarakis, Emmanouil Quaglia, Giuseppe Rekleitis, Georgios Rodić, Aleksandar Rohner, Dorian Sakellariou, John Sandoval Arévalo, Juan Sebastián Sarafis, Elias Savino, Sergio Silvagni, Mario Sotiropoulos, Panagiotis Stoian, Viorel Stoimenov, Nikolay Šuligoj, Filip Švaco, Marko Synodinos, Aris Syrimpeis, Vasileios Tar, József Thanellas, Georgios Tiboni, Monica Tosa, Massimo Trahanias, Panos Triantafyllou, Dimitra Tsagaris, Apostolos Tsakiris, Dimitris Tsardoulias, Emmanouil Tsvetkova, Ivanka Tzafestas, Costas


Urukalo, Djordje Valsamos, Charalampos Vlachos, Kostas Vladu, Ionel Cristian Voros, Nikolaos Werner, Tobias Wilson, Richard Wolniakowski, Adam Xidias, Elias


Yovchev, Kaloyan Zacharopoulos, Nikolas Zachiotis, Georgios-Alexandros Zafar, Zuhair Zahariev, Plamen Zeghloul, Said Žlajpah, Leon

Contents

Human Robot Interaction and Collaboration

Manipulator Collision Detection and Collided Link Identification Based on Neural Networks . . . 3
Abdel-Nasser Sharkawy, Panagiotis N. Koustoumpardis, and Nikos A. Aspragathos

Virtual Guides for Redundant Robots Using Admittance Control for Path Tracking Tasks . . . 13
Leon Žlajpah and Tadej Petrič

New Variable Stiffness Safety Oriented Mechanism for Cobots' Rotary Joints . . . 24
Y. Ayoubi, M. A. Laribi, S. Zeghloul, and M. Arsicault

Safety Performance of a Variable Stiffness Actuator for Collaborative Robots . . . 35
Juan Sandoval, Med Amine Laribi, Said Zeghloul, Marc Arsicault, and Gérard Poisson

Task Space Torque Profile Adaptations for Dynamical Human-Robot Motion Transfer . . . 44
Tadej Petrič and Andrej Gams

Progressive Automation of Repetitive Tasks Involving both Translation and Rotation . . . 53
Fotios Dimeas and Zoe Doulgeri

Real-Time Recognition of Extroversion-Introversion Trait in Context of Human-Robot Interaction . . . 63
Zuhair Zafar, Sarwar Hussain Paplu, and Karsten Berns

Fully Integrated Torque-Based Collision Detection in Periodic Tasks for Industrial Robots with Closed Control Architecture . . . 71
Zaviša Gordić and Kosta Jovanović


Learning Spatio-temporal Characteristics of Human Motions Through Observation . . . 82
Maria Koskinopoulou, Michail Maniadakis, and Panos Trahanias

Service Robots

Identification of Upper Limb Motion Specifications via Visual Tracking for Robot Assisted Exercising . . . 93
M. A. Laribi, A. Decatoire, Giuseppe Carbone, D. Pisla, and S. Zeghloul

Hand Rehabilitation Device Actuated by a Pneumatic Muscle . . . 102
Carlo De Benedictis, Walter Franco, Daniela Maffiodo, and Carlo Ferraresi

Handheld Robotized Systems for Orthopedic Surgery . . . 112
G. Boiadjiev, T. Boiadjiev, K. Delchev, R. Kastelov, K. Zagurki, and I. Chavdarov

Usability Study of Tele-controlled Service Robot for Increasing the Quality of Life of Elderly and Disabled – "ROBCO 17" . . . 121
Nayden Chivarov, Denis Chikurtev, Ivaylo Rangelov, Emanuil Markov, Alexander Gigov, Nedko Shivarov, Kaloyan Yovchev, and Lyubomira Miteva

Human-Centered Service Robotic Systems for Assisted Living . . . 132
Xanthi S. Papageorgiou, Georgia Chalvatzaki, Athanasios C. Dometios, and Costas S. Tzafestas

Unmanned Aerial Vehicles

Evaluation of UAV Based Schemes for Forest Fire Monitoring . . . 143
V. C. Moulianitis, G. Thanellas, N. Xanthopoulos, and Nikos A. Aspragathos

Dense 3D Model Generation of a Dam Surface Using UAV for Visual Inspection . . . 151
Stefano Angeli, Andrea Maria Lingua, Paolo Maschio, Luca Piantelli, Davide Dugone, and Mauro Giorgis

UAV Forest Monitoring in Case of Fire: Robustifying Video Stitching by the Joint Use of Optical and Thermal Cameras . . . 163
Evangelos G. Sartinas, Emmanouil Z. Psarakis, and Nefeli Lamprinou

Towards Autonomous Surveying of Underground Mine Using MAVs . . . 173
Christoforos Kanellakis, Sina Sharif Mansouri, George Georgoulas, and George Nikolakopoulos


Vision Based Horizon Detection for UAV Navigation . . . 181
Stavros Timotheatos, Stylianos Piperakis, Antonis Argyros, and Panos Trahanias

Mobile and Walking Robots

Hybrid Control Strategies for Jumping Robots . . . 193
Mircea Ivanescu, Mircea Nitulescu, Cristian Vladu, Nguyen Van Dong Hai, and Mihaela Florescu

Experiences for a User-Friendly Operation of Cassino Hexapod III . . . 205
Ernesto Christian Orozco Magdaleno, Daniele Cafolla, Marco Ceccarelli, Eduardo Castillo Castañeda, and Giuseppe Carbone

On the Kinematics of the Gait with Jumping Stilts . . . 214
M. Garau, A. Manuello Bertetto, and M. Ruggiu

Upgrading a Legacy Outdoors Robotic Vehicle . . . 222
Theodosis Ntegiannakis, Odysseas Mavromatakis, Savvas Piperidis, and Nikos C. Tsourveloudis

Large Scale Wireless Sensor Networks Based on Fixed Nodes and Mobile Robots in Precision Agriculture . . . 236
Maximilian Nicolae, Dan Popescu, Daniel Merezeanu, and Loretta Ichim

Stair-Climbing Wheelchair.q05: From the Concept to the Prototype . . . 245
Giuseppe Quaglia, Walter Franco, and Matteo Nisi

Stability Prediction of an UGV with Manipulator on Uneven Terrain . . . 256
Massimo Tosa and Karsten Berns

Modeling and Analysis of a Novel Passively Steered 4WD Mobile Platform Concept . . . 264
Florian Pucher, Hubert Gattringer, Christoph Stöger, Andreas Müller, and Ulrich Single

Additive Manufacturing-Oriented Redesign of Mantis 3.0 Hybrid Robot . . . 272
Luca Bruzzone, Pietro Fanghella, Giovanni Berselli, and Pietro Bilancia

Robot Design and Optimization

A Study of Feasibility for a Design of a Metamorphic Artificial Hand . . . 283
F. J. Espinosa-Garcia, Giuseppe Carbone, M. Ceccarelli, D. Cafolla, M. Arias-Montiel, and E. Lugo-Gonzalez


Extending the Workspace of the PLVL-Variable Stiffness Actuator . . . 291
Miha Dežman and Andrej Gams

Mechatronic Design of a Gyro-Stabilized Bicycle . . . 300
Hubert Gattringer, Andreas Müller, and Matthias Jörgl

Exchange of Effectors for Small Mobile Robots and UAV . . . 308
Jaroslav Hricko and Stefan Havlik

Task-Dependent Structural Modifications on Reconfigurable General Serial Manipulators . . . 316
Mathias Brandstötter, Paolo Gallina, Stefano Seriani, and Michael Hofbaur

Design of a 3-DOFs Parallel Robotic Device for Miniaturized Object Machining . . . 325
Francesco Aggogeri, Alberto Borboni, Angelo Merlo, Nicola Pellegrini, and Monica Tiboni

Numerical and Experimental Development of a Hub+Bearing System for Tire Pressure Control . . . 333
Guido Belforte, Carlo Ferraresi, Daniela Maffiodo, Vladimir Viktorov, Carmen Visconte, and Massimiliana Carello

Design and Control Strategy of a Low-Cost Parallel Robot for Precise Solar Tracking . . . 342
Arturo Díaz, Sajjad Keshtkar, Jaime A. Moreno, and Eusebio Hernandez

Off-line Robot Optimization with Hybrid Algorithm . . . 351
Ápostolos Tsagaris, Dimitrios Sagris, and Gabriel Mansour

Optimal Task Placement in a Metamorphic Manipulator Workspace in the Presence of Obstacles . . . 359
V. C. Moulianitis, E. Xidias, and P. Azariadis

Minimization of Joint Velocities During the Execution of a Robotic Task by a 6 D.o.F. Articulated Manipulator . . . 368
C. Valsamos, A. Wolniakowski, K. Miatliuk, and V. C. Moulianitis

Approach in the Integrated Structure-Control Optimization of a 3RRR Parallel Robot . . . 376
S. Ivvan Valdez, M. Infante-Jacobo, S. Botello-Aceves, Eusebio Hernández, and E. Chávez-Conde

Robot Control

Studying Various Cost Functions by Nonlinear Programming for the Control of an Underactuated Mechanical System . . . 389
Tamás Faitli and József K. Tar


Nonlinear Control for Vibration Rejection in a System Containing a Flexible Beam and Belts . . . 398
Matthias Jörgl, Hubert Gattringer, and Andreas Müller

Hyper-redundant Robot Control System in Compliant Motions . . . 407
Viorel Stoian and Ionel Cristian Vladu

On the Common Control Design of Robotic Manipulators Carrying Different Loads . . . 416
Fotis N. Koumboulis

Cascade Gain Scheduling Control of Antagonistic Actuators Based on System Identification . . . 425
Branko Lukić, Kosta Jovanović, and Tomislav B. Šekara

Inverted Pendulum on a Cart Pneumatically Actuated by Means of Digital Valves . . . 436
F. Colombo, L. Mazza, G. Pepe, T. Raparelli, and A. Trivella

Motion Planning and Trajectory Generation

Passing Through Jacobian Singularities in Motion Path Control of Redundant Robot Arms . . . 447
Evgeniy Krastev

Trajectory Planning for Additive Manufacturing with a 6-DOF Industrial Robot . . . 456
David Kraljić and Roman Kamnik

Behavior-Based Local Path-Planning by Exploiting Inverse Kinematics on FPGA . . . 466
Alexander Köpper and Karsten Berns

Activation Algorithms for the Micro-manipulation and Assembly of Hexagonal Microparts on a Programmable Platform . . . 475
Georgia Kritikou and Nikos A. Aspragathos

Iterative Learning Control for Precise Trajectory Tracking Within a Constrained Workspace . . . 483
Kaloyan Yovchev

A Reinforcement Learning Based Algorithm for Robot Action Planning . . . 493
Marko Švaco, Bojan Jerbić, Mateo Polančec, and Filip Šuligoj

Safe and Efficient Autonomous Navigation in the Presence of Humans at Control Level . . . 504
Klaus Buchegger, George Todoran, and Markus Bader


Avoiding Sets of Measure-Zero in Navigation Transformation Based Controllers . . . 512
Savvas G. Loizou

SkyBat: A Swarm Robotic Model Inspired by Fission-Fusion Behaviour of Bats . . . 521
Ján Zelenka, Tomáš Kasanický, Ivana Budinská, Ladislav Naďo, and Peter Kaňuch

Robotic Vision Systems

Experimental Measurement of Underactuated Robotic Finger Configurations via RGB-D Sensor . . . 531
Renato Brancati, Chiara Cosenza, Vincenzo Niola, and Sergio Savino

Finger Joint Detection Vision Algorithm for Autonomous Rheumatoid Ultrasound Scan . . . 538
Nikolaj Iversen, Søren Andreas Just, and Thiusius Rajeeth Savarimuthu

Robot-Driven Autofocus Control Mechanism for an In-hand Fixed Focus Camera . . . 551
Robert Bevec, Timotej Gašpar, and Aleš Ude

Real Time Eye Localization and Tracking . . . 560
Nikolaos Poulopoulos and Emmanouil Z. Psarakis

Graph-Based Semantic Segmentation . . . 572
Vasiliki Balaska, Loukas Bampis, and Antonios Gasteratos

SeqSLAM with Bag of Visual Words for Appearance Based Loop Closure Detection . . . 580
Konstantinos A. Tsintotas, Loukas Bampis, Stelios Rallis, and Antonios Gasteratos

Real Time Sub Image Localization for Tracking . . . 588
Karol Dobrovodský and Pavel Andris

Upper Layer Extraction of a Folded Garment Towards Unfolding by a Robot . . . 597
Dimitra Triantafyllou and Nikos A. Aspragathos

Industrial Robots and Applications

The Case of Industrial Robotics in Croatia . . . 607
Marko Švaco, Bojan Jerbić, Ivan Župančić, Nikola Vitez, Bojan Šekoranja, Filip Šuligoj, and Josip Vidaković

Decentralizing Cloud Robot Services Through Edge Computing . . . 618
Florin Anton, Th. Borangiu, O. Morariu, Silviu Răileanu, Silvia Anton, and Nick Ivănescu


Smart Cyber-Physical System to Enhance Flexibility of Production and Improve Collaborative Robot Capabilities – Mechanical Design and Control Concept . . . 627
Aleksandar Rodić, Ilija Stevanović, and Miloš Jovanović

Automatic Painting and Paint Removal System: A Preliminary Design for Aircraft Applications . . . 640
Umberto Morelli, Matteo D. L. Dalla Vedova, and Paolo Maggiore

Base Frame Calibration of a Reconfigurable Multi-robot System with Kinesthetic Guidance . . . 651
Timotej Gašpar, Robert Bevec, Barry Ridge, and Aleš Ude

Compensating Position Measurement Errors for the IR Static Triangulation System . . . 660
Maciej Ciężkowski and Adam Wolniakowski

Efficient, Precise, and Convenient Calibration of Multi-camera Systems by Robot Automation . . . 669
Tobias Werner, David Harrer, and Dominik Henrich

A Lumped Model for Grooved Aerostatic Pad . . . 678
F. Colombo, L. Lentini, T. Raparelli, A. Trivella, and V. Viktorov

Social Robotics

Social Robotics in Education: State-of-the-Art and Directions . . . 689
T. Pachidis, E. Vrochidou, V. G. Kaburlasos, S. Kostova, M. Bonković, and V. Papić

MYrobot – Mobile Educational Platform . . . 701
Ondrej Karpis, Juraj Micek, and Veronika Olesnanikova

On Ethical and Legal Issues of Using Drones . . . 710
Ivana Budinska

Effects of Physical Activity Based HCI Games on the Attention, Emotion and Sensory-Motor Coordination . . . 718
Hasan Kandemir and Hatice Kose

The Impact of Robotics in Children Through Education Scenarios . . . 728
Ápostolos Tsagaris, Maria Chatzikyrkou, and Gabriel Mansour

Trends in Educational Robotics . . . 737
Daniela Floroiu, Paul C. Patic, and Luminita Duta

Author Index . . . 745

Human Robot Interaction and Collaboration

Manipulator Collision Detection and Collided Link Identification Based on Neural Networks

Abdel-Nasser Sharkawy (1,2), Panagiotis N. Koustoumpardis (2), and Nikos A. Aspragathos (2)

(1) Mechanical Engineering Department, Faculty of Engineering, South Valley University, Qena 83523, Egypt, [email protected]
(2) Department of Mechanical Engineering and Aeronautics, University of Patras, Rio 26504, Greece, [email protected], [email protected]

Abstract. In this paper, a multilayer neural network based approach is proposed for the detection of human-robot collisions during the motions of a 2-DoF robot. One neural network is designed and trained by the Levenberg-Marquardt algorithm on the coupled dynamics of the manipulator joints, with and without external contacts, to detect unwanted collisions of the human operator with the robot and to identify the link that collided, using only the proprietary joint position and joint torque sensors of the manipulator. The proposed method is evaluated experimentally with the KUKA LWR manipulator using two joints in planar horizontal motion, and the results illustrate that the developed system is efficient and very fast in detecting the collisions as well as the collided link.

Keywords: Collision detection · Collided link identification · Neural networks · Proprietary sensors

1 Introduction

Safety is a necessary factor when robots and humans share the same workspace, because the proximity of the operator to the robot can lead to potential injuries. Therefore, a system for collision avoidance or collision detection should be incorporated into collaborative manipulators. Collisions can be avoided by monitoring the environment using vision, as presented in [1, 2], or using proximity sensors, as in [3]. Although these methods can be used for collision avoidance, modifications of the manipulator body are required for the sensor installation. Some researchers have contributed to safety in HRI with collision detection and reaction methods, such as the disturbance observer method [4, 5], a nonlinear adaptive impedance control law [6] and detection methods based on fuzzy logic and neural networks [7, 8]. In the previous paper [9], a neural network (NN) was presented for collision detection in one-joint motion. It is well known that there is a dynamic coupling between the joints of a serial manipulator, particularly at high speeds and accelerations. Therefore, the scaling of our previous approach by using one independent NN for


each joint could not be generally applied. In this paper, one NN coupling two joints is designed and trained by the Levenberg-Marquardt (LM) algorithm, where a priori knowledge of the dynamic model of the robot is not required. In this method, just the joint position and torque sensors are used, which are proprietary to the KUKA LWR manipulator used for the experiments. For the training of the NN, the measurement of the collision force mapped to the joint torques is required. The trained NN can then estimate the external torque, hence the collision is detected and the collided link is identified. The generalization ability of the trained NN is also presented.

2 Neural Network Design

In this work, the KUKA LWR manipulator is used, as shown in Fig. 1. The arm is configured as a SCARA-type robot (2 DoF for planar horizontal motions) to avoid the effects of gravity during the control of the motions. Joint 1 represents KUKA's A3 (4th joint) and joint 2 represents KUKA's A5 (6th joint). The collisions are performed randomly by the human hand touching the end-effector and the link between the two joints, as shown in Fig. 1.

Fig. 1. (a) Experimental setup with Kuka LWR manipulator. (b) External force F is applied during the motion of joints where the black spot means the center of mass for each link.

One NN is designed and trained using the external torque estimated from the KUKA Robot Controller (KRC). Then the trained NN is used to calculate the external torque, detect the collisions and identify the collided link. If the external collision torque estimated by the trained NN exceeds a threshold value, then a collision is detected. The collision threshold is defined as the maximum of the absolute values of the training error between the external torques estimated from KRC and by the trained NN during contact-free motion. A sinusoidal motion θdi(t), where i = 1, 2, with variable frequency is commanded simultaneously to joint 1 and joint 2, respectively. These two serial joints compose a coupled system where, if any external force is exerted on any link, then the torque of each


joint is affected. If the collision occurs on the end-effector (case 1, Fig. 1), the effect appears clearly on the measured joint torques of the two joints. But when there is a collision between the two joints (case 2, Fig. 1), a small effect occurs in the torque of joint 2, because of the inertial force applied on the link, whereas the effect on the torque of joint 1 appears clearly. The measured torque of joint 1 is always higher than that of joint 2, whether there is a collision or not. After many experiments and trials, as discussed in our previous paper [9], the main inputs for the neural network that give the best performance and the minimum mean squared error (mse) are the current position error θ̃i(k) between the desired and actual joint position, the previous position error θ̃i(k − 1), the commanded joint velocity θ̇di and the measured joint torque τi, where i = 1, 2. The actual joint velocity from the previous paper [9] is replaced by the commanded joint velocity since, after more experiments, it was found that the performance of the NN (lower training error) is better using the commanded joint velocity. Three layers compose the NN, as shown in Fig. 2: the input layer, the non-linear hidden layer and the output layer that calculates the external torques τ′ext1, τ′ext2 of joints 1 and 2, which are compared respectively with the external torques τext1, τext2 given by KRC. The external torques τext1, τext2 are used only for training the network, and the external collision force can be measured by any external sensor and transformed to the joint torques by the Jacobian. It should be noted that one independent NN for each joint was also tried, but it has lower performance than one NN coupling the two joints, since the motion of the two joints is a coupled system, as discussed.

Fig. 2. The multilayer neural network scheme for the two joints.
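To make the structure of Fig. 2 concrete, the following minimal sketch implements the forward pass of the coupled two-joint network described above. It assumes a single tanh hidden layer with a linear output, in line with the three-layer description; the class name and the random placeholder weights are illustrative, standing in for parameters that would be fitted by Levenberg-Marquardt training.

```python
import numpy as np

class CollisionNN:
    """Sketch of the coupled two-joint MLP of Fig. 2.

    Inputs (8): position errors e1(k), e1(k-1), e2(k), e2(k-1),
    commanded joint velocities dq_d1, dq_d2, measured torques tau1, tau2.
    Outputs (2): estimated external torques of joints 1 and 2.
    The hidden size (120) follows the paper's best configuration.
    """

    def __init__(self, n_in=8, n_hidden=120, n_out=2, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = np.tanh(self.W1 @ x + self.b1)  # non-linear hidden layer
        return self.W2 @ h + self.b2        # external torque estimates
```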


3 NN Training and Testing

The NN is trained using Levenberg-Marquardt learning, which offers a trade-off between the fast learning speed of the classical Newton's method and the guaranteed convergence of gradient descent [10, 11]. A sinusoidal motion with variable frequency is commanded to the two joints of the KUKA LWR robot (Fig. 1), as described previously, where the frequency increases linearly from 0.05 Hz to 0.17 Hz for joint 1 and from 0.05 Hz to 0.23 Hz for joint 2. These frequencies produce angular velocities ω up to 1.068 rad/s for joint 1 and 1.445 rad/s for joint 2. The range of joint motion is [–80, 10]° for each joint. The training data are divided into two sets: in the first set, the robot joints perform the motion without any external force (collision) applied to the robot body, and in the second set, the same motion is performed while the user performs random collisions, suddenly and stochastically, with his hand on the end-effector and between the two joints. During the experiments, the robot is commanded to move in position control mode and no reaction strategy is implemented. Using the data collected in the motion with and without collision, the neural network system with the eight inputs is trained. The total number of input-output pairs collected from the experiments is 170131. From these data, 80% are used for training, 5% for validation and 15% for testing. After trying many different weight initializations and numbers of hidden neurons, it is found that the best case is 120 hidden neurons and 1000 iterations, which gives the minimum mse and an adequate collision threshold. The training process is very fast and stable and is performed using Matlab on an Intel(R) Core(TM) i3-6100 CPU @ 3.70 GHz processor. The trained NN is evaluated with the same data set used for training to get an insight into the approximation. The difference between the measured external torques (τext1, τext2) and the external torques (τ′ext1, τ′ext2) estimated by the NN system is calculated, and it is found that the approximation error of the collision torque for each joint is higher than the error of contact-free motion. The averages of the absolute error values are very small: 0.0954 Nm and 0.0486 Nm for joint 1 and joint 2, respectively. By providing the dataset without any collisions to the trained NN, the resulting collision threshold values are τth1 = 1.5529 Nm and τth2 = 0.8006 Nm for joints 1 and 2, respectively, and a collision is assumed when τ′exti > τthi for the ith joint. The external collision torques from KRC and the trained NN for both joints are compared in Fig. 3. The spikes with blank circles refer to the collisions between the two joints, whereas the others are the collisions on the end-effector.
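The threshold rule and the collided-link logic described here can be summarized in a short sketch. This is an illustrative reconstruction of the paper's stated rules, not the authors' code; the function names are hypothetical.

```python
import numpy as np

def collision_thresholds(tau_krc_free, tau_nn_free):
    """Per-joint threshold: max absolute NN error over contact-free data."""
    return np.max(np.abs(tau_krc_free - tau_nn_free), axis=0)

def classify_sample(tau_nn, thresholds):
    """Detect a collision and identify the collided link for one sample.

    Both joints above threshold  -> collision on the end-effector.
    Only joint 1 above threshold -> collision on the link between joints.
    """
    over = np.abs(tau_nn) > thresholds
    if over[0] and over[1]:
        return "collision: end-effector"
    if over[0]:
        return "collision: link between the two joints"
    return "no collision"
```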

4 Performance of the Trained NN

The proposed method is evaluated and tested in the experimental setup by commanding the robot to perform a two-joint motion with constant velocity profiles, which is a common case in robot applications. The generalization of the trained NN is tested for joint ranges outside the ones that have been used for training, to show its effectiveness.


An external force sensor (ATI F/T Nano 25) is used to compare its readings, converted into joint torques via the Jacobian, with the external torques estimated by the trained NN and by KRC.
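The conversion of the measured contact force into joint torques uses the standard relation τ = Jᵀ F. The sketch below shows this mapping for a generic 2-DoF planar arm; the link lengths and the assumption that the contact is at the end-effector are illustrative, not values from the paper.

```python
import numpy as np

def force_to_joint_torques(f_xy, q, l1, l2):
    """Map a planar force at the end-effector to joint torques, tau = J^T f.

    q = (q1, q2) are the joint angles; l1, l2 are link lengths (assumed).
    """
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    # Jacobian of the planar end-effector position w.r.t. joint angles
    J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                  [ l1 * c1 + l2 * c12,  l2 * c12]])
    return J.T @ np.asarray(f_xy)
```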

Fig. 3. The two external collision torques from KRC and NN.

4.1 Trained NN Evaluation

The trained NN is evaluated at a speed of 0.5 rad/s on both joints with two cases of collisions: in the first case, the force sensor is fixed at a predefined location on the end-effector (Fig. 1a) and the collisions occur only on the force sensor, perpendicularly to the end-effector body; in the second case, the force sensor is fixed on the link between the two joints and the collisions occur only on the force sensor, perpendicularly to that link. Figures 4 and 5 illustrate the comparison between the three torques for both joints from KRC, the trained NN and the external force sensor.

Fig. 4. The three external collision torques from KRC, trained NN and using the force sensor when there is collision on the end-effector.

From Fig. 4 where there is a collision on the end-effector only, both joints’ torques are affected by the collision and also the close approximation between the three


Fig. 5. The three external collision torques from KRC, trained NN and using the force sensor when there is collision between the two joints.

external torques is very good, particularly between the trained NN and KRC, since the external torque from KRC is used for training the NN. Before and after the collision, the joint torques measured by the external sensor are zero, and during the sudden collision they increase quite rapidly. This waveform illustrates the collision phenomenon most accurately, since the robot dynamics do not affect these measurements, as is the case in the joint torques estimated by KRC. The actual collision detection time is calculated as the elapsed time from the detection of the collision by the external force sensor to the moment when the external torque estimated by the trained NN exceeds the threshold; the actual collision detection time for joints 1 and 2 is 14.8 ms and 27.01 ms, respectively. From Fig. 5, where there is a collision between the two joints only, the joint 1 external torque is highly affected by the collision, whereas the joint 2 external torque shows a small effect at the time when the force is exerted, and this effect sometimes does not appear clearly if the collision force is small. The close approximation between the three torques is good, particularly between the trained NN and KRC. The good agreement of the external joint torques measured by KRC and the external sensor with those estimated by the trained NN proves the success of our trained NN. The actual collision detection time for joint 1 is 21.4 ms. As shown by the results, the trained NN easily identifies the collided link: when the collision torque of joint 1 only exceeds the threshold, the collision occurred on the link between the two joints, whereas when the collision torques of both joints exceed the thresholds, the collision occurred on the end-effector. To confirm the validity and efficiency of the proposed method under a wide range of operating conditions and to acquire a performance measure, another 61 trials of collisions (NC) are evaluated with various magnitudes, points of collision, directions and velocities of the motion. Table 1 provides the performance in terms of the number of correctly detected collisions (CC), the number of false negatives (FN), which is the number of collisions not detected by the method, and the number of false positives (FP), which are the collision alerts provided by the method when there is no actual collision. It is noted from Table 1 that when there are collisions at the end-effector, the proposed method succeeds with 92.59% and 81.48% in detecting the collisions on


Table 1. Summary of the performance of the proposed method (trained neural network) obtained for the different collision scenarios and angular speeds of the joints

Collision scenario                 Joint   NC   CC   FN   FP   Efficiency %
Collisions on the end-effector     1       27   27   0    2    92.59
                                   2       27   26   1    3    81.48    (scenario average: 87.035)
Collision between the two joints   1       34   34   0    9    73.5     (scenario average: 73.5)
                                   2        0    0   0    0    --------
Average percentage                         98.86% (CC)   1.136% (FN)   16% (FP)   82.52

Trained NN Generalization

The joint motion, for collecting data used for training the NN and its evaluation, limited in the range from –80° to 10° for each joint. Furthermore, two experiments are executed outside of these ranges (the first from 10° to 40° and the other one from –110° to –80° range) with random collisions on the end-effector and between the two joints to show the generalization of the trained NN. The results are shown in Figs. 6 and 7, where it is observed that the trained NN approximates adequately the two KRC and trained NN external torques but the error is a little bit larger compared to the previous case (inside the training range). However, the trained NN presents satisfactory performance which proves the generalization ability. It should be noted that the spikes have blank circle refers to the collisions between the two joints whereas the others are the collisions at the end-effector. 4.3

Discussion of the Results

The proposed method is easily applied and the training is very fast. The fuzzy based part of [7] is compared with our method. The estimated external torque sext given by the robot controller is used here only for training the NN, whereas in [7] the external torque measured by an ATI F/T Nano 25 force sensor is used for verification and the training. In the present method, one trained NN is used only for collisions detection since the two joints motion is a coupled system whereas in [7] two independent trained fuzzy systems are used. Our method succeeds to detect the collisions when there are

10

A.-N. Sharkawy et al.

collisions on the end-effector with a percentage of 87% whereas in [7] the Fuzzy system achieves 85% and for collisions between the two joints our method achieves 73.5% whereas in [7] Fuzzy system has 50%.

Fig. 6. The two external collision torques from KRC and trained NN during out of range two joints motion (from 10 to 40°).

Fig. 7. The two external collision torques from KRC and trained NN during out of range two joints motion (from –110 to –80°).

In [8], the proposed method by Lu et al. depends on two external force sensors, whereas our method does not require any external sensors and is used with any robot without model knowledge but having joint torque sensors. Our method presents also low detection time for the collisions. It should be noted that because of the different data and operating conditions used in our paper and the other two papers, it is difficult to compare quantitatively the time required for the detection of the collisions.

Manipulator Collision Detection and Collided Link Identification

11

5 Conclusion and Future Work In this paper, a method is proposed for human-robot collision detection based on the multilayer neural network approach trained by Levenberg-Marquardt algorithm. One NN system is implemented and trained for a 2-DoF robot which is a coupled system. The training is stable and very fast. The inputs to the NN are derived from the joint position and torque sensors and the method is able to detect the collision of the robot with the human hand very quickly and identify the collided link. The evaluation of the proposed method shows that our method, compared with the other methods, is efficient in detecting the collisions, since it succeeds with very good and higher percentage to detect the collisions whether on the end-effector or between the two joints. The number of the false positive collisions is low which means that the presented method has low sensitivity to the external disturbances and unmodelled parameters. The trained NN is tested using out of training range motion and presents satisfactory performance which proves the generalization ability. Because of the promising results in this paper, it is considered the extension of the proposed approach to implement the collision detection system for three joints of the manipulator taking into consideration the effect of the gravity during the motion. Acknowledgments. Abdel-Nasser Sharkawy is funded by the “Egyptian Cultural Affairs & Missions Sector” and “Hellenic Ministry of Foreign Affairs Scholarship” for Ph.D. study in Greece.

References 1. Mohammed, A., Schmidt, B., Wang, L.: Active collision avoidance for human – robot collaboration driven by vision sensors. Int. J. Com. Integr. Manuf. 30(9), 970–980 (2017) 2. Flacco, F., Kroeger, T., De Luca, A., Khatib, O.: A depth space approach for evaluating distance to objects with application to human-robot collision avoidance. J. Intell. Robot. Syst. 80(Suppl 1), S7–S22 (2015) 3. Lam, T.L., Yip, H.W., Qian, H., Xu, Y.: Collision avoidance of industrial robot arms using an invisible sensitive skin. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4542–4543 (2012) 4. Haddadin, S., Albu-Schaffer, A., De Luca, A., Hirzinger, G.: Collision detection and reaction: a contribution to safe physical human-robot interaction. In: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3356–3363 (2008) 5. Cho, C., Kim, J., Lee, S., Song, J.: Collision detection and reaction on 7 DOF service robot arm using residual observer. J. Mech. Sci. Technol. 26(4), 1197–1203 (2012) 6. Morinaga, S., Kosuge, K.: Collision detection system for manipulator based on adaptive impedance control law. In: Proceedings of the 2003 IEEE International Conference on Robotics and Automation, pp. 1080–1085 (2003) 7. Dimeas, F., Avendano-valencia, L.D., Aspragathos, N.: Human - robot collision detection and identification based on fuzzy and time series modelling. Robotica, 1–13 (2014) 8. Lu, S., Chung, J.H., Velinsky, S.A.: Human-robot collision detection and identification based on wrist and base force/torque sensors. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 796–801, April 2005

12

A.-N. Sharkawy et al.

9. Sharkawy, A.-N., Aspragathos, N.: Human-robot collision detection based on neural networks. Int. J. Mech. Eng. Robot. Res. 7(2), 150–157 (2018) 10. Du, K., Swamy, M.N.S.: Neural Networks and Statistical Learning. Springer, London (2014) 11. Hagan, M.T., Menhaj, M.B.: Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Netw. 5(6), 2–6 (1994)

Virtual Guides for Redundant Robots Using Admittance Control for Path Tracking Tasks (B) ˇ Leon Zlajpah and Tadej Petriˇc

Joˇzef Stefan Institute, Ljubljana, Slovenia {leon.zlajpah,tadej.petric}@ijs.si

Abstract. Virtual guides are used in human-robot cooperation to support a human performing manipulation tasks. They can act as guidance constrains to assist the user to move in the preferred direction or along desired path, or as forbidden-region constraint which prevent him to move into restricted region of the robot workspace. In this paper we proposed a novel framework that unifies virtual guides using virtual robot approach, which is represented with the admittance control, where a broad class of virtual guides and constraints can be implemented. The dynamic properties and the constraints of the virtual robot can be defined using three sets of parameters and variables: desired motion variables, dynamic parameters (stiffness, damping and inertia) and deadzones. To validate the approach we implemented it on a KUKA LWR robot for the Buzz-Wire tasks, where the goal is to move a ring along a curved wire. Keywords: Human-robot cooperation · Admittance control Virtual guides · Redundant robots · Path following

1

Introduction

To exploit the capabilities of modern robots and combining them with the skills of a human leads to robot applications where the physical human-robot interaction (pHRI) is essential [5,8]. This cooperation does not involves only a direct or indirect physical interaction between the partners but also requires collaboration between them at the “cognitive” level. Some of the problem domains of pHRI which have been addressed in the last years are: learning and imitation [10,14], human and robot roles in this cooperation [13,17], compliance control [8,16] and safety [3,5]. One type of human-robot interaction is cooperation, where we exploit mechanical capabilities of robotic devices and combine them with perception and cognitive capabilities of humans to achieve an overall goal. Such robotics This work was supported by EU Horizon 2020 Programme grant 680431, ReconCell, and Slovenian Research Agency grant P2-0076. c Springer Nature Switzerland AG 2019  N. A. Aspragathos et al. (Eds.): RAAD 2018, MMS 67, pp. 13–23, 2019. https://doi.org/10.1007/978-3-030-00232-9_2

14

ˇ L. Zlajpah and T. Petriˇc

systems can significantly reduce human effort while preserving high quality of task execution. Examples of human-robot collaboration are cobots and telemanipulators. In human-robot comanipulation the operator controls the robot motion through the direct contact (cobots) or indirect contact (telemanipulators) and the robot supports the human by constraining the motion or guiding the human during the motion. Here the most common method is to guide the motion using virtual guides introduced in [21] as virtual fixtures. Virtual guides are virtual constraints which constrain the motion of a cooperative robotic system. They have similar function as real mechanical constraints except that they are implemented in the control algorithms. In [4] the virtual fixtures are defined as collaborative strategies, which can be used in human manipulation tasks to improve or assist by anisotropically regulating motion. Fixtures can constrain the motion in different ways. Guidance active constrains force the operator to move toward a specific target point or to follow a predefined path [2,4]. On the other hand,regional constraints prevent the operator to move into specific region or force the operator to move out of it [1,4]. The guides can also change or adapt during the task execution using sensory or other information [2,9] or can be learned iteratively [19]. In general, both type of guides are equivalent except that guidance constraint are attractive and regional are repulsive. Although the guides can be related to the dynamic motion [18], in most cases only static guides constraining the position of the tool are used. In the past different methods for calculating active constraint were given from simple one using points or line, up to the methods considering complex surfaces or volumes [1,2,4,9,11]. The physical human-robot comanipulation relies on interaction forces between the operator holding the robot tool and the robot. Therefore, either impedance or admittance control has to be included in the robot control [22]. Both methods are based on mechanical properties of the contact between the human and robot. Using impedance control framework the motion produces forces, which are then felt by the operator, whereas using admittance control framework the forces applied to the robot by the operator contribute to the robot motion. In general, the impedance control is more suitable when the robot is in contact with stiff environment and the admittance control has its advantages when the environment is more compliant [15]. This distinction between the impedance and admittance control is significant when used in physical humanrobot interaction framework. In Sect. 2 we explain concept of a virtual robot implemented as admittance control. In Sect. 3 we present our approach to virtual guides for path tracking and avoiding forbidden zones, where we also show how the task space and the redundant degrees of freedom (DOFs) may be changed regarding the state of the system. Finally, in Sect. 4 we illustrate the use of virtual robots and guides on KUKA LWR robot arm guiding a human to follow a path.

2 Admittance Control

When a human operator is interacting with a robot, he moves it by applying forces at a point on the robot. Hereafter we assume that this point is the tool or end-effector of the robot. We also assume no contact with a stiff environment; therefore, we find admittance control more suitable for physical human-robot cooperation. Let the configuration of the manipulator be represented by the $n$-dimensional vector $q$ of joint positions. We assume that the low-level robot control completely decouples and linearizes the system so that the closed-loop behavior is

$$\ddot{e}_q + K_v \dot{e}_q + K_p e_q = 0, \qquad (1)$$

where $e_q = q_d - q$ and $q_d$ is the desired motion of the robot.¹ Furthermore, we assume that the gains $K_v$ and $K_p$ ensure stability of the robot and a robot dynamics faster than the expected dynamics of the human-induced motion. Consequently, we can use the following simplification:

$$q \approx q_d. \qquad (2)$$

Under these assumptions we design the admittance control using a virtual robot whose output is the desired motion of the robot $q_d$. Here our main constraint is that the virtual robot dynamics is bounded by the robot dynamics. We designed the virtual robot dynamics as a mass-damper-spring system. The selection of the virtual mass and damping depends on the task [6,7,12,20]. As the human operates in the Cartesian space, it is reasonable to design the control in the Cartesian space, and in most cases with the same dynamic properties of the system in all spatial directions. Let $p \in \mathbb{R}^3$ represent the position, $Q = \{\eta, \boldsymbol{\epsilon}\} \in S^3 \subset \mathbb{R}^4$ (unit quaternion) the spatial rotation, and $v = [\dot{p}^T\ \omega^T]^T$ the spatial velocity of the end-effector, where $\dot{p} \in \mathbb{R}^3$ and $\omega \in \mathbb{R}^3$ are the linear and angular spatial velocity, respectively. Then, we define the virtual robot dynamics as a double integrator, $\dot{v} = \dot{v}_c$, and the control input as

$$\dot{v}_c = \dot{v}_d + M^{-1}\left(D\dot{e}_x + Ke_x + F_{ext}\right) \qquad (3)$$

where $e_x \in \mathbb{R}^6$ is the task-space error combining the position and the orientation error

$$e_x = \begin{bmatrix} p_d - p \\ 2\log(Q_d * Q^{-1}) \end{bmatrix}, \qquad (4)$$

$\dot{e}_x$ is the spatial velocity error, $\dot{e}_x = v_d - v$, and $F_{ext} \in \mathbb{R}^6$ are the forces/torques applied by the human on the end-effector. The diagonal matrices $M \in \mathbb{R}^{6\times6}$, $D \in \mathbb{R}^{6\times6}$ and $K \in \mathbb{R}^{6\times6}$ represent the desired inertia, damping and stiffness of the virtual robot, respectively.

¹ With subscript $(\cdot)_d$ we denote the desired value of the variable.


Finally, to obtain the desired motion $q_d$ for the robot we have to solve the inverse kinematics. In the case of redundant robots the inverse kinematics is defined as

$$\dot{q}_d = J^{\#} v_c + (I - J^{\#}J)\dot{q}_n \qquad (5)$$
$$\ddot{q}_d = J^{\#}(\dot{v}_c - \dot{J}\dot{q}) + (I - J^{\#}J)\ddot{q}_n \qquad (6)$$

where $J$ and $J^{\#}$ are the Jacobian matrix and its generalized inverse, and $\dot{q}_n$ and $\ddot{q}_n$ are arbitrary joint velocities and accelerations, which can be used to perform some lower-priority tasks.

Experiments with humans interacting with robots have shown that damping has a greater influence on the human perception than the virtual mass of the robotic system [6,12]. Therefore, a simplified version of admittance control at the velocity level can be used. In this case, the virtual robot is defined as a single integrator and the control input as

$$v_c = v_d + D^{-1}\left(Ke_x + F_{ext}\right). \qquad (7)$$

The inertia can be considered in this framework if a first-order filter resulting from the relation

$$F = M\ddot{x} + D\dot{x} \qquad (8)$$

is used for the external force. This yields a closed-loop behavior with respect to the external force $F_{ext}$ similar to (3), except that the effective damping is higher.
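As an illustration of the admittance scheme, the sketch below combines the velocity-level control input (7) with the redundancy resolution (5). This is not the authors' implementation: the gains, the Jacobian and the external force are placeholder values, and NumPy's pseudoinverse stands in for the generalized inverse $J^{\#}$.

```python
import numpy as np

# Placeholder gains and state for a 7-DOF arm with a 6-D task space.
D = np.diag([50.0] * 6)                  # virtual damping
K = np.diag([200.0] * 6)                 # virtual stiffness
v_d = np.zeros(6)                        # desired spatial velocity
e_x = np.zeros(6)                        # task-space pose error
F_ext = np.array([5.0, 0, 0, 0, 0, 0])   # force applied by the operator

# Velocity-level admittance control, Eq. (7): v_c = v_d + D^-1 (K e_x + F_ext)
v_c = v_d + np.linalg.solve(D, K @ e_x + F_ext)

# Redundancy resolution, Eq. (5): qd_dot = J# v_c + (I - J# J) qn_dot
J = np.random.randn(6, 7)                # placeholder Jacobian of the virtual robot
J_pinv = np.linalg.pinv(J)               # generalized inverse J#
qn_dot = np.zeros(7)                     # lower-priority (null-space) joint velocity
qd_dot = J_pinv @ v_c + (np.eye(7) - J_pinv @ J) @ qn_dot
```

The null-space term leaves the task-space motion unaffected, so the lower-priority velocity $\dot{q}_n$ can be chosen freely, e.g. for pose optimization.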

3 Virtual Guides

The main role of virtual guides is to assist a human operator in performing a task by regulating the motion of the operator. This can be achieved by attaching a tool to the robot end-effector, which the operator then manipulates. The operator has the responsibility to generate the motion, while the robot monitors the motion and reacts if necessary, regarding the preplanned trajectories or the regions where the tool is. This means that the robot has enough information regarding the task path and constraint geometry, but has no temporal information on how to perform the task.

3.1 Unified Constraint Framework

All previously mentioned constraints can be included in the unified framework. Observing the admittance control algorithm (3) or (7), we can identify two sets of control design parameters and variables. The first set are the virtual stiffness, damping and inertia parameters, which influence how the tool feels to the operator when moving it. The second set are the desired values for the position, velocity and acceleration, which define the active constraints. For example, if we want the robot to guide the operator toward a certain point (the point constraint), the desired position $p_d$ has to represent that point.


As soon as the desired values (positions, velocities, etc.) are used, the robot starts actively influencing the motion of the tool. Therefore, it is important to know how to select the control gains. For example, the gain $K$ defines how strongly the robot pushes the tool toward the target, and the gains $D$ and $M$ define the dynamics of the motion. On the other hand, if we do not want to force the operator to move in a certain direction but only to encourage him to do so, then the stiffness gains $K$ have to be set to 0, no desired values for the robot state variables are used, and the desired action is achieved by selecting the gains $D$. For example, by using anisotropic gains $D$, one motion direction can become the preferred one [18].

In the case when the constraint should not influence the motion in a certain region, this can be achieved easily by adding dead-zone functions for the relevant variables in Eqs. (3) or (7). For example, if the active constraint has to prevent the tool from leaving the subspace of $\mathcal{T}$ where $y \in [y_{min}, y_{max}]$, then the desired position in the $y$-direction has to be selected between the region bounds, and the corresponding error component $e_y$ in the control (7) is calculated by applying a dead-zone to it:

$$e_y^* = \begin{cases} y - y_{max} & \text{for } y > y_{max} \\ 0 & \text{otherwise} \\ y - y_{min} & \text{for } y < y_{min} \end{cases} \qquad (9)$$

Consequently, the stiffness term in (7) is 0 when the tool lies between $y_{min}$ and $y_{max}$, and when it leaves this region, a virtual spring pushes the tool back into the desired region.

The unified concept of the active constraint framework is as follows. First, the space $\mathcal{T}$ relevant for the task has to be identified, i.e. the DOFs needed to perform the task. Next, the configuration of the constraint has to be determined. For most constraints this is the geometry, which is defined in the space $\mathcal{V} \subset \mathcal{T}$, or it can also be a velocity profile if the goal is to follow a velocity [18]. The control actions (guidance, attraction, repulsion) are established by evaluating the robot configuration relative to the constraint configuration. Here we have to select which states of the virtual robot are used for the constraining actions, and the desired values for those states. For the regional constraints the dead-zone parameters also have to be defined. Finally, we define the gains $K$, $D$ and $M$ to achieve the desired dynamic behavior.

Usually the motion of the end-effector is described in the Cartesian space. However, when the task or the constraints require a special behavior in some directions which are not aligned with the Cartesian space, we propose to formulate the control and the constraints in the relevant task space $\mathcal{T}$. This can be done by mapping the control and constraints from the Cartesian space into the suitable task space $\mathcal{T}$ [23]. The rotation of the task space $\mathcal{T}$ is considered only in the admittance control. So, (7) becomes

$$\dot{q}_d = (R_t^T J)^{\#}\, {}^t v_c + \left(I - (R_t^T J)^{\#} R_t^T J\right)\dot{q}_n, \qquad (10)$$


where

$${}^t v_c = R_t^T v_d + D^{-1}\left(K R_t^T e_x + R_t^T F_{ext}\right), \qquad (11)$$

where the matrix $R_t$ maps the controller into $\mathcal{T}$.
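To make the dead-zone mechanism of Eq. (9) concrete, a minimal sketch is given below. The function itself is our illustrative rendering of the text, not code from the paper; the bounds are placeholder values.

```python
def dead_zone_error(y: float, y_min: float, y_max: float) -> float:
    """Dead-zone error of Eq. (9): zero inside [y_min, y_max],
    a spring-like restoring error outside the allowed region."""
    if y > y_max:
        return y - y_max
    if y < y_min:
        return y - y_min
    return 0.0

# Inside the region the stiffness term of Eq. (7) vanishes;
# outside it, the virtual spring pushes the tool back.
assert dead_zone_error(0.05, 0.0, 0.1) == 0.0
assert dead_zone_error(0.12, 0.0, 0.1) > 0.0
```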

3.2 Constraints for Tracking Tasks

Tracking tasks are tasks where the robot end-effector has to follow a predefined path. In the framework of human-robot cooperation this task can consist of several steps. The tool attached to the end-effector of the robot can be in the near neighborhood of the goal position on the target path, or far away. So a possible sequence could be as follows: after the human operator grasps the tool, he has to move it closer to the path first, and when the target path is reached, the tool has to stay on that path. After the task is completed, the robot can be moved away. During this procedure the aim of the robot is to support and guide the human operator in each step. As there might be some regions in the robot workspace which have to be avoided, e.g. due to obstacles, singular configurations, etc., the robot should prevent the tool from entering them. In general, it is not necessary to exploit all available DOFs of the robot in each situation. For example, when the tool is far from the target position, the orientation of the tool might not be important, and the unused orientation DOFs can be utilized for some lower-priority tasks like pose optimization or obstacle avoidance.

3.3 Selection of Virtual Guides

We can define different zones depending on the motion complexity and requirements, e.g. far, near and close to the target. Additionally, due to some limitations, we also defined a restricted region for the tool position. When the tool is far from the wire, the orientation of the tool may not be important. Thus, only the tool positions are considered in the task space, and the orientation DOFs can be included in the null-space. Obviously, the preferred motion direction is to move the tool closer to the wire. Consequently, the damping gains are lower in the direction pointing to the target; this is the guiding constraint. When the tool nears the target path, the motion constraints become more restrictive, and the tool orientation becomes more important the closer the tool is to the wire. Hence, it is necessary to control the tool orientation in this region. Note that this requires that the orientation DOFs are included in the task space. Additionally, when the tool is closer to the wire, the allowed region for the tool position regarding the target path becomes narrower, i.e. the regional constraints change with the distance to the target path. Finally, when the tool is moved to the target path, the tool motion becomes very restricted: the tool can move only along the path. Here the constraints have to be designed so that any motion perpendicular to the path is not possible. The operator is able to move the tool only in the instantaneous direction of the path, which becomes a strongly preferred direction.


As this direction depends on the curvature of the path, we propose to use a variable task space $\mathcal{T}$, where $\mathcal{T}$ is aligned with the instantaneous direction of the path, and then we implement the admittance control in $\mathcal{T}$. Consequently, the virtual active constraints do not change when moving along the path.

4 Experimental Evaluation

To demonstrate the performance of the virtual guides based on the unified control framework, we have selected the Buzz-Wire task, where a ring has to be moved along a curved wire without touching it. This is a typical path-following task, which requires a certain level of mental effort from the human to achieve the goal. To reduce this effort, a KUKA LWR robot arm with 7 DOFs assists the human in the task. For that, a handle with a ring is attached to the force sensor, which is mounted on the robot end-effector. The human operator grasps and moves the handle. Figure 1 shows the experimental setup. For clarity of the virtual guides definition, the wire was designed to lie in a plane aligned with the robot-space y-z plane. As the wire path could not be described with analytic functions, the path was encoded using radial basis functions (RBF).
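The paper states only that the wire path was encoded with radial basis functions. A minimal sketch of one such encoding, Gaussian kernels fitted by least squares, is shown below; the kernel centers, width and the sampled wire shape are our own placeholder choices.

```python
import numpy as np

def fit_rbf(s, y, centers, width):
    """Fit weights w so that y(s) ~ sum_k w_k * exp(-(s - c_k)^2 / (2 width^2))."""
    Phi = np.exp(-(s[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def eval_rbf(s, centers, width, w):
    Phi = np.exp(-(np.atleast_1d(s)[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    return Phi @ w

# Encode one planar coordinate of the wire, e.g. z(s), over the path parameter s.
s = np.linspace(0.0, 1.0, 100)            # path parameter along the wire
z = 0.1 * np.sin(2 * np.pi * s)           # placeholder wire shape
centers = np.linspace(0.0, 1.0, 15)
w_z = fit_rbf(s, z, centers, width=0.05)
z_hat = eval_rbf(0.5, centers, width=0.05, w=w_z)
```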


Fig. 1. Buzz-wire task: (a) Experimental setup. (b) Task zones and approaching sequence

Figure 1 shows the zones, which differ depending on the motion complexity and task requirements. First we determined the closest point on the wire. Based on this point we defined the following zones (see Fig. 1b): (A) far away (>0.1 m from the wire plane), (B) near (between 0.1 m and 0.03 m from the wire plane), and (C) close to the wire (<0.03 m from the wire plane).

$$R = \sum_{i} \begin{cases} 1, & \text{if object } i \text{ is in its target position} \\ 0, & \text{if object } i \text{ is neither in its target position nor at the target position of another object} \\ -1, & \text{if object } i \text{ is located in the target position of another object} \end{cases} \qquad (11)$$

Defining the actions. Changing the position of a particular object is defined as a possible action in the current iteration step. An $\varepsilon$-greedy action-selection method was used, which with probability $\varepsilon$ chooses a random action, while with probability $1-\varepsilon$ it selects the action for which the agent gets the highest possible reward (the "exploration-exploitation dilemma"). When a large parameter value is selected, the amount of exploration increases, i.e. the agent searches a larger space, while in the reverse case it follows the policy whose sequence of actions gives the highest reward to the agent. Depending on the particular case, it is necessary to find a compromise; in most of the reinforcement learning literature, $\varepsilon$ is chosen between 0.1 and 0.2. We have used $\varepsilon = 0.15$ for our particular case. The actions are chosen as follows:
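A minimal sketch of the described $\varepsilon$-greedy selection (with the stated $\varepsilon = 0.15$) is given below; the Q-value callable is a placeholder for the paper's linear value approximation.

```python
import random

EPSILON = 0.15  # value used in the paper

def epsilon_greedy(actions, q_value, epsilon=EPSILON):
    """With probability epsilon explore a random action; otherwise
    exploit the action with the highest estimated value."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=q_value)
```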


The training of parameters is shown in the following algorithm:

The values of the Q-function for the current state and for the state two steps ahead are $q_t$ and $q_{t+2}$, $\delta$ is the calculated temporal difference, and $r_{t+1}$ and $r_{t+2}$ are the scalar rewards obtained by the transitions $s_t \to s_{t+1}$ and $s_{t+1} \to s_{t+2}$. The second phase of the algorithm is shown below, where the prediction of future actions is calculated. After a number of iterations, the system is brought from an initial to a target state, i.e. the objects are assigned a number of actions by following the optimal policy from the initial to the target position.
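The listing itself is given as a figure, so the sketch below only reconstructs a plausible form of the described update from the surrounding text: Q is linear in basis features, and the temporal difference $\delta$ combines $r_{t+1}$, $r_{t+2}$ and the value two steps ahead. The exact discounting used by the authors may differ.

```python
import numpy as np

def td_update(theta, phi_t, phi_t2, r_t1, r_t2, alpha=0.4, gamma=0.6):
    """Two-step temporal-difference update with linear basis functions.
    theta: parameter vector; phi_t, phi_t2: features of s_t and s_{t+2}."""
    q_t = theta @ phi_t                                    # Q-value of current state
    q_t2 = theta @ phi_t2                                  # Q-value two steps ahead
    delta = r_t1 + gamma * r_t2 + gamma ** 2 * q_t2 - q_t  # temporal difference
    return theta + alpha * delta * phi_t                   # gradient step on theta
```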


3 Results

We have made extensive convergence tests in order to find optimal values for the discount rate $\gamma$ and the iteration step $\alpha$. Figure 2 shows the mean value of the parameter vector $\theta$ depending on the parameters $\alpha$ and $\gamma$ over the first 700 episodes. It can be noticed that increasing their values yields a higher rate of convergence, but with higher oscillations in the solution. Based on the experiments, we have empirically chosen $\alpha = 0.4$ and $\gamma = 0.6$, which yield a good convergence rate with small oscillations. At this point we do not have a validation function for the choice of these parameter values, which we plan to develop in our future research.

Fig. 2. The mean vector value of the policy parameter $\theta$ for different parameter values $\alpha$ and $\gamma$


Fig. 3. Experimental application on a UR3 robot. The series of images shows the initial object state, i.e. the initial spatial structure (A) (also shown in Fig. 1A), the robot "pick&place" actions (B-M), and the final spatial structure (N) (also shown in Fig. 1B)

The experimental application of the reinforcement learning algorithm on an actual robot is shown in Fig. 3. The positions of the objects are localized in the robot workspace. Commands are sent to the UR3 robot through the TCP protocol using the secondary robot server (port 30002). The Python 3 programming language was used, and the commands, defined in the URScript language, were sent in string form. A vacuum gripper was used for the manipulation of objects, and the robot algorithm has the following steps:
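As an illustration of the communication path described above, a minimal Python client for the UR secondary server is sketched below. The robot IP address and the particular motion command are hypothetical, not taken from the paper.

```python
import socket

ROBOT_IP = "192.168.0.10"   # hypothetical address of the UR3 controller
PORT = 30002                # UR secondary server, as stated in the text

def send_urscript(command: str) -> None:
    """Send a single URScript command as a string over TCP."""
    with socket.create_connection((ROBOT_IP, PORT), timeout=2.0) as sock:
        sock.sendall((command + "\n").encode("utf-8"))

# Example: linear move of the tool to a pose (x, y, z, rx, ry, rz).
send_urscript("movel(p[0.30, -0.20, 0.25, 0.0, 3.14, 0.0], a=0.5, v=0.25)")
```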


4 Conclusion and Future Work

In this research, we have presented an action planning algorithm based on the temporal difference method using linear basis functions. The robot agent has been taught to perform a series of actions with the aim of assembling a predefined spatial structure from known objects in the smallest number of steps, by selecting actions that minimize the distance traveled by the robot. We have empirically and experimentally chosen the parameter values $\alpha = 0.4$ and $\gamma = 0.6$. The general validity of the proposed algorithm has not been proven, which will be the ground for our future research. A detailed comparison of the dynamic programming approach and analytic solutions with the new reinforcement learning algorithm was out of the scope of this paper and will be evaluated in our future work. One of the main advantages of our reinforcement learning approach, in comparison with analytical solutions which explore all possible combinations of actions and states the robot can encounter, is the dimensionality reduction. Based on the developed algorithms for solving the robotic action planning problem, possible extensions include objects that may occupy more than one field in the discretized 2D space. Furthermore, considering online learning, the reward function could be based on the similarity of the target and current positions of the objects after each action the agent has undertaken. In that case, it would be necessary to augment the algorithm with supervised learning algorithms such as Artificial Neural Networks [21]. Although we have used a reinforcement learning model, in some situations [9,21] there is a possibility of combining these two types of learning. Furthermore, one of the ways of enhancing the reinforcement learning model is to upgrade already existing algorithms with experiential limit functions.

Acknowledgements. The authors would like to acknowledge the Croatian Scientific Foundation for support through the research project ACRON - A new concept of Applied Cognitive Robotics in clinical Neuroscience.

References

1. Švaco, M., Jerbić, B., Šekoranja, B.: Task planning based on the interpretation of spatial structures. Tehnicki vjesnik - Technical Gazette 24(2) (2017)
2. Švaco, M., Jerbić, B., Šuligoj, F.: ARTgrid: a two-level learning architecture based on adaptive resonance theory. Adv. Artif. Neural Syst. 2014, 1–9 (2014)
3. Ekvall, S., Kragic, D.: Robot learning from demonstration: a task-level planning approach. Int. J. Adv. Rob. Syst. 5(3) (2008)
4. Asada, M., Noda, S., Tawaratsumida, S., Hosoda, K.: Vision-based reinforcement learning for purposive behavior acquisition. In: IEEE International Conference on Robotics and Automation, Proceedings, vol. 1, pp. 146–153
5. Yiannis, D., Hayes, G.: Imitative Learning Mechanisms in Robots and Humans (1996)
6. Jerbić, B.: Autonomous robotic assembly using collaborative behavior based agents. Int. J. Smart Eng. Syst. Des. 4(1), 11–20 (2002)
7. Jerbić, B., Grolinger, K., Vranješ, B.: Autonomous agent based on reinforcement learning and adaptive shadowed network. Artif. Intell. Eng. 13(2), 141–157 (1999)


8. Kormushev, P., Calinon, S., Caldwell, D.: Reinforcement learning in robotics: applications and real-world challenges. Robotics 2(3), 122–148 (2013)
9. Kober, J., Bagnell, J.A., Peters, J.: Reinforcement learning in robotics: a survey. Int. J. Rob. Res. 32(11), 1238–1274 (2013)
10. Bakker, B., Schmidhuber, J.: Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization. In: Proceedings of the 8th Conference on Intelligent Autonomous Systems, pp. 438–445 (2004)
11. Brochu, E., Cora, V.M., De Freitas, N.: A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599 (2010)
12. Morimoto, J., Doya, K.: Acquisition of stand-up behavior by a real robot using hierarchical reinforcement learning. Rob. Autonom. Syst. 36(1), 37–51 (2001)
13. Miljković, Z., Mitić, M., Lazarević, M., Babić, B.: Neural network reinforcement learning for visual control of robot manipulators. Expert Syst. Appl. 40(5), 1721–1736 (2013)
14. Khan, S.G., Herrmann, G., Lewis, F.L., Pipe, T., Melhuish, C.: Reinforcement learning and optimal adaptive control: an overview and implementation examples. Ann. Rev. Control 36(1), 42–59 (2012)
15. Duguleana, M., Barbuceanu, F.G., Teirelbar, A., Mogan, G.: Obstacle avoidance of redundant manipulators using neural networks based reinforcement learning. Rob. Comput.-Integr. Manuf. (2011)
16. Deisenroth, M., Rasmussen, C., Fox, D.: Learning to control a low-cost manipulator using data-efficient reinforcement learning (2011)
17. Švaco, M., Jerbić, B., Šuligoj, F.: Autonomous robot learning model based on visual interpretation of spatial structures. Trans. FAMENA 38(4), 13–28 (2014)
18. Miklic, D., Bogdan, S., Fierro, R.: Decentralized grid-based algorithms for formation reconfiguration and synchronization. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 4463–4468 (2010)
19. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
20. Konidaris, G., Kuindersma, S., Grupen, R., Barto, A.: Robot learning from demonstration by constructing skill trees. Int. J. Rob. Res. (2011)
21. Ye, C., Yung, N.H.C., Wang, D.: A fuzzy controller with supervised learning assisted reinforcement learning algorithm for obstacle avoidance. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 33(1), 17–27 (2003)

Safe and Efficient Autonomous Navigation in the Presence of Humans at Control Level

Klaus Buchegger, George Todoran, and Markus Bader(B)

Vienna University of Technology, Karlsplatz 13, 1040 Vienna, Austria
[email protected]

Abstract. In order to enable mobile robots to navigate autonomously in an environment shared with humans, special considerations are necessary to ensure both safe and efficient navigation. This work presents a predictive, human-aware motion controller, based on the Robot Operating System (ROS), which optimizes the vehicle trajectory at the control level with a high update rate. Predicting future positions of persons allows the system to optimize a trajectory around those predictions, yielding a sequence of motor controls for a smoothly executed motion. The improvements were statistically evaluated using simulation runs in terms of travel duration, path length, and minimum distance to persons along the path. This way, we are able to show that our new motion controller performs significantly better in the presence of humans than a controller without human-awareness.

Keywords: Autonomous navigation · Human-aware · Local planner · Human space · Robotics · Control level

1 Introduction

In human-aware robot navigation, a driver-less vehicle has to navigate through an environment with humans. In order to establish a system which is safe and efficient, human motions must be predicted and integrated into the navigation system. Such an integration can be done on a discrete planning level or on the control level. The difference in the resulting trajectory can be seen in Fig. 1. On the planning level, a single cost-map can be used to define a safety region around a detected person's current position. The robot then plans a trajectory around this stationary safety region to avoid a collision. Depending on the movement of the person, either away from or closer to the planned trajectory, this can result in trajectories with large, unnecessary detours, or even cause a crash. By reasoning about future positions of a person, the robot can plan efficient trajectories that possibly pass through areas which were initially occupied by a person, but will be free once the robot reaches those areas. The scientific contribution of this work is to present a novel navigational planner that predicts human movements and avoids collisions on the control level.


Fig. 1. Left, a classical planning approach on the discrete path-planning level using a single cost-map to circumnavigate humans. On the right, our new approach using a time-dependent cost function to optimize the vehicle's trajectory on the control level, resulting in a shorter motion.

The main advantage of performing these actions on the control level is the high update rate (10–50 Hz). However, classical control-level strategies (e.g. DWA [2]) are only capable of handling simple tasks, like immediate collision avoidance. Our novel approach is, nevertheless, able to plan trajectories for complex, dynamic situations, similar to those handled by the higher-level approach proposed by Kollmitz et al. [5]. Since predictions of human walking paths cannot be perfect, the planner has to be able to react to prediction errors. Due to the high update rate of the control level, the robot can react faster to imperfect predictions by replanning the trajectory 10–50 times per second. Additionally, our approach does not require large, multi-layered cost-maps to keep track of a person's future positions, as the positions and corresponding safety regions are directly computed for every evaluated time step. In this work, we present the results of our human-aware robot navigation approach in multiple showcases. Section 3 provides an overview of our approach, and in Sect. 4 we discuss in detail how our robot handled different scenarios, compared to a non-human-aware implementation. Our new approach was implemented by enhancing the MPN framework developed by Todoran et al. [8] for the Robot Operating System (ROS, http://www.ros.org/). This implementation was tested in a simulation environment, using Gazebo (http://gazebosim.org/), and statistically compared to the previous state of the framework, which treated every person as a static obstacle. Finally, in Sect. 5 we conclude our work by summarizing the results and highlighting the advantages of our approach compared to others.

2 Related Work

Recent publications show progress in developing human-aware mobile robots. Kollmitz et al. [5] proposed a navigational planner which uses a social cost field around a detected person and its prediction. The approach uses at least two layers for planning: a discrete planning layer, which implements the new time-dependent person costs, and the control layer, which computes the final motion commands. The integration of time-dependent person cost-maps allows an A* algorithm to determine the best path around moving persons.


These time-dependent cost-maps are computationally complex to update and to evaluate; therefore, the algorithm is only able to run at an update rate of 2 Hz. Kollmitz et al. [5] stated that the integration of such cost-maps into the control layer would make sense, but is, due to the computational complexity, not feasible. However, we show with our proposed work that, by using cost functions instead of discrete cost-maps, human prediction can be integrated on the control layer at update rates of up to 50 Hz. In [6], Kostavelis et al. proposed a navigational planner that predicts human movement towards certain points of interest in the environment using a D* algorithm. Along the predicted path, costs are assigned in a single time-independent cost-map layer. In their work, they are mostly concerned with collision avoidance. Efficient robot paths are of minor importance, as their model only predicts locations where the person could be at any time, and not whether the person actually leaves a location and therefore frees space for the robot to pass through. Chen et al. [1] recently presented a human-aware planner based on deep reinforcement learning. In contrast to their approach, we formulated an explicit, transparent movement prediction model with parameters adjustable at run-time and without going through a learning phase again. Other publications focus not only on the avoidance of collisions with humans, but also on replicating human-like behavior, in a way that persons will not be irritated by the robot's presence. Moussaid et al. [7], for example, discovered that, depending on the culture, people have a generally preferred side on which they pass other persons. By knowing such preferences, the robot can be designed to behave as people expect.

3 Human-Aware Approach

Figure 1 (left) shows that the typical navigational approach plans a path around a person by estimating the possible positions of that person as an ellipse, expanded in the person's moving direction. By actually predicting the person's movement for future timestamps ($t_1$–$t_3$), the robot can estimate the position of the person more accurately, and thus choose a more efficient trajectory, as shown in Fig. 1 (right). Our approach consists of three core elements, which enable the robot to plan trajectories that account for human movements. We implemented all three steps at the control level of our planner, allowing for fast update rates of 10–50 Hz. At first, the movements of detected persons were predicted, by assuming that people in general choose similar paths; therefore, our prediction follows paths many people have been observed walking along. With this prediction, the robot can estimate the future position of a person for any point in time. In the second step, possible trajectories towards a predefined goal were checked for validity: for every point in time, the future position of the robot had to satisfy a required safety distance to the future positions of the detected persons. And finally, the trajectories were evaluated with the cost function, which contains a social cost field around persons, favoring trajectories that pass on a person's left side.


For the prediction, we recorded the paths of persons walking through the environment in a map.³ For every cell of the map we counted how many people walked through that cell. With these recordings we could predict a person to walk towards areas which have seen a high occupancy in the past, using an explicit model. For every pose of a person, an area in front of that person is evaluated to determine a predicted change in orientation. An attracting potential is assigned to each cell in that area, scaled by the occupancy count, such that the combined potentials represent the most likely movement direction. The person is then predicted to turn towards that movement direction within one second, while walking with an assumed constant velocity.

The safety distance was formulated as an inequality constraint in the framework, such that the Euclidean distance to the closest person had to be at least 0.7 m (with an allowed error margin of 0.1 m) for every point along the trajectory. The distance of 0.7 m was chosen so that the robot avoids entering the intimate space, defined as a 0.45 m wide circle around a person [4]. We considered forcing the robot to pass outside of the personal space (0.45–1.2 m), but then the robot was not able to find a trajectory in some scenarios (e.g. a narrow hallway, with a person approaching the robot) even though the person could have been passed at a comfortable distance. Kollmitz et al. [5] similarly designed their planner to accept trajectories passing through a person's personal space.

For the preference to pass on a person's left side, the core cost function was modified. The cost of the existing framework is based on a velocity map, created using the fast marching method [3]. The velocity map represents how fast the goal can be reached from any position, and following the slope results in the shortest path, avoiding obstacles but ignoring the robot's dynamics. For every trajectory, accounting for the robot's dynamics, the end point is optimized to be as far along the slope as possible, resulting in the lowest cost for the remaining path to travel. To favor trajectories that pass on a person's left side, we increased the cost in an elliptical area around the person, slightly offset to the person's right, similar to the social cost field of Kollmitz et al. [5]. While this increased cost is only assigned to the person's current position, it was sufficient to shift the slope in the velocity map to the person's left side, and therefore the planned trajectories also end at a point to the person's left side. Besides the recorded person positions, our approach is based on an explicitly formulated model. Therefore, an advantage over designs based on neural networks is that all parameters can be adjusted quickly, either at run-time or at compile-time, without the need for a new learning phase.
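A minimal sketch of the occupancy-weighted heading prediction described above is given below, under our own simplifying assumptions: cells in front of the person act as attracting potentials scaled by their walk count, and the predicted heading is their count-weighted mean direction. The field-of-view and radius parameters are placeholders.

```python
import numpy as np

def predict_heading(pos, heading, counts, cells, fov=1.5, radius=3.0):
    """Predict a person's movement direction from an occupancy-count map.
    counts[i] is the recorded walk count of the cell at cells[i] (world frame)."""
    offsets = cells - pos
    dists = np.linalg.norm(offsets, axis=1)
    angles = np.arctan2(offsets[:, 1], offsets[:, 0]) - heading
    angles = np.arctan2(np.sin(angles), np.cos(angles))      # wrap to [-pi, pi]
    front = (dists > 1e-6) & (dists < radius) & (np.abs(angles) < fov / 2)
    if not np.any(front) or counts[front].sum() == 0:
        return heading                                       # keep walking straight
    dirs = offsets[front] / dists[front, None]               # unit attraction vectors
    mean_dir = (counts[front][:, None] * dirs).sum(axis=0)   # count-weighted potential
    return float(np.arctan2(mean_dir[1], mean_dir[0]))
```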

4 Experimental Setup and Results

In our experiments, we simulated a Pioneer P3-DX robot and a walking person in different scenarios. For each scenario, we performed several runs with various walking speeds for the person, in order to observe the robot's behavior for consistency.

³ Only the author's walking paths were observed and recorded during this work; the remaining part was performed in simulation, with no further persons involved.


The runs were done with the MPN framework as proposed by Todoran et al. [8], and with our improved version of the framework. We compared the performances in terms of travel duration, path length and the minimum distance to the person along the path. Additionally, we recorded velocity data of the robot to examine how smoothly the robot moved; sudden stops or jerks might startle surrounding persons and reduce the predictability of the robot's motion.

Fig. 2. Simulated test scenarios. In the approaching scenario (top) the robot and the person move towards each other, trying to pass. In the crossing scenario (bottom) the robot tries to enter a hallway while the person passes between the robot and the hallway entrance.

The scenarios are shown in Fig. 2. In the first scenario (top), the robot and the person started facing each other, with their goals each behind the other. The person was simulated to walk in a straight line from right to left, while the robot was instructed to find a path from left to right. For the second scenario (bottom), the person was simulated to walk from bottom to top, crossing the robot's path vertically, and the robot had to enter a hallway on the right. The robot's maximum movement speed was limited to 0.4 m/s, and the simulated person walked with speeds ranging from 0.3 to 1.5 m/s. We recorded travel duration, path length and minimum distance to the person, from when the robot started moving up until it passed the person's initial position. For the first scenario, Fig. 3 shows that both the old MPN framework (top) and our improved version (bottom) found paths that deviated to the side in order to let the person pass unhindered. However, without our social cost function favoring passing a person on his/her left side, the robot randomly chose the passing side. With the improvement, the robot always took similar turns to the right. Figure 4 shows a summary of the measurements of 30 simulation runs and one example velocity profile. While the path length was similar for both implementations, our improved version performed significantly better in terms of travel duration and minimum distance to the person. With our human-aware approach, the robot always kept the required minimum distance of 0.7 m with an allowed error margin of 0.1 m, and passed on the socially preferred side.


Fig. 3. Resulting paths in the first scenario where the robot and the person approached each other. The top image shows the non-human-aware approach, the bottom image shows our improved version. The robot is represented by the black model on the left, with its goal, the orange arrow on the right. The trajectories are shown by sequences of small red arrows. The person is represented by the gray figure in the center, and the predicted path by the blue arrow pointing leftward.

Fig. 4. Comparison of measurements of travel duration, path length, minimum distance and velocity profile of the old implementation to our new implementation in the approaching scenario.

We noticed that an increased walking speed of the person resulted in longer travel durations and a slightly shorter safety distance, as the robot had to perform a steeper evasive maneuver. The non-human-aware version of the framework could not satisfy the specified minimum distance to obstacles, and therefore the robot had to stop and wait for the person to pass in most of the runs. In the exemplary velocity profile, shown in Fig. 4, the braking can be seen after twelve seconds. The plot for our improved version shows, on the other hand, that no sudden stops or jerks were necessary, as the robot smoothly accelerated and passed the person. In the crossing scenario, the person started in front and to the right of the robot, and the robot tried to drive through a hallway entrance. The resulting paths for both approaches can be seen in Fig. 5. In the old version, the robot tried to pass in front of the person by turning left, while the new approach always selected a path that passed behind the person. The spirals in the old approach resulted from a fast walking speed. As the robot tried to pass in front of the person, it had to detour further and further, until it reached the point where turning around and passing behind the person was necessary to enter the hallway.


Fig. 5. Resulting paths in the second scenario where the person crossed in front of the robot. The left image shows the non-human-aware approach, the right image shows our improved version. The robot is represented by the black model on the left with its goal, the orange arrow on the right. The trajectories are shown by sequences of small red arrows. The person is represented by the gray figure on the bottom, and the predicted path by the blue arrow pointing upward.

For slower walking speeds, the robot started turning left, but then the person moved too close, so the robot had to stop until the person passed. A summary of the recorded measurements is presented in Fig. 6. Again, the path lengths were similar, but our new approach performed better in terms of travel duration and distance to the person. In the velocity plot for the old version it can be seen that after six seconds the robot violated the safety distance and therefore had to stop. For our new approach, the velocity plot shows that the robot accelerated more slowly, allowing it to smoothly keep the safety distance while passing behind the person.

Fig. 6. Comparison of measurements of travel duration, path length, minimum distance and velocity profile of the old implementation to our new implementation in the crossing scenario.

In this scenario we noticed that for our new version, the travel duration was not connected to the path length. The optimizer found two different solutions for an efficient trajectory towards the goal. Either the trajectory detoured to the right, to actively evade the person, or a straight line with limited acceleration was chosen. This trade-off between speed and path length resulted in very similar travel durations, and therefore similar costs.

5 Conclusion

In this work, we demonstrated the benefits of making a navigational planner human-aware. By accounting for a person's movements and predicting future positions, our planner is able to find efficient trajectories that safely avoid collisions on the control level. In addition to collision avoidance with humans, our approach also allows the robot to consistently pass approaching persons on a preferred side, which makes it more predictable for humans. Our enhancements of the MPN framework brought significant improvements for the tested scenarios, as they prevented the need for sudden stops. Unlike the approach of Kostavelis et al. [6], our planner predicted the persons to move away from their initial position, and therefore allowed the robot to plan paths through initially occupied areas. The results were similar to those of Kollmitz et al. [5], who proposed a higher-level planner with an update frequency of 2 Hz. By integrating the human-awareness into the control level of the motion planner, we are able to achieve much higher update frequencies of 10–50 Hz, allowing fast reactions to unforeseen situations. For future work, the approach still has to be tested outside of the simulation, in a real-world environment. In addition, as our explicit prediction model is evaluated for every future position of the robot, human-robot interactions could be implemented as well. For example, the person could then be predicted to slightly turn away from the robot's trajectory.

Acknowledgement. The research leading to these results has received funding from the Austrian Research Promotion Agency (FFG) according to grant agreement No. 854865 (TransportBuddy) and No. 855409 (AutonomousFleet).

References

1. Chen, Y.F., Everett, M., Liu, M., How, J.P.: Socially aware motion planning with deep reinforcement learning. CoRR, abs/1703.08862 (2017)
2. Fox, D., Burgard, W., Thrun, S.: The dynamic window approach to collision avoidance. IEEE Rob. Autom. Mag. 4(1), 23–33 (1997)
3. Gómez, J.V., Álvarez, D., Garrido, S., Moreno, L.: Fast methods for Eikonal equations: an experimental survey. CoRR, abs/1506.03771 (2015)
4. Hall, E.: The Hidden Dimension: Man's Use of Space in Public and Private. Doubleday Anchor Books, Bodley Head (1969)
5. Kollmitz, M., Hsiao, K., Gaa, J., Burgard, W.: Time dependent planning on a layered social cost map for human-aware robot navigation. In: Proceedings of the IEEE European Conference on Mobile Robots (ECMR), Lincoln, UK (2015)
6. Kostavelis, I., Kargakos, A., Giakoumis, D., Tzovaras, D.: Robot's workspace enhancement with dynamic human presence for socially-aware navigation. In: Computer Vision Systems, pp. 279–288, Cham (2017)
7. Moussaïd, M., Helbing, D., Garnier, S., Johansson, A., Combe, M., Theraulaz, G.: Experimental study of the behavioural mechanisms underlying self-organization in human crowds. Proc. R. Soc. London B: Biol. Sci. 276(1668), 2755–2762 (2009)
8. Todoran, G., Bader, M.: Expressive navigation and local path-planning of independent steering autonomous systems. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4742–4749, October 2016

Avoiding Sets of Measure-Zero in Navigation Transformation Based Controllers

Savvas G. Loizou(B)

Cyprus University of Technology, Limassol, Cyprus
[email protected]

Abstract. The Navigation Transformation proposed in [Loizou (2017)] provides a novel solution to the motion and path planning problems, while enabling temporal stabilization up to a set of measure-zero of initial conditions. Since sets of measure zero are explicitly defined for a given workspace, this work proposes an additional control action that steers the system trajectories away from such sets. The provided theoretical results are backed with experimental studies.

Keywords: Navigation transformation · Time abstraction

1 Introduction

Sets of measure-zero in the navigation problem are a direct implication of the workspace topology when designing smooth vector fields [Koditschek and Rimon (1990)]. Such sets primarily affect the set of feasible initial conditions of the navigation problem. Any initial condition in those sets causes the smooth navigation vector field to fail to navigate the system to its sought destination. In practice those sets are considered of minor importance, since the probability of a system having initial conditions there is exactly zero. However, practical side effects of the existence of those sets are, in the case of Navigation Function based controllers [Loizou and Jadbabaie (2008); Rimon and Koditschek (1992)], a reduced convergence rate as the initial condition moves closer to them, and in the case of Navigation Transformation (NT) based controllers [Loizou (2017)], trajectories that are closer to the internal workspace boundary as the initial condition moves closer to them. This work proposes an auxiliary control action that can be implemented on NT based controllers and pushes the system away from sets of measure zero. This is possible for NT based controllers, since under the NT, sets of measure zero are explicitly defined. In the current work we study the effect of measure-zero set avoiding controllers on NT based controllers for the motion planning problem and the problem of motion planning with time abstraction.


The rest of the paper is structured as follows: Sect. 2 presents some preliminary notations from the NT literature, Sect. 3 presents the proposed controller design, Sect. 4 presents experimental results and Sect. 5 concludes the paper.

2 Preliminaries

In this section the necessary NT terminology and definitions for the methodology development will be presented. For further details regarding the NT please refer to [Loizou (2017)].

For a finite $M \in \mathbb{Z}^+$, let $P_i \in \mathbb{R}^n$, $i \in \{1,\dots,M\}$ be $M$ discrete elements of $\mathbb{R}^n$. Then:

Definition 1 (Definition 1 in [Loizou (2017)]). A point-world is defined as a manifold $\mathcal{P}^n \subseteq \mathbb{R}^n \setminus \bigcup_{i=1}^{M} P_i$.

Definition 2 (Definition 3 in [Loizou (2017)]). A workspace (with external boundary) $\mathcal{W} \subset \mathbb{R}^n$ is a manifold such that $\mathring{\mathcal{W}}$ is diffeomorphic to $\mathcal{P}^n$ ($\tilde{\mathcal{P}}^n$). Let $O \triangleq \partial\mathcal{W}$. Then $O$ consists of the mutually disjoint sets of the obstacle boundaries $O_i$, $i \in \{1,\dots,M\}$, and (if such exists) the disjoint "external" boundary $O_0$, such that $O = \bigcup_{j=0,\dots,M} O_j$.

Assume a system described by the first-order holonomic kinematic model

$$\dot{x} = u \qquad (1)$$

where $x \in \mathbb{R}^n$ is the robot's position and $u \in \mathbb{R}^n$ is the velocity control input. The initial configuration of the robot is denoted as $x_0 \in \mathring{\mathcal{W}}$, and the destination configuration as $x_d \in \mathring{\mathcal{W}}$.

Definition 3 (Definition 7 in [Loizou (2017)]). A Navigation Transformation is a diffeomorphism $\Phi : \mathring{\mathcal{W}} \to \mathcal{P}^n$ ($\Phi : \mathring{\mathcal{W}} \to \tilde{\mathcal{P}}^n$) that maps the interior of the workspace to a point-world (with spherical boundary).

Define the destination vector in the point-world as

$$\bar{d}_p(x) \triangleq \Phi(x_d) - \Phi(x). \qquad (2)$$

Definition 4 (Definition 8 in [Loizou (2017)]). A scheduling function is a decreasing $C^1$ function $s_T : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$, with $T > 0$, that satisfies:
1. $s_T(0) = \|\bar{d}_p(x(0))\|$,
2. $s_T(t) = 0$ for $t \ge T$,
3. $s_T(t) > 0$ for $0 \le t < T$.


From the proof of Proposition 1 in [Loizou (2017)], the set of measure zero is defined as

$$Z = \bigcup_{i \in \{1,\dots,M\}} \left\{ z \in \mathcal{P}^n \;\middle|\; z = P_i + (P_i - q_d)\cdot\zeta,\ \zeta > 0 \right\}$$

where $q_0 \triangleq \Phi(x_0)$, $q_d \triangleq \Phi(x_d)$. The motion, path and time abstraction problems considered are the ones defined in [Loizou (2017)].

3 Control Design for Measure-Zero Set Avoidance

The solutions provided for the motion planning and time abstraction problems (Propositions 2 and 3 in [Loizou (2017)]) give analytically guaranteed results, including collision avoidance. Collision avoidance is achieved by topological properties of the point-world (i.e. the initial conditions that cause collisions are sets of measure zero). Initial conditions that are arbitrarily close to sets of measure zero (but not in them) will result in trajectories that are arbitrarily close to obstacles (without colliding with them). However, such a scenario raises issues of robustness and safety due to the proximity of the trajectories to obstacles and the possibility of uncertain disturbances that can appear in the system. In order to increase the safety and robustness of the solution in terms of collision avoidance properties (i.e. increase the minimum distance from the obstacles), a repulsive potential from the elements $P_i$, $i = 1,\dots,M$ can be utilized.

Assume $q \in \mathcal{P}^n$. Let $\bar{c}_i(q) \triangleq q - P_i$. Define the potential:

$$\Gamma(q) \triangleq \frac{1}{1 + \prod_{i=1}^{M} \|\bar{c}_i(q)\|}$$

If $n$ is even, then we can analytically define the perpendicular destination vector¹ with a closed-form expression. In particular, for $n = 2$ such a vector can be defined as:

$$\tilde{d}_p^{\perp} \triangleq \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \tilde{d}_p$$

Defining an equivalent vector field on odd-dimensional workspaces would require an appropriate switching strategy when transitioning between the workspace's topological charts; although appropriate heuristics can be developed and analyzed for this, it is beyond the scope of the current work.

¹ For odd-dimensional workspaces no single continuous function exists that can provide such a vector, a direct consequence of the "hairy ball" theorem.


We have the following result regarding the motion planning problem that addresses additional robustness considerations with respect to collision avoidance:

Proposition 1. Let $\Phi$ be a Navigation Transformation. Then system (1) under the control law

$$u = J_\Phi^{-1}\left(k_1 \bar{d}_p(x) - k_2\, \tilde{d}_p^{\perp}\tilde{d}_p^{\perp T}\nabla_\Phi \Gamma(\Phi(x))\right) \qquad (3)$$

where $k_1$, $k_2$ are positive scalar gains, is globally exponentially stable at $x = x_d$, almost everywhere.

Proof. From the Lyapunov function

$$V(x) = \frac{1}{2}\bar{d}_p(x)^T \bar{d}_p(x)$$

we have that

$$\dot{V}(x) = -\bar{d}_p(x)^T J_\Phi J_\Phi^{-1}\left(k_1 \bar{d}_p(x) - k_2\, \tilde{d}_p^{\perp}\tilde{d}_p^{\perp T}\nabla_\Phi \Gamma\right).$$

Noting that $\bar{d}_p(x)^T \tilde{d}_p^{\perp}(x) = 0$, we have that $\dot{V} = -2k_1 V$, which implies global exponential stability for the system, up to a set of measure zero of initial conditions. □
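A minimal numerical sketch of control law (3) for $n = 2$ is given below, assuming that the Navigation Transformation $\Phi$, its Jacobian $J_\Phi$ and the gradient $\nabla_\Phi\Gamma$ are supplied from elsewhere, and that $\tilde{d}_p$ denotes the normalized destination vector; the identity map is used only as a placeholder for $\Phi$.

```python
import numpy as np

ROT90 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # builds the perpendicular vector

def control_law_3(x, x_d, Phi, J_Phi, grad_Gamma, k1=1.0, k2=0.5):
    """Control law (3): attraction along d_p plus repulsion from measure-zero sets."""
    d_p = Phi(x_d) - Phi(x)                   # destination vector, Eq. (2)
    norm = np.linalg.norm(d_p)
    d_t = d_p / norm if norm > 1e-9 else d_p  # unit destination direction
    d_perp = ROT90 @ d_t                      # perpendicular destination vector
    repulse = k2 * d_perp * (d_perp @ grad_Gamma(Phi(x)))
    return np.linalg.solve(J_Phi(x), k1 * d_p - repulse)

# Smoke test with the identity transformation as a stand-in for Phi.
u = control_law_3(np.array([1.0, 1.0]), np.zeros(2),
                  Phi=lambda q: q, J_Phi=lambda q: np.eye(2),
                  grad_Gamma=lambda q: np.zeros(2))
```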

We have the following safety-related result:

Corollary 1. The vector field generated by control law (3) around $P_i$ in the point-world, $\forall i = 1,\dots,M$, is locally repulsive up to a set of configurations of measure zero.

Proof. It is sufficient to prove that the flows of system (1) under control law (3) in the point-world move away from $P_i$ in some $\varepsilon$-neighborhood around $P_i$, either in the "radial" (i.e. along $\tilde{d}_p$) direction, $\tilde{d}_p\tilde{d}_p^T\bar{c}_i$, or in the "tangential" (i.e. the direction complementary to the "radial" one, along $\tilde{d}_p^{\perp}$) direction, $\tilde{d}_p^{\perp}\tilde{d}_p^{\perp T}\bar{c}_i$, up to a set of measure zero.

Taking the projection of the point-world flows defined by control law (3) on $\tilde{d}_p^{\perp}\tilde{d}_p^{\perp T}\bar{c}_i$, we get:

$$\frac{1}{k_2}\left(\tilde{d}_p^{\perp}\tilde{d}_p^{\perp T}\bar{c}_i\right)^T J_\Phi \dot{x} = \Gamma^2\,\tilde{d}_p^{\perp T}\bar{c}_i \cdot \left( \tilde{d}_p^{\perp T}\nabla_q\|\bar{c}_i\| \prod_{j\neq i}\|\bar{c}_j\| + \|\bar{c}_i\|\,\tilde{d}_p^{\perp T}\nabla_q\prod_{j\neq i}\|\bar{c}_j\| \right)$$


whereas $\|\bar{c}_i\|$ can become arbitrarily small as we move closer to $P_i$, $\nabla_q\|\bar{c}_i\| = \hat{c}_i$ is a unit vector. Now observe that as long as there exists a $\delta > 0$ such that $\left|\tilde{d}_p^{\perp} \cdot \nabla_q\|\bar{c}_i\|\right| \ge \delta$, then there will always exist an $\varepsilon(\delta) > 0$, small enough, such that the first term in the parenthesis dominates, hence:

$$\left(\tilde{d}_p^{\perp}\tilde{d}_p^{\perp T}\bar{c}_i\right)^T J_\Phi \dot{x} > 0$$

as long as $0 < \|\bar{c}_i\| < \varepsilon$. However, there are cases when such a $\delta$ cannot be found; those are the cases when $\tilde{d}_p^{\perp} \cdot \nabla_q\|\bar{c}_i\| = 0$. The set of points $Z_l \supset Z$ satisfying this condition is the line passing through $P_i$ and $q_d$. Taking the projection of the point-world flows in $Z_l$ on $\tilde{d}_p\tilde{d}_p^T\bar{c}_i$, we get:

$$\frac{1}{k_1}\left(\tilde{d}_p\tilde{d}_p^T\bar{c}_i\right)^T J_\Phi \dot{x} = \tilde{d}_p^T\bar{c}_i\,\|\bar{d}_p\|$$

which is negative only in $Z$, a set of measure zero. □

We have the following result regarding the time-abstracted solution to the motion planning problem that addresses additional robustness considerations with respect to collision avoidance:

Proposition 2. System (1) under the control law

$$u(t) = J_\Phi^{-1}\cdot\left(\tilde{d}_p\left(-\dot{s}_T + k_1\left(\|\bar{d}_p\| - s_T\right)\right) - k_2\,\tilde{d}_p^{\perp}\tilde{d}_p^{\perp T}\nabla_\Phi\Gamma(\Phi(x))\right) \qquad (4)$$

where $k_1$, $k_2$ are positive scalar gains, provides a time-abstracted solution to the motion planning problem, with duration $T$.

Proof. The proof follows the same steps as the proof of Proposition 3 in [Loizou (2017)], by noting that $\bar{d}_p^T\tilde{d}_p^{\perp} = 0$. □

We now have the following safety-related result:

Corollary 2. The vector field generated by control law (4) around $P_i$ in the point-world, $i = 1,\dots,M$, is locally repulsive up to a set of configurations of measure zero.

Proof. Following the same steps as in the proof of Corollary 1, we get that

$$\left(\tilde{d}_p^{\perp}\tilde{d}_p^{\perp T}\bar{c}_i\right)^T J_\Phi \dot{x} > 0$$

as long as $0 < \|\bar{c}_i\| < \varepsilon$ and $\tilde{d}_p^{\perp}\cdot\nabla_q\|\bar{c}_i\| \neq 0$. Also, the set of points $Z_l$ satisfying the condition $\tilde{d}_p^{\perp}\cdot\nabla_q\|\bar{c}_i\| = 0$ is the line passing through $P_i$ and $\Phi(q_d)$, and the only non-repulsive subset is $Z$, a set of measure zero, as in the proof of Corollary 1. □

4 Experiment Results

4.1 Setup

The experimental setup is the same one detailed in [Loizou (2017)] and depicted in Fig. 1. Two case studies were examined to test the effectiveness of the controllers proposed in Propositions 1 and 2.

Fig. 1. Experimental setup. A star-shaped workspace with 4 obstacles and a differential drive robot (same as the one used in [Loizou (2017)])

4.2 Case Study 1

In the first case study, control law (3) provided in Proposition 1 was experimentally evaluated. The resulting path is depicted in Fig. 2, whereas Fig. 3 depicts the minimum distance to obstacles during the experiment. As can be seen, control law (3) successfully solves the motion planning problem while providing additional robustness to the system, by means of an increased minimum distance from the obstacles, as a result of the repulsive control action from the sets of measure zero.

4.3 Case Study 2

In the second case study, control law (4) provided in Proposition 2 was experimentally evaluated. A sinusoidal scheduling function was chosen as in [Loizou (2017)]:

$$s_T(t) := \left\|\bar{d}_p(x(0))\right\| \frac{\cos\left(\frac{t\pi}{T}\right)+1}{2} \qquad (5)$$


Fig. 2. Case study 1: Robot trajectory under control law (3). Green and red circles denote initial and destination configurations respectively

Fig. 3. Case study 1: Minimum distance to obstacles under control law (3)

with task duration T = 30s. The resulting path is depicted in Fig. 4 whereas Fig. 5 depicts the minimum distance to obstacles during the experiment. Figure 6 depicts the distance to the destination versus time for the experiment. As can be seen control law (4) successfully solves the motion planning and time abstraction problems whereas providing additional robustness to the system by means of increased minimum distance from the obstacles as a result of the repulsive control action from the sets of the measure zero.


Fig. 4. Case study 2: Robot trajectory under control law (4). Green and red circles denote initial and destination configurations respectively

Fig. 5. Case study 2: Minimum distance to obstacles under control law (4)


Fig. 6. Case study 2: Distance to the destination vs time under control law (4)

5 Conclusions

In this work, an auxiliary control action that can be implemented on NT based controllers [Loizou (2017)] was proposed, which pushes the system away from sets of measure zero. This is possible for NT based controllers, since under the NT, sets of measure zero are explicitly defined. The designed controllers provided the auxiliary control action for the motion planning problem and for the problem of motion planning with time abstraction. In addition to the theoretical guarantees, experimental studies were presented to validate the results in realistic conditions.

Acknowledgments. This work was supported by the Interreg Balkan-Mediterranean project "SFEDA", co-funded by the European Union and National Funds of the participating countries.

References

Koditschek, D.E., Rimon, E.: Robot navigation functions on manifolds with boundary. Adv. Appl. Math. 11, 412–442 (1990)
Loizou, S., Jadbabaie, A.: Density functions for navigation function based systems. IEEE Trans. Autom. Control 53(2), 612–617 (2008)
Loizou, S.G.: The navigation transformation. IEEE Trans. Robot. 33(6), 1516–1523 (2017). https://doi.org/10.1109/TRO.2017.2725323
Rimon, E., Koditschek, D.E.: Exact robot navigation using artificial potential functions. IEEE Trans. Robot. Autom. 8(5), 501–518 (1992)

SkyBat: A Swarm Robotic Model Inspired by Fission-Fusion Behaviour of Bats

Ján Zelenka¹(B), Tomáš Kasanický¹, Ivana Budinská¹, Ladislav Naďo², and Peter Kaňuch²

Institute of Informatics, Slovak Academy of Sciences, Bratislava, Slovakia {zelenka,kasanicky,budinska}@savba.sk Institute of Forest Ecology, Slovak Academy of Sciences, Bratislava, Slovakia [email protected], [email protected] http://www.ui.sav.sk/w/en/dep/mcdp/ http://www.ife.sk

Abstract. An agent based model - SkyBat, based on long-term observation of bats behaviour under fission-fusion dynamics, is presented in this paper. The agents cooperate while searching for specific targets of interest in an unknown area. Although the agents are autonomous, they have an ability to move from one location to another without a group leader and to react to changes in environment.

Keywords: SkyBat

1

· Fission-fusion dynamics · Swarm robotics

Introduction and Related Works

Swarm robotic systems are characterised by collective behaviour that emerges from the interactions between robots and between robots and environment. An inspiration for creation of such systems can be often found in nature, for example in movement of insect, flocking of birds or in schooling of fish. Swarm robotics was also inspired by bats - the only flying mammals on this planet. In 2010, Xi-She Yang presented a formal description of a behavioural algorithm, where he idealized some of the echolocation characteristics of microbats [9]. Since the time of publication of this work, the Bat algorithm found application in various fields, including optimization, classification, image processing, task scheduling, data mining and many others. Properties of echolocation and communication among bat individuals might, as well, find application in the field of robotics, particularly to solve problems related to collision avoidance of multiple robots [1]. The majority of bat-inspired algorithms developed until now are based on certain echolocation characteristics of microbats, however behavioural algorithm, which try to mimic the group movement of bats, are more rare. To our knowledge, the first attempt to simulate movement of tree-dwelling bats was published by Barto´ n and Ruczy´ nski [6]. In their agent-based model, c Springer Nature Switzerland AG 2019  N. A. Aspragathos et al. (Eds.): RAAD 2018, MMS 67, pp. 521–528, 2019. https://doi.org/10.1007/978-3-030-00232-9_55


they aimed to explore the benefits of tree selection, memory and eavesdropping (compensation behaviours) during the search for tree cavities by bats with short and long perception ranges. In this paper we present a first version of a more advanced agent-based model called SkyBat, which is able to fully comprehend the roost-switching dynamics of bat groups. The model is based on key patterns of swarming behaviour of tree-dwelling bats revealed by our recent studies [3,4]. Among the unique features of the SkyBat algorithm are: (i) the ability to perform group movement from one location of interest to another without a group leader, (ii) the ability to effectively search for targets of interest in an unknown environment and (iii) the ability to perform flexible non-centralized decision-making in a rapidly changing environment. The paper is organized as follows: Sect. 2 describes an agent model based on bat behaviour under fission-fusion dynamics, with a relevant biological description of the individual parameter settings. The results of simulations of the proposed behavior of bats are given in Sect. 3. The last section includes a conclusion and some ideas for future work.

Fig. 1. Movement of bat agents. Predetermined directional (a) and speed (b) distributions responsible for the highly correlated random walk of bats.

2 Model Description and Simulation Settings

All parameters used in our swarm agent model are based on biological observation of Nyctalus leisleri (a fast flying, aerial hawking and tree-dwelling species of bat) and on the biological research presented in [2–5]. Our simulation environment is represented by a square plane (600 × 600 m) with an evenly distributed set of roosts. Roosts have only one physical characteristic, the height above the ground, which determines their attractiveness. We decided to use only the height above the ground, as it is one of the most easily interpretable characteristics (higher cavities are safer from predators). Heights of roosts were taken as uniformly distributed random numbers in the range 7–29 m (the most common height of roosts in the real environment). Agents move in the environment by random walk, following a nearly linear path. Each agent randomly chooses a swerve direction vd(t) (Fig. 1a) and a speed


s(t) (Fig. 1b) from the respective predetermined distributions in each second of the simulation. To maintain biological relevance, the values of the flight speed s(t) are derived from a distribution with a minimum flight speed of 2.5 m·s−1 [7], a maximum of 11.11 m·s−1 [8] and a median speed of 3.7 m·s−1 (Fig. 1b). The movement of agents is described by the following equation:

v_m(t) = v_d(t) + v_s(t),   (1)

where v_m(t) is the bat movement vector at time t, v_d(t) is the random swerve vector at time t and v_s(t) is the random speed vector at time t. The duration of the simulation T_{sim} is set to 28800 s (8 h of night activity of bats). The simulation is divided into 3 episodes: (i) foraging, (ii) searching for a roost and signalling, (iii) dawn period (the last 300 s of the simulation cycle). At the start of the simulation, all agents are placed in a single roost. After the initial departure of agents from the roosting site, a foraging period begins. Agents start to move randomly in space. The length of the foraging time depends on many factors in real environments. An average foraging time of 2–3 h (7200 s to 10800 s) was derived from the observation of bats. Random movement ensures that at the end of foraging, agents are randomly distributed within the environment. After the foraging period ends, agents begin to interact with the roosts. The sensing field of view (FoV) of an agent is set to 230° and the distance at which it can detect a roost to 100 m. Distance and FoV were set according to the structural characteristics of echolocation calls of N. leisleri. An agent randomly selects one roost from the set of roosts included within its FoV. After arriving at the selected roost, it evaluates the quality of the roost and, if the quality is satisfactory, it begins to attract other agents by calls (acting as a signaller). The roost quality for a bat can be affected by many factors, such as wall thickness, entrance diameter, amount of parasites, distance from the ground, etc. We do not know any study that would address this issue. Therefore, in the simulation we use only the height above the ground as the main quality indicator. Signalling of a roost by calls is visualized in the simulation by constructing a circular buffer with the variable Bdistattrac as its radius around the target tree. In the roost selection process, a roost which is currently being attracted has a higher probability that a receiver agent (i.e. an agent currently not signalling any roost) will select it as a target roost. If a receiver agent is located in multiple circles at the same time, it chooses the target roost with the larger number of signalling agents. The time t_attract, which the signaller agent is willing to spend in order to attract other agents to the target roost, is calculated as follows:

t_attract = HR + NB + 100/TTS [s],   (2)

where HR is the height of the roost above the ground in meters, NB is the number of bats in the roost and TTS is the time to sunrise in seconds. The agent decides whether it is willing to invest its energy into longer roost attraction or not when the agent's time counter reaches t_attract. This decision is also affected by whether the agent's calls have attracted someone else to the roost.
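As a worked illustration, the movement update (1) and the attracting time (2) can be sketched as follows. The swerve and speed distributions of Fig. 1 are not published in closed form, so the normal swerve and clipped log-normal speed below are assumptions, not the authors' exact distributions.

    import numpy as np

    rng = np.random.default_rng(0)

    def movement_step(pos, heading):
        # v_d(t): random swerve direction; the 10 deg spread is an assumed stand-in
        heading += rng.normal(0.0, np.deg2rad(10.0))
        # s(t): speed with median 3.7 m/s, clipped to the observed 2.5-11.11 m/s range
        speed = np.clip(rng.lognormal(np.log(3.7), 0.3), 2.5, 11.11)
        # v_m(t) = v_d(t) + v_s(t): one 1-s step of the correlated random walk
        return pos + speed * np.array([np.cos(heading), np.sin(heading)]), heading

    def t_attract(hr, nb, tts):
        # Eq. (2): roost height HR [m], bats in roost NB, time to sunrise TTS [s]
        return hr + nb + 100.0 / tts

    pos, heading = movement_step(np.zeros(2), heading=0.0)
    print(pos, t_attract(hr=15.0, nb=3, tts=7200.0))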


If the agent decides to keep performing longer roost attraction, the length of the bonus time for attracting to the target roost depends on the number of attracting agents (inside the roost) and is calculated as follows:

t_attractbonus = t_est · NB [s],   (3)

where NB is the number of bat agents in the roost and t_est is the coefficient of the approximated curve of the length of the bonus attracting time. The bonus attracting time depends on the population size of the attracting bats (for a group of 25 attracting bats the bonus time takes 45 min; this coefficient was used in the simulation). After this time a signaller agent enters the roost and remains inside it (i) as long as the roost is attracted by the other signalling agents or (ii) till the next evening departure, once the group of agents reaches its threshold size GST(t), expressed by the following equation:

GST(t) = { b,                                      t ∈ ⟨0, o⟩
         { b − (a − b)·T_{sim}/(t − T_{sim}),      t ∈ ⟨o, T_{sim}⟩   (4)

The parameters a and b in (4) represent the minimum and maximum value of the number of bats in the group, respectively (in the simulations a and b were set to 10 and 90 individuals, respectively). The o represents the time when a bat lowers its interest in the size of the population in the roost (in the simulations o was set to midnight). Otherwise, when signalling reaches its limits and the number of attracted agents inside the roost is not sufficient, all agents leave this roost and start to search for other roosting sites. It should be noted that signalling a target roost located in close proximity to the roost used the previous night is penalized (proportionally to the distance calculated by (5), where c and d represent the minimum and maximum values of the safe distance), because roost-switching over short distances might increase the risk of predation (e.g. martens, owls). In the simulations c and d were set to 10 m and 150 m, respectively. The parameter α represents the descent slope of the FPD(t) curve (in the simulations α was set to 2).

FPD(t) = c + (d − c) · (1 − t/T_{sim})^α / (1 − (1/2)·(t/T_{sim})^α),   t ∈ ⟨0, T_{sim}⟩   (5)

At last, if some agents have not found the position of a new roosting site and the time before sunrise is less than 300 s, the agents are automatically navigated to enter the last roost in which their group was roosting (even if it is currently empty). Thus, during the daytime, all bats must be located inside some roost, and in the evening the cycle starts again (the last positions of bats are their initial positions for the next night). The simplified state diagram of the process of searching for a suitable roost for a group of bats according to the above described behavioural rules of agents is illustrated in Fig. 2.
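A short sketch of the penalty distance FPD(t) of Eq. (5) as reconstructed above, with the parameter values stated in the text; it decays from d = 150 m at evening departure to c = 10 m at sunrise.

    import numpy as np

    T_SIM, C, D, ALPHA = 28800.0, 10.0, 150.0, 2.0  # values used in the simulations

    def fpd(t):
        # Eq. (5): safe roost-switching distance [m] as a function of night time t [s]
        x = t / T_SIM
        return C + (D - C) * (1.0 - x) ** ALPHA / (1.0 - 0.5 * x ** ALPHA)

    print(fpd(0.0), fpd(T_SIM))  # 150.0 at departure, 10.0 at sunrise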


Fig. 2. State diagram of the agent-based behavioural model for finding a suitable roost for a whole group, inspired by tree-dwelling bats. States 0, 1, 2 and 4 represent situations where a bat is in the roost, a bat is attracting to a roost for a time t_attract or t_attractbonus, a bat is searching for a new roost and a bat is hunting prey, respectively. State 3 represents a bat flying to the target roost; if a bat flying to a roost that is not being attracted hears another attracting bat during the flight, it adopts that attracted roost as its new target roost. States 0 to 3 represent the fission-fusion dynamics.

From a biological perspective, the main aim of bats is to find a new roost and attract a sufficient number of group members to enter it, while eliminating predation risks resulting from roost-switching over short distances. During the non-centralized decision-making process, the bat group must constantly evaluate the pros and cons of every combination of factors: quality of the roost, energetic costs of signalling, current size of its group, distance of the new roosting site from the previous one and the time remaining till sunrise.

3 Simulation Results

The whole process of construction and adjustment of our model was performed in the Matlab SW tool. We avoided using real roost coordinates, because we wanted to suppress the impact of the real environment on the proposed behavioural rules of the agent model. Using real roost positions could cause inappropriate results, especially in this initial phase of model development. Therefore, we decided to distribute roosts randomly in the environment, as described in Sect. 2. The positions of the roosts were generated randomly, while a minimum distance between the roosts, RRD, was kept (this distance varied among different simulations in order to test the importance of the roost positions with respect to the bat parameters). The size of the bat population was set to 60 individuals, which roughly corresponds to the size of a real bat group. We measured the model performance - its ability to perform group movement from one roost to another


without group disintegration, or creating as few subgroups as possible - by running it with different combinations of parameters (minimum distance between individual roosts - RRD; threshold distance for signalling calls - Bdistattract; number of roosts in the environment (roost density)). The results of these simulations are visualized in Figs. 3 and 4. The number of groups formed at the end of the night (even one individual in a roost is considered to be a group) is calculated as the mean value from nine independent simulations.

Fig. 3. Plots representing model performance under various combinations of parameters - threshold distance for signalling calls Bdistattract and number of roosts (roost density). Grey circles represent the mean number of groups formed at the end of the night with respect to the minimum distance between individual roosts - RRD (10 and 40 m). Thicker grey vertical lines behind the means show the minimal and maximal number of groups.

A case where roosts are uniformly distributed in space was simulated, too. This case represents simulations with specific environmental parameter settings of the roost density (170 roosts per 600 × 600 m) and an RRD distance between roosts equal to 40 m (Fig. 3b). If Bdistattrac is less than the distance between roosts RRD, for example if Bdistattrac is less than 20 m and roosts are more than 40 m apart, a situation arises where bats signalling at one roost do not hear the bats signalling from an adjacent tree. In such a case, the grouping of agents is inefficient and bats may split into a large number of subgroups (∼15 subgroups in our simulation). Increasing the value of Bdistattrac ensures a more effective swarming process (more effective group fusion), as the number of groups at the end of the night is significantly lower (the variability of the group sizes among multiple simulation runs is lower, as well). Smaller values of RRD cause randomness in the spatial distribution of roosts - they may occur in small clusters as well as solitarily. Both these areas, dense and sparse, can negatively influence the effectivity of the swarming process. In a situation where bats are signalling the position of a remote roost, other non-signalling bats do not hear their calls due to the large distance. On the other hand, if bats are signalling the position of a roost located within a cluster of multiple roosts, the bats have no problem merging into one large group (see Fig. 3). Smaller Bdistattrac distances (∼20 m) at which bats are able to recruit other bats cause splitting of bats into numerous smaller groups, regardless of whether the environment is densely or sparsely populated by roosts (Fig. 4a); however, such a situation is in reality


unlikely, as N. leisleri is audible to other bats at a distance of at least 100 m. On the other hand, larger Bdistattrac values allow bats to merge into a large group regardless of roost density or spatial distribution (see Fig. 4b).

Fig. 4. Plots representing model performance under various combinations of parameters - minimum distance between individual roosts RRD and number of roosts (roost density). Grey circles represent the mean number of groups formed at the end of the night with respect to different threshold distances for signalling calls Bdistattract (20 and 100 m). Thicker grey vertical lines behind the means show the minimal and maximal number of groups.

Model performance was also tested in an extreme environmental setting (only 5 roosts in the environment, a very high RRD distance (250 m) and a Bdistattrac value lower than the RRD, but also very high (150 m)). In this case, all roosts were occupied at the end of the simulation; however, the bats were less effective in forming large groups. Bats were using all 5 roosts, whereas in 3 of them they formed only small groups (up to 5 individuals). Increasing Bdistattrac to extreme (and unrealistic) values allowed the bat group to successfully perform roost-switching even in an environment with an extreme lack of roosting sites. The ability to perform group movement from one location of interest to another without a group leader in a dynamically changing space is a relevant swarm scenario in robotics (e.g. robots need to search for worksites in an unknown environment and either perform work on them (e.g. in the case of area surveillance), or collect items from the worksites and bring them to a designated location (e.g. in resource collection applications)). The described swarm behaviour links the way agents obtain and share information about where work needs to be done with the swarm's ability to use that information to perform the work efficiently, given a particular task and environment.

4 Conclusion and Results

This article presents the SkyBat agent-based model, which is inspired by the behavior of the tree-dwelling bat species N. leisleri. Unique features of this model are: (i) the ability to perform group movement from one location of interest to another without a group leader, (ii) the ability to effectively search for targets of interest


in an unknown environment and (iii) the ability to perform flexible non-centralized decision-making in a rapidly changing environment. The presented model is scalable and flexible. It does not depend on the number of agents in the group and can easily adapt to changes in the environment. The first version of the SkyBat model presented here is based on behavioral patterns obtained by long-term data collection (including radio-telemetry) from 2011 to 2015 and on the available literature on the topic of roost-switching, foraging and social bats. To extract additional information in order to gain better insight into bat motivation and roost-switching dynamics, we will use more advanced data collection techniques (RFID technologies, ultrasound recording devices, infra-red cameras and even an attempt to develop a new ultra-lightweight GPS sensor). We suppose that the cooperation between behavioural biologists and roboticists will lead not only to new discoveries in mammal ecology, but will also bring new ideas to the field of swarm robotics.
Acknowledgment. This work has been supported by the Slovak Scientific Grant Agency - VEGA (grants No. 2/0154/16 and No. 2/0131/17).

References

1. Hossain, Q.D., Uddin, M.N., Hasan, M.M.: Collision avoidance technique using biomimic feedback control. In: 2014 International Conference on Informatics, Electronics and Vision (ICIEV), pp. 1–6. IEEE (2014)
2. Kaňuch, P.: Evening and morning activity schedules of the noctule bat (Nyctalus noctula) in Western Carpathians. Mammalia 71, 126–130 (2007)
3. Naďo, L., Kaňuch, P.: Dawn swarming in tree-dwelling bats - an unexplored behaviour. Acta Chiropterologica 15, 387–392 (2013)
4. Naďo, L., Kaňuch, P.: Swarming behaviour associated with group cohesion in tree-dwelling bats. Behav. Process. 120, 80–86 (2015)
5. Naďo, L., Chromá, R., Kaňuch, P.: Structural, temporal and genetic properties of social groups in the short-lived migratory bat Nyctalus leisleri. Behaviour 154, 785–807 (2017)
6. Ruczyński, I., Bartoń, K.A.: Modelling sensory limitation: the role of tree selection, memory and information transfer in bats' roost searching strategies. PLoS ONE 7(9), e44897 (2012)
7. Schutt Jr., W.A., Altenbach, J.S., Chang, Y.H., Cullinane, D.M., Hermanson, J.W., Muradali, F., Bertram, J.E.: The dynamics of flight-initiating jumps in the common vampire bat Desmodus rotundus. J. Exp. Biol. 200, 3003–3012 (1997)
8. Shiel, C.B., Shiel, R.E., Fairley, J.S.: Seasonal changes in the foraging behaviour of Leisler's bats (Nyctalus leisleri) in Ireland as revealed by radio-telemetry. J. Zool. 249(3), 347–358 (1999)
9. Yang, X.-S.: A new metaheuristic bat-inspired algorithm. In: Gonzales, J.R. (ed.) Nature Inspired Cooperative Strategies for Optimization. Studies in Computational Intelligence, vol. 284, pp. 65–74. Springer, Heidelberg (2010)

Robotic Vision Systems

Experimental Measurement of Underactuated Robotic Finger Configurations via RGB-D Sensor

Renato Brancati, Chiara Cosenza, Vincenzo Niola and Sergio Savino

Department of Industrial Engineering, University of Naples “Federico II”, Naples, Italy
[email protected]

Abstract. Underactuated robotic systems need suitable experimental methods able to measure their small and low-weight component dynamics. Depth sensors represent a valuable strategy to develop quantitative approaches to study the behavior of these systems. Here, an experimental application of a markerless vision technique is proposed, employing the low-cost and low-resolution Kinect depth sensor to compute the kinematics of an underactuated robotic finger.

Keywords: Underactuated robotic hand · Vision techniques · RGB-D sensors · Microsoft Kinect

1 Introduction

In previous works, a model of an underactuated robotic hand based on tendon-driven fingers was proposed [1]. Five fingers compose the robotic hand; each finger is made of three phalanxes hinged to each other by pins, which represent the different articulations. A single actuator controls the movement of each finger by means of a differential system obtained with a self-adaptive pulley mechanism. The hand is named the Federica hand and is able to grasp objects of complex shape [2]. Inextensible tendons allow each finger to grab the object with a force that does not depend on the specific configuration of the finger itself or on the other finger configurations. The hand kinetics and dynamics have been simulated and the first hand prototypes have been built [3]. Simulations have been performed to study the geometrical tendon parameters, such as the distances of the tendon guides from the hinges and the hinge positions. These parameters control the closing phalanx sequence, which is a fundamental issue in mimicking the human hand behavior and, therefore, in ensuring the effectiveness of object grasping [4]. Moreover, experimental tests have recorded the trajectories of the centers of each phalanx employing a camera, and the closing phalanx sequence has been analyzed [5]. These experimental activities had the limitation of giving back only qualitative results. In this context, there is indeed a lack of experimental validation of small mechanical components using quantitative approaches. Furthermore, it is not feasible to study the finger dynamics using multiple encoders assembled on each component; indeed, the encoder weight could modify or alter the system trajectory itself. For these reasons,


vision techniques based on markerless approaches represent a possible route to track the dynamics of small mechanical systems, like low-weight robotic hands. In virtual reality and game console control, there is a great effort in the development of algorithms and hardware components suitable for object tracking, with particular attention to human hand tracking [6, 7]. Several research groups have used markerless vision-based methods to detect and track the hand motion, for human-machine interaction applications or for gesture recognition, employing low-cost RGB-Depth (RGB-D) cameras [8–10]. Among all, the Kinect sensor (Microsoft) has become one of the most used depth sensors, together with the Senz3D (Creative), the Intel RealSense and the Leap Motion. RGB-Depth sensors give back the point cloud data of the observed object. The point cloud data contain the coordinates of the external surface of the object in three-dimensional space. Each point may also have other attributes, such as the components of the normal vector, the accuracy, or the color in the four spectral bands. In this paper, the quantitative measurement of the finger kinematics is computed employing a markerless vision technique. Data have been acquired during the finger closing sequence with a low-cost and low-resolution RGB-depth sensor. For this purpose, the Kinect sensor (Microsoft) and MatLab (Mathworks) have been employed to acquire the finger point cloud data at its different configurations. A customized algorithm allowed reconstructing the skeletal shape of each phalanx in the finger as a simple 3D geometrical model composed of cylinder elements. At each step, the algorithm computes minimization procedures through linear regression models and geometrical constraints. Finally, the angles formed between the phalanges at different motor rotations have been computed and the experimental measurement of the finger closing sequence has been obtained.

2 Point Cloud Data Acquisition

The Kinect V2, Fig. 1, is a commercial depth sensing device produced by Microsoft. In particular, it is an RGB-Depth sensor constituted of a depth sensor that works in association with an RGB camera to augment the conventional image with depth information. The RGB camera has a resolution of 1920 × 1080 pixels. The depth sensor is realized with an infrared camera with a resolution of 512 × 424 pixels and an infrared emitter. It is based on the Time of Flight (TOF) technology, in particular on the intensity modulation technique [11], by means of which the on-board hardware evaluates the distance between the sensor and each point of the scene viewed by the infrared camera. The depth accuracy is a function of the distance of the observed objects from the sensor [12]. The Kinect sensor has aroused the interest of researchers because of its high potential, when used as a measuring instrument, combined with its very low cost. For these reasons, applications in various fields have been studied and developed [13].


Fig. 1. The Kinect V2 sensor front with cameras and emitter positions.

Data acquired through the Kinect sensor are:
– a Color image array (RGB image) with dimensions of 1920 × 1080 pixels;
– a Depth image array with dimensions of 512 × 424 pixels that stores in each pixel the distance, in meters, from the sensor to the spatial point observed in that pixel.
By means of the Image Acquisition Toolbox (MatLab) it is possible to obtain a 3D coordinates array, with the same dimensions as the Color image, in which the elements A(i, j, 1:3) are the coordinates X, Y and Z, in the Camera reference frame, of the spatial point corresponding to the pixel (i, j) of the Color image. Due to noise, a single frame acquired by the Kinect sensor may provide inaccurate depth results [14]. Provided that the scene viewed by the camera is steady, it is possible to reduce the measurement errors by evaluating each distance in the Depth image as the mean value over several frames; in this application, 20 consecutively acquired frames are used.
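The frame-averaging step can be sketched as follows; grab_depth_frame is a hypothetical stand-in for the MatLab/Kinect readout used by the authors.

    import numpy as np

    def averaged_depth(grab_depth_frame, n_frames=20):
        # Mean of 20 consecutive 512 x 424 depth images [m]; assumes a steady scene
        frames = np.stack([grab_depth_frame() for _ in range(n_frames)])
        return frames.mean(axis=0)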

3 Experimental Setup

The experimental setup, Fig. 2, comprises an underactuated mechanical finger that is constrained to a rigid support by the proximal phalanx end part. The finger is linked to a pulley by means of two inextensible wires that represent the traction and antagonist tendons [3, 5]. An analog servomotor moves the pulley and sets the finger tendon displacements. A myRIO Embedded Device controller (National Instruments) controls the servomotor rotation. Moreover, the experimental setup is equipped with an encoder to obtain feedback on the motor angular position. The control system of the test rig sets a given motor position and, by means of the tendon system, the finger configuration changes, i.e. the rotation of each phalanx changes. In the range 0 to 180° of motor rotation, the finger performs a complete closing sequence. This range was divided into steps of 18° and at each step the Kinect sensor acquired the point cloud of the finger configuration. The Kinect sensor has been


Fig. 2. Experimental test rig.

placed at about 70 cm from the finger along the observation direction (z-axis). The result of the acquisition is a point cloud of the finger external surface, in three-dimensional coordinates.

4 Data Elaboration

Starting from the point cloud of the finger, we have developed an elaboration algorithm that is able to reconstruct the finger configuration as a simple 3D geometrical model composed of three cylindrical elements. The point cloud data have been preliminarily purged of points not belonging to the finger itself and divided into three different point clouds, representing the proximal, the medial and the distal phalanges. The point cloud data have been projected onto two perpendicular planes to compute the principal directions, which correspond to the phalanx principal directions. The algorithm, by means of an optimization procedure, aims to build for each phalanx a cylinder that has its axis parallel to the principal direction and that fits the point cloud data. In each iteration of the proposed algorithm, a geometrically constrained linear regression problem is solved. The geometrical constraints are represented by the radius and the length (distance between two consecutive hinges of the finger) of each phalanx. Each cylinder must begin at the end of the previous one. In this way, starting from the axes of the cylinder fitting models, it is possible to compute the relative angles of the phalanges for each finger configuration.
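A minimal sketch of two ingredients of this elaboration, assuming the phalanx point clouds have already been separated: the principal direction of a phalanx point cloud and the relative angle between two phalanx axes. The constrained cylinder fitting itself is simplified away here.

    import numpy as np

    def principal_axis(points):
        # First principal direction of an (N, 3) phalanx point cloud via SVD
        centered = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[0]

    def relative_angle(axis_a, axis_b):
        # Angle [deg] between two phalanx axes (the sign of each axis is irrelevant)
        c = abs(np.dot(axis_a, axis_b))
        c /= np.linalg.norm(axis_a) * np.linalg.norm(axis_b)
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))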

5 Experimental Results

In Fig. 3 it is possible to observe the acquired point clouds of the finger in the Kinect reference system (XYZ). The data refer to a finger closing sequence. The point clouds of each phalanx are shown in three different colors: red for the proximal phalanx, blue for the medial and green for the distal.


Fig. 3. Point cloud of the finger closing sequence.

Figure 4 shows the results of the 3D model reconstruction for each phalanx together with the acquired point cloud, for a motor rotation angle of 126°.

Fig. 4. 3D model reconstruction of the finger.

In Fig. 5, the rotation of each finger phalanx as a function of the motor rotation, computed through the proposed algorithm, is shown. The measurement agrees with the correct closing sequence of the finger. In fact, the first phalanx that reaches its final position is the proximal one, then the medial and finally the distal


phalanx. This finger closing sequence mimics the human hand behavior; indeed, it allows the fingers to fit even complex object shapes when executing grasping tasks.

Fig. 5. Phalanx rotation angles as a function of the motor rotation.

6 Conclusion

In this activity, an experimental procedure to measure the configurations of an underactuated mechanical finger driven by tendons, starting from the output of the RGB-D Kinect sensor, has been proposed. This study represents a first step in developing a technique to analyze, with a quantitative approach, the kinematics and the dynamics of underactuated mechanical systems adopting markerless vision techniques. The above experimental results can be used to evaluate the functions that describe the real behavior of the phalanges as a function of the actuator tendon shortening. These findings are important for the functional studies of an underactuated finger, which is a component of an even more underactuated system like the “Federica” mechanical hand. Because the RGB-D sensor that we used can acquire continuously, the next step in the development of the methodology is the analysis of the finger during a continuous movement, in order to perform a dynamic measurement of the finger behavior.


References

1. Rossi, C., Savino, S.: An underactuated multi-finger grasping device. Int. J. Adv. Robot. Syst. 11 (2014). https://doi.org/10.5772/57419
2. Niola, V., Carbone, G., Rossi, C., et al.: An underactuated mechanical hand prosthesys by IFToMM ITALY. In: The 14th IFToMM World Congress, Taipei, Taiwan, 25–30 October 2015 (2015)
3. Niola, V., Rossi, C., Savino, S., Troncone, S.: An underactuated mechanical hand: a first prototype. In: Proceedings of the RAAD 2014 23rd International Conference on Robotics in Alpe-Adria-Danube Region, 3–5 September 2014, Smolenice Castle, Slovakia (2014)
4. Niola, V., Rossi, C., Savino, S.: Dynamical model and prototype tests of a self-adaptive mechanical hand. Int. Rev. Model Simul. 9, 97–104 (2016). https://doi.org/10.15866/iremos.v9i2.8068
5. Niola, V., Rossi, C., Savino, S.: Influence of the tendon design on the behavior of an underactuated finger. In: Ferraresi, C., Quaglia, G. (eds.) Advances in Service and Industrial Robotics: Proceedings of the 26th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2017, pp. 1033–1042. Springer International Publishing, Cham (2018)
6. Niola, V., Rossi, C., Savino, S.: Perspective transform and vision system for robotic applications. In: Proceedings of the 5th WSEAS, pp. 87–91 (2006)
7. Aristidou, A.: Hand tracking with physiological constraints. Vis. Comput. 34, 1–16 (2016)
8. Sridhar, S., Mueller, F., Zollhöfer, M., et al.: Real-time joint tracking of a hand manipulating an object from RGB-D input. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 294–310 (2016)
9. Prankl, J., Aldoma, A., Svejda, A., Vincze, M.: RGB-D object modelling for object recognition and tracking. In: IEEE International Conference on Intelligent Robots and Systems, December 2015, pp. 96–103 (2015). https://doi.org/10.1109/iros.2015.7353360
10. Kyriazis, N., Argyros, A.: 3D tracking of hands interacting with several objects. In: IEEE International Conference on Computer Vision Workshops (2015)
11. Kolb, A., Barth, E., Koch, R., Larsen, R.: Time-of-flight sensors in computer graphics. In: Eurographics 2009 - State of the Art Reports, pp. 119–134 (2009). https://doi.org/10.1111/j.1467-8659.2009.01583.x
12. Yang, L., Zhang, L., Dong, H., et al.: Evaluating and improving the depth accuracy of Kinect for Windows v2. IEEE Sens. J. 15, 4275–4285 (2015). https://doi.org/10.1109/JSEN.2015.2416651
13. Caruso, L., Russo, R., Savino, S.: Microsoft Kinect V2 vision system in a manufacturing application. Robot. Comput. Integr. Manuf. 48, 174–181 (2017). https://doi.org/10.1016/j.rcim.2017.04.001
14. Lachat, E., Macher, H., Mittet, M.A., et al.: First experiences with Kinect V2 sensor for close range 3D modelling. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, pp. 93–100 (2015)

Finger Joint Detection Vision Algorithm for Autonomous Rheumatoid Ultrasound Scan

Nikolaj Iversen1(B), Søren Andreas Just2, and Thiusius Rajeeth Savarimuthu1

1 University of Southern Denmark, Odense, Denmark
[email protected]
2 Odense University Hospital, Odense, Denmark

Abstract. The use of ultrasound scanning in rheumatology has been shown to be a sensitive and specific modality for assessing joint disease. It is used both for detection of early signs of disease and for determination of arthritis activity in established disease. In order to improve the monitoring and treatment of arthritis patients, an automated system to scan the joints of the wrist and hand is developed. In this paper, a vision algorithm used to locate the joints from a camera is proposed. The joints are found by using a global approach combined with a geometric analysis of the image. The algorithm is used on a platform where a robot arm can move an ultrasound probe to the joints. The overall detection rate is 94.2%, with the majority of variance found along the finger.

Keywords: Computer vision · Finger joint detection · Ultrasound · Rheumatoid arthritis · Medical robot

1 Introduction

Rheumatoid arthritis (RA) is a chronic inflammatory disease primarily affecting the joints of the hand, wrist and feet [1]. Uncontrolled active disease causes joint damage, disability and decreased quality of life for the patient. The cardinal symptom of active disease is synovitis, an inflammation of the joint, which can cause structural damage to both the cartilage and bone components of the joint. Ultrasound examination of the joints has been shown to be a sensitive and specific method both for detecting signs of early disease and for evaluating arthritis activity in established disease [2]. Furthermore, ultrasound scans are non-invasive and do not use radiation [3]. The use of ultrasound in evaluating arthritis activity has been standardized [4–6]. This is done by using both the B-mode and the Power Doppler ultrasound settings when performing the scanning. This method gives an overall synovitis score of 0–6 for each joint scanned, where 0 is no activity and 6 is severe disease activity [5]. However, studies have shown that ultrasound scanning is operator dependent, with a moderate to good interobserver reliability [7,8]. This variability in


scanning results can potentially affect the monitoring of disease progression. We want to develop a platform for automated ultrasound scanning of the wrist, metacarpo-phalangeal (MCP) and proximal interphalangeal (PIP) joints, followed by automated interpretation of disease activity. The system is to be used both for detection of early joint disease and to help the doctor in monitoring the progress of patients with established disease. The platform we propose consists of a robot holding an ultrasound probe suitable for scanning of the wrist and fingers. Furthermore, the platform includes one stereo camera for detecting features of the patient's hand and a Leap Motion sensor used for tracking motion of the hand. A classical think-then-act scheme is used, where the detection gives an initial estimate and the robot then moves to the position. The platform is built on a 120 × 80 cm table and can be seen in Fig. 1. The stereo camera, as seen in Fig. 1a, faces down towards the hand of the patient, who sits at the table. A robot arm (Fig. 1b) is equipped with an ultrasound probe (Fig. 1c). A GE ML6-15-D ultrasound probe is used to scan the fingers and wrist. The probe has a footprint of 58 by 13 mm and is oriented to follow the orientation of the finger for measurements. In this paper, a vision algorithm to detect the finger joints for this platform is implemented. The vision algorithm locates the position of the MCP and PIP joints on the hand. Using stereo vision, the position of the finger joints and the hand location can be found in relation to the base of the robot and used as a starting point for the navigation of the ultrasound probe. Only a single hand can be within the view of both camera frames. The square marked in Fig. 1d marks the detection area, which is limited by the proximity of the camera. The requirements for the finger joint detection algorithm are determined by the ability of the robot to reach the joint and by the scanning area of the ultrasound probe. The vision algorithm must be precise enough to guide the robot to the joint and target the ultrasound probe within the scanning area as an initial state of the scanning procedure. Once the robot is between the hand and the camera, a Leap Motion (LM) sensor tracks the movement (Fig. 1e). The initial estimate of the joint can then be improved upon using force feedback (Fig. 1f), learned search patterns and the ultrasound imaging as feedback.

2 Related Work

Other projects which use robots to perform ultrasound scans have used a remote control paradigm [9,10]. In such scenarios, a doctor performs the ultrasound scan from a different location and has a nurse or paramedic aid with the ultrasound scan at the patient's end. All movements of the robot are controlled by the doctor. This allows for a broader spectrum of applications, as the ultrasound robot is only limited by the kinematics and the training of the doctor. However, it does not automate the task, and thus it serves better as a tool for unexpected situations where a patient cannot travel to the expert. A fully


Fig. 1. Image of the platform. The camera (a) detects joints in hands placed in the detection area (d). The robot arm (b) creates contact with the hand and the ultrasound probe (c), using the force torque sensor (f) to find the correct pressure. Any movement the hand makes during scans is picked up by the Leap Motion sensor (e).

automated system serves better for routine checkups of large numbers of patients with a chronic disease such as rheumatoid arthritis. This can reduce the cost and increase the capacity of the hospitals. Hand segmentation has been explored with different aims. In [11], the aim is to find the shape of the hand in order to translate sign language. Several methods for this are compared in [12]. For the robot platform, the problem of detecting the hand is much simpler and is a matter of background segmentation. The gesture or pose of the hand does not vary between images, and any variation thereof would complicate the ultrasound scan of the hand. For the ultrasound application, a full model of the hand is needed in order to plan the robot motion towards the joint. A commercial sensor, the LM controller, is a 3D sensor which uses proprietary software to track and provide hand poses and a model of the hand. The LM sensor is used to compare results with the vision algorithm and as a way to track the movement when the robot moves between the hand and the camera. Using a stereo camera provides several benefits over the LM sensor as the main way of tracking the hand. The vision algorithm can detect the joints based on a single image frame, whereas the Leap Motion detector improves the model of the hand during movement. This design is useful for a game controller, but for the ultrasound application the hand is expected to be kept in a single position.


For background segmentation, the rate of change from the current image to the background is used to separate the hand. In [13], a more detailed background analysis is used: by using the gradient of change in the image, areas can be grouped together, which can give a better separation of objects in the image. Another segmentation approach would be to detect skin colour [14]. Colour segmentation is heavily influenced by the lighting of the scene and was therefore rejected, as the platform can be moved and there is no way to control this variable. The rate of change as shown in Eq. (1) was enough to separate the background. This method is faster than the method proposed in [13]. An even faster method of background segmentation is using differential frames [15]. Using a direct subtraction performs worse in images with poor lighting, as distinguishing between shades on the background and darker skin colours gives similar results. This method was therefore rejected.

C(u, v) = { Ca_{u,v},  if Ca_{u,v} / Cb_{u,v} > t
          { 0,         otherwise                      (1)

where Ca and Cb are the colour channels of image a and image b and t is an uncertainty threshold. The pixel positions within the image are labeled u and v. By storing an old image from when no hand is detected, image b will be the dark background which needs to be eliminated. If image a contains a hand, the pixel values will increase and those pixels are thus kept. The image can then be turned into greyscale with a summation of the channels.
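A minimal sketch of Eq. (1), assuming float RGB arrays; the threshold value below is illustrative.

    import numpy as np

    def segment_hand(a, b, t=1.3):
        # Eq. (1): keep a pixel of image a where it exceeds background b by ratio t
        ratio = a / np.maximum(b, 1e-6)        # guard against division by zero
        segmented = np.where(ratio > t, a, 0.0)
        return segmented.sum(axis=2)           # grayscale by summing the channels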

3 Finger Joint Detection Algorithm

The proposed vision algorithm finds the position of the joints in a single image. When a hand is placed on the table, the positions of the joints can be detected. The detection is used as the setup of the robot motion planning. During the initialisation, the patient adjusts the position of the hand, and only when several consecutive detections yield the same position of the joints will the robot move to perform the scan. Given the constraints of the platform, the hand can be placed anywhere within a 22 × 28 cm working area. The algorithm contains four stages: pre-processing, area and center detection, joint detection and verification. An overview of the algorithm can be seen in Fig. 2.

1. Pre-process image.
2. Calculate COM and rotation.
3. Find finger tips / gaps between fingers.
4. Verify the joints match the model.

Fig. 2. The 4 stages in the algorithm.


The pre-processing uses background segmentation to separate the background from the current image. An example of an input image is shown in Fig. 3a. The image is processed by removing background features of the image. Instead of looking at differential frames, the rate of change is used. This allows changes in the image to be segmented from the background. The edges in the segmented image are used to form a contour of the hand. Due to noise, the edges might not form a perfect contour of the hand, so a median filter is used to remove any gaps in the hand. This median filter is optimized to run only in the dark areas near the hand. The result of the background segmentation can be seen in Fig. 3b. In order to remove noise, the area where the hand is located is detected, as seen in Fig. 3c.

Fig. 3. The input image (a) is processed and areas with low change are set to black. Shown in (b) is the cropped grayscale result of this pre-processing. The large coherent objects left in the scene are gathered into the hand area in (c).

The detection is initiated when an object, the hand, enters the field of view of the camera. This introduces a change in the image, as described by Eq. (1). Large objects in the image are assumed to be hands entering the scanning area and will start the further processing of the image. Other objects in the scene will fail the verification if their shape does not resemble a hand. When the area of the hand is found, shadows between the fingers can be counted either as part of the hand or as part of the background. The gaps between the fingers are processed further by analyzing this area. Once the gaps between the fingers are found, they are extended along the fingers to separate the fingers. The center of the hand is estimated and the rotation of the hand is found using PCA. The pixel candidates for the fingertips can be found using the convex hull of the hand, as seen in Fig. 4a. The fingertips are found by rotating these points into the coordinate system defined by the eigenvectors of the PCA, as shown in Fig. 4b. The greatest variance will be along the axis of the arm and the middle finger. The second greatest variance can be used to find the thumb and thus determine which hand is being detected. Once the hand is detected, the ordering of the fingers can be determined. The gaps between the fingers are detected in order to determine the length and orientation of each finger. The gaps between the fingers can

Finger Joint Detection Algorithm for Automated RA US Scan

543

Fig. 4. In order to identify the fingers, the extremes in (a) are reduced to the green fingertip dots in (b). The green dots in (c) show the corresponding gaps, which are found along the edge of the hand. (Color figure online)

be found using a breadth-first search along the edge of the hand. The gaps have been drawn in Fig. 4c.
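A sketch of the fingertip candidate search under the stated approach: the convex-hull extremes of the segmented hand pixels are rotated into the PCA frame, where the largest variance runs along the arm and middle finger. The grouping of hull points into five tips is omitted; SciPy's ConvexHull is used for the hull.

    import numpy as np
    from scipy.spatial import ConvexHull

    def hull_points_in_pca_frame(hand_pixels):
        # hand_pixels: (N, 2) coordinates of the segmented hand pixels
        center = hand_pixels.mean(axis=0)              # center of mass of the hand
        _, vecs = np.linalg.eigh(np.cov((hand_pixels - center).T))
        extremes = hand_pixels[ConvexHull(hand_pixels).vertices]
        # columns of vecs are eigenvectors in ascending eigenvalue order
        return (extremes - center) @ vecs              # candidates in the PCA frame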

Fig. 5. Results from the algorithm. A vector along the finger is estimated in (a) with the joints found along this vector in (b). This stage includes verification of the estimations which can lead to a rejection.

Each finger is described as a vector which originates from the tip of the finger and is oriented between the gaps on either side of the finger. Such a vector can be seen in Fig. 5a. When the fingers are evenly spread, the distances between the gaps and the center of the hand become equal, making it possible to detect whether a gap has been found too far away from the assumed position. If the fingertips or the gaps between the fingers are not correctly detected, the detection will fail. The length of each finger is then estimated as the length from the tip of the finger to the gap. The proportions of the finger bones are close to identical on different hands [16]. Using this, the joints can be found at the proportional position along the vector of the finger. While the index through little fingers cannot bend sideways from the finger vector, the thumb can. An improved model is therefore used for the thumb, which estimates the center of the finger along the original finger vector. Finding the center within the finger is done by maximizing the distance to the edges across the finger while searching along the finger. The original finger vector is used as the initial approximation for the improved model. When the center is searched


for within the hand, the distance to an edge of the finger becomes too great. By using the same finger width as detected inside the finger, the improved model will follow the edge of the hand, which gives a better estimate of the MCP of the thumb. Once the positions of the joints are estimated, the model can be evaluated. The orientation of the finger vector must make geometrical sense, as the MCP joints cannot overlap and no joint can be outside the hand area. By disproving the detection of the joints, the detection can be rejected. If a part of the algorithm fails, the detection can also stop without progressing to the parts that depend on previous results. Because the detection happens on human hands, it can be assisted by the person being scanned. If the hand is placed out of the camera view, the person can be instructed to move the hand. By giving the patient visual feedback on what the camera can see and information on when the hand is detected, the patient can adjust and find the compromise between a natural, relaxed position and a position where the hand can be detected. The last stage verifies that the joints fit a model of the hand. Several features are extracted: distance to the center, distance between joints, whether the joints are inside the hand, whether the fingers overlap and whether the finger lengths have the right proportions; these are gathered into a model that decides whether the detection should be rejected. If the detection is rejected, no joints are returned. No prior or external knowledge is used in the detection. In order to determine actions based on the detection, the platform runs the detection on both images. If the detection is valid on both images, stereopsis can be used to find the position of the joint in relation to the table. Should the hand lie in a position where the detection is not accurate, the results from the detection can vary from one image to another. Using the detected position, any such errors will be perceived as movement and the planning system will not engage. The platform waits for the expected hand to be placed on the table. When the hand is detected, a timer is started to see whether the hand is removed or there is movement. If movement occurs, the timer is reset, making the patient adjust the hand to increase the chance of a good detection.
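The proportional joint placement can be sketched as below; the proportion values are illustrative, not the ones taken from [16].

    import numpy as np

    def joints_along_finger(tip, gap_mid, proportions=(0.35, 0.65)):
        # Joints estimated at fixed fractions of the tip-to-gap finger vector;
        # 0.35 / 0.65 are assumed, illustrative bone proportions
        v = gap_mid - tip
        return [tip + p * v for p in proportions]      # e.g. PIP and MCP estimates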

4 Results

The proposed vision algorithm is tested on speed and accuracy. In order to measure the accuracy, a dataset is created and manually labeled to get ground truth data. The data consist of 26 subjects providing 15 images of each hand, giving an image library of 780 images. The image set contains variation from different hands, as seen among the examples of input images. Examples from the dataset are shown in Fig. 6. During the testing of the algorithm, the rejection phase is turned off. This allows the faulty detections to be included as false positive results. The speed of the algorithm is measured by timing the steps of the algorithm in the system. The vision algorithm runs in real time when the robot is not occluding the hand. The timing of the algorithm is tested on an i7-6700 CPU and with a Point


Fig. 6. Example of input images from different datasets. The variation within the dataset includes different positions and rotations of the hand with both left and right hands used.

Grey Bumblebee 2 BB2-08S2C-60 stereo camera. Table 1 shows the mean and standard deviation of the timing of the algorithm when used on these images. The average width of a finger is measured to be 58.4 pixels with the test setup. An error larger than 30 pixels is therefore viewed as lying outside the finger. To measure the accuracy, the ground truth is set and several types of errors are measured. To obtain the ground truth data, each image is manually labeled with the perceived centers of the joints. The errors are measured in pixels in three directions, as shown in Fig. 7. The error d is the Euclidean distance between the measured and the real data. The w and h components represent the error along and across the finger vector, respectively. The height, h, is used to detect how well the joint is centered in the finger. The width, w, is used to detect whether the proportion model of the fingers is correct. The accumulated errors of the different types are shown in Fig. 8. As the total error, d, depends on the w and h errors, these are analysed further. In Figs. 9 and 10, the w and h errors for all joints are shown, grouped into bins of 5 pixels. All detections where the error exceeds 30 pixels are grouped together and viewed as failures of the algorithm. The percentages of the distribution are shown.
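The three measures can be computed by projecting the detection error onto the detected finger vector, as in this sketch.

    import numpy as np

    def joint_errors(detected, truth, finger_vec):
        u = finger_vec / np.linalg.norm(finger_vec)    # unit vector along the finger
        diff = detected - truth
        w = abs(diff @ u)                              # error along the finger
        h = abs(diff[0] * u[1] - diff[1] * u[0])       # error across the finger
        return np.linalg.norm(diff), w, h              # d, w, h in pixels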


Table 1. Timing of the vision algorithm. The Detected columns show the timing when the hand is stationary; the Moving columns show the timing when the hand is being waved in front of the camera to create motion blur and scenarios where only part of the hand is visible.

                   Mean [ms]              Std dev.
                   Detected   Moving      Detected   Moving
Pre-processing     21.27217   21.37337    4.4238     4.30681
COM and rotation   36.77519   35.45741    6.24108    5.1909
Finger tips/gaps   0.86063    0.59683     0.21922    0.54077
Total              58.90798   57.42761    8.67004    7.21514


Fig. 8. Success rate [%] as a function of the error size [px] for the d, h and w errors.

Fig. 7. The types of errors measured. The longest line represents the finger vector as detected. The red and the green dots represent the measured and the real position of the joint. The d error is the direct discrepancy between the points, w describes the error along the finger, and the h error describes the error across the finger. As the finger vector is obtained by the algorithm instead of being manually labelled, the w and h values will not be exactly along or across the finger. (Color figure online)

l (zero padding) [3, 4]. Further extension of m, n to a common power of two is recommended, but optional. Having done the extensions, we have

c_{uv} = Σ_{i} Σ_{j} x_{ij} y_{u+i,v+j},   i = 0, ..., m − 1, j = 0, ..., n − 1.   (3')

Equation (3') may be computed by

C = Ƒ⁻¹[Ƒ(X)* ∘ Ƒ(Y)],   (4)

where Ƒ is the discrete Fourier transform and ∘ denotes element-wise multiplication. The asterisk indicates the complex conjugate and C, X, Y are real matrices with the dimensions m, n or (optionally) the common power of two. Finally, the resulting matrix [c_{uv}] defined in Eq. (3) is the upper-left (m − k + 1) × (n − l + 1) submatrix of the matrix C.
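A minimal sketch of (3') and (4) in Python/NumPy, assuming a k × l pattern x and an m × n search area y:

    import numpy as np

    def cross_correlation_fft(x, y):
        # Eq. (3'): zero-pad the pattern to the size of the search area
        m, n = y.shape
        k, l = x.shape
        x_pad = np.zeros((m, n))
        x_pad[:k, :l] = x
        # Eq. (4): C = F^-1[F(X)* o F(Y)], element-wise product in the Fourier domain
        C = np.fft.ifft2(np.conj(np.fft.fft2(x_pad)) * np.fft.fft2(y)).real
        # Eq. (3): the upper-left (m-k+1) x (n-l+1) submatrix holds the valid shifts
        return C[:m - k + 1, :n - l + 1]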

3 Correlation Coefficient

In 1895, Karl Pearson defined the correlation coefficient. Pearson's r was the first normalized formal correlation, and it is still the most widely used measure of the relationship between two series of values [1, 2]:

r = Σ (x_i − x_m)(y_i − y_m) / [Σ (x_i − x_m)² Σ (y_i − y_m)²]^{1/2}.   (5)

The corresponding mean values from (5) are defined as

x_m = Σ x_i / k,   y_m = Σ y_i / k.   (6)

Let us define the following sums:

a = Σ x_i,  b = Σ y_i,  c = Σ x_i y_i,  d = Σ x_i²,  e = Σ y_i².   (7)

We can write (5) in the form

r = (kc − ab) / [(kd − a²)(ke − b²)]^{1/2}.   (8)

The Pearson product-moment correlation coefficient is a dimensionless index, which is, up to the signum, invariant to linear transformations of either variable:

x'_i = p_x x_i + q_x,   y'_i = p_y y_i + q_y   (9)

a' = Σ x'_i = p_x a + k q_x,   b' = Σ y'_i = p_y b + k q_y   (10)

c' = Σ x'_i y'_i = p_x p_y c + p_x q_y a + p_y q_x b + q_x q_y k   (11)

d' = Σ x'_i² = p_x² d + 2 p_x q_x a + k q_x²   (12)

e' = Σ y'_i² = p_y² e + 2 p_y q_y b + k q_y²   (13)

r' = (k c' − a' b') / [(k d' − a'²)(k e' − b'²)]^{1/2} = r sgn(p_x p_y)   (14)

In the case of two 2-D k × l sub-images, the following form of the correlation coefficient, with i = 0, ..., k − 1, j = 0, ..., l − 1, is preferred:

r = Σ Σ (x_{ij} − x_m)(y_{ij} − y_m) / [Σ Σ (x_{ij} − x_m)² Σ Σ (y_{ij} − y_m)²]^{1/2}.   (15)

The corresponding mean values from (15) are defined as

x_m = Σ Σ x_{ij} / (kl),   y_m = Σ Σ y_{ij} / (kl).   (16)

Analogously, the sums (7) take the form:

a = Σ Σ x_{ij},  b = Σ Σ y_{ij},  c = Σ Σ x_{ij} y_{ij},  d = Σ Σ x_{ij}²,  e = Σ Σ y_{ij}².   (17)

Finally, we can write (15) in the form

r = (klc − ab) / [(kld − a²)(kle − b²)]^{1/2}.   (18)

The use of the correlation coefficient for pattern matching leads to repeated calculations of the correlation coefficient (18) in the same way as to repeated calculations of the value (2) in the case of the cross correlation. Moreover, the value c in (2) is the same as the value c in (17). Thus the FFT may be used to speed up the repeated calculations (3) through the extension (3'), without having to remove the mean value from the images as assumed in [3, 4].

4 Normalizing

Having computed the resulting matrix [c_{uv}] defined in Eq. (3) as the upper-left (m − k + 1) × (n − l + 1) submatrix of the matrix C defined in (4), we can normalize the elements c_{uv} in accordance with (18):


r_{uv} = (kl c_{uv} − a b_{uv}) / [(kl d − a²)(kl e_{uv} − b_{uv}²)]^{1/2}   (19)

When shifting the pattern [x_{ij}] through the area of interest, the sums a, d in (17) remain constant, but the sums b, e vary:

b_{uv} = Σ Σ y_{u+i,v+j},   i = 0, ..., k − 1, j = 0, ..., l − 1   (20)

e_{uv} = Σ Σ y²_{u+i,v+j},   i = 0, ..., k − 1, j = 0, ..., l − 1.   (21)

In order to speed up the calculations (20), (21), let us precompute the powers

z_{ij} = y_{ij}²   (22)

and define the partial sums

row_b_{uv} = Σ y_{u,v+j},   j = 0, ..., l − 1   (23)

row_e_{uv} = Σ z_{u,v+j},   j = 0, ..., l − 1.   (24)

We compute (23), (24) for u = 0, …, m − 1 and v = 0 only. All subsequent partial sums may be computed recurrently for v = 0, …, n − l − 1:

$$\mathrm{row}b_{u,v+1} = \mathrm{row}b_{uv} - y_{u,v} + y_{u,v+l} \tag{25}$$

$$\mathrm{row}e_{u,v+1} = \mathrm{row}e_{uv} - z_{u,v} + z_{u,v+l} \tag{26}$$

A similar technique of computing a definite sum from a precomputed running sum was introduced in [5] for texture mapping. With respect to the above definitions of partial sums, we define for $u = 0, \ldots, m-k+1$ and $v = 0, \ldots, n-l+1$

$$b_{uv} = \mathrm{row}b_{u,v} + \mathrm{row}b_{u+1,v} + \ldots + \mathrm{row}b_{u+k-1,v} \tag{27}$$

$$e_{uv} = \mathrm{row}e_{u,v} + \mathrm{row}e_{u+1,v} + \ldots + \mathrm{row}e_{u+k-1,v} \tag{28}$$

Nevertheless, we compute (27), (28) just for u = 0 and v = 0, …, n − l. All subsequent sums are computed recurrently for u = 0, …, m − k + 1:

$$b_{u+1,v} = b_{uv} - \mathrm{row}b_{u,v} + \mathrm{row}b_{u+k,v} \tag{29}$$

$$e_{u+1,v} = e_{uv} - \mathrm{row}e_{u,v} + \mathrm{row}e_{u+k,v} \tag{30}$$

Finally, we compute the normalized $r_{uv}$ according to (19), using the efficiently computed $b_{uv}$, $e_{uv}$ from (27)–(30) instead of (20), (21).
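The running-sum scheme (22)–(30) maps directly onto array code. The sketch below is our vectorization across rows of the same recurrences (it assumes the window values are not all constant, so the denominator of (19) is nonzero); it is an illustration, not the system's source:

```python
import numpy as np

def sliding_sums(y, k, l):
    """Sliding-window sums b_uv (Eq. 20) and e_uv (Eq. 21) of a k-by-l
    window over the m-by-n image y, via the recurrences (23)-(30)."""
    m, n = y.shape
    z = y * y                                        # Eq. (22)
    # Row sums of length l: initial column, then the recurrences (25), (26).
    row_b = np.empty((m, n - l + 1)); row_b[:, 0] = y[:, :l].sum(axis=1)
    row_e = np.empty((m, n - l + 1)); row_e[:, 0] = z[:, :l].sum(axis=1)
    for v in range(n - l):
        row_b[:, v + 1] = row_b[:, v] - y[:, v] + y[:, v + l]
        row_e[:, v + 1] = row_e[:, v] - z[:, v] + z[:, v + l]
    # Accumulation over k rows: initial row, then the recurrences (29), (30).
    b = np.empty((m - k + 1, n - l + 1)); b[0] = row_b[:k].sum(axis=0)
    e = np.empty((m - k + 1, n - l + 1)); e[0] = row_e[:k].sum(axis=0)
    for u in range(m - k):
        b[u + 1] = b[u] - row_b[u] + row_b[u + k]
        e[u + 1] = e[u] - row_e[u] + row_e[u + k]
    return b, e

def normalize(c, pattern, b, e):
    """Eq. (19): normalize the raw correlations c_uv into r_uv."""
    k, l = pattern.shape
    a, d = pattern.sum(), (pattern * pattern).sum()
    return (k * l * c - a * b) / np.sqrt((k * l * d - a * a) * (k * l * e - b * b))
```

Combined with the FFT sketch from Sect. 2, the whole matrix of coefficients is, for example, `normalize(cross_correlation_fft(p, y), p, *sliding_sums(y, *p.shape))`.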


5 Performance

The vision system runs on a Core Module 920 Single Board Computer with an Intel Core i7 (Dual Core) 1.7 GHz CPU under the Linux operating system Fedora 26. The system runs in real time with a sampling period of 40 ms. The performance of the above algorithms is a function of the search window size and the pattern size. The elapsed time increases significantly with the size of the search window. On the other hand, the size of the pattern does not play a significant role, since it is always extended to the size of the search window. Thus the duration of the FFT is nearly the same, as can be seen in Table 1.

Table 1. Matrix of cross correlations computed in the Fourier transform domain.

Area\Pattern   64×64     32×32     16×16     8×8       4×4       2×2
512×512        9.14 ms   9.24 ms   9.30 ms   9.32 ms   9.35 ms   9.35 ms
256×256        2.96 ms   3.03 ms   3.06 ms   3.08 ms   3.09 ms   3.10 ms
128×128        1.59 ms   1.62 ms   1.63 ms   1.64 ms   1.64 ms   1.65 ms

The reason why the times in Table 2 are slightly increasing is the increasing total number of pattern positions in the search window.

Table 2. Normalizing of cross correlations in the spatial domain.

Area\Pattern   64×64     32×32     16×16     8×8       4×4       2×2
512×512        11.56 ms  12.11 ms  12.76 ms  12.44 ms  12.49 ms  12.42 ms
256×256        2.54 ms   2.77 ms   3.04 ms   3.06 ms   3.14 ms   3.16 ms
128×128        0.49 ms   0.59 ms   0.68 ms   0.72 ms   0.78 ms   0.77 ms

The sum of the times from Tables 1 and 2 is given in Table 3 for comparison with the times in Table 4, achieved in the spatial domain only. It can be seen that the acceleration of the algorithms is significant from pattern size 4×4 upwards.

Table 3. Matrix of correlation coefficients computed in transform and spatial domain.

Area\Pattern   64×64     32×32     16×16     8×8       4×4       2×2
512×512        20.70 ms  21.35 ms  22.06 ms  21.76 ms  21.84 ms  21.77 ms
256×256        5.50 ms   5.80 ms   6.10 ms   6.14 ms   6.23 ms   6.26 ms
128×128        2.08 ms   2.21 ms   2.31 ms   2.36 ms   2.42 ms   2.42 ms

Table 4. Matrix of correlation coefficients computed in the spatial domain only.

Area\Pattern   64×64       32×32       16×16      8×8        4×4       2×2
512×512        4307.40 ms  1215.10 ms  343.50 ms  105.48 ms  39.55 ms  22.40 ms
256×256        776.73 ms   264.56 ms   80.29 ms   25.61 ms   9.73 ms   5.53 ms
128×128        89.18 ms    49.15 ms    17.74 ms   6.01 ms    2.41 ms   1.38 ms

6 Search Mode

The recognition starts in a search mode, offering recognized objects to the operator. A small database of patterns is available for this phase (Fig. 1).

Fig. 1. Pattern images for the Search mode.

In the case of unsuccessful recognition, the operator can point out an object on the screen to be recognized. From this moment a new pattern is defined and a search for similar sub-images is started. All recognized objects within the search window with a degree of similarity higher than the selected level (about 70%) are offered for tracking. In this phase the operator can choose among possibly several objects simply by pushing buttons on the joystick. Once one is selected, the Track mode starts.

7 Track Mode

In the Track mode the operator may recall an enlarged frame of the current pattern to modify, accept or reject it. Modification consists of moving the frame vertically and horizontally so that the object of interest is centred more accurately in the pattern. In addition, it is possible to adjust the frame size so that the pattern contains less background (Fig. 2).

Fig. 2. Pattern modification in the Track mode.


It is clear that normalized cross correlation is not an ideal approach to feature tracking, since it is not invariant with respect to imaging scale, rotation and perspective distortions. We eliminate these limitations, addressed in [4], by continuously updating the pattern every 40 ms in the Track mode. The prediction of the future position of the tracked object is calculated in the Track mode. The output of the recognition process is an object that meets the maximum allowable distance criterion from its predicted position. If this is not the case, i.e. no such object is recognized, the output is the predicted position and the system signals blind monitoring. The operator can stop the process in case a dummy object is being tracked. The maximum allowable distance of the object from its predicted position is determined experimentally.
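The gating logic just described can be sketched in a few lines (our reading of the text, with invented names):

```python
import numpy as np

def track_step(recognized, predicted, max_dist):
    """One Track-mode decision: accept the recognized object if it meets
    the maximum-allowable-distance criterion from the predicted position;
    otherwise output the prediction and raise the blind-monitoring flag."""
    if recognized is not None:
        if np.linalg.norm(np.subtract(recognized, predicted)) <= max_dist:
            return recognized, False      # object confirmed, normal tracking
    return predicted, True                # no acceptable object: blind monitoring
```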

8 Conclusion

It has been observed that a small database is useful as a starting tool in the search mode, but it does not work well when different objects in various positions must be recognized. Pointing out an object of interest on the screen is a more universal way to start tracking a moving object. Once started, the system is able to update the pattern in each step of the tracking process and thus avoid difficulties with scale, rotation and perspective distortions. Nevertheless, some generally oriented basic pattern images may be stored in a small database to reduce the time needed to define new patterns.

Acknowledgment. We thank the Scientific Grant Agency of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences for supporting this work under project number 2/0154/16.

References

1. Rodgers, J.L., Nicewander, W.A.: Thirteen ways to look at the correlation coefficient. Am. Stat. 42(1), 59–66 (1988)
2. Gonzales, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Addison-Wesley, Reading (1992)
3. Lewis, J.P.: Fast Template Matching. Vision Interface 95, Canadian Image Processing and Pattern Recognition Society, Quebec City, Canada, pp. 120–123, 15–19 May 1995
4. Lewis, J.P.: Fast normalized cross-correlation. In: Proceedings of Vision Interface, pp. 120–123 (1995)
5. Crow, F.: Summed area tables for texture mapping. Comput. Graph. 18(3), 207–212 (1984)
6. Andris, P., Dobrovodsky, K.: Developing an embedded system based on a real-time version of Linux. In: Proceedings of the 23rd International Conference on Robotics in Alpe-Adria-Danube Region, CD, IEEE catalog no. 34043, Smolenice, Slovakia, 3–5 September 2014


7. Dobrovodsky, K., Andris, P.: Real time recognition and tracking of moving objects. Comput. Inform. 33(6), 1213–1236 (2014)
8. Dobrovodsky, K., Andris, P.: Aiming procedure for the tracking system. In: Advances in Robot Design and Intelligent Control: Proceedings of the 24th International Conference on Robotics in Alpe-Adria-Danube Region (RAAD). Advances in Intelligent Systems and Computing, vol. 371, pp. 369–378. Springer, Cham (2016)
9. Dobrovodsky, K., Andris, P.: Cooperative distance measurement for an anti-aircraft battery. In: Advances in Service and Industrial Robotics: Proceedings of the 26th Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2017. Mechanisms and Machine Science, vol. 49, pp. 95–101. Springer, Cham (2018)

Upper Layer Extraction of a Folded Garment Towards Unfolding by a Robot

Dimitra Triantafyllou and Nikos A. Aspragathos

Department of Mechanical Engineering and Aeronautics, University of Patras, 26504 Rio, Greece
[email protected], [email protected]

Abstract. A layer extraction method aiming at robotic garment unfolding is presented in this paper. Utilizing depth sensor input, the layer detection is achieved through a depth first algorithm and a simple perceptron. An action space based on the edges detected on the garment and their features restricts the complexity of the algorithm and accelerates the detection. This paper constitutes part of the robotic garment unfolding pipeline and emphasizes the extraction of the upper layer of clothing articles that result in a half folded, two-layer planar state after robotic manipulations. The presented methodology is independent of the type of the handled garment and can cope with unknown designs. First results seem promising and encourage future work in the field.

Keywords: Unfolding · Garments · Layer extraction

1 Introduction

Home service robotics is a scientific area recently receiving increased attention in the robotics world. A challenging task that robots have to face while doing housework is the manipulation of pieces of clothing. The main challenge of these tasks lies in the fact that clothing articles are highly non-rigid items with infinite degrees of freedom; hence it is difficult to manipulate them and define their form. Over the last years, robotic research groups have been occupied with the robotic unfolding of clothing articles. Several methods approach the unfolding task using an intermediate step where the garment is brought to a half folded configuration that facilitates the further unfolding [1–4], whereas other approaches detect characteristic features that, when grasped, lead to the natural unfolding of the clothing article due to gravity [5–7] (e.g. a T-shirt's shoulders). This paper proposes a two phase procedure for the task of robotic garment unfolding: (1) bringing the clothing article, with robotic manipulations, into a half-folded, two-layer state, as presented in [4]; (2) extraction of the upper layer of the garment. In the second step, depth information analysis of the half-folded garment provides a set of edges that includes segments of the garment's upper layer and noise. Thus, a search strategy utilizing a simple perceptron and a depth first algorithm explores the edges to obtain the sequence forming the upper layer.


The main contribution of this method is that it is independent of the garment's type and shape. This property allows handling different kinds of clothing that may sometimes have an unusual shape. For example, a woman's wardrobe might contain irregular garments according to fashion, e.g. a shirt with only one sleeve or an uneven hemline. Contrary to other methods that handle only specific types of garments known a priori, our method can cope with unknown designs. Furthermore, it needs neither training with large databases nor predefined templates.

2 Bringing the Garment in a Half Folded Planar Configuration

In this paper the unfolding procedure is considered a two stage task. In the first part the garment is brought from a random, crumpled configuration into a half-folded, planar state, whereas in the second part the two layers into which the garment is divided are extracted so that it can be completely unfolded. The procedure of transforming a piece of clothing from a random state to a half folded planar configuration was analysed in detail in [4]. The garment, which was initially lying crumpled on a table, was grasped at a random point by one robotic manipulator and rotated in front of a range sensor acquiring its depth images. Each depth image was analysed so that folds and hemline edges were extracted. Although a single picture of the garment could have been sufficient to extract folds, aggregating results over more viewpoints improved the robustness of the algorithm. Therefore a voting procedure that designated the best candidate fold was utilised, facilitating the avoidance of erroneous folds. Once the first fold was grasped by the robot, the aforementioned procedure was iterated so that a second outline point was detected and grasped. Finally, the garment was placed half-folded on a table to proceed with further unfolding.

3 Detection of the Garment's Upper Layer

In this section a method for the extraction of a half folded garment's upper layer is introduced. We divide the procedure into two parts: (1) the extraction of the edges formed inside the garment and by its outline, and their conversion into straight, oriented edges so that they are easily managed; (2) the interconnection of these edges in order to extract the upper layer of a half folded piece of clothing.

3.1 Extraction of Straight, Oriented Edges

The first step of the proposed method is the acquisition of a depth image of the half folded garment using a range sensor (Asus Xtion). The image is preprocessed by means of bilateral filtering, aiming at reducing texture and noise and enhancing the edge contours extracted by the Canny detector. Since our goal is to transform the edge contour pixels into separate edges, we cluster all the garment's internal pixels using the dbscan algorithm (the outline of the garment is considered to be a separate cluster). Subsequently, we apply line simplification to all the clusters so that they result in straight line segments. In order to diminish the noise, all the small edges are rejected.

Fig. 1. The edges’ orientation according to the direction of the path’s exploration

In order to attribute an orientation to the detected straight edges, the depth difference between the two sides of each edge is calculated. In particular, a rectangular area around each edge and parallel to it is examined. The edge separates the area into two equal parts and the average depth of each part is calculated. The edge's orientation depends on the direction of movement while exploring possible outlines of the upper layer. Thus, when the exploration proceeds clockwise the upper layer is supposed to lie on the right side of the edge, while in the opposite exploration direction the upper layer is supposed to be on the left side (Fig. 1). Therefore the orientation of an edge with two endpoints p and q is:

$$\frac{\vec{pq}}{|\vec{pq}|} = \vec{u} \times \vec{z}, \tag{1}$$

where $\vec{z}$ is the unit vector of the rotation axis determining the exploration direction and $\vec{u}$ is the unit vector perpendicular to the edge, whose direction results from the depth difference between the two sides of the edge.
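Eq. (1) amounts to possibly swapping the endpoints of each segment. A small sketch of this (our convention for the sign of the rotation axis, which the paper does not fix explicitly):

```python
import numpy as np

def orient_edge(p, q, u, clockwise=True):
    """Eq. (1) sketch: choose the direction of segment (p, q) so that the
    upper layer lies on the agreed side for the chosen exploration sense.
    u is the in-plane unit vector pointing towards the shallower side."""
    z = np.array([0.0, 0.0, -1.0 if clockwise else 1.0])   # assumed axis sign
    direction = np.cross(np.append(u, 0.0), z)[:2]         # in-plane u x z
    pq = np.asarray(q, float) - np.asarray(p, float)
    # Keep (p, q) if it already agrees with u x z, otherwise swap.
    return (p, q) if pq @ direction > 0 else (q, p)
```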

3.2 Upper Layer Extraction

The edges that result from the procedure described in the previous section belong either to parts of the garment's layers or to wrinkles. The existence of wrinkles, the gaps and the curvature of the garment's edges compound the difficulty of the layer extraction problem. The goal of the proposed method is to find the sequence of edges that connects the a priori known grasp points, which brought the garment into the half folded configuration, and forms the outline of the garment's upper layer. In order to determine this sequence a search algorithm is defined. The algorithm is defined by the tuple ⟨E, A, T⟩ and comprises: (1) a state space E, since the exploration of the sequence proceeds based on each edge's features, so each edge determines the sequence and is considered a different state; (2) an action space A, which comprises the possible interconnections between the states; (3) a state transition function T, which, based on the states suggested by the action space, determines the subsequent state of the sequence; and (4) a search procedure, which facilitates the handling of dead-end sequences, i.e. sequences that do not connect the two grasp points.

State Space. The state space E is defined as the set E = {s1, s2, ..., sN}, where N is the number of detected edges, excluding the edges between the two grasp points that constitute the folding axis, to avoid taking this short-cut as a candidate path. The goal is to find the set EL ⊂ E, i.e. the edges of the upper layer. Before analysing the path extraction procedure, the initial and goal states of the path are determined. These states are based on three types of folded upper layers that might occur with reference to the edges that start from the folding axis. In particular, the starting and ending edges of a layer might be: (1) both inside the outline of the half-folded garment, (2) one inside and one located at the outline, (3) both located at the outline. Thus, the initial state s1, i.e. the starting edge of the path, is detected by evaluating the edges based on their orientation and location in relation to the grasp points. In particular, if there is more than one internal edge with orientation starting from one of the grasp points (i.e. the line connecting the starting grasp point and the start of the edge should be collinear or form a very small angle ω < th, where th is a threshold), then the nearest edge is selected, while when there is no internal edge with such an orientation, one of the outline edges starting from the grasp points is chosen. The preference for internal edges as parts of the upper layer's outline is justified by the fact that in this case an "arrow" junction is implied, i.e. the fold that created the layer [8]. In the same spirit, the goal state sG, i.e. the final edge of the path, is found. It is either an internal edge oriented towards the final grasp point or an outline edge leading to it.

Fig. 2. Four types of connection: (a) collinear edges, (b) edge with inclination towards an outline corner, (c) continuous edges, (d) proximate edges.


Action Space. The action space A = {a1, a2, ..., an} defines the n possible state interconnections as they occur based on certain criteria. Thus, the exhaustive examination of all the edges is avoided and only certain actions are available at each state of the path. The criteria concern the edges' collinearity, contiguity, proximity and inclination. In particular, we distinguish four types of connection (Fig. 2):

1. Collinear edges (Fig. 2a). It is common in perceptual completion to prefer smooth interpolating curves of least energy.
2. Edges with inclination towards an outline corner (an indicator of a non-accidental relationship that connects an internal with an outline edge, Fig. 2b).
3. Continuous edges, which are mostly outline edges (Fig. 2c).
4. Proximate edges (Fig. 2d), meaning edges whose intersection point is located inside the garment's outline, for which the line connecting their ends does not intersect the outline (e.g. one edge is on the sleeve and the other on the main body of a shirt) and whose distance is smaller than half of the outline's length.

Fig. 3. Angles utilized as perceptron inputs

It should be mentioned that in all the cases described above the edges are oriented.

State Transition Function. The state transition function T(si, {ai1, ..., ain}) = si+1 is a recursive perceptron that compares two possible states at a time, as they occur from the action space A = {ai1, ..., ain}, and selects the final candidate si+1 for the path. To proceed to this stage, the branching part described in the previous paragraph has to be implemented for all the candidates, i.e. we proceed one step further, exploring the possible subsequent edges. In this way a better understanding of each candidate's position in the sequence is obtained. The important features of each candidate that affect the selection of the path's components and are used as inputs for the perceptron are:

1. The connection type t, where t = 1, 2, 3, 4 corresponds to the numbering in the description of the connection types in the previous paragraph.


2. The absolute value of the angle θ (Fig. 3) needed to make the candidate edge si+1 collinear with the current edge si. This feature indicates abrupt changes of the path's curvature that might result from the false inclusion of wrinkles.
3. The absolute value of the angle ϕ (Fig. 3) needed to make the subsequent edge si+2 collinear with the candidate edge si+1. The goal of this feature, which is similar to the previous one, is to perceive the position of the candidate edge relative not only to its previous but also to its subsequent edge.
4. A binary input that shows whether the candidate edge si+1 is internal or external with respect to the outline of the folded garment. The existence of edges in the garment's interior is an indication that they are parts of the garment's upper layer, and such edges are preferred over the outline edges.
5. A binary input depicting whether the subsequent edge si+2 is internal or external.
6. The distance d between the current edge and the candidate. Since the edges are oriented, the distance is defined from the end point of the current edge to the start point of the candidate edge. The smaller the distance, the more likely it is that the edges are connected.
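A minimal sketch of such a transition function follows; it is our illustration, with candidates given as (edge, feature-vector) pairs and trained weights w, and with the pairwise comparison realized as a perceptron on the difference of the two feature vectors:

```python
import numpy as np

def transition(candidates, w, bias=0.0):
    """Sketch of T(si, {ai1..ain}): compare two candidate edges at a time
    with a single perceptron and carry the preferred one forward. Each
    candidate is (edge, features) with features ordered as in the list
    above: (t, |theta|, |phi|, internal, next_internal, d)."""
    best_edge, best_feat = candidates[0]
    for edge, feat in candidates[1:]:
        # Positive activation on (feat - best_feat) prefers the challenger.
        if np.dot(w, np.asarray(feat) - np.asarray(best_feat)) + bias > 0:
            best_edge, best_feat = edge, feat
    return best_edge
```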

Fig. 4. Three different options provided by the action space and encountered by the perceptron.

In Fig. 4 the action space provides three different options of states. As can be seen in Fig. 5, the transition function via the perceptron resulted in the correct outcome.

Search Procedure. The state transition procedure returns a preferred edge that constitutes the next edge of the path. Nevertheless, sometimes this might result in a path that does not lead to the final grasp point or a path that includes a loop. Both of these cases are rejected; as a result, the search for another path must start. To deal with this situation we use a depth first algorithm: we return to the last node where there was a branching, discard the edge that led to a dead end and select the next edge that the state transition function suggests. This procedure is iterated until a path that leads to the final grasp point is found or until there are no other branches to follow.
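The backtracking loop can be written compactly as a recursive depth first search. In this sketch (ours), `actions(s)` lists the edges the action space allows after s and `prefer(s, options)` orders them as the perceptron would; both are assumed helpers, not names from the paper:

```python
def find_upper_layer_path(start, goal, actions, prefer):
    """Depth-first search over edge states: follow the transition
    function's preference, refuse loops, backtrack at dead ends."""
    def dfs(state, path):
        if state == goal:
            return path
        options = [s for s in actions(state) if s not in path]  # no loops
        for nxt in prefer(state, options):                      # best first
            found = dfs(nxt, path + [nxt])
            if found is not None:
                return found
        return None                                             # dead end: backtrack
    return dfs(start, [start])
```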


Fig. 5. Outline of layer extraction: (a) the half folded garment, (b) the extracted sequence of edges that form the layer, (c) the extracted layer with the detected edges connected.

4 Results

The depth images used for the method's evaluation were acquired by an Asus Xtion range sensor, located at a fixed height above the working table. The garments were manually placed onto the table but they were not flattened. Since very thin garments have a width smaller than the range sensor's tolerance, we used only thick articles such as thick long sleeved shirts and pairs of pants. For the perceptron's training, 17 branching examples were used that occurred in 12 different configurations of 3 garments (two long sleeved shirts and a pair of shorts). As a testing set, 30 cases of branching were used that occurred in 18 random configurations. In 27/30 of the cases the perceptron's output was correct. For the evaluation of the whole method, 34 different configurations of 5 articles were used (three long sleeved shirts and two pairs of shorts). In 27/34 the method detected the upper layer correctly (in 4 cases there was a deviation due to a lack of edge segments representing a whole edge of the garment's upper layer polygon). In 4 cases the result was wrong due to the perceptron's outcome, in 2 because the garment was too wrinkled for the branching procedure to gather the correct candidates, and in 1 case a big part of the upper layer was not visible, i.e. there were no edges representing it. Examples of extracted layers in different types of folded garments are depicted in Fig. 6.

Fig. 6. Examples of extracted layers

5 Conclusions

In this paper, a layer extraction methodology, part of a robotic garment unfolding pipeline, is introduced. The method, utilizing depth data, analyses the edges detected on the garment's surface and, according to their features and interconnections, extracts through a depth first algorithm and a perceptron the edge sequence that comprises the layer. In particular, the presented method focuses on the detection of the upper layer of garments that result in a planar half-folded state due to robotic manipulations. The advantage of the proposed procedure is that it remains independent of the garment's type, handling various garment designs. Since the results provide a high rate of success, future work testing the method on an extended dataset is planned. Furthermore, a methodology that determines the appropriate grasp points on the detected layer, combined with the planning of the robotic movements to achieve successful unfolding, is being developed.

References

1. Osawa, F., Seki, H., Kamiya, Y.: Unfolding of massive laundry and classification types. J. Adv. Comput. Intell. Intell. Inform. 11, 457–463 (2007)
2. Cusumano-Towner, M., Singh, A., Miller, St., O'Brien, J., Abbeel, P.: Bringing clothing into desired configurations with limited perception. In: ICRA (2011)
3. Kaneko, M., Kakikura, M.: Study on handling clothes - task planning of deformation for unfolding laundry. J. Robot. Mechatron. 15 (2003)
4. Triantafyllou, D., Mariolis, I., Kargakos, A., Malassiotis, S., Aspragathos, N.: A geometric approach to robotic unfolding of garments. Robot. Auton. Syst. 75, 233–243 (2016)
5. Doumanoglou, A., Kargakos, A., Kim, T.-K., Malassiotis, S.: Autonomous active recognition and unfolding of clothes using random decision forests and probabilistic planning. In: ICRA (2014)
6. Alenya, G., Torras, C.: Active garment recognition and target grasping point detection using deep learning. Pattern Recognit. 74, 629–641 (2018)
7. Kita, Y., Neo, E., Ueshiba, T., Kita, N.: Clothes handling using visual recognition in cooperation with actions. In: IROS (2010)
8. Triantafyllou, D., Aspragathos, N.: Definition and classification of primitives for the robotic unfolding of a piece of clothing. In: KEOD, pp. 417–422 (2014)

Industrial Robots and Applications

The Case of Industrial Robotics in Croatia

Marko Švaco, Bojan Jerbić, Ivan Župančić, Nikola Vitez, Bojan Šekoranja, Filip Šuligoj, and Josip Vidaković

Department of Robotics and Production System Automation, Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Zagreb, Croatia
{marko.svaco,bojan.jerbic,ivan.zupancic,nikola.vitez,bojan.sekoranja,filip.suligoj,josip.vidakovic}@fsb.hr

Abstract. This paper presents an analysis of the number and the distribution of industrial robots in the Republic of Croatia. The current state of industrial robotics in the world is also given, with present and future growth trends and the distribution of robots by countries and manufacturing sectors. The number of robots in Croatia was obtained on the basis of a survey questionnaire sent to 1,500 Croatian companies. Regarding the question of robot ownership, 72 companies answered positively, resulting in a total of 326 active industrial robots in 2017. According to the Croatian Chamber of Economy estimates, the Croatian economy should have at least 2,000 installed robots. The paper gives a prediction for the growth trend needed to reach 2,000 robots in a reasonable time period (by 2026). For that case, an exponential growth rate of 25.4% is required. Given the current state of the Croatian economy, such exponential growth is a huge challenge for the near future. The paper gives a brief critical review of the current state of industrial robotics in Croatia and provides guidelines for stimulating the application of industrial robots in the near future.

Keywords: Robotics · Croatia · STEM · Industrial production

1 Introduction

Current advances in technology shape every aspect of our lives, including communication, transportation, education, manufacturing, etc. Through shaping our lives, they shape our desires, which are the main drivers of the economy. The most important postulate in the economy is to make a profit, and every company wants to be as profitable as possible. To gain more profit, companies must raise their productivity. Productivity growth in industrial manufacturing is usually connected with automation and robotization. In modern industry, robots are a hardware extension of computers and they enable increases in productivity, product quality and job quality. Since robots and automation provide higher productivity and higher product and job quality compared to a human workforce, there is a growing opinion that robots are stealing jobs. The reality is completely different, as proven by many recent papers [1, 2]. Automation and robotics lower production costs and enable companies to be competitive while keeping manufacturing local, even in high-income countries, which are also reshoring their factories. Competitiveness assures the existence of manufacturing companies and in that way


keeps jobs and even creates new ones. New jobs can be created because productivity gains caused by robotization must flow back to the economy in one of the following ways: lower prices, higher wages or higher profits [2]. All of these ways cause increased demand for products, because the workforce and companies spend the surplus of money earned. As a result of the increased demand, higher needs for a workforce in industry appear. Furthermore, robots are capable of doing certain labor activities but they cannot replace all tasks, since only 10% of jobs can be fully automated [3]. Considering that robots cannot do all the work by themselves, the future lies in cooperation between people and robots, in the sense that robots augment labor and help people in tasks requiring high precision, quality or strength [4, 5]. Current robotic trends show high growth of collaborative and service robots [6–8], which means that people recognize the benefits of working with robots and feel comfortable in their vicinity. Since robots and automation are by all means necessary for productivity and employment growth, countries should focus on providing proper education of the future and current workforce to ensure a shift to higher skilled, higher paid and higher quality jobs. Examples and the importance of robotics in STEM education are shown in [9–11]. Regardless of the current state of the economy, even middle-income and low-income countries which have based their competitiveness on cheap labor should embrace robotics and automation as economic growth enablers [12]. The latter statement is even more significant given that the McKinsey Global Institute predicts that 0.8% to 1.4% of global annual GDP growth in the years to come will be driven by automation [13]. In this paper, the current status of industrial robotics in Croatia is presented. The status of robotics in Croatia with respect to the global robot market and the country's needs is discussed. Based on the estimated needs of Croatian industry, a curve for the required robotics growth is suggested. The necessity for robotization and the branches with the highest potential for robotization are also presented.

2 Global Robot Market

In this section, we present the global state of robotics and the future trends of its development. The International Federation of Robotics (IFR) statistical analysis follows robotic development and growth trends for particular "more significant" countries as well as general development in the world [14]. One of the important indicators of robotics development in the world is the annual sales of robots. Figure 1 shows the estimated annual worldwide supply of industrial robots in the period from 2008 to 2016 and the forecast from 2017* to 2020*. As a result of the global recession, a significant drop can be seen in 2009, followed by rapid recovery and regrowth. Since 2010, the demand for industrial robots has significantly increased. Between 2011 and 2016, robot sales increased on average by 12% per year (compound annual growth rate, CAGR). A significant leap in sales was recorded in 2014, when sales rose 29% from 178,000 to 221,000 units. In 2016, annual robot sales rose by 16% to 294,312 units sold, the highest level recorded. For the period from 2017 to 2020 constant sales growth is expected; it is estimated that half a million units per year will be sold by the end of 2020.


Fig. 1. Estimated annual worldwide supply of industrial robots 2008–2016 and forecast 2017*– 2020* [14].

2.1 Applications of Robots by Countries

The annual sales of industrial robots can be observed in more detail by country in Fig. 2, where the fifteen leading countries in the world are shown. China, the Republic of Korea, Japan, the United States and Germany are the five major markets, representing 74% of the total sales volume of industrial robots in 2016.

Fig. 2. Number of robots sold in 2016 in the 15 leading countries, in thousands of units [14]

Since 2013, China has been the biggest robot market in the world, with continued dynamic growth. In 2016, China kept its leading position and recorded growth of 27% in annual sales, with 30% of the total annual sales of industrial robots going to China. With 87,000 robots sold in 2016, it came close to the total sales of Europe and the Americas together (97,300). The most important European industrial robot market is Germany, which is the fifth largest robot market worldwide; between 2011 and 2016 its annual sales of industrial robots more or less stagnated at around 20,000 units. Italy, which follows Germany, has recorded a slight decline in robot investments compared to 2015 (6,700 units).

2.2 Application of Robots by Industries

The statistical analysis of the worldwide annual sales can also be observed by basic industrial activity, as shown in Fig. 3. The automotive industry was the most important customer of industrial robots between 2010 and 2014, with increased investments in industrial robots worldwide. Between 2011 and 2016, robot sales to the automotive industry increased by 12% on average per year (CAGR). Investments in new production capabilities in the emerging markets, as well as investments in production modernization in the major car producing countries, have caused the number of robot installations to rise. Demand for industrial robots in the electrical/electronics industry has grown constantly since 2013. The constant development of new technology and the demand for new products to be released on the market in a short time period caused the need for widespread automation of production facilities. Electronic products, such as computer equipment, radio and communication devices, are mainly produced in Asian countries, which has led to the sudden development of these countries (mainly China). The annual growth of the electrical/electronics industry in the period between 2011 and 2016 was 19%. The metal and machinery industry has recorded an increasing trend since 2010; however, in 2016 industry sales slightly decreased by 3%, while the annual average growth between 2011 and 2016 was 15% per year (CAGR). The rubber and plastics industry increased the number of robot installations in the period from 2009 to 2015, and the food and beverage industry followed the same trend.


Fig. 3. Estimated annual supply of industrial robots by industries worldwide 2014–2016 [14]


3 Robotic Application in Croatia

In this section, we provide the results of the analysis of industrial robot application in Croatia. The results were obtained on the basis of a survey questionnaire sent to 1,500 Croatian companies. Regarding the question of robot ownership, 72 companies answered positively and 100 responded negatively, while the others did not respond. Based on the collected data we obtained a result of at least 326 installed robots in Croatia. An alternative assessment comes from the Croatian Chamber of Economy (HGK), which recently provided information that Croatia has a total of 175 installed industrial robots. They assumed that Croatia should have an additional 1,800 robots according to the Croatian economic potential and status. In order to determine the average number of robots per 10,000 employees in Croatia, data from the State Bureau of Statistics [15] were used, according to which the total number of employees in the manufacturing industry in 2016 was 227,863. Taking into account the 326 installed robots from the survey questionnaire, an average of 14 robots per 10,000 employees was calculated. It can be concluded that Croatia lies far below the European average of 74, which puts it in the group "other" and in the group of states with "minimal automotive industry". The number of robots can also be observed according to the HGK classification of industrial activities, thus determining within which production activities robots are present. For the companies that provided enough data on robot application (240 companies), the statistics are represented in Fig. 4.


Fig. 4. The number of robots and the number of robots per 10,000 employees within production activities (according to HGK) [15]. Activity codes: C10 - Production of food products; C13 - Manufacturing of textiles; C20 - Manufacturing of chemicals and chemical products; C22 - Manufacturing of rubber and plastic products; C23 - Manufacturing of other non-metallic mineral products; C24 - Metal production; C25 - Manufacturing of finished metal products (except machinery); C26 - Manufacturing of computers and electronic and optical products; C27 - Production of electrical equipment; C28 - Production of machines and devices; C29 - Production of motor vehicles, trailers and semi-trailers; C30 - Production of other means of transport; C32 - Other processing industry


All of the installed robots belong to the group of industrial robots and work isolated from humans. It is interesting that the automotive industry (C29) shows minimal use of robots. This can be explained by the fact that the Croatian automotive industry focuses largely on manufacturing and exporting car parts, which fall into the remaining branches (metallurgy, rubber production, etc.), while there are few production facilities for the automotive industry itself. The most prominent domestic car maker using robots is DOK-ING. In addition to the automotive industry, the production of food products holds great potential for robotization. This industry has the largest number of employees and, at the same time, a small number of robots. Robotization of production processes in companies such as Kraš, Podravka and others could boost robotization in other industrial branches. The rubber and plastic processing industry has the highest number of robots, along with the pharmaceutical and non-metallic mineral production branches. Using the data on the number of permanent employees in companies, the annual turnover and the data collected from the survey questionnaire, we determined the distribution of robots depending on the size of the company. Detailed data are shown in Tables 1 and 2; they show that the largest number of robots is found in companies with a larger number of employees and in companies with a higher annual turnover.

Table 1. Deployment of robots depending on company size by number of employees

Number of employees   Number of robots
500                   138

Companies with 51–100 employees have the lowest number of robots (Table 1); therefore these companies represent the main area for further investment in industrial robotics. Smaller companies have invested in robots and, with no increase in the number of employees, have gained more profitability. The data in Table 2 show that the number of installed robots is higher in companies with a higher annual turnover. Larger companies are more likely to bear the higher investment costs and are more inclined to invest in new technologies. According to the data of the State Bureau of Statistics, the average monthly gross salary in the manufacturing and production sector in Croatia is 970 €. Today, the integration cost of industrial robotic arms in most cases varies anywhere from 20,000 € to 325,000 € [16]. Taking a medium sized robot with a load capacity up to 10 kg, the costs are estimated as the installation cost (85,000 €) plus maintenance costs (1,000 € per year). The annual labor cost of work in three shifts in Croatia is 35,000 €. The estimated recovery period for this case (85,000 € divided by the roughly 34,000 € saved annually after maintenance) is roughly three years, but it can be less. The worldwide price of industrial robots has dropped more than 25% since 2014 and is expected to drop an additional 22% by 2025, which will lead to a shorter payback period. It would be interesting to evaluate the competitiveness and


employment trends caused by the application of robots in Croatia, but this would require another, more detailed survey questionnaire and was out of the scope of this research.

Table 2. Deployment of robots depending on company size by annual turnover

Annual turnover of the company   Number of robots
7 mil. €                         134

The growth of the number of industrial robots in Croatia is shown in Fig. 5. In order to predict the growth in the number of robots in the coming years on the basis of the results obtained from the survey questionnaires, we fitted a second order curve that best describes the collected data. It can be seen that in 2050 there will be only about 1,650 installed robots if the current growth trend is maintained, which will still be well below the current needs of the Croatian economy and the EU and world averages.

Fig. 5. Estimated growth of installed robots through the years up to 2050 (fitted second order curve: y = 0.4385x² + 9.0067x + 3.5562, R² = 0.9959)

To achieve economic growth and reach 2,000 robots (the required number of robots) within a reasonable time (we assume a period of nine years, i.e. by 2026), the annual number of newly installed robots has to increase significantly compared to past years. In order to determine the required growth rate, we propose the following equation, suggesting exponential growth in the number of installed robots:

$$\left( \frac{\text{Required number of robots}}{\text{Total number of robots for 2017}} \right)^{\frac{1}{t-1}} - 1 = exp\_g. \tag{1}$$

We have calculated that an exponential growth rate of exp_g = 25.4% is necessary for a time period of t = 9 years, which represents a huge jump compared to the previous growth of the number of industrial robots. Taking into account this exponential growth and the current number of installed robots, Fig. 6 shows the necessary annual increase in the number of installed industrial robots.
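Eq. (1) can be checked with a few lines of Python (illustrative only; variable names are ours):

```python
# Required annual exponential growth rate, per Eq. (1).
required, current, t = 2000, 326, 9          # target, 2017 stock, years
exp_g = (required / current) ** (1 / (t - 1)) - 1
print(f"required annual growth: {exp_g:.1%}")  # prints about 25.4%
```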

Fig. 6. Number of installed robots through the years with the predicted exponential growth of 25.4% (from 326 in 2017 through 409, 513, 644, 808, 1013, 1271 and 1594 to 2000 in 2025)

Such exponential growth in industrial robots would also imply greater economic development. Taking into account the current state of the Croatian economy, achieving such exponential growth is a huge challenge. In order to achieve it, it is necessary to determine which industries have the greatest development potential. The automotive industry, as the leading robot consumer, would definitely increase the number of installed robots in Croatia. The development of the automotive industry, given Croatia's good geostrategic position, certainly represents huge economic potential. A favorable investment climate and government support need to be created in Croatia, as was the case in Portugal during the establishment of Autoeuropa [17]. This could enable the arrival of foreign car producers, as happened for example in Croatia's neighboring countries Slovenia (Renault) and Serbia (Fiat). Also, domestic companies such as Rimac cars and DOK-ING could realize the development of small serial production of electric cars. Other industries, such as the production of food, rubber, plastic and other non-metallic products, are at a sufficient level of development, which is a good foundation for investments in robots and automation. Robotization of the mentioned industries could stimulate further robotization of other manufacturing activities. Therefore, it is obvious that non-robotized companies should be modernized, i.e. entrepreneurs and managers must understand the advantage and necessity of introducing robotics into production to be competitive on the local but also on the global market. The industrial production growth rate in Croatia for 2016 was 5%. This is a clear indicator of recovery, which could have a profound impact on the further implementation of industrial robots. Also, a clear set of basic national economic goals, such as increased production and employment, set by policymakers will contribute to industrial progress.


Fig. 7. The applied robotics framework at the Department for Robotics and Production System Automation, University of Zagreb, Faculty of Mechanical Engineering and Naval Architecture

4 Conclusion

Obviously, Croatia lags significantly behind most European countries, especially industrial giants like Germany. According to the HGK estimates, the Croatian economy should use at least 1,800 more robots. This paper presents the results on the current growth rate of the implementation of industrial robots in Croatia. Based on the current growth rate, the number of installed robots by the year 2025 will not reach even half of the total robots required. The forecast for the necessary exponential growth is also presented. Croatia needs to increase the annual procurement of industrial robots to at least 100 to 200 robots per year. A focus on the automotive industry and the promotion of its own small-scale production could significantly boost industrial robotics in Croatia. Croatia has an excellent geostrategic position and is able to provide support for the construction of modern industrial automobile manufacturing facilities. Other industrial branches, for the production and processing of rubber, food and plastic products, are sufficiently developed to provide a good basis for future investments, which could lead to significant economic benefits and stimulate further robotization. Education and training of young professionals in engineering, mathematics, technology and science also play an important role [9–11], and it is necessary to provide them with job opportunities, because their work immediately promotes economic growth and national income. The leading technical faculties in Croatia, such as the University of Zagreb, Faculty of Mechanical Engineering and Naval Architecture, can also give strong support through their applied robotics courses lectured to Bachelor and Master students, where laboratory exercises are conducted on leading edge


robotics technology, as shown in Fig. 7 (at this moment the Department for Robotics and Production System Automation has around 15 industrial and collaborative robots). Robotics is certainly the foundation of the fourth industrial revolution, i.e. Industry 4.0 [18], in which production can be linked, controlled and managed through the so-called Internet of Things. It is necessary to provide highly educated staff and recruit them into industry, and only then to promote the advancement of smart factories. Croatian industry definitely needs an appropriate development strategy, including proper education and modernization through automation and robotization, to become more competitive on the EU and global markets.

Acknowledgements. The authors would like to thank Marko Šćurec for preparing the questionnaire and collecting the data from Croatian companies, and the Croatian companies for the feedback provided. The authors would also like to acknowledge the support of the Croatian Scientific Foundation through the research project ACRON - A new concept of Applied Cognitive Robotics in clinical Neuroscience, and the support from the project DATACROSS - Centre of Research Excellence for Data Science and Advanced Cooperative Systems - ACROSS.

References

1. Jäger, A., et al.: Analysis of the impact of robotic systems on employment in the European Union: final report. Publications Office, Luxembourg (2015)
2. Miller, B., Atkinson, R.D.: Are robots taking our jobs, or making them? The Information Technology and Innovation Foundation (2013)
3. The Impact of Robots on Productivity, Employment and Jobs. International Federation of Robotics (2017)
4. Jalba, C.K., Konold, P., Rapp, I., Mann, C., Muminovic, A.: Cooperation between humans and robots in fine assembly. In: IOP Conference Series: Materials Science and Engineering, vol. 163 (2017)
5. Hajduk, M., Jenčík, P., Jezný, J., Vargovčík, L.: Trends in industrial robotics development. Appl. Mech. Mater. 282, 1–6 (2013)
6. Executive Summary World Robotics 2017 Service Robots. https://ifr.org/. Accessed 10 Jan 2018
7. Švaco, M., Koren, P., Jerbić, B., Vidaković, J., Šekoranja, B., Šuligoj, F.: Validation of three KUKA Agilus robots for application in neurosurgery. In: Ferraresi, C., Quaglia, G. (eds.) Advances in Service and Industrial Robotics, vol. 49, pp. 996–1006. Springer, Torino (2017)
8. Švaco, M., Šekoranja, B., Šuligoj, F., Vidaković, J., Jerbić, B., Chudy, D.: A novel robotic neuronavigation system: RONNA G3. Stroj. Vestn. J. Mech. Eng. (2017)
9. Benitti, F.B.V., Spolaôr, N.: How have robots supported STEM teaching? In: Khine, M.S. (ed.) Robotics in STEM Education, pp. 103–129. Springer, Cham (2017)
10. Eguchi, A.: RoboCupJunior for promoting STEM education, 21st century skills, and technological advancement through robotics competition. Robot. Auton. Syst. 75, 692–699 (2016)
11. Kopcha, T.J., et al.: Developing an integrative STEM curriculum for robotics education through educational design research. J. Form. Des. Learn. 1(1), 31–44 (2017)
12. Keisner, C.A., Raffo, J., Wunsch-Vincent, S.: Breakthrough Technologies: Robotics, Innovation and Intellectual Property. WIPO (2015)


13. A Future That Works: Automation, Employment, and Productivity. McKinsey & Company (2017)
14. Executive Summary World Robotics 2017 Industrial Robots
15. Croatian Bureau of Statistics - Republic of Croatia. https://www.dzs.hr/default_e.htm. Accessed 15 Jan 2018
16. Sirkin, H.L., Zinser, M., Rose, J.R.: The robotics revolution: the next great leap in manufacturing (2015)
17. Reis, A., Heitor, M., Amaral, M., Mendonça, J.: Revisiting industrial policy: lessons learned from the establishment of an automotive OEM in Portugal. Technol. Forecast. Soc. Change 113, 195–205 (2016)
18. Bahrin, M.A.K., Othman, M.F., Azli, N.N., Talib, M.F.: Industry 4.0: a review on industrial automation and robotic (2016)

Decentralizing Cloud Robot Services Through Edge Computing

Florin Anton, Th. Borangiu, O. Morariu, Silviu Răileanu, Silvia Anton, and Nick Ivănescu

Research Centre in CIM and Robotics, University Politehnica of Bucharest, Bucharest, Romania
{florin.anton,theodor.borangiu,silviu.raileanu,silvia.anton,nick.ivanescu}@cimr.pub.ro

Abstract. The paper extends previous developments of cloud robot services for intelligent manufacturing with new data streaming and machine learning techniques that are used to dynamically reschedule resources and predict future behaviour on the shop floor. Data is obtained in real time with edge computing solutions that ease the computing effort in the cloud by moving intelligence to the edge of the manufacturing execution system. Thus, machine learning algorithms can be run in a real-time context with re-training on new data; the insights become predictions, enabling real-time decisions for operations scheduling, robot allocation and real status-based maintenance. Experiments are described.

Keywords: Robot services · Cloud manufacturing · Edge computing · Data stream

1 Introduction

Collecting real-time monitoring data from shop floor devices attached to the multiple robot workstations of manufacturing systems is an important step in designing intelligent mechanisms capable of dynamically scheduling robotized assembly or material processing operations and handling unexpected situations, like service quality degradation or faults. While totally unexpected breakdowns can occur and will unavoidably result in outages if the affected resources are not redundant in the system, a certain class of possible problems develops over time [1]. The operation of robot vision resources is additionally affected by variations of the environment (lighting system, material flow, shop floor transport and storage systems), which also requires dedicated monitoring. Consider for example a robot drive; a failure will not be immediate, and slight variations in the behaviour of the actuator can be observed, like increased vibration patterns or increased temperature in the affected component. A classical implementation would be to monitor the metric actively and set a fixed threshold for it; the threshold would then be used to trigger alerts. In manufacturing systems intensively using robots, this approach is not efficient and does not work at scale, partly because not all faults can be predefined; i.e. some vibrations can be caused by other factors and be perfectly normal. On the other


hand, some behaviour depends on the current drive system context (e.g., amplifier and encoder temperatures, AC voltage), which can have a virtually infinite number of states. These include the parts being handled by the robot on the shop floor in a given time interval, the interactions between shop floor resources, material and information flows, and environment conditions. Another important consideration is the ability to track and consider the covariance of the multiple metrics monitored. Continuing the previous example, if one sensor monitors vibration while another monitors heat, the covariance of these two might indicate the difference between a fault and normal behaviour. Again, this covariance might be difficult to define ahead of time, as the interactions between components can be very complex and almost always change in time [2].

In this context, we believe that an efficient manufacturing execution system (MES) using multiple robot-vision stations must have the following functional characteristics:

• Dynamic scheduling of robot operations and robot allocation based on real-time data and predictions derived from it for the near future;
• Dynamically learn the patterns of the signals monitored;
• Detect faults before service degradation occurs;
• Determine and learn the covariance between signals;
• Work on real-time data streams from sensors rather than static data;
• Classify the current state of the robotic system as healthy or faulty;
• Execute automated corrective actions for the extended robot-vision environment.

Some qualitative considerations are also required, especially related to the number of false positives the system generates. While a fully accurate system is not feasible, keeping the false positive rate low is usually decisive when automating the corrective actions. As the time interval between detecting a possible fault and the start of service degradation can be short, automating the corrective actions is a fundamental feature of a dynamic MES. One possible solution for minimizing false positives is to provide the system with the ability to learn from the outcome of its own actions [3].

Real-time decision making for online robot operation scheduling is currently a popular research topic. Some initial steps towards real-time data integration in MES systems were presented by Zhong [4], where RFID technology is used to track the movement of objects on the shop floor. In this approach, RFID devices are deployed systematically on the shop floor to track and trace manufacturing objects and collect real-time production data. However, the process described is mostly manual and the decisions are taken by human operators. In [5], Zhang proposes a real-time, data-driven solution that optimizes decisions using a dynamic optimization model. The model is based on game theory and each robot is an active entity that requests processing tasks independently. While this is an important step towards decentralization, the solution does not include machine learning techniques in order to obtain predictive behaviour. The Industrial Internet of Things (IIoT) technology seamlessly integrates smart connected objects (orders, products and resources) in Cloud Manufacturing Execution System (CMfg) architectures through edge computing technology.
This stimulates a Product Centric Approach, where products directly request processing, routing and traceability; also, robotic resources are dynamically re-allocated function of the quality of services (QoS) they provide and the power they consume, and directly request maintenance while they are in use [6].
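To make the covariance idea above concrete, the following minimal Python sketch (our illustration, not part of the system described here; it relies on statistics.covariance from Python 3.10+, and the window size and k-sigma test are arbitrary choices) tracks a rolling covariance between a vibration and a temperature stream and flags windows that depart from a learned healthy baseline:

```python
from collections import deque
import statistics

class CovarianceMonitor:
    """Rolling covariance between two sensor streams (illustrative sketch)."""

    def __init__(self, window: int = 120):
        self.vib = deque(maxlen=window)    # vibration samples
        self.temp = deque(maxlen=window)   # temperature samples
        self.baseline = []                 # covariances seen during healthy operation

    def update(self, vibration: float, temperature: float):
        """Push one sample pair; return the window covariance once the window is full."""
        self.vib.append(vibration)
        self.temp.append(temperature)
        if len(self.vib) < self.vib.maxlen:
            return None
        return statistics.covariance(list(self.vib), list(self.temp))

    def learn(self, cov: float) -> None:
        """Record a covariance observed while the robot is known to be healthy."""
        self.baseline.append(cov)

    def is_anomalous(self, cov: float, k: float = 3.0) -> bool:
        """Flag covariances more than k standard deviations from the learned baseline."""
        if len(self.baseline) < 2:
            return False
        mu = statistics.mean(self.baseline)
        sigma = statistics.stdev(self.baseline)
        return abs(cov - mu) > k * sigma
```

The baseline itself is learned online rather than predefined, which matches the requirement above that the covariance patterns be learned rather than fixed in advance.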


This paper tries to bridge the gap between previous cloud implementations of MES with robotic services and the new big data streaming and machine learning techniques that can be used to predict future behaviour on the shop floor. If data can be obtained in real time and machine learning algorithms can be run in a real-time context with retraining on new data, then insights become predictions, enabling real-time decisions for operation scheduling, robot allocation and real status-based maintenance.

2 Moving Intelligence to the Edge of the Cloud Robot Service System

The shop floor MES that uses robot-vision services is semi-heterarchical. The robotized job-shop type manufacturing processes (machining, assembly) are performed by robot-vision resources interconnected through a closed-loop conveyor. Products visit the robot stations in a sequence optimally established off-line, as long as disturbances do not occur; job rescheduling and robot re-allocation are done in real time upon technical disturbances (degradation of the QoS performed by robots, or failures). Products are placed on pallet carriers on which intelligent embedded devices (IED, e.g. Raspberry Pi, Overo Storm Air) are mounted. The MES is partitioned in two layers (see Fig. 1):

• Cloud MES (CMfg), hierarchical and centralized, assures optimization of mixed batch planning and product scheduling, and resource allocation. It uses Machine Learning (ML) for long-term predictive analysis and preventive maintenance decisions. These global tasks consume data aggregated in real time at the edge device layer and received via the Manufacturing Service Bus (MSB) – a message-passing, robust queuing system [7].
• Distributed MES (dMES), heterarchical and decentralized, assures robustness against shop floor disturbances and agility at changes in robot behaviour and environment. This layer intensively exploits edge computing principles by locating the data processing power at the edge of the MES network instead of holding it in the cloud. Edge computing is a solution that facilitates data processing at or near the source of data generation; it serves in this case as the decentralized extension of the cloud layer [8].

In the IIoT context of robotized manufacturing, the sources of data generation are devices (things) with sensors or embedded devices [9], see Fig. 2. Edge devices are of two types: (a) static devices attached to the robot-vision resource and its environment: sensors collecting supply signals, mechanical effort signals, images collected and processed by smart video cameras, and production data. Using its encoders, the robot may act as an "aggregate sensor", providing information about the accuracy of the performed motion (e.g. peak positioning error, repeatability errors in assembly tasks); (b) mobile intelligent embedded devices (IED) on the carrier (pallet) which transports the product from one robot workstation to the other. Intelligent products (IP) are thus created temporarily, while they are being executed. The number of active IEDs is equal to the number of products in simultaneous execution; each one runs an Order Agent (OA) holding the data of the resource visiting sequence: information and rules governing the way the product is intended to be made (recipe, resources), routed, verified and stored, which enables the product to support or influence these operations.


Fig. 1. Extending the CMfg with distributed MES based on edge computing

Fig. 2. Edge devices: robot with intelligent sensors, and Intelligent Product (IED on carrier). (Left: architecture of an intelligent shop floor resource (robot) with external sensors; right: intelligence embedded on the product during its execution, using mobile technology and a real-time OS)


Shop floor devices like robots do not always possess the processing power required for cloud integration; in this case the existing hardware needs to be augmented with additional dedicated hardware in order to become a so-called SOA-enabled device (see Fig. 2, left) that can be connected to the Cloud MES layer. On the other hand, virtualization for IPs is adopted when product execution is decision-oriented, i.e., the IP makes decisions relevant to its own destiny. Virtualization allows moving the processing from the IED to the cloud environment, in a shared workload model which is best suited for a multi-agent system based implementation of the OA set. Each robot-vision resource has an information counterpart – a Resource Agent (RA) that executes on the workstation (IBM PC-type) connected via TCP to the intelligent video camera(s) and the robot controller. The RA computes, at the edge of the dMES layer, three classes of applications that individually evaluate the resource's status, behaviour and local environment conditions in order to weight the robot's collaborative assignment for future manufacturing tasks with reasonable power consumption:

• Evaluation of the robot's QoS at operation and product level (timeliness, positioning and contouring errors, duty cycle, parts not processed, visually not recognized/not located items, errors reported by visual geometry inspection);
• Update of the Extended Vision Environment (EVE) parameters, to maintain correct operating conditions for visual guidance of robots and automated visual inspection;
• Prediction of the electrical energy needed by a robot to perform operations over a small time horizon, for collaborative allocation in the dMES multi-agent framework.

Order agents compute at the edge, represented by the active IED set, new schedules for operations on the products currently in execution and new resource allocations upon failures, QoS degradation, or when the robot's power consumption predicted on the short term exceeds certain thresholds. While reconfiguring is done at the edge over a near horizon, an Expertise Agent (EA) re-computes at the cloud MES level the schedule and allocation for all remaining products over the complete batch horizon. The OA also tracks the execution of the product it represents and evaluates the behaviour of the shop floor conveyor while routing the product. CMfg aggregates the data of all OAs. The CMfg global, long-term monitoring solution uses a database containing information about the production and shop-floor conditions mentioned above; the communication with edge devices in the dMES uses message queuing. The database is organized in six tables: (i) status and QoS of robotic resources in the shop floor context; (ii) executable operations on resources; (iii) operation assignments on resources; (iv) history of executed operations and products; (v) supply data and energy consumed; (vi) alarms. High-level TCP-based protocols access the cloud database. The MSB transforms messages to and from proprietary protocols into a common standardized format by placing message convertors at the entry/exit points of the bus; the message convertors assure that data streams are aggregated at the CMfg layer with relevant metadata (robot ID, location).
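As a rough illustration of such a message convertor (our sketch; the field names, the reading format and the queue stand-in are hypothetical, not the MSB's actual protocol), the following wraps a proprietary reading into a standardized message enriched with robot ID and location before queuing it for the CMfg layer:

```python
import json
import time
import queue

msb_queue: "queue.Queue[str]" = queue.Queue()  # stand-in for the message bus

def convert(raw: bytes, robot_id: str, location: str) -> str:
    """Convert a proprietary 'metric:value' reading into a standard JSON message."""
    metric, value = raw.decode().split(":")
    message = {
        "robot_id": robot_id,        # metadata added at the bus entry point
        "location": location,
        "metric": metric,
        "value": float(value),
        "timestamp": time.time(),
    }
    return json.dumps(message)

msb_queue.put(convert(b"encoder_temp:41.7", robot_id="R3", location="cell-2"))
print(msb_queue.get())
```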


3 Acquiring Robot Data

3.1 Adept ACE

In order to monitor and gather data from Adept Technology robots, a proprietary application was used: Adept ACE, which offers a core framework of functionalities in a library executed on the robot workstation (Fig. 3); the framework is able to run different utilities and applications, to store and load the workspace for persistence, and to log events. A set of plugins was installed: 3D visualization of the robot arm, and interfaces to the robot, the controller and the machine vision system.

Fig. 3. The extensions and the base framework for Adept ACE

Adept ACE communicates with the robot controller over the Ethernet network; this is done by starting on the robot controller a command server which sends the required data and executes commands on the robot controller on behalf of Adept ACE. The command server reports events to Adept ACE at a minimum time base of 0.5 s [10]. An Adept ACE application can be created and executed in various ways:

• The application can be created directly in the Adept ACE graphical user interface.
• An application can be created which loads Adept ACE in the same process and calls the Adept ACE Framework (AAF) application programming interface.
• A program written in C#, C++ or Visual Basic can be created and executed on a different PC and make remote calls to the AAF API, as depicted in Fig. 4.
• To communicate with programmable logic controllers, OLE for Process Control (OPC) is used to control an ACE PackXpert application, to send variables, etc.
• Adept ACE can be configured for automatic execution of the AceServer service in the Windows Services manager control panel; it can also be configured to load at start-up a default workspace containing a specific robot application.


Fig. 4. Executing an Adept ACE user application through remote AAF API calls

3.2 Acquisition and Handling of Robot-Vision Data

Adept ACE was used to acquire data from the robot controller: the application is started with a default workspace including the monitored robot, and then the System Monitor tool is started and configured to log data into a file. The monitored data are:

• For the robot: amplifier bus voltage (V); AC input (V); DC input (V); base board temperature (°C); encoder temperature (°C); amplifier temperature (°C); duty cycle (% limit); harmonic drive usage (%); peak torque (% max torque); peak velocity (RPM); peak position error (% soft envelope error).
• For the belt process manager: instance count; belt velocity; instantaneous instances; instances per minute; active instances; latch faults.
• For the process manager and robot: idle time (%); processing time (%); average total time (ms); parts per minute; targets per minute; parts not processed; targets not processed; parts processed; targets processed.

The data are acquired and stored in the file every 0.5 s. Because the file is kept open by the System Monitor, it cannot be opened by another application to send the information to the cloud. To overcome this problem, Cygwin was installed [11]. Cygwin is a set of Linux tools that runs on Windows. In order to extract the data from the log file, an application was created which uses the following tools: tail is used to extract the last line from the log file; watch is used to execute tail every 0.5 s; cut is used to extract each field from the line, and netcat is used to send the information to the cloud. Figure 5 depicts the entire software process.

Fig. 5. Acquiring and sending the robot data to the cloud
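A minimal Python equivalent of this tail/cut/netcat pipeline (our sketch; the log path, the collector endpoint and the field separator are assumptions, not the actual deployment values) would be:

```python
import socket
import time

LOG = r"C:\ace\monitor.log"             # path to the System Monitor log (assumed)
HOST, PORT = "cloud.example.org", 9000  # cloud collector endpoint (assumed)

def last_line(path: str) -> str:
    """Read the last line of the log, like `tail -n 1`."""
    with open(path, "rb") as f:  # read-only access works even while ACE keeps writing
        return f.readlines()[-1].decode().strip()

with socket.create_connection((HOST, PORT)) as sock:
    while True:                                     # `watch`-style loop, 0.5 s time base
        fields = last_line(LOG).split(",")          # `cut`: split the record into fields
        sock.sendall((",".join(fields) + "\n").encode())  # `netcat`-style TCP send
        time.sleep(0.5)
```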


4 Experiments and Conclusions

Once consolidated streams are available from the map-reduce algorithm run by the RA on the fixed edge device (robot workstation), the data is used directly in the dMES for decision making. We designed an allocator of robots to product operations that considers the energy consumption mapped on the robots, in order to schedule the next operation such that energy consumption is optimized over the current shop floor activities. In this case, it was considered useful to attempt predicting some behaviour in the near future and to use that information for resource assignment. When dealing with scalar data, such as the energy consumption of a given operation on a robot, we used linear regression to estimate whether there is a pattern in the data. To do this, the energy consumption for a given operation was map-reduced and aligned chronologically. Once this was done, linear regression was applied to the time series and a threshold was used to check the pattern (Fig. 6).

Fig. 6. Linear regression applied to an energy value stream

The horizontal axes mark the timeline (seconds relative to the current time) and the vertical axes the recorded power consumption. Applying linear regression determines the slope of the line in the right graph. This allows predicting how much energy that operation would require in the next runs on the robot, at least over a small time horizon; but as data and scheduling are real-time, even a small horizon can lead to significant optimization. At the same time, by applying a threshold on the slope, the scheduler can detect whether the robot allocation preference should be changed to optimize energy consumption. Similar approaches can be applied to other streams, based for example on operation duration. It is important to note that these predictions can be made in parallel at the edge of the MES, and the scheduling algorithm can use multiple dependencies on the local optimization goals.
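As a sketch of this step (our own illustration; the synthetic data and the slope threshold are placeholders), a least-squares line fit over the chronologically aligned series yields the slope used both for prediction and for the re-allocation test:

```python
import numpy as np

def energy_trend(timestamps, energy, slope_threshold=0.05):
    """Fit a line to an energy-per-operation time series; flag a rising trend."""
    slope, intercept = np.polyfit(timestamps, energy, deg=1)

    def predict(t):
        return slope * t + intercept  # expected consumption at time t

    return slope, predict, slope > slope_threshold

# Ten runs with slowly increasing consumption (synthetic data)
t = np.arange(10.0)
e = 100.0 + 0.8 * t + np.random.normal(0.0, 0.5, 10)
slope, predict, reallocate = energy_trend(t, e)
print(f"slope={slope:.2f}, next-run estimate={predict(10.0):.1f}, reallocate={reallocate}")
```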


References

1. Valckenaers, P., Van Brussel, H., Bruyninckx, H., Saint Germain, B., Van Belle, J.: Predicting the unexpected. J. Comput. Ind. 62(6), 623–637 (2011)
2. Morariu, O., Morariu, C., Borangiu, Th., Răileanu, S.: Manufacturing systems at scale with big data streaming and online machine learning. In: Service Orientation in Holonic and Multi-Agent Manufacturing, Studies in Computational Intelligence, vol. 762, pp. 253–264. Springer (2018)
3. Heshan, F., Surgenor, B.: An unsupervised artificial neural network versus a rule-based approach for fault detection and identification in an automated assembly machine. Rob. Comput. Integr. Manuf. 43, 79–88 (2017)
4. Zhong, R.Y., et al.: RFID-enabled real-time manufacturing execution system for mass-customization production. Rob. CIM 29(2), 283–292 (2013)
5. Zhang, Y., et al.: Game theory based real-time shop floor scheduling strategy and method for cloud manufacturing. Int. J. Intell. Syst. 32(4), 437–463 (2017)
6. Viswanadham, N., Johnson, T.L.: Fault detection and diagnosis of automated manufacturing systems. In: IFAC Proceedings Volumes, vol. 21, issue 15, pp. 95–102 (1998)
7. Trentesaux, D., Borangiu, T., Thomas, A.: Emerging ICT concepts for smart, safe and sustainable industrial systems. J. Comput. Ind. 81, 1–10 (2016)
8. Takiishi, K., Tazaki, K.: Edge Computing for Improved Digital Business Networks (2015). https://www.gartner.com/doc/3155018/edge-computing-improved-digital-business. Published 21 October 2015 by Gartner, ID: G00291619. Accessed Dec 2017
9. Industrial Ethernet Book: IIoT: Combining the Best of OT and IT, IEB Issue 95/14 (2017). http://www.iebmedia.com/index.php?id=11673&parentid=63&themeid=255&showdetail=true. Accessed Dec 2017
10. Adept (2018). http://www1.adept.com/main/ke/data/pdf_web/ace_ug.pdf. ACE User's Guide, 3.7.x. Accessed 22 Jan 2018
11. Cygwin (2018). Cygwin homepage. https://www.cygwin.com/. Accessed 22 Jan 2018

Smart Cyber-Physical System to Enhance Flexibility of Production and Improve Collaborative Robot Capabilities – Mechanical Design and Control Concept

Aleksandar Rodić, Ilija Stevanović, and Miloš Jovanović

Mihajlo Pupin Institute, University of Belgrade, Belgrade, Serbia
{aleksandar.rodic,ilija.stevanovic,milos.jovanovic}@pupin.rs

Abstract. A new smart cyber-physical system (CPS), specially designed to further improve the flexibility of production systems and the collaborative capabilities of industrial service robots intended for interactive work with humans in an information-structured workspace, is presented in the paper. This system enables quick and easy reconfiguration of the technological production process in accordance with changes caused by small production batches or frequent changes of the production program. As such, the system is particularly suitable for use in small and medium enterprises (SMEs). The new cyber-physical system is based on a mobile, dual-arm industrial robotic system with an extended working space and a smart, application-specific, open-type software interface that can be updated and upgraded with new knowledge and skills from the production line according to technological needs. The basic functions of the mechanical modules and the cloud-based control architecture are presented in this paper.

Keywords: Cyber-physical systems · Flexible production systems · Collaborative robot · Cloud-based control · Internet of things · Distributed intelligence

1 Introduction

Flexible production systems and a high degree of automation were widely introduced into industrial practice in the 1970s and 1980s due to the rapidly increasing demands of the global market, together with the requirements of massive, inexpensive production. Large-scale production quantities became synonymous with accelerated industrialization and economic growth for the most powerful companies. Cheaper, high-quality and unified products arose from this type of production system. On the other hand, unique products were still made manually and were significantly more expensive than similar ones produced in large series. In the meantime, consumers have become more demanding, in the sense that the product they intend to buy should be maximally adjusted not only to their needs but also to their aesthetic taste. Mass industrial production could not provide this, because it would be too expensive to change the production scheme for the specific demands of a product. Thus, the mass production model was anchored in time because it is inflexible and inert with regard to market demands. This imposed a change of the production model from the concept of mass production to so-called customized production. The development of new technologies, digitalization of production, Internet of things, wireless sensory-communication networks and fast, reliable global communication are strongly involved in production. The man-worker, as a cognitive and creative factor, is again at the center of attention, while robots perform heavy and routine tasks and assist man-workers in the production process. Now the man-worker and the cyber-worker (industrial service robot) must work together, using their mutual advantages.

The paper is organized in 5 sections, starting with this introduction. Section 2 presents some recent valuable results in the field. Section 3 presents an innovative solution for an industrial service robot, called the cyber-worker, designed for safe and reliable collaborative work and quick reconfiguration of technological batch manufacturing lines. In Sect. 4 some implementation aspects are considered. In Sect. 5, conclusions and future research and development are elaborated.

2 State-of-the-Art in the Field

In recent years, when talking about the production technologies of the future, the terms "factory of the future", "Industry 4.0", smart cyber-physical systems, collaborative robots, industrial service robots and more are often used. Through its recent development programs, the European Commission (EC) has pointed out novel, smart technologies based on mass digitization, Internet of things, sensor-computer networks, intelligent robotic systems, new materials and nanotechnology. The ReconCell project [1] proposes to develop a flexible robotic work cell that allows very short, self-adaptable and affordable changeovers under the demanded conditions and based on end-user needs. This will be achieved with minimum use of additional resources over the system's lifetime. A new kind of robotic work cell has been developed in the ReconCell project. It enables the application of robots to small batch size production (also called "few-of-a-kind production"). The developed work cell is based on novel ICT technologies for programming, monitoring, and executing assembly operations in an autonomous way. It can be partly automatically reconfigured to execute new assembly tasks efficiently, precisely, and economically with a minimum amount of human intervention. This is possible due to recent developments in the areas of robot programming, fast reconfigurable machinery, vision technologies, modelling, and simulation. The proposed approach is backed up by a business intelligence module that guides production planning and the introduction of new technologies. The aim of the business intelligence is to show that the ReconCell system is economically viable, especially for SMEs. The HORSE project [2] deals with the research and development of smart integrated robot systems designed for small and medium enterprises, operated through the Internet of objects and based on dynamic production processes. The HORSE project aims to bring about a breakthrough in the manufacturing industry by proposing a new flexible model of a smart factory that involves close cooperation between people and robots, automatically guided vehicles and machine tools for the realization of various industrial tasks in an efficient way (Fig. 1).

Fig. 1. The CAD model of the new reconfigurable processing cell according to the ideas elaborated in the ReconCell EU-funded project [1]

The main strategy of this project is based on existing technologies and research results in robotics and smart factories, and integrates them into a coherent and flexible software framework. One of the objectives of the project is to generate new Centers of Excellence that provide infrastructure, expertise and a professional environment (ecosystem) enabling small and medium-sized enterprises easier access to robotic solutions. Together with the development of innovative concepts for industrial manufacturing systems, attention has been paid to the development of high-performance industrial service robots for collaborative work with humans. ABB has developed YuMi [3], a collaborative, dual-arm, small parts assembly robot solution that includes flexible hands, parts feeding systems, camera-based part location and state-of-the-art robot control. It is specially designed to meet the requirements of flexible and intensive production in the electronics industry. The Universal Robot [4] has been increasingly used in industry as a collaborative robot that has inherently built-in safety algorithms for working with people [5]. The SDA20D robot developed by Yaskawa [6] is a dual-arm, 15-axis robot designed for complex assembly and material handling applications. At the International Robot Exhibition 2013, Epson announced development work on an autonomous dual-arm robot [7] that recognizes and manipulates objects simultaneously with both arms and is capable of freely adjusting the amount of force applied to objects by the end-effectors (hands). The Hitachi Co. has developed NEXTAGE [8], a 15 DOF industrial humanoid for collaboration with conventional industrial robots and specialized equipment.

630

A. Rodić et al.

Some valuable results in designing and implementing dual-arm robotic systems for flexible production systems are presented in [9–11]. The innovative technical solution presented in this paper, in relation to the state of the art given in this section, refers to mechanical system improvements and new functionalities of a smart flexible manufacturing system based on the concept of spatially distributed intelligence and the application of a cloud-computing network. The mechanical improvements relate to an original constructive solution that leads to a significant increase in the workspace of a robot with variable configuration.

3 Smart Cyber-Physical System Designed for Flexible Manufacturing Systems

The main contribution of the paper refers to the design of a smart cyber-physical system to be used in flexible manufacturing systems in industry. The system consists of a dual-arm industrial service robot (Fig. 2) with extended maneuvering capabilities and enhanced collaborative ability. The system is designed to operate within a flexible manufacturing system that requires quick and easy reconfiguration of the technological production lines and adaptation to new production programs in smaller batches. Reconfiguring the production system does not require additional resources or reprogramming of controllers, since all the necessary information about the technological process is stored in the cloud-computing architecture.

Fig. 2. Solid-model of dual-arm service robot designed for collaborative work in assembly tasks

The contemplated 16 DOF system consists of the following components (Fig. 3): (a) a 3 DOF mobile platform (an omnidirectional motorized trolley) on which a specially designed mechanical interface is mounted; (b) a 1 DOF folding torso designed as the dual-arm robot base and carrier of the industrial robot arms; the folding torso can change its attitude deflection relative to the floor, which increases the overall workspace of the service robot (Fig. 4); (c) two 6 DOF industrial robot arms (Universal Robots UR-5 [4]); and (d) the intelligent applicative software interface (task-oriented command interface) that allows higher flexibility of the production system and better collaborative skills of the robot interacting with a man-worker.

Fig. 3. Elements of the mechanical interface specially designed for collaborative tasks

The system was implemented under a cloud-computing architecture that has three functional levels (Fig. 5): (i) the physical level, (ii) the network level (wireless communication level), and (iii) the applicative level. On the physical level there are man-workers, service robots, machine tools, conveyors, and buffers for storing items, along with supporting equipment and the sensor data acquisition system, which forms part of the technological production process. All actors at the physical level are connected to a unique sensor-computer-communication network that collects information from the physical layer, sends it through the network to the cloud-computer and returns the appropriate information of interest back to the individual participants (Fig. 5). Network level communication is divided into several layers: a wired layer, a WiFi layer and a local Bluetooth layer. Each stationary system is connected to the wired network system; it is also equipped with a Bluetooth wireless link for direct urgent communication with the nearest mobile robot. Mobile robots are equipped with a WiFi wireless system connected to the local WiFi host, and they have a Bluetooth wireless link for possible urgent communication with the nearest stationary system or other mobile robot(s) for data exchange. Using such a redundant system, the whole is quite adaptable to any possible loss of connectivity within the network.

Fig. 4. (a) Mobile service robot in a transport position. Reconfigurable robot torso: (b) middle attitude deflection to shoulder level, (c) lower attitude deflection to the knee level, and (d) overhead attitude deflection

The high control level of the service robot is an inherent part of the so-called "distributed intelligence". Such an architecture allows the optimal use of sensor-computer resources and relieves the controller of the service robot of tasks that can be performed elsewhere, on another computer. At the applicative level there is a cloud-computing system with the appropriate database. At this level the "brain and knowledge" of the system are located, as the applicative software executed at this level is responsible for the regular operation of the technological process at the physical level. At the applicative level, algorithms have been developed that govern robotic manipulation tasks (gripping skills, manipulation of various objects, processing of objects, etc.) [12, 13], manage the technique of collaboration between robots and humans, coordinate the dynamics of the technological process, monitor production quality, estimate security risks, etc. The database of the cloud-computer keeps the inherent knowledge about the technological processes, the phases of task accomplishment, the technological operations of the robot, the robot's relative positions with respect to the machine tools or the workers, etc. The database can be modified (upgraded) by adding new knowledge and information of interest or by erasing obsolete information. One can also back up the database to an external drive and thus keep useful information for a longer period.

Fig. 5. High-level presentation of the cloud-based control of manufacturing process

3.1 Collaborative Mechanical Interface

The collaborative mechanical interface, consisting of two lightweight Universal Robots (UR-5) arms mounted on a robot base movable up and down, is shown in Figs. 3 and 4. The robotic arms are placed at a distance of 0.60 m from each other, like human arms (Fig. 3). This dual-arm manipulating system can be adjusted in height (lifted or lowered) by the folding torso rotating about a revolute joint driven by a linear actuator (Fig. 4). The rotary joint is located at a height of 1.50 m, corresponding to the chest height of a man-worker. On demand, the linear actuator can lower or raise the common base with the dual-arm robot system so that it takes up a working position corresponding to a worker in a squat position or with arms raised above the head. In this way, the adaptive manipulation system is able, without moving the cart, to retrieve items that lie low on an object transporter or desk, or high on a shelf (buffer) located above the workers' heads. This practical mechanical interface of variable configuration achieves an extended working range of the robotic service system with respect to robots with a fixed base. The reconfigurable dual-arm robotic system is installed on an omnidirectional cart which brings the robot from one place to another, from one machine to another, depending on the needs of the technological process. Because of the increased reach of the robot arms, the service robot presented in the paper is more universal than others of similar purpose, and it can be used in different positions as a substitute for robots of invariable operating range.


In addition, the service robot shown in Fig. 3 has a modular structure, so that the tools or end-effectors attached to the last robot link can be easily replaced. To this end, the robot has a tool buffer (holder) for keeping different tools or end-effectors (e.g. gripper, hook, punch, drill, screwdriver, extractor, etc.). The motorized trolley is remotely operated [14, 15]. The operator at the command station issues commands that set the movement and position of the trolley in the workspace. Through the network (Fig. 5), the command to move the robot-trolley is sent to its controller. The robot-cart receives the coordinates of the target destination and the path by which it is to be transferred from one machine tool to another. The motorized cart also receives a command from the operator (or from the cloud-computer if automatic mode is activated) containing information about the new location/position it should take relative to the machine tool, or instructing it to come close to the human worker at an appropriate distance. The 3 DOF omnidirectional trolley is designed so that it can take any position on the floor of the workspace. Safety aspects are resolved in two ways in this paper. The first method involves inherent safety, i.e. the use of the producer-provided safety algorithms for robots designed for collaborative work with human workers. Since our CPS (Figs. 3 and 4) is made of two Universal Robots UR-5 arms, whose controllers already have the corresponding safety algorithms built in, these can be used to limit the robot composite speed and the force/torque in the joints, indirectly managed by controlling the current in the robot drives. It should be noted that a new generation of robots (e.g. KUKA LBR iiwa 7 R800 [16]) has built-in so-called active compliance algorithms, which operate on the basis of feedback from the joint torques, with torque sensors set up in the robot joints; in this case one speaks of a compliant robot structure. The second way of solving safety issues in collaborative human-robot tasks is based on the convenience offered by the concept of spatially distributed intelligence, supported by an adequate cloud-computing architecture (Fig. 5). The safety algorithms are implemented at the robot high control level (Fig. 6). These intelligent safety algorithms operate on the basis of the acquisition of heterogeneous sensory data covering human manoeuvres and robot movements in a shared task space. The database (Figs. 5 and 6) stores the specific know-how about the technological operations performed by humans and robots, the interrelations of the manipulation tasks, and the cooperative actions. An intelligent interpreter analyzes the movements at any time and compares the behavior of the robot, in response to the manipulation actions of the man-workers, with the reference actions in the database. If the interpreter observes a danger for the man-worker, it takes appropriate safety procedures: it stops or slows down the robot motion, changes its trajectory, warns the staff, etc. After the safety threat has ended, the CPS control system reactivates the robot, which continues to run where it left off, provided that the man-worker is also willing (ready) to continue the work.
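A toy sketch of such an interpreter check (our illustration only; the limits, field names and two-rule logic are hypothetical, while the actual know-how resides in the cloud database) could look like this:

```python
from dataclasses import dataclass

@dataclass
class ReferenceAction:
    max_joint_speed: float   # rad/s allowed for this cooperative action
    min_separation: float    # m, required human-robot distance

def safety_check(joint_speeds, separation: float, ref: ReferenceAction) -> str:
    """Compare observed robot motion and human proximity with the reference action."""
    if separation < ref.min_separation:
        return "STOP"            # human too close: halt the robot
    if max(abs(s) for s in joint_speeds) > ref.max_joint_speed:
        return "SLOW_DOWN"       # motion outside the recorded envelope
    return "CONTINUE"

ref = ReferenceAction(max_joint_speed=1.0, min_separation=0.5)
print(safety_check([0.4, -0.7, 0.2], separation=0.8, ref=ref))  # -> CONTINUE
```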

3.2 Intelligent Applicative Software Interface

The intelligent applicative software interface provides full control of the cyber-physical system [17, 18] shown in the scheme in Fig. 5. The applicative software was designed with the aim of facilitating the technological production in accordance with high manufacturing standards. The algorithms implemented in the context of the intelligent applicative software interface are part of the hierarchical distributed control architecture (Fig. 5). The flow-chart of the algorithm running on the cloud host-computer is presented in Fig. 6.

Fig. 6. Flow-chart of the algorithm running on the cloud host-computer


The database of the cloud-computing system keeps the knowledge and data of interest for the regular operation of the technological manufacturing process. There is a digital map (plan) of the working space, the layout of the machine tools in the workshop, the conveyors, the distribution of buffers in the workshop, and the required locations of workers and robots in accordance with the technological process. The database keeps knowledge about the manipulation skills of the robot handling various objects, the appearance of objects, images and CAD models of different objects, the technological way of gripping items to be accomplished by the service robot end-effector, the technological features of the robot's manipulation skills, the dynamics of technological operations, failure and emergency procedures, and so on. The database also contains appropriate information on the skills of the collaborative work of robots and humans. The movements of the workers are previously recorded by means of sensors and measurement equipment for recording human motion (a motion capture system or depth camera). The recorded trajectories of human hands and arms are digitized and transformed into joint angles suitable for application at the robot controller. The joint speeds and accelerations of human arms analyzed in collaborative tasks with the service robot are also recorded in the database. At the same time, the robot is trained to perform the desired movements synchronized with the human movements, so that task coordination of the collaborative activities is ensured. In the manufacturing process, some variables describing specific tasks and actions within the technological process cannot be expressed in a measurable way (e.g. careful grasping, smooth touch, compliant contact, etc.). In these cases, a narrative description is usually used. Narrative descriptions are converted into corresponding linguistic variables that can be subjected to qualitative graduation. Further, by applying fuzzy modeling, the linguistic variables are converted into a digital form suitable for implementation on the microprocessor at the higher control levels. Using this idea, features of a skilled man-worker can be mapped to the service robot (cyber-worker) via the fuzzy modeling technique and then applied in the high-level robot controller (Fig. 5). This methodology has already been successfully applied in [19], where cognitive (affective and social) human behavior was mapped to the EI-controller of a personal service robot.
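As a toy illustration of this fuzzy mapping (all set boundaries, the grading and the defuzzification rule below are our hypothetical examples, not the paper's model), a linguistic grading of contact force can be turned into a numeric limit for the robot controller:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets for contact force [N]: (left foot, peak, right foot)
SETS = {"gentle": (0.0, 2.0, 5.0), "moderate": (3.0, 8.0, 13.0), "firm": (10.0, 18.0, 25.0)}

def defuzzify(grades: dict) -> float:
    """Weighted average of the set peaks: linguistic grades -> crisp force limit."""
    den = sum(grades.values())
    return sum(g * SETS[n][1] for n, g in grades.items()) / den if den else 0.0

# A narrative "smooth touch" graded as mostly gentle, slightly moderate
print(defuzzify({"gentle": 0.8, "moderate": 0.2}))  # -> 3.2 N force limit
# Fuzzification of a crisp 4 N contact force back into linguistic grades
print({n: round(triangular(4.0, *abc), 2) for n, abc in SETS.items()})
```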

4 Implementation Aspects of Smart CPS in Flexible Production

The applicative software interface, implemented at the highest hierarchical level of system operation (Fig. 5), enables rapid and reliable exchange of information between the particular functional levels and process management. The service robot receives information from cameras and sensors located on the robot or in its surroundings (workspace). On the other hand, the robot receives useful information from the database of the superior cloud-computer [17, 18]. The data concern the technological process, the manner of accomplishing particular tasks and other details important for production. Information about objects (products) stored in the database contains data such as: appearance and shape (geometry) of the object (e.g. previously captured images of the object, CAD technical drawings, etc.), description of colors and textures, mass and material of the object, order of technological operations (grasping, transferring, insertion, fixation, etc.), data about the quality standards requested, force and torque on the robot gripper, etc. In this way, the database saves the recorded information of interest for the overall production system. Reconfiguring the production system means that it can change its production capacity and technical capabilities without investing additional material resources, but exclusively by relocating available resources (e.g. displacing robots and machines in a new way), changing the interior structure and configuration of the existing modules (e.g. changing the mode or way of operation of machine tools, robots, etc.) or by replacing human work with industrial service robots and vice versa. The designed service robot presented in Figs. 3 and 4 has the properties of enhanced mobility and changeable configuration (by changing the attitude deflection of the folding torso). Collaborative work of man and robot assumes that the mutual advantages of the biological and technological systems are used. The human is still superior to the robot in the cognitive sense (better perception, reasoning, creativity), but also in some sophisticated, very demanding manipulation tasks. Human manipulation skill can be recorded using a motion capture system and digitized in a form suitable for microprocessor processing. On the other hand, the industrial robot (Fig. 3) can be trained for tasks with the help of a teaching-pendant device. After synchronization of the data regarding the movements of the man-worker and the cyber-worker, the set of reciprocal movements is recorded in the cloud. In this way, the cloud-computer is able to control collaborative tasks of human and robot.
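A minimal sketch of such an object record (the field names below are our own illustration, not the paper's actual database schema):

```python
from dataclasses import dataclass, field

@dataclass
class ProductRecord:
    name: str
    cad_model_uri: str                   # link to CAD drawing / captured images
    color_texture: str                   # narrative description
    mass_kg: float
    material: str
    operations: list[str] = field(default_factory=list)  # ordered technological ops
    max_gripper_force_n: float = 0.0     # force limit on the robot gripper

record = ProductRecord(
    name="housing-A", cad_model_uri="db://cad/housing-A.step",
    color_texture="matte grey, smooth", mass_kg=0.35, material="Al 6061",
    operations=["grasping", "transferring", "insertion", "fixation"],
    max_gripper_force_n=15.0,
)
```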

5 Conclusion

The paper presents a new smart cyber-physical system specially designed to further improve the flexibility of production systems and the collaborative capabilities of industrial service robots intended for interactive work with a human in an information-structured workspace. The presented robot system differs from similar systems of the same purpose by having an extended working range and a higher level of versatility of application, thanks to the smart applicative interface and the cloud-computing architecture. Taken together, the innovations presented in this paper raise the level of flexibility of the production system and enable fast and easy reconfiguration of the technological process. In the following period, a prototype of the robotic system from Fig. 4 is expected to be developed and experimentally verified. The proposed dual-arm robotic system with two UR-5 lightweight robot arms is suitable for implementation in the agro-food industry for packaging and palletizing small products in boxes. The proposed system can also be applied to assembling parts in cooperation with a human worker, as presented in Fig. 2. Generally speaking, the system is suitable for SMEs due to its high versatility.


References

1. ReconCell project. http://www.reconcell.eu/. Accessed Jan 2018
2. HORSE project. http://www.horse-project.eu/. Accessed Jan 2018
3. ABB YuMi. http://new.abb.com/products/robotics/industrial-robots/yumi
4. Universal Robots - Collaborative Industrial robot arm. https://www.universal-robots.com/
5. Dual-arm Universal Robot. https://www.crossco.com/blog/rethinks-baxter-vs-universalrobots-which-collaborative-robot-best-you
6. SDA20D Dual-Arm Industrial Robot. https://www.motoman.com/industrial-robots/sda20d
7. EPSON dual-arm autonomous robot. https://global.epson.com/innovation/engineer/dualarm_robot.html
8. Humanoid Dual Arm Industrial Robot NEXTAGE. https://www.hitachi-hightech.com/eu/product_detail/?pn=ind-nextage&version=
9. Makris, S., Tsarouchi, P., Matthaiakis, A.S., Athanasatos, A., Chatzigeorgiou, X., Stefos, M., Giavridis, K., Aivaliotis, S.: Dual arm robot in cooperation with humans for flexible assembly. CIRP Ann. Manuf. Technol. 66(1), 13–16 (2017)
10. Makris, S., Tsarouchi, P., Surdilovic, D., Krueger, J.: Intuitive dual arm robot programming for assembly operations. CIRP Ann. Manuf. Technol. 63, 13–16 (2014)
11. Makris, S., Karagiannis, P., Koukas, S., Matthaiakis, A.S.: Augmented reality system for operator support in human–robot collaborative assembly. CIRP Ann. Manuf. Technol. 65(1), 61–64 (2016)
12. Rodić, A., Miloradović, B., Popić, S., Spasojević, S., Karan, B.: Development of modular compliant anthropomorphic robot hand. In: Pisla, D., Bleuler, H., Rodic, A., Vaida, C., Pisla, A. (eds.) New Trends in Medical and Service Robots. Theory and Integrated Applications, Mechanisms and Machine Science, vol. 16, p. 167. Springer (2014). ISBN 978-3-319-01591-0
13. Rodić, A., Miloradović, B., Popić, S., Urukalo, Đ.: On developing lightweight robot-arm of anthropomorphic characteristics. In: Bleuler, H., Pisla, D., Rodic, A., Bouri, M., Mondada, F. (eds.) New Trends in Medical and Service Robots, Book 3, Mechanisms and Machine Science, vol. 38. Springer (2015). ISBN 978-3-319-23831-9
14. Rodić, A., Jovanović, M., Popić, S., Mester, G.: Scalable experimental platform for research, development and testing of networked robotic systems in information structured environments. In: Proceedings of the IEEE SSCI 2011, Symposium Series on Computational Intelligence, Workshop on Robotic Intelligence in Information Structured Space, Paris, France, pp. 136–143 (2011)
15. Rodić, A., Stojković, I.: Building of open structure wheel-based mobile robotic platform. In: Habib, M.K., Paulo Davim, J. (eds.) Interdisciplinary Mechatronics: Engineering Science and Research Development, pp. 385–421. ISTE-Wiley, London (2013). ISBN 978-18-4821-418-7
16. KUKA. https://www.kuka.com/en-de/products/robot-systems/industrial-robots/lbr-iiwa
17. Rodić, A., Jovanović, M., Stevanović, I., Karan, B., Potkonjak, V.: Building technology platform aimed to development service robots with embedded personality and enhanced communication with social environment. In: Digital Communications and Networks, vol. 1, pp. 112–124. Elsevier (2015). https://doi.org/10.1016/j.dcan.2015.03.002
18. Rodić, A., Jovanović, M., Vujović, M., Urukalo, Dj.: Application-driven cloud-based control of smart multi-robot store scenario. In: Rodić, A., Borangiu, Th. (eds.) Advances in Robot Design and Intelligent Control, Proceedings of the 25th International Conference on Robotics in Alpe-Adria Danube Region, Advances in Intelligent Systems and Computing, vol. 540, pp. 347–360 (2016)
19. Peake, I., Vuyyuru, A., Blech, J.O., Vergnaud, N.: Cloud-based analysis and control for robots in industrial automation. In: Proceedings of 2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS), Melbourne, VIC, Australia, pp. 14–17 (2015). https://doi.org/10.1109/icpads.2015.113

Automatic Painting and Paint Removal System: A Preliminary Design for Aircraft Applications

Umberto Morelli1,2, Matteo D. L. Dalla Vedova2, and Paolo Maggiore2

1 Instituto Superior Técnico, Av. Rovisco Pais 1, 1049-001 Lisbon, Portugal
2 Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy
[email protected]

Abstract. The maintenance of the aircraft finish system is at present executed completely manually, involving a large amount of manual labor, over a long time and in a hazardous environment. Automation of the process would dramatically speed it up and decrease the manpower involved. Moreover, costs and environmental risks are expected to be reduced by automation. Several solutions are being developed; nevertheless, a system able to carry out the maintenance process automatically is not yet available. In this work, a preliminary design of an automatic system for aircraft painting and paint removal has been carried out. The work points out that a cost-effective solution for this complex problem is possible. As a preliminary study, it is intended to be a starting point for further work on this subject.

Keywords: Aircraft finish system · Paint removal · Spray painting robot · Automatic system · Aircraft

1 Introduction

As the fleet of commercial aircraft grows, an increase in the number of aircraft needing maintenance is expected in the future [1]. Periodically, aircraft are required to have the finish system removed and reapplied to check the substrate integrity, protect it from corrosion or simply change the aircraft livery [2]. The whole process is nowadays carried out completely manually. It requires a large amount of time and labor [3]. Furthermore, it has to be accomplished inside a dedicated hangar for environmental safety reasons; thus, only one aircraft per painting hangar is processed at a time. To tackle the aircraft fleet growth, there are two ways: either to increase the number of painting hangars and consequently the number of workers, or to increase the finish system maintenance rate. Following the latter approach, the implementation of an automatic system is a solution to speed up the maintenance process. As for other industries during the past decades, a big challenge for the aerospace industry nowadays is the automation of aircraft finish system maintenance. According to Waurzyniak [1], this solution would bring great advantages. First of all, the maintenance rate would increase, as well as the quality of the final result. Meanwhile, the cost of the whole process would drop, as a large amount of highly skilled labor is no longer required. Moreover, the introduction of an automatic system would drastically reduce the workers' exposure to a toxic environment during the painting and paint removal processes and, not least, the environmental impact of the process would be reduced thanks to waste optimization. Presently, there is no robotic system commercially available that is able to carry out coating system maintenance. Many projects are under development, especially for paint removal automation, while only a few involve the painting of aircraft. For paint removal, the more remarkable projects (still under development) are the Advanced Robotic Laser Coating Removal System (ARLCRS) by Carnegie Mellon University's National Robotics Engineering Consortium (NREC) and Concurrent Technologies Corporation (CTC), and the Laser Coating removal Robot (LCR) by STRATAGEM. The first uses a continuous wave laser mounted on a state-of-the-art mobile robot to remove the coating system from medium to small size military aircraft [4]. The latter implements a 20 kW CO2 laser to evaporate and combust the paint, which is vacuumed from the surface and passed through a filtration system; the laser is mounted on an eight Degree of Freedom (DoF) robotic arm and a four DoF mobile platform. The developer expects a 50% reduction in processing time and a 90% labor reduction [5]. For aircraft coating, two systems are operative: the Robotic Aircraft Finishing System (RAFS) developed by Lockheed Martin for F-35 coating [6] and the Automated Spray Method (ASM) developed by Boeing for B-777 wings [7]. Both have a six DoF robotic arm mounted on auxiliary axis rails.

2 Specifications and Requirements

The subject of the present work is the preliminary design of an automatic system able to carry out the finish system maintenance. The maintenance of the aircraft finish system can be divided into three main stages: masking, paint removal and painting [8]. Of these, only the automation of painting and paint removal was studied. The automation of aircraft masking would bring big benefits, as it is a long-lasting process that involves several workers; nevertheless, the technology to do it is not yet available and/or the system would become too complex and expensive. In the present section, the factors that mainly influence the automation of the painting and paint removal processes are described. There are many painting and paint removal methods; the present study is confined to describing the available solutions, leaving the selection of the painting and paint removal method to the client. An automatic system for painting and paint removal is influenced primarily by three factors: the aircraft size and shape, the paint application requirements and the paint removal requirements. Generally, the maintenance system has to process airplanes of different shapes and sizes. To decide the system dimensions, medium to small military and civilian aircraft were selected as system objectives. Specifically, the largest airplane to process is the Lockheed C-130 Hercules, whose dimensions are shown in Table 1. Accordingly, the maintenance system's workspace has the dimensions given in Table 2. The aircraft painting process requires compressed air, a paint tank and a spray gun. The maintenance system must handle this equipment and apply the paint with the required thickness, following the technical prescriptions [8].

Table 1. C-130 H geometrical features [9].
Length: 29.30 m
Height: 11.40 m
Wingspan: 39.70 m
Fuselage height: 4.60 m
Fuselage width: 4.30 m
Landing gear height: 0.52 m

Table 2. Dimensions of the maintenance system's workspace.
Maximum height: 13.00 m
Minimum height: 0.50 m
Length: 35.00 m
Width: 45.00 m

Moreover, painting requires at least one painter at each side of the airplane for quality reasons [8]. Finally, because of the paint solvents dispersed in the air, every component has to be ATmosphere EXplosibles (ATEX) certified. The requirements for the automation of the paint removal process depend upon the removal method selected by the client. Generally, mechanical and optical methods require high end-effector positioning accuracy and precision, the ability to handle the required equipment (motor-driven sander, laser equipment, etc.) and a sensor to determine if and where the coating has been removed. On the other hand, chemical removal requires spray equipment and the scraping of all loosened coatings with a squeegee.

3 System Design

Different possible designs were devised and evaluated, and a trade-off analysis was carried out among them to select the solution to be developed. The selection criteria rewarded the design with the lowest complexity and the maximum adaptability to different aircraft. In the selected design, a robotic arm is mounted at one end of a beam whose longitudinal axis lies in a plane parallel to the ground. The beam is supported at the other end by a lifting system that moves the beam along a vertical axis. The lifting system is positioned on an omnidirectional Automatic Guided Vehicle (AGV), a vehicle able to move in any direction and to perform zero-radius turns [10]. In Fig. 1, an overview of the system is presented. The AGV and the lift provide 4 DoF to the maintenance system and position the robotic arm with respect to the work surface. The arm has to position and orient its end-effector with respect to the work surface; thus, it requires at least 6 DoF. Accordingly, the maintenance system has 10 DoF.


Fig. 1. Schematic overview of the proposed AGV system layout.

The maintenance system has to be able to locate its end-effector 4 meters away from its vertical structure. This is necessary to reach the fuselage center line without touching the structure; the length of the beam plus the length of the extended robot arm must therefore be at least four meters. The height of the lifting system depends on the height of the AGV and the maximum height to be reached by the robotic arm. To compete with state-of-the-art robots, the new maintenance system must ensure high performance and rapid development at the lowest investment and operating costs; thus, the driving design criterion was simplicity. In the present section, the preliminary design of each component of the system is described. To start designing the components of the structure, the first step was the selection of the robotic arm, in order to know the load on the structure and the dimensions of the other parts of the system. Knowing the weight and the workspace of the robotic arm, the horizontal beam and the lifting system were designed, and the subsystems and the AGV were selected.

3.1 Robotic Arm Selection

To select a robotic arm among the many available on the market, the following criteria were applied: (i) lightness, (ii) workspace equal to or bigger than that of a human painter, (iii) ATEX certification, (iv) a production company able to ensure spare parts supply in the next decades, (v) availability of different end-effector tools. The robotic arm finally selected is the FANUC Paint Mate 200iA/5L [11]. It weighs 37 kg and can handle a payload of 5 kg. The robotic arm can extend up to 1267 mm from its base in the vertical plane. To make a conservative design, the lift structure height was computed without considering the AGV; thus, the height of the lifting system is 11.7 m. The length of the beam should be 3.1 m, but a 3.5 m long beam is used to oversize the system.
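The dimensional bookkeeping behind these choices can be checked with a few lines (our sketch, simply restating the numbers above):

```python
ARM_REACH = 1.267      # m, Paint Mate 200iA/5L extension from its base
MAX_HEIGHT = 13.0      # m, workspace maximum height (Table 2)
REQUIRED_REACH = 4.0   # m, distance needed to reach the fuselage center line
BEAM_LENGTH = 3.5      # m, oversized beam

lift_height = MAX_HEIGHT - ARM_REACH          # conservative: AGV height ignored
print(f"lift height = {lift_height:.1f} m")   # -> 11.7 m, as in the text
print(f"horizontal reach ok: {BEAM_LENGTH + ARM_REACH >= REQUIRED_REACH}")
```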

3.2 Robotic Arm Support Structure

The robotic arm is mounted at the tip of a cantilever beam supported at the other end by the lifting structure. The beam has been designed to be light and to ensure a small displacement of the robotic arm. To limit the weight, the material selected is the aluminum alloy Al 6061-T6 [12]. A constant-section I-beam was selected from the American Society for Testing and Materials (ASTM) standard [13]. Using the Euler-Bernoulli beam theory, a parametric study was carried out to select the beam cross-section dimensions, considering the tip displacement and the beam weight. The selected beam weighs 60.8 kg and allows a tip displacement of 2.75 mm. This displacement is constant during the whole operating life of the system, making it possible to take it into account in the system control design, restricting the end-effector positioning error. The beam is supported by a plate structure composed of four aluminum alloy plates welded together, which also connects it to the lifting system. The lifting system is composed of four linear guides and a ball screw mounted on a truss structure, as in Fig. 2a. Eight linear bearings are bolted to the beam support and coupled with the rails mounted on the lift structure. A screw nut coupled with the ball screw is also bolted to the beam support. The bearings transmit to the lifting structure only forces normal to their axis, while the screw holds the vertical load and moves the beam support along the lift axis.
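A minimal sketch of such a parametric study (our illustration; the candidate section properties and the tip load are placeholders, not the paper's values) evaluates the Euler-Bernoulli tip deflection delta = P L^3 / (3 E I) of the cantilever:

```python
E = 68.9e9  # Young's modulus of Al 6061-T6 [Pa]

def tip_deflection(P: float, L: float, I: float) -> float:
    """Euler-Bernoulli tip deflection of a cantilever loaded at the tip."""
    return P * L**3 / (3.0 * E * I)

P = (37.0 + 5.0) * 9.81  # arm mass plus payload, applied as a tip load [N]
L = 3.5                  # beam length [m]
for I in (1e-6, 2e-6, 5e-6, 1e-5):  # candidate second moments of area [m^4]
    d = tip_deflection(P, L, I)
    print(f"I = {I:.0e} m^4 -> tip deflection = {d * 1000:.2f} mm")
```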

Fig. 2. Details of the robotic arm support structure: (a) lifting system; (b) free body diagram of the beam support.

According to Fig. 2(b), the equilibrium equations are written:

$$F_a = W_{house} + V_y = 0.149\ \mathrm{N/mm} \cdot w_{bear} + 884.9\ \mathrm{N}, \qquad (1)$$

$$F_{bear4} + F_{bear3} - F_{bear1} - F_{bear2} = 0, \qquad (2)$$

$$\left(F_{bear1} + F_{bear2} + F_{bear3} + F_{bear4}\right) h_{bear} = M_x + V_y \frac{w_{bear}}{2}, \qquad (3)$$

where M_x and V_y are the moment and force due to the beam and robotic arm weight, respectively, F_bear are the forces in each bearing, W_house is the weight of the portion of the beam inside the support, and F_a is the screw force. To solve Eqs. (1)–(3), the following conservative assumptions were made: F_bear4 = 0 and F_bear3 = F_bear2. The load on the bearings influences the bearing shaft size and therefore its weight. Thus, w_bear and h_bear were selected equal to 500 mm and 700 mm, respectively. The design load on the bearings is 3436 N. To compute the beam support thickness, one of the two horizontal plates was studied as a cantilever beam supporting the entire load due to both the horizontal beam and the robotic arm.


The thickness required by the von Mises yield criterion is 19 mm [14]; the support then weighs 44 kg. Through the bearings, the beam support transmits the moment due to the beam and the robotic arm to the lift structure. The lift structure is manufactured from aluminum alloy plate, bent and cut. To allow preliminary computations, all its structural elements have the same thickness and width, and were studied as jointed beams. A buckling analysis of the lift truss structure was carried out to size it; the structure was assumed to be loaded only by its own weight. By Euler buckling theory, the critical height for the root pillars is:

$$h_{cr} = \sqrt{\frac{\pi^2 E I}{1.44\, f\, H_{lift}\, \rho g A}} \qquad (4)$$

where E is the material Young's modulus, I is the smallest moment of inertia of the pillar section, f = 3 is a safety factor, ρ is the material density, g the gravitational acceleration, and A the pillar section area. Assuming L-section pillars, a parametric analysis was conducted to select the pillar height and section dimensions, avoiding buckling while minimizing the lift weight. According to this analysis, the truss structure was designed to have vertical components of 780 mm with a thickness of 5 mm and an L-section of 60 mm. As a result, the lift structure mass is 103 kg. To validate the design, the two-dimensional frame analysis tool Ftool [15] was used. The truss structure was loaded by its own weight and by the arm and beam moments (their weight is carried by the linear actuator). The results show that the maximum load acts on the front root pillars and is a compression load of 2.7 kN. Since the critical load for a column fixed at one end and simply supported at the other is $P_{cr} = \pi^2 E I / (1.2 h)^2 = 62\ \mathrm{kN}$, buckling will not occur in this structure.
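As a quick cross-check of Eq. (4) and of the Euler load quoted above, the following sketch uses approximate properties of an L 60×60×5 aluminum angle; the minimum principal second moment of area (here 8·10⁻⁸ m⁴) is our assumption:

```python
import numpy as np

E = 68.9e9     # Al 6061-T6 Young's modulus [Pa]
rho, g = 2700.0, 9.81
f = 3.0        # safety factor from Eq. (4)
H_lift = 11.7  # lift height [m]
A = 5.75e-4    # approximate L 60x60x5 cross-section area [m^2]
I = 8.0e-8     # assumed minimum second moment of area [m^4]

# Critical self-weight buckling height of the root pillars, Eq. (4)
h_cr = np.sqrt(np.pi**2 * E * I / (1.44 * f * H_lift * rho * g * A))
print(f"h_cr = {h_cr:.1f} m")  # well above the 0.78 m pillar length

# Euler critical load of a 0.78 m fixed-supported pillar
P_cr = np.pi**2 * E * I / (1.2 * 0.78)**2
print(f"P_cr = {P_cr / 1e3:.0f} kN")  # ~62 kN >> the 2.7 kN compression load
```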

3.3 Lifting System Design

As discussed earlier, the lifting system is composed of four linear guides and a linear actuator, namely a ball screw driven by an electric motor. While the linear guides transmit moments to the lift, the ball screw holds the vertical load and moves the beam support along the lift. To design the linear guides, the bearings and the rails were selected. Continuously supported round rails were selected to minimize rail bending and friction, and to avoid buckling [16]. For the bearing selection, the following criterion was applied: among the bearings able to support the required load (3436 N), the one with the smallest shaft diameter is selected. Moreover, it has to be corrosion resistant to ensure a long working life in a polluted environment. The rail shaft diameter is important because the rail weight increases approximately with the square of the shaft diameter. The bearings selected are Thomson's SSETWNO M16-CR, whose rail shaft diameter is 16 mm, while the rails are the LSRM16. The total weight of the linear guide system is then 97 kg. The other component of the lifting system is the linear actuator. Generally, three types of linear actuators are available: hydraulic, pneumatic and electromechanical systems. An electromechanical system was selected because, despite being more expensive, it has low maintenance costs, high accuracy and easy control. Moreover, it holds the load without consuming power [17].


As already mentioned, the actuator is composed of a screw driven by an electric motor. The screw is supported at the upper end by the lift structure, while the other end is mounted on the AGV platform and connected to the motor. For this application, a ball screw is used because of its higher precision and efficiency, lower vibrations and longer operative life, although it is more expensive [18]. A Thomson 40 mm diameter screw was selected because it allows a vertical velocity of 2.5 m/min, comparable with the vertical velocity of commonly used human lifts. For this screw, the buckling load is 7250 N, five times the total vertical load on the system. The screw is driven by an electric motor. Three types of electric motors are generally used for this positioning application: Direct Current (DC), stepper and servo motors. DC motors have a low cost but low accuracy, so they are rarely used for accurate positioning. In this project, a stepper motor is used because, compared to a servo motor, it is cheaper, it can work in an open loop, it has higher performance at low speeds, and it requires less maintenance (stepper motors are brushless). The minimum torque required for the motor was calculated to be 7 N·m, and the stepper motor PK599BE-N7.2 by Oriental Motor was selected. Its torque-speed characteristic allows the direct connection of the screw to the motor.
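A rough sizing of the steady lifting torque can be sketched as below; the screw lead and efficiency are our assumptions, and the 7 N·m requirement quoted above additionally covers friction and acceleration margins:

```python
import numpy as np

F = 7250.0 / 5  # total vertical load on the screw [N], from the text
lead = 0.010    # assumed screw lead [m/rev]
eta = 0.9       # typical ball screw efficiency (assumed)

T_raise = F * lead / (2 * np.pi * eta)  # steady lifting torque [N*m]
rpm = 2.5 / lead                        # screw speed for 2.5 m/min lifting
print(f"T = {T_raise:.1f} N*m at {rpm:.0f} rpm")
```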

3.4 Subsystems

This section lists and describes the required subsystems not covered in the previous sections. First of all, to control the robotic arm, it has to be linked to its controller, the R-30iA Mate Controller. Moreover, to paint the aircraft, paint and compressed air are supplied to the robotic arm. The air pressure and flow rate depend on the painting technique used. To limit the weight of the maintenance system, the air is supplied by hoses linked to an external compressor. For the same reason, electric power is also supplied to the system by a cable connected to an external power source. Therefore, the maintenance system does not have to stop to recharge or change batteries. It must be ensured that the AGV platform does not run over the supply line; thus, the electric cable and the air hose are mounted on a retractable reel. Two types of retractable reel are possible: spring or motor driven. For this system, electric motor driven retractable reels are used to actively control the tension force on the cables and hoses. The paint tank has to be located on the AGV platform as well. To size it, the paint usage was estimated. The Lockheed C-130 Hercules has a wetted area of 2323 m2; assuming two robots painting it, each robot paints half of it, about 1163 m2. The specifications of different paints were taken into account, and the paint volume needed was estimated at 120 L per robot. A tank of this size is not required when painting smaller aircraft or when using higher-coverage paints. Big paint tanks are difficult to clean and handle, and they increase the paint waste due to left-over paint. To reduce the weight, the cost and the paint waste, a 60 L tank was selected. This selection implies that, when painting larger aircraft, a refill of the tank may be required. The total tank weight is then 128 kg (assuming 80 kg of paint). The subsystems required for paint removal depend on the method selected. Chemical stripping does not require any additional subsystem, while optical paint removal requires laser equipment. Mechanical removal by water or media blasting requires a dedicated tank and blasting equipment on the robotic arm tip.


If the paint is removed by motor-driven abrasive equipment, only the arm manifold needs to be replaced. The implementation of these subsystems is not discussed in the present work.

3.5 AGV Platform

To select an AGV, it is necessary to know the load it supports. According to Table 3, the payload on the vehicle is 635.6 kg. The weight of the parts not yet designed is unknown; thus, to select the vehicle conservatively, a minimum payload of 1000 kg was assumed. The present project requires a zero-turning-radius vehicle that can be guided with high accuracy. Moreover, it has to be possible to locate the robotic arm at a minimum height of 0.5 m above the ground.

Table 3. Vertical load on the AGV.

Part             Weight [kg]
Robotic arm      37.0
Arm controller   56.0
Horizontal beam  60.8
Beam support     44.0
Ball screw       105.0
Linear bearings  3.0
Rails            93.6
Stepper motor    5.0
Lift structure   103.2
Paint tank       48.0
Paint            80.0
Total            635.6

The AGV that satisfies all the requirements is the RoboMate 17 by Vetex. It is equipped with four omnidirectional wheels that support up to 1000 kg each. Unloaded, its maximum speed is 67 m/min [19]. When locating the subsystems and the lift on the vehicle, an even load distribution has to be ensured. There are two main problems to take into account: paint consumption and positioning of the lift axis at the center of the vehicle. The paint consumption causes a shift of the system's Center of Gravity (CG); thus, the load on each wheel changes during painting. This problem could have been solved by positioning the paint tank at the wheels' centroid, but this was not possible because the lift axis has to be located coincident with the centroid. This ensures that, when the vehicle performs a zero-radius turn, the result is only a rotation about the lift axis without any translation. The CG positioning problem is divided into lateral and longitudinal positioning. In this section, the AGV is assumed to have a longitudinal and a lateral plane of symmetry. As shown in Fig. 3(a), the lateral CG positioning problem can be solved by using the robotic arm controller to locate the CG on the symmetry plane.


Then:

$$x_{control} = -\frac{x_{screw}\, W_{screw}}{W_{control}} = 216\ \mathrm{mm} \qquad (5)$$

The CG longitudinal positioning is illustrated in Fig. 3(b). The paint tank is located as close as possible to the vehicle centroid to reduce the variation of the system CG as the paint in the tank is consumed. The arm controller is then used to balance the robotic arm and beam moments. As a result, the system CG is positioned 318 mm in front of the symmetry plane with the empty tank, and 210 mm with the tank filled with 80 kg of paint. Consequently, the CG is located within the wheel footprint, ensuring that the AGV will not tip over due to the moments of the extended beam and robotic arm. The addition of a balancing mass can be considered to position the CG closer to the vehicle centroid. The design of the lift-AGV attachment is not discussed in the present work and will be the subject of a future detailed study of the system structure.
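The lateral balance of Eq. (5) amounts to a one-line moment balance; in the sketch below, the lateral screw offset is a hypothetical value back-computed from the 216 mm result, while the weights come from Table 3:

```python
# Moment balance about the symmetry plane, Eq. (5)
W_screw, W_control = 105.0, 56.0  # component weights from Table 3 [kg]
x_screw = -0.115                  # hypothetical lateral screw offset [m]

x_control = -x_screw * W_screw / W_control
print(f"x_control = {x_control * 1e3:.0f} mm")  # ~216 mm, matching Eq. (5)
```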

Fig. 3. Top (a) and side (b) view of the system.

4 Conclusions

The preliminary design of an automatic solution for aircraft painting and paint removal has been proposed. To compete with the state-of-the-art robots under development, the system was designed to ensure high performance and rapid development at the lowest investment and operating costs. To ensure high performance, high-accuracy selection criteria were adopted during the design. Moreover, the system was designed to process as many aircraft as possible, considering different shapes and sizes. Finally, to reduce costs, it is composed mostly of off-the-shelf components, which do not need to be designed and manufactured in-house but are mass-produced by specialized companies at lower cost and high quality. Furthermore, a spare parts supply is available during the operational life of the system. Because this is a complex problem, the development time for a project of this kind is generally long; in particular, developing and testing the control system requires a long time.


To reduce the development time, the system was designed to be as simple as possible, both from the structural and the control point of view. In detail, the system is composed of a FANUC Paint Mate 200iA/5L robotic arm installed at the tip of a 3.5 m long aluminum alloy I-beam. The beam is held by a support that connects it to a lifting system composed of an aluminum truss structure, four linear guides and a ball screw driven by a stepper motor. The whole structure is mounted on a Vetex RoboMate omnidirectional AGV, on which the paint tank and the robotic arm controller are installed. The AGV receives electric power and compressed air from external sources through cables and hoses. Except for the lift truss structure and the beam support, all the components listed above are off-the-shelf. Future investigations are required to complete the structural design and to develop and implement the control system.

References

1. Waurzyniak, P.: Expanding the horizons of aerospace automation. Manuf. Eng. 156(2), 59–67 (2016)
2. Koleske, J.V.: Paint and Coating Testing Manual: Fifteenth Edition of the Gardner-Sward Handbook. ASTM Manual Series. ASTM International (2012)
3. Then, M.J.: The future of aircraft paint removal methods. Master's thesis, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio, September 1989
4. Arthur, J.: Robotic laser system to strip paint from aircraft. Adv. Coat. Surf. Technol. 26(1), 2–3 (2013)
5. STRATAGEM: Laser coating removal robot. http://www.stratagemgroup.nl/project-lasercoating.html. Accessed 12 Apr 2016
6. Seegmiller, N.A., Bailiff, J.A., Franks, R.K.: Precision robotic coating application and thickness control optimization for F-35 final finishes. SAE Int. J. Aerosp. 2, 284–290 (2009). (2009-01-3280)
7. Waurzyniak, P.: Picking up the pace in aerospace production. Manuf. Eng. 69–79 (2014)
8. USAF: TO 1-1-8 Technical Manual: Application and Removal of Organic Coatings, Aerospace and non-Aerospace Equipment. Secretary of the Air Force, March 2016
9. Reed, C.: Lockheed C-130 Hercules and Its Variants (Schiffer Military History). Schiffer Publishing Ltd. (1999)
10. Diegel, O., Badve, A., Bright, G., Potgieter, J., Tlale, S.: Improved mecanum wheel design for omnidirectional robots. In: Proceedings of the 2002 Australasian Conference on Robotics and Automation, Auckland, pp. 117–121 (2002)
11. FANUC Robotics America, Inc.: Paint Mate 200iA/5L, MDS 00085, December 2009. http://www.fanuc.eu/ch/en/robots/robot-filter-page/paint-series/paint-mate-200ia-5l. Accessed 13 Oct 2016
12. ASM International Handbook Committee: Metals Handbook: Properties and Selection, vol. 2. ASM International (1990)
13. ASTM A6: Standard Specification for General Requirements for Rolled Structural Steel Bars, Plates, Shapes, and Sheet Piling (2009)
14. Bakhoum, M.: Structural Mechanics, vol. 1. Mourad Bakhoum (1992)
15. Martha, L.F.: Ftool - two-dimensional frame analysis tool. Educational version 2 (2001)


16. Thomson Industries, Inc.: RoundRail Linear Guides and Components, CTEN-0002-03A. http://www.thomsonlinear.com/downloads/bearings_guides/RoundRail_LinearGuides_Components_cten.pdf. Accessed 16 Oct 2016
17. Boldea, I., Nasar, S.A.: Linear electric actuators and generators. In: Electric Machines and Drives Conference Record, p. MA1-1. IEEE International (1997)
18. Thomson Industries, Inc.: Lead Screws, Ball Screws and Ball Splines, CTEN-0006-02A | 20151030TJ. http://www.thomsonlinear.com/downloads/screws/Leadscrews_Ballscrews_Splines_cten.pdf. Accessed 16 Oct 2016
19. Thomas, R.: Selecting and Applying Rolling Element Linear Bearings and Guides. Thomson Industries, Inc., 1500 Mittel Blvd, Wood Dale, IL

Base Frame Calibration of a Reconfigurable Multi-robot System with Kinesthetic Guidance

Timotej Gašpar, Robert Bevec, Barry Ridge, and Aleš Ude

"Jožef Stefan" Institute, Jamova 39, 1000 Ljubljana, Slovenia
{timotej.gaspar,robert.bevec,barry.ridge,ales.ude}@ijs.si

Abstract. Reconfigurable manufacturing systems (RMS) provide means to deal with changes and uncertainties in highly dynamic production processes. They allow for a relatively quick adjustment of various modules within the production line. To further increase the flexibility of such systems, multiple robots can be used within them. Multi-robot systems provide a higher degree of flexibility and efficiency compared to single-robot systems, and can perform tasks that require a high level of dexterity. However, to ensure that the robots are able to precisely perform cooperative tasks, the system must be well calibrated. In this paper, we present a novel approach to robot base frame calibration that exploits the kinesthetic guidance feature of collaborative robots. The developed method is suitable for RMS, as it is more time efficient and intuitive, without drawbacks in precision.

Keywords: Reconfiguration · Multi-robot · Base frame calibration · Industry · Collaborative robots

1 Introduction

Large batch size manufacturing processes have been quick to adopt robotic manipulators to automate large portions of their production, as robotic manipulators excel at highly repetitive tasks. Traditionally, the high costs of invested resources arise not only from the costs of the robot manipulators and all related peripherals, but also from the time invested into planning, developing and programming the production line [1]. Manufacturing companies that want to integrate robot manipulators into their production tend to outsource the robot integration, as the whole process requires knowledge specific to the robot brand. This kind of investment does pay off in large batch size manufacturing scenarios, but it is harder to justify in more dynamic manufacturing processes. For these reasons, it is rather uncommon to see robots operate in small or medium sized enterprises (SME) with small batch size productions, also known as "few-of-a-kind productions". These kinds of processes demand a high degree of flexibility as they tend to change often. These demands led to an increase in research and development efforts towards reconfigurable manufacturing systems (RMS) [2,3].


Reconfigurability can be achieved through the implementation of reconfigurable fixturing systems [4,5], modules that can be quickly plugged into or unplugged from a robot cell [6], a component-based technology for robot workcells [7], etc. Regardless of the technology used to achieve reconfigurability, the paradigm stays the same: the robot workcell should be reconfigurable in the least amount of time possible, to accommodate changes of production parameters. One of the technologies that enhances the intuitiveness and time efficiency of robot programming is "programming by demonstration" (PBD), which has been extensively studied in recent years [8]. With the recent surge of collaborative robots on the market, PBD technology has also seen adoption in industrial environments [9]. Many collaborative robots also provide a so-called "gravity compensation" control mode, in which the torques commanded to the joints are just right to nullify the effect of gravity on the robot's structure. When this kind of control is in effect, the robot is completely compliant to external forces, and it is thus possible to operate the robot manipulator by kinesthetic guidance. Kinesthetic teaching of robots would allow even non-experts in robotics to program robots to perform various tasks. This would bring the adoption of robot-driven manufacturing even closer to SMEs. To further increase the flexibility and productivity of robot workcells, it is possible to include more than one robot manipulator. Coordinated multi-robot systems can accomplish more complex tasks, have a higher degree of dexterity, can carry higher payloads, etc. [10,11]. However, in order to achieve coordinated multi-robot performance, the system has to be well calibrated. The process of base frame calibration can be tedious and time consuming, which goes against the reconfigurable workcell paradigm. In our previous publication, we introduced a reconfigurable robot workcell, ReconCell, aimed at tackling the previously discussed issues of robot adoption in SME manufacturing processes [12]. The developed cell is reconfigurable in both hardware and software aspects. We aim to build a reconfigurable multi-robot workcell where robots are one of the reconfigurable modules that can be freely positioned and re-positioned according to the task at hand. In this paper, we therefore present a novel approach for base frame calibration of coordinated robots that exploits the kinesthetic guidance capabilities of collaborative robots and is aimed at facilitating the calibration process in highly reconfigurable robot workcells.

1.1 Related Work

The problem of identifying the coordinate transformation between two robots' coordinate frames has been researched and discussed in the past. By reviewing the available literature, we were able to group the proposed base frame calibration methods into two categories: those that require additional measuring equipment and those that do not.


With Additional Measuring Equipment. Van Albada et al. used a camera-in-hand robot system to locate a plate fixed within the robot's workspace [13]. The plate had a known pattern printed on it, and the robot was used to acquire a number of images from different perspectives. Using image processing algorithms, they were able to identify the position of the plate relative to the robot. They argued that this procedure could be used to calibrate a multi-robot system by placing the plate in such a way that both robots, each mounted with a camera, could perform the calibration procedure. Lippiello et al. used a hybrid eye-in-hand and eye-to-hand system to estimate the pose of an object using the extended Kalman filter [14]. Other authors have also used laser trackers [15], passive Cartesian measurement systems [16] or other approaches to solve the base frame calibration problem. One drawback of these approaches is that they require additional measuring devices. The other drawback is that, even if the calibration process can be automated, it cannot run unsupervised, because the poses to which the robot should move to acquire measurements have to be carefully selected to avoid possible collisions between the robot and the environment.

Without Additional Measuring Equipment. Bennett and Hollerbach provide one of the earliest mentions of the idea of using another robot as a measuring system for calibration purposes [17]. They proposed a calibration process to identify the D-H parameters of either a redundant robot or two robot manipulators coupled together at their end effectors. The authors use an iterative identification process with the Jacobian matrix; however, they do not carry their work beyond simulation. Bonitz and Hsia propose a dual-robot calibration method that relies on aligning two precisely machined metal plates, fixed on both robots' end effectors, at various points within the joined workspace [18]. Similar approaches, i.e. calibration of a dual-robot system with a specially designed end-effector tool, have been proposed by other authors [19,20]. The two major drawbacks of these approaches are that they require a special tool with precisely known dimensions, and that they are very time consuming. Moving two robots separately so that the specially designed tools align is a tedious and time consuming task, which goes against the paradigm of fast reconfiguration.

2 Base Frame Calibration of a Multi-robot System

2.1 Kinematic Representation

In order to derive the mathematical solution of the calibration problem, we first introduce the given system of coordinate frames. The two robot manipulator base frames, F_M1 and F_M2, are described in the world coordinate frame F_W. The end-effector frames of the robots are denoted F_E1 and F_E2. The transformation between two frames is denoted by T, which represents the 4 × 4 homogeneous transformation matrix

$$\mathbf{T} = \begin{bmatrix} \mathbf{R} & \mathbf{p} \\ \mathbf{0} & 1 \end{bmatrix}, \qquad (1)$$


where R ∈ R^{3×3} is the rotational part and p ∈ R^{3×1} the translational part of the transformation. For example, the transformation from the world frame to the base frame of the first robot is denoted T_{W,M1}. The position and rotation of the robot's end effector are obtained from the direct kinematic model of the robot

$$\mathbf{R} = \mathbf{R}(\boldsymbol{\theta}) \quad \text{and} \quad \mathbf{p} = \mathbf{p}(\boldsymbol{\theta}), \qquad (2)$$

where θ is the vector of joint variables.

Fig. 1. Coordinate frames and their relations.

Since the world coordinate frame can be placed arbitrarily, we place it at the base frame of the first robot manipulator, so that T_{W,M1} = I, where I is the identity matrix.

2.2 Least Squares Solution for the AX = XB Problem

In this section, we derive the equations for the solution of the transformation from one robot base frame F_M1 to the other F_M2, i.e. T_{M1,M2}. We start by defining a set of measurements of the robots' end-effector transformations {(T^n_{M1,E1}, T^n_{M2,E2})}_{n=0...N}, where T_{E1,E2} is constant and N is the number of measurements. In practice, this means that the robots' end effectors are rigidly coupled together. We now write the kinematic chain for the first and any subsequent measurement (j = 1...N) and define the equation system:

$$\mathbf{T}^0_{M_1,E_1}\, \mathbf{T}_{E_1,E_2} = \mathbf{T}_{M_1,M_2}\, \mathbf{T}^0_{M_2,E_2} \qquad (3)$$

$$\mathbf{T}^j_{M_1,E_1}\, \mathbf{T}_{E_1,E_2} = \mathbf{T}_{M_1,M_2}\, \mathbf{T}^j_{M_2,E_2} \qquad (4)$$

In this equation system, two transformations remain constant and are unknown: T_{E1,E2} and T_{M1,M2}. If we rewrite both equations to solve for T_{E1,E2}, we can combine Eqs. (3) and (4) into

$$\mathbf{T}^j_{M_1,E_1} \left(\mathbf{T}^0_{M_1,E_1}\right)^{-1} \mathbf{T}_{M_1,M_2} = \mathbf{T}_{M_1,M_2}\, \mathbf{T}^j_{M_2,E_2} \left(\mathbf{T}^0_{M_2,E_2}\right)^{-1} \qquad (5)$$

We can rewrite the above equation in the form A_j X = X B_j, where:

$$\mathbf{A}_j = \mathbf{T}^j_{M_1,E_1} \left(\mathbf{T}^0_{M_1,E_1}\right)^{-1} = \begin{bmatrix} \mathbf{R}_{A_j} & \mathbf{p}_{A_j} \\ \mathbf{0} & 1 \end{bmatrix}, \qquad (6)$$

$$\mathbf{B}_j = \mathbf{T}^j_{M_2,E_2} \left(\mathbf{T}^0_{M_2,E_2}\right)^{-1} = \begin{bmatrix} \mathbf{R}_{B_j} & \mathbf{p}_{B_j} \\ \mathbf{0} & 1 \end{bmatrix}, \qquad (7)$$

$$\mathbf{X} = \mathbf{T}_{M_1,M_2} = \begin{bmatrix} \mathbf{R}_X & \mathbf{p}_X \\ \mathbf{0} & 1 \end{bmatrix}. \qquad (8)$$

All the matrices A_j, B_j and X are elements of the special Euclidean group of rigid body transformations SE(3). The solution of this equation system is discussed in more depth in [21]. For the sake of completeness, we provide the key steps towards the solution. We first define the logarithmic maps of the matrices R_{A_j} and R_{B_j}, which transform them into skew-symmetric matrices [α_j] = log R_{A_j} and [β_j] = log R_{B_j}, where

$$[\boldsymbol{\omega}] = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix}. \qquad (9)$$

The least squares solution for R_X is given as

$$\mathbf{R}_X = \left(\mathbf{M}^{T}\mathbf{M}\right)^{-1/2} \mathbf{M}^{T}, \quad \mathbf{M} = \sum_{j=1}^{N} \beta_j \alpha_j^{T}, \qquad (10)$$

while the solution for p_X is

$$\mathbf{p}_X = \left(\mathbf{C}^{T}\mathbf{C}\right)^{-1} \mathbf{C}^{T}\mathbf{D}, \quad \mathbf{C} = \begin{bmatrix} \mathbf{I} - \mathbf{R}_{A_1} \\ \mathbf{I} - \mathbf{R}_{A_2} \\ \vdots \\ \mathbf{I} - \mathbf{R}_{A_N} \end{bmatrix}, \quad \mathbf{D} = \begin{bmatrix} \mathbf{p}_{A_1} - \mathbf{R}_X \mathbf{p}_{B_1} \\ \mathbf{p}_{A_2} - \mathbf{R}_X \mathbf{p}_{B_2} \\ \vdots \\ \mathbf{p}_{A_N} - \mathbf{R}_X \mathbf{p}_{B_N} \end{bmatrix} \qquad (11)$$

3 Results Evaluation

3.1 Real Data Acquisition

We conducted our experiments in the workcell described in [12]. We used two UR10 robots and coupled their end effectors with two Destaco TP-30 tool adapter modules, which were tightly screwed together, as seen in Fig. 2. By using the tool changer system, we ensured that no additional components or devices had to be developed for the calibration. It is worth emphasizing that the end effectors could also be coupled by different means; the proposed calibration method works regardless of the shape of the coupler (Fig. 3). The coupled robots were then kinesthetically moved around their joined workspace while data was being acquired. Both robots' joint space variables (θ_1 and θ_2) were recorded at each time step t^n in a data structure A_Raw:

$$A_{Raw} = \{(\boldsymbol{\theta}^n_1, \boldsymbol{\theta}^n_2, t^n)\}_{n=1...T}, \qquad (12)$$


Fig. 2. Two Destaco TP-30 tool adapter modules tightly screwed together.

Fig. 3. Two UR-10 robots coupled together with two Destaco TP-30.

where T denotes the number of acquired samples. Since we were acquiring data from two different robot controllers, a small time delay can occur between the data coming from one controller and the other. This can cause discrepancies between samples acquired at time t^n. To address this issue, we decided to filter the acquired data before using the algorithm described in Sect. 2.2. We started by taking only the samples at times where both robots were not moving:

$$A_{Filt} = \{(\boldsymbol{\theta}^n_1, \boldsymbol{\theta}^n_2)\ |\ \| [\dot{\boldsymbol{\theta}}^n_1, \dot{\boldsymbol{\theta}}^n_2] \|_2 < \epsilon \}. \qquad (13)$$

The result of this filtering can be seen in Fig. 4, where the fifth joint of one of the robots is depicted. The blue line depicts the original data, while the filtered samples are shown in red. Next, we divided the set A_Filt into N subsets based on the Euclidean distance between samples. Finally, we took the average values of each newly created subset and joined them into the final set of joint configuration samples. The final step was to convert the set of joint configurations into the set of coordinate transformations used for the calibration, A_Calib = {(T^k_{M1,E1}, T^k_{M2,E2})}_{k=1...N}. We then recorded and filtered a second set of measurements, which we used for the evaluation of the calibration, A_Eval = {(T^k_{M1,E1}, T^k_{M2,E2})}_{k=1...M}. A total of N = 37 samples were used in the calibration process, and M = 21 were used for the evaluation.

Fig. 4. Raw recorded joint values (blue) and the filtered samples (red) for one axis.


The sizes of the sets are determined by the filtering process; it is not possible to predict how many samples the filtering will yield, so the sizes of the calibration and evaluation sets differ.
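A minimal sketch of this stillness filtering, distance-based clustering and averaging could look as follows; the thresholds eps and dist_thresh are our assumptions, not values from the paper:

```python
import numpy as np

def filter_static_samples(q1, q2, dq1, dq2, eps=1e-3, dist_thresh=0.05):
    """Keep samples where both robots stand still (Eq. 13), cluster the
    survivors by joint-space distance, and average each cluster."""
    still = np.linalg.norm(np.hstack([dq1, dq2]), axis=1) < eps
    q = np.hstack([q1, q2])[still]          # (T_still, 12) for two 6-DoF arms
    clusters, current = [], [q[0]]
    for sample in q[1:]:
        if np.linalg.norm(sample - current[-1]) < dist_thresh:
            current.append(sample)
        else:
            clusters.append(np.mean(current, axis=0))
            current = [sample]
    clusters.append(np.mean(current, axis=0))
    return np.array(clusters)               # one averaged sample per pose
```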

3.2 Evaluation Results

By applying the algorithm described in Sect. 2.2 to A_Calib, we obtained the coordinate transformation between the base frames of the two robots, T_{M1,M2}. To evaluate the result, we calculated the transformation between the robots' end effectors, T_{E1,E2}, for each sample of A_Eval, using the coordinate frame relations depicted in Fig. 1. The assumption was that, if the resulting transformation T_{M1,M2} is precise, the standard deviation of all the T_{E1,E2} calculated from A_Eval should be small. The results of the evaluation are shown in Table 1. Considering the UR-10 nominal repeatability of 0.1 mm [22], we can argue that the standard deviation is relatively small, for both position and orientation.

Table 1. Statistical evaluation of the calculated parameters of T_{E1,E2} for each sample in A_Eval (M = 21).

                    Translation [mm]        Rotation [°]
                    x      y      z         x        y      z
Mean                −0.94  −0.92  25.82     −179.95  −0.15  −1.10
Standard deviation  0.82   0.52   0.64      0.04     0.06   0.11
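Under the frame relations of Fig. 1, this evaluation amounts to computing T_{E1,E2} = T_{M1,E1}^{-1} T_{M1,M2} T_{M2,E2} for every evaluation pair and examining its spread; a sketch reusing the solver above (function name is ours):

```python
import numpy as np

def evaluate_calibration(X, eval_pairs):
    """Spread of the implied end-effector relation over A_Eval.
    X: estimated T_M1,M2; eval_pairs: list of (T_M1,E1, T_M2,E2)."""
    T_rel = [np.linalg.inv(T1) @ X @ T2 for T1, T2 in eval_pairs]
    positions = np.array([T[:3, 3] for T in T_rel])
    return positions.mean(axis=0), positions.std(axis=0)  # cf. Table 1
```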

4 Conclusion and Future Work

In this paper, we presented a novel approach to robot base frame calibration that uses the kinesthetic guidance feature of collaborative robots. The main advantages of our method are that it does not require additional measuring equipment or precisely designed calibration tools, and that it is time efficient without compromising the result of the calibration. However, further work is needed to evaluate and improve the calibration method. The evaluation based on measuring the standard deviation gave us an insight into the applicability of our method, but we want to examine its potential on more complex robot tasks, e.g. cooperative motions. Another aspect we intend to improve is the data acquisition: we intend to build a system that notifies the user performing the calibration when enough data has been collected.

Acknowledgments. This work has received funding from the EU's Horizon 2020 IA ReconCell (GA no. 680431) and from the GOSTOP programme C3330-16-529000, co-financed by Slovenia and the EU under the ERDF.


References

1. Dietz, T., Schneider, U., Barho, M., Oberer-Treitz, S., Drust, M., Hollmann, R., Haegele, M.: Programming system for efficient use of industrial robots for deburring in SME environments. In: 7th German Conference on Robotics, ROBOTIK 2012, pp. 1–6, May 2012
2. Bi, Z.M., Lang, S.Y.T., Verner, M., Orban, P.: Development of reconfigurable machines. Int. J. Adv. Manuf. Technol. 39(11), 1227–1251 (2012)
3. Koren, Y.: General RMS characteristics. Comparison with dedicated and flexible systems. In: Reconfigurable Manufacturing Systems and Transformable Factories, pp. 27–45. Springer (2006)
4. Gödl, M., Kovac, I., Frank, A., Haring, K.: New robot guided fixture concept for reconfigurable assembly systems. In: International Conference on Changeable, Agile, Reconfigurable and Virtual Production, pp. 1–7 (2005)
5. Jonsson, M., Ossbahr, G.: Aspects of reconfigurable and flexible fixtures. Prod. Eng. 4(4), 333–339 (2010)
6. Reinhart, G., Krug, S., Hüttner, S., Mari, Z., Riedelbauch, F., Schlögel, M.: Automatic configuration (plug & produce) of industrial ethernet networks. In: 9th IEEE/IAS International Conference on Industry Applications, pp. 1–6 (2010)
7. Chen, I.-M.: Rapid response manufacturing through a rapidly reconfigurable robotic workcell. Rob. Comput.-Integr. Manuf. 17(3), 199–213 (2001)
8. Billard, A., Calinon, S., Dillmann, R., Schaal, S.: Robot programming by demonstration. In: Handbook of Robotics, pp. 1371–1394. Springer (2008)
9. Robotiq: Collaborative Robots Buyer's Guide (2017)
10. Hemami, A.: Kinematics of two-arm robots. IEEE J. Rob. Autom. 2(4), 225–228 (1986)
11. Garg, D.P., Poppe, C.D.: Coordinated robots in a flexible manufacturing work cell. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, vol. 1, pp. 648–653 (2001)
12. Gašpar, T., Ridge, B., Bevec, R., Bem, M., Kovač, I., Ude, A., Gosar, V.: Rapid hardware and software reconfiguration in a robotic workcell. In: 18th International Conference on Advanced Robotics (ICAR), pp. 229–236 (2017)
13. van Albada, G.D., Lagerberg, J.M., Visser, A., Hertzberger, L.O.: A low-cost pose-measuring system for robot calibration. Rob. Autonom. Syst. 15(3), 207–227 (1995)
14. Lippiello, V., Siciliano, B., Villani, L.: Position-based visual servoing in industrial multirobot cells using a hybrid camera configuration. IEEE Trans. Rob. 23(1), 73–86 (2007)
15. Zhao, D., Bi, Y., Ke, Y.: Kinematic modeling and base frame calibration of a dual-machine-based drilling and riveting system for aircraft panel assembly, vol. 94, no. 5, pp. 1873–1884 (2018)
16. Wang, W., Liu, F., Yun, C.: Calibration method of robot base frame using unit quaternion form. Precision Eng. 41, 47–54 (2015)
17. Bennett, D.J., Hollerbach, J.M.: Self-calibration of single-loop, closed kinematic chains formed by dual or redundant manipulators. In: Proceedings of the 27th IEEE Conference on Decision and Control, pp. 627–629 (1988)
18. Bonitz, R.G., Hsia, T.C.: Calibrating a multi-manipulator robotic system. IEEE Rob. Autom. Mag. 4(1), 18–22 (1997)
19. Zhang, Z.: Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004)


20. Wu, H., Bi, Z., Su, M., Zhang, P., He, Y., Guan, Y.: Coordinated motion planning with calibration and offline programming for a manipulator-positioner system. In: IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1094–1099 (2014)
21. Park, F.C., Martin, B.J.: Robot sensor calibration: solving AX = XB on the Euclidean group. IEEE Trans. Rob. Autom. 10(5), 717–721 (1994)
22. Universal Robots. https://www.universal-robots.com/. Accessed 1 May 2018

Compensating Position Measurement Errors for the IR Static Triangulation System

Maciej Ciężkowski and Adam Wolniakowski

Bialystok University of Technology, Bialystok, Poland
{m.ciezkowski,a.wolniakowski}@pb.edu.pl

Abstract. Determination of an object's position in a given reference frame is the main purpose of a navigation system. This can be done in many different ways and, depending on the chosen method and measurement equipment, produces more or less accurate measurements. Precise indoor navigation is particularly important due to the ever more dynamic development of autonomous systems in many areas of industry. Unfortunately, the measurement accuracy of indoor navigation systems is reduced by the influence of walls and other obstacles that interfere with the measurement signals, causing so-called multipathing. Multipathing is often mitigated by creating error maps, which is a labor-intensive task. In this paper, we present a method in which a robot manipulator is used as the reference positioning system to determine such a mapping. In the next step, a second-order polynomial mapping is determined that maps the disturbed object position measured by the triangulation system to the real object position.

Keywords: Positioning system · Calibration · Reference positioning system

1 Introduction

The main purpose of a navigation system is to estimate an object's position in a certain reference frame. Today, most outdoor navigation (maritime, aviation and land) is based on the Global Positioning System (GPS). Despite its many advantages, GPS has some limitations: about 1 m accuracy and a lack of indoor coverage. To enable navigation inside buildings or to improve navigation precision, local positioning systems (LPS) are still being developed. Most absolute positioning systems use one of two methods to determine the object's position: trilateration and triangulation [1–3]. Trilateration is based on measurements of the distances to known objects (beacons). The distance between the object and the beacon is calculated on the basis of the time of flight (TOF) of the signal (a radio frequency (RF) or ultrasonic signal) or the time-difference of arrival (TDOA) of the signal. The most popular RF trilateration method is based on the UWB (ultra-wideband) technique [4–7], whose accuracy is about 10–30 cm [8].


Another technique, called the Fingerprint location method, is based on the Wi-Fi received signal strength (RSS). This method needs online access to an RSS database to determine the position of the object. The accuracy of Fingerprint systems is about 1–5 m [8]. Among the trilateration systems, ultrasonic systems should also be mentioned [9–11]. The accuracy of ultrasonic systems can be up to 1 cm, but these systems have a small range [8]. Triangulation is the process of determining the object's position by measuring the angles between the object and beacons with known locations. Triangulation, unlike trilateration, is able to determine the object's position and orientation directly and simultaneously. Triangulation systems can be grouped according to the principle of their operation and construction. The first group uses an on-board laser beam rotating in the horizontal plane and illuminating retroreflective beacons [12,13]. Another group of triangulation systems uses unique beacons, achieved via an additional communication channel (typically an RF channel) [12,14]. Some solutions use a laser beam mounted on a rotating beacon and illuminating the surrounding space [15], or active beacons emitting infrared light instead of visible light together with a rotating infrared receiver on the mobile robot [16]. The mechanical moving parts in a triangulation system (such as motors, rotating lasers or rotating diodes) require an additional power supply, complicate the construction, and wear out over time. To avoid these problems, a static (without moving parts) IR beacon-receiver system has been designed, built and successfully tested [17]. All of the measuring systems mentioned above have a specific measurement accuracy. This accuracy depends mostly on the method applied to determine the object's position and on the precision of the measuring device. These two factors usually cannot be influenced. However, there is another source of measurement error which can be eliminated: the error caused by the influence of walls and other obstacles that cause multipathing of the measurement signals. In the case of the Fingerprint system, multipathing is immediately taken into account during the creation of the RSS signal database. This approach is also used for other positioning systems: an error map is created based on measurements on a spatial grid. There are also methods using the ray tracing technique to create the error map [18–20]. A major drawback of creating an error map lies in the labor-intensive survey: a large number of reference points have to be tested. Another problem is the precise determination of the reference points, because an additional positioning system is required to determine the reference positions. To solve these problems, we decided to use a UR5 manipulator with the positioning device placed at the end of its kinematic chain. The use of the manipulator made it possible to obtain high precision in reaching the reference points necessary to determine the map of measurement errors. As the positioning system, the static IR beacon-receiver system was used. This positioning system consists of three infrared beacons (transmitters) and one measurement device (receiver) mounted on the manipulator. The positioning system does not contain any mechanical moving parts and uses a triangulation method to calculate the object's position, so we call it the Static Triangulation System (STS). The manipulator and the STS receiver are shown in Fig. 1.


Fig. 1. The STS system mounted on UR5 manipulator

2 Methods

2.1 Position Measurement

The functional principle of the STS triangulation system is based on the measurement of the intensity of IR light emitted by three beacons. The receiver consists of a 32-channel circular array of photodiodes mounted on the mobile board. The receiver photodiodes measure the intensity of infrared radiation, and on this basis the angle of arrival of each beacon's light is calculated. Using the infrared light intensity measured on each channel, the relationship between light intensity and the photodiodes' angular positions can be established. The relative angle φ_i to each IR beacon B_i is designated by determining the maximum of this relationship. The principle of operation of this system is presented in Fig. 1, and more details about the STS positioning system can be found in [17].

2.2 Calibration

To improve the accuracy of the position estimation of our STS system, we perform a calibration between the reference points and the measured points by finding a mapping f : R² → R² such that the raw measured positions can be rectified into calibrated measurements. We assume a quadratic calibration model:

$$\begin{aligned} x' &= a_{xx} x^2 + a_{xy} xy + a_{yy} y^2 + a_x x + a_y y + a_1 \\ y' &= b_{xx} x^2 + b_{xy} xy + b_{yy} y^2 + b_x x + b_y y + b_1 \end{aligned} \qquad (1)$$

where x and y are the raw measured values, x' and y' are the rectified values, a_ij are the coefficients of the x-value quadratic calibration model, and b_ij are the coefficients of the y-value quadratic calibration model. The model coefficients are found using least-squares optimization:

$$[\mathbf{a}\ \mathbf{b}] = \begin{bmatrix} \mathbf{x}^2 & \mathbf{x}\mathbf{y} & \mathbf{y}^2 & \mathbf{x} & \mathbf{y} & \mathbf{1} \end{bmatrix}^{+} \cdot [\hat{\mathbf{x}}\ \hat{\mathbf{y}}] \qquad (2)$$


where a and b are column vectors of the a_ij and b_ij coefficients respectively, x and y are the raw position measurement values, and x̂, ŷ are the reference point positions.
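A minimal numpy sketch of this fit and of the rectification step could look as follows (function names are ours):

```python
import numpy as np

def fit_quadratic_calibration(raw, ref):
    """Fit the quadratic model (1) by least squares, as in Eq. (2).
    raw: (N, 2) raw measured positions; ref: (N, 2) reference positions."""
    x, y = raw[:, 0], raw[:, 1]
    Phi = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    ab, *_ = np.linalg.lstsq(Phi, ref, rcond=None)  # (6, 2): columns a and b
    return ab

def rectify(raw, ab):
    """Apply the calibrated model to raw measurements."""
    x, y = raw[:, 0], raw[:, 1]
    Phi = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    return Phi @ ab
```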

2.3 Experimental Setup

We perform our real-world position measurement experiments in an existing Bender workcell. The stand is constructed with an aluminum frame (2.4 × 1.2 × 2.2 m) and a wooden table, and is equipped with two UR5 6-DoF robotic manipulators [24]. One of the manipulators is used as the reference system, positioning the STS device at measurement points laid out in a 0.1 × 0.1 m grid within the robot working area. The UR5 robot provides a precise and flexible reference system for the position measurement calibration. The manipulator is capable of lifting a 5 kg payload with a repeatability of ±0.01 mm [25]. We control the UR5 using custom ROS [21] packages based on the RobWork Hardware libraries [23]. The STS device is attached to the last joint of the robot such that the central axis of the measurement dish passes directly through the TCP of the robot, and the rotation of the 6th joint of the UR5 directly affects the measured heading θ. The mounting of the device is shown in Fig. 1. The IR beacons of the positioning system are placed around the Bender stand at positions B1 = (1.2, 1.0583) m, B2 = (−1.2, 1.0583) m and B3 = (0, −1.6) m, as shown in Fig. 2. This configuration was chosen so as to provide uniform coverage of the manipulator's working area, as well as to avoid any obstructions of beacon visibility due to the construction elements. The experiment is done in two variants: the first without any modification, and the second with a reflecting surface placed at one of the corners of the stand. A 60 × 40 cm whiteboard was used as the reflecting surface.

Fig. 2. Setup overview: (a) visualization of the UR-STS setup in RobWork [22], (b) placement of the setup features with the optional reflector.


The setup is located in a large room (6 × 12 m), with the B3 beacon located close to the longer side. It is thus expected that reflections could cause some errors in position measurement. We have taken some measures to reduce the noise caused by IR pulses reflecting off the structural elements of the stand: the aluminum frame was covered with corrugated cardboard painted black, and the manipulator was covered in flexible dark cloth. Unfortunately, environmental factors are impossible to eliminate completely, and thus we aim to discern the influence of reflections by placing an additional reflector in the setup.

3 Results

We perform two sets of experiments. First, we take baseline position measurements without the reflector placed on the stand; we then perform calibration and present the rectified position measurements for this configuration. Next, we place a reflector (a 60 × 40 cm whiteboard) in one of the corners of the robotic setup and perform similar position measurements and calibration. After the calibration process, the RMSE errors were calculated: the error between real and measured positions as well as between real and rectified positions. The calibration data and RMSE errors are presented in Table 1.

Table 1. Experiment results.

                    Baseline experiment                  Reflector experiment
Model coefficients  axx = −0.004987  bxx = −0.109371     axx = 0.006245   bxx = −0.112520
                    axy = 0.008893   bxy = −0.037582     axy = 0.005009   bxy = −0.055392
                    ayy = 0.022036   byy = −0.044798     ayy = 0.003826   byy = −0.050107
                    ax = 0.971222    bx = 0.043462       ax = 0.990085    bx = 0.066196
                    ay = 0.023288    by = 1.015682       ay = 0.040625    by = 1.020146
                    a1 = 0.026174    b1 = 0.014719       a1 = 0.058725    b1 = 0.013605
RMSE                Before calibration: 4.2 cm           Before calibration: 7.3 cm
                    After calibration: 1.1 cm            After calibration: 1.3 cm
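RMSE figures of this kind can be reproduced with a one-line helper applied before and after rectification; a sketch reusing the rectify function above:

```python
import numpy as np

def rmse(ref, est):
    """Root-mean-square position error between reference and estimated points."""
    return np.sqrt(np.mean(np.sum((ref - est) ** 2, axis=1)))

# e.g.: rmse(ref, raw) before calibration vs. rmse(ref, rectify(raw, ab)) after
```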

In both experiments, we collect the position measurement data from points placed on a 0.1 × 0.1 m grid within the robot working area, at a height of 0.75 m above the table surface and centered around the central axis of the manipulator base. At this height, the robot is able to reach points located at a maximal radius of 0.7 m. The robot singularities prevent us from taking measurements within a radius of 0.2 m around the base of the robot. Position measurements without and with the reflector are presented in Fig. 3. The vectors in the figures show the difference between the reference positions (bases of the arrows) and the measured (subfigures (a) and (c)) or rectified positions (subfigures (b) and (d)), respectively.


Fig. 3. Position measurements: (a) raw measurements without the reflector, (b) after calibration without the reflector, (c) raw measurements with the reflector, (d) after calibration with the reflector.

Finally, we describe our investigation of the optimal choice of the number of calibration measurements for both experimental scenarios. For each number of calibration measurements m ∈ [6, 100], we randomly selected 100 subsets of m calibration datapoints measured in the previous experiment. Based on these sets of datapoints, we performed data rectification and measured the achieved reduction in the RMSE of the measurements. The improvement of the RMSE error after calibration, depending on the number of calibration measurements, is presented in Fig. 4. We can conclude that in both cases a decent reduction in position measurement error can be achieved by performing around 10–15 calibration measurements.
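This experiment can be sketched as follows, reusing the fitting and RMSE helpers above; the 100 trials per subset size follow the random selections described in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse_vs_subset_size(raw, ref, m_values, trials=100):
    """For each m, fit the model on m random datapoints and average the
    RMSE of the rectified full set over the trials (cf. Fig. 4)."""
    results = {}
    for m in m_values:
        scores = []
        for _ in range(trials):
            idx = rng.choice(len(raw), size=m, replace=False)
            ab = fit_quadratic_calibration(raw[idx], ref[idx])
            scores.append(rmse(ref, rectify(raw, ab)))
        results[m] = np.mean(scores)
    return results
```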


Fig. 4. The improvement of the RMSE error after calibration depending on the number of test measurements: (a) without reflector, (b) with reflector.

4 Conclusion

There are two principal sources of error in position measurement with IR-based triangulation systems. The first is the inherent lack of precision of the device, which is difficult to eliminate. The other major source of error is environmental factors, such as static lighting conditions and reflections of the beacon impulses off obstacles. A common way of tackling these issues is to employ a device calibration scheme. In this work, we have performed a set of experiments using the STS to determine whether calibration can be successfully used to eliminate the errors due to the introduction of a reflecting surface into the scene. A simple quadratic mapping was shown to reduce the RMSE score of the measurements from 4.2 cm to 1.1 cm in the baseline experiment, and from 7.3 cm to 1.3 cm in the reflector experiment. In our future work, we first plan to perform a similar calibration experiment that also includes the heading. We plan to determine the influence of individual obstacles, as well as of the room geometry, on the position measurement errors. We believe that a calibration pattern can be found to automatically determine the locations of the beacons, as well as the placement of obstacles.

Acknowledgments. This study has been carried out in the framework of S/WM/1/2016, funded by the Polish Ministry of Science and Higher Education.

References

1. Borenstein, J., Everett, H., Feng, L., Wehe, D.: Mobile robot positioning - sensors and techniques. J. Robot. Syst. 14(4), 231–249 (1997)
2. Demetriou, G.: A survey of sensors for localization of unmanned ground vehicles (UGVs). In: Proceedings International Conference on Artificial Intelligence, vol. 2, Las Vegas, NV, USA, pp. 659–668 (2006)
3. Cox, I.J., Wilfong, G.T.: Autonomous Robot Vehicles. Springer, New York (1990)


4. Alarifi, A.: Ultra wideband indoor positioning technologies: analysis and recent advances. Sensors 16(5), 707 (2016)
5. Sahinoglu, Z., Gezici, S., Guvenc, I.: Ultra-Wideband Positioning Systems: Theoretical Limits, Ranging Algorithms, and Protocols. Cambridge University Press, New York (2008)
6. Tiemann, J., Eckermann, F., Wietfeld, C.: ATLAS - an open-source TDOA-based ultra-wideband localization system. In: Proceedings International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–6 (2016)
7. Kim, E., Choi, D.: A UWB positioning network enabling unmanned aircraft systems auto land. Aerosp. Sci. Technol. 58, 418–426 (2016)
8. Farid, Z., Nordin, R., Ismail, M.: Recent advances in wireless indoor localization techniques and system. J. Comput. Netw. Commun. 2013 (2013)
9. Medina, C., Segura, J.C., de la Torre, A.: A synchronous TDMA ultrasonic TOF measurement system for low power wireless sensor networks. IEEE Trans. Instrum. Meas., 1–13 (2012)
10. Medina, C., Segura, J.C., de la Torre, A.: Accurate time synchronization of ultrasonic TOF measurements in IEEE 802.15.4 based wireless sensor networks. Ad Hoc Netw. 11, 442–452 (2012)
11. Kang, M.-H., Lim, B.-G.: Development of a practical foolproof system using ultrasonic local positioning. Measurement 79, 1–14 (2016)
12. Zalama, E., Dominguez, S., Gomez, J., Peran, J.: Microcontroller based system for 2D localisation. Mechatronics 15(9), 1109–1126 (2005)
13. Borenstein, J., Everett, H., Feng, L.: Where am I? Systems and methods for mobile robot positioning. University of Michigan, Ann Arbor, MI, USA, Technical report (1996)
14. Premi, K., Besant, C.: A review of various vehicle guidance techniques that can be used by mobile robots or AGVs. In: 2nd International Conference on Automated Guided Vehicle Systems, Stuttgart, Germany (1983)
15. Hernandez, S., Torres, J., Morales, C., Acosta, L.: A new low cost system for autonomous robot heading and position localization in a closed area. Auton. Rob. 15(2), 99–110 (2003)
16. Pierlot, V., Van Droogenbroeck, M.: BeAMS: a beacon-based angle measurement sensor for mobile robot positioning. IEEE Trans. Rob. 30, 533–549 (2014)
17. Ciezkowski, M.: Triangulation positioning system based on a static IR beacon-receiver system. In: Proceedings 22nd International Conference on Methods and Models in Automation and Robotics MMAR 2017, Miedzyzdroje, pp. 84–88 (2017)
18. Meissner, P., Steiner, C., Witrisal, K.: UWB positioning with virtual anchors and floor plan information. In: Proceedings 7th Workshop on Positioning, Navigation and Communication, Dresden, pp. 150–156 (2010)
19. Meissner, P., Gan, M., Mani, F., Leitinger, E., Fröhle, M., Oestges, C., Zemen, T., Witrisal, K.: On the use of ray tracing for performance prediction of UWB indoor localization systems. In: Proceedings IEEE International Conference on Communications Workshops (ICC), Budapest, pp. 68–73 (2013)
20. Wielandt, S., De Strycker, L.: Indoor multipath assisted angle of arrival localization. Sensors 17(11), 2522 (2017)


21. Robot Operating System. http://www.ros.org/
22. Ellekilde, L.-P., Jorgensen, J.A.: RobWork: a flexible toolbox for robotics research and education. In: 2010 41st International Symposium on Robotics (ISR) and 2010 6th German Conference on Robotics (ROBOTIK), pp. 1–7 (2010)
23. RobWork. http://robwork.dk/
24. Universal Robots. https://www.universal-robots.com/
25. UR5 Technical specifications. https://www.universal-robots.com/media/50588/ur5_en.pdf

Efficient, Precise, and Convenient Calibration of Multi-camera Systems by Robot Automation

Tobias Werner, David Harrer, and Dominik Henrich

Lehrstuhl für Robotik und Eingebettete Systeme, Universität Bayreuth, 95440 Bayreuth, Germany
[email protected]
http://robotics.uni-bayreuth.de

Abstract. Future use cases for stationary robot manipulators envision shared human-robot workspaces. However, shared workspaces may contain a priori unknown obstacles (e.g. humans). Robots must take these obstacles into account when moving (e.g. through online path planning). To this end, current research suggests real-time workspace monitoring with a calibrated multi-camera system. State-of-the-art solutions to camera calibration exhibit flaws in this scenario, including long calibration times, excessive reprojection errors, or extensive per-calibration efforts. In contrast, we contribute an approach to multi-camera calibration that is at once efficient, precise, and convenient: we perform fully automated calibration of each camera with a robot-mounted calibration object. Subsequent multi-camera optimization equalizes the reprojection error over all cameras. After initial setup, experiments attest that our contribution achieves minor reprojection errors within a few minutes at a single button click. Overall, we thus enable frequent system (re-)calibration (e.g. when moving cameras).

Keywords: Multi-camera systems · Camera calibration · Shared workspace monitoring · Human-robot collaboration · Obstacle reconstruction · Path planning

1 Introduction

Human-robot collaboration promises to combine the individual virtues of humans and robot manipulators. Respective research envisions various future use cases for robots, from flexible industrial automation to applications in small businesses and the service sector. However, human-robot collaboration mandates shared human-robot workspaces, which contain a priori unknown obstacles (e.g. humans or human-placed objects). The robot manipulator must thus be able to avoid obstacles in real-time. State-of-the-art solutions for real-time obstacle avoidance (e.g. [1,2]) propose a two-tier approach: real-time workspace monitoring with a multi-camera system finds 3D workspace volumes that are occupied by obstacles.

670

T. Werner et al.

obstacles. Concurrently, real-time path planning generates collision-free or riskminimized robot trajectories around occupied workspace volumes. Crucial to workspace monitoring and subsequent path planning is a precise calibration of the multi-camera system. In particular, calibration must provide precise approximations of both intrinsic and extrinsic camera parameters (i.e. in reference to the coordinate system of the robot manipulator). However, the context of shared human-robot workspaces poses additional challenges for camera calibration: Users for instance may arbitrarily attach additional cameras (e.g. when flexibly changing workspace layout), users may advertently or inadvertently change camera positions, or vibrations on ad-lib camera mounts may cause slight camera displacement. Camera calibration for human-robot workspaces hence need not only be precise, but must also be efficient (to enable fast or even online recalibration) and convenient (to enable effortless calibration by non-experts). State-of-art approaches do not meet the above three criteria. In contrast, we contribute a novel approach to calibrating a multi-camera system that is precise, efficient, and convenient: We automate calibration by a robot-mounted calibration object to enable online camera calibration at a single button click. The remainder of our work is structured as follows: Sect. 2 surveys alternative approaches to camera calibration. In Sect. 3, we present and discuss our approach to efficient, precise, and convenient multi-camera calibration by robot automation. Section 4 continues with an evaluation of our contribution in precision and efficiency. Section 5 concludes with an outlook on future work.

2 Related Work

We discuss related work on camera calibration in three distinct categories: single-camera calibration, single-camera calibration with subsequent multi-camera optimization, and full multi-camera calibration. Single-camera calibration usually estimates initial camera parameters (e.g. with a homography [3], direct linear transforms [4], or explicit ray equations [5]), then refines initial parameters (e.g. with Levenberg-Marquardt [3], non-linear optimization [4], or gradient descent [5]) for a more precise result. Comparative evaluation (e.g. [6,7]) indicates that homography with Levenberg-Marquardt optimization (e.g. [3]) yields most precise results with minor runtime overhead. As a multi-camera system consists of individual cameras, it is possible to calibrate each individual camera with one of the above single-camera calibration approaches. Subsequent optimization (e.g. with a spanning tree over camera pairs with intersecting frusta [8]) can apply knowledge about the multi-camera system to further refine parameter precision (e.g. by reducing cyclic errors [8]). Finally, multi-camera calibration can determine parameters for all cameras at once without preceding single-camera calibration. This implies finding matching features over all cameras (e.g. by manually waving a bright light spot in all cameras [9]), followed by parameter estimation (e.g. by matrix factorization [9]). In preliminary experiments, we found this variant to be less precise than single-camera calibration alternatives, especially for cameras with distinct distortions.


Overall, the default practice for calibration is to use a hand-held calibration object. This practice is unfavorable in multi-camera systems: Occlusions (e.g. by the robot) enforce multiple calibration poses. Selecting and capturing those poses in turn is time-consuming, error-prone, and requires expert knowledge.

3 Our Approach

Our approach to multi-camera calibration shares one prerequisite with related work: We need multiple views onto a calibration object as input, and we need to know the pose of this calibration object with respect to the robot manipulator. To solve both problems at once, we attach the calibration object (in our case, a checkerboard pattern on a wooden support frame) to the end effector mount of the robot manipulator. From CAD data of the robot and a one-time manual measuring, we can then determine a very precise estimate for the pose of the calibration object with respect to the robot even for an arbitrary choice of joint angles. In other words, we can now move the calibration object with the robot manipulator while precisely knowing the pose of the calibration object. Figure 1 illustrates our setup. Note that real-world applications may replace the cumbersome checkerboard with a pattern that is conveniently imprinted onto the robot casing.
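To make this step concrete, the following is a minimal sketch (not the authors' code) of how the corner positions of the mounted checkerboard can be expressed in robot base coordinates; the forward-kinematics helper fk(q), the mount transform T_MOUNT_BOARD and the board layout are illustrative assumptions.

```python
import numpy as np

# Assumed constant transform from the end effector mount to the checkerboard
# frame, obtained once from CAD data and manual measuring (values invented).
T_MOUNT_BOARD = np.array([[1.0, 0.0, 0.0, 0.00],
                          [0.0, 1.0, 0.0, 0.00],
                          [0.0, 0.0, 1.0, 0.12],
                          [0.0, 0.0, 0.0, 1.00]])

def board_corners_in_base(fk, q, rows=3, cols=4, square=0.04):
    """Return the 3D checkerboard corner positions in robot base coordinates.

    fk(q) is assumed to return the 4x4 end-effector pose for joint angles q;
    rows/cols/square (meters) describe an illustrative corner grid.
    """
    T_base_board = fk(q) @ T_MOUNT_BOARD
    corners = []
    for r in range(rows):
        for c in range(cols):
            p = np.array([c * square, r * square, 0.0, 1.0])  # board plane z = 0
            corners.append((T_base_board @ p)[:3])
    return np.asarray(corners)
```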


Fig. 1. Calibration object (a checkerboard pattern) mounted to the robot end effector.

Fig. 2. Pose of the calibration object in the robot software with axes and generating vector OD.

After mounting and measuring the calibration object once, we can perform our multi-camera calibration at any time. Each subsequent calibration process takes three distinct steps: In the first step, the robot moves the calibration object through the scene while all cameras record images of the calibration object from different perspectives. In a second step, each camera is calibrated individually through a state-of-art single-camera calibration. In the third and final step, the information from overlapping camera frusta is used to refine individual results. Our calibration approach, in terms of related work, thus belongs to the category of single-camera calibration with subsequent multi-camera optimization. In the following, we discuss all three steps of our approach in greater detail.

3.1 Planning Robot Movement

Target poses for robot movement during the first step must satisfy two requirements: Target poses must avoid occlusions of the calibration object (e.g. by the robot casing) in camera images, and target poses must be uniformly distributed over the workspace. Both requirements improve precision and efficiency of calibration by creating many correspondences over many cameras in a short time. To satisfy the above requirements, we choose origins for the calibration object on a sphere centered around the robot base (see Fig. 4). A sphere radius r near workspace limits ensures that all but the first two robot joints remain fixed and thus we avoid self-collisions without explicit checks. A later online recalibration furthermore can use one of the available path planners (e.g. [1]) or collision mitigation (e.g. by soft or artificial skins) to cope with existing obstacles in the robot workspace on transfer movements, including the floor of the workspace. To generate almost uniformly spaced points on the sphere, we pick random spherical coordinates $\theta \in [-180°, 180°)$ and $\varphi \in [-90°, 90°)$. An additional transform $\varphi = \arccos(2x_{random} - 1)$, $x_{random} \in [0, 1)$, avoids dense sampling at the poles due to singularities. From spherical coordinates and previously defined radius r, we find a Cartesian position for the origin of the calibration object,

$(r \sin\varphi \cos\theta,\; -r \cos\varphi,\; r \sin\varphi \sin\theta)$.

A respective orientation then is chosen for an upright calibration object that faces away from the sphere origin. The normalized vector from robot base to the origin of the calibration object becomes the z-axis in the local coordinate system of the calibration object (see Fig. 2). The remaining axes are chosen as an orthogonal system with an x-axis parallel to the floor plane. Finally, the robot moves to each generated pose, and cameras record images of the scene.
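A minimal sketch of this pose sampling, assuming a y-up base frame and an illustrative sphere radius:

```python
import numpy as np

def sample_calibration_poses(n, r=1.2):
    """Sample n sphere poses for the calibration object around the robot base.

    r is the sphere radius in meters (assumed near the workspace limit).
    Returns (position, rotation) pairs; the rotation columns are the local
    x-, y- and z-axes described in the text.
    """
    poses = []
    for _ in range(n):
        theta = np.random.uniform(-np.pi, np.pi)
        # arccos transform avoids dense sampling at the poles
        phi = np.arccos(2.0 * np.random.uniform(0.0, 1.0) - 1.0)
        pos = r * np.array([np.sin(phi) * np.cos(theta),
                            -np.cos(phi),
                            np.sin(phi) * np.sin(theta)])
        z = pos / np.linalg.norm(pos)        # faces away from the sphere origin
        x = np.cross([0.0, 1.0, 0.0], z)     # parallel to the floor plane (y up)
        x = x / np.linalg.norm(x)            # degenerate only at the poles
        y = np.cross(z, x)
        poses.append((pos, np.column_stack((x, y, z))))
    return poses
```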

3.2 Camera Calibration

For every recorded image of every camera we check whether the image completely contains the calibration object. If the calibration object is partially or completely missing (e.g. due to occlusions or frustum limits), we discard the image for the respective camera. Otherwise, we store object feature points in image coordinates alongside the 3D pose of the calibration object. Once we have stored a preset number of images for an individual camera, we calibrate this camera through the popular openCV library: We first calculate intrinsic parameters and distortion coefficients, without a starting estimate, from proxy points with z = 0 (i.e. as required by the implementation [3]). Thereafter, we find 3D feature points from saved poses, known object dimensions, and measured object transforms. We continue by refining camera intrinsics with correspondences between real image-space and 3D feature points. In a final step we use the openCV direct linear transform to find extrinsic camera parameters.
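A sketch of this per-camera step with the openCV Python bindings; names and the board layout are assumptions, and cv2.solvePnP stands in for the direct linear transform mentioned above:

```python
import numpy as np
import cv2

def calibrate_single_camera(img_points, world_points, image_size,
                            rows=3, cols=4, square=0.04):
    """img_points: per-image (rows*cols, 2) float32 corner detections.
    world_points: per-image (rows*cols, 3) float32 corner positions in robot
    base coordinates, derived from the known calibration object poses."""
    # Planar proxy points with z = 0 in board coordinates, as required by the
    # openCV implementation of [3].
    proxy = np.zeros((rows * cols, 3), np.float32)
    proxy[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

    # Intrinsic parameters and distortion coefficients, no starting estimate.
    _, K, dist, _, _ = cv2.calibrateCamera(
        [proxy] * len(img_points), img_points, image_size, None, None)

    # Extrinsics in robot base coordinates from all 2D-3D correspondences.
    all_world = np.concatenate(world_points).astype(np.float32)
    all_img = np.concatenate(img_points).astype(np.float32)
    _, rvec, tvec = cv2.solvePnP(all_world, all_img, K, dist)
    return K, dist, rvec, tvec
```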

3.3 Parameter Optimization

Although preceding steps already perform automated camera calibration, we can still improve the precision of calibration: We can exploit the additional knowledge that we have a multi-camera system. To this end, we have evaluated two different methods: Stereo optimization based on camera pairs, and global error minimization.

Stereo Optimization. The idea of stereo optimization for camera pairs (as inspired by [8]) is to equalize the error in extrinsic parameters over neighboring cameras. For a given camera i and another, close-by camera j with many shared correspondences, we first estimate a relative transform $^iK_j$ from the pose of camera j to the pose of camera i by stereo calibration (e.g. with Levenberg-Marquardt or the openCV implementation). Applying $^iK_j$ to the pose of camera j yields a new estimate for the pose of camera i. Note this estimate differs from preceding extrinsics, as it exploits multi-camera connectivity through stereo calibration. We now have two estimates for the pose of camera i: The single-camera guess and its counterpart from stereo calibration. Averaging the translational and the rotational components of these pose estimates gives us a new parameter set for camera i. While mean translations are trivial, a mean rotation is not clearly defined. Research (see [10]) proposes rotations that exhibit least deviation from the originals. For quaternions $q_i$ and $q_j$ with weights $w_i$, $w_j$, this leads us to use

$q_{mean} = \sqrt{\dfrac{w_i(w_i - w_j + z)}{z(w_i + w_j + z)}}\; q_i + \mathrm{sign}(q_i^T q_j)\, \sqrt{\dfrac{w_j(w_j - w_i + z)}{z(w_i + w_j + z)}}\; q_j,$

with $z = \sqrt{(w_i - w_j)^2 + 4 w_i w_j (q_i^T q_j)^2}$. Camera i adopts the mean extrinsics (a sketch of this quaternion mean follows below). Finally, iterating over all close-by pairs (i, j) of cameras (possibly multiple times) increases precision of extrinsics.

Error Minimization. In contrast to the camera pairs of stereo optimization, our global error minimization considers all cameras at once. At first, we discard all images on which the calibration object is only recorded by one camera, as we cannot get any correspondences from those images. We are conversely left with all 3D feature points that have at least two 2D feature correspondences. Thereafter, we perform the actual optimization with Levenberg-Marquardt: In every iteration, we calculate the mean reprojection error and its gradient in the extrinsic parameter space of all cameras with correspondences on the current image set. After termination (e.g. due to epsilon error or limited iteration count), we continue on the next set of camera images.
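A minimal sketch of this two-quaternion mean after [10], treating wi and wj as scalar averaging weights (equal for the two pose estimates fused above):

```python
import numpy as np

def average_two_quaternions(qi, qj, wi=0.5, wj=0.5):
    """Weighted mean of two unit quaternions after Markley et al. [10].

    qi, qj: unit quaternions as length-4 arrays; assumes qi.dot(qj) != 0.
    """
    dot = float(np.dot(qi, qj))
    z = np.sqrt((wi - wj) ** 2 + 4.0 * wi * wj * dot ** 2)
    s = z * (wi + wj + z)
    q = (np.sqrt(wi * (wi - wj + z) / s) * qi
         + np.sign(dot) * np.sqrt(wj * (wj - wi + z) / s) * qj)
    return q / np.linalg.norm(q)  # renormalize against rounding error
```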

4 Evaluation

Our test environment consists of a mock-up robot workcell (4 m × 4.5 m × 2.8 m) with a Stäubli RX130 robot and eight inexpensive, consumer-grade Logitech C930e Full HD webcams (see Fig. 3). For testing our calibration approach, we recorded two image sequences, each with 75 images per camera. We then used one image sequence for calibration and the other one for evaluating our results. As calibration object, we used a checkerboard pattern with 4×3 relevant corners. Our error measure is the popular reprojection error (i.e. the mean squared error for an image-space distance metric between camera projections of 3D features and the corresponding 2D feature points).

Fig. 3. Multi-camera system with eight consumer-grade Logitech C930e cameras (red circles) monitoring the workspace. (Color figure online)

Fig. 4. Distribution of feature points (red and yellow) in a camera view, as collected over the entire calibration process. (Color figure online)

4.1 Precision

Our first goal was to estimate an upper bound on the minimum reprojection error attainable in our single-camera calibration step (see Sect. 3.2). Additionally, we investigated the number of images required for error convergence. We therefore calculated extrinsic and intrinsic parameters from the first 10, 20, ..., 70 images and tested the resulting calibration against the 75 images of the testing dataset.
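A sketch of this test, assuming matched 3D/2D feature lists from the held-out sequence and a calibration (K, dist, rvec, tvec) under test:

```python
import numpy as np
import cv2

def mean_reprojection_error(world_points, img_points, K, dist, rvec, tvec):
    """Mean squared image-space reprojection error over a test sequence.

    world_points / img_points: lists of matched (N,3) and (N,2) float32
    arrays, one pair per test image.
    """
    errors = []
    for wp, ip in zip(world_points, img_points):
        proj, _ = cv2.projectPoints(wp.astype(np.float32), rvec, tvec, K, dist)
        errors.append(np.sum((proj.reshape(-1, 2) - ip) ** 2, axis=1))
    return float(np.mean(np.concatenate(errors)))
```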

Fig. 5. Reprojection error of each camera with increasing number of input images, no intrinsic precalibration. Yellow: Mean of error over all eight cameras. (Color figure online)

Fig. 6. Reprojection error of each camera with increasing number of input images, intrinsic precalibration. Yellow: Mean of error over all eight cameras. (Color figure online)


Figure 5 shows the change in reprojection error with increasing sequence length for each individual camera. As evident, more images in general lead to a lower squared mean of the reprojection error, with a viable 20 pixels average error after 50 images. Translating this error to the 3D workcell for Full HD cameras gives about 10 cm offset at the far end of the workspace and about 5 cm nearby the robot, well below tolerances for gross motion planning (see [1,2]). Still, the reprojection error for individual cameras varies moderately. It therefore is not possible to find an optimal number of input images. This precision problem stems from feature points that do not fully cover the field of view of some cameras: The manipulator range does not extend to all image corners, yet image corners exhibit greatest distortions and thus are particularly relevant for camera intrinsics. As a solution, we derived an initial guess for intrinsics of each single camera by manually covering the entire field of view with a hand-held checkerboard. This significantly reduces later mean reprojection errors to about four pixels. Respective robot-supported camera calibration then exhibits stable and fast convergence after six images, as illustrated in Fig. 6. While not quite as convenient as fully automated calibration, the optional pre-calibration step need only be performed once for each camera, and some manufacturers already provide individual intrinsics for each device.

4.2 Efficiency

In order to record 75 images with every camera, the robot arm must approach between 250 and 300 poses. Moving to a new pose and waiting for network transfers of camera images takes about ten seconds in our setup. This adds up to a total runtime of 40 to 50 min for recording all images. Calibration itself takes roughly ten minutes after all images have become available. Computational complexity is O(k ONL(n)), with k the number of cameras, n the number of images per camera, and ONL(n) the undocumented complexity of the openCV non-linear optimization. Opposed to this effort, stereo optimization and error minimization have a negligible performance overhead of a few seconds.

Fig. 7. Calibration runtime for increasing number of input images.

Fig. 8. Reprojection error over iterations of stereo optimization for artificial error.


See Fig. 7 for runtime trends. Our experiments with alternative approaches (e.g. [3] or [9]) indicate several hours of manual calibration for similar precision. When using intrinsically pre-calibrated cameras, the number of images necessary for reasonable precision drops to about five per camera and the time for recording reduces to roughly ten minutes. Because computing an extrinsics-only calibration takes only seconds, the overall process in turn completes in a few minutes, which enables a rather fast reaction to changes in camera placement.

4.3 Optimization

Optimization strategies (see Sect. 3.3) show their main benefit when single-camera calibration had poor results (e.g. due to numerical issues or mediocre feature point localization) for a limited subset of all cameras. In this case, our experiments indicate that either optimization strategy reduces a reprojection error of 100 pixels (e.g. induced by explicitly corrupting the estimated extrinsic parameters) to a nominal 15 pixels in Full HD input. See Fig. 8 for error trends of stereo optimization; trends for global error minimization are similar. For more accurate results of single-camera calibration (e.g. enabled by intrinsic pre-calibration), both optimization variants have significantly less impact, with improvements of at most 3 pixels in the reprojection error.

5 Conclusion

Evaluation attests that our contribution enables efficient recalibration of a multi-camera system in a few minutes, with suitable precision for gross motion planning and at the convenient click of a button. Experiments further suggest to use an intrinsic pre-calibration, with a fallback to either stereo or global optimization if intrinsics are not available. Optimization is particularly relevant for applications with non-expert users (who cannot reliably perform intrinsics calibration) and inexpensive cameras (which cannot economically be factory-calibrated). Future work may involve error-adaptive recalibration with a casing-mounted pattern.

Acknowledgments. This work has partly been supported by the Deutsche Forschungsgemeinschaft (DFG) under grant agreement He2696/11 SIMERO.

References

1. Werner, T., Henrich, D., Riedelbauch, D.: Design and evaluation of a multi-agent software architecture for risk-minimized path planning in human-robot workcells. In: Kongress Montage Handhabung Industrieroboter (2017)
2. Werner, T., Henrich, D.: Efficient and precise multi-camera reconstruction. In: International Conference on Distributed Smart Cameras (2014)
3. Zhang, Z.: A flexible new technique for camera calibration. Trans. Pattern Anal. Mach. Intell. (2000)
4. Heikkilä, J., Silvén, O.: A four-step camera calibration procedure with implicit image correction. In: Conference on Computer Vision and Pattern Recognition (1997)


5. Tsai, R.Y.: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. J. Rob. Autom. (1987)
6. Li, W., Gee, T., Friedrich, H., Delmas, P.: A practical comparison between Zhang's and Tsai's calibration approaches. In: International Conference on Image and Vision Computing (2014)
7. Zollner, H., Sablatnig, R.: Comparison of methods for geometric camera calibration using planar calibration targets. Technical report, Pattern and Image Processing Group, Vienna University of Technology (2004)
8. Li, B., Heng, L., Köser, K., Pollefeys, M.: A multiple-camera system calibration toolbox using a feature descriptor-based calibration pattern. In: International Conference on Intelligent Robots and Systems (2013)
9. Svoboda, T., Martinec, D., Pajdla, T.: A convenient multicamera self-calibration for virtual environments. Presence: Teleoperators Virtual Environ. (2005)
10. Markley, F.L., Cheng, Y., Crassidis, J.L., Oshman, Y.: Averaging quaternions. J. Guidance Control Dyn. (2007)

A Lumped Model for Grooved Aerostatic Pad

F. Colombo, L. Lentini, T. Raparelli, A. Trivella(✉), and V. Viktorov

Department of Mechanical and Aerospace Engineering, Politecnico di Torino, 10129 Turin, Italy
[email protected]

Abstract. Air pads are often embedded into robotic devices for manufacturing and measuring applications. Investigating the performance of these pads is essential to obtain robots characterized by very accurate motions and positioning. Due to their simplicity, mathematical lumped parameter models can be adopted to evaluate quickly both the static and dynamic performance of such bearings. This paper describes a lumped parameter model to study the behavior of a grooved rectangular air pad. The static and dynamic results of the model are validated by using a purpose-built test bench.

Keywords: Air pads · Groove · Lumped model · Positioning · High accuracy

1 Introduction

Robotic systems are widely employed in manufacturing and measuring applications to obtain automated handling activities. In the light of their zero stick-slip, friction and wear, air bearings are commonly embedded into such robotic systems to increase their accuracy and precision of positioning [1–3]. The performance of this kind of bearings can vary significantly depending on the embedded type of feeding system, e.g., simple [4] and pocketed orifices [5], grooved thrust surfaces [6, 7], porous resistances with sintered powders and metal woven wire [8, 9]. In the light of this, investigating the influence of such feeding systems on the bearing performance is essential to improve their design, thus achieving higher positional accuracy. Aerostatic bearing performance can be improved in different ways. Active [10–12] and passive compensation methods [6] are two widely employed solutions to enhance air pad performance. However, using pads with grooves may be a valuable alternative because of their high load capacity, stiffness and simple manufacturing. Different works analyzed the static and dynamic performance of aerostatic pads with grooves and the effect on stability [13–16]. The numerical analysis of these systems can be performed by using distributed or lumped parameter models. Distributed models provide more accurate analyses but, on the other hand, they may require long computational times. Accurate lumped parameter models [17] can be a valid alternative to estimate quickly the bearing performance without significantly compromising the reliability of results. In [18] a model of a commercial rectangular air pad with groove was presented. It simulated the effect of the air gap with several lumped resistances to take into account the effect of the groove. This paper describes a similar air pad model that has been suitably modified to obtain a faster dynamic analysis. The air gap is represented with a single resistance and the effect of the groove is taken into account by introducing a proper mathematical formulation. The conservation equations are linearised and, consequently, the transfer function between the air gap height and pressure is used to predict the dynamic stiffness and damping of the pad. The accuracy of the achieved theoretical results is validated through an experimental characterization.

2 Pad Description

Figure 1a shows the geometry and dimensions of the investigated bearing. The pad has a rectangular base (A = 60 mm and B = 30 mm) and four pocketed orifices of diameter d = 0.18 mm that are located in the middle of the sides of a rectangular groove line of dimensions a = 45 mm and b = 20 mm. Each orifice presents a conical pocket of depth δ = 300 μm and diameter dp = 0.8 mm. Figure 1b shows the cross-section of the feeding hole. The groove presents a triangular cross-section of height hg = 60 μm and width wg = 200 μm.

Fig. 1a. Air bearing geometry

Fig. 1b. Feed hole cross section

3 Numerical Lumped Model

The developed model assumes that the surfaces of the pad and the metal base are perfectly smooth and parallel to each other. Figure 2 shows the pneumatic scheme of the air pad model. It consists of a series of three pneumatic resistances R1, R2 and R3. R1 and R2 correspond to the pneumatic resistances due to the cross-section variations at the outlet of each feed hole and the related pocket. The last resistance R3 is the viscous resistance due to the presence of the air gap height h. Flowing from the pad inlet towards the air gap, the air pressure decreases from Ps (supply pressure) to P1 at the inlet of the pocket and, secondly, from P1 to P2 at the inlet of the air gap. Eventually, once at the air gap inlet, the pressurised air flows towards the edge of the pad, thus reaching the ambient pressure Pa. P0 is the mean pressure under the rectangular area surrounded by the groove, V1 is the volume of each pocket, V0 is the sum of the clearance and groove air volumes (V0 = A B h + Vg).

Fig. 2. Pneumatic scheme of the air pad.

The mass flow through the first two resistances Ri (i = 1, 2) is described by the ISO 6358 formula:

$G_i = \sqrt{T_0/T}\; C_i P_i \sqrt{1 - \varphi_i^2}, \qquad \varphi_i = \dfrac{(P_d/P_u) - b}{1 - b}$   (1)

where φi represents subsonic (0 < φi ≤ 1) or sonic (φi = 0) conditions, Pu, Pd are the upstream and downstream absolute pressures of each resistance, and b is the critical pressure ratio, assumed equal to 0.528. T0 and T are a reference temperature (T0 = 293 K) and the environmental air temperature. Ci is the conductance of the i-th cross-section. In this instance, the expression of C2 has been modified to take into account the presence of the grooves crossing the pockets:

$C_1 = \psi\, c_{dc}\, \pi d^2/4, \qquad C_2 = \psi\, c_{da} \left(\pi d_p h + w_g h_g\right)$   (2)

where $\psi = 0.686/\sqrt{RT}$, $R = 287\ \mathrm{J\,kg^{-1}\,K^{-1}}$, and cdc and cda are the discharge coefficients at the feed hole and pocket outlet, which were experimentally obtained in a previous work [19]:

$c_{dc} = 0.85\left(1 - e^{-8.2\,\frac{h+\delta}{d}}\right) f_1, \qquad f_1 = 1 - 0.3\, e^{-0.001\,Re\,\frac{h+\delta}{h+4\delta}};$
$c_{da} = 1.05\, f_2, \qquad f_2 = 1 - 0.3\, e^{-0.005\,Re_a};$
$h_{eq} = \dfrac{\pi d_p h + w_g h_g}{\pi d_p}, \qquad Re_a = \dfrac{G_2\, h}{\pi \mu d_p\, h_{eq}}, \qquad Re = \dfrac{4 G_1}{\pi \mu d}$   (3)
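For illustration, a minimal sketch of the mass flow of Eq. (1), reading Pi as the upstream pressure of the respective resistance (constants as in the text):

```python
import numpy as np

T0 = 293.0      # reference temperature [K]
B_CRIT = 0.528  # critical pressure ratio b

def mass_flow(C, Pu, Pd, T=293.0):
    """ISO 6358 mass flow through a resistance of conductance C.

    Pu, Pd: absolute upstream/downstream pressures [Pa]; phi is clamped
    to 0 for sonic conditions (Pd/Pu <= b).
    """
    phi = max(0.0, (Pd / Pu - B_CRIT) / (1.0 - B_CRIT))
    return np.sqrt(T0 / T) * C * Pu * np.sqrt(1.0 - phi ** 2)
```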

In this model, the air gap pressure distribution was approximated as uniform in the rectangular area enclosed by the grooves, whereas, outside this area, it decreases linearly up to the ambient pressure Pa. The pressure P0 was computed as a function of P2 and h, where h is expressed in [μm]:

$P_0 = f \left(P_2 - P_a\right) + P_a = \left(1 - 0.007\, h\right)^5 \left(P_2 - P_a\right) + P_a$   (4)

The linear pressure distribution was computed on the basis of the 2D Reynolds equation under isothermal conditions:

$\dfrac{dP}{dx} + 12 \mu R T \dfrac{g_x}{P h^3} = 0, \qquad \dfrac{dP}{dy} + 12 \mu R T \dfrac{g_y}{P h^3} = 0$   (5)

where gx and gy are the mass flow rates per unit width along the x and y directions and μ is the air dynamic viscosity. The mass flow rates Gx and Gy outgoing from each side of the rectangular groove are obtained by integration of the Reynolds equations. The total air consumption G of the pad (G = G3) is then obtained as follows:

$G = 2\left(G_x + G_y\right) = \dfrac{\upsilon}{6 \mu R T} \left(P_0^2 - P_a^2\right) h^3, \qquad \upsilon = \dfrac{a}{A-a} + \dfrac{b}{B-b}$   (6)

Assuming this trapezoidal pressure distribution, the corresponding pressure force Fp results equal to:

$F_p = S_{eq} \left(P_0 - P_a\right), \qquad S_{eq} = \dfrac{1}{3}\left[a b + A B + \dfrac{A b + a B}{2}\right]$   (7)
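A minimal sketch evaluating Eqs. (6) and (7) with the pad geometry of Sect. 2 (the air viscosity value is an assumed standard one):

```python
A, B = 0.060, 0.030    # pad base [m]
a, b = 0.045, 0.020    # groove rectangle [m]
MU = 1.81e-5           # assumed air dynamic viscosity [Pa s]
R_AIR, T = 287.0, 293.0
PA = 101325.0          # ambient pressure [Pa]

def static_characteristics(P0, h):
    """Air consumption G (Eq. 6) and pressure force Fp (Eq. 7)
    for a mean pressure P0 [Pa] under the grooved area and gap h [m]."""
    upsilon = a / (A - a) + b / (B - b)
    G = upsilon * (P0 ** 2 - PA ** 2) * h ** 3 / (6.0 * MU * R_AIR * T)
    Seq = (a * b + A * B + (A * b + a * B) / 2.0) / 3.0
    Fp = Seq * (P0 - PA)
    return G, Fp
```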

The dynamic behavior of the pad is studied with the perturbation method. To do this, the equations of the mass flow rates are linearized around the static position of the air pad and their variations are expressed in the Laplace domain (in the partial derivatives of G1 and G2 with respect to h, the terms f1 and f2 have been considered constant):

$\bar G_1 = k_1 \bar h + k_2 \bar P_1; \qquad \bar G_2 = k_3 \bar P_1 + k_4 \bar P_2 + k_5 \bar h; \qquad \bar G_3 = k_6 \bar P_0 + k_7 \bar h$

$k_1 = \left.\dfrac{\partial G_1}{\partial h}\right|_0 = \psi\, 6.97\, \dfrac{\pi d}{4}\, P_S f_1\, e^{-8.2\frac{h+\delta}{d}} \sqrt{1-\varphi_1^2}; \qquad k_2 = \left.\dfrac{\partial G_1}{\partial P_1}\right|_0 = -\psi\, \dfrac{\pi d^2}{4}\, \dfrac{c_{dc}}{1-b}\, \dfrac{\varphi_1}{\sqrt{1-\varphi_1^2}};$

$k_3 = \left.\dfrac{\partial G_2}{\partial P_1}\right|_0 = C_2 \left(\sqrt{1-\varphi_2^2} + \dfrac{P_2}{P_1}\, \dfrac{1}{1-b}\, \dfrac{\varphi_2}{\sqrt{1-\varphi_2^2}}\right); \qquad k_4 = \left.\dfrac{\partial G_2}{\partial P_2}\right|_0 = -\dfrac{C_2}{1-b}\, \dfrac{\varphi_2}{\sqrt{1-\varphi_2^2}};$

$k_5 = \left.\dfrac{\partial G_2}{\partial h}\right|_0 = \psi \pi d_p c_{da} P_1 \sqrt{1-\varphi_2^2}; \qquad k_6 = \left.\dfrac{\partial G_3}{\partial P_0}\right|_0 = \dfrac{\upsilon h^3}{3 \mu R T}\, P_0;$

$k_7 = \left.\dfrac{\partial G_3}{\partial h}\right|_0 = \dfrac{\upsilon h^2}{6 \mu R T} \left[3\left(P_0^2 - P_a^2\right) + 2 P_0 \left(P_2 - P_a\right) \dfrac{df}{dh}\, h\right]$   (8)

The transfer function between the air gap displacement and the pressure force FP is obtained from the continuity equations for the control volumes V0 and V1 and by linearizing the expression (4):

$\bar G_1 - \bar G_2 = k_8 s \bar P_1; \qquad 4\bar G_2 - \bar G_3 = k_9 s \bar h + k_{10} s \bar P_0; \qquad \bar P_0 = k_{11} \bar P_2 + k_{12} \bar h$
$k_8 = \dfrac{V_1}{RT}; \quad k_9 = \dfrac{P_0 A B}{RT}; \quad k_{10} = \dfrac{V_0}{RT}; \quad k_{11} = f; \quad k_{12} = \left(P_2 - P_a\right)\dfrac{df}{dh}$   (9)

$H(s) = \dfrac{\bar F_P}{\bar h} = \dfrac{a_0}{b_0}\, S_{eq}\, \dfrac{1 + (a_1/a_0)s + (a_2/a_0)s^2}{1 + (b_1/b_0)s + (b_2/b_0)s^2} = K_S\, \dfrac{1 + \tau_1 s + \tau_2 s^2}{1 + \gamma_1 s + \gamma_2 s^2}$

$a_0 = -4 k_2 k_4 k_{12} + k_{11}\left(4 k_2 k_5 - k_2 k_7 + k_3 k_7 - 4 k_1 k_3\right);$
$a_1 = 4 k_4 k_8 k_{12} + k_{11}\left(k_7 k_8 - 4 k_5 k_8 - k_2 k_9 + k_3 k_9\right); \qquad a_2 = k_8 k_9 k_{11};$
$b_0 = -4 k_2 k_4 + k_{11}\left(k_2 k_6 - k_3 k_6\right);$
$b_1 = 4 k_4 k_8 + k_{11}\left(k_2 k_{10} - k_3 k_{10} - k_6 k_8\right); \qquad b_2 = k_8 k_{10} k_{11}$   (10)

where KS is the static stiffness of the air pad. By expressing the transfer function H(s) in the frequency domain, the theoretical dynamic stiffness K and damping c of the air pad are computed as:

$K(\omega) = -K_S\, \mathrm{Re}\,H(\omega) = -K_S\, \dfrac{1 + \left(\tau_1\gamma_1 - \tau_2 - \gamma_2\right)\omega^2 + \tau_2\gamma_2\,\omega^4}{\left(1 - \gamma_2\omega^2\right)^2 + \left(\gamma_1\omega\right)^2}$
$c(\omega) = -K_S\, \dfrac{\mathrm{Im}\,H(\omega)}{\omega} = -K_S\, \dfrac{\left(\tau_1 - \gamma_1\right) + \left(\tau_2\gamma_1 - \tau_1\gamma_2\right)\omega^2}{\left(1 - \gamma_2\omega^2\right)^2 + \left(\gamma_1\omega\right)^2}$   (11)
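A minimal sketch of Eq. (11), evaluating the dynamic stiffness and damping from the static stiffness KS and the time constants of H(s):

```python
def dynamic_stiffness_damping(Ks, tau1, tau2, gamma1, gamma2, omega):
    """K(omega) and c(omega) from Eq. (11), given KS, tau and gamma."""
    den = (1.0 - gamma2 * omega ** 2) ** 2 + (gamma1 * omega) ** 2
    K = -Ks * (1.0 + (tau1 * gamma1 - tau2 - gamma2) * omega ** 2
               + tau2 * gamma2 * omega ** 4) / den
    c = -Ks * ((tau1 - gamma1)
               + (tau2 * gamma1 - tau1 * gamma2) * omega ** 2) / den
    return K, c
```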

4 Set up Description

Figure 3a shows the scheme of the adopted test bench. The investigated pad 1 was located on a flat metal base 2; the static load and the corresponding air gap are imposed through a pneumatic piston 3. To evaluate the pad dynamics, a sinusoidal force² is superimposed on the static one through the serial connection of a modal exciter 4, a stinger 5 and a loading tip 6. The force transmitted to the pad is measured by a load cell 7 (38 kHz pass band) placed between the cylinder rod and the loading tip. The displacement of the pad was evaluated through four capacitive sensors S1, S2, S3 and S4 (10 kHz pass band) held by a sensor carrier 8 (Fig. 3b). The sinusoidal excitation F(ω) was applied as input to the pad, whereas air gap variations h(ω) were considered as output. Signals were acquired at 10 ksample/s. Tests were repeated at different frequencies, in the range from 0 to 200 Hz. Experimental dynamic stiffness K and damping c were computed from the real and imaginary parts of the frequency transfer function F(ω)/h(ω)³, which has been experimentally obtained.

Fig. 3a. Testing configuration

Fig. 3b. Capacitive sensors and loading tip locations.

5 Results and Discussion

Theoretical and experimental results are obtained with a supply pressure PS equal to 0.525 MPa. Figure 4 shows the static load carrying capacity and the air consumption of the pad.

² The gap amplitude variation has to be maintained at 10–15% of the nominal gap height.
³ It was experimentally verified that the inertial term related to the mass of the pad is negligible over the investigated frequency range.

Fig. 4. Static load and air consumption

Figure 5 shows dynamic stiffness and damping coefficients reported as functions of the excitation frequency f in the presence of air gap heights of 8, 10, 12 and 14 μm.

Fig. 5. Dynamic stiffness and damping versus frequency

Theoretical and experimental results show that stiffness and damping do not exhibit significant variations over the investigated frequency range. Figure 6 shows the pad stiffness and damping expressed as functions of the air gap height by considering different excitation frequencies of 10, 50 and 100 Hz.


Fig. 6. Dynamic stiffness and damping versus air gap height

6 Conclusions

Lumped models can be used as a simple and fast tool to design air pads by performing both static and dynamic studies. In this paper the approach has been applied to an air pad with a more complex geometry that also considers the presence of a groove. A lumped model has been developed and the theoretical and experimental results are discussed. The comparison of the results demonstrates the generally satisfactory accuracy of the simulation in both static and dynamic conditions; however, further studies are in progress to improve the accuracy of the model.

References

1. Gao, W., Arai, Y., Shibuya, A., Kiyono, S., Park, C.H.: Measurement of multi-degree-of-freedom error motions of a precision linear air-bearing stage. Precis. Eng. 30, 96–103 (2006)
2. Ro, S.-K., Park, J.-K.: A compact ultra-precision air bearing stage with 3-DOF planar motions using electromagnetic motors. Int. J. Precis. Eng. Manuf. 12, 115–119 (2011)
3. Yoshida, K.: Experimental study on the dynamics and control of a space robot with experimental free-floating robot satellite. Adv. Rob. 9, 583–602 (1994)
4. Charki, A., Diop, K., Champmartin, S., Ambari, A.: Numerical simulation and experimental study of thrust air bearings with multiple orifices. Int. J. Mech. Sci. 72, 28–38 (2013)
5. Li, Y.T., Ding, H.: Influences of the geometrical parameters of aerostatic thrust bearing with pocketed orifice-type restrictor on its performance. Tribol. Int. 40, 1120–1126 (2007)
6. Ghodsiyeh, D., Colombo, F., Raparelli, T., Trivella, A., Viktorov, V.: Diaphragm valve-controlled air thrust bearing. Tribol. Int. 109, 328–335 (2017)
7. van Beek, A., van Ostayen, R.A.J.: The design of partially grooved externally pressurized bearings. Tribol. Int. 39, 833–838 (2006)
8. Belforte, G., Raparelli, T., Viktorov, V., Trivella, A.: Permeability and inertial coefficients of porous media for air bearing feeding systems. J. Tribol. 129, 705–711 (2007)


9. Belforte, G., Raparelli, T., Viktorov, V., Trivella, A.: Metal woven wire cloth feeding system for gas bearings. Tribol. Int. 42, 600–608 (2009)
10. Raparelli, T., Viktorov, V., Colombo, F., Lentini, L.: Aerostatic thrust bearings active compensation: critical review. Precis. Eng. 44, 1–12 (2016)
11. Colombo, F., Lentini, L., Raparelli, T., Viktorov, V.: Actively compensated aerostatic thrust bearing: design, modelling and experimental validation. Meccanica, 1–16 (2017)
12. Colombo, F., Maffiodo, D., Raparelli, T.: Active gas thrust bearing with embedded digital valves and backpressure sensors. Tribol. Trans., 1–7 (2016)
13. Nakamura, T., Yoshimoto, S.: Static tilt characteristics of aerostatic rectangular double-pad thrust bearings with compound restrictors. Tribol. Int. 29, 145–152 (1996)
14. Chen, M.F., Lin, Y.T.: Static behavior and dynamic stability analysis of grooved rectangular aerostatic thrust bearings by modified resistance network method. Tribol. Int. 35, 329–338 (2002)
15. Belforte, G., Colombo, F., Raparelli, T., Trivella, A., Viktorov, V.: Performance of externally pressurized grooved thrust bearings. Tribol. Lett. 37, 553–562 (2010)
16. Belforte, G., Colombo, F., Raparelli, T., Trivella, A., Viktorov, V.: Comparison between grooved and plane aerostatic thrust bearings: static performance. Meccanica 46, 547–555 (2011)
17. Colombo, F., Raparelli, T., Trivella, A., Viktorov, V.: Lumped parameters models of rectangular pneumatic pads: static analysis. Precis. Eng. 42, 283–293 (2015)
18. Colombo, F., Ghodsiyeh, D., Raparelli, T., Trivella, A., Viktorov, V.: Dynamic behavior of a rectangular grooved air pad. In: 6th European Conference on Tribology, Ecotrib, 7–9 June, Ljubljana, Slovenia (2017)
19. Belforte, G., Raparelli, T., Viktorov, V., Trivella, A.: Discharge coefficients of orifice-type restrictor for aerostatic bearings. Tribol. Int. 40, 512–521 (2007)

Social Robotics

Social Robotics in Education: State-of-the-Art and Directions

T. Pachidis¹(✉), E. Vrochidou¹, V. G. Kaburlasos¹, S. Kostova², M. Bonković³, and V. Papić³

¹ Human-Machines Interaction Lab, EMaT Institute of Technology, 65404 Kavala, Greece
[email protected]
² Institute of System Engineering and Robotics, Bulgarian Academy of Science, Acad. G. Bonchev str. B1.2, P.O.B. 79, Sofia, Bulgaria
³ University of Split, Ruđera Boškovića bb, 21000 Split, Croatia

Abstract. Social robots are one type of cyber-physical system, the social equivalent of "Industry 4.0" technology, in applications involving humans, e.g. in service businesses. Our interest here is in applications of social robotics in education. This paper provides a road map regarding commercial social robots currently available in education. Recent literature is included regarding (a) analysis and evaluation of the effectiveness of social robots in education in terms of design specifications such as processors, sensors etc., (b) advantages and drawbacks of various robots currently used in education in terms of cost, impact and usability and (c) future potential directions of interest concerning educational robotics. Our study indicates that an effective design of interactive, educational robots calls for robustness and standardization of both hardware and software. Novel modeling methodologies might be necessary. Future challenges in the field are also discussed.

Keywords: Educational robots · Commercial robots · Evaluation methodologies · Review

1 Introduction

In recent years, robots attract considerable attention and become ever more popular in numerous applications [1]. Among the many applications, educational robotics acquires increased interest in education at all levels [2]. The Japan Robotics Association (JRS), the United Nations Economic Commission (UNEC) and the International Federation of Robotics (IFR) report an increase in the market of personal robots, including educational robots [3]. The interest of the European Union in social and educational robotics also increases steadily. For example, our research team already participates in two recently funded European projects, namely CybSPEED [4] and RONNI [5].


This paper presents a comparative literature review of commercial robots in education, including pros and cons. It also delineates potential future directions of interest in the region (i.e. the Balkans) as well as worldwide. The novelty of this work is that it examines and evaluates commercial social robots from two different perspectives, including, first, design specifications and, second, empirical results of their application in education during the past two years. The layout of this work is as follows: Sect. 2 provides an overview of the robotic platforms available in the market, including advantages and drawbacks. Section 3 presents recent applications of robots in education. Section 4 discusses challenges in the design of robots as well as proposals for potential future directions. Conclusions are summarized in Sect. 5.

2 State of the Art of Commercial Robotic Platforms in Education

This section reports the commercial robots developed for educational purposes and summarizes design specifications. Based on design, according to a recent review paper [6], robotic platforms can be divided into (1) brick-based robot assembly kits (Mindstorms, VEX IQ, etc.), (2) minimal mobile robot design kits (Arduino Starter Kit, BoeBot, etc.), (3) programmable robot manipulators (Servorobotics RA-02, Lynx AL5x, etc.), (4) open-source mobile platforms designed from commercial off-the-shelf components (MIT SEG, Harvard Kilobot, etc.), (5) fully-assembled commercial mobile robots (Thymio, iRobot Create, etc.) and (6) open-source miniaturized swarm robots (Robomote, Alice, etc.). This work does not mean to be extensive and exhaustive, since commercial robots are numerous. For this reason, seven robotic platforms have been selected and presented according to the following criteria: (1) most recent reports, only in the past two years, of the bibliography regarding educational experimental research that utilizes these robots, (2) the age to which these robots are addressed, so as to cover the full range of K-12 and university education. According to the above, the selected robotic platforms are: Lego Mindstorms [7], EZ-Robot JD Humanoid [8], Vex IQ Robotics [9], NAO [10], Bee-Bot [11], Romibo [12] and Thymio [13]. The provided information is presented comparatively, and it aims to assist educators and researchers in the selection of the most efficient platform, among the seven presented, according to their needs.

2.1 Design Specifications

This section summarizes the design specifications of the selected commercial robots developed for education, in terms of processor, programming language, sensors, connection, encoder, battery and cost. Table 1 lists all the above information.


Table 1. Comparison of selected commercial robots regarding design specifications (an "x" marks information not available).

- Lego Mindstorms EV3 (2013). Processor: TI Sitara AM1808 (ARM926EJ-S core) @300 MHz; Programming language: EV3 Software; Sensors: speaker, touch, colour, IR, motors, gyroscope; Connection: USB, Wi-Fi, Bluetooth; Encoder: yes; Battery: x; Cost: $350.00; Ref. [7]
- EZ-Robot JD Humanoid (2013). Processor: 32-bit ARM Cortex; Programming language: EZ-Builder, Robo-Scratch, C++, C#, Visual Basic; Sensors: motors, camera, ultrasonic, orientation; Connection: USB, Wi-Fi; Encoder: x; Battery: x; Cost: $429.99; Ref. [8]
- Vex IQ Robotics (2006). Processor: ARM Cortex-M4; Programming language: ROBOTC, Modkit, visual programming; Sensors: speaker, touch, colour, distance, motors, joystick, gyroscope, radio; Connection: USB, Bluetooth; Encoder: yes; Battery: x; Cost: $439.99; Ref. [9]
- NAO (2008). Processor: Intel Atom with 1.6 GHz; Programming language: C, C++, Matlab, Java, Python, Urbi, .Net, Choregraphe; Sensors: cameras, tactile, speaker, microphone, IR, sonar, bumpers; Connection: Ethernet, Wi-Fi; Encoder: yes; Battery: 1–1.5 h; Cost: $9000.00; Ref. [10]
- Bee-Bot (2011). Processor: x; Programming language: intuitive language via buttons; Sensors: touch, sound, directional light sensors; Connection: USB; Encoder: x; Battery: 4 h; Cost: $123.00; Ref. [11]
- Romibo (2013). Processor: x; Programming language: SD cards with questions and phrases; Sensors: light sensors, IR, accelerometers; Connection: USB, Wi-Fi, Bluetooth; Encoder: x; Battery: x; Cost: $649.00; Ref. [12]
- Thymio (2011). Processor: PIC24FJ128; Programming language: VPL, Blockly, Aseba Studio, AsebaScratch; Sensors: IR, touch, accelerometer, thermometer, microphone, motor, speaker; Connection: USB, Wi-Fi; Encoder: not directly; Battery: 3–5 h; Cost: $193.50; Ref. [13]

2.2 Advantages and Drawbacks of Existing Robotic Platforms

When reviewing commercial robotic platforms, aspects such as ease-of-use, power, expandability, versatility, reliability, universal appeal and integration with other technologies must be taken into consideration. Lego kits are, according to the literature, the most commonly used in all K-12 and university settings. Assembly is their basic feature. Their modular design allows students to create their own robots, thus helping them improve their visual spatial skills and stimulating them to experiment and innovate. They come with a variety of sensors and allow further expansions. Additionally, Lego programming is easy to learn, both for students and teachers. On the other hand, modular design is also considered a disadvantage, since brick components easily go missing. Moreover, there are limited inputs for sensors, thus the range of potential learning and real-world applications is limited [7].


EZ-Robot combines the versatility of a platform with, due to its appearance, an emotional connection with the user. It provides 16 degrees-of-freedom (DOF), a wide range of learning opportunities and can be used by all ages to create real-world applications through a friendly programming interface. Additionally, it allows children to elevate their programming skills, since it can be programmed starting with Robo-Scratch and moving on to C++, C# and Visual Basic [8]. Vex IQ is another modular robotics platform that allows students to perform traditional-style programming. It includes simple programming languages and has a sufficient number of ports and a variety of sensors. One of Vex's drawbacks is its modular design, which is not appealing for a broad range of children [9]. NAO, on the other hand, is a humanoid robot with a high degree of appeal for children. It is used in real-world robotic applications, such as in special treatment and special education, to engage children with learning difficulties and enhance the therapeutic process. It provides 25 DOF, several languages for programming, including C++, Matlab, Java, Python and the .Net Framework, and a graphic interface, Choregraphe. However, programming is demanding and thus intimidating for teachers and students. Moreover, its price is not affordable for many educators, and even if purchased as an educational robot for the classroom, it would be available only in limited numbers, one or two in the same classroom. For this reason, NAO is more appropriate at university level and in research, rather than in typical K-12 education [10]. Moreover, in general, the low processing power of commercial social robots and their low-resolution embedded cameras introduce additional drawbacks in object recognition that need to be addressed. Bee-Bot is a robot designed for use only by young children. It is easy to operate, friendly to program using the buttons on its back, appealing in appearance and affordable. It is used in teaching sequencing and control, positional and directional language, program sequences and repetitions, and understanding of algorithms [11]. Romibo is a remotely controlled, socially assistive robot, with mobility, speech, gesture and face tracking. It is used to train social and academic skills, but is usually utilized in special treatment [12]. Thymio is a small robot which allows children to learn a robot's language. It is affordable, very easy to program and allows numerous experiments [13]. These last robots are non-complex and refer to younger children. In general, the more complex the robot, the easier it is to malfunction. A recent study tries to explore the causes of breakdowns in children's interactions with a robotic tutor [14]. The results comprise four themes to explain why children's interactions with the robotic tutor break down: (1) the robot's inability to evoke initial engagement and identify misunderstandings, (2) confusing scaffolding, (3) lack of consistency and fairness, and finally, (4) controller problems. These breakdowns need to be rigorously addressed in order for robotic tutors to be able to feature in education.


3 Analysis and Evaluation of Commercial Robots in Education

The technological development of the 21st century has increased the use of multimedia tools in education; in other words, commercial robots are used more in the classroom. According to [15], children are also playing with robotics during their playtime. For this reason, analysis and evaluation of commercial robots in education is considered necessary, so as to investigate the influence of robotics on children's cognition, language, interaction, and social and moral development [16–18]. The present work aims to survey research on robotic applications to education over the previous two years, in order to guide the way for future studies.

3.1 Overview of Application of Robots in Education

In this section, an overview of the most recent results of applications with the selected social robots over the past two years takes place. Table 2 summarizes the most recent reported results of the bibliography regarding the use of each selected commercial robot in educational real-world applications.

3.2 Advantages and Drawbacks of the Use of Robots in Education

Educational theorists [26] claim that robotic activities may improve classroom teaching. However, the empirical evidence of the impact of robots in education is considered limited [27]. Without research evidence to support the influence on students' academic scores, robotics in education may be characterized as a current trend [2]. The reported outcomes on the use of robots are in most cases descriptive, since they are based on reports of educators regarding individual initiatives, involving a small sample of participants and not integrated into official classroom activities [27]. Another reported drawback in the literature is that most of the applications utilize the robot as an end, or a passive tool, in the learning activity where the robot has been constructed or programmed [28]. Giving more autonomy to robots, in the sense of intelligence, is one of the future challenges in the design of robotics and is discussed in an upcoming section. Moreover, the range of possible applications in education is rather limited, since they focus mainly on enhancing development and programming skills, rather than engaging more people by introducing a wider range of activities, connecting with more disciplines and interest areas such as music and art [29]. There are studies reporting that the use of robotics has not brought a significant increase in student learning [30]. A newly emerged negative factor is the stakeholders' perception of educational robotics [31]. Research studies [32, 33] investigated the perception of parents, children and teachers on the use of educational robotics. Results revealed that most of the parents felt less confident when playing with and teaching their children by using robotics. This is due to the lack of technological skills by the users. It is obvious that meaningful benefits will only be obtained if technology is used skillfully by teachers, aligning the provided tools with each student's educational needs [34]. This fact reveals another drawback: the lack of investment in well-trained educators,

comfortable with robots and programming [35]. Emphasis must be given to the correct guidance and the role of teachers, since teachers motivate, stimulate and influence students in their school work [36].

Table 2. Comparison of selected commercial robots regarding results in educational applications.

- Lego Mindstorms EV3 (study year 2016); Age: 18+; Area of interest: increase motivation on computer science in a bachelor course in mechatronics; Reported results: students had to program too much and lost the actual content of the exercises; motivation and fun factor were not increased, due to the workload for the EV3 programming; Ref. [19]
- EZ-Robot JD Humanoid (2016); Age: 18+; Area of interest: social engagement via data elicitation and interaction games; Reported results: this work tries to explore how humans' engagement with a social robot can be systematically investigated and evaluated; only 62.5% of the participants displayed engaging behaviours; Ref. [20]
- Vex IQ Robotics (2017); Age: 12+; Area of interest: enhance high school students' learning about biomimicry and swarm/multi-robot systems; Reported results: students developed an understanding of the characteristics and scope of technology, of engineering design and problem solving; Ref. [21]
- NAO (2017); Age: 8+; Area of interest: treatment of children with autism through imitation games; Reported results: preliminary application results suggest that robot-assisted treatment can improve children's behaviour; therapeutic objectives included improvement in social communication and interaction skills, joint attention, response inhibition and cognitive flexibility; experiments confirmed that a social robot is more readily accepted than a human by a child with autism; Ref. [22]
- Bee-Bot (2017); Age: 5–6; Area of interest: evaluate the short-term effects in preschool children of an intensive educational robotics training on executive functions; Reported results: the main finding was a significant improvement in both visuospatial working memory and inhibition skills, with a significant effect also on robot programming skills; these data provide scientific support to the hypothesis that educational robotics is suitable for progressively improving abilities in planning and controlling complex tasks in early childhood, fostering executive functions development; Ref. [23]
- Romibo (2017); Age: 5–7; Area of interest: a long-term study to promote STEAM (Science, Technology, Engineering, Arts, and Mathematics) education in an elementary school; Reported results: children played with the robots, constructed robot models with clay, and wrote and acted in a theater production with robots; this technique of using robots as actors in children's theater productions has significant potential for educating children in a number of fields under the STEAM paradigm; Ref. [24]
- Thymio (2017); Age: 7–8; Area of interest: to teach computer science (CS) with robotics to four second-grade classes; Reported results: the goal was to investigate the extent to which students actually learn CS concepts; findings revealed that students at such an early age were very engaged during the robotics activities and were highly motivated to succeed; furthermore, these young students do learn CS concepts but find it difficult to create and run their own programs; Ref. [25]


On the other hand, results regarding the use of robotics in education are, in total, positive. Recent studies [16] reported that robots encourage interactive learning and make children more engaged in learning activities. It is also reported that in education, the use of robots can potentially help children to develop various academic skills, e.g. in science understanding, mathematical concepts, and improvement of achievement scores [37, 38]. Additionally, the introduction of robotics in the curriculum increases the interest of children in engineering [37]. According to [39], the use of robots in education allows children to engage in interactive learning activities. Robots also appear effective in language skill development [40].

4 Challenges and Potential Future Directions

The 21st century vision of education is based on innovation and world-level technology. It is known that around 15% of the general population has learning difficulties [41]. Research in the area of robotics has made evident numerous possibilities for further innovation in the education of children. Future research needs to deal with the more effective design of robots to align with educational needs, in terms of hardware and software. Effective hardware design needs to fulfil the following requirements: (1) low cost, in order to support the pedagogical model of one robot per student, (2) advanced design, so as to support a variety of interesting curricula, e.g. many sensors for a broader range of applications, and (3) usability, so that the robot has a simple, easy-to-explain design. Design is usually the last consideration when incorporating robots into an application. However, studies reveal that design makes a difference in how robots are perceived, thus encouraging children to be more engaged in the activity [42]. Due to the scarcity of commercially available robot platforms for education, most research groups design their own robots. This is obvious from the bibliography, since most of the reported applications that use commercial educational robotics in real-world situations utilize in the majority either Lego for regular education, NAO for special education, or their own featured robots. A future direction for researchers is to contribute by developing affordable technologies for enhancing the learning process. Effective software design for education is also a future challenge. Commercial robots need to support several development environments, allowing students and teachers to develop at an advanced level, starting from block programming and moving to script. At that point, it is worth mentioning the lack of skillful teachers that feel confident near advanced technology. Investment in training educators, in addition to the purchase cost of robots, reinforces the need for investigative research to demonstrate the benefits of each approach to the use of robotics in education, guiding schools towards the effective use of the available robotic technology. Moreover, innovative teaching strategies and methodologies, in terms of a well-defined curriculum and learning material transferable across regions to support effective learning, need to be developed. By creating game-like learning environments, children in standard and special schools are more likely to reveal their creativity and potential. The design of complex activities for a robot to perform is highly likely to leave the robot unable to provide the guidance necessary to facilitate learning, due to technical limitations of current technology in terms of perception [14]. This invariably


raises an obstacle, where the only solution is to wait until robots reach an adequate level of intelligence to play such roles. Research across the world attempts to give intelligence to social robots so that they can be used as assistants or teachers in education. In conclusion, further directions must be oriented to promote the application of robots in education, overcome the learning difficulties of children and raise the educational level of future citizens, for a better quality of life and competence of a large number of people. Social robots might call for an innovative modeling methodology due to their interaction with humans, according to the following rationale. The operation of conventional (i.e. non-social) robots typically occurs in a physical environment excluding humans, based solely on electronic sensors; hence, numerical models suffice. Nevertheless, when humans are involved, non-numerical data emerge, such as words. In the latter context, the Lattice Computing (LC) paradigm has been proposed for modeling based on numerical and/or non-numerical data in social robot applications [4]. It remains to experiment with alternative modeling paradigms toward confirming any advantages they might have in educational applications. Social robots combined with ICT (Information and Communication Technologies) might be advantageous for delivering education in difficult landscapes, especially in Balkan countries, as explained in the following. In most Balkan countries, e.g. Bulgaria, there are extensive mountainous ranges. In other Balkan countries, including Greece and Croatia, there are in addition numerous inhabited islands. In all aforementioned countries there are dispersed communities living in small villages/towns, not easily accessed by conventional transportation. For all the latter communities, social robots combined with ICT, e.g. the Internet, can imply cost-effective educational opportunities delivered locally [5]. Robots sustain a physical substance; therefore both code and hardware are subject to licensing. While manufacturing has been the biggest beneficiary of robots' recent wide use, it is common in recent years that robots enter the mainstream as well. Open-source robotics has enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. There are many open-source projects that can help beginners to get started. A number of open-source hardware platforms (Sparki, Hexy, OpenPilot, ArduPilot, TurtleBot etc.) and open-source software projects (LeJOS, Rock, ROS etc.) exist and can support robotic research, education and product development.

5 Conclusions

This paper presents a literature review of commercial robots currently available in education. Its scope is to analyze and evaluate the effectiveness of commercial robots in education according to (1) their design specifications and (2) their reported results in educational applications over the last two years. Advantages and drawbacks of both approaches are presented. The aim of the review is to inform researchers and teachers about recent robots and their applications in education, and to guide the way toward potential future directions.


The commercial robots presented in this work are selected according to their appearance in the most recent reports in the bibliography on experimental educational research, and according to the ages these robots are addressed to, so as to cover a wide range of ages. It should be acknowledged that this study is based on seven commercial robotic platforms selected according to the aforementioned criteria. Other criteria and databases would have yielded more and different commercial robots and reference articles.

Acknowledgement. This work has been supported, in part, by the EU Interreg, Danube Strategic Project Fund (DSPF) Project no. 07_ECVII_PA07_RONNI "Increasing the well being of the population by RObotic and ICT based iNNovative educatIon (RONNI)".

References

1. Tamada, H., Ogino, A., Ueda, H.: Robot helps teachers for education of the C language beginners. In: International Conference on Human-Computer Interaction, pp. 377–384. Springer, Heidelberg (2009)
2. Johnson, J.: Children, robotics, and education. Artif. Life Robot. 7(1), 16–21 (2003)
3. Kara, D.: Sizing and seizing the robotics opportunity. Presentation in RT Los Angeles by Robotics Trends, USA (2003)
4. CybSPEED: Cyber-Physical Systems for PEdagogical Rehabilitation in Special EDucation. Horizon 2020 MSCA-RISE Project no. 777720, 1 December 2017–30 November 2021
5. RONNI: Increasing the well being of the population by RObotic and ICT based iNNovative educatIon. Interreg Danube Transnational Programme Project no. 07_ECVII__PA07_RONNI, 1 January 2018–31 December 2018
6. Karim, M.E., Lemaignan, S., Mondada, F.: A review: can robots reshape K-12 STEM education? In: 2015 IEEE International Workshop on Advanced Robotics and Its Social Impacts (ARSO), pp. 1–8. IEEE (2015)
7. Lego Mindstorms Homepage. https://www.lego.com/en-us/mindstorms. Accessed 12 Jan 2018
8. EZ-Robot Homepage. https://www.ez-robot.com/. Accessed 12 Jan 2018
9. Vex IQ Robotics Homepage. https://www.vexrobotics.com/. Accessed 12 Jan 2018
10. NAO Homepage. https://www.ald.softbankrobotics.com/en. Accessed 12 Jan 2018
11. Bee-Bot Homepage. https://www.bee-bot.us/. Accessed 12 Jan 2018
12. Romibo Homepage. https://www.origamirobotics.com/. Accessed 12 Jan 2018
13. Thymio Homepage. https://www.thymio.org/home-en:home. Accessed 12 Jan 2018
14. Serholt, S.: Breakdowns in children's interactions with a robotic tutor: a longitudinal study. Comput. Hum. Behav. 81, 250–264 (2018)
15. Beran, T.N., Ramirez-Serrano, A., Kuzyk, R., Fior, M., Nugent, S.: Understanding how children understand robots: perceived animism in child–robot interaction. Int. J. Hum. Comput. Stud. 69(7), 539–550 (2011)
16. Wei, C.W., Hung, I.C., Lee, L., Chen, N.S.: A joyful classroom learning system with robot learning companion for children to learn mathematics multiplication. TOJET Turk. Online J. Educ. Technol. 10(2), 11–23 (2011)
17. Shimada, M., Kanda, T., Koizumi, S.: How can a social robot facilitate children's collaboration? In: Social Robotics, pp. 98–107 (2012)


18. Kahn Jr., P.H., Kanda, T., Ishiguro, H., Freier, N.G., Severson, R.L., Gill, B.T., Shen, S.: "Robovie, you'll have to go into the closet now": children's social and moral relationships with a humanoid robot. Dev. Psychol. 48(2), 303 (2012)
19. Perez, S.R., Gold-Veerkamp, C., Abke, J., Borgeest, K.: A new didactic method for programming in C for freshmen students using LEGO Mindstorms EV3. In: 2015 International Conference on Interactive Collaborative Learning (ICL), pp. 911–914. IEEE (2015)
20. Can, W.S.R., Do, S., Seibt, J.: Using language games as a way to investigate interactional engagement in human-robot interaction. What social robots can and should do. In: Proceedings of Robophilosophy 2016/TRANSOR 2016, vol. 290, p. 76 (2016)
21. Doddo, M., Hsieh, S.J.: Board# 121: MAKER: a study of multi-robot systems recreated for high school students. In: 2017 ASEE Annual Conference and Exposition (2017)
22. Amanatiadis, A., Kaburlasos, V.G., Dardani, Ch., Chatzichristofis, S.A.: Interactive social robots in special education. In: IEEE 7th International Conference on Consumer Electronics (ICCE), Berlin, pp. 210–213 (2017)
23. Di Lieto, M.C., Inguaggiato, E., Castro, E., Cecchi, F., Cioni, G., Dell'Omo, M., Dario, P.: Educational robotics intervention on executive functions in preschool children: a pilot study. Comput. Hum. Behav. 71, 16–23 (2017)
24. Barnes, J., FakhrHosseini, M.S., Vasey, E., Duford, Z., Jeon, M.: Robot theater with children for STEAM education. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 61, no. 1, pp. 875–879. SAGE Publications, Los Angeles (2017)
25. Mondada, F., Bonani, M., Riedo, F., Briod, M., Pereyre, L., Rétornaz, P., Magnenat, S.: Bringing robotics to formal education: the Thymio open-source hardware robot. IEEE Robot. Autom. Mag. 24(1), 77–85 (2017)
26. Papastergiou, M.: Exploring the potential of computer and video games for health and physical education: a literature review. Comput. Educ. 53(3), 603–622 (2009)
27. Williams, D.C., Ma, Y., Prejean, L., Ford, M.J., Lai, G.: Acquisition of physics content knowledge and scientific inquiry skills in a robotics summer camp. J. Res. Technol. Educ. 40(2), 201–216 (2007)
28. Mitnik, R., Nussbaum, M., Soto, A.: An autonomous educational mobile robot mediator. Auton. Robots 25(4), 367–382 (2008)
29. Rusk, N., Resnick, M., Berg, R., Pezalla-Granlund, M.: New pathways into robotics: strategies for broadening participation. J. Sci. Educ. Technol. 17(1), 59–69 (2008)
30. Nugent, G., Barker, B., Grandgenett, N., Adamchuk, V.: The use of digital manipulatives in K-12: robotics, GPS/GIS and programming. In: 2009 39th IEEE Frontiers in Education Conference, FIE 2009, pp. 1–6. IEEE (2009)
31. Toh, L.P.E., Causo, A., Tzuo, P.W., Chen, I., Yeo, S.H.: A review on the use of robots in education and young children. J. Educ. Technol. Soc. 19(2), 148–163 (2016)
32. Liu, E.Z.F.: Early adolescents' perceptions of educational robots and learning of robotics. Br. J. Educ. Technol. 41(3), E44–E47 (2010)
33. Ruiz-del-Solar, J., Avilés, R.: Robotics courses for children as a motivation tool: the Chilean experience. IEEE Trans. Educ. 47(4), 474–480 (2004)
34. Thomaz, S., Aglaé, A., Fernandes, C., Pitta, R., Azevedo, S., Burlamaqui, A., Gonçalves, L.M.: RoboEduc: a pedagogical tool to support educational robotics. In: 2009 39th IEEE Frontiers in Education Conference, FIE 2009, pp. 1–6. IEEE (2009)
35. Vollstedt, A.M., Robinson, M., Wang, E.: Using robotics to enhance science, technology, engineering, and mathematics curricula. In: Proceedings of American Society for Engineering Education Pacific Southwest Annual Conference, Honolulu, Hawaii (2007)


36. Lindh, J., Holgersson, T.: Does Lego training stimulate pupils' ability to solve logical problems? Comput. Educ. 49(4), 1097–1111 (2007)
37. Highfield, K.: Robotic toys as a catalyst for mathematical problem solving. Aust. Prim. Math. Classroom 15(2), 22–27 (2010)
38. Barker, B.S., Ansorge, J.: Robotics as means to increase achievement scores in an informal learning environment. J. Res. Technol. Educ. 39(3), 229–243 (2007)
39. Chang, C.W., Lee, J.H., Po-Yao, C., Chin-Yeh, W., Gwo-Dong, C.: Exploring the possibility of using humanoid robots as instructional tools for teaching a second language in primary school. J. Educ. Technol. Soc. 13(2), 13 (2010)
40. Young, S.S.C., Wang, Y.H., Jang, J.S.R.: Exploring perceptions of integrating tangible learning companions in learning English conversation. Br. J. Educ. Technol. 41(5), 78–83 (2010)
41. Oldfield, J., Humphrey, N., Hebron, J.: Risk factors in the development of behaviour difficulties among students with special educational needs and disabilities: a multilevel analysis. Br. J. Educ. Psychol. 87(2), 146–169 (2017)
42. Woods, S.: Exploring the design space of robots: children's perspectives. Interact. Comput. 18(6), 1390–1418 (2006)

MYrobot – Mobile Educational Platform

Ondrej Karpis, Juraj Micek, and Veronika Olesnanikova

Department of Technical Cybernetics, Faculty of Management Science and Informatics, University of Zilina, Univerzitná 8215/1, 010 26 Zilina, Slovakia {ondrej.karpis,juraj.micek,veronika.olesnanikova}@fri.uniza.sk

Abstract. One of the paradoxes of the present era is that despite the popularity of computer systems in all their forms, the interest of young people in studying technical sciences, including programming, is diminishing. Technical universities must make a lot of effort to attract a sufficient number of applicants. For this reason, the MYrobot platform presented in this article was created. MYrobot is a mobile platform that can carry a smartphone; together they form a powerful robotic system. The developer of an application can use all smartphone components - an accelerometer, a microphone, a camera, etc. Advanced users can also develop expansion boards with various other sensors. The kit's programming options have been enhanced by a graphical development system that allows even primary school pupils to develop simple applications. An interactive form of platform programming has the potential to attract those students who are "afraid" of a flashing cursor.

Keywords: Robotic platform · Educational kit · Study motivation

1 Introduction

In recent years, we have witnessed a steady decline in young people's interest in studying technically oriented study programs. According to our data, interest has also dropped in previously attractive disciplines such as Informatics or Computer Engineering. Interestingly, the employability of high school and university graduates is excellent. Employers offer above-standard salaries, flexible working hours, work from home, and other benefits. Even so, they are unable to fill their demand for qualified manpower [1]. It is clear that young people still do not pay much attention to the prospects of their future professional employment. It is possible that schools at all levels of education have not sufficiently adapted to the changes brought by modern information technologies. In particular, the teaching of natural sciences and applied technical disciplines should make much more use of modern technologies and the opportunities they carry. It is not enough to replace the blackboard with an interactive whiteboard; it is necessary to change the structure of knowledge and the content of the subjects, to prefer the ability to interpret knowledge over memorizing it, to encourage creative problem solving, etc. Given the tremendous pace of technological development, it is necessary to review what constitutes the basic knowledge in


the field, which the student must master, and what is just insignificant detail. Note that these boundaries are continually moving in every field.

Another important aspect of the reluctance to study technology is the question of motivating young people in their choice of future profession. In this area, we see large gaps on the side of high schools and technical universities, which complain about the lack of skilled young applicants or their inadequate preparation, but in reality rarely engage in systematic popularization of their study subjects. The exceptions are events such as the "Night of the Scientist" or "Open Doors Day", organized once a year. Our school system, the way teachers are evaluated, as well as the requirements of their career growth, do not support activities related to the actual popularization of a study programme. In this area, therefore, we cannot expect a fundamental change.

The relatively poor state of popularization of the Computer Engineering programme has led us to develop a set of simple devices for creating interesting tasks in the fields of Computer Science and Computer Engineering. This resulted in the birth of "Yrobot" [2, 3] and later "MYrobot" [4]. The MYrobot kit, in cooperation with a smartphone, allows solving many different problems. The kit is an open system that enables a wide community of users to develop software solutions and additional application modules. The core of the kit is a mobile platform that can operate autonomously - controlled by an internal microcontroller - or cooperate with a smartphone over a Bluetooth and possibly WiFi communication interface.

We are aware of the fact that there is currently a variety of robotic devices suitable for teaching students, demonstrating interesting examples, and popularizing the field. Lego Mindstorms [5] is one of the best-known building kits that allows users to create their own robot and then program it. The great advantage of this system is its practically endless variability. Lego Boost [6] provides similar opportunities. For those interested in creating their own robots, there are the Hummingbird kits [7]. Among other systems focused on programming in a playful way are Ozobot [8], Dash [9] and Root [10]. These systems are mainly designed for creating custom software and do not allow significant modification of the programmed robot. For children of pre-school age, toys with more limited programming options are available, such as Pillar, Kibo, WowWee and BeeBot [11–14]. So why did we choose to develop our own system? At the time of the first version of Yrobot in 2011, there was not much choice of affordable robotic platforms on the market. Those that were available did not fit all our ideas about an open system that would be maintained and expanded by a broad community of users. Our target group was mainly secondary school students. The kit should be affordable and should allow the development of custom accessories. We wanted students to be aware of the presence of the underlying hardware and to be forced to know at least its rough features. These were the main reasons for developing our own platform.

2 Mobile Platform MYrobot

As already mentioned, the MYrobot kit was built as a natural continuation of the Yrobot kit. Yrobot is a mobile modular learning system including two motors with integrated transmission, sensors for angular rotation of the wheels, pushbutton barrier sensors,


signaling LEDs, two 7-segment displays and an acoustic module for audible signaling. The control part of the system is based on an 8-bit ATmega16 microcontroller, which can be programmed via an integrated USB AVR ISP programmer. Applications are developed in Atmel Studio in the C or C++ programming languages. Gradually, Yrobot adapted to development trends, notably to advances in the mobile phone area, and incorporated a smartphone into its solutions, which expanded its limited set of features. The design of the new system took into account the experience gained with the Yrobot kit. Some parts were changed (the microcontroller, the way motor revolutions are sensed, the battery location, the way obstacles are detected), several parts were removed (the USB AVR ISP programmer, the 7-segment displays, the buzzer), and the size of the robot decreased. In this way the new MYrobot platform was created (Fig. 1).

Fig. 1. Mobile platform MYrobot

The mobile platform is based on the 8-bit ATmega328P microcontroller, which manages all platform components. The speed of the two DC motors that move the platform is controlled through the DRV8833 driver circuit. The microcontroller obtains information about the rotation of the wheels from two magnetoresistive sensors, MRS 1 and MRS 2, which measure the intensity of the magnetic flux generated by a magnetic strip attached to the inside of each wheel. The platform also includes:

• The Bluetooth module HC05, with which the MCU communicates via the USART serial communication interface.
• Two reflective optical sensors, OS 1 and OS 2, located at the front of the mobile platform to identify the presence of an obstacle.
• Three buttons: RES - to restart the MCU, USR - a user-programmable button, and the On/Off button.
• A 6-pin ISP connector that serves to program the controlling MCU using a standard USBASP programmer.
• An application connector that serves to extend the system with additional application modules. At present, the IoT32 module is being developed, which extends the class of solvable tasks and enables the development of interesting applications in the "mobile sensors" area.


The whole platform is powered by two standard Li-Ion batteries (18650). A certain drawback is that the battery charger is not part of the platform. A block diagram of the mobile platform is shown in Fig. 2.

Fig. 2. Block diagram of the mobile robotic platform MYrobot.
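To give a flavor of the motor control path described above (MCU PWM driving the DRV8833), the following is a minimal avr-gcc sketch for one motor channel. The timer choice and the pin mapping (OC0A/OC0B to the driver inputs) are illustrative assumptions, not the documented MYrobot wiring.

/* Minimal sketch (avr-gcc, ATmega328P): driving one DRV8833 channel with
 * hardware PWM. The mapping OC0A/OC0B -> driver inputs is a hypothetical
 * assumption, not the documented MYrobot wiring. */
#include <avr/io.h>

static void motor_pwm_init(void)
{
    DDRD  |= _BV(PD6) | _BV(PD5);           /* OC0A (PD6), OC0B (PD5) as outputs */
    TCCR0A = _BV(COM0A1) | _BV(COM0B1)      /* non-inverting PWM on both pins */
           | _BV(WGM01)  | _BV(WGM00);      /* fast PWM, TOP = 0xFF */
    TCCR0B = _BV(CS01) | _BV(CS00);         /* clk/64 prescaler */
}

/* speed in -255..255; the sign selects the direction of rotation */
static void motor_set(int16_t speed)
{
    if (speed >= 0) { OCR0A = (uint8_t)speed;    OCR0B = 0; }
    else            { OCR0A = 0;    OCR0B = (uint8_t)(-speed); }
}

int main(void)
{
    motor_pwm_init();
    motor_set(128);                          /* run forward at roughly half speed */
    for (;;) { }                             /* PWM continues in hardware */
}

PWM on one driver input with the other input held low is one common way to drive the DRV8833; the real firmware may use a different timer or decay mode.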

3 Programming of the MYrobot

Compared with the Yrobot kit, the MYrobot kit has significantly expanded programming capabilities. The option of programming in the "traditional" way - creating source code (usually in C or C++), compiling it and then programming the microcontroller via the USBASP programmer - has been preserved. This standard programming approach requires users to have a fair amount of knowledge of the programming language used as well as of the corresponding development environment. As a result, the set of potential users of the kit is very limited. That is why we decided to take advantage of the massive spread of smartphones even among the youngest students, and created a system for programming the platform with smart devices - mobile phones and tablets. Program creation has been dramatically simplified: users can program in a graphical mode in an intuitive way, and virtually no prior programming experience is needed.

Before describing the graphical programming method in more detail, it is necessary to specify the possible ways of working with the kit. MYrobot can work in three modes:

• Basic: In this mode, MYrobot does not carry any other device (smartphone). It just executes orders received from a handheld smart device (mobile or tablet) via Bluetooth. The robot control options in this mode are very limited, as MYrobot


itself contains almost no sensors. The main function is robot movement according to a program, or direct remote control of the robot using a joystick or accelerometer. The basic mode mainly serves to get acquainted with the mobile robot itself and its mobility options. The main benefit of this mode is that only one smart device is needed. Working with the platform is interactive - the user modifies the program and can test it right away.
• Advanced: This mode requires two smart devices. The first device is located on the mobile robotic platform; together with it, it creates an intelligent mobile device with an extended portfolio of sensors. This smart device communicates with the mobile platform via USB or Bluetooth. The second smart device (a tablet is the best choice) acts as the supervisory control system and is operated by the user. The user creates a program that is subsequently executed by the mobile device. The smart devices communicate with each other via WiFi Direct. The user can use all of the sensors integrated in the device placed on the robotic platform (mainly the camera, microphone, accelerometer, gyroscope, GPS …). This mode significantly extends the programmer's possibilities. For example, it makes it possible to create programs in which the mobile platform responds to ambient stimuli (image, sound). With a dedicated application, it is also possible to remotely control the mobile platform with real-time video streaming.
• Standalone: The robotic platform is a carrier of a smart device (typically a smartphone). The smartphone acts as the supervisory control system - it controls the platform via USB or Bluetooth commands and executes a program created in advance. It can use all of its sensors that are supported by the platform. It is also possible for multiple robotic systems communicating via WiFi to cooperate. This mode is intended for advanced users. Its main drawback is that it does not allow interactive work with the platform.

The block representation of the individual modes is shown in Fig. 3.

Fig. 3. MYrobot platform activity modes: A - Basic, B - Advanced, C - Standalone

Fig. 4. The standard structure of software.

The software of the platform is divided into three layers (Fig. 4):

• Firmware layer: contains the firmware for the ATmega328P microcontroller integrated on the mobile platform. The main role of the firmware is to execute commands received through the serial port; the commands primarily control the platform movement. The control microcontroller ensures that the desired motion is performed based on feedback from the revolution sensors. The firmware also allows the parent system to detect the status of the subsystems integrated on the platform, i.e., the optical sensors, the magnetoresistive sensors, the user button, and the battery voltage. If the user wants to use the graphical programming method, the firmware should not be changed. However, it is possible to create an application that uses only the subsystems integrated on the mobile platform and on expansion modules; in this case, the method of creating the application is practically the same as with the Yrobot kit (standard programming). A sketch of such a firmware command loop is given after this list.
• Driver layer: drivers provide interfaces for all supported smartphone parts as well as for the mobile platform itself. The main application can only use those modules that are implemented in the drivers. Drivers must be installed on the device that communicates with the mobile platform; in the advanced mode, this means the smartphone placed directly on the mobile platform. A regular user does not have the option to modify the drivers. An advanced user can modify existing drivers or create new ones, e.g. for expansion modules connected to the application connector. Creating drivers requires a good knowledge of the Android operating system.
• Application layer: contains the user-created application. In the basic and standalone modes, the application is on the same device as the drivers; only in the advanced mode is the application on a different device. The application itself can be created in any programming language supported by Android. This concerns, in particular, the standalone mode, in which programmers are not limited by the availability of drivers for individual smartphone components and can use everything they need.
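As an illustration of the firmware layer's role, here is a minimal avr-gcc sketch of a serial command loop for the ATmega328P. The single-byte command codes and the drive() helper are hypothetical placeholders; the actual MYrobot command protocol is not documented in this paper.

/* Minimal sketch (avr-gcc, ATmega328P at 16 MHz): a firmware loop that
 * executes single-byte commands received over the USART (e.g. from the
 * HC05 Bluetooth module). The command codes 'F','B','L','R','S' and the
 * drive() helper are hypothetical - the real MYrobot protocol may differ. */
#include <avr/io.h>

static void usart_init(uint16_t ubrr)
{
    UBRR0H = (uint8_t)(ubrr >> 8);
    UBRR0L = (uint8_t)ubrr;
    UCSR0B = _BV(RXEN0) | _BV(TXEN0);        /* enable receiver and transmitter */
    UCSR0C = _BV(UCSZ01) | _BV(UCSZ00);      /* 8 data bits, 1 stop bit */
}

static uint8_t usart_receive(void)
{
    while (!(UCSR0A & _BV(RXC0))) { }        /* block until a byte arrives */
    return UDR0;
}

static void drive(int16_t left, int16_t right)
{
    /* Hypothetical: would map the signed speeds onto the DRV8833 PWM
     * duty cycles (see the motor sketch in Sect. 2). */
    (void)left; (void)right;
}

int main(void)
{
    usart_init(103);                         /* 9600 baud at 16 MHz */
    for (;;) {
        switch (usart_receive()) {           /* one byte = one command */
        case 'F': drive( 200,  200); break;  /* forward */
        case 'B': drive(-200, -200); break;  /* backward */
        case 'L': drive(-150,  150); break;  /* turn left in place */
        case 'R': drive( 150, -150); break;  /* turn right in place */
        case 'S': drive(   0,    0); break;  /* stop */
        default:  break;                     /* ignore unknown bytes */
        }
    }
}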


One of the reasons for the development of the MYrobot kit was the need to raise interest in studying technical programmes at universities among as many students as possible. In order to attract students without prior programming experience, it was necessary to simplify the programming of the kit as much as possible. There are currently a number of graphical programming languages that make it possible to learn the basics of programming in a playful way and thus develop the users' algorithmic thinking. We decided to create a dedicated development environment based on Google's Blockly library [15]. An application consists of several joined blocks, each block representing a specific operation, e.g. moving the mobile platform forward. The portfolio of available blocks depends on the selected mode. In addition to the basic programming blocks (cycles, conditions), each mode offers blocks for movement control of the mobile platform (forward, back, right, left) and blocks for reading the platform-integrated optical sensors. In the advanced and standalone modes, extra blocks are available, depending on the equipment of the smartphone used on the mobile platform. Examples of such blocks are: take a picture, show the picture, turn on the light, wait for a command (a voice command, clapping), play a sound, etc. Our goal is to support an open architecture of the development environment that will simplify the addition of new features and blocks.

Fig. 5 illustrates a simple application created in the graphical mode. At the beginning, the program waits for clapping. Then the robot repeats the following movement sequence four times: two steps forward, then turn right. The path traveled by the robot after complete execution of the application is square-shaped, and at the end of the movement the robot is back in its starting position; an equivalent textual program is sketched after the figure.


Fig. 5. Example of application in graphical mode
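For comparison with the block form, the same behavior written as text might look as follows. The helper functions wait_for_clap(), step_forward() and turn_right() are hypothetical placeholders, not documented MYrobot API calls.

/* Illustrative textual equivalent of the Fig. 5 block program. The three
 * helpers are hypothetical stubs, not documented MYrobot API calls. */
#include <stdio.h>

static void wait_for_clap(void) { /* block until the microphone detects a clap */ }
static void step_forward(void)  { /* advance the platform by one step */ }
static void turn_right(void)    { /* rotate the platform 90 degrees clockwise */ }

int main(void)
{
    wait_for_clap();                  /* the program starts on a clap */
    for (int i = 0; i < 4; i++) {     /* four sides of the square */
        step_forward();
        step_forward();               /* two steps forward ... */
        turn_right();                 /* ... then turn right */
    }
    /* the robot ends up back at its starting position */
    return 0;
}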


Of course, programming in the graphical mode will never allow full use of all the options provided by either the MYrobot mobile platform itself or the supervisory smart device (if used). We recall that the ability to program MYrobot in the classical way - whether the mobile platform (in C or C++) or the Android smart device (in Java) - is still preserved. Thanks to this, application development is open to people with varying levels of programming and hardware experience, and the application development options are practically unlimited. Although the platform is primarily aimed at students with little or no programming experience, it can also capture the interest of advanced programmers. The main attraction is the mobility of the smart device. With our robot platform, it is possible to develop applications of varying levels of difficulty:

• Simple applications that use only the mobile platform options, e.g. robot movement according to a fixed program, with a possible motion change when an obstacle is detected.
• More complex applications using ready-made image and audio modules: tracking a ball, following a line, controlling the robot by voice, etc.
• Challenging applications using advanced technologies such as neural networks: searching for a path in a maze, mapping an unknown space, and so on.

Wireless communication between multiple robots also enables the development and testing of cooperative algorithms: robotic football, group mapping, etc. Note that applications are not limited to using only the resources provided by MYrobot or the smart devices. Thanks to the application connector, various expansion boards can be developed to further enhance the capabilities of the kit.

4 Conclusion

The MYrobot kit introduced in this article is still in the development phase. Besides developing the kit itself and the programming environment, our goal is also to build a high-quality website with rich content (schematics, tutorials, examples, etc.) where users would be able to present their own applications. We believe that we will be able to create an open platform interesting for a wide range of students, from elementary schools to universities. Hopefully, thanks to such projects, we will be able to change the relationship of young people to technology from purely consuming to creative.

References

1. The Slovak Spectator. https://spectator.sme.sk/c/20470071/shortage-of-qualified-labour-hits-slovakia.html. Accessed 18 Jan 2018
2. Hodon, M., Kochlan, M., Micek, J., Karpis, O.: Yrobot: open HW platform for technical disciplines. In: Global e-learning, Madrid, pp. 257–274 (2015)


3. Kochlan, M., Hodon, M.: Open hardware modular educational robotic platform - Yrobot. In: 23rd International Conference on Robotics in Alpe-Adria-Danube Region RAAD, Smolenice Castle, Slovakia, pp. 1–6 (2014). https://doi.org/10.1109/raad.2014.7002246
4. Hodon, M., Micek, J., Karpis, O., Sevcik, P.: Robotic chassis for Android phones. In: 9th International Conference on Education and New Learning Technologies, EDULEARN 2017, Barcelona, pp. 3692–3696 (2017). https://doi.org/10.21125/edulearn.2017.1800
5. LEGO Mindstorms Homepage. https://www.lego.com/en-us/mindstorms. Accessed 18 Jan 2018
6. Lego Boost Homepage. https://www.lego.com/en-us/boost. Accessed 16 Mar 2018
7. Hummingbirdkit Homepage. http://www.hummingbirdkit.com/. Accessed 16 Mar 2018
8. Ozobot Homepage. https://ozobot.com/. Accessed 16 Mar 2018
9. Dash Homepage. https://www.makewonder.com/dash. Accessed 16 Mar 2018
10. Root – Kickstarter project homepage. https://www.kickstarter.com/projects/1509453982/root-a-robot-to-teach-coding. Accessed 18 Jan 2018
11. Pillar. http://www.fisher-price.com/en_CA/brands/think-and-learn/products/Think-and-Learn-Code-a-Pillar. Accessed 16 Mar 2018
12. Kibo Homepage. http://kinderlabrobotics.com/kibo/. Accessed 16 Mar 2018
13. WowWee Homepage. https://wowwee.com/elmoji. Accessed 16 Mar 2018
14. BeeBot Homepage. https://www.bee-bot.us/beebot.html. Accessed 16 Mar 2018
15. Blockly Homepage. https://developers.google.com/blockly/. Accessed 18 Jan 2018

On Ethical and Legal Issues of Using Drones

Ivana Budinska

Institute of Informatics, Slovak Academy of Sciences, Dubravska cesta 9, 845 07 Bratislava, Slovakia [email protected]

Abstract. The extensive use of drones raises many new questions regarding ethics and morality. These questions concern both the civilian and the military use of drones. Drones can serve as a mobile network that can reach places where other devices or people cannot get to, or can get to only with difficulty. They can efficiently collect data from large and hard-to-reach areas. They are often used for scanning and exploring forest and agricultural areas; together with advanced image and scene recognition methods, they can greatly reduce hard work, help reduce stress in crop growth, and protect forests from infestation. Other application areas include archaeology, e.g. when exploring remote areas. The speed and efficiency with which a scene can be captured are great benefits of this new technology. Many people also use drones for recreational purposes. Therefore, the ethical and legal problems associated with their widespread use should be emphasized.

Keywords: UAVs · Ethics · Legislation

1 Introduction

An unmanned aerial vehicle (UAV), often called a drone, can be a remote-controlled or autonomous airplane or model aircraft. The body of the drone can have different shapes and different forms of drive. Most often we encounter drones that move with the help of propellers. According to the number of propellers, we distinguish helicopters, tricopters, quadcopters and multicopters. However, drones may also have a different construction, for example in the form of a fixed wing. A UAV also has an independent control system, which is located directly on the device and allows the autonomous execution of some tasks. Drones are equipped with powerful sensors, especially camera systems and various types of sensors for recognizing the environment and objects in it. In addition, the drone may also be provided with a carrier for conveying other objects, if a particular use requires it. The device itself is part of a wider system that consists of other components. The most important ones are the terrestrial remote control system, communication, data transmission,


storage and processing, and other support systems. The UAV category includes a large number of aircraft, from those comparable in size and equipment to piloted aircraft down to very small flying devices used as toys or in research. In terms of size, we classify drones into four categories:

– nano and micro drones, which are mainly used in research but are expected to be used, for example, in medicine,
– small drones, tens of centimeters in size, which are currently used mainly for recreational and sporting purposes,
– medium drones, which are mainly intended for commercial use,
– large and very large drones, which approach the size of large piloted airplanes; their use for civilian purposes is not yet foreseen.

Small drones in particular are currently very widespread, owing to their affordability and ease of use. Another way of categorizing drones is based on their performance, i.e., on the size of the space in which they can operate; this is determined both by their sensory equipment and by their battery performance.

The use of drones is very extensive and new opportunities keep appearing. In general, we can categorize drones into recreational, commercial, and military; ethical issues for each type are described in [1]. Recreational users of drones include airplane model enthusiasts as well as photography and filmmaking hobbyists. Commercial use includes monitoring and guarding objects, monitoring the traffic situation, inspecting remote infrastructure (electricity lines, gas pipelines, oil pipelines), and obtaining information for weather forecasting. There is great potential for the use of drones in agriculture and forestry, but also in services such as parcel delivery. Drones will be increasingly important in rescue work, such as surveying affected areas and finding victims. Experiments are being carried out in which drones provide first aid to affected people and transport them to safety. Drones work alone or in groups. A current research trend is the coordination of robot groups, even heterogeneous ones, in which airborne and terrestrial robots, or robots moving in the aquatic environment, are also present.

The application capabilities of unmanned aerial vehicles are very large, and their widespread use places demands on the legislative and ethical issues we have to address. The following section provides an overview of the current status of UAV legislation in Slovakia in the context of European Union law. Section 3 raises specific ethical concerns about the widespread use of UAVs. Section 4 deals with the use of UAVs for military and defense purposes. At the end of the article, the development of robotics, its importance and its applications in the field of unmanned aerial vehicles is outlined.

2 Current State of Legislation for UAV Operation

The operation of unmanned aerial vehicles touches on several areas of the security and protection of the population. The European Union has established the


European Aviation Safety Agency (EASA) to coordinate civil aviation procedures in the European Union. In a document [2], EASA defines three categories of safety requirements for drones:

– open category - low risk: no authorization is needed to operate outside prohibited areas, which are defined by national authorities. Safety is guaranteed by product safety rules and a minimum set of operating rules, as well as by limits on operating weight, speed and method of communication.
– specific category - medium risk: permission from national authorities is required, issued on the basis of a risk assessment for the specific device. EASA provides a list of standards and parameters as a tool for assessing the degree of risk.
– authorized category - high risk: the requirements are comparable to those for piloted aircraft.

Restrictions on UAV operation are not limited to safety. If UAVs are equipped with recording capability (video, audio, etc.), the privacy aspects of their use should also be regulated. UAVs are increasingly used to monitor mass events, sports and cultural events, and various kinds of air shows; in some countries, insurance is required for these purposes.

2.1 Valid Legislation for UAVs in Slovakia

In Slovakia, the operation of unmanned aerial vehicles is currently regulated by the decision of the Transport Office no. 1/2015 of 19 August 2015 [3], which determines the conditions under which an aircraft capable of flying without a pilot may perform a flight in the airspace of the Slovak Republic. The Decree defines the unmanned aerial vehicles covered by its provisions, defines the persons who may hold a permit to fly a UAV, and assigns responsibility for the airworthiness of such a device. The Decree does not apply to kites and unmanned balloons. It regulates the conditions for the conduct of flight by autonomous and remotely controlled aircraft, the technical parameters of such devices, and the requirements for conducting a flight in controlled airspace; performing a flight with a fully autonomous airplane is prohibited in the airspace. The decisive criterion under the Decree is the maximum take-off weight: devices with a maximum take-off weight of 20 kg and above are subject to registration at the Transport Office. However, weight-based categorization is not sufficient, as it is clear that kinetic performance can also be critical to the potential threat to persons and property. The Decree also introduces additional obligations for those who control unmanned aircraft; among other things, it is necessary to keep a logbook with a record of the flights. Other limitations are related to aerial photography. If unmanned devices are used for imaging, they are subject to a specific provision of the Aircraft Act and may not operate without permission. However, the mere presence of a recording device on a UAV does not imply an obligation to apply for an authorization. The situation will change


when the device is actually used: performing aerial photography without permission is sanctioned. It is important to note that the authorization to operate sensing devices is generally issued for one year and is subject to a fee. A drone operator can also come into conflict with the law when operating the drone in an area with sensitive fauna; in this case, the operator must also have the consent of the nature conservation authority.

2.2 Examples of UAV Legislation in Other Countries

In the neighboring Czech Republic, the situation is similar to ours. An overview of the obligations of an unmanned aerial vehicle operator can be found on the Internet1. Unmanned system operation is regulated by Supplement X - Unmanned Systems, of Regulation L 2 - Rules of Flying. Operation of a UAV of up to 20 kg for recreational and sporting purposes is possible without permission and registration; public authorities require insurance. For commercial purposes, authorization and registration are required for all UAVs regardless of the maximum take-off weight.

In Germany, the operation of UAVs is legally embedded in an amendment to the existing Air Traffic Act, which defines UAVs as aircraft operated for non-recreational and sporting purposes. The German law also deals in detail with data protection, privacy and copyright protection, and defines sanctions in case of violation of the law.2

Even in the UK, the legislative restrictions do not apply to equipment used for recreational and sporting purposes, provided its weight does not exceed 20 kg. However, in all cases, privacy and data protection laws must be respected.3

The situation in the US, where UAVs have been used for a long time, is slightly different. Advanced technologies of autonomous unmanned systems are used primarily for military and spying purposes. However, civilian use is more widespread than in Europe. Unmanned systems are used for various commercial purposes, but are also part of police and rescue equipment; there have been cases of a drone helping in the arrest of a dangerous criminal.

The added value of drones and their potential for different areas is indisputable. As the number of cases in which drones used for recreational and sporting purposes pose a safety risk to the population rises, the issue of stricter legislation is very topical. It is important that users and developers of drones be involved in the legislative process alongside law-makers. Such an approach will certainly bring positive results and create space for user-friendly and safe use of drones by the public.

1 http://www.caa.cz/letadla-bez-pilota-na-palube?lang=1.
2 https://www.wbs-law.de/internetrecht/civilian-drones-legal-issues-surrounding-use-50459/.
3 http://www.telegraph.co.uk/technology/2016/04/18/drone-laws-in-the-uk–what-are-the-rules/.

3 Ethical Issues of UAV Use

Although the ethical issues related to UAVs mostly concern their military use, there are issues that need to be addressed even in the civilian use of drones.

3.1 Remote and Autonomous Motion Control

Imagine a scenario in which we operate a drone far away from its operator. Its task is to collect information about an unknown environment, to monitor agricultural or forest areas, or to patrol remote borderlines. In the current state of the art, such activities can also be carried out autonomously by drones. Their advantage is that they are not subject to fatigue at work, they are not distracted by circumstances, and the task they were designed and programmed for they will perform without error. They can enter environments that are dangerous for people, or perform their tasks in environments where the work is very demanding for humans. Everything seems to be all right. So what are the ethical issues?

The first set of issues concerns the work of the operator who is responsible for managing a UAV in a remote environment. Suppose, for example, that a remote-controlled or autonomous drone searches for victims at the site of a natural disaster, and a recording of the victims is transferred to the operator's center. How will the images of suffering people affect the operator's psyche when he or she is unable to help? What ethical principles will the operator apply when having to choose who will receive help first and who will not be helped at all? The recordings of the situation can be replayed, and the correctness of the operator's decisions verified afterwards; this second-guessing of the operator's decisions will also have an impact on his or her psyche.

The second area of ethical problems in this context is general availability - the availability of the technology to the inhabitants of regions at different levels of development. Unequal access will deepen inequality and, ultimately, tensions among the population. Another factor is people's trust in the work of autonomous devices. If unmanned systems operate in remote areas, their presence may be met with distrust by local residents, irrespective of the objectives of these devices. Here, it is necessary to apply generally accepted ethical principles and not to carry out activities that would or could harm other people, fauna, flora or property.

3.2 Data Collection and Acquisition

This area is legislatively relatively well developed within the framework of data protection and privacy protection. The ethical issues involved relate to the decision as to whether, and under what circumstances, it is acceptable to use the data obtained to ensure the safety of persons and property. Systems with artificial intelligence also play an irreplaceable role in the fight against terrorism: with advanced technologies, they can recognize faces, suspicious activity, and suspicious objects in a crowd of people. UAVs can get into dangerous places and provide us with


information about them. In fulfilling these tasks, however, they also obtain information about a number of other inhabitants. Even in these cases, we need to assess to what extent we are willing to renounce privacy in the interest of safety.

3.3 Autonomous Decision Making

The autonomous decision making of artificial intelligence systems is one of the most serious and most discussed issues in the field of robotics. Assume that a drone - an autonomous robot in general - is equipped with an advanced sensory system and has developed cognitive abilities. It is already clear today that robots, thanks to advanced technologies, can assess some situations better than a person. They are equipped with senses that enable them to recognize objects hidden behind an obstacle and to orientate better in the dark, and they have the computational power to recognize objects quickly and efficiently. They are not burdened with emotions unless we program them so. And here is the cornerstone of the problem: how should the robot's decision-making mechanism be programmed to deal with complex tasks in terms of utilitarian or deontological ethics, and which principle should be preferred? Can a robot refuse to execute an operator's order if, based on its capabilities, it evaluates the situation differently from the operator? A robot equipped with the ability to make a decision based on its own assessment of the situation is considered a moral robot [4]. However, how does one develop a moral robot? Moral principles differ depending on culture, social status, religiosity and geographical location. Scientists experiment with different approaches to the development of moral robots. According to [5], the robot needs to know what is right and what is wrong. However, each decision applies to a particular situation, and the evaluation of good and bad can vary from one situation to another. Therefore, Malle and Scheutz, together with Kristen Clark, attempt to compile a vocabulary of moral notions. A group of volunteers assesses possible actions in the context of a situation as right or wrong, and the results are stored in a semantic network that shows the relationships between the different actions and their context. In this way, a network is created that can assess how right or wrong the performance of a certain action is in a given context. If a moral robot has such a network available, we can assume that its behavior will conform to ethical principles in the given context. However, creating such a general vocabulary of moral terms is unrealistic. Therefore, we must confine ourselves to creating ethical rules for designing intelligent decision-making algorithms. If decision-making mechanisms are designed correctly, it is possible to assume that the artificial intelligence system will prioritize solutions that spare more individuals or whose consequences are more favorable.

3.4 Environmental Protection

Important issues of environmental protection are related to noise pollution and to the wrecks of damaged and destroyed UAVs, especially when micro drones working in large groups are used. Inhalation or ingestion of such


drones poses a serious health risk not only to humans but also to the animals in the environment. This also applies to the use of robots moving in the aquatic environment.

4 UAVs for Military and Defense Purposes

This issue has its own specifics and is dealt with in detail by institutions working in the military field. From the point of view of a civilian researcher, it is important to realize that any result can be misused, and that not all military activities are inhumane. Guglielmo Tamburrini in his article [6] draws attention to the dangers associated with the use of autonomous weapons; some legal and ethical aspects are discussed in [7]. At present, a wide range of robots operating in different environments are used for military purposes. Their border-guard service (e.g. in Israel and South Korea) and their work in dangerous environments are extremely beneficial to people. From an ethical point of view, however, they are very controversial. Above all, it must be remembered that current technologies are so advanced that in some of their abilities they surpass humans; on the other hand, even the most advanced contemporary machines are not infallible. Neither, of course, is man; but in the case of a failure of an autonomous system, the consequences can be much more extensive.

UAVs are used for military purposes to retrieve information about foreign territory and to transport various equipment, information and technology to remote locations, and they can also be used for direct military interventions. In the case of an open war conflict, it can be assumed that their activities will be, in a certain sense, more humane: a robot is not subject to stress or bad emotions, it does not show hatred toward the enemy, and it does not tend to commit crimes and violence. Even in military conflicts, UAVs can be used for purely humanitarian purposes, such as looking for injured soldiers and transporting them to safety.

Nevertheless, concerns about the use of autonomous unmanned vehicles in wartime conflicts are justified. UAVs conducting a survey above enemy territory may contain data that can be misused when captured by the wrong actors. Using UAVs creates an imbalance of power and provokes the resistance of local residents. While robots are equipped with advanced technologies, they always run the risk of confusing civilian and military objects, and attacks on targets may cause further damage and unplanned casualties among the civilian population. What share of innocent victims is acceptable for an attack to be evaluated as successful? According to international conventions, the use of weapons with a death rate of more than 25% is prohibited; for example, chemical and biological weapons are forbidden because they are too effective. Can we consider a robot whose efficiency is close to 100% more humane just because it does not target civilian objects?

5 Conclusion

The need for broad application of ethical and legal principles in connection with the widespread use of drones is very urgent. The main issues to be addressed


are related to the problems of privacy, the protection of the environment, and the health of the population [8,9]. The Aviation Strategy of 2015 also contains a legislative proposal that would allow technical rules and standards for drones and drone flights to be established. It is expected that the European Parliament and the EU Member States will reach an agreement as soon as possible. In November 2017, the Helsinki Declaration was issued in relation to real-world drone applications. It calls for citizens to be protected on the basis of safety, security, privacy and the environment. The Helsinki Declaration calls for the cooperation of scientists, developers and users of drones on three basic pillars: legal requirements; testing and verification of real-world drone applications; and the introduction of standards for the efficient development of digital technologies for drones. It is of the utmost importance for the scientists and engineers working on autonomous drones, and on intelligent systems in general, to think about the possible consequences of their work. We have a chance to build a better society, to better understand our weaknesses, and to utilize the potential of artificial intelligence systems to improve human life.

Acknowledgment. This work has been supported by the Slovak Scientific Grant Agency VEGA, grant No. 2/0154/16.

References

1. Wilson, R.L.: Ethical issues with use of drone aircraft. In: 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering (2014)
2. EASA: Introduction of a regulatory framework for the operation of unmanned aircraft (2015). https://doi.org/10.1109/HPDC.2001.945188
3. D/L001-A/v3: Decision no. 1/2015 of 19 August 2015, determining the conditions for performing a flight by an aircraft capable of flying without a pilot in the airspace of the Slovak Republic (2015) (in Slovak)
4. Sullins, J.P.: When a robot is a moral agent. In: Capurro, R., Hausmanninger, T., Weber, K., Weil, F. (eds.) Ethics in Robotics, International Review of Information Ethics (2016). ISSN 1614-1687
5. Malle, B.F., Scheutz, M.: Moral competence in social robots. In: IEEE International Symposium on Ethics in Engineering, Science, and Technology, June, Chicago, IL (2014)
6. Tamburrini, G.: On banning autonomous weapons systems: from deontological to wide consequentialist reasons. In: Bhuta, N. et al. (eds.) Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge University Press, Cambridge (2016)
7. Kreps, S., Kaag, J.: The use of unmanned aerial vehicles in contemporary conflict: a legal and ethical analysis. In: 2012 Northeastern Political Science Association 0032-3497/12 (2012). www.palgrave-journals.com/polity/
8. Veruggio, G.: EURON roboethics roadmap. In: EURON Roboethics Atelier, Genoa, 27 February–3 March 2006, Scuola di Robotica (2006)
9. Tamburrini, G.: On the ethical framing of research programs in robotics. AI Soc. (2014). https://doi.org/10.1007/s00146-015-0627-2

Effects of Physical Activity Based HCI Games on the Attention, Emotion and Sensory-Motor Coordination

Hasan Kandemir and Hatice Kose

Computer Engineering Department, Istanbul Technical University, Istanbul, Turkey {kandemir16,hatice.kose}@itu.edu.tr

Abstract. In this paper, several Kinect-based games developed and implemented for the improvement of attention and sensory-motor coordination are presented. The interface and difficulty levels of these games are specially designed for different age groups. The games involve physical activities for the fulfillment of basic tasks within the Human-Computer Interaction (HCI) game, such as fruit picking and air hockey, with different difficulty levels based on varying game parameters. The human action is observed and recognized via Kinect RGB-D sensors. The games were tested with a group of deaf children (3.5–5 years) as a part of the experiments of an ongoing project, to decrease the stress of the children and increase their enjoyment, attention and sensory-motor coordination before the main tests. Both the game results and the evaluation of the therapists and pedagogues show that the games have a positive impact on the children. The games were also tested with a group of adults as a control group, and the attention levels of the adults were additionally observed via a mobile EEG device. The children were supposed to use the EEG device in the main tests; therefore, the device was not integrated into their game sessions.

Keywords: Brain computer interface · HCI games · Development of cognitive skills · Attention · Games · Kinect · Deaf children

1 Introduction

In recent years, game playing has been successfully used for increasing learning capacity and developing cognitive skills [1]. Games that are not designed for special purposes also assist in the development of such skills, as presented in [2]. Some games encourage the players to think about strategies in the game and to concentrate on the game in order to improve their performance [3].

This study is supported by the Scientific and Technological Research Council of Istanbul Technical University, under the contract of BAP 39679, "Robot and Avatar based interactive game platform for Deaf children".


Game playing is useful for this purpose, since it assists in developing attention (attention indicates the intensity of mental focus; the attention level increases when a user focuses on a single thought or an external object, and decreases when distracted), planning, and sensory-motor coordination skills, and people tend to spend more time on, and get more motivated by, playing games than by other educational or therapeutic activities designed and implemented for the same purpose [4]. A main drawback of this approach, however, is being immobile, without any physical activity, for a long time while playing some of the games. While these games might still help in developing cognitive skills, in the long term they can become physically unhealthy. Physical activities themselves also help develop cognitive skills, because they encourage sensory-motor coordination [5]. Combining physical activities and game playing is therefore beneficial for improving the learning experience while avoiding remaining idle for extended amounts of time.

In this paper, two Kinect-based games are designed and implemented in order to show that physical activity based games help develop cognitive skills by encouraging the person's increased attention to succeed in the game. The paper also presents the results and a statistical analysis of them. This pilot study is part of ongoing work on assistive robotic platforms for the rehabilitation of deaf children. The games will be played with the assistance of the robot (the display screen on the robot will be employed, and the robot will give feedback about the success of the game), and the effect of the robot's presence will be analyzed; this paper is the first step of that work. The attention and the facial expressions of the participants collected during the games are also used in the emotion recognition of the children. This is especially important where it is hard to stimulate the children's reactions as the difficulty level of the tasks increases.

2 Description of Implemented Games

Two different Kinect-based games are developed. The first is called the "Falling fruits game", while the second is called the "Air hockey game". Both games have an adaptive difficulty property that adjusts the difficulty of the game based on how well the participant plays.

2.1 Falling Fruits Game

In this game, the player sees a number of fruits falling from the top to the bottom of the screen. The objective is to collect apples while avoiding pears. The player does this using one hand (the default is the right hand, but the player can switch hands at any time during the game session). The player's hand controls the basket that collects the fruits. Each collected apple increases the game score by one, while each collected pear decreases the score by one. There are three stages of the game, where the difficulty level increases by increasing the average falling speed of the fruits. Every fruit's falling speed is randomized based on a pre-defined range for each stage.


A maximum of four fruits is presented on the game screen at any time. The creation of the fruits is randomized, with a probability of 0.5. Each stage's maximum score is ten (ten apples are created at each stage). The physics needed for simulating the fall of the fruits is implemented from scratch. The adaptive difficulty of this game makes the fruits fall faster if the player is performing well and collecting apples; otherwise, it makes the fruits fall slower. At each collection of a fruit, the adaptive difficulty algorithm runs to find an adaptive difficulty coefficient that is applied to the falling speed of the fruits on top of the stage's own randomized falling speed value. The adaptive difficulty coefficient can be 0.25, 0.5, 0.75, 1, 1.5 or 2, depending on how high the performance is. If the current performance is high, the adaptive difficulty algorithm picks a high-valued coefficient. The size of the fruits and the background image of the game can be changed at any time using the menu bar at the top. Score, hit and miss values are shown on the upper right of the screen. Hit is the number of apples collected, miss is the number of apples that the player was unable to collect, and score is the overall game score calculated by subtracting the number of pears collected from the hit value. A sample image from the game is shown in Fig. 1.

Fig. 1. A screenshot from the fruit game.
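As a concrete illustration, the following minimal Python sketch shows one way the coefficient selection could be implemented. The coefficient set {0.25, 0.5, 0.75, 1, 1.5, 2} comes from the description above; the performance metric (the fraction of apples caught) and the band thresholds are assumptions, since the paper does not specify them.

# Hypothetical sketch of the adaptive difficulty coefficient selection.
# The coefficient set is taken from the paper; the performance metric
# (hit ratio) and its banding into six levels are assumptions.

COEFFICIENTS = [0.25, 0.5, 0.75, 1.0, 1.5, 2.0]

def adaptive_coefficient(hits: int, misses: int, pears: int) -> float:
    """Map the player's current performance to a speed coefficient."""
    total = hits + misses + pears
    if total == 0:
        return 1.0  # neutral speed until there is data
    performance = hits / total  # fraction of apples caught (assumed metric)
    # Split [0, 1] into as many bands as there are coefficients.
    index = min(int(performance * len(COEFFICIENTS)), len(COEFFICIENTS) - 1)
    return COEFFICIENTS[index]

def falling_speed(stage_speed: float, hits: int, misses: int, pears: int) -> float:
    """Apply the adaptive coefficient on top of the stage's randomized speed."""
    return stage_speed * adaptive_coefficient(hits, misses, pears)

With this banding, a player who catches most apples drives the coefficient toward 2, while frequent misses pull it toward 0.25, matching the behavior described above.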

2.2 Air Hockey Game

In this game, the participant tries to score goals against a computer-controlled player. Neither the human player nor the computer-controlled player can reach beyond their own half of the playfield.


Both players try to hit the puck to score or to defend their goal. The game does not have a maximum score and can be ended whenever the participant wants. The playfield is in the center of the screen. A button that resets the scores is placed at the top. The scores of both the human and the computer-controlled player are shown at the bottom. The legend for the game is placed at the right of the screen. A screenshot from the game is shown in Fig. 2. "CPU" represents the paddle of the computer-controlled player, "YOU" represents the paddle of the human player, and "Ball" represents the puck. The rule-based control module of the computer-controlled player, and the physics needed to simulate the collisions created by hitting the puck with the paddles (players) and the reflections of the puck against the borders of the field, are implemented from scratch. The rule-based algorithm is given in Algorithm 1. An adaptive difficulty system is also implemented for this game; it adjusts the speed of the computer-controlled player based on the human player's performance in terms of goals scored.

Data: posPuck = position of the puck (x,y); posPlayer = position of the human's paddle (x,y); posCPU = position of the computer's paddle (x,y); speedPuck = current speed and direction of the puck
Result: The computer player's next move is determined
if isPuckGoingToCPUsGoal(posPuck, speedPuck) then   // defend
    if isPuckInCPUsSide(posPuck) then
        if isCPUFronOfPuck(posPuck, posCPU) then
            getBehindOfThePuck(posPuck, posCPU);
        else
            takeAShotToHumansGoal(posPuck, posCPU, speedPuck, posPlayer);
        end
    else
        goToDefencePosition(posCPU);   // go in front of the goal
    end
else   // attack
    if isPuckInCPUsSide(posPuck) then
        if isCPUFronOfPuck(posPuck, posCPU) then
            getBehindOfThePuck(posPuck, posCPU);
        else
            goStraightToThePuck(posPuck, posCPU, speedPuck);
        end
    else
        goToAttackPosition(posCPU);   // go to the center of the CPU's half field
    end
end

Algorithm 1: Rule-based algorithm for the computer-controlled player in air hockey


Fig. 2. A screenshot from the air hockey game.

The human player's paddle is controlled by the player's hand motion, which is detected by the Kinect device. The rule-based system (RBS) first determines whether it should be in attacking or defending mode, based on the direction of the puck. If the puck is heading toward the RBS's goal, the RBS tries to defend; otherwise, it attacks. If the puck is not in the RBS's half of the field, the RBS's paddle goes to the attack position if it is in attacking mode, or to the defence position if it is in defending mode. The attack position is the center of the RBS's half of the field, while the defence position is in front of the RBS's goal. In both attacking and defending modes, the RBS's paddle tries to get behind the puck, without touching it, if the paddle is in front of the puck. If the RBS's paddle is already behind the puck, it tries to score a goal by shooting straight at the human player's goal if the RBS is in attacking mode, or tries to clear the puck away from the RBS's goal by going directly to the puck if the RBS is in defending mode.
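For concreteness, a minimal, self-contained Python rendering of Algorithm 1 is sketched below. The 2-D geometry (the field running along the y axis, with the CPU goal at y = 0), the helper predicates and the action labels are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the rule-based decision logic of Algorithm 1,
# under an assumed coordinate system: the field runs along y, the CPU
# goal is at y = 0 and the human goal at y = FIELD_LENGTH.
from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

FIELD_LENGTH = 100.0  # assumed field length

def is_puck_going_to_cpu_goal(pos_puck: Vec2, speed_puck: Vec2) -> bool:
    return speed_puck.y < 0  # moving toward the CPU goal

def is_puck_in_cpu_side(pos_puck: Vec2) -> bool:
    return pos_puck.y < FIELD_LENGTH / 2

def is_cpu_in_front_of_puck(pos_puck: Vec2, pos_cpu: Vec2) -> bool:
    # "In front" = between the puck and the human goal.
    return pos_cpu.y > pos_puck.y

def choose_cpu_action(pos_puck: Vec2, pos_cpu: Vec2, speed_puck: Vec2) -> str:
    """Return the CPU's next action label, mirroring Algorithm 1."""
    if is_puck_going_to_cpu_goal(pos_puck, speed_puck):      # defend
        if is_puck_in_cpu_side(pos_puck):
            if is_cpu_in_front_of_puck(pos_puck, pos_cpu):
                return "get_behind_puck"
            return "shoot_at_human_goal"   # clear toward the opponent
        return "go_to_defence_position"    # wait in front of own goal
    # attack
    if is_puck_in_cpu_side(pos_puck):
        if is_cpu_in_front_of_puck(pos_puck, pos_cpu):
            return "get_behind_puck"
        return "go_straight_to_puck"
    return "go_to_attack_position"         # centre of the CPU half

Each returned label would then be executed by a motion routine that moves the CPU paddle at the speed set by the adaptive difficulty system.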

3 Experimental Setup

During the experiments, a Brain Computer Interface (BCI) device called Mindwave is used for collecting brainwave data of the participants, both while playing the games and while sitting idle. In these experiments, only the data channel called "attention" is collected. The attention value is in the 0–100 range and represents the concentration of the participant wearing the Mindwave headset, derived from brainwave data. This data allows us to learn whether the participant is paying attention and focusing on the game. The experimental setup consists of a laptop, a Kinect RGB-D device and the Mindwave set. The Mindwave device is not used in the experiments where the attention value is not collected throughout the game session (Fig. 3).


Fig. 3. Experimental setup.
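As an illustration of how such attention values can be logged, the sketch below assumes that NeuroSky's ThinkGear Connector is running locally and streaming JSON records over TCP (commonly on 127.0.0.1:13854). The port, the configuration message and the field names follow the publicly documented ThinkGear socket protocol and are assumptions here, not the authors' actual acquisition code.

# Hypothetical logger for the Mindwave's attention values via the
# ThinkGear Connector's local JSON-over-TCP stream (assumed setup).
import json
import socket

HOST, PORT = "127.0.0.1", 13854  # default ThinkGear Connector endpoint

def log_attention(n_samples: int = 60) -> list:
    values = []
    with socket.create_connection((HOST, PORT)) as sock:
        # Ask the connector for parsed JSON output instead of raw samples.
        sock.sendall(b'{"enableRawOutput": false, "format": "Json"}\n')
        buffer = b""
        while len(values) < n_samples:
            buffer += sock.recv(4096)
            *lines, buffer = buffer.split(b"\r")  # records are \r-delimited
            for line in lines:
                try:
                    record = json.loads(line)
                except ValueError:
                    continue  # skip partial or non-JSON fragments
                attention = record.get("eSense", {}).get("attention")
                if attention is not None:
                    values.append(attention)  # 0-100, roughly once per second
    return values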

4 Experiments

Each of these two games has different properties, requiring different physical activities and different levels of attention and coordination. They also have different purposes: in the "Air hockey game" you need to compete against a rule-based system, while in the "Falling fruits game" you need to maximize your sensory-motor coordination. In the first set of experiments, attention values from each participant are collected throughout the game sessions for both games. Attention values are also collected while the participants are sitting idle, as a baseline, in order to compare the attention values in the normal state with those while playing the games. The attention values while playing are expected to be higher than those while sitting idle, which would support the claim that physical activity based games also increase brain activity. In the second set of experiments, only game score values are collected. These results show the relation between the participants playing the games and their performances.

4.1 Experiments for Measuring Concentration

In these experiments, four participants' attention values are collected while they are playing both of the games and while they are sitting idle. The mean, standard deviation and statistical significance of the attention values are shown in Table 1. ANOVA tests are conducted with a confidence level of 95%.

Table 1. Results for attention values

Participant     Idle            Fruit Game      Air Hockey Game   P-value   S.D.*
Participant 1   42.2 ± 19.61    73.85 ± 13.17   68.02 ± 19.54     0.001     Yes
Participant 2   45.35 ± 10.57   54.65 ± 7.53    65.5 ± 11.54      0.001     Yes
Participant 3   44.2 ± 24.86    37.42 ± 15.06   37.98 ± 12.53     0.092     No
Participant 4   35.97 ± 14.77   53.43 ± 12.64   50.8 ± 13.31      0.001     Yes
*S.D. = significantly different
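The significance values in Table 1 come from one-way ANOVA. A minimal sketch of such a comparison, assuming SciPy is available, is shown below; the sample arrays are invented placeholders, since the paper reports only summary statistics.

# Illustrative one-way ANOVA for one participant's attention samples,
# comparing the idle baseline against the two game conditions.
from scipy.stats import f_oneway

idle       = [42, 38, 55, 61, 30, 47, 40, 25]   # placeholder samples
fruit_game = [70, 81, 66, 75, 90, 72, 68, 77]
air_hockey = [65, 59, 80, 71, 62, 74, 66, 70]

f_stat, p_value = f_oneway(idle, fruit_game, air_hockey)
# With a 95% confidence level, the difference is significant if p < 0.05.
print(f"F = {f_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")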

4.2 Experiments for Game Scores

In order to calculate the performance of the participants, game scores are recorded. These experiments include gameplays of both adult and child participants. The participants were 6 adults aged between 20 and 27, and 12 children aged between 3.5 and 6¹. The children were wearing cochlear implants due to their hearing loss. The results of adults and children are treated separately. Histograms of game scores are created for both adults and children: the histograms for the fruit game are in Figs. 4 and 5 for adults and children, respectively, and the histograms for air hockey are in Figs. 6 and 7 for adults and children, respectively. The histograms show the game score ranges as bins on the x axis and the number of experiments that fall into each bin's score range on the y axis.
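For illustration, a minimal matplotlib sketch of how such score histograms could be produced is given below; the score lists are made-up placeholders, since the raw scores are not reported in the paper.

# Illustrative construction of the game-score histograms described above.
import matplotlib.pyplot as plt

adult_scores = [8, 9, 7, 10, 9, 8]                       # placeholder scores
child_scores = [5, 7, 8, 4, 9, 6, 7, 5, 8, 6, 7, 9]

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.hist(adult_scores, bins=range(0, 12))                # score ranges as bins
ax1.set(title="Adults", xlabel="Game score", ylabel="Number of experiments")
ax2.hist(child_scores, bins=range(0, 12))
ax2.set(title="Children", xlabel="Game score")
plt.show()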

Fig. 4. Histogram of game scores of adults

Fig. 5. Histogram of game scores of children

While the figures for both the fruit game and the air hockey game show that adults performed better, some of the children performed almost as well as the adults, which suggests that the games are suitable for all ages. All children who were asked to attend the game played voluntarily, and their motivation and reaction to the game were reported as positive by the therapist accompanying the test.

¹ The children and the families volunteered for the study in the Audiology Department of Cerrahpasa Medical School. They were accompanied by a pedagogue and the audiologists during the study. The project is approved by the Ethical Boards of both Cerrahpasa Medical School and Istanbul Technical University.


Fig. 6. Histogram of game scores of adults


Fig. 7. Histogram of game scores of children

One reason the performance gap between adults and children is small is the adaptive difficulty system. If the player's performance is low, the game helps them play better by decreasing the difficulty; if the player's performance is high, the game increases the difficulty to increase the challenge. In both cases, the adaptive difficulty system makes the games more enjoyable. Selected experiments showing the progress of different gameplays in terms of adaptive difficulty are shown in Figs. 8 and 9 for the fruit game and the air hockey game, respectively. These figures show that some players played poorly at the start and, with the help of adaptive difficulty, achieved good scores, while for a skilled player the game became more challenging and that player's performance started to drop at the end of the game session.

Fig. 8. Progress of adaptive difficulty throughout the game


Fig. 9. Progress of adaptive difficulty throughout the game

5 Conclusions

The experiments in this study show that physical activity based games increase the attention of individuals, which can help them increase the efficiency of their learning process. The first tests were done with 6 adults as a baseline. The tests were then repeated with 12 children wearing cochlear implants before they attended a hearing test, as a reward and to increase their motivation. The results are very encouraging: the children were highly motivated, enjoyed the games and adapted to them very quickly. The Mindwave set was not used with the children, to decrease their stress and increase their comfort; it will be used in the upcoming tests. The results and the observations of the therapists and experimenters show that both games were perceived as enjoyable by all participants. These games can also help people stay physically healthy, since they require physical activity while remaining enjoyable. The performance reports of both games suggest that the games are suitable for all ages. This makes the study more valuable, since the games can be used by anyone to increase their capacity to learn, help them stay physically healthy and let them enjoy their time. This pilot study is part of an ongoing work on assistive robotic platforms for the rehabilitation of deaf children. We plan to utilize these games with the assistance of the robot, with the robot being part of the game and giving feedback about it. These interaction games will enable the therapists to build a bridge between the robot and the deaf children and to familiarize the children with the robot. The adaptive difficulty levels keep the children engaged with the game, refine their sensory-motor coordination and attention, and help decrease their reaction time to the stimuli in the therapy.


References

1. Kourakli, M.: Towards the improvement of the cognitive, motoric and academic skills of students with special educational needs using Kinect learning games. Int. J. Child Comput. Interact. 11, 28–39 (2017)
2. Chen, C.-L.D., Yeh, T.-K., Chang, C.-Y.: The effects of game-based learning and anticipation of a test on the learning outcomes of 10th grade geology students. Eurasia J. Math. Sci. Technol. Educ. 12(5), 1379–1388 (2016)
3. Stadler, M.A.: Role of attention in implicit learning. J. Exp. Psychol. Learn. Mem. Cogn. 21(3), 674–685 (1995)
4. Divjak, B., Tomi, D.: The impact of game-based learning on the achievement of learning goals and motivation for learning mathematics - literature review. J. Inf. Org. Sci. 35(1), 15–30 (2011)
5. Schmidt, R.A., Lee, T.D.: Motor Control and Learning: A Behavioral Emphasis, 4th edn. Human Kinetics, Champaign, IL (2005)
6. Wang, A.I., Ibanez, J.G.: Learning recycling from playing a Kinect game. Department of Computer Science and Technology, Norwegian University of Science and Technology, Trondheim (2014)
7. Torres, A.C.S.: Cognitive effects of video games on old people. Int. J. Disabil. Hum. Dev. 10(1), 55–58 (2011)
8. Tsai, C.-H.: Development and evaluation of game-based learning system using the Microsoft Kinect sensor. Int. J. Distrib. Sens. Netw. 11(7), 498560 (2015)
9. Zelinski, E.M., Reyes, R.: Cognitive benefits of computer games for older adults. Gerontechnol. Int. J. Fund. Aspects Technol. Serve Ageing Soc. 8(4), 220 (2009)
10. Roelfsema, P.R., van Ooyen, A., Watanabe, T.: Perceptual learning rules based on reinforcers and attention. Trends Cogn. Sci. 14(2), 64–71 (2010)
11. Sarmiento, D., Díaz, Y., Ferro, R.: Using games to improve learning skills in students with cognitive disabilities through Kinect technology. In: International Workshop on Learning Technology for Education in Cloud. Springer, Cham (2016)

The Impact of Robotics in Children Through Education Scenarios

Apostolos Tsagaris1(✉), Maria Chatzikyrkou2, and Gabriel Mansour3

1 Department of Automation, Technological Educational Institution of Thessaloniki, Thessaloniki, Greece
[email protected]
2 Department of Philosophy and Education, Aristoteles University of Thessaloniki, Thessaloniki, Greece
[email protected]
3 Laboratory for Machine Tools and Manufacturing Engineering, Mechanical Engineering Department, Aristoteles University of Thessaloniki, Thessaloniki, Greece
[email protected]

Abstract. This research explores the impact of educational robotics on children aged 9–15. The research was conducted after a robotics training seminar and, as evidenced by its results, the learners were thrilled by the content of the program, although they initially mistrusted it out of relative ignorance. They were happy to meet new friends and worked with them without any particular difficulty. Neither the use of the computer nor the programming presented any difficulty. The research concluded that older children were more familiar with concepts directly related to programming, evidently because they are involved with technology every day, and less with robotics. With regard to the construction of the various vehicles, the trainees showed eagerness and inventiveness and were extremely effective. It is therefore evident that, through educational robotics, children can learn to cooperate more effectively with each other, and that the teaching of the basic principles of computer science, mathematics, geometry, physics, mechanics, and mechatronics in general can escape the narrow limits of conventional teaching and take the form of a game.

Keywords: Educational robotics · Children · Mechatronics

1 Introduction

Technology is evolving so fast that people often cannot follow these developments. The change concerns all areas, whether related to daily life, work or family life. In this context, the learning process could not be an exception. Any form of learning involves technology and, more importantly, programming, which is not only a cognitive subject but also an educational tool used to develop all of the learners' mental abilities. Computer literacy, as currently provided in education, is losing its meaning, because children are born into technology: they are "technological natives" and not "technological immigrants" like older people.


A traditional approach to programming, as attempted at school where a general-purpose programming language is taught, does not meet today's increased requirements, does not help in understanding new concepts, and cannot have the expected results. In most cases, both students and teachers are disappointed with this approach. It is therefore imperative to introduce new methods of teaching programming in order to make teaching more attractive and to maximize the knowledge gained. In recent years, physical mechanical models have been developed that interact with the environment, executing commands and performing moves and actions in general. These are called mechatronic systems, and in particular robotic systems. Educational robotics as a teaching tool creates the need for a more flexible, experiential curriculum that can support cross-thematic and constructivist approaches. Learning is usually perceived as the acquisition of knowledge, the appropriation of intellectual content; but the relationship with knowledge, which defines the relationship with learning, is the relation of a subject to the world, to himself and to others. That is why, among other things, we should promote experiential learning processes that foster the student's active participation, the liberation of his/her creativity, responsibility for the course of learning, and the strengthening of his/her critical thinking and consciousness. The theory of experiential learning therefore emphasizes the important role that experience plays in the learning process. If the collaborative method is added to the model of experiential learning, the results will be optimal. All relevant research rooted in educational robotics showed that students who were inexperienced in planning initially felt they would not be effective, but they were soon able to plan how to solve a problem, to analyze it and then to develop the program so that their construction accomplished the functions they wanted. As this process succeeded, interest and dedication to the activity increased. Consequently, an additional benefit was that students socialized better as they came into contact with other members of their team, working together harmoniously, encouraging experimentation, and having the opportunity to see the results of their programming directly, so that they perceived it as an interactive process. The purpose of this research is to find out whether robotics participants find the teaching effective and whether it helps them interact with mechatronic systems in general. The survey reveals the pleasure and satisfaction of participation, the challenge of a new experience and the joy of taking part. The age of the learners is also correlated with their perceptual ability in basic concepts related to technology.

2 State of the Art

In recent years educational robotics has timidly made its appearance and is constantly gaining ground. This has effectively contributed to the emergence of special packages that combine the constructional background of objects (e.g. Lego blocks) with sensors, processing units, actuators (e.g. motors) and friendly (visual) software that work together harmoniously for the construction and programming of a robot. Educational robotics is a very interesting and attractive activity for children.


It enables trainees to build an object, such as a wheeled vehicle, and then to program it to perform various moves depending on the scenario given to them by the trainer. In these cases they take into account parameters from the environment that are constantly changing, and process them through a program that they build in a user-friendly programming environment. At this point, the role of the trainer, who has to design his lesson, organize his teams and direct the whole process, is decisive. An unpleasant and at the same time inhibiting factor is that most of the time the teacher does not have the proper training and the necessary support, such as the lab, the computers, the robotics kit, the lesson plans etc. [1]. Atmatzidou and Dimitriadis [2] present an educational model for teaching robotics, considering that it contributes to the development of general and horizontal skills of students. Avramidou also argues that educational robotics contributes to the development of computational thinking skills [3], while Tsang [4] proposes an approach that exploits educational robotics in conjunction with visual programming. In some cases, the Lego Mindstorms for Schools robotic package was used to learn basic programming principles (in particular, selection and iteration structures) in a game-based problem-solving project. In other cases, a teaching proposal is presented which attempts the pedagogical exploitation of educational robotics in the form of an interdisciplinary synthesis work within the framework of secondary education. In fact, many of the early attempts to exploit educational robotics were funded by robotic kit manufacturers, with the result that the teaching content and the corresponding workshops are geared to the specific kits, with the implied advantages and disadvantages, but in any case ignoring the general picture and discouraging moves towards alternative methods [5]. There are, of course, open-source platforms on which one can build and move constructions similar to those offered by the classic Lego Mindstorms kits, with the advantage of using commercial subsystems, the joy of creating connections that can be applied to a large number of electronic devices, and of course a fraction of the cost of the respective commercial kits [6]. It is worth noting that, by creating a "farm" of such structures, each with specific functions, complex functions and constructions can be achieved, directly applicable to both the domestic and industrial sectors [7].

3 Proposed Methodology

Before the implementation of the present research, the relevant literature was studied, the purpose and the individual objectives were identified, the research questions were raised and the appropriate methodology was chosen. When designing the research, every effort was made to comply with all ethical rules governing the research process. The study took place in the region of West Thessaloniki between September 2016 and March 2018, and the sample consisted of 120 students aged 9–15. A questionnaire was used to collect the data. This was decided because, in any case, there are no specific rules under which the method for studying a research problem should be chosen, as each has its advantages and disadvantages. The selection of the research method therefore takes into account several factors, such as the object itself, the existing resources, the time available, etc.


Trial Application
Before the questions were finalized, the questionnaire was tested in order to check whether there were any ambiguities in the wording of the questions that would prevent its completion or lead to misinterpretation by the respondents. From the observations of the five trainees who received the test questionnaire, it was found that the most important problems arose from questions that were not understandable because they used specific terminology unfamiliar to this group of people. For this reason, some minor adjustments were made, and the average time to complete the questionnaire was calculated.

Sample
A convenience sample was used. The questionnaire was completed during the courses, as part of a robotics training seminar for children of primary and secondary school. A total of 120 people who attended the training program in groups of 20 participated in the survey.

Questionnaire
The primary concern was the identification of the subject under study. As part of this effort, the necessary bibliographic review was carried out, and it was then necessary to define clearly the individual axes-themes that make up the research problem, the specific information that constitutes each axis-theme, and the motives and obstacles that learners experienced or are still experiencing in the process of training in new technologies. Subsequently, a variety of categorical variables emerged, some dependent and some independent. The dependent variables relate to the effectiveness of experiential learning and the success of group collaboration, while the independent ones relate to demographic factors such as gender, age, place of residence and parents' education level, as well as previous knowledge of computers, programming and robotics. As far as the scale is concerned, a four-point scoring scale was adopted, with two positive answers (Enough and Very) and two negative ones (Little and Not at all). It is a differentiated form of Likert's five-level scale, which is widespread in surveys that record beliefs, values, motives and other manifestations of human behavior. This scale has a modest range, that is to say, neither a three-step limitation nor a wide and indistinguishable boundary between seven or nine tiers, and it is considered to include equal distances between the levels [8]. The reason why there was no middle answer was to avoid the accumulation of neutral responses and thus to force the distribution of responses into negative and positive ones. The meaning attributed to the four degrees of our scale is as follows: for the second section of the questionnaire, which includes the participation incentives, the correspondence is 1 = not at all, 2 = little, 3 = enough, 4 = very. At the beginning of the questionnaire, a brief reference is made to the nature of the survey, the responsible organization that supports it, information confidentiality issues, and completion instructions. The structure of the questionnaire includes two parts: the first refers to the general characteristics of the trainees (questions 1–8) and the second to the details of participation in the robotics educational program.


In the questions concerning the general characteristics of the trainees, it was considered appropriate to ask for gender, age, place of residence and parents' education level, as they are crucial to the results of the research, thus defining the independent variables of the survey. The questions of the second section refer to the details of the implementation of the educational process. Questions 6, 7 and 8 ask whether the trainees have a PC at home, how often they use it, and why; questions 9, 10, 11 and 12 address prior knowledge and use of programming, robotics, algorithms and Lego devices. Questions 13, 14, 15, 16 and 17 probe the effectiveness of group cooperative learning, with emphasis on teamwork and potential problems that may arise from it, while questions 18, 19, 28, 29 and 30 refer to the pleasure and satisfaction that the participants felt. Questions 20 and 21 attempt to identify the ease of use of the software, and question 22 the ease of construction, since experiential techniques are an important factor in this research. Finally, questions 23, 24, 25, 26 and 27 refer to the ease of implementation of the problem-solving scenarios that were posed.

4 Experimental Results

120 volunteers aged 9 to 15 participated in the experiment. Each subject was seated in front of the mechatronic system and asked to complete some tasks. After completion of the last course, a rest period of a few minutes was given and the subjects were asked to complete a questionnaire. All subjects and their parents gave informed consent prior to the experiment. The committee consisted of the authors of the current research article¹. In order to analyze the data collected through the research process, Microsoft Excel and the SPSS statistical package were used for electronic processing and statistical analysis. After the collection of the questionnaires, an initial check was made, followed by their numbering, the coding of the answers and the recording of the data in computerized electronic files. Electronic data processing and statistical analysis followed, resulting in the corresponding findings and conclusions. The survey sample consisted of 120 trainees, of whom 81 (67.5%) were boys and 39 (32.5%) were girls. With regard to their ages, 14.17% of respondents are 9 years old, 20.83% are 10 years old, 14.17% are 11 years old, 23.33% are 12 years old, 10.83% are 13 years old, 10.0% are 14 years old, and 6.67% are 15 years old (Fig. 1). Of the 120 trainees in the sample, 2 (1.67%) have a mother who is a primary school graduate, 44 (36.67%) a high school graduate and 74 (61.67%) a university graduate. As regards the fathers, 1 (0.83%) is a primary school graduate, 61 (50.84%) are high school graduates and 58 (48.33%) are university graduates (Fig. 2).

¹ The children and the families volunteered for the study in the Automation Department of ATEI of Thessaloniki. They were accompanied by a pedagogue during the study. The project is approved by the Ethical Board of ATEI of Thessaloniki.


Fig. 1. Age and gender

Fig. 2. Parents education level

Of the 120 trainees in the sample, the overwhelming majority (118, or 98.33%) have a computer at home, while only 2 (1.67%) do not. The frequency of computer use is as follows: 3 (2.5%) do not use computers, 34 (28.33%) use them a little, 40 (33.33%) use them enough, and 43 (35.83%) use them a lot (Fig. 3).

Fig. 3. The use of computer at home


To analyze the correlation between age and the use of technology by the children, the χ² criterion was used, known as the chi-square test of independence, or the contingency-table test. It examines whether the two variables that make up a two-way table are independent or not. It is essentially a statistical test based on the χ² distribution at significance level α = 5%. It is therefore an attempt to study whether age affects the impact of educational robotics on children (or, equivalently, whether the impact of educational robotics on children is influenced by age). The correlation test on the questionnaire results showed an age dependence for the previous use of Lego technology (Question 9): the younger children are more familiar with these technologies. This is because such technologies have been spreading rapidly only in the last few years, so the older children have apparently not been recipients of them. Programming knowledge (Question 11) and the concept of the algorithm (Question 12) also seem to be age-related, but in this case the older children were better acquainted with the concepts. This suggests that programming became familiar to them earlier, while Lego robotics appeared later, when they had already grown up and no longer dealt with such kits as toys. Also, the ease of using the software (Question 20) and the commands (Questions 23 and 27) for robot control, which was greater for younger students, was directly related to age, and the above finding is confirmed, as the young children knew Lego.
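As an illustration of this test, the sketch below applies SciPy's chi-square test of independence to a hypothetical age-by-answer contingency table for Question 9; the counts are invented placeholders, as the paper reports only the outcome of the test.

# Illustrative chi-square test of independence between age group and
# prior Lego use (Question 9). The contingency counts are placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age groups 9-11, 12-13, 14-15; columns: answers 1-4 to Question 9.
observed = np.array([
    [8, 12, 7, 32],   # younger children: more prior Lego use (assumed)
    [9, 14, 5, 13],
    [9, 10, 2, 9],
])

chi2, p_value, dof, expected = chi2_contingency(observed)
# Independence is rejected at alpha = 0.05 if p < 0.05.
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")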

Table 1. Questionnaire (answers: 1 = not at all, 2 = little, 3 = enough, 4 = very)

9. Have you ever used Lego? — 1: 22%, 2: 30%, 3: 12%, 4: 36%
10. Did you know what robotics means? — 1: 2%, 2: 30%, 3: 27%, 4: 41%
11. Did you know PC programming? — 1: 13%, 2: 28%, 3: 35%, 4: 24%
12. Did you know what an algorithm means? — 1: 50%, 2: 21%, 3: 13%, 4: 16%
13. How much did you want to participate in the robotics seminar? — 1: 2%, 2: 3%, 3: 22%, 4: 73%
14. Did you know the rest of your team? — 1: 16%, 2: 16%, 3: 24%, 4: 44%
15. Did you work with them in the past? — 1: 43%, 2: 20%, 3: 19%, 4: 18%
16. Did you feel shy (embarrassed) when you joined your team with other members? — 1: 81%, 2: 15%, 3: 2%, 4: 2%
17. Did you feel pressured in your team? — 1: 72%, 2: 16%, 3: 6%, 4: 6%
18. How much joy did you feel every time you participated in a seminar activity? — 1: 1%, 2: 25%, 3: 30%, 4: 44%
19. How afraid were you that you would not be able to meet the requirements of the lessons? — 1: 53%, 2: 32%, 3: 9%, 4: 6%
20. How easy was the PC program you used to program the robot? — 1: 3%, 2: 38%, 3: 16%, 4: 43%
21. Did you remember which blocks to choose for each command? — 1: 2%, 2: 48%, 3: 14%, 4: 36%
22. How easily did you assemble the vehicle with the tracks? — 1: 2%, 2: 44%, 3: 13%, 4: 41%
23. How easily did you program the vehicle to turn 90°? — 1: 4%, 2: 35%, 3: 17%, 4: 44%
24. How often did the vehicle deviate from what you wanted it to do? — 1: 4%, 2: 35%, 3: 43%, 4: 18%
25. If the vehicle went wrong, how easily did you correct the mistake? — 1: 1%, 2: 24%, 3: 54%, 4: 21%
26. When you used a distance sensor, how often did the vehicle hit an obstacle? — 1: 21%, 2: 46%, 3: 29%, 4: 4%
27. How easily did you program the vehicle with the color sensor to follow the black line? — 1: 5%, 2: 24%, 3: 32%, 4: 39%
28. Were you tired or bored during the lessons? — 1: 60%, 2: 28%, 3: 9%, 4: 3%
29. Overall, did you like the seminar? — 1: 2%, 2: 5%, 3: 16%, 4: 77%
30. Would you recommend it to a friend of yours? — 1: 7%, 2: 9%, 3: 18%, 4: 66%


Summarizing, the trainees' characteristics of participation in technology education are listed in Table 1, in the order of the questionnaire.

5 Conclusion

As the survey showed, the children had used Lego extensively in the past (AVG 2.63), although the majority had used Lego of the simplest form. They were also familiar with the concepts of robotics (AVG 3.06) and computer programming (AVG 2.71), although algorithmic logic was not so familiar (AVG 1.95): the former words are heard almost daily, causing confusion about their content, while algorithmic logic is only accessible to those who are more in touch with it. As it turned out, older learners were completely familiar and effective in planning compared with younger ages. In all age groups, the concepts of robotics were almost unfamiliar. However, the desire to participate in such a program (AVG 3.67) was strong, and although the participants did not know each other in advance and had not cooperated in the past (AVG 2.13), they did not feel any shyness during participation (AVG 1.26) and did not feel crowded in the group (AVG 1.45). This is, of course, due to the enthusiasm they showed for the seminar and their eagerness for the successful implementation of the missions assigned to them, leaving no room for controversy. This enthusiasm, and the competition they felt with rival teams, caused great joy (AVG 3.58) every time they performed an activity, and they rarely feared (AVG 1.68) that they would not meet the requirements of the lessons. They found the PC program easy (AVG 3.16) and functional for programming the robot (AVG 3.28). Besides, the necessary explanation of the interface and guided practice were provided by the trainers; this made it easier for the trainees to use the interface and contributed to the successful completion of the scenarios. Assembly (AVG 3.27) and motion programming (AVG 3.1) were also easy. It was observed that mistakes were made several times (AVG 2.74) but were easily corrected (AVG 2.95) without the intervention of the trainer, because the trainees actually found the mistake themselves, which made the correction easier. The use of the sensors was a little harder (AVG 2.17), but exercise with them showed ease of use (AVG 3.05). This was expected, as children are not familiar in their daily lives with sensors, much less with the way they work. Finally, they did not feel fatigue (AVG 1.76), they liked the training program (AVG 3.69) and they would comfortably recommend it to a friend of theirs (AVG 3.43). It is also a fact that in the seminar the acquisition of new knowledge took the form of a game, which is very important when referring to ages 9–15.


Thus, the new knowledge of computer science, mathematics, geometry, physics, mechanics, and mechatronics in general was acquired without causing boredom, fatigue, or any other unpleasant feeling, contrary to conventional teaching methods.

References

1. Alimisis, D.: The Lego Mindstorms programming environment as a tool. In: 4th Pan-Hellenic Conference on Computer Science. ASPAITE, Patras (2009)
2. Atmatzidou, S., Dimitriadis, S.: Design and implementation of an educational robotics educational framework. In: 8th Panhellenic Conference: Teaching of Informatics, Ioannina, 23–25 September 2016, pp. 89–96 (2016)
3. Avramidou, M.: Educational robotics and computational thinking development: the role of gender in the composition of the groups. Postgraduate diploma thesis, Aristotle University of Thessaloniki, Department of Informatics, Thessaloniki (2016)
4. Tsang, C.: Constructivist Learning Using Simulation and Programming Environments. MIE2002H Readings in Industrial Engineering I (2004)
5. Bredenfeld, A., Hofmann, A.: Robotics in education initiatives in Europe - status, shortcomings and open questions. In: Proceedings of International Conference on Simulation, Modeling and Programming for Autonomous Robots (SIMPAR 2010) Workshops, pp. 568–574 (2010)
6. Couceiro, M.S., Figueiredo, C.M., Luz, J.M.A., Ferreira, N.M.F., Rocha, R.P.: A low-cost educational platform for swarm robotics. Int. J. Robots Educ. Art 2, 1–15 (2011)
7. Warren, J.-D., Adams, J., Molle, H.: Arduino Robotics. Springer (2011)
8. DeVellis, R.F.: Scale Development: Theory and Applications. Applied Social Research Methods Series, vol. 26. Sage Publications, London (2003)

Trends in Educational Robotics

Daniela Floroiu1, Paul C. Patic2, and Luminita Duta2(✉)

1 UPG University, Ploiesti, Romania
2 Valahia University, 18 Sinaia Ave., Targoviste, Romania
[email protected]

Abstract. The present paper emphasizes the importance of STEM education in primary and secondary school, as well as the use of educational software in robotics taught in high schools and universities. Several current European and worldwide trends in educational robotics are reviewed.

Keywords: Education in robotics · STEM education · Educational software

1 Introduction

Educational Robotics is a branch of Educational Technology that offers vocational skills towards robotic technology literacy for future science, technology, engineering and mathematics (STEM) workers. Educational Robotics should be seen as a tool to encourage cognitive and personal development and team work, through which young people can develop their potential to use their imagination and creativity skills in order to express themselves [1]. Educational Robotics creates a learning environment in which students can interact with real-world problems. It has a huge impact on young people's personal development, including cognitive, meta-cognitive and social skills, research skills, creative thinking, decision making, problem solving, communication and team work. To practice robotics at an affordable price, dedicated software is used to simulate the actions of robots. In universities, educational software makes it easy to understand how to control and program robots. Educational software is a complex computer program, especially designed for use in the teaching process. It can also be considered a manual, not in the form of a simple document, but with a friendly interface allowing user interaction. The performance of educational software derives from the quality of the presentation, which must meet the information requirements for a specific topic, and from the interaction between the computer and the student or professor. The software should be able to adapt itself to the characteristics of the user (for example, programs should provide several levels of difficulty, with a transition to a higher level assuming browsing through the previous levels, etc.). There are many classifications of educational software, but summarizing them, we can consider the following:



• Interactive learning software - facilitates the transmission or interactive presentation of knowledge;
• Simulation software - simulates real situations which the student may study or analyze in order to draw conclusions;
• Training software - for the development of specific skills;
• Investigation software - an environment from which the student may retrieve information on his own;
• Thematic presentation software - addresses topics from various areas of school curricula, providing opportunities for widening the horizon of knowledge;
• Assessment software - for the administration of evaluation tests;
• Utility software - provides tools like dictionaries, tables, formulas etc.;
• Educational games - in which teaching purposes are achieved in the form of a game;
• Administration and management of education - products supporting school activities or training in general.

The efficiency of educational software can be established according to several criteria: the degree of difficulty, the area of coverage, the internal structure and vulnerability, and the way of operating from the point of view of the user. The educational software must allow browsing through the stages of a complete learning act: the first contact with new knowledge, the application of recently acquired knowledge, and the updating of knowledge after a certain period of time. Modular platforms are designed for education and training programs. Featuring standard industrial-grade components and comprehensive academic tools, educational robotics platforms can be customized to fit a curriculum layout. They offer intelligent virtual environments and friendly interfaces which pupils can easily use to learn robot control and programming. The rest of the paper is organized as follows: the STEM concept and some difficulties in implementing it in schools are presented in the second section. The third section describes some examples of STEM programs implemented in Europe and worldwide. Some Romanian projects and initiatives in the field of educational robotics are presented in the fourth section.

2 Robotics in Schools: The STEM Concept

Nowadays pupils and students live in a different world from their parents and grandparents. New technologies are reinforcing old ways of teaching and learning. As the twentieth century was the time of microelectronics, computers and the internet, the twenty-first century is that of robots and artificial intelligence. The educational concept called STEM (Science, Technology, Engineering and Mathematics) implies an interdisciplinary and applied approach. The objective of STEM education is to teach pupils how to put their knowledge in these four fields into practice.


Teachers must use different strategies and concepts to provide students with multiple pathways into robotics and to engage young people with diverse interests and learning styles [2]. Different strategies and methodologies for the implementation of robotics curricula are applied in practice, followed by testing and continuous improvement. Unfortunately, the way this discipline is introduced in curricula is not systematic, especially in European schools. Research carried out by organizations such as the International Federation of Robotics, the United Nations Economic Commission for Europe, and the Japan Robotics Association indicates that the demand for entertainment and educational robots is growing, and this trend may continue in the future [3]. In Europe, most national education authorities are encouraging the development of projects on educational robotics in schools. However, educational robotics has not yet been introduced into European school curricula. Most of the experiments involving robotics research and design activities take place in after-school programs, on weekends or in summer camps [4]. Obstacles to implementing robotics as part of the regular curriculum appear to be linked to the duration of the robotic activities, the cost of the equipment and the practical work required from teachers to arrange all the pieces in the right place. Other problems in implementing a robotics curriculum are the lack of teachers in this field and the inadequate laboratories existing in schools. However, robotics has a huge potential to offer in education, so it is obvious that we have to rethink our approaches in Educational Robotics. Robotics develops students' motivation to learn mathematics, electronics, mechanics and physics; it develops competences and practical skills and encourages team work and collaboration. Some important objectives have to be taken into account for the implementation of robotics in schools to be successful:

• Promoting communication and networking between researchers, teachers, and learners in view of sharing experiences, products and expertise
• Supporting teacher education
• Encouraging implementation of educational robotics in school curricula
• Validating new methodologies in teacher education
• Forming groups interested in studying specific issues in the domain of Educational Robotics
• Providing reports on the latest developments in the domain of educational robotics to authorities, parents or teachers.

3 Worldwide Landmarks

Nowadays, there are calls in education in Europe and worldwide for educational approaches that will encourage student creativity and inventiveness. Appropriate learning methodologies such as constructionism and constructivism lead to the development of creativity, systematization, critical observation, collaboration and communication. There are many sites and companies that provide STEM resources like [9, 10], or [11]. Among science resources we find lab-aids kits in physics, chemistry or electricity. Examples of technological resources: drones, robots, flight simulators, 3D printers.


Engineering kits are provided by grade: for elementary school, middle and high schools. Mathematical tools are also classified by type, grade and subject. LEGO Mindstorms, VEX Robotics, and Fischertechnik are the most widely used robotic kits. They are composed of libraries of prefabricated parts. Alternatives to these popular kits are either highly modular (e.g., Kondo, Bioloid, Cubelets, K-Junior V2, and Kephera) but expensive and unaffordable for the majority of schools, or single-configuration robots (e.g., AERObot, iRobot, and Boe-Bot) with a restricted number of possible actions [19]. In the United States, the National Science Foundation (NSF) is an agency which supports all fields of fundamental science and engineering. NSF has several programs in STEM education, which develop higher cognitive skills in students and enable them to inquire and use techniques employed by professionals in the STEM fields. There is also the STEM Academy, a national nonprofit organization dedicated to improving STEM literacy for all students; it has developed 5,200 programs in over 4,700 schools in all 50 states [5]. Canada ranks 12th out of 16 peer countries with respect to the percentage of graduates in STEM programs [5]. The country with the greatest proportion of graduates in STEM programs is Finland, with over 30% of its university graduates coming from science, mathematics, computer science, and engineering programs [6]. In Japan, many Super Science High Schools emphasize the importance of education in mathematics and science; however, few high schools emphasize technology and engineering. This is because of the lack of facilities for making things using 3D printers and laser cutters; that is why STEM education is not yet popular in Japan [7]. In December 2015, the Australian Federal Government announced a National Innovation and Science Agenda. The program was given almost $65 million for the professional development of teachers and for grants for specialized STEM programs in classrooms. As a result, university curricula were changed: for example, science has become a pre-requisite for entering a Bachelor of Education primary course at some universities, and coding is now being taught from primary school to the 10th grade [8]. Microbric, a company based in South Australia, launched EDISON, an educational robot, in 2014. Edison is a programmable robot designed to be a complete STEM teaching resource for coding and robotics education for students from 4 to 16 years of age [15]. In Turkey, there is an association of teachers and academics who make a huge effort to increase the quality of education in STEM fields, rather than focusing on increasing the number of STEM graduates. In Germany, on the initiative of the University of Education Freiburg and the DZLM (German Centre for Mathematics Education), the European STEM Professional Development Centre Network (STEM PD Net) was founded, today comprising 30 institutions from all over Europe. This program's objective is to ensure that all pupils are provided with the best STEM education, by supporting the international exchange of students and the continuous professional development of STEM teachers.


For example, there are dedicated programs that prepare teachers to efficiently implement STEM education. ABB's SMART Certification Program¹ (Software, Maintenance and Robotics Training) certifies teachers in STEM methodologies to teach the curriculum to their students. In the Netherlands, the RoboMind software is specifically developed to support technology education: by programming a robot, students learn about logic, computer science and robotics. In many EU member states, due to demographic developments, there are fewer and fewer young people. However, the proportion of young people with STEM competences should increase to meet the various technological challenges. At the same time, there are more vacancies for STEM jobs as well as high youth unemployment; one million additional researchers are needed by 2020 in order to keep Europe growing. The EU STEM Coalition - launched in October 2015 - is helping to develop and implement national strategies to promote STEM subjects across Europe. It focuses on the development and improvement of national STEM strategies that increase the impact of STEM-related activities at the national level, through the active exchange of best practices between the existing STEM platforms. Resources are available on the internet for teachers, for example, robot kits such as Lego Mindstorms and VEX Robotics, simple programmable robots such as Sphero balls, and lesson plans. At present, the educational methodology based on the LEGO® MINDSTORMS® kit is used successfully in more than 25,000 schools around the world, from primary schools to universities. Being relatively affordable, these kits are also used in university laboratories as lab support. In developed countries there are special educational centers; in the US and UK, the methodology is implemented as an optional subject or as support for creative camps during the holidays. Sophisticated, engaging robots such as the NAO robot are also available; for example, ASK NAO is a suite of games developed for the NAO robots to teach autistic children. There is even a worldwide competition, the World Robot Olympiad, which annually brings together representatives of the affiliated member teams in a contest of intelligence and creativity, while promoting an innovative education concept based on the NXT set. In the same category of large-scale competitions is the First Lego League, whose purpose as a world organization is to maintain the interest of the young generation in science and technology and to provide memorable experiences in the organized competitions.

4 Romanian Landmarks

In Romania, there are several projects and initiatives, but most of them are condemned to remain only on paper because teachers could not find the necessary financial resources. There is also an acute lack of teachers with training in the field of IT and robotics.

¹ ABB is a pioneering technology leader that works closely with utility, industry, transportation and infrastructure customers to write the future of industrial digitalization.


Moreover, at the ministry level there is no coherent strategy for the promotion of educational robotics in school curricula, the teaching staff having the possibility to decide on the content of curricula themselves. The alternative aims at getting the child to learn by doing what he likes most, which is playing. The child is challenged to "play smart": he no longer receives a simple toy, of which he usually gets bored quickly; he now receives "ingredients" - such as a LEGO® kit - and guidance. From the simplest robots to the most sophisticated ones able to perform complex tasks, and from simple programming languages based on intuitive graphical blocks to advanced programming languages such as C and Java, students are drawn into an adventure of knowledge based on solid theoretical foundations, but also on play, imagination and heuristic methods. For many, this "play" gave birth to passions that will last a lifetime and be a real support in choosing the path to follow in their career. In Romania there are clubs, groups, camps and organizations that promote robotics and educational software, so that pupils who are passionate about this field can apply their abilities in constructing and programming a robot. There are dedicated teachers who work with students after the school program and participate together in thematic courses and contests. There are also dedicated platforms that offer free support and materials for education in the robotics field [12]. Some free courses are taught by Bosch in the Kids in Tech program: from the autumn of the school year 2017–2018, Romanian students in the secondary and upper cycle enrolled in the Kids in Tech clubs all over the country receive one Arduino kit per club and free robotics courses, sponsored by Bosch, a global leader in providing innovative technologies and services. This initiative aims to encourage and support students passionate about technology and programming to accomplish projects in robotics. Running as a private initiative organizing courses for age groups of 8–12 and 12–16 years, the SmartClass project is based on solid foundations, a curriculum correlated with the level of knowledge of the students, and a solid partnership with the robotics department of the Carnegie Mellon Robotics Academy, which developed the ROBOTC® language, a simplified version dedicated to MINDSTORMS® robotics of the popular Visual C programming environment promoted by Microsoft® [13]. Another example is BRD FIRST Tech Challenge Romania, which invests in future tech leaders. It empowers Romanian students in their educational journey, aiming to bring "learning by doing" and "having fun" while creating a robot from scratch. The first season started with 54 high schools and over 800 students from 33 cities, who had the challenge of taking part in the biggest robotics championship in Romania. The contest has reached season two, with plans of reaching around 90 teams, over 1200 students, 330 mentors and 300 volunteers; the grand finale of the championship will take place in Bucharest in March 2018 [14]. At the academic level, students use educational software to create and simulate robot actions. They also use computer applications for controlling and programming different kinds of robots. For example, they use ARIA to dynamically control a robot's velocity, heading, and other motion parameters, to receive position estimates, and to read sonar and other current data sent by the robot platform [16].

Trends in Educational Robotics

743

heading, or other motion parameters, to receive estimate positions, to read sonar and other current data sent by the robot platform [16]. Different software is used to conceive and simulate robots actions (3dsMAX soft‐ ware®, Alice 3D, MobileSim or ARENA). Students animate applications using Java or C++ programming. As they put passion in their work, they are formed to follow a research career in the field of robotics.
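To give a flavour of such exercises, the sketch below is a minimal C++ program written against the publicly documented ARIA API [16]: it connects to a robot (or to one simulated in MobileSim), commands translational and rotational velocities, and then reads back the odometric pose estimate and a sonar range. It assumes a standard ARIA installation (the Aria.h header and the ArRobotConnector helper of recent ARIA releases), and sonar index 3 is chosen arbitrarily as a front-facing transducer, so it should be read as an illustration rather than as verbatim course material.

// Minimal ARIA example: drive a Pioneer-style robot and read its sensors.
// Assumes the ARIA headers and libraries are installed; also works with
// a robot simulated in MobileSim.
#include "Aria.h"

int main(int argc, char **argv)
{
  Aria::init();                                // initialize the ARIA library
  ArArgumentParser parser(&argc, argv);
  parser.loadDefaultArguments();

  ArRobot robot;
  ArRobotConnector connector(&parser, &robot); // serial/TCP connection helper
  if (!connector.connectRobot()) {
    ArLog::log(ArLog::Terse, "Could not connect to the robot.");
    Aria::exit(1);
  }

  robot.runAsync(true);                        // run the robot cycle in its own thread
  robot.enableMotors();

  robot.lock();                                // state is shared with the cycle thread
  robot.setVel(200);                           // translational velocity [mm/s]
  robot.setRotVel(15);                         // rotational velocity [deg/s]
  robot.unlock();

  ArUtil::sleep(3000);                         // let the robot move for 3 s

  robot.lock();
  ArLog::log(ArLog::Normal, "Pose estimate: x=%.0f mm, y=%.0f mm, th=%.1f deg",
             robot.getX(), robot.getY(), robot.getTh());
  ArLog::log(ArLog::Normal, "Sonar 3 range: %d mm", robot.getSonarRange(3));
  robot.stop();
  robot.unlock();

  Aria::exit(0);                               // disconnect and clean up
  return 0;
}

The lock()/unlock() pairs are needed because the robot's state is updated concurrently by the background cycle thread started with runAsync(), a pattern the ARIA documentation recommends for simple clients.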

5 Conclusions

Educational robotics should be seen as a tool for enhancing personal skills, one through which students can develop their practical potential, use their imagination, apply their technical abilities, collaborate and communicate with each other, work in teams and value their professional knowledge. Teachers should employ different strategies for introducing students to robotics technologies and concepts, so as to provide multiple pathways into robotics and to create the premises for engaging young people in this field.

Education must not lag behind technology; robotics curricula must therefore undergo continuous improvement to keep pace with new technologies. STEM learning is a vision for an innovative future that must be promoted in schools on a large scale [17, 18].

The role of open-source robotics has grown lately, as it allows the use of free software platforms and hardware devices. Open-source systems are guaranteed to have their designs available forever, so communities of users can continue support after the manufacturer has disappeared [20]. More European projects in educational robotics will enhance the knowledge, expertise and further networking of researchers in the field.

References

1. Obdrzalek, D., Gottscheber, A.: Education in Robotics. In: Proceedings of the International Conference EUROBOT 2011, Prague. Springer (2011)
2. Alimisis, D.: Educational robotics: open questions and new challenges. Themes Sci. Technol. Educ. 6(1), 63–71 (2013)
3. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a systematic review. Comput. Educ. 58(3), 978–988 (2012)
4. Blikstein, P.: Digital fabrication and "making" in education: the democratization of invention. In: Walter-Herrmann, J., Büching, C. (eds.) FabLabs: Of Machines, Makers and Inventors, pp. 1–21. Transcript Publishers, Bielefeld (2013)
5. http://www.conferenceboard.ca/hcp/Details/education/graduates-science-math-computerscience-engineerin.aspx?AspxAutoDetectCookieSupport=1
6. https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics
7. Kadota, K.: STEM education in Japanese technical high school: through curriculum development of the robot education (2015). https://archive.org/details/Fab11Paper3
8. http://www.smh.com.au/national/education
9. https://www.stemfinity.com/Free-STEM-Education-Resources
10. https://education.lego.com/en-us/products
11. http://www.robots.education/training.html
12. https://app.schoology.com
13. http://www.smartclass.ro
14. https://natieprineducatie.ro/
15. https://meetedison.com/
16. http://robots.mobilerobots.com/wiki/ARIA
17. Eguchi, A., Uribe, L.: Robotics to promote STEM learning. In: Integrated STEM Education Conference (ISEC). IEEE (2017)
18. STEM 2026, U.S. Department of Education Report. https://innovation.ed.gov/files/2016/09/AIR-STEM2026_Report_2016.pdf
19. Susilo, E., Liu, J., Rayo, Y.A.: STORMLab for STEM education: an affordable modular robotic kit for integrated science, technology, engineering, and math education. IEEE Robot. Autom. Mag. 23(2), 47 (2016). Special issue on "Educational Robotics"
20. https://en.wikipedia.org/wiki/Open-source_robotics

Author Index

A
Aggogeri, Francesco, 325
Andris, Pavel, 588
Angeli, Stefano, 151
Anton, Florin, 618
Anton, Silvia, 618
Argyros, Antonis, 181
Arias-Montiel, M., 283
Arsicault, Marc, 24, 35
Aspragathos, Nikos A., 3, 143, 475, 597
Ayoubi, Y., 24
Azariadis, P., 359

B
Bader, Markus, 504
Balaska, Vasiliki, 572
Bampis, Loukas, 572, 580
Belforte, Guido, 333
Berns, Karsten, 63, 256, 466
Berselli, Giovanni, 272
Bevec, Robert, 551, 651
Bilancia, Pietro, 272
Boiadjiev, G., 112
Boiadjiev, T., 112
Bonković, M., 689
Borangiu, Th., 618
Borboni, Alberto, 325
Botello-Aceves, S., 376
Brancati, Renato, 531
Brandstötter, Mathias, 316
Bruzzone, Luca, 272
Buchegger, Klaus, 504
Budinská, Ivana, 521, 710

C
Cafolla, D., 283
Cafolla, Daniele, 205
Carbone, Giuseppe, 93, 205, 283
Carello, Massimiliana, 333
Castañeda, Eduardo Castillo, 205
Ceccarelli, Marco, 205, 283
Chalvatzaki, Georgia, 132
Chatzikyrkou, Maria, 728
Chavdarov, I., 112
Chávez-Conde, E., 376
Chikurtev, Denis, 121
Chivarov, Nayden, 121
Ciężkowski, Maciej, 660
Colombo, F., 436, 678
Cosenza, Chiara, 531

D
Dalla Vedova, Matteo D. L., 640
De Benedictis, Carlo, 102
Decatoire, A., 93
Delchev, K., 112
Dežman, Miha, 291
Díaz, Arturo, 342
Dimeas, Fotios, 53
Dobrovodský, Karol, 588
Dometios, Athanasios C., 132
Doulgeri, Zoe, 53
Dugone, Davide, 151
Duta, Luminita, 737

E
Espinosa-Garcia, F. J., 283


F
Faitli, Tamás, 389
Fanghella, Pietro, 272
Ferraresi, Carlo, 102, 333
Florescu, Mihaela, 193
Floroiu, Daniela, 737
Franco, Walter, 102, 245

G
Gallina, Paolo, 316
Gams, Andrej, 44, 291
Garau, M., 214
Gašpar, Timotej, 551, 651
Gasteratos, Antonios, 572, 580
Gattringer, Hubert, 264, 300, 398
Georgoulas, George, 173
Gigov, Alexander, 121
Giorgis, Mauro, 151
Gordić, Zaviša, 71

H
Harrer, David, 669
Havlik, Stefan, 308
Henrich, Dominik, 669
Hernández, Eusebio, 342, 376
Hofbaur, Michael, 316
Hricko, Jaroslav, 308

I
Ichim, Loretta, 236
Infante-Jacobo, M., 376
Ivanescu, Mircea, 193
Ivănescu, Nick, 618
Iversen, Nikolaj, 538

J
Jerbić, Bojan, 493, 607
Jörgl, Matthias, 300, 398
Jovanović, Kosta, 71, 425
Jovanović, Miloš, 627
Just, Søren Andreas, 538

K
Kaburlasos, V. G., 689
Kamnik, Roman, 456
Kandemir, Hasan, 718
Kanellakis, Christoforos, 173
Kaňuch, Peter, 521
Karpis, Ondrej, 701
Kasanický, Tomáš, 521
Kastelov, R., 112

Keshtkar, Sajjad, 342
Köpper, Alexander, 466
Kose, Hatice, 718
Koskinopoulou, Maria, 82
Kostova, S., 689
Koumboulis, Fotis N., 416
Koustoumpardis, Panagiotis N., 3
Kraljić, David, 456
Krastev, Evgeniy, 447
Kritikou, Georgia, 475

L
Lamprinou, Nefeli, 163
Laribi, Med Amine, 24, 35, 93
Lentini, L., 678
Lingua, Andrea Maria, 151
Loizou, Savvas G., 512
Lugo-Gonzalez, E., 283
Lukić, Branko, 425

M
Maffiodo, Daniela, 102, 333
Magdaleno, Ernesto Christian Orozco, 205
Maggiore, Paolo, 640
Maniadakis, Michail, 82
Mansour, Gabriel, 351, 728
Mansouri, Sina Sharif, 173
Manuello Bertetto, A., 214
Markov, Emanuil, 121
Maschio, Paolo, 151
Mavromatakis, Odysseas, 222
Mazza, L., 436
Merezeanu, Daniel, 236
Merlo, Angelo, 325
Miatliuk, K., 368
Micek, Juraj, 701
Miteva, Lyubomira, 121
Morariu, O., 618
Morelli, Umberto, 640
Moreno, Jaime A., 342
Moulianitis, V. C., 143, 359, 368
Müller, Andreas, 264, 300, 398

N
Naďo, Ladislav, 521
Nicolae, Maximilian, 236
Nikolakopoulos, George, 173
Niola, Vincenzo, 531
Nisi, Matteo, 245
Nitulescu, Mircea, 193
Ntegiannakis, Theodosis, 222

O
Olesnanikova, Veronika, 701

P
Pachidis, T., 689
Papageorgiou, Xanthi S., 132
Papić, V., 689
Paplu, Sarwar Hussain, 63
Patic, Paul C., 737
Pellegrini, Nicola, 325
Pepe, G., 436
Petrič, Tadej, 13, 44
Piantelli, Luca, 151
Piperakis, Stylianos, 181
Piperidis, Savvas, 222
Pisla, D., 93
Poisson, Gérard, 35
Polančec, Mateo, 493
Popescu, Dan, 236
Poulopoulos, Nikolaos, 560
Psarakis, Emmanouil Z., 163, 560
Pucher, Florian, 264

Q
Quaglia, Giuseppe, 245

R
Răileanu, Silviu, 618
Rallis, Stelios, 580
Rangelov, Ivaylo, 121
Raparelli, T., 436, 678
Ridge, Barry, 651
Rodić, Aleksandar, 627
Ruggiu, M., 214

S
Sagris, Dimitrios, 351
Sandoval, Juan, 35
Sartinas, Evangelos G., 163
Savarimuthu, Thiusius Rajeeth, 538
Savino, Sergio, 531
Šekara, Tomislav B., 425
Šekoranja, Bojan, 607
Seriani, Stefano, 316
Sharkawy, Abdel-Nasser, 3
Shivarov, Nedko, 121
Single, Ulrich, 264
Stevanović, Ilija, 627
Stöger, Christoph, 264
Stoian, Viorel, 407
Šuligoj, Filip, 493, 607
Švaco, Marko, 493, 607

T
Tar, József K., 389
Thanellas, G., 143
Tiboni, Monica, 325
Timotheatos, Stavros, 181
Todoran, George, 504
Tosa, Massimo, 256
Trahanias, Panos, 82, 181
Triantafyllou, Dimitra, 597
Trivella, A., 436, 678
Tsagaris, Apostolos, 351, 728
Tsintotas, Konstantinos A., 580
Tsourveloudis, Nikos C., 222
Tzafestas, Costas S., 132

U
Ude, Aleš, 551, 651

V
Valdez, S. Ivvan, 376
Valsamos, C., 368
Van Dong Hai, Nguyen, 193
Vidaković, Josip, 607
Viktorov, Vladimir, 333, 678
Visconte, Carmen, 333
Vitez, Nikola, 607
Vladu, Cristian, 193
Vladu, Ionel Cristian, 407
Vrochidou, E., 689

W
Werner, Tobias, 669
Wolniakowski, A., 368
Wolniakowski, Adam, 660

X
Xanthopoulos, N., 143
Xidias, E., 359

Y
Yovchev, Kaloyan, 121, 483

Z
Zafar, Zuhair, 63
Zagurki, K., 112
Zeghloul, Said, 24, 35, 93
Zelenka, Ján, 521
Žlajpah, Leon, 13
Župančić, Ivan, 607
